
INDEX

S.No. Topic Page No.


WEEK- 1
1 Introduction 01
2 Octal and Hexadecimal Number Systems 15
3 Signed and Unsigned Binary Number Representation 26
4 Binary Addition and Subtraction 47
5 BCD and Gray Code Representations 61
WEEK- 2

6 Error Detection and Correction 77


7 Logic Gates 90
8 Logic Families to Implement Gates 102
9 Emerging Technologies (Part 1) 115
10 Emerging Technologies (Part 2) 127
WEEK- 3

11 Switching Algebra 137


12 Algebraic Manipulation 153
13 Properties of Switching Functions 168
14 Obtaining Canonical Representations of Function 184
15 Functional Completeness 195
WEEK- 4

16 Minimization Using Karnaugh Maps ( Part - I ) 212


17 Minimization Using Karnaugh Maps (Part- II) 229
18 Minimization Using Karnaugh Maps (Part -III) 246
19 Minimization Using Tabular Method (Part -1) 262
20 Minimization Using Tabular Method (Part -2) 274

WEEK- 5
21 Design of Adders (Part – I) 289
22 Design of Adders (Part – II) 303
23 Design of Adders (Part – III) 321
24 Logic Design (Part -I) 336
25 Logic Design (Part -II) 352

WEEK- 6

26 Logic Design (Part -III) 367


27 Binary Decision Diagrams (Part I) 384

28 Binary Decision Diagrams (Part II) 399

29 Logic Design using AND-EXOR Network 415


30 Threshold Logic and Threshold Gates 430

WEEK- 7

31 Latches and Flip-Flops (Part I) 446


32 Latches and Flip-Flops (Part II) 462
33 Latches and Flip-Flops (Part III) 476
34 Clocking and Timing (Part I) 493
35 Clocking and Timing (Part II) 507

WEEK- 8
36 Synthesis of Synchronous Sequential Circuits (Part I) 520
37 Synthesis of Synchronous Sequential Circuits (Part II) 533
38 Synthesis of Synchronous Sequential Circuits (Part III) 547
39 Synthesis of Synchronous Sequential Circuits (Part IV) 560
40 Minimization of Finite State Machines (Part- I) 568

WEEK- 9
41 Minimization of Finite State Machines (Part- II) 582
42 Design of Registers (Part - I) 594
43 Design of Registers (Part – II) 610
44 Design of Registers (Part – III) 626
45 Design of Counters (Part – 1) 643

WEEK- 10
46 Design of Counters (Part – 2) 656
47 Digital-to-Analog Converter ( Part I ) 667
48 Digital-to-Analog Converter ( Part II ) 678
49 Analog-to-Digital Converter (Part I) 692
50 Analog-to-Digital Converter (Part II) 705

WEEK- 11
51 Analog-to-Digital Converter (Part III) 721
52 Asynchronous Sequential Circuits (Part I) 733
53 Asynchronous Sequential Circuits (Part II) 747
54 Algorithmic State Machine (ASM) Chart 759
55 Testing of Digital Circuits 768

WEEK- 12
56 Fault Modeling 780
57 Test Generation Pattern 796
58 Design for Testability 809
59 Built-in Self-Test (Part I) 822
60 Built-in Self-Test (Part II) 837
Switching Circuits and Logic Design
Prof. Indranil Sengupta
Department of Computer Science and Engineering
Indian Institute of Technology, Kharagpur

Lecture - 01
Introduction

So, I welcome you all to this NPTEL MOOC course on Switching Circuits and Logic Design.
As you may know, this course is primarily targeted at undergraduate students in their 2nd
and 3rd years, primarily in the disciplines of computer science and engineering, information
technology, electronics engineering and, of course, electrical engineering. Before I start, I
would like to summarize the coverage of this course, which may help you in understanding
the path that we are trying to follow.

So, in this first lecture, Introduction, I shall be introducing you to the course and
discussing a few basic concepts.

(Refer Slide Time: 01:17)

Firstly, as I had said, let us briefly look into the main objectives of the course and the
things that we are expected to cover here. You see, when we talk about switching circuits
and logic design, the basic thing that we need to understand is something called the number
system; because whatever design we talk about, we shall be working with some kind of
numbers.

There are various ways in which you can represent numbers, as we shall be discussing over
the next few lectures. So, the first thing that we would be talking about is number systems;
number systems are basically the way numbers are represented in a computer system, or in
any digital circuit in general. Then we shall see that to implement these number systems we
need something called logic gates.

So, we shall be talking about different kinds of logic gates; how they work and how they can
be implemented. And then we shall be looking at a formalism called Boolean algebra, which
provides the theory behind the design of digital circuits. It is an algebra, a set of
mathematical rules, using which we can design and optimize the various kinds of functions
that we want to implement, and realize them using the gates that we are talking about.
Boolean algebra deals with Boolean functions, the various functions that we want to
implement; let us say we want to implement an adder, a subtractor or a comparator; all such
kinds of functionalities may be there.

So, when we talk about Boolean functions, there are several things to study: how to
represent the functions, and the various ways to manipulate them. And of course, when we are
realizing these functions using gates, one important concern is called minimization, where
we are basically concerned about how to reduce or minimize the number of gates that are
required to realize a particular given functionality.

And talking about circuits, broadly speaking we can classify them into combinational and
sequential; we shall be talking about this. Combinational circuits are those where, whenever
you apply some input, you immediately get some output; but sequential circuits are those
where the output depends not only on the inputs, but also on the previous history, which the
circuit memorizes in some sense.

Let me take a very simple example. Suppose I have implemented a circuit that counts the
number of persons entering a room through a door; there is some kind of sensor in the
circuit. Now, whenever a person enters the room, the light inside the room will
automatically glow. And as long as at least 1 person is in the room, the light should remain
glowing.

So, let us say 5 persons have entered; now if 1 person goes out, that should not switch off
the light. The system should remember that 5 persons had entered, and only when all 5 have
left the room should the light be switched off. This is an example of a sequential circuit,
where you need to memorize or remember the previous states of the system.

Now, these sequential circuits are also sometimes represented as something called a finite
state machine, and when you talk about such finite state machines there are various ways to
minimize them. And there is a concept called the algorithmic state machine or ASM, which can
be used to model or build relatively larger systems. We shall also be talking briefly about
such ASM modelling, using so-called ASM charts, and how to model larger systems using this
formalism.

And of course, there is something called asynchronous circuits, which are relatively much
more difficult to handle and analyze. In a normal sequential circuit there is the concept of
a clock, a periodic signal which synchronizes the activity of the system. So, only when
there is a clock pulse will the system move from one state to the next. But in an
asynchronous circuit there is no clock; depending on the way the inputs change and arrive,
the system will automatically adapt itself and change its state.

So, a sequential circuit, when it is synchronous, is much easier to model, because I know
the clock events at which the state needs to change. But for asynchronous circuits, changes
may happen at any time depending on the delays of the gates and the arrival of the inputs;
various kinds of events can occur. So, the analysis is much more difficult; we shall see
this.

And lastly, whenever we design something, there is always the possibility of some faults
appearing in the circuit: there can be some defects in manufacturing, or during operation
there can be a short circuit, or some wires might come out. A lot of faults may appear in an
operational circuit or design. So, lastly we shall be talking about how we can test such
circuits, and about fault diagnosis, which means locating where the fault has occurred. So,
we shall be talking about these things. So, let us move forward now.

So, here we have given a very rough bird's eye view of what our coverage during the next few
weeks is expected to be like. So, now you have a fair idea about the topics that we shall be
covering. Some of these are part of your basic undergraduate course curriculum, but a few
things I shall be discussing are at a slightly advanced level, and may not be part of your
course or syllabus; but it is always good to know about them.

(Refer Slide Time: 09:08)

Let us start with the basic concept of number systems. Number systems form the heart of any
digital system; but before I talk about number systems, let me very briefly tell you about
the circuits or systems that we are talking about. We shall be coming back to this later.
Circuits can be broadly classified into two types: one we call digital, the other we call
analog.

For digital circuits, the inputs as well as the outputs are discrete in nature; discrete
means they cannot assume arbitrary values. They can take only some discrete values, and
these discrete values we typically represent in terms of numbers, like 0, 1, 2, 3. But for
an analog circuit the inputs can be continuous; let us say we are sensing the temperature of
the environment; the temperature can be anything, it can be 25.1 degrees Celsius, 25.2,
25.25, 25.225 and so on.

So, there is no discreteness involved here; the values are continuous, and those continuous
values are fed as input. Similarly, the outputs that are generated can also be continuous or
analog. Now, when you talk about digital systems, the most important thing to understand is
the number systems. A number system is basically a way to represent numbers, along with some
rules to manipulate numbers.

Now, let us look at some examples, starting with the decimal number system. The decimal
number system is the one we are all familiar with; we write numbers like 25.36. This is an
example of a decimal number.

So, in a decimal number system we have the digits 0 to 9, and there are some positional
weights. So, when you write a number, we can find out its value. The Roman number system, I
am sure you are familiar with as well; here the concept is slightly different.

There are some distinct symbols which represent distinct values: for example, I represents
the number 1, V represents 5, X represents 10, and so on. There are some rules also: if I
write V followed by I it means 6, but if I write I before V it means 4. So, it depends on
the relative position of the smaller symbol (I, which is 1, is smaller than V, which is 5).

If the smaller symbol appears after the larger one, the two are added; but if the smaller
one appears before, then it is subtracted from the larger one. So, the rules are fairly
complicated in that sense; for example, if I write XIV it means 14. So, the rules to find
out the values are not as straightforward as in the decimal number system.
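The add-or-subtract rule described above can be sketched in a few lines of Python; this is a minimal illustration, and the function and variable names are my own, not from the lecture:

```python
# Symbol values for the Roman number system discussed above.
ROMAN_VALUES = {'I': 1, 'V': 5, 'X': 10, 'L': 50, 'C': 100, 'D': 500, 'M': 1000}

def roman_to_value(s):
    total = 0
    for i, ch in enumerate(s):
        v = ROMAN_VALUES[ch]
        # Subtract when a strictly larger symbol follows (e.g. the I in IV);
        # otherwise add, as the rule above states.
        if i + 1 < len(s) and ROMAN_VALUES[s[i + 1]] > v:
            total -= v
        else:
            total += v
    return total

print(roman_to_value("VI"))   # 6
print(roman_to_value("IV"))   # 4
print(roman_to_value("XIV"))  # 14
```

The three printed values reproduce the VI, IV and XIV examples from the text.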

(Refer Slide Time: 13:03)

Next, let us talk about the binary number system; like decimal, we shall mostly be
discussing the binary number system. Here we have only two digits, 0 and 1, and numbers are
represented as combinations of 0s and 1s; for example, I can have a number like .01001, let
us say. This is an example of a binary number. We shall be discussing this in much more
detail.

The sexagesimal number system is one which we all use, knowingly or unknowingly: think of
our clock. How do we count seconds or minutes? We count from 0 up to 59, and after 59 it
comes back to 0 again; for hours it is 12 or 24. Sexagesimal means base 60; that is the way
we count seconds and minutes, from 0 up to 59 and then back to 0.

Compare with the decimal number system: there we count from 0 up to 9 and then back to 0;
there are 10 digits, so it is called decimal. In binary there are two digits, 0 and 1. In
sexagesimal there are 60 such values, 0 up to 59; after 59 we come back to 0 again.

(Refer Slide Time: 14:51)

These number systems can also be classified in another way, based on whether they are
weighted or non-weighted. Weighted number systems are like decimal and binary: for example,
when you write a decimal number 2 3 5, each digit position has a weight associated with it;
5 has weight 10⁰, 3 has weight 10¹, and 2 has weight 10².

So, if we multiply these digits by the corresponding weights and add them up, we get the
value 235. But think of the Roman number system: when I write XIV, I cannot identify any
such weights which I can multiply and add up to get the value. So, these are non-weighted
codes. There are other codes also, which we will see later; the Gray code is also a
non-weighted code.

(Refer Slide Time: 15:58)

So, let us move on. As I had said, we are most familiar with the decimal number system,
where there are 10 digits 0 to 9; and as I had said, every digit position in a number, say
when I write 236, has a weight, and these weights are powers of 10: 10⁰, 10¹, 10² and so on.

Now, this number 10 is called the base or the radix of the number system; for the decimal
number system the base or radix is 10.

(Refer Slide Time: 16:50)

So, let us take some examples here. Take the number 230: as I had said, each of these
positions has a weight, 10⁰, 10¹ and 10². You multiply the digits by the weights, add them
up, and you get the value. Now, if there is a decimal point the concept is similar; the only
difference is that after the decimal point the powers become negative: 6 × 10⁻¹, 7 × 10⁻²
and so on.

So, these weights have negative powers, but the concept is the same; you multiply every
digit: 2 × 100, 5 × 10, 0 × 1, 6 × 0.1, 7 × 0.01, and when you add them up you get the
value 250.67.

Coming to the binary number system, here we have two digits 0 and 1, and the concept is
similar. So, here each weight is a power of 2; in decimal it was a power of 10. So, here the
base or radix is 2, and binary digits are called bits for short. So, a bit represents a
binary digit; that means, a 0 or a 1.

So, let us take examples again. Suppose I write a binary number 110. Following the same rule
as in decimal, we multiply each digit position by a weight which is now a power of 2: 2⁰,
2¹, 2². Now, 2⁰ is 1, 2¹ is 2, and 2² is 4. So, if you add 0 × 1 + 1 × 2 + 1 × 4, the value
becomes equal to 6.

Similarly, for a number with a fractional point the rule is the same: the weights are 2⁰,
2¹, 2², and after the fractional point they will be 2⁻¹, 2⁻². So, in this case you can check
that the integer part gives 5, and after the point we have 0 × 0.5 + 1 × 0.25, which is
0.25; so 5.25 will be the value of this number. So, in binary, when you write down a number,
the value can be calculated like this, following the same rule as in the decimal number
system.

(Refer Slide Time: 19:58)

So, let us now talk in general terms. Consider a general radix-based system; let us call the
radix r in general: for decimal r = 10, for binary r = 2. The radix denotes the number of
distinct digits, which start with 0 and go up to r − 1. Every digit position has a weight
which is some power of r; for the integer part the weights are r^k with k greater than or
equal to 0.

For the integer part, d0 has weight r^0, d1 has weight r^1 and so on; d(n−1) has weight
r^(n−1). Similarly, for the fractional part k is negative: d(−1) has weight r^(−1) and so
on; d(−m) has weight r^(−m). So, the rule is the same: for an (n+m)-digit number, with n
digits in the integer part and m digits in the fractional part, you multiply the digits by
the corresponding weights and add them up to get the value:

value = d(n−1)·r^(n−1) + … + d1·r^1 + d0·r^0 + d(−1)·r^(−1) + … + d(−m)·r^(−m)

(Refer Slide Time: 21:30)

So, when we talk about binary to decimal conversion, the rule is simple, as I have already
mentioned. Given a binary number, to convert it to decimal you multiply each digit by the
corresponding weight, which is a power of 2 ranging from −m up to n − 1, and add them up to
get the decimal number.

(Refer Slide Time: 22:01)

Now, some examples. Take 101011: the weights are 2⁰, 2¹, 2², 2³, 2⁴, 2⁵; multiply them by
the corresponding digits and add them up, and you get 43. Now, we write it like this:
(101011)₂, with the base or radix 2 used as a suffix to indicate that it is a binary number,
and (43)₁₀ to indicate a decimal number. Take another example, a purely fractional number.

So, the weights will be 2⁻¹, 2⁻², 2⁻³ and 2⁻⁴; 2⁻¹ is 0.5, then 0.25, 0.125, 0.0625 and so
on. So, you multiply these weights by the corresponding digits and add them up to get the
value. Take a number with both integer and fractional parts: some of the powers will be
positive, some will be negative, and you can compute the value in the same way.

So, this is the way you can convert a binary number into decimal; very simple.
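The weighted-sum rule above can be sketched in Python; this is a minimal illustration (the function name `binary_to_decimal` is my own), evaluating a binary string with an optional fractional part exactly as described:

```python
# Multiply each bit by its weight (a power of 2) and add the products up.
def binary_to_decimal(s):
    int_part, _, frac_part = s.partition('.')
    value = 0.0
    for k, bit in enumerate(reversed(int_part)):
        value += int(bit) * 2 ** k       # integer weights 2^0, 2^1, ...
    for k, bit in enumerate(frac_part, start=1):
        value += int(bit) * 2 ** -k      # fractional weights 2^-1, 2^-2, ...
    return value

print(binary_to_decimal("101011"))   # 43.0
print(binary_to_decimal("101.01"))   # 5.25
```

The two printed values reproduce the 101011 = 43 and 101.01 = 5.25 examples from the text.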

(Refer Slide Time: 23:31)

But when we want to convert decimal to binary, we need to consider the integer and
fractional parts separately. The rules are as follows. For the integer part (I shall show an
example), we repeatedly divide the number by 2 and go on accumulating the remainders; then
we take all the remainders in reverse order, and that is the binary equivalent.

So, for a decimal number, the integer part you repeatedly divide by 2, and the fractional
part you repeatedly multiply by 2. After every multiplication you take out and remember the
integer part; the integer parts taken in the order they are generated form the binary
equivalent. I will take an example to illustrate.

(Refer Slide Time: 24:31)

Let us take some examples. Take the number 239 and repeatedly divide it by 2: you get 119
with a remainder of 1; divide again, 59 with a remainder of 1; then 29 with a remainder of
1; and so on. You go on repeating until the quotient becomes 0.

Once you get 0, you take the remainders you have accumulated in reverse order: 1 1 1 0 1 1 1
1. This is the binary equivalent. Take another example, 64: you repeatedly divide by 2,
getting 32, 16, and so on; the remainders are all 0s, and at the last stage the remainder is
1. Taken in reverse order this is 1 followed by six 0s, which is 64.

Now, considering the fractional part, I said you have to multiply by 2. Take 0.634 and
multiply by 2: it comes to 1.268; you take out the integer part 1, remove it, and multiply
0.268 by 2 again to get 0.536, whose integer part is 0; multiply by 2 again to get 1.072,
integer part 1; then 0.072 × 2 = 0.144, integer part 0; and 0.144 × 2 = 0.288, integer part
0.

So, the integer parts that have been generated after these multiplications, you take in the
same order, not in reverse order: 1 0 1 0 0. This is the binary representation. It depends
how long you want to go: the more digits you generate, the more accurate your representation
will be. And if a number has both integer and fractional parts, like 37.0625, you handle
them separately: for the integer part you follow repeated division, and for the fractional
part repeated multiplication, and then you combine the two to get the final answer.
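The repeated-division and repeated-multiplication procedures can be sketched as follows; a minimal Python illustration (the function names are mine, not from the lecture), reproducing the 239, 64 and 0.634 examples above:

```python
# Integer part: repeatedly divide by 2, read the remainders in reverse.
def int_to_binary(n):
    if n == 0:
        return "0"
    bits = []
    while n > 0:
        n, r = divmod(n, 2)
        bits.append(str(r))          # accumulate remainders
    return "".join(reversed(bits))   # reverse order gives the answer

# Fractional part: repeatedly multiply by 2, read integer parts in order.
def frac_to_binary(f, digits=5):
    bits = []
    for _ in range(digits):
        f *= 2
        bits.append(str(int(f)))     # take out the integer part
        f -= int(f)
    return "".join(bits)

print(int_to_binary(239))        # 11101111
print(int_to_binary(64))         # 1000000
print(frac_to_binary(0.634, 5))  # 10100
```

Note that the fraction conversion stops after a chosen number of digits, matching the remark that more digits give a more accurate representation.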

So, with this we come to the end of the first lecture, where we essentially talked about
various weighted number systems: decimal, binary, and radix r in general; and specifically,
we talked about how to convert a decimal number into binary and a binary number to decimal.
We shall be continuing this discussion in the next lecture.

Thank you.

Switching Circuits and Logic Design
Prof. Indranil Sengupta
Department of Computer Science and Engineering
Indian Institute of Technology, Kharagpur

Lecture - 02
Octal and Hexadecimal Number Systems

Welcome back. In this lecture we shall mainly be talking about the Octal and Hexadecimal
Number Systems. But first let us try to understand the motivation: why do we need these? In
the last lecture, if you recall, we talked about the decimal number system, which we are all
familiar with; since our childhood we have been taught decimal numbers, so all the
arithmetic we do by hand is based on the decimal number system, the things we have been
taught in school. The binary number system is something all computer systems are based on;
that is why we also have to learn the binary number system.

Now, one problem with the binary number system, as you can understand, is that because the
weights of the digits are powers of two, representing a number may require many digits or
bits. Take an example we saw in the last lecture, the number 64. In decimal, to represent 64
we need only 2 digits; but in binary we need a one followed by 6 zeros, that is, 7 bits. So,
when we write down a number, we may need a large number of bits, and it may be inconvenient
to write so many bits. The octal and hexadecimal number systems are, in a sense, a compact
way of representing binary numbers; that is one way you can look at them.

Of course, they are separate number systems in their own right, but their main use is to
represent binary numbers in a compact way. Let us start with this motivation.

(Refer Slide Time: 02:23)

The octal number system: octal means 8. Basically, this is a weighted number system with a
radix of 8, which means the digits are 0 to 7. And I said that one of its uses is to
represent binary numbers in a compact way; here one thing to note is that the radix chosen,
8, is a power of 2, and this is important.
Radix: 8 = 2³

Because 8 is a power of 2, every 3-bit binary number can be represented in octal as a single
digit. Look at the binary number 000: what is its value in decimal? The weights are 2⁰, 2¹
and 2²; multiply the digits by these weights and you get 0, so it corresponds to the octal
digit 0. For 001 the last bit is 1, so it corresponds to 1; 010 gives 2; 011 gives 2 + 1 =
3; 100 is 4; 101 is 4 + 1 = 5; 110 is 4 + 2 = 6; and 111, with all three bits set, is 7. So,
you see, each octal digit 0 to 7 corresponds to a 3-bit binary number. This is the
correspondence you have to look at here.

(000)₂ = 0×2² + 0×2¹ + 0×2⁰ = (0)₈
(001)₂ = 0×2² + 0×2¹ + 1×2⁰ = (1)₈
(010)₂ = 0×2² + 1×2¹ + 0×2⁰ = (2)₈
(011)₂ = 0×2² + 1×2¹ + 1×2⁰ = 2 + 1 = (3)₈
(100)₂ = 1×2² + 0×2¹ + 0×2⁰ = (4)₈
(101)₂ = 1×2² + 0×2¹ + 1×2⁰ = 4 + 1 = (5)₈
(110)₂ = 1×2² + 1×2¹ + 0×2⁰ = 4 + 2 = (6)₈
(111)₂ = 1×2² + 1×2¹ + 1×2⁰ = (7)₈

(Refer Slide Time: 04:27)

Now, binary to octal conversion basically follows this principle. For the integer part, you
scan the number from right to left: given a binary number, you scan from the least
significant position (right) to the most significant position (left), and during scanning
you make groups of 3 bits; each group of 3 bits is replaced by the corresponding octal
digit.

This is the basic rule: scan the binary number from right to left, group 3 bits at a time,
and replace each group by the corresponding octal digit. Only for the most significant
group, if the number of bits left is less than 3, you pad with zeros at the beginning to
make it 3; so, add leading zeros if necessary. For the fractional part you do the same, but
now the scanning is from left to right. Why? Because for a fractional part you can add zeros
at the end, but you cannot add zeros at the beginning, since the value would change; for the
integer part it is the opposite, and that is the only difference. So, for the fractional
part, scan from left to right, make groups of 3 bits in the same way, and for the last group
add trailing zeros if required. Let us take some examples.

(Refer Slide Time: 06:06)

Take a binary number like 101101000011. The number of bits is divisible by 3; there are 12
bits. So, you make groups of 3 starting from the least significant side: 011 means 3, 000
means 0, 101 means 5, and 101 means 5. So, it is a straightforward conversion. Now you can
see the advantage of octal; I said octal is a way to represent a binary number in a compact
way, and this is why. There is a one-to-one correspondence; for conversion you do not have
to carry out any multiplication or division, like in decimal to binary or binary to decimal.

(101101000011)₂ = (101 101 000 011)₂ = (5503)₈

Let us take another example where, you can count, there are 10 bits. Again scan from right
to left: 001 is 1, 100 is 4, 010 is 2, and you have a single 1 left; you add 2 zeros at the
beginning, it becomes 001, which is 1. The suffix 8 indicates octal. So, for this case 2
leading zeros are added. Next take a pure fraction; a pure fraction is scanned from left to
right: 100 is 4, 001 is 1, then you have a single 1 left, so you add 2 zeros to make it 100,
which is 4. And for a mixed number with both integer and fractional parts: for the integer
part we scan from right to left, 11 with one leading 0 is 011, which is 3; for the fraction,
010 is 2, 111 is 7, and the final 1 with two zeros added becomes 100, which is 4.

(1010100001)₂ = (001 010 100 001)₂ = (1241)₈
(.1000011)₂ = (.100 001 100)₂ = (.414)₈
(11.0101111)₂ = (011.010 111 100)₂ = (3.274)₈

So, a leading 0 is added here in the integer part, and 2 trailing zeros are added in the fraction
part right. So, binary to octal is done like this.
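The grouping rule can be sketched in Python; a minimal illustration (the helper name is my own, not from the lecture) that pads the integer part with leading zeros and the fraction with trailing zeros before replacing each 3-bit group:

```python
# Binary-to-octal conversion by 3-bit grouping.
def binary_to_octal(s):
    int_part, _, frac_part = s.partition('.')
    int_part = int_part.zfill((len(int_part) + 2) // 3 * 3)          # leading zeros
    frac_part = frac_part.ljust((len(frac_part) + 2) // 3 * 3, '0')  # trailing zeros
    digit = lambda g: str(int(g, 2))                                 # 3 bits -> one octal digit
    out = "".join(digit(int_part[i:i+3]) for i in range(0, len(int_part), 3))
    if frac_part:
        out += "." + "".join(digit(frac_part[i:i+3]) for i in range(0, len(frac_part), 3))
    return out

print(binary_to_octal("101101000011"))  # 5503
print(binary_to_octal("1010100001"))    # 1241
print(binary_to_octal("11.0101111"))    # 3.274
```

The printed values reproduce the three worked examples above.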

(Refer Slide Time: 08:20)

Now, octal to binary is even simpler: you take the octal number and replace each octal digit
by its 3-bit binary equivalent. For 1645, replace 1 by 001, 6 by 110, 4 by 100, and 5 by
101. Similarly for a fractional number: 2 by 010, 2 by 010 again, then the point, 1 by 001,
7 by 111, 2 by 010, and so on.

(1645)₈ = (001 110 100 101)₂
(22.172)₈ = (010 010 . 001 111 010)₂
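This digit-by-digit expansion is nearly a one-liner in Python; a small sketch (the function name is mine), using `format(..., '03b')` to produce each 3-bit equivalent:

```python
# Each octal digit becomes its 3-bit binary equivalent; the point is kept as-is.
def octal_to_binary(s):
    return "".join('.' if ch == '.' else format(int(ch, 8), '03b') for ch in s)

print(octal_to_binary("1645"))    # 001110100101
print(octal_to_binary("22.172"))  # 010010.001111010
```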

So, you see, converting a binary number to octal or an octal number to binary is very
trivial. But if you want to convert, for some reason, decimal to octal or octal to decimal,
you can follow the same rules as for binary. Let me give one specific example of decimal to
octal conversion. Let us say I have the decimal number 3762 and I want to convert it to
octal. So, what do I do? I take this number and repeatedly divide by 8: 3762 divided by 8 is
470 with a remainder of 2; divide by 8 again, 470 divided by 8 is 58 with a remainder of 6;
divide by 8 again, 58 divided by 8 is 7 with a remainder of 2; divide again, 0 with a
remainder of 7. You have arrived at 0, so stop, and take the remainders in reverse order:
7 2 6 2. So, 7262 is the equivalent representation in octal.

So, conversion from decimal to octal can be done in this way, following a principle almost
identical to that for decimal to binary; the only difference is that here you divide by 8
instead of 2. For the fractional part it is the same: this time, instead of multiplying by
2, you multiply by 8. Let me take an example again.

8 | 3762
8 |  470    remainder 2
8 |   58    remainder 6
8 |    7    remainder 2
       0    remainder 7

(Refer Slide Time: 11:04)

Let us say I have the decimal number 0.356 and I want to convert it to octal. So, what do I
do? I multiply 0.356 by 8, which gives 2.848; the integer part is 2, so remember it. The
fractional part left is 0.848; multiply it by 8 to get 6.784, whose integer part is 6. Then
take 0.784 and multiply by 8 to get 6.272, integer part 6; and this continues. Now, if you
take the integer parts in the same order, 2 6 6, this will be approximately equivalent to
0.266 in octal. This is how we can convert directly from decimal to octal, or octal to
decimal.

0.356 × 8 = 2.848
0.848 × 8 = 6.784
0.784 × 8 = 6.272

(0.356)₁₀ ≅ (0.266)₈
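The repeated multiplication works for any radix; here is a small Python sketch (the name and radix parameter are my own) so that the same routine handles binary (r = 2) and octal (r = 8):

```python
# Repeated-multiplication rule for fractions, with the radix as a parameter.
def frac_to_radix(f, r, digits=3):
    out = []
    for _ in range(digits):
        f *= r
        out.append(int(f))   # the integer part becomes the next digit
        f -= int(f)
    return out

print(frac_to_radix(0.356, 8, 3))  # [2, 6, 6]
print(frac_to_radix(0.634, 2, 5))  # [1, 0, 1, 0, 0]
```

The first call reproduces the 0.356 → 0.266₈ example above; the second reproduces the 0.634 → 0.10100₂ example from the previous lecture.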

(Refer Slide Time: 12:30)

Now, let us come to hexadecimal, which is one step further: octal is base 8, hexadecimal is
base 16. In the earlier case we were grouping 3 binary digits (bits) to form one octal
digit; now we will be grouping 4 bits to form a hexadecimal digit. So, hexadecimal will be
even more compact in terms of the number of digits.

Radix: 16 = 2⁴

So, in the case of the hexadecimal number system, the radix is defined to be 16. Now, 16 is
again a power of 2, which is why binary to hexadecimal conversions are easy. The 16 digits
are defined as follows: the first 10 are 0 to 9, and the last 6 are A to F. So, 0 1 2 3 4 up
to 9, then A B C D E F; this is how the 16 digits are defined in hexadecimal. The table
shows the different hexadecimal digits 0 up to F and the corresponding binary equivalents.
Take any one, say 0110: multiply the bits by the weights and it is 6 in decimal; so, it is
equivalent to the digit 6.

(0110)₂ = 0×2³ + 1×2² + 1×2¹ + 0×2⁰ = (6)₁₆

Similarly, take 1100: it is 8 plus 4, which is 12, and the digit C means 12 (counting on
from 9: A is 10, B is 11, C is 12, D is 13, E is 14, F is 15). Similarly, 1111 means
8 + 4 + 2 + 1 = 15, and 15 means F. So, every 4-bit combination is equivalent to one
hexadecimal digit; this is the basic idea.

(1100)₂ = 1×2³ + 1×2² + 0×2¹ + 0×2⁰ = 8 + 4 = 12 = (C)₁₆
(1111)₂ = 1×2³ + 1×2² + 1×2¹ + 1×2⁰ = 8 + 4 + 2 + 1 = 15 = (F)₁₆

(Refer Slide Time: 14:38)

So, when we talk about binary to hexadecimal conversion, it is quite similar to octal. For
the integer part, the binary number is scanned from right to left, and instead of 3 bits you
now group 4 bits; each group of 4 bits is replaced by the corresponding hexadecimal digit,
and for the last group, if it has fewer than 4 bits, you add leading zeros.

Similarly, for the fractional part you scan from left to right; again you make groups of 4,
replace each by the corresponding hexadecimal digit, and add trailing zeros if required.

(Refer Slide Time: 15:21)

Take some examples. In this binary number, what is 1011? It is 8 + 2 + 1, that is 11, and 11
is the digit B; 0100 is 4, 0011 is 3; this is the hexadecimal equivalent, with the suffix 16
indicating hex. Similarly, scan this number from right to left: 0001 is 1; 1010 is 10, and
10 means A; then 1 0 with 2 zeros added, 0010, is 2. For fractional numbers, scan from left
to right: 1000 is 8; 010 with one 0 added, 0100, is 4.

Similarly, in another example here, you add a leading 0 to make 0101, which is 5; the next
group 0101 is again 5; and 111 with a trailing 0, 1110, is 14, which is E. So, binary to
hexadecimal is fairly simple, almost the same as binary to octal; only the size of the
groups is different.

( 101101000011 )2= (1011 01000011 )2=( B 4 3 )16


(1010100001)_2 = (0010 1010 0001)_2 = (2A1)_16
(.1000010)_2 = (.1000 0100)_2 = (.84)_16
(101.0101111)_2 = (0101.0101 1110)_2 = (5.5E)_16
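The grouping rules above can be sketched in a few lines of Python. The function name `bin_to_hex` is my own illustrative choice, not anything from the lecture; integer bits are padded with leading zeros and fractional bits with trailing zeros, exactly as described.

```python
HEX_DIGITS = "0123456789ABCDEF"

def bin_to_hex(int_bits, frac_bits=""):
    # Integer part: pad with leading zeros up to a multiple of 4,
    # then replace each 4-bit group by one hexadecimal digit.
    int_bits = int_bits.zfill(-(-len(int_bits) // 4) * 4)
    result = "".join(HEX_DIGITS[int(int_bits[i:i + 4], 2)]
                     for i in range(0, len(int_bits), 4))
    if frac_bits:
        # Fractional part: pad with trailing zeros instead.
        frac_bits = frac_bits.ljust(-(-len(frac_bits) // 4) * 4, "0")
        result += "." + "".join(HEX_DIGITS[int(frac_bits[i:i + 4], 2)]
                                for i in range(0, len(frac_bits), 4))
    return result
```

For example, `bin_to_hex("101101000011")` gives `"B43"` and `bin_to_hex("101", "0101111")` gives `"5.5E"`, matching the worked examples.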

(Refer Slide Time: 16:30)

Hexadecimal to binary is again very simple: take each hexadecimal digit and replace it by its
4-bit binary equivalent. For 3A5, replace the 3, the A and the 5; for 12.3D, the 1, the 2, the 3
and the D; and similarly for 1.8.

(3A5)_16 = (0011 1010 0101)_2


(12.3D)_16 = (0001 0010.0011 1101)_2
(1.8)_16 = (0001.1000)_2
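This digit-by-digit replacement is even easier to sketch; `hex_to_bin` is again a hypothetical helper name, and the radix point is simply carried through.

```python
def hex_to_bin(hex_str):
    # Replace each hexadecimal digit by its 4-bit binary equivalent;
    # the radix point, if present, is copied through unchanged.
    return " ".join("." if ch == "." else format(int(ch, 16), "04b")
                    for ch in hex_str)
```

For example, `hex_to_bin("3A5")` gives `"0011 1010 0101"`.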

(Refer Slide Time: 16:54)

So, let us now generalize; I gave an example just some time back for r equal to 8.

If you have a number in an arbitrary radix r and you want to convert between decimal and
radix r, you follow a principle similar to the one we showed for binary. For radix r to decimal,
you multiply each digit by the corresponding weight and add them up. As an example, let us
say I have an octal number 237 and I want to find its value in decimal. Each digit position has
a weight, so the value is 2 multiplied by 8 to the power 2, plus 3 multiplied by 8 to the power
1, plus 7 multiplied by 8 to the power 0. Now 8 squared is 64, and 64 times 2 is 128; 128 plus
24 plus 7 comes to 159.

(237)_8 = 2×8^2 + 3×8^1 + 7×8^0 = (159)_10

This will be the equivalent decimal number. So, radix r to decimal follows this principle:
multiply each digit by its weight and add them up. Decimal to radix r uses repeated division
or repeated multiplication: for the integer part, you repeatedly divide by r, take the
remainders, and read the remainders in reverse order.
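Both directions can be sketched directly from these rules; the function names are my own illustrative choices.

```python
def radix_to_decimal(digits, r):
    # Weighted sum: each digit is multiplied by the corresponding power of r.
    value = 0
    for d in digits:               # most significant digit first
        value = value * r + d      # Horner form of sum(d_i * r**i)
    return value

def decimal_to_radix(n, r):
    # Repeated division by r; the remainders, read in reverse order,
    # are the digits of the radix-r representation.
    digits = []
    while n > 0:
        n, rem = divmod(n, r)
        digits.append(rem)
    return digits[::-1] or [0]
```

For the octal example, `radix_to_decimal([2, 3, 7], 8)` gives 159, and `decimal_to_radix(159, 8)` gives back the digits 2, 3, 7.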

I illustrated this for octal, r equal to 8; similarly, for the fractional part, you multiply by r and
accumulate the integer part, which I also showed for octal. So, in this lecture we have
basically looked at the octal and hexadecimal number systems, and we saw how to convert
binary to hexadecimal and octal and vice versa because, as I have repeatedly said, binary
numbers are most important to us when we talk about designing digital circuits and
computers, which all work on the principle of the binary number system. I also talked about
how, for a general radix r, we can convert a decimal number to radix r and vice versa. These
basic techniques will help you later when you want to convert an arbitrary number from one
radix system to another.

Thank you.

Switching Circuits and Logic Design
Prof. Indranil Sengupta
Department of Computer Science and Engineering
Indian Institute of Technology, Kharagpur

Lecture - 03
Signed and Unsigned Binary Number Representation

So, if you recall, in the previous lectures we talked about the various number systems:
decimal, binary, octal, hexadecimal. And we mentioned that the binary number system is
important from the point of view of circuit design, specifically digital circuit design. So, in
this lecture we shall first try to motivate why that is so, and secondly we shall show how to
represent arbitrary numbers in binary. So far we did not consider negative numbers; we
assumed all numbers to be positive. How do we represent negative numbers? We shall be
talking about these specific things in the current lecture, which is titled Signed and Unsigned
Binary Number Representation.

(Refer Slide Time: 01:13)

So, let us first motivate ourselves: why are binary numbers important, why do we use them? I
just mentioned that binary numbers are important from the point of view of circuit design, but
why is that so? To answer this question you have to understand how circuits are designed and
implemented. Modern-day circuits are invariably implemented using basic building blocks
like transistors, typically MOS (metal oxide semiconductor) transistors; symbolically, a
transistor is drawn with a controlling terminal G, the gate.


Depending on what we apply to the terminal called the gate, there may or may not be a
current flowing between the other two terminals. So, the transistor acts like a switch
depending on the control input G: it is either conducting or non-conducting, two states. Now,
in a modern-day VLSI chip there are billions of such transistors, that is, billions of such
switches, very small in size; they all turn on and off in synchronism depending on some
external event or applied input, and together they realize the functionality that we want.

So, as I said, a switch can represent 2 states, either conducting or non-conducting. As an
analogy, the binary number system also has 2 states, the digits 0 and 1. So, if I represent the
conducting state as 1 and the non-conducting state as 0, then I can say that the state of a MOS
transistor represents a binary digit: it is either conducting or non-conducting, meaning it is
either 1 or 0. There is a one-to-one correspondence.

Similarly, you can follow other conventions. This is what I just mentioned: an open switch is
0 and a closed switch is 1; or you can talk about voltages, where if the voltage at some point
is low you say it is 0, and if it is high you say it represents binary 1. Then there is the absence
or presence of current, like the example I took; or, in some circuits nowadays, we use light for
the purpose of computation.

So, if there is no light it can represent binary 0, and if there is light it can represent binary 1.
This is the basic principle behind photonic communication; as you know, nowadays optical
fibers are used for long-distance, high-speed communication. The basic principle is just this:
you send light, and at the other side you sense whether the light is coming or not. So, bits can
be represented by the presence or absence of light.

(Refer Slide Time: 04:45)

So, in this course, because we are talking about digital circuits, we shall be concentrating on
binary numbers. Now, let us introduce some definitions. A single binary digit, which is 0 or 1,
is termed a bit, as we also mentioned in the last lecture. A collection of 4 bits is called a
nibble, and a collection of 8 bits is called a byte. The next higher unit we call a word, but the
word is not precisely defined: depending on the context, 16 bits, 32 bits or 64 bits can be
considered a word. The definitions of bit, nibble and byte, however, are fixed at 1, 4 and 8
bits respectively.

(Refer Slide Time: 05:44)

Now let us talk about representing numbers in binary. The question that we are trying to
answer is this: suppose we are representing a binary number in n bits, say n equal to 16.

Now, I want to ask: in 16 bits, how many distinct numbers can I represent? You have already
seen this earlier: when we talked about hexadecimal numbers, in 4 bits we could represent the
numbers 0 up to 15, and for octal, in 3 bits, 0 up to 7. So, my question is: for n bits in general,
what will be the range, how many total numbers can I represent? It is a simple counting
calculation. There are n bit positions, and each bit position can be either 0 or 1.

So, there are 2 choices for the first bit position, again 0 or 1 for the second, and so on; you
multiply 2 by 2 by 2, n times, and you get 2 to the power n. This is the total number of
distinct values you can represent.

2 × 2 × … × 2 (n times) = 2^n

So, recall that for hexadecimal, n equal to 4, you had numbers in the range 0 to 15. How
many numbers are possible? 2 to the power 4, which is 16; so a total of 16 numbers could be
represented. Here, for n bits, I can represent a total of 2 to the power n numbers. Now, the
next question is how to incorporate a sign in a number. Numbers can be unsigned, or there
can be a sign associated with the number. An unsigned number has only a magnitude and no
sign, but a signed number has a magnitude as well as a sign, which indicates whether it is
positive or negative.

Total numbers that can be represented = 2^n; for n = 4, 2^4 = 16

(Refer Slide Time: 08:11)

Let us first talk about unsigned binary numbers. We just saw that in n bits you can have 2 to
the power n distinct combinations. So, the minimum number you can represent is 0 (all 0's)
and the maximum number is 2^n − 1 (all 1's). Think of hexadecimal: 0000 means 0 and 1111
means 15. For octal, if you take n equal to 3, the 8 distinct combinations are 000, 001, up to
111; if you count, there are 8 combinations, where 000 means 0 and 111 means 7, which is
2^3 − 1, that is, 2^n − 1.

Minimum number = 00…0 (n zeros) = 0

Maximum number = 11…1 (n ones) = 2^n − 1

This table shows how the range of numbers changes as the value of n increases. For 8 bits, it
goes up to 2^8 − 1, that is, 256 − 1 = 255; for 16 bits it is 65,535; for 32 bits it is about 4
billion, so numbers of up to 10 decimal digits can be represented; and for 64 bits it is even
larger, approximately 21 decimal digits if you calculate it. So, as the number of bits increases,
the range of the numbers increases very rapidly. This table you should remember.

For n = 8, the range is 0 to 2^8 − 1, i.e., 0 to 255

For n = 16, the range is 0 to 2^16 − 1, i.e., 0 to 65,535
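The same range calculation can be written as a tiny helper; the function name is my own illustrative choice.

```python
def unsigned_range(n):
    # n independent bit positions, 2 choices each: 2**n distinct values,
    # running from all 0's (= 0) to all 1's (= 2**n - 1).
    return 0, 2**n - 1
```

For example, `unsigned_range(8)` gives `(0, 255)` and `unsigned_range(16)` gives `(0, 65535)`.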

(Refer Slide Time: 10:00)

Now, this we have already talked about earlier; here I am showing it in a mathematical form.
Let us say I have an n-bit binary number, an integer with no fraction point, whose digits are
b_{n−1} b_{n−2} … b_2 b_1 b_0, where b_0 is the least significant bit position and b_{n−1}
is the most significant bit position.

An n-bit binary number = b_{n−1} b_{n−2} … b_2 b_1 b_0

So, the weights will be 2^0, 2^1, 2^2, and so on. When you calculate the decimal equivalent,
you multiply the digits by the corresponding weights and add them up: b_0 multiplied by
2^0, b_1 by 2^1, and so on, up to b_{n−1} by 2^{n−1}. So, in binary, each digit position has
a weight which is some power of 2, and this way you can calculate the decimal equivalent.

The decimal equivalent = b_{n−1}×2^{n−1} + … + b_2×2^2 + b_1×2^1 + b_0×2^0

(Refer Slide Time: 11:09)

Just an example for n equal to 4, that is, 4 bits, just like hexadecimal: if you write all possible
combinations in binary and convert them to decimal, you will see that the range of numbers
starts with +0 and goes up to +15. These are considered unsigned numbers, that is, positive,
with no sign. This we have already seen for hexadecimal numbers; the same thing holds here.

(Refer Slide Time: 11:45)

Now let us come to the issue of sign. In most of our calculations we have to deal with both
positive and negative integers. So, the question is: how do we represent negative numbers in
a system or circuit, and how do we manipulate them? There are 3 broad approaches used in
practice, called the sign-magnitude, 1's complement and 2's complement representations. We
shall be explaining these three representations one by one. They are all used to represent
signed numbers, numbers which can be both positive and negative.

(Refer Slide Time: 12:38)

Let us look at the simplest one first, the sign-magnitude representation. Conceptually it looks
like this: suppose I have an n-bit representation; I reserve 1 bit for the sign, and the remaining
n − 1 bits represent the magnitude.

As a matter of convention, in the sign bit 0 means the number is positive and 1 means the
number is negative. So, the most significant bit, called the MSB in short, represents the sign,
which can be 0 or 1: 0 is positive, 1 is negative. The remaining n − 1 bits represent the
magnitude, and we already know that in n − 1 bits we can represent a number in the range 0
up to 2^{n−1} − 1; so the magnitude can be in this range.

For n − 1 bits, the range is 0 to 2^{n−1} − 1

So, in this representation the sign indicates whether the number is positive or negative, but
the maximum magnitude can be 2^{n−1} − 1. The smallest number is −(2^{n−1} − 1) and
the largest number is +(2^{n−1} − 1); this follows directly from the binary representation.

(Refer Slide Time: 14:20)

But there is one issue here: you can see that in this representation 0 is represented by 2
different bit patterns, because for the number 0 the magnitude will obviously be all 0's, but
the sign bit can be either 0 or 1; one pattern indicates +0 and the other indicates −0, although
technically speaking both are 0.

So, one drawback of this representation is that we are wasting a bit pattern: we are using up
two different bit patterns to represent the same number 0. This is one small drawback of this
representation.

(Refer Slide Time: 15:08)

This table shows the sign-magnitude representation in 4 bits. In the first column the MSB, the
most significant bit, is 0, which means the numbers are positive, and the last 3 bits indicate
the magnitude: 000 is 0, 011 is 3, and so on; 110 is 6, 111 is 7. The second column shows the
patterns where the most significant bit is 1, indicating the number is negative, with the
magnitude again ranging from 0 to 7. So, you can represent a total of 15 distinct numbers,
because +0 and −0 have 2 different representations, and on one side I can represent +1 to +7,
on the other side −1 to −7; so 14 plus 1, a total of 15 numbers.
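The encoding just described can be sketched as a pair of hypothetical helper functions; note how both `0000` and `1000` decode to 0, which is exactly the two-zeros drawback.

```python
def to_sign_magnitude(value, n):
    # MSB is the sign (0 = positive, 1 = negative);
    # the remaining n-1 bits hold the magnitude.
    assert abs(value) <= 2**(n - 1) - 1, "magnitude does not fit in n-1 bits"
    sign = "1" if value < 0 else "0"
    return sign + format(abs(value), "0%db" % (n - 1))

def from_sign_magnitude(bits):
    magnitude = int(bits[1:], 2)
    return -magnitude if bits[0] == "1" else magnitude
```

For example, `to_sign_magnitude(-6, 4)` gives `"1110"`, and both `from_sign_magnitude("0000")` and `from_sign_magnitude("1000")` give 0.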

(Refer Slide Time: 16:11)

Let us come to the 1's complement representation. In this method, if the number is positive, it
is represented exactly as in sign-magnitude form, with no difference: the first bit indicates the
sign, which is 0, and the remaining part represents the magnitude. But negative numbers are
represented in a different way, which we shall now see. Suppose I have the number −5; then I
compute something called the 1's complement of 5.

Once I take the 1's complement of 5, that is my representation of −5. In other words, for a
negative number I take the magnitude 5, compute its 1's complement, and that gives me the
representation of −5. The 1's complement itself is very simple: take the number, say 5, which
in 4 bits is 0101, and complement every bit of it. That means each 1 you make 0 and each 0
you make 1; you change or flip every bit, and this is your 1's complement. The resulting 1010
will indicate −5.

4-bit representation: +5 = 0101

1's complement of +5 = 1010 = −5

Now, you see that here again the most significant bit will indicate the sign, because +5 had
the most significant bit as 0, and when you flip it, it becomes 1. So, for negative numbers the
MSB will automatically be 1, and just by looking at the most significant bit you can tell
whether the number is positive or negative.

(Refer Slide Time: 18:41)

This table shows the representations in 1's complement form. For the positive numbers, as I
said, they are the same as in sign-magnitude form: 0, 1, 2, 3, 4, 5, 6, 7. But for negative
numbers it is a little different; I have worked out one here. Let us say I want to represent −4.
How do I do it? I first take +4, which is 0100, and then take the 1's complement of 0100: I
flip the bits, 0 becomes 1, 1 becomes 0, and so on, giving 1011. In the same way, let us take
−2, where 2 means 0010.

+4 = 0100; 1's complement of +4 = 1011 = −4

+2 = 0010; 1's complement of +2 = 1101 = −2

So, −2 is the 1's complement, 1101; you can see in the table that −2 is 1101. In this way we
can calculate the representation for every number. Here again you will see that there are 2
distinct representations of 0: +0 has the representation 0000, and −0 has the representation
1111, because you take +0 and flip the bits to get 1111; both represent 0. So, here again one of
the combinations is wasted, just like in sign-magnitude.

+0 = 0000; 1's complement of +0 = 1111 = −0
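The bit-flipping rule can be sketched directly; the function names are my own illustrative choices.

```python
def ones_complement(bits):
    # Flip every bit of the pattern.
    return "".join("1" if b == "0" else "0" for b in bits)

def encode_ones_complement(value, n):
    # Positive values: plain binary, MSB 0.
    # Negative values: 1's complement of the corresponding positive pattern.
    pattern = format(abs(value), "0%db" % n)
    return ones_complement(pattern) if value < 0 else pattern
```

For example, `encode_ones_complement(-5, 4)` gives `"1010"`, and `ones_complement("0000")` gives `"1111"`, showing the two patterns for zero.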

(Refer Slide Time: 20:32)

The range of numbers that can be represented here is just like in sign-magnitude form: the
maximum is +(2^{n−1} − 1) and the minimum is −(2^{n−1} − 1). If you look at the previous
slide, on the positive side it was +7 and on the negative side it was −7; for n equal to 4 the
limit is 2^3 − 1 = 7, so +7 and −7. And, as I mentioned, there are again 2 different
representations of 0.

Maximum number = +(2^{n−1} − 1)

Minimum number = −(2^{n−1} − 1)

But there is one big advantage, which we shall not discuss now but later: with the 1's
complement representation, when you do subtraction you do not need to subtract numbers
separately, because you can subtract using addition only. What does this mean? Well, a
computer system would otherwise need both addition circuits and subtraction circuits. If we
represent our numbers in 1's complement form, we do not need subtractor circuits: we can
have only the adder circuit, and using the adder we can do addition and also subtraction. This
is a big advantage, and we shall see later how it is achieved.

(Refer Slide Time: 22:11)

Now, let us move on to the third representation, 2's complement, which is in fact an extension
of 1's complement and is the most widely used representation today. Here again the positive
numbers are represented as in sign-magnitude form, while negative numbers are represented
in the so-called 2's complement form.

What is the 2's complement form? The definition is simple. Earlier we saw what is meant by
the 1's complement: just flip all the bits. For the 2's complement, you take the 1's
complement and then add 1 to the number; that is, complement every bit of the number and
then add 1 to the resulting number. Here again the most significant bit indicates the sign, 0
for positive, 1 for negative.

2's complement of a binary number X = 1's complement of X + 1

(Refer Slide Time: 23:24)

Let us see how the representation works, again with an example for n equal to 4. For the
positive numbers it is the same as the sign-magnitude or 1's complement form, no difference;
but for negative numbers there is a difference. Let us again take the example of −4, where +4
is 0100.

To represent −4, we first have to take the 2's complement of +4. 0100 is +4; you first take the
1's complement, which means you flip the bits to get 1011; 1011 in decimal is 8 plus 2 plus 1,
which is 11. Then you add 1 to it (we shall be talking about the rules of addition later);
adding 1 gives 12, and 12 is 1100. So, 1100 is the representation of −4.

Signed representation of +4 = 0100

1's complement of +4 = 1011

2's complement of +4 = 1011 + 1 = 1100 = −4
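This "flip then add 1" rule can be sketched as follows; the function name is my own illustrative choice, and the addition wraps within the word width.

```python
def twos_complement(bits):
    # 2's complement = 1's complement + 1, wrapping within len(bits) bits.
    n = len(bits)
    flipped = int(bits, 2) ^ (2**n - 1)          # flip every bit
    return format((flipped + 1) % 2**n, "0%db" % n)
```

For example, `twos_complement("0100")` gives `"1100"`, the representation of −4, and `twos_complement("0000")` gives `"0000"` again, anticipating the unique zero discussed next.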

Another advantage here concerns the issue of 0: earlier we had two different representations
of 0.

(Refer Slide Time: 24:55)

Let us try to compute −0. You take +0, take the 1's complement, and add 1 to it. If you add 1,
you will see that in 4 bits the result again becomes all 0's, which means −0 has the same
representation as +0.

Signed representation of +0 = 0000

1's complement of +0 = 1111

2's complement of +0 = 1111 + 1 = 0000 = −0

So, there are no longer 2 separate representations of 0. But then what does the extra
combination indicate? Here I can add one additional number, −8: the combination 1000
indicates −8. We shall see later how this comes about, but in 2's complement this is one
advantage: on the negative side I can represent one extra number, 0 to 7 on the positive side
and −1 to −8 on the negative side.

Maximum number = +(2^{n−1} − 1)

Minimum number = −2^{n−1}

(Refer Slide Time: 26:09)

So, the first difference is that the range of numbers increases by 1: on the minimum side we
can represent one extra number. This is because we have a unique representation of 0. Also,
just like in 1's complement, here too we can do subtraction by addition, and, as I said, almost
all computers today use the 2's complement representation.

So, when we talk about representing negative numbers today, we shall invariably be talking
about the 2's complement representation.

(Refer Slide Time: 27:00)

Let us talk about a few other features of the 2's complement representation. This may not be
very obvious, but it can help when you are calculating the value of a 2's complement number.
Suppose I have an n-bit number in 2's complement; let us take two 4-bit examples, say 0101
and 1101, and find their values. The simple rule is that each bit position has a weight and you
add them up; the most significant bit also has a weight, but a negative one.

So, for 0101: 0 multiplied by anything is 0, so the value is 0 + 4 + 0 + 1, which is 5; the
weights are 2^0, 2^1, 2^2, 2^3. But for 1101, because the most significant bit is 1, its weight
is −8, so the value is −8 + 4 + 0 + 1, which comes to −3. You can check this: take +3, which
is 0011, take the 1's complement and add 1; it becomes 1101, the same 1101. So, this is a
very quick way of finding the value of a 2's complement number: you again follow the
weighted-sum principle, only for the MSB the weight is negative; this is the only difference.

0101 = 0 + 4 + 0 + 1 = 5

1101 = −8 + 4 + 0 + 1 = −3
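The negative-MSB-weight rule can be sketched directly; the function name is my own illustrative choice.

```python
def twos_complement_value(bits):
    # Usual weighted sum, except the MSB carries weight -2**(n-1).
    n = len(bits)
    value = -int(bits[0]) * 2**(n - 1)
    if n > 1:
        value += int(bits[1:], 2)   # remaining bits: positive weights
    return value
```

For example, `twos_complement_value("1101")` gives −3 and `twos_complement_value("1000")` gives −8, the extra number mentioned above.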

(Refer Slide Time: 28:54)

Another interesting thing about the 2's complement representation is that if you shift a
number left by k positions, this is equivalent to multiplying the number by 2^k. Let us take
an example: in an 8-bit representation, +19 has the bits with weights 16, 2 and 1 set, so 16
plus 2 plus 1 is 19. If I shift the number left by 2, then 10011 gets shifted and 2 zeros are
inserted on the right. If you evaluate this number, the weights 64, 8 and 4 are set, so the value
is 64 + 8 + 4 = 76; and 19 multiplied by 4, that is 2^2, is 76.

+19 = 00010011

Shift left by 2 bits = 01001100

= 64 + 8 + 4

= +76

= (+19) × 4

Now take the case of a negative number: −29 is shown here, which you can check. Shifting
−29 left by 2, again with 2 zeros inserted on the right, gives −116; and −29 multiplied by 4 is
−116. So, every left shift by one position is equivalent to multiplying the number by 2; if you
shift left twice, you multiply by 2 squared.

−29 = 11100011

Shift left by 2 bits = 10001100

= −128 + 8 + 4

= −116

= (−29) × 4
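The left-shift rule can be sketched for a fixed word width; the function name is my own illustrative choice, and the result is only meaningful as long as the product still fits in n bits.

```python
def shift_left(value, k, n=8):
    # Left shift by k inside an n-bit 2's-complement word:
    # multiply by 2**k, keep the low n bits, reinterpret as signed.
    raw = (value << k) & (2**n - 1)
    return raw - 2**n if raw >= 2**(n - 1) else raw
```

For example, `shift_left(19, 2)` gives 76 and `shift_left(-29, 2)` gives −116, matching the two worked examples.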

(Refer Slide Time: 30:20)

Similarly, shifting right means dividing by 2; if you shift right by k positions, it means
dividing by 2^k. Again an example: this number represents +22, which you can check. If you
shift right by 2, zeros are inserted on the left, and the result becomes 5; 22 divided by 4 is 5
point something, and if you ignore the fractional part it is 5. One caution: if the number is
negative, then when you shift right you must pad with the sign bit, not with 0's, because the
number was negative and should remain negative.

+22 = 00010110

Shift right by 2 bits = 00000101

= 4 + 1

= 5 (ignore the fractional part)

= (+22) ÷ 4

So, −28 shifted right by 2 becomes −7, which you can check; again we are dividing by 4.
Shift right means division, and shift left means multiplication, by some power of 2. One last
trick: as the next example shows, you can replicate or copy the sign bit as many times as you
want without changing the value of the number; this is called sign extension. Consider an
8-bit number, say +47 in 8 bits; you can verify from the weights 32 + 8 + 4 + 2 + 1 that this
is +47. Now suppose I want to represent this number in 32 bits: I take this representation and
add 24 zeros at the beginning. This is sign extension.

−28 = 11100100

Shift right by 2 bits = 11111001 (pad with the sign bit)

= −128 + 64 + 32 + 16 + 8 + 1

= −7

= (−28) ÷ 4
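The sign-bit padding rule can be sketched as follows; the function name is my own illustrative choice.

```python
def arithmetic_shift_right(value, k, n=8):
    # Right shift by k divides by 2**k; the vacated positions on the
    # left are padded with copies of the sign bit, not with zeros.
    bits = format(value & (2**n - 1), "0%db" % n)   # n-bit pattern
    shifted = bits[0] * k + bits[:n - k]            # replicate the sign bit
    raw = int(shifted, 2)
    return raw - 2**n if raw >= 2**(n - 1) else raw
```

For example, `arithmetic_shift_right(22, 2)` gives 5 and `arithmetic_shift_right(-28, 2)` gives −7, as in the worked examples.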

You can check that the extended number also represents +47. The same thing holds for a
negative number; the only rule is that you replicate the sign bit, not 0. Here the sign bit is 1,
and this is the representation of −93, which you can also check; you can replicate the sign bit
as many times as you like, and the number will still remain −93. So, when you change the
size of a number, you can freely replicate the sign bit and prepend it as many times as you
want; this is a big advantage of the 2's complement representation.
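Sign extension and its value-preserving property can be sketched together; both function names are my own illustrative choices.

```python
def sign_extend(bits, width):
    # Replicate the sign bit on the left; the 2's-complement value is unchanged.
    return bits[0] * (width - len(bits)) + bits

def value(bits):
    # 2's-complement value: the MSB carries weight -2**(n-1).
    raw = int(bits, 2)
    return raw - 2**len(bits) if bits[0] == "1" else raw
```

For example, `value("00101111")` is 47 and `value(sign_extend("00101111", 32))` is still 47; likewise `value("10100011")` is −93 before and after extension.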

So, with this we come to the end of this lecture. So, we shall continue with our discussion in
the next lecture again.

Thank you.

Switching Circuits and Logic Design
Prof. Indranil Sengupta
Department of Computer Science and Engineering
Indian Institute of Technology, Kharagpur

Lecture – 04
Binary Addition and Subtraction

If you recall, in the last lecture we talked about the different ways of representing negative
numbers in binary. When we talked about the 1's complement and 2's complement
representations, I mentioned that one big advantage they offer is that you can do subtraction
using addition: you do not require a separate subtraction circuit in your system; if you design
an adder, that is enough, and you can also do subtraction with it.

So, in this lecture we shall talk about how addition and subtraction can be carried out in
binary and how 1’s complement and 2’s complement representations can help us in that ok.

(Refer Slide Time: 01:11)

So, let us look into this. Generally speaking, when you talk about binary arithmetic, let us
simply consider 2 bits. Suppose I have a circuit, treated as a black box, and I apply 2 bits a
and b at the input. In this table (this is sometimes called the truth table; we shall talk about it
later) I show the input bits a and b and all possible combinations of a and b. There are four
possible combinations, 2 to the power 2: 0 0, 0 1, 1 0 and 1 1.

Suppose we are trying to design an addition circuit to add them. When you add 2 bits, the
result of the addition will also be 2 bits at the output: one of them I call the sum and the other
the carry. How does it work? Think of decimal: 0 plus 0 is 0, so the sum is 0 with no carry; 0
plus 1 is 1, so the sum is 1 with no carry; similarly, 1 and 0 added give 1 with no carry. But if
we add 1 and 1, in decimal the result would be 2.

So, what is 2 in binary? In binary, 2 is represented by 1 0, so we say that the sum is 0 and
there is a carry of 1. The 2-digit output is considered as the carry followed by the sum, 1 and
0. This is how addition is carried out on a and b.
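The truth table just described is that of a half adder, which can be sketched in one line; the function name is my own illustrative choice.

```python
def half_adder(a, b):
    # sum = a XOR b, carry = a AND b; together they encode a + b in binary.
    return a ^ b, a & b
```

For example, `half_adder(1, 1)` gives `(0, 1)`: sum 0 with a carry of 1.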

(Refer Slide Time: 03:22)

Similarly, for subtraction, suppose now my circuit performs subtraction on the same two
inputs a and b. Now I have two outputs: one I call the difference and the other the borrow.

Again, if you subtract 0 from 0 the result is obviously 0, so the difference is 0 and the borrow
is 0. But if you subtract 1 from 0, that is 0 minus 1, you are taking a borrow; in this case both
the difference and the borrow will be 1. This is the rule of subtraction in binary: when you
subtract 1 from a 0, you have to take a borrow from the adjacent stage on the left.

So, the difference will be 1, and because you have taken a borrow, the borrow output will
also be 1. If you subtract 0 from 1, the result is 1, so the difference is 1 and the borrow is 0; if
you subtract 1 from 1, the result is 0. And finally, if you do multiplication of a and b, it is
fairly simple: anything multiplied by 0 is 0, so the first three products are 0, and 1 multiplied
by 1 is 1. But in the present lecture we are more concerned with addition and subtraction; the
product we shall see later.
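The subtraction table above is that of a half subtractor, which can be sketched the same way; the function name is my own illustrative choice.

```python
def half_subtractor(a, b):
    # difference = a XOR b; a borrow is generated only for 0 - 1.
    return a ^ b, (1 - a) & b
```

For example, `half_subtractor(0, 1)` gives `(1, 1)`: difference 1 with a borrow of 1.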

(Refer Slide Time: 05:06)

Let us talk about binary addition first and work out some examples to explain the process.
We are familiar with decimal addition: when you add, say, 26 and 37, you add 6 and 7 to get
13, write 3 and carry 1, then add 3 plus 2 plus 1 to get 6 with no carry; this gives the result.
For binary we do something very similar: the corresponding bits are added, and if there is a
carry it is added to the next pair of digits on the left.

So, let us try a trial addition. For the time being, assume the numbers are all positive, with no
sign; take two 5-bit numbers, 0 1 0 1 0 and 1 0 1 1 1. Initially there is no carry: 0 and 1 added
give 1 with no carry; 1 + 1 + 0 is 2, so the sum is 0 with a carry of 1; 0 + 1 + 1 is again 2,
giving 0 with a carry of 1; 1 + 0 + 1 is again 2, giving 0 with a carry of 1; and 0 + 1 + 1 gives
0 with a carry of 1.

(Refer Slide Time: 06:38)

So, this is my final 5-bit result, and there is a carry which finally comes out.

Let us take another example. In binary, this number represents 15, and adding 1 should give
16. Again, initially there is no carry: 1 plus 1 gives 0 with a carry of 1; 1, 1, 0 added give 0
with a carry of 1; again 0 with a carry of 1, and again 0 with a carry of 1; finally 1, 0, 0 give 1
with no carry. This is the result. Let us take another, similar example: 1, 0 give 1, no carry; 0,
0, 1 give 1, no carry; 0, 1, 1 give 0; 1, 1, 0 give 0; 1, 0, 0 give 1. This is the result. In this way
you can add any numbers by keeping track of the carries that are generated.

The point to note is that at every stage, after the first one of course, you are required to add 3
bits: the 2 operand bits plus the incoming carry. This is the process of binary addition; it is
simple, just like decimal addition.
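The stage-by-stage process just described can be sketched as a ripple addition over bit strings; the function name is my own illustrative choice, and the carry that finally comes out is kept as an extra leading bit.

```python
def add_binary(a_bits, b_bits):
    # Ripple addition, right to left: at each stage add the two operand
    # bits plus the incoming carry; sum bit = total % 2, carry = total // 2.
    width = max(len(a_bits), len(b_bits))
    a_bits, b_bits = a_bits.zfill(width), b_bits.zfill(width)
    carry, sum_bits = 0, []
    for i in range(width - 1, -1, -1):
        total = int(a_bits[i]) + int(b_bits[i]) + carry
        sum_bits.append(str(total % 2))
        carry = total // 2
    return str(carry) + "".join(reversed(sum_bits))
```

For example, `add_binary("01010", "10111")` gives `"100001"` (10 + 23 = 33), matching the trial addition above.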

(Refer Slide Time: 08:05)

Now let us talk about subtraction. Subtraction is a little more complex because you need to
understand how the borrow bits are generated. Let us take 3 examples. In the first, I am
subtracting the second number from the first: 0 minus 0 is 0, no borrow; again no borrow; 1
minus 0 is 1, no borrow.

Now, 0 minus 1: if you recall from the table, the difference will be 1 and the borrow will also
be 1. Because the borrow is 1, at the next position you are effectively subtracting 1 from 1, so
it becomes 0; and since we have taken the borrow, that 1 is used up, so finally it is 0 minus 0,
which is 0.

Similarly, there is another example: 1 minus 1 is 0, no borrow; 1 minus 0 is 1, no borrow;
again 1 minus 0 is 1, no borrow; then 0 minus 1 is 1, with a borrow; and because you have
taken a borrow, the next 1 disappears, so 0 minus 0 is 0.

In the third example: 1 minus 0 is 1; 1 minus 0 is 1; 0 minus 1 is 1 with a borrow. You have already taken a borrow here, so again you are subtracting 0 minus 1 because of that borrow: the result is 1, again with a borrow; again 0 minus 1 is 1 with a borrow. So, here your result comes with a final borrow of 1.
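The borrow rule in these examples can be put into a small Python sketch (my own illustration, with bits as MSB-first lists as before; the function name is mine):

```python
def sub_bits(a, b):
    """Subtract bit list b from a (MSB first) using the borrow rule
    from the text: 0 - 1 gives a difference of 1 with a borrow of 1,
    and an incoming borrow is subtracted at the next position.
    Returns (difference_bits, final_borrow)."""
    borrow = 0
    result = []
    for x, y in zip(reversed(a), reversed(b)):  # start from the LSB
        d = x - y - borrow
        if d < 0:              # need to borrow from the next stage
            d += 2
            borrow = 1
        else:
            borrow = 0
        result.append(d)
    result.reverse()
    return result, borrow

# 1100 - 0101 = 0111 (12 - 5 = 7), with no final borrow
diff, borrow = sub_bits([1, 1, 0, 0], [0, 1, 0, 1])
```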

So, you see that binary subtraction is a little more complicated, because the rule for the borrow is not as easy as in the case of decimal subtraction. So, what we try to do is avoid subtraction altogether: if we use the 1's complement or 2's complement representation, we can do subtraction using addition only. We do not need to remember how subtraction is done.

(Refer Slide Time: 10:35)

So, you can see that would be pretty nice: you do not have to remember these kinds of complex rules any more. Let us see subtraction using addition in 1's complement. The rule is as follows. Suppose I want to compute A minus B, where A and B are binary numbers; I want to subtract B from A.

When I want to subtract B from A, what I am telling you is this: take the 1's complement of B, and call it B1. Instead of subtracting, you add B1 to A. Then there is a correction step, which I shall illustrate with an example: if after this addition you see that there is a carry coming out, then you add that carry back to the result. This is called the end-around carry. But if there is no carry, you do not do anything.

Computation of R = A − B:
R = A + B1 (+ 1, if there is an end-around carry)
where B1 is the 1's complement of B

So, if a carry is generated, you can deduce two things: first, the result is a positive number, and of course the carry has to be added back to get the correct result. But if there is no carry (the else part), the result is negative, and whatever result you have got is already the 1's complement form of that negative number. So, using addition only, you are able to do the subtraction.

Let us illustrate this with the help of an example.

(Refer Slide Time: 12:26)

Let us take a simple example: subtracting 2 from 6, using a 4-bit representation. 6 is 0 1 1 0, and what will be the representation of minus 2? 2 is 0 0 1 0, and if you take the 1's complement of 2 by flipping the bits, you get 1 1 0 1. So, you add 1 1 0 1 to 0 1 1 0; as per the rule just mentioned, 1 1 0 1 is your B1. Adding: 0 plus 1 is 1, no carry; 1 plus 0 is 1, no carry; 1 plus 1 is 0, there is a carry; 0 plus 1 plus the carry is 0, there is a carry, and this carry comes out.

So, now you see there is a carry; this is called the end-around carry. You take this carry back and add it to the number: 0 0 1 1 plus 1. 1 plus 1 is 0 with a carry of 1; 1 plus the carry is 0 with a carry of 1; 0 plus the carry is 1, no carry; then 0. So, the final result is 0 1 0 0, which is 4, and 6 minus 2 is supposed to be 4. So, we have done subtraction using addition only.

Let us take another example, where there is no end-around carry; that means the result is negative.

(Refer Slide Time: 14:12)

Let us take 3 minus 5. 3 is represented as 0 0 1 1, and 5 is 0 1 0 1, whose 1's complement is 1 0 1 0. Add them up: 1 plus 0 is 1; 1 plus 1 is 0 with a carry of 1; 0 plus 0 plus the carry is 1, no carry; 0 plus 1 is 1, no carry.

So, here there is no carry; you do nothing and leave the result as it is. You can check from the table that 1 1 0 1 is nothing but the 1's complement of 2; that means minus 2. So, your result is already there. Thus, if there is an end-around carry you have to add it back; if there is no carry you do not add anything, and your result is already in 1's complement form. This is what we mean by doing subtraction using addition.
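Both cases of the 1's complement rule, with and without the end-around carry, can be captured in a short Python sketch (my own illustration; the function name and the integer bit-pattern encoding are assumptions):

```python
def ones_complement_sub(a, b, nbits=4):
    """Compute a - b via 1's complement addition, per the rule in the
    text: add the 1's complement of b to a.  If an end-around carry
    comes out, add it back and the result is positive; otherwise the
    result is negative and already in 1's complement form.
    a and b are small non-negative integers; the returned value is the
    nbits result pattern as an integer."""
    mask = (1 << nbits) - 1
    b1 = (~b) & mask            # 1's complement of b (flip the bits)
    s = a + b1
    if s > mask:                # an end-around carry was generated
        s = (s & mask) + 1      # add the carry back into the result
    return s & mask

# 6 - 2: end-around carry occurs, result 0100 = +4
# 3 - 5: no carry, result 1101 = 1's complement of 2, i.e. -2
```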

(Refer Slide Time: 15:13)

Let us look into the 2's complement version of the same thing. Here the idea is very simple, simpler than 1's complement in fact. To compute A minus B, you compute the 2's complement of B, call it B2, and add B2 to A. Here there is no concept of an end-around carry that has to be added back; nothing of that sort is required. If a carry is obtained, you simply ignore the carry, and the result is a positive number.

And if there is no carry, the result is negative and is already in 2's complement form, which means no correction step is required here, unlike in 1's complement. That is why I told you 2's complement is much more convenient, and almost all computer systems today use the 2's complement representation for storing negative numbers and also for carrying out arithmetic on negative numbers. This is the reason.

The correction step that was required in 1's complement is not required here.

Computation of R = A − B:
R = A + B2
where B2 is the 2's complement of B

(Refer Slide Time: 16:36)

Let us take the same example, 6 minus 2. Plus 2 is 0 0 1 0, so the 1's complement of 2 is 1 1 0 1, and the 2's complement is obtained by adding 1, giving 1 1 1 0. So, you add this up: 0 plus 0 is 0, no carry; 1 plus 1 is 0 with a carry of 1; 1 plus 1 plus the carry is 1 with a carry of 1; 0 plus 1 plus the carry is 0 with a carry of 1, and this 1 comes out.

Because a carry is coming out, you simply ignore the carry; whatever remains is the result: 0 1 0 0, which is plus 4. So, you can see this is much simpler than 1's complement: you simply add, ignore the carry, and whatever is there is the result.

Let us look at another example when the result is negative.

(Refer Slide Time: 17:48)

Let us take 3 minus 5. 3 is 0 0 1 1 and 5 is 0 1 0 1, whose 1's complement is 1 0 1 0; to take the 2's complement, add 1, giving 1 0 1 1. So, add 1 0 1 1: 1 plus 1 is 0 with a carry of 1; 1 plus 1 plus the carry is 1 with a carry of 1; 0 plus 0 plus the carry is 1, no carry; 0 plus 1 is 1, no carry. So, you are left with 1 1 1 0. Now follow the rule: what does 1 1 1 0 represent in 2's complement? I mentioned that the weight of the MSB is negative, so the value is minus 8 plus 4 plus 2 plus 0; the last bit is 0, so there is nothing there. So, this is minus 2.

So, you see, you simply do an addition; you do not have to worry about anything, and your result will automatically come out correctly. The only thing is that all negative numbers have to be represented in 2's complement form; that is the rule you have to follow. So, if in a computer system all your numbers are already stored in 2's complement form, you need not bother about anything.

When you are subtracting, you take the 2's complement and add it; otherwise you simply add. The numbers can be negative or positive; there is no issue. This is the big advantage of 2's complement: you do not need subtraction at all. So, later on, when we design circuits, we shall ignore subtractor circuits altogether; we shall only be designing adder circuits, because we know that if we have an adder we can also do subtraction.
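The 2's complement rule is even shorter to sketch than the 1's complement one. This is my own illustration (function names and the signed-value helper are mine); note that ignoring the carry out is just masking to nbits:

```python
def twos_complement_sub(a, b, nbits=4):
    """a - b via 2's complement addition: add the 2's complement of b
    and simply ignore any carry out -- no correction step is needed."""
    mask = (1 << nbits) - 1
    b2 = ((~b) & mask) + 1      # 1's complement plus 1
    return (a + b2) & mask      # masking drops the carry out

def from_twos_complement(bits, nbits=4):
    """Interpret an nbits pattern as a signed value: the MSB carries a
    negative weight of -2^(nbits-1), as mentioned in the text."""
    if bits & (1 << (nbits - 1)):
        return bits - (1 << nbits)
    return bits

# 6 - 2 gives 0100 = +4; 3 - 5 gives 1110, which reads as -8+4+2 = -2
```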

(Refer Slide Time: 20:03)

So, the last thing we talk about today is something that may happen when we do an addition. Let us take a small example. Suppose there are two positive numbers, say 0 1 1 0, which represents 6, and 0 1 0 1, which represents 5; I want to add them up. The sum is 1 0 1 1. Now, 6 plus 5 is supposed to be 11, but if I add them up, you see the most significant bit becomes 1, which indicates that the number is negative.

So, why is it becoming negative? Because the sum is supposed to be 11, and in 4-bit 2's complement I cannot represent 11; I can represent numbers only in the range plus 7 down to minus 8.

Here, the result after addition comes to 11, which cannot be represented. This is referred to as overflow.

(Refer Slide Time: 21:26)

Let us take another example, with negative numbers: say 1 0 1 0, which you can check represents minus 6, and 1 1 0 0, which is minus 4. Here again, if you add them, the most significant stage gives 0 with a carry of 1, and the carry we ignore. You see, the numbers were negative, but the sign bit of the result is 0: the result has become positive.

So, again, the result is supposed to be minus 10, which cannot be represented in 4 bits; this again is an overflow. So, how do you detect overflow? Detecting overflow is not that difficult. Overflow cannot occur if one of the numbers is negative and the other is positive; if you add those, there can be no overflow.

Overflow can only happen when both the numbers are negative or both are positive; only then can the result go out of range. So, the signs of the two numbers must be the same, that is the first requirement, and the sign of the sum must differ from the sign of either of the numbers, which is what was happening here.

In the first example, the signs of the two numbers were 0 and 0, but the sign of the sum became 1. In the second example, the numbers were negative, the signs were 1 and 1, but the sign of the sum was 0; the sign changed. But there are other ways to check as well: for the last addition stage, if the carry in and the carry out are different, that also indicates an overflow. So, there are multiple ways to check, but the essential idea is the same: the two numbers are of the same sign, and after addition the result is of the other sign; that indicates an overflow on addition.
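The sign-based check just described can be sketched in Python (my own illustration, not from the lecture; function name and encoding are mine):

```python
def add_detect_overflow(a, b, nbits=4):
    """Add two nbits 2's-complement patterns and flag overflow using
    the rule from the text: overflow occurs when both operands have
    the same sign but the sum's sign differs from theirs."""
    mask = (1 << nbits) - 1
    s = (a + b) & mask          # add, dropping the carry out
    sign = 1 << (nbits - 1)     # mask for the sign (MSB) position
    sa, sb, ss = a & sign, b & sign, s & sign
    overflow = (sa == sb) and (sa != ss)
    return s, overflow

# 6 + 5 in 4 bits gives 1011 with overflow (11 is outside +7..-8);
# -6 + -4 gives 0110 with overflow; 6 + (-2) gives 0100, no overflow
```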

So, with this we come to the end of this lecture. In the next lecture we shall be talking about some of the other codes that we use in practice, typically for representing decimal numbers: the so-called BCD codes and Gray codes. We shall be looking at those special kinds of codes in our next lecture.

Thank you.

Switching Circuits and Logic Design
Prof. Indranil Sengupta
Department of Computer Science and Engineering
Indian Institute of Technology, Kharagpur

Lecture - 05
BCD and Gray Code Representations

So, in the previous lectures, if you recall, we talked about the different number systems: binary, decimal, octal and hexadecimal. We also showed how to convert from one number system to another, and, with respect to some examples, illustrated how we can represent numbers, both positive and negative, in binary, and also how to carry out arithmetic operations like addition and subtraction.

Now, there are some applications where the numbers we primarily want to manipulate are decimal in nature. Take the example of a payroll application, where employees' salaries are calculated. In those kinds of applications, there is very little arithmetic or calculation involved. When you have very heavy calculations, it makes sense to convert the numbers into binary, because binary circuits are much faster: binary adders, subtractors and multipliers are much more efficient. But when you have such decimal numbers, there should be some alternative, so that we need not convert to binary and back every time, because there is very little calculation in such applications; mostly we will be reading from a file and printing, things like that.

So, in today's lecture we shall be considering precisely that. The topic of the lecture is BCD and Gray code representations of numbers.

(Refer Slide Time: 02:00)

So, let us see what the idea is. We start with something called binary coded decimal numbers. As I said, it is sometimes desirable to manipulate numbers in the decimal number system directly, instead of converting them to binary. Why? The reason is very simple: we have seen that, of course, we can do decimal-to-binary and binary-to-decimal conversion, but these conversion processes are not trivial; we have to do either repeated multiplication or repeated division to carry out the process.

If instead we have an alternate representation where you do not have to make this conversion, do not have to carry out these multiplications and divisions, then it will be very efficient in terms of the overall computation. So, what we talk about is something called binary coded decimal, or BCD; BCD is the acronym for binary coded decimal.

So, what is the idea in BCD? Here, every decimal digit is represented by its 4-bit binary equivalent. So, if I have a K-digit number, each of the digits will be represented by 4 bits, and you will finally have 4 × K bits; this is the basic idea. Here are some examples. Suppose I have the decimal number 238. If you want to convert it to BCD, you convert it digit by digit: for 2, the 4-bit binary equivalent is 0 0 1 0; for 3, it is 0 0 1 1; for 8, it is 1 0 0 0. So, this is your BCD representation. For a number with a decimal point, say 12.39, you encode 1 and 2, then the point, then 3 and 9.

K-digit decimal number → 4 × K bit BCD code

(238)10 = 0010 0011 1000, the corresponding BCD code

So, you see, the conversion is one-to-one and straightforward. The only point to note here is that we are taking the decimal digits 0 to 9 and converting them into 4-bit binary. Now, recall that in 4-bit binary we can represent 2 to the power 4, or 16, distinct combinations. Out of them we are using only 10; the remaining 6 are not used. Which 6 combinations? These: 1010, 1011, 1100, 1101, 1110 and 1111. These 6 combinations are not used in the BCD system. So, let us move on.
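The digit-by-digit encoding can be sketched in one line of Python (my own illustration; the function name is mine):

```python
def to_bcd(n):
    """Encode a non-negative decimal integer into BCD, digit by digit:
    each decimal digit becomes its 4-bit binary equivalent, so a
    K-digit number maps to 4*K bits."""
    return ' '.join(format(int(d), '04b') for d in str(n))

# The example from the text: 238 -> 0010 0011 1000
```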

(Refer Slide Time: 05:25)

So, here you see the 10 decimal digits, 0 to 9, and their corresponding 4-bit binary equivalents. As I had said, the remaining 6 combinations, corresponding to 10 (1 0 1 0) up to 15 (1 1 1 1), are not used in this case.

(Refer Slide Time: 06:00)

This is something you have to remember. Next, we said that the advantage of BCD is that we can convert decimal numbers into BCD-coded binary representation very easily; we have seen how. Now, if you want to do some simple calculations, let us say additions, how do we do them directly in BCD? We shall now talk about the rules for addition of BCD numbers. The rule is fairly simple: when you add BCD numbers, there may be some correction steps you have to go through.

In a correction step, you may have to add 6, which in binary is 0 1 1 0, to one of the nibbles. The rules are very simple. When do we apply the correction? We apply it if one of the nibbles is an invalid combination, meaning one of the 6 combinations from 10 to 15; or, alternatively, if there is a carry from the previous nibble, then also you carry out the correction. Let us look at examples to illustrate this point. Here, 3 examples are worked out; let us look at the first one.
examples which are worked out, let us look at the first example here.

So, here we are adding 23 and 46. 23 is coded in BCD as 2, 0 0 1 0, and 3, 0 0 1 1; 46 is coded as 4, 0 1 0 0, and 6, 0 1 1 0. When you add, bit by bit, we follow the same rules as binary addition: 1 plus 0 is 1, no carry; 1 plus 1 is 0 with a carry of 1; 0 plus 1 plus the carry is 0 with a carry of 1; 0 plus 0 plus the carry is 1, no carry. Then for the upper nibble: 0 plus 0 is 0; 1 plus 0 is 1, no carry; 0 plus 1 is 1, no carry; 0 plus 0 is 0. So, after this addition, both nibbles are valid BCD digits: this is 6 and this is 9, and there is also no carry from one stage to the next. Neither of the 2 correction conditions is true here; therefore, whatever you got is the final result, 69. 23 plus 46 is 69. This is one example. Let us take another example where one of the nibbles becomes invalid: 23 plus 48. This is 23, 2 and 3, and this is 48, 4 and 8.

So, let us add again: 1 plus 0 is 1, no carry; 1 plus 0 is 1, no carry; 0 plus 0 is 0; 0 plus 1 is 1, no carry. Then 0 plus 0 is 0; 1 plus 0 is 1, no carry; 0 plus 1 is 1, no carry; 0 plus 0 is 0. Here there is no carry, either from one nibble to the next or out of the final stage, but one of the nibbles, 1 0 1 1, is invalid; 1 0 1 1 is an invalid combination in BCD. That is why we apply a correction step on this nibble: we add 6, again following the rules of binary addition. 1 plus 0 is 1; 1 plus 1 is 0 with a carry of 1; 0 plus 1 plus the carry is 0 with a carry of 1; 1 plus 0 plus the carry is 0 with a carry of 1. So now, there is a carry coming out.

This carry propagates into the upper nibble, so its last position becomes 1. The result therefore comes to 71: this is 7, this is 1, which is 23 plus 48; the result comes out correct. Now take another example, say 28 plus 39. This is 28, and this is 39. Let us do the addition again: 0 plus 1 is 1; 0 plus 0 is 0; 0 plus 0 is 0; 1 plus 1 is 0 with a carry of 1. You see, now there is a carry going from the lower nibble into the upper one: 0 plus 1 plus the carry is 0 with a carry of 1; 1 plus 1 plus the carry is 1 with a carry of 1; 0 plus 0 plus the carry is 1; then 0. So, after addition the nibbles are both valid, 0 1 1 0 and 0 0 0 1, but there was a carry from the first nibble to the second. So, you add a correction step of 6 to the lower nibble; after this addition the result becomes 67, and 28 plus 39 is 67.

So, now you understand the simple rules for BCD addition. Whenever you have such BCD numbers, you first add them in binary; then see whether one of the nibbles has become invalid, or whether there is a carry. If either condition is true, you add 6 to that digit. In the examples I have taken, the correction applied to the first nibble; it can also happen in the second nibble, and then you have to do the same thing there, add 6 to it. So, the rule is like this.
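These rules can be sketched in Python. This is my own illustration (function names are mine); note that for a single digit position the test "raw sum greater than 9" covers both correction conditions, since an invalid nibble and a generated carry both correspond to a digit sum above 9:

```python
def bcd_add_digit(d1, d2, carry_in=0):
    """Add two BCD digits with the correction rule from the text:
    if the raw sum is an invalid combination (10..15) or produced a
    carry, add 6 (0110) to fix the nibble."""
    s = d1 + d2 + carry_in
    if s > 9:                   # invalid nibble or carry generated
        s += 6                  # correction step
        return s & 0xF, 1       # corrected nibble, carry out
    return s, 0

def bcd_add(a_digits, b_digits):
    """Add two equal-length BCD numbers given as digit lists, MSD first."""
    carry = 0
    out = []
    for d1, d2 in zip(reversed(a_digits), reversed(b_digits)):
        s, carry = bcd_add_digit(d1, d2, carry)
        out.append(s)
    if carry:
        out.append(carry)
    out.reverse()
    return out

# The three examples from the text: 23+46=69, 23+48=71, 28+39=67
```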

(Refer Slide Time: 12:30)

Next, let us talk about another kind of code, the Gray code, using which you can represent not only decimal digits but any multi-bit binary number.

Now, let us first try to see why we need the Gray code, and what the basic reason is for moving to a new code. Consider the conventional number system: when we count, say in decimal, 0, 1, 2, 3, and so on, and write the representations in, say, 4-bit binary, 0 is 0 0 0 0, 1 is 0 0 0 1, 2 is 0 0 1 0, 3 is 0 0 1 1, 4 is 0 1 0 0, and so on. Now, the issue is: from one combination to the next, how many bits are changing? Let us look. From 0 to 1, only the last bit changes, so just 1 bit. But from 1 to 2, both of the last 2 bits change, so 2 bits change. From 2 to 3, again only the last bit, just 1; but from 3 to 4, all of the last 3 bits change, that is 3 bits.

Now, there are many applications, which we shall be talking about, where it is good if you can reduce the number of bits that change between successive counts; if multiple bits change, there can be a problem. We shall take an example later and show how that kind of problem can arise, but for the time being assume that if multiple bits change, there can be a problem. We are trying to avoid that: we are trying to come up with a counting system where only 1 bit changes across successive count values. This is what we are trying to do. So, what is a Gray code?

The first thing about the Gray code is that it is non-weighted: unlike binary numbers, where every bit position has a weight, 2 to the power 0, 2 to the power 1, and so on, in a Gray code number you cannot assign weights to the bit positions like that. The second important thing, which I have already mentioned, is that successive code words differ in only one bit; any code with this property, a 1-bit difference between successive words, is called a cyclic code.

So, the Gray code is a cyclic code. The second part I shall explain a little later with the help of an example, but for the time being, consider any application where you need to convert from analog to digital. Analog means a value that is continuous in nature: think of temperature, think of motion, think of the rotation of a wheel; these are all continuous quantities. You cannot say that a wheel will only rotate by 1 degree, 2 degrees or 3 degrees; it can rotate by 1.5 degrees, 1.2 degrees, 1.15 degrees, anything. These are continuous motions. But when you want to represent this kind of continuous quantity or parameter in binary, there has to be a mechanism for the conversion.

If you convert it into a 4-bit representation, you can have 16 possibilities: if the rotation of a wheel is represented in 4 bits, you can have 16 possible rotation angles, because in 4 bits you can represent 0 to 15. But if you represent it in, say, 8-bit binary, then you can have 256 rotation angles, since 2 to the power 8 is 256. The Gray code is useful here, and I shall explain a little later why it reduces error in conversion. The second thing is that it is fairly simple to convert a binary number to Gray code, or a Gray code number to binary, as we shall also see through examples.

The example of the wheel we shall see later, and we will explain these points with respect to it. Let us first talk about a 4-bit Gray code.

(Refer Slide Time: 17:45)

This table shows a 4-bit Gray code along with the corresponding binary representations. The first column shows the decimal numbers 0 to 15, and next to them are the binary representations, 0, 1, 2, 3, up to 15. Side by side, we have also shown the corresponding Gray codes: 0 is represented as 0 0 0 0, 1 as 0 0 0 1, 2 as 0 0 1 1, 3 as 0 0 1 0, and so on. You see that between any pair of successive codes exactly 1 bit changes: if you consider these two, the last bit changes; if you consider these two, the middle bit changes.

So, between any pair of successive bit patterns, exactly one bit changes. Similarly, from 15 back to 0, only the first bit changes; this is what the Gray code basically is. Across successive codes, exactly 1 bit position changes state; this is the property of the so-called cyclic codes we mentioned, and this is an example of a cyclic code.

(Refer Slide Time: 19:30)

There is another interesting thing about the Gray code, for which it is also sometimes called a self-reflecting code. The idea of a self-reflecting code is this: suppose I have my Gray code representation for 4 bits, and I want to extend it to a 5-bit Gray code representation. From 4 bits it is very easy to get 5 bits, and from 5 bits it is very easy to get 6 bits. This is the basic idea, and such codes are sometimes called self-reflecting codes, for a reason we shall now see.

The idea is very simple. Suppose I already have an m-bit Gray code representation available with me; I am showing it as a box. What I do is write down that m-bit representation again, not directly, but as a mirror image: the last code word goes first and the first goes last. That is why it is called self-reflecting, as if it is reflected. Then, in the first segment I add a bit 0 at the beginning of every code word, and in the second segment I add a bit 1 at the beginning.

So, I get the Gray code for (m+1) bits. To repeat: to obtain the Gray code representation for (m+1) bits, we write down two m-bit representations one below the other, just as I showed, with the second one being the mirror image of the first. Then we add a 0 at the beginning of every code word in the first block, and a 1 at the beginning of every code word in the second block.

(Refer Slide Time: 21:34)

So, let us take an example, this one. Suppose I have the Gray code for 2 bits; this is the Gray code representation for 0, 1, 2 and 3. When I want to generate the Gray code for 3 bits, I write this twice: first as it is, second as a mirror image. Mirror image means 1 0 comes first, 1 1 next, 0 1 next, and 0 0 last. Then I add 0's at the beginning of the first block and 1's at the beginning of the second block.

So, I get the 3-bit Gray code; you can check here that across every pair of successive codes exactly 1 bit changes. Similarly, from 3 bits to 4 bits you follow the same principle: write down the 3-bit Gray code block twice, one the mirror image of the other, just reflected; add 0 at the beginning of the first block and 1 at the beginning of the second block. What you get is the 4-bit Gray code.
So, generating the Gray code from m bits to (m+1) bits is fairly simple, as you can see. This is also the reason why it is called a self-reflecting code. Now, let us look at binary-to-Gray and Gray-to-binary conversions.

(Refer Slide Time: 23:07)

First, let us look at how we can convert a binary number to Gray code. Consider, in general, n-bit representations. Suppose my binary number is written as the bits b(n−1) down to b1, b0, and the corresponding Gray code number is g(n−1) down to g1, g0. The rule is fairly simple: the most significant bit is copied straight away.

Binary number: b(n−1) … b1 b0

Gray code: g(n−1) … g1 g0

So, g(n−1) = b(n−1); the most significant bit is copied straight away. For every other bit, gi is the modulo-2 sum of the corresponding binary bit bi and the next more significant binary bit b(i+1). This is the rule.

g(n−1) = b(n−1)

gi = bi ⊕ b(i+1), for the remaining bits b(n−2) … b1 b0

The modulo-2 sum is defined as follows: 0 ⊕ 0 = 0, 0 ⊕ 1 = 1, 1 ⊕ 0 = 1, 1 ⊕ 1 = 0. Now, take an example: let 1 0 1 1 0 1 be a binary number. The most significant bit, 1, you simply copy. The next Gray bit is the modulo-2 sum of the corresponding binary bit and the previous one, 0 ⊕ 1, which is 1. The next bit is 1 ⊕ 0 = 1, the next is 1 ⊕ 1 = 0, the next is 0 ⊕ 1 = 1, and the last is 1 ⊕ 0 = 1.

(Refer Slide Time: 25:37)

Let us take another example. Suppose I have the 8-bit binary number 0 1 1 0 1 0 1 0. Follow the same rule: the first bit is copied straight away, 0. Each subsequent Gray bit is the modulo-2 sum of that binary bit and the previous one: 1 ⊕ 0 = 1, then 1 ⊕ 1 = 0, then 0 ⊕ 1 = 1, then 1 ⊕ 0 = 1, then 0 ⊕ 1 = 1, then 1 ⊕ 0 = 1, then 0 ⊕ 1 = 1. So, 0 1 0 1 1 1 1 1 is the corresponding Gray code representation of this binary number. Given a binary number, you can very easily convert it into Gray code.
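The binary-to-Gray rule can be sketched in Python (my own illustration; the function name is mine, and numbers are passed as MSB-first bit strings):

```python
def binary_to_gray(bits):
    """Convert a binary bit string (MSB first) to Gray code: copy the
    MSB, then set each Gray bit to the XOR (modulo-2 sum) of the
    binary bit and the next more significant binary bit."""
    g = bits[0]                          # MSB is copied straight away
    for i in range(1, len(bits)):
        g += str(int(bits[i]) ^ int(bits[i - 1]))
    return g

# The examples from the text: 101101 -> 111011, 01101010 -> 01011111
```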

(Refer Slide Time: 26:30)

Let us look at the reverse, Gray code to binary conversion. The rule is fairly simple, as you can see. We start at the leftmost bit position and move towards the right. The binary bit bi is set to be the same as the Gray code bit gi if the number of 1's preceding gi is even, and it is set to the complement of gi, written gi′, if the number of 1's preceding gi is odd.

Let us take an example. Suppose I have an 8-bit Gray code, say 0 1 0 1 1 0 0 1, and I want to convert it to binary. For the first bit there are no preceding bits, so the number of 1's preceding it is 0, which is even; I simply copy it, bi = gi, so the 0 is copied. I move to the next bit: the number of preceding 1's is still 0, even, so it is also copied straight away, giving 1. Then I come to a 0 where the number of preceding 1's is 1, which is odd.

bi = gi, if the number of 1's preceding gi is even

bi = gi′, if the number of 1's preceding gi is odd

So, it gets complemented: 0 becomes 1. For the next bit, 1, the number of preceding 1's is again 1, which is odd, so it is complemented to 0. For the next bit, 1, the preceding 1's number 1 and 2, even, so it is copied straight away. For the next bit, 0, the preceding 1's number 1, 2, 3, odd, so it is complemented; the same for the next 0; and for the final 1, again with 3 preceding 1's, odd, it is also complemented. This gives the equivalent binary number. So, you see, the conversion is not that difficult; if you know the rule, you can convert straight away like this.
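The Gray-to-binary rule can also be sketched in Python (my own illustration; the function name is mine, and the parity count is recomputed at each step purely for clarity):

```python
def gray_to_binary(gray):
    """Convert a Gray code string (MSB first) to binary using the rule
    from the text: copy the bit if the number of 1's preceding it in
    the Gray code is even, otherwise complement it."""
    b = gray[0]                          # no preceding 1's: copy MSB
    for i in range(1, len(gray)):
        ones = gray[:i].count('1')       # 1's preceding this position
        bit = int(gray[i])
        b += str(bit if ones % 2 == 0 else 1 - bit)
    return b

# The example from the text: Gray 01011001 -> binary 01101110;
# and Gray 111011 converts back to binary 101101
```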

(Refer Slide Time: 28:47)

Now, let us talk about the example we mentioned earlier; that is, why we say that the Gray code can reduce errors. Take the example of the rotation of a wheel; look at this diagram. There is a wheel, say the wheel of a car or of some other mechanism, which is rotating, and you are trying to measure or monitor its degree of rotation. What you do is drill some holes in the wheel at certain places; on one side there are some light sources, and on the other side there are some light sensors.

So, wherever there is a hole, light will pass through and the sensor will sense it; where there is no hole, no light passes and the sensor finds no light. It is something like this: there are some light sources on one side of the wheel, arranged as shown, five of them, and on the other side there are the corresponding light sensors.

And the wheel is coded as follows: some parts of the wheel are shown as white and some as colored. The idea is that colored means opaque, solid, so light cannot pass through, while white means transparent, perhaps made of glass, so light can pass through. The way we have done this coloring is based on the Gray code. Take the example of 3 bits; this is the 3-bit Gray code representation.

So, the successive code words are written down: 0 0 0, 0 0 1, 0 1 1, 0 1 0, and so on. For 0 0 0, you see, we have made the segment all white; white means 0 and solid means 1. Then 0 0 1 has one solid ring, and similarly for 0 1 1, 0 1 0, 1 1 0, 1 1 1, and the rest. Now, because it is a Gray code, between one segment and the next exactly one of the bits changes state.

Now, imagine we did the same thing with binary numbers. In 4 bits, consider the number 7, 0 1 1 1; the next number is 8, 1 0 0 0, so all 4 bits change. Imagine the wheel is just between 7 and 8 as it moves: due to slight misalignment, some of the sensors may be reading light and some not, so there can be a small error in several sensors at once. In the worst case, there can be an error of that many bits, because all 4 bits are changing. But in Gray code, only one bit changes between one segment and the next.

So, during the transition, even if there is a small error, only that 1 bit can read as either 0 or 1; the other bits are completely stable. The error can therefore be at most a single bit in Gray code during this rotation sensing. This is the main idea behind using Gray coding here; this is just one example I have cited, and there are many others.

Here I mention just one more example. In the design of circuits, as we shall see later when we talk about sequential circuits, if we use some kind of Gray code encoding between successive states, so that only one of the outputs changes state at a given time, then it provides us with a mechanism to reduce the amount of power dissipation. Power or energy dissipation is very important nowadays, because most devices run on batteries. So, minimizing power is very important, and this Gray code encoding also helps in that.

(Refer Slide Time: 33:20)

So, this is what was mentioned: if binary code were used instead of Gray code, multiple bit positions might change across segments and so the error could have been higher; but with Gray code the error can be only in a single bit position during reading.

So, with this we come to the end of this lecture. In lecture number 6 we shall be talking about something else: error detection and correction. Say we have represented some numbers in binary; they might be stored, or they might be communicated from one place to another. There are chances, for various reasons, that some bits may get flipped or become erroneous; how to detect and correct them will be the topic of the next lecture.

Thank you.

Switching Circuits and Logic Design
Prof. Indranil Sengupta
Department of Computer Science and Engineering
Indian Institute of Technology, Kharagpur

Lecture - 06
Error Detection and Correction

In this lecture we shall be talking about Error Detection and Correction when we store or communicate some digital data. The idea is very simple: when we have some data, we store it somewhere, maybe on a hard disk or elsewhere, and with time some bits might get changed or corrupted. Similarly, when you are sending some data over a network, over a communication channel, due to noise in the channel some of the bits might again get corrupted.

So, it is a very important requirement to provide some error detection and error correction mechanism during this kind of storage and communication process. Now, there are many methods available, many sophisticated algorithms and techniques. Here we shall just give you the basic concept and one simple method using which you can carry out error detection and correction.

This is the objective of the present lecture.

(Refer Slide Time: 01:19)

So, for the basic concept of error detection and correction, suppose I have some data or a number which I want to store or communicate; this is my basic data.

(Refer Slide Time: 01:31)

So, one thing to understand is that whenever we talk about error detection and correction, we must add some extra bits to the data; these are sometimes called check bits. If we do not add some additional bits, it will not be possible to tell, just by looking at the data, whether the data is correct or some error has taken place.

So, it is mandatory to provide some additional information in the form of check bits, and the data along with the check bits will form something called code words. So, we have the concept of code words: extra bits are added to the data and we get code words. Now, the concept behind error detection is that, when we talk about code words, there can be valid codes and invalid codes; code words can be valid or invalid.

Normally, whenever we store or communicate some data, it corresponds to valid code words. Now, suppose we are talking about a single bit error: 1 bit changes. If I design my code in such a way that every single bit error converts a valid code word into an invalid code word, then my task is done. I should have a way to test whether a code word is invalid or not. So, if I see it is invalid, it means that some error has taken place: it was a valid code word, and some bit error has converted that valid code word into an invalid one. This is the basic concept. So, let us talk about one definition here: distance between code words.

Let us say we have two code words Ci and Cj; the distance between them is often denoted dist(Ci, Cj). It means: in how many bit positions do Ci and Cj differ? Let us take a specific example: consider the code words 01001 and 11100. How many positions differ? The first position is 0 versus 1, different; the second position is the same; the third is different; the fourth is the same; the fifth is different. So, the number of differing positions is 3.

We say that the distance between the code words is 3. The reason we talk about distance is that the distinction between valid and invalid code words is often based on this concept of distance.
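The distance just defined (usually called the Hamming distance) can be written as a one-line function; this is a small illustrative sketch, not part of the lecture:

```python
def dist(ci, cj):
    """Hamming distance: number of bit positions where ci and cj differ."""
    assert len(ci) == len(cj), "code words must have the same length"
    return sum(a != b for a, b in zip(ci, cj))

print(dist("01001", "11100"))   # 3, as in the example above
```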

(Refer Slide Time: 05:04)

So, this is the kind of scenario I talked about: I have valid code words and invalid code words, and there should be a mechanism, as I said, to check whether a code word is valid or invalid. Let us say our task is to detect single bit errors. What I claim is that if this is the requirement, then the distance between any pair of valid code words must be at least 2. Why? Consider two valid code words, say C1 and C2. I am saying that any single bit error must transform them into invalid code words.

What does a single bit error mean? It means I move a distance of 1 away from C1 or C2. Now, suppose the distance between C1 and C2 were less than 2, say 1. Let us take an example: C1 is 10010 and C2 is 10000; only 1 bit position differs, so the distance is 1. Then it can happen that the valid code word C1 suffers a single bit error in that position and becomes 10000, which is also a valid code word. So, valid remains valid, and the error goes undetected.

So, a distance of 1 will not satisfy my property; the distance must be at least 2, so that any single bit error transforms C1 into some word which is not in the set of valid code words, but in the other set.

(Refer Slide Time: 07:23)

So, across all valid code words, the distance between any pair must be at least 2; this is the mandatory requirement. Generalizing this concept, if we are talking about detecting k errors, that is, up to k bit errors can occur, then k or fewer errors should not convert one valid code word into another valid code word. So, the distance here must be at least k + 1. This is the basic theory behind the idea. Now, suppose I want not only detection but also correction. Correction means that when an error has taken place, not only do I detect the error, I also find out the location of the error, so that I can correct it. This is what is meant by error correction.

Now, if I want to do error correction, what is the concept? Consider C1 and C2. Due to a single bit error, C1 can go to some other word, and likewise C2 can go to some other word. Now, what I am saying is that if C1 and C2 can both reach the same word, then on seeing that word I cannot say whether it resulted from an error in C1 or from an error in C2.

For single error correction, you must be able to tell uniquely which code word the error occurred in, that is, from where you arrived at the received word. This means that if a single error brings you to some word from C1, then at least 2 errors must be needed to reach that same word from C2.

So, the distance between C1 and C2 must be at least 3. Only if a requirement like this is satisfied can you say that you can also provide error correction in addition. This is the basic concept.
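These distance rules (distance at least k + 1 to detect k errors; for correction a larger distance is needed, as developed below) can be explored with a small sketch. The code set used here is a made-up toy example chosen so that its minimum distance is 3:

```python
from itertools import combinations

def min_distance(codewords):
    """Minimum pairwise Hamming distance over a set of valid code words."""
    return min(sum(a != b for a, b in zip(ci, cj))
               for ci, cj in combinations(codewords, 2))

# Toy code with minimum distance 3: by the rules above it can detect up to
# 2 errors (3 >= k + 1 for k = 2) and correct 1 error.
code = ["00000", "01011", "10101", "11110"]
d = min_distance(code)
print(d, "detects:", d - 1, "corrects:", (d - 1) // 2)   # 3 detects: 2 corrects: 1
```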

(Refer Slide Time: 09:34)

So, let us first talk about a very simple scheme for detecting errors. This is a very popular technique using something called parity codes. Parity is very simple: with respect to a given binary word, you check whether the number of 1's in the word is odd or even. Well, there can be different conventions, but as one convention you add a parity bit to your data so that the total number of 1's in every valid code word becomes even.

Let us take an example: consider 3-bit binary words, as I show here; let us say these are my data bits, from 0 0 0 up to 1 1 1. I add 1 additional parity bit to every word in such a way that the number of 1's in every row becomes even. For 0 0 1, I add a 1 so that the number of 1's becomes 2; for 0 1 0, again a 1; for 0 1 1 it is already 2, so I make the parity bit 0 to keep it even; for 1 1 1, 3 plus 1 makes 4; and so on.

So, the parity bit can be defined like this: it will be 0 if the number of 1's in the binary code word is even, and 1 if the number of 1's is odd. This is how we add these parity bits.
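The even-parity construction just described can be sketched in a few lines (an illustrative sketch, not part of the lecture):

```python
def add_even_parity(data_bits):
    """Append a parity bit so that the total number of 1's becomes even."""
    p = sum(data_bits) % 2          # 1 if the count of 1's is odd, else 0
    return data_bits + [p]

for word in [[0, 0, 0], [0, 0, 1], [0, 1, 1], [1, 1, 1]]:
    cw = add_even_parity(word)
    print(word, "->", cw)
    assert sum(cw) % 2 == 0         # every valid code word has even parity
```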

(Refer Slide Time: 11:34)

Now, once you add the parity bits, what advantage do you gain? You see that any odd number of errors will be detected. Why? Because now my code word consists not only of my data but also the parity, and we know that the number of 1's in the whole thing is even. Any odd number of errors, which means a single error, 3 errors, or for larger words 5, 7, 9 errors, will convert it into a word where the number of 1's is odd. So, I have a checking mechanism where I check whether the number of 1's remains even or not.

So, any odd number of errors will make the number of 1's odd, and hence it can be detected. So, I can use it for error detection; a very simple mechanism. Not only single bit errors, but any odd number of bit errors: a single error, 3 errors, 5 errors, etc.
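The parity check, and its limitation that an even number of errors slips through, can be demonstrated with a small sketch (illustrative only):

```python
def parity_check_fails(codeword):
    """True if the received word has odd parity, i.e. an error is detected."""
    return sum(codeword) % 2 == 1

valid = [0, 1, 1, 0]                 # even number of 1's: a valid code word
assert not parity_check_fails(valid)

corrupted = valid[:]
corrupted[2] ^= 1                    # flip one bit (any odd number of flips works)
assert parity_check_fails(corrupted) # single error detected

double = valid[:]
double[0] ^= 1
double[3] ^= 1                       # two flips: parity stays even, error missed
assert not parity_check_fails(double)
```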

(Refer Slide Time: 12:36)

Now, let us come to a more complex mechanism where I also want to do error correction. The rule for error correction, as I mentioned for single errors, is that for 1-bit error correction I need a distance of 3. Here I am generalizing that statement: if I want to correct up to k bit errors, the minimum distance between every pair of valid code words should be at least 2k + 1.

Now, Hamming code is one method by which we can correct single bit errors, that is, k = 1. So, let us see how the Hamming code works. The Hamming code that we shall explain here is for k = 1.

The idea is: suppose I have my data bits or information bits, m of them. We add a set of parity bits, not 1 but k parity bits (do not confuse this k with the k above), so that we create an (m + k)-bit word.

(Refer Slide Time: 14:23)

Now, these (m + k)-bit words, as I shall show with an example, will have numbered bit locations. What I mean is: suppose this is my code word and these are my bits; I assign numbers 1, 2, 3, 4 and so on, from left to right.

So, each of these bit positions is assigned a number. And how many parity bits are to be added? There is a rule we have to follow: k is the smallest value that satisfies the inequality

2^k ≥ m + k + 1

So, suppose my information bits are 4 in number, m = 4. What is the smallest value of k that satisfies this? The right hand side is 4 + k + 1. If k is 2, the left hand side is 2^2 = 4 and the right hand side is 4 + 2 + 1 = 7, so it is not satisfied. If k is 3, the left hand side is 8 and the right hand side is 4 + 3 + 1 = 8, so it is satisfied. So, the smallest value of k here is 3, and in the bit assignment 1 2 3 4 5 that I mentioned, I will be adding k parity bits. Which of these positions will hold the parity bits?

They will be the positions which are powers of 2: position 1, position 2, position 4, and possibly position 8. So, I will insert the parity bits in the power-of-2 positions only, and there are some well-defined formulas using which we can calculate the parity bits, as I will illustrate.
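The rule for the number of parity bits can be checked with a tiny sketch (illustrative only; the function name is our own):

```python
def num_parity_bits(m):
    """Smallest k with 2**k >= m + k + 1 (number of Hamming parity bits)."""
    k = 1
    while 2 ** k < m + k + 1:
        k += 1
    return k

print(num_parity_bits(4))   # 3 -> a 7-bit code word
print(num_parity_bits(5))   # 4 -> a 9-bit code word
```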

(Refer Slide Time: 16:23)

Let us take this example for m = 4, as I said. For m = 4 we had seen that k must be equal to 3. So, we need 7 bits (4 + 3): b1, b2, up to b7. And, as I said, we insert the parity bits in those positions which are powers of 2; that means bit positions 1, 2 and 4. My 4 information bits are located in the remaining positions: the 3rd, 5th, 6th and 7th. In the table I am only showing the decimal digits; you can also add the other combinations.

So, I am showing only these 10 combinations. You see, 0 has information bits m1 m2 m3 m4 equal to 0 0 0 0; 1 is 0 0 0 1; 2 is 0 0 1 0, and so on; 9 is 1 0 0 1. Now, we insert the parity bits in a suitable way. How? The rules are as follows; look at this table.

So, I write down the 3 parity bits p1, p2, p3, and, other than the all-0 combination, I write down all possible 3-bit combinations: 0 0 1 is 1, 0 1 0 is 2, 0 1 1 is 3, 1 0 0 is 4, and so on up to 7. So, other than the all-0 combination, I write down all 7 combinations, and on top I write the 7 bit positions 1 2 3 4 5 6 7. For p1, look at which positions have a 1 in the p1 row: those are positions 1, 3, 5 and 7.

So, I note them down: 1 3 5 7. Similarly p2 has 1's in positions 2, 3, 6 and 7, and p3 in positions 4, 5, 6 and 7. Now I can write down the parity generation formulas like this; the symbol used indicates the modulo-2 sum, which we introduced earlier. See, p1 corresponds to positions 1 3 5 7; out of them, position 1 holds p1 itself.

So, the remaining positions are 3, 5, 7. Position 3 holds m1, 5 holds m2 and 7 holds m4; so p1 is the modulo-2 sum of m1, m2, m4. For p2, the positions are 2 3 6 7, and p2 itself is in position 2, so 3, 6, 7 remain: 3 is m1, 6 is m3 and 7 is m4, so p2 is the modulo-2 sum of m1, m3, m4. Similarly for 4 5 6 7: 4 is p3, and 5, 6, 7 remain.

So, p3 is the modulo-2 sum of m2, m3, m4. In this way you can calculate the parity bits. Let us take an example; consider 9, the last row. p1 is the modulo-2 sum of m1, m2, m4. Recall that the parity indicates whether the number of 1's is odd or even: it is 1 if odd, 0 if even. The modulo-2 sum works exactly like that; that is parity.

So, for p1, look at m1 m2 m4: 1, 0, 1 is an even number of 1's, so p1 is 0. Then p2 is the sum of m1, m3, m4: 1, 0, 1, again even, so p2 is also 0. And p3 is the sum of m2, m3, m4: 0, 0, 1, which is odd, so p3 is 1. In this way you fill up all the parity bits; this is how it is done.
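The encoding rules just derived can be sketched as follows (an illustrative sketch of the m = 4 case; the function name is our own):

```python
def hamming_encode(m1, m2, m3, m4):
    """Build the 7-bit code word b1..b7 for 4 data bits.

    Parity bits sit at the power-of-2 positions 1, 2, 4; data bits at
    positions 3, 5, 6, 7. Each parity is a modulo-2 sum (XOR).
    """
    p1 = m1 ^ m2 ^ m4   # covers positions 1, 3, 5, 7
    p2 = m1 ^ m3 ^ m4   # covers positions 2, 3, 6, 7
    p3 = m2 ^ m3 ^ m4   # covers positions 4, 5, 6, 7
    return [p1, p2, m1, p3, m2, m3, m4]

print(hamming_encode(1, 0, 0, 1))   # data 1001 (decimal 9) -> [0, 0, 1, 1, 0, 0, 1]
```

For data 1 0 0 1 this reproduces the parities worked out above: p1 = 0, p2 = 0, p3 = 1.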

(Refer Slide Time: 21:04)

Now, after you have done this, you need to detect and correct errors. This is fairly simple. In the previous slide you saw that p1, p2, p3 were calculated using the position sets 1 3 5 7, 2 3 6 7 and 4 5 6 7; remember these sets. You straight away find three check bits c1, c2, c3 just by taking the modulo-2 sums over those same corresponding positions: 1 3 5 7, 2 3 6 7 and 4 5 6 7. And the modulo-2 sum means, I repeat, that if the number of 1's among those bits is even then c1 will be 0, and if the number of 1's is odd it will be 1.

Now, the rule is simple. After you have calculated these check bits, if you find that they are all 0, there is no error. But if they are non-zero, there is an error, and the value you get directly gives you the position of the error. Let us take an example. Suppose I have one of the valid code words from that table: say 1 0 0 1 1 0 0. This is the 7-bit word. Let us say there is an error and this 0 in position 6 has become 1, so my code word has become 1 0 0 1 1 1 0; this bit is in error.

Let us calculate using these formulas. What will be the value of c1? Positions 1 3 5 7 hold 1, 0, 1, 0, an even number of 1's, so c1 is 0. For c2, positions 2 3 6 7 hold 0, 0, 1, 0, an odd number, so c2 is 1. For c3, positions 4 5 6 7 hold 1, 1, 1, 0, again odd, so c3 is 1. So, c3 c2 c1 corresponds to 1 1 0. What does 1 1 0 mean in binary? It is 6. So, bit number 6 is in error; the check bits tell you exactly which bit position is in error, and since you know that position you can change this 1 back to 0 and correct it. You can verify with other examples that any single bit error in these 7 positions can be corrected by this method.

This is the advantage of the Hamming code: using just a few parity bits you can detect and correct errors.
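The check-and-correct procedure can be sketched as follows (illustrative only; the function name is our own). It reproduces the worked example: a valid code word gives syndrome 0, and flipping bit 6 gives syndrome 6:

```python
def hamming_syndrome(b):
    """Return the error position (0 means no error) for a 7-bit word b1..b7."""
    b1, b2, b3, b4, b5, b6, b7 = b
    c1 = b1 ^ b3 ^ b5 ^ b7   # check over positions 1, 3, 5, 7
    c2 = b2 ^ b3 ^ b6 ^ b7   # check over positions 2, 3, 6, 7
    c3 = b4 ^ b5 ^ b6 ^ b7   # check over positions 4, 5, 6, 7
    return c3 * 4 + c2 * 2 + c1      # the binary number c3 c2 c1

word = [1, 0, 0, 1, 1, 0, 0]         # a valid code word from the table
assert hamming_syndrome(word) == 0   # all check bits zero: no error

word[5] ^= 1                         # flip bit position 6 (list index 5)
pos = hamming_syndrome(word)
print(pos)                           # 6: the check bits point at the bad bit
word[pos - 1] ^= 1                   # flip it back to correct the error
assert hamming_syndrome(word) == 0
```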

(Refer Slide Time: 24:16)

Let us work out another one; I am just giving the schematic, and the rest I shall leave as an exercise for you to verify. Suppose we extend the concept and want to design a Hamming code for m = 5. Again look at that inequality: how many parity bits do we require? You can check that the smallest value of k is 4: for k = 4, the left hand side is 2^4 = 16 and the right hand side is 5 + 4 + 1 = 10, and 16 ≥ 10. But if you take k = 3, the left hand side is 8, which is smaller than the right hand side 5 + 3 + 1 = 9.
( 16 ≥10 ) , but if you take k equal to 3 then the right hand side will become larger.

So, the first thing is the bit assignment. As I said, there will be 5 + 4 = 9 bits: b1, b2, up to b9. The 4 parity bits are assigned to the power-of-2 positions 1, 2, 4 and 8; the rest you fill with the information bits m1, m2, m3, m4, m5. For the 4 parity bits, as usual, you write down their binary combinations other than all 0's, and then see how the positions stack up: p1 will correspond to positions 1, 3, 5, 7 and 9; p2 to 2, 3, 6 and 7; p3 to 4, 5, 6 and 7; and p4 will have only two 1's in its row, positions 8 and 9.

In this way you can design the check bit mechanism; you can also fill up the table, computing the parity bits using the corresponding formulas just like in the previous example. I will leave it as an exercise for you to work out and see how it works. Here also you can check that any single bit error will be detected and also corrected.

So, with this we come to the end of our very short discussion on error detection and correction. From the next lecture onwards we shall move on to the actual logic gates and their implementation.

Thank you.

Switching Circuits and Logic Design
Prof. Indranil Sengupta
Department of Computer Science and Engineering
Indian Institute of Technology, Kharagpur

Lecture - 07
Logic Gates

So, we start our discussion of the so-called Logic Gates from this lecture; the title of the lecture is Logic Gates. Now, if you recall, we have talked about numbers and number systems: primarily binary, and other derived number systems like octal, hexadecimal, decimal, Gray code, BCD, etc. When we talk about the binary number system, the next step, and the objective of the present course, is to build circuits based on the binary number system, something which is called switching circuits.

Logic gates form the basic building blocks of such switching circuits. So, we first talk about the different kinds of common logic gates, which we shall be using in our subsequent lectures and discussions.

(Refer Slide Time: 01:19)

(Refer Slide Time: 01:21)

So, the first thing is that we are talking about designing digital circuits, and logic gates will be the basic building blocks of such circuits. There are many different types of logic gates. The most common are NOT, AND and OR; these three are the most basic ones. Of course, NAND, NOR and exclusive-OR (EXOR) are also quite popular, and there are a few others.

Now, when you design a circuit nowadays, we talk about integrated circuits: a large amount of circuitry is packed, or integrated, within a single device. You can see such integrated circuits on every circuit board: open a computer, a laptop, a mobile phone, a washing machine, an air conditioner, whatever you have, and you will see a lot of electronic circuitry, all in terms of integrated circuits.

Here I have shown you some typical pictures of integrated circuits and how they look; they vary in shapes and sizes. Over the years several generations of integrated circuits have come up, starting in the late 1960's with something called small scale integration (SSI); integrated circuits are sometimes also called chips.

An SSI chip contained at most about 10 to 12 gates, very small circuits. Then in the 70's came medium scale integration, MSI, where we were able to pack about 100 gates in a chip. Then came large scale integration with thousands of gates, then very large scale integration with tens of thousands of gates, and this VLSI is still progressing. As of today we are able to pack in excess of 10^8, or 100 million, gates per chip. So, you see, today we have chips of fantastic complexity, with such a huge number of components or basic gates inside a chip that it allows us to build incredibly complex systems in a very compact space, because the sizes of these integrated circuits are very small. This gives us a very big advantage in system design.

Another thing I wanted to mention: there is an empirical observation made by Gordon Moore, co-founder of Intel, long back in the year 1965. What he predicted, based on his observations, is that the number of transistors in a chip should double every year. In the initial years this trend was found to hold good, and the so-called Moore's law predicts that such a trend will continue in the foreseeable future.

The pace has slowed down over the years, but what we see today is that the kind of technological advancements we have still allow the number of transistors per chip (or per square inch; here we are talking per chip) to double approximately every 18 months. Moore initially talked about 1 year, but now it is about 18 months. This actually means exponential growth: it doubles every fixed number of months.

(Refer Slide Time: 05:50)

A very typical curve is shown here. The x-axis shows the years, projected up to 2030, and the y-axis shows the number of transistors per chip on a logarithmic scale; a straight line therefore means exponential growth.

Here we are mostly showing Intel's processors, along with a few others like Motorola and AMD; the ones at the bottom are Intel processors. You see that the trend shows exponential growth, and this is continuing till today.

(Refer Slide Time: 06:35)

Now, let us come to binary logic. The digital logic gates that we are going to talk about typically operate on binary logic. In binary logic, as we have already seen, we have two digits, 0 and 1. So, the question is: why do we want to go for binary logic and not some other system like decimal or octal? The first thing is that when we talk about designing electronic circuits, circuits are built using small electronic components, and most of these components can very easily be visualized as having two distinct states.

If you can map these distinct states onto the two values 0 and 1, then there can be a direct one-to-one correspondence between the circuits and the binary numbers that we want to compute with using these circuits. So, electronic components can be visualized, as I have said, as having two distinct states.

Some typical examples: a miniature electronic switch can be either open or closed, two states; the voltage on a line can be either low or high, two states; current is either flowing or not flowing; a resistance value is either high or low. So, there are many technologies which use the opening or closing of switches, voltages, currents, or resistances in various ways to represent these two states. This is the basic idea behind binary logic.

(Refer Slide Time: 08:28)

Now, let us come to the basic logic gates. We start with the simplest gate, which is called a NOT gate; it is also sometimes referred to as an inverter, because it inverts the state of the input line. A NOT gate has a single input and a single output; pictorially it can be depicted by a symbol like this. The input is A and the output is referred to as A bar. The behavior of a NOT gate can be represented by something called a truth table.

Now, what is a truth table? In a truth table, on one side we show the inputs and on the other side we show the expected outputs. Here there is a single input, so I am only showing A and listing all its possible values, 0 and 1. The expected output is the inverse: if the input is 0, the output will be 1; if the input is 1, the output will be 0. That is how a NOT gate works.

(Refer Slide Time: 09:52)

Let us move on to the other gates. Next comes the AND gate. The first thing is that an AND gate can have two or more inputs. Here I am showing the symbol of an AND gate with two inputs, A and B, and the output. The AND function is denoted with a dot: A . B. Let us first understand how an AND gate works; this is again depicted by a truth table. Here are the inputs, with all possible values of A and B shown: 0 0, 0 1, 1 0, 1 1, and this is the expected output.

So, what does the AND gate do? The output is 1 if both inputs are 1; otherwise it is 0. You see, only when both inputs are 1 is the output 1. This concept can be extended to an AND gate with three or any number of inputs, where the output will be 1 only when all the inputs are 1, and 0 otherwise. This is the definition of an AND gate: the output is 1 if A and B and C, whatever inputs there are, are all 1; if not all of them are 1, the output is 0.

(Refer Slide Time: 11:28)

Similarly, you have something called an OR gate. Firstly, this is the OR gate symbol, and OR is denoted as plus: A + B in logic denotes not addition but the OR operation. Here the idea is that the output will be 1 if at least one of the inputs is 1.

So, for a two-input gate, with these inputs, the output is 1 if the inputs are 0 1, 1 0 or 1 1; only when both of them are 0 is the output 0; otherwise the output is 1. Here again you can extend the definition of an OR gate to any number of inputs: the output is 0 if all the inputs are 0; otherwise the output is 1.

(Refer Slide Time: 12:30)

Now, let us come to what you can call a composition of two gates; I will explain what I mean by that. It is called a NAND gate. First look at the symbol: a NAND gate uses the symbol of an AND gate with a small bubble at the output, and symbolically I write it as A . B whole bar (the dot means AND, as I told you).

So, essentially a NAND gate means an AND operation followed by a NOT; this is the NAND operation. If A and B are my inputs, I have A AND B here, and (A AND B) bar out here. So, the operation is just the opposite of AND: you see, the output is 0 if both inputs are 1, and 1 otherwise; for the AND gate the output was 1 when both inputs were 1.

But because there is a NOT operation at the output, the output will be 0 if both the inputs are 1. Here again you can extend the NAND gate to any number of inputs. So, functionally a NAND gate is nothing but an AND gate followed by a NOT gate; that is why I said it is like a derived gate: if you have AND and NOT, you can also make a NAND.

(Refer Slide Time: 14:19)

Next comes the NOR gate, which is again an extension of the OR gate. A NOR gate looks like this: it is like an OR gate with a bubble indicating a NOT operation at the output. So, again, an OR gate followed by a NOT gate makes a NOR gate.

Recall that for an OR gate the output was 0 only when both inputs were 0, and 1 for the other cases. With the NOT it is reversed: for 0 0 the output is 1, and for all other combinations the output is 0. This is what a NOR gate means.

(Refer Slide Time: 15:11)

Now, let us come to something called the exclusive-OR gate. You have already seen this symbol: I mentioned it as the modulo-2 sum when we talked about it earlier. Let me tell you, the modulo-2 sum is nothing but the exclusive-OR operation; now we are introducing the specific gate, the exclusive-OR gate, or EXOR in short. The symbol is like an OR with a double line on the input side. For a two-input exclusive-OR, if you look at the truth table, the output is 1 if the inputs are 0 1 or 1 0, which means exactly one of the inputs is 1; but if both inputs are 0 or both inputs are 1, then the output is 0.

Here again you can extend the exclusive-OR definition to any number of inputs, but what will the definition of such a gate be? Instead of saying that one input is 0 and one is 1, we say that the output will be 1 if an odd number of inputs are 1. You can check in the truth table: the output is 1 exactly in the rows with an odd number of 1's (one 1 is odd; two 1's or zero 1's are even), and if the count is even the output is 0.

So, for larger gates the output is defined like this: if an odd number of inputs are 1, the output is 1; if an even number of inputs are 1, the output is 0.

(Refer Slide Time: 17:12)

Now, let us come to something called the exclusive-NOR gate. The exclusive-NOR gate is just the exclusive-OR followed by a NOT: you see there is an exclusive-OR symbol with a NOT symbol, a bubble, at the end. In the output column you see just the reverse: when the inputs are 0 1 or 1 0, for EXOR the output was 1, now it is 0; for the other combinations it is 1.

So, definition-wise, the output will be 1 if an even number of inputs are 1; that means 0 0 (zero 1's, which is even) and 1 1 (two 1's, also even) give output 1, but if the number of 1's is odd the output is 0. Symbolically the operation is denoted as EXOR with a bar.
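All the truth tables discussed so far can be reproduced with a small sketch (Python is used here purely as an illustration; the gates themselves are hardware devices):

```python
# Each gate as a tiny function on bits (0/1), following the truth tables above.
def NOT(a):      return 1 - a
def AND(a, b):   return a & b
def OR(a, b):    return a | b
def NAND(a, b):  return NOT(AND(a, b))   # AND followed by NOT
def NOR(a, b):   return NOT(OR(a, b))    # OR followed by NOT
def XOR(a, b):   return a ^ b
def XNOR(a, b):  return NOT(XOR(a, b))   # EXOR followed by NOT

def xor_n(*bits):
    """Multi-input EXOR: output 1 when an odd number of inputs are 1."""
    return sum(bits) % 2

print("A B | AND OR NAND NOR XOR XNOR")
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "|", AND(a, b), "  ", OR(a, b), "  ", NAND(a, b),
              "  ", NOR(a, b), "  ", XOR(a, b), "  ", XNOR(a, b))
```

Note how NAND and NOR are literally built as NOT of AND and NOT of OR, mirroring the "derived gate" idea in the lecture.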

(Refer Slide Time: 18:20)

One thing to note: you have the basic operations, AND, OR and NOT; these three constitute the basic operations. And there are derived operations like NAND, NOR, EXOR and EXNOR; derived means these can be realized using the basic gates. As I said, an AND gate followed by a NOT gate is equivalent to a NAND gate. Later on we shall see that, not only this, some of these gates, for example NAND (and also NOR), can by themselves be used to realize any other type of gate; this we shall see later.

So, we have talked about the various kinds of gates, this AND, OR, NOT, NAND, NOR,
EXOR, EXNOR. So, in our next lecture we shall be talking about how some of this gates can
be actually implemented or realized following which we shall be going into the design right,
when you have some function or some application how do use this gates to design our
functionality.

So, this will be our flow during the next few lectures. With this we come to the end of this lecture. As I said, in the next lecture we shall be talking about some of the ways in which the gates we discussed today can actually be built.

Thank you.

Switching Circuits and Logic Design
Prof. Indranil Sengupta
Department of Computer Science and Engineering
Indian Institute of Technology, Kharagpur

Lecture - 08
Logic Families to Implement Gates

If you recall, in the last lecture we talked about the various logic gates. In the present lecture we shall be talking about some Logic Families to Implement Gates. Now, what do we mean by logic families? Over the years people have come up with many techniques or many ways of designing these basic gates, because the ultimate objective was to design digital circuits, and these gates were the basic building blocks.

Now, we shall be looking at some of these techniques or logic families using which such
gates can be constructed, which will help us in designing larger circuits. Now, we shall be
looking at some of the conventional methods, which have been used and also some
unconventional methods. So, this we shall be discussing in the next couple of lectures.

(Refer Slide Time: 01:22)

So, today in this lecture we shall be talking about the issue of logic families, which means
how to construct logic gates so that they may be used in circuits.

Now, when you talk about logic families, there are various logic families which have been used: conventionally there are techniques like diode transistor logic, transistor transistor logic and emitter coupled logic. Today the most widely used is something called CMOS, complementary metal oxide semiconductor; it is universally used today. There are some unconventional families also, which we shall be discussing in the next lectures. Let me just tell you what unconventional can mean: some kind of optical circuits, where instead of voltages and currents we talk about the presence and absence of light, or something called resistive circuits, where the resistance value of some device is changed and that can represent the states 0 and 1.

But in this lecture we shall be looking at some of the more conventional logic families and
how they work very briefly.

(Refer Slide Time: 02:56)

We start with diode transistor logic, which was one of the first logic families that were proposed. As the name implies, we use semiconductor diodes and transistors; when these logic families were proposed, bipolar transistors were used for the implementation. I am showing you some simple examples. Let us consider the case where you want to build a 2 input AND gate. The way this circuit was designed was like this: there were 2 diodes connected like this, and on the output side there was a resistance connected to a positive supply voltage.

Now, let us see how this method used to work. The inputs are A and B. Suppose I have grounded both the inputs; let us say ground means logic 0 and the positive supply voltage means logic 1, and let us follow this convention. So, if the inputs are at ground, then from the power supply there will be a current flowing path, with current flowing towards A and B. And the voltage at the output node will be very low: it will be ground plus the drop across the diode, maybe about 0.6 or 0.7 volts.

So, the output will be at a very low voltage, and let us say a very low voltage is equivalent to logic 0. This will be true if at least one of the inputs is at ground: if A is at ground, B can be at the high voltage also, or if B is at ground, A can be at the high voltage also. Then also there will be a current flowing path and the same thing will happen; the output will be 0. But if both the inputs are at the high voltage, then the diodes will be reverse biased, no current flows, and the output will be equal to VCC; so the output will be 1. You see, this is the truth table of an AND gate. So, you can realize an AND gate like this.
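The argument above can be captured in a small behavioral sketch. This is only a first-order model: the 5 V supply, the 0.7 V diode drop and the structure of the function are illustrative assumptions, not exact device values.

```python
VCC = 5.0        # positive supply voltage (assumed)
V_DIODE = 0.7    # forward diode drop (assumed)

def dtl_and_output(v_a, v_b):
    # If any input is low enough, its diode conducts and the output sits at
    # that input voltage plus one diode drop; otherwise both diodes are
    # reverse biased and the resistor pulls the output up to VCC.
    v_low = min(v_a, v_b)
    if v_low < VCC - V_DIODE:
        return v_low + V_DIODE
    return VCC

# Grounded inputs give ~0.7 V (read as logic 0); both inputs high give VCC (logic 1).
print(dtl_and_output(0.0, 0.0))  # 0.7
print(dtl_and_output(0.0, 5.0))  # 0.7
print(dtl_and_output(5.0, 5.0))  # 5.0
```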

(Refer Slide Time: 05:26)

And one more thing: if you just reverse the direction of these diodes and connect them like this, then it becomes an OR gate. For an OR gate, if at least one of the inputs is 1, then one of the diodes will be forward biased, there will be a high voltage at the output node, and the output will be high; but if both of them are 0, the output node stays at the low voltage and the output will be 0. So, this you can design like this.

And to build a NAND gate, for example: you have already seen how to build an AND gate, and this AND stage followed by a transistor circuit like this, which realizes a NOT, gives AND followed by NOT, that is, a NAND. See how this transistor works. If this voltage is low, 0, the transistor will be off, no current will be flowing, and the output will be almost equal to VCC, that is, high; but if it is 1, then the transistor will be conducting, there will be a current flowing, and the output will be pulled almost to ground, that is, 0. So, this is a NOT operation: if the input is 1 the output is 0, and if it is 0 the output is 1. So, this is a NAND. In diode transistor logic, diodes and transistors were used like this to build circuits.

(Refer Slide Time: 06:58)

Then came transistor transistor logic, where the diodes were replaced by transistors, which made the gates switch faster. I am not going into the details of the explanation. This is the circuit diagram of a 2 input NOR gate, and this is the circuit diagram of a 2 input NAND gate. Take the NAND gate: whenever at least one of the inputs is at 0, the corresponding emitter of this multi-emitter transistor will be conducting. This forces this transistor to be off, which in turn forces this other transistor to be conducting, and VCC comes to the output; the output becomes 1.

So, this is roughly how it used to work; I am not going into its detailed operation. I am just showing that in TTL quite a few resistances and bipolar transistors are used to realize the basic gates. TTL was used for SSI- and MSI-level logic at one point in time, but today you do not see these kinds of circuits anymore.

(Refer Slide Time: 08:16)

Today, as I had said, most of the circuits we see are based on CMOS technology, and CMOS technology is designed on the premise of a switch, an electronic switch, closing and opening. Let us try to understand the concept. So called switch based circuits rely on the operation of miniature, tiny switches. A switch can be in 2 states: either open or closed.

So, you know how switches work. Equivalently, I am showing a bulb connected in a circuit with a power supply and a switch. If this switch is open, no current flows and the bulb does not glow; but if the switch is closed, current will flow and the bulb will glow. So, you see, in this kind of switch based circuit there are two distinct states, the switch-open state and the switch-closed state. In one state current is not flowing and the light is off; in the other state current is flowing and the light is on.

(Refer Slide Time: 09:48)

So, from here let us now look at some relevant issues when we are trying to design gates. Suppose we are trying to design a (Refer Time: 10:08), let us say an AND gate. We are applying some inputs, and inputs are typically in the form of voltages; but in the digital domain, in binary, we say each input is a 0 or a 1. And the output will similarly be a voltage that we interpret as a 0 or a 1.

Now, when I say 0 or 1, what does that mean? 0 can mean a low voltage and 1 can mean a high voltage. But you may well ask: low and high are very subjective terms, how low or how high? How low should the voltage be so that the gate treats the input as 0, and how high should the voltage be so that the gate treats the input as 1?

These are some issues that need to be talked about. So, we are talking about something called voltage ranges and noise margins. As I just said with this example, a range of voltages is treated as logic 0 while some other range is treated as logic 1. The exact ranges of voltages will of course depend on the logic family you are using, and for reliable operation there should be sufficient margin between the 2 levels, because electronic circuits are never perfect; there will be small variations in their operation and performance.

So, if your logic 0 and logic 1 are sufficiently spaced, with a lot of gap in between, then even in the presence of small variations in performance and parameters, the circuits will continue to work correctly. Here I am showing an example that pertains to the TTL family, transistor transistor logic. Here the digital values 0 and 1 are represented by the voltage ranges 0 to 0.8 volts, and 2 volts and above, respectively.

These are the ranges of voltages representing 0 and 1, and whatever is in between, greater than 0.8 volts and less than 2.0 volts, is considered an invalid voltage range; this gap gives the so called noise margin. Because of some external noise a 0 should not become a 1, or a 1 become a 0; there should be sufficient gap in between, and this noise margin ensures that.
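The TTL ranges just described can be written down as a small classifier. A sketch in Python (the function name and the return labels are my choices; the 0.8 V and 2.0 V thresholds are the TTL input ranges from the slide):

```python
def ttl_logic_level(volts):
    # TTL input voltage ranges: 0 to 0.8 V reads as logic 0,
    # 2.0 V and above reads as logic 1, anything between is invalid.
    if 0.0 <= volts <= 0.8:
        return 0
    if volts >= 2.0:
        return 1
    return "invalid"   # inside the noise margin: no defined logic value

print(ttl_logic_level(0.4))  # 0
print(ttl_logic_level(3.3))  # 1
print(ttl_logic_level(1.5))  # invalid
```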

(Refer Slide Time: 13:02)

Let us now move slowly towards CMOS, complementary MOS. In CMOS we have 2 types of transistors, n type and p type. Without going into the details, just look at the functionality. When you talk about an n type metal oxide semiconductor or MOS transistor, the schematic diagram is like this; this is how the transistor looks. There are three terminals: the gate, and the other two terminals called the source and the drain. When we apply a positive voltage on the gate, the switch is closed, as shown in this diagram; this is the equivalent circuit, and between points 1 and 2 there can be a current flowing.

But if the gate is at 0 voltage, the second scenario, the transistor becomes non-conducting and there will be no current between 1 and 2; equivalently, I can say that the switch has become open. So, for an n type MOS transistor, when we apply 1 on the gate, 1 meaning a high voltage, the switch is closed; if we apply 0, the switch is open. But for a p type transistor the situation is exactly the reverse.

(Refer Slide Time: 14:30)

A p type transistor is represented like this; there is a small bubble here, which indicates p type. Here the positive voltage can be anything; I am showing it as +2.9 volts, but it can be higher also, 5 volts say. So, here if the gate is at the low voltage, 0, then it is conducting, but if the gate is high, then it is non-conducting.

So, the behavior of the switch is just the reverse: if the gate has a positive voltage the switch is open, and if the gate has 0 voltage, as in the first case, the switch is closed. So, the p type and n type transistors are opposite with respect to their switching behavior.
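This complementary switching behavior is the only property we need for the gate-level discussion that follows. A minimal switch-level sketch in Python (pure abstraction; no electrical characteristics are modeled):

```python
def nmos_on(gate):
    # An n-type MOS switch conducts when its gate is at 1 (high voltage).
    return gate == 1

def pmos_on(gate):
    # A p-type MOS switch conducts when its gate is at 0 (low voltage).
    return gate == 0

for g in (0, 1):
    print(f"gate={g}: nMOS {'closed' if nmos_on(g) else 'open'}, "
          f"pMOS {'closed' if pmos_on(g) else 'open'}")
```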

(Refer Slide Time: 15:26)

Now, let us come to CMOS circuits. How is a CMOS gate built? Let us start with the simplest circuit: we show a CMOS NOT gate. Here the input is “In” in the diagram and the output is “Out”. The truth table will be that of a NOT gate: 0 becomes 1 and 1 becomes 0. Let us see how this circuit works.

You see there are 2 transistors connected: one is an n type transistor, the other is a p type transistor. Let us call them T1 and T2. Suppose my input voltage is 0, that is, the low voltage. For input voltage 0, as we have seen, the n type transistor will be off, its switch open, and the p type transistor will be on. So, T1 will be on and T2 will be off. What does this mean? T2 off means there is no connection from “Out” to ground; but T1 on means plus V is connected to the output, which means the output will be at the high voltage. That is why the output is 1.

But if “In” is 1, the reverse will happen: T1 will become off and T2 will become on. Because T2 is on, “Out” will now be connected to ground; it will be at the low voltage, so it will be 0. It works as a NOT gate. This is how a CMOS NOT gate works. Other kinds of gates are also fairly easy to build; let us take a NAND gate.
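Before moving to the NAND gate, the inverter argument can be replayed as a switch-level sketch in Python (T1 is the p-type pull-up and T2 the n-type pull-down, as in the lecture; the modeling style is mine):

```python
def cmos_not(inp):
    # p-type pull-up T1 conducts when the input is 0,
    # n-type pull-down T2 conducts when the input is 1.
    t1_on = (inp == 0)
    t2_on = (inp == 1)
    if t1_on:          # path from +V to Out -> output high
        return 1
    if t2_on:          # path from Out to ground -> output low
        return 0

print(cmos_not(0), cmos_not(1))  # 1 0
```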

(Refer Slide Time: 17:34)

A NAND gate will require 4 transistors, as we show in this diagram: 2 n type transistors here and 2 p type transistors here. This is the truth table. Let us consider the last row first: when both the inputs are 1, what will happen?

Both these pull-down transistors will be on, and these will be off. Because the pull-down transistors are on, there will be a path from the output to ground, so the output will become 0. But if you consider any of the other scenarios, 0 0, 0 1 or 1 0, at least one of these 2 transistors will be off, let us say B.

(Refer Slide Time: 18:29)

So, one of them will be off and at least one of these 2 pull-up transistors will be on. Because at least one of A and B is 0, one of the series transistors is off, and from the output to ground there will be no path. But because at least one of these 2 pull-up transistors is on, from plus V to “Out” there is a path. So, the output will be high; the output will become 1.

This is your NAND function; this is how the NAND gate works in CMOS. The NOR gate operation is similar, just the structure of the circuit will be a little different. It will look like this: the p type transistors come in series and the n type transistors come in parallel.

(Refer Slide Time: 19:30)

So, here again, see what happens when both the inputs are 0. Input 0 means both these n type transistors are off, and both these p type transistors are on. So, from the output to ground there is no path, but from plus V to the output there is a path; the output will be 1.

But for any other condition, you can check that at least one of these two n type transistors will be on, because at least one of the inputs is 1; so there will be a path from the output to ground and the output will be low. And since both the series p type transistors are not on, there is no path from plus V to the output. So, this is the NOR operation, and it is easy to verify that it works. Here I am treating these transistors as just switches; I am not looking into the other electronic characteristics of the MOS transistors, only how the circuit functions as a gate.
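The same switch-level view gives both gates directly: NAND has the n-type pull-downs in series and the p-type pull-ups in parallel, and NOR the reverse. A Python sketch under that abstraction (the series/parallel tests mirror the path argument above):

```python
def cmos_nand(a, b):
    pull_down = (a == 1) and (b == 1)   # series nMOS: path from Out to ground
    pull_up   = (a == 0) or  (b == 0)   # parallel pMOS: path from +V to Out
    return 0 if pull_down else (1 if pull_up else None)

def cmos_nor(a, b):
    pull_down = (a == 1) or  (b == 1)   # parallel nMOS
    pull_up   = (a == 0) and (b == 0)   # series pMOS
    return 0 if pull_down else (1 if pull_up else None)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, cmos_nand(a, b), cmos_nor(a, b))
```

Note that for any input combination exactly one of the two networks conducts, which is the defining feature of a static CMOS gate.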

(Refer Slide Time: 20:29)

Now, when you want other types of gates: well, in CMOS you cannot directly implement an AND gate. What you have to do is first implement a NAND gate and follow it by a NOT gate. Remember, in CMOS it is much easier to design a NAND gate and more difficult to design an AND gate. And you see, this is an AND: NAND means AND followed by NOT, so NAND followed by another NOT gives two NOTs one after the other, which cancel each other; it becomes AND. So, a CMOS AND gate will be one NAND gate followed by a NOT gate.

(Refer Slide Time: 21:27)

Similarly for OR: an OR gate will be a NOR gate followed by a NOT gate. Now, this concept can be extended to any number of (Refer Time: 21:48) inputs. You can see that instead of the 2 transistors C and D you can connect many of them in parallel, and instead of C and D in the other network you can connect many of them in series. So, you can have a multi-input NOR gate, and similarly a multi-input NAND gate.

So, here I am not going into the details of CMOS design, but just giving you some idea of how you can build gates in CMOS. The advantage of CMOS is that in modern day technology you can manufacture CMOS transistors in very small dimensions, because of which you can pack so many gates into a chip today; you can pack close to a billion transistors and gates in a chip. So, CMOS gives you that big advantage. And in a CMOS design you do not need any resistances or other components like diodes; these are not required, which matters because resistances typically require much more area to fabricate in a chip. Here you need only the transistors.

So, with this we come to the end of this lecture, where we had a look at some of the logic families. In the next lecture we shall continue a little more on CMOS, and then we shall be talking about a couple of unconventional technologies. These technologies are coming up; they can also be used to design digital circuits and gates, and we shall be looking at those things.

Thank you.

Switching Circuits and Logic Design
Prof. Indranil Sengupta
Department of Computer Science and Engineering
Indian Institute of Technology, Kharagpur

Lecture - 09
Emerging Technologies (Part 1)

If you recall, in the last lecture we were talking about some of the technologies using which you can implement gates, like diode transistor logic, transistor-transistor logic and of course CMOS. In the present lecture we shall be continuing our discussion on CMOS, and after that we shall be discussing a few of the emerging technologies. Emerging technologies are something slightly unconventional, but there are a lot of opportunities for these technologies in developing future systems. So, I feel it will be interesting for all of us to have some idea about some of these so called emerging technologies.

So, the title of the present lecture is Emerging Technologies, Part 1.

(Refer Slide Time: 01:15)

But before we move on to these emerging technologies, we continue a little bit on CMOS. Here we are trying to develop some kind of complex switches, and these complex switches are called multiplexers. We shall be coming back to multiplexers in much more detail later, but what is a multiplexer like? Here we have a simple example, called a 2 to 1 multiplexer. 2 to 1 means there are 2 inputs, A and B, and there is one output, Z.

The way a multiplexer functions is that one of the inputs will be copied to the output. Which input depends on the so called select line. Here S0 is the select line: if S0 is equal to 0, then the value of A will be copied to Z; however, if S0 is 1, then the value of B will be copied to Z. So, you can say that this is like a multi way switch, with S0 as the control of the switch: either I connect A to Z or I connect B to Z. This is the basic idea. Now, this concept can be extended; for example, we can have a 4 to 1 multiplexer, where we have four inputs, let us say A, B, C and D, and an output Z. This is like a four way switch: I am connecting one of the four inputs A, B, C and D to the output.

Now, to select one of four things, I clearly require 2 select lines, because with one select line I have only the combinations 0 and 1, two possibilities, so I can select one of two. But with 2 select lines I have four possibilities: 0 0, 0 1, 1 0 and 1 1. So, if S1 S0 is 0 0 then A is selected; if it is 0 1, that is S1 is 0 and S0 is 1, then B is selected; if S1 is 1 and S0 is 0, then C is selected; and when both of them are 1, D is selected. There are many applications where these kinds of multi way switches find a lot of interesting uses. So, first let us see how we can realize these multiplexers, or multi way switches, using CMOS transistors.
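Behaviorally, the 2-to-1 and 4-to-1 multiplexers just described are simple selection functions. A Python sketch (the argument order and names are mine):

```python
def mux2(a, b, s0):
    # 2-to-1 multiplexer: S0 = 0 selects A, S0 = 1 selects B.
    return a if s0 == 0 else b

def mux4(a, b, c, d, s1, s0):
    # 4-to-1 multiplexer: (S1, S0) = 00 -> A, 01 -> B, 10 -> C, 11 -> D.
    return (a, b, c, d)[2 * s1 + s0]

print(mux2(7, 9, s0=1))              # 9
print(mux4(1, 2, 3, 4, s1=1, s0=0))  # 3
```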

(Refer Slide Time: 04:31)

So, this is what we are trying to look at: how to build multiplexers using basic CMOS switches or CMOS transistors. Now, the idea behind it is this: from the various inputs, say A, B, C and D, there will be multiple paths connected to the output, let us say Z. Using some mechanism we will be selecting exactly one of the paths. Say we select the second path; then B will be connected to Z, and A, C and D will be disconnected. So, among these parallel paths we have to ensure that somehow exactly one path gets selected, and once we can do this our multiplexer is implemented. Let us see how this is done.

(Refer Slide Time: 05:37)

So, we require something called a transmission gate. Let us try to understand what a transmission gate is. We have talked about nMOS transistors and pMOS transistors: this is an nMOS transistor, and this is a pMOS transistor. For an nMOS transistor the switch is closed when you apply a 1 on the gate, the control input, and for a pMOS transistor it is closed when you apply a 0 on the gate. Now, a property of these transistors is that if you take an nMOS transistor and apply a low voltage, close to 0 volts, then at the output you get practically 0 volts, with no voltage degradation.

But if you apply a higher voltage, let us say 5 volts, then there is a drop across this transistor, called the threshold voltage, so at the output you will be getting something like 4.2 volts; there will be a 0.8 volt drop. For a pMOS transistor, on the other hand, if I apply a 0 when the switch is closed, there will be a drop and the output will be around 0.8 volts or so; but if I apply a high voltage, it will be transmitted without any drop. So, you see the properties are complementary.

So, an n type transistor can transmit a low voltage very well, and a p type transistor can transmit a high voltage very well. So, if I connect two such transistors in parallel, then I can transmit both high and low voltages equally well. This is the idea behind the transmission gate.

(Refer Slide Time: 07:38)

This is how a transmission gate looks. Just as I mentioned, there is an n type transistor and a p type transistor connected in parallel, and the gates of these transistors are driven together: I connect S here and the NOT of S there. S bar is the NOT of S: if I have a NOT gate with input S, the output will be S bar. So, if S equals 0, then both these transistors will be off, and the switch is off. If S is 1, then both the n type and p type transistors will be on, and so x and f will be connected.

Symbolically we represent it like this; this is the symbol of the so called transmission gate, where x is the input, f is the output, and S and S bar are the select lines. You can see that on one side there is no bubble, which means the n type transistor conducts when S equals 1, and the bubble is for the p type transistor, which conducts when S bar is 0.

(Refer Slide Time: 09:03)

Now, using these transmission gates you can very easily implement multiplexers. Let us look at a simple 2 to 1 multiplexer; this is the connection. Now let us see how it works.

There are 2 transmission gates here, one here and one here. On one side of the transmission gates I have the 2 input signals, x1 and x2, and on the other side I have connected them together; this is my output f. Now, I have driven these 2 transmission gates from a select line S as follows: S is connected directly here, and it is also connected to the inverting input there; and I have a NOT gate, so on this side I have S bar, the NOT of S. S bar I connect the reverse way, to the non-inverting part of this gate and to the inverting part of that one.

So, what will happen? If S is 0, which of the transmission gates will be on? S is 0 means this gate input is 0, so this p type transistor will be conducting; and since S is 0, S bar will be 1, so this n type transistor is also conducting. This transmission gate will be on, and x1 will be connected to f. But if S equals 1, the reverse will happen: a 1 will be connected here, so this gate is off; but the 1 will come here and the 0 there, so this other gate will be on.

So, x2 will be connected to f. In this way I can have a 2 to 1 multiplexer very conveniently.
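The two-transmission-gate construction can also be sketched at switch level. In this Python model a transmission gate is just a controlled switch that passes its input when enabled (an idealization; real transmission gates are analog, bidirectional devices):

```python
def transmission_gate(x, enable):
    # Passes x when enable is 1; otherwise the path is open (None).
    return x if enable == 1 else None

def mux2_tg(x1, x2, s):
    s_bar = 1 - s                            # the inverter on the select line
    paths = [transmission_gate(x1, s_bar),   # TG enabled when S = 0
             transmission_gate(x2, s)]       # TG enabled when S = 1
    # Exactly one TG conducts; the shared output node f takes its value.
    conducting = [p for p in paths if p is not None]
    assert len(conducting) == 1
    return conducting[0]

print(mux2_tg(x1=5, x2=8, s=0))  # 5
print(mux2_tg(x1=5, x2=8, s=1))  # 8
```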

(Refer Slide Time: 11:09)

So, using transmission gates it is very easy to do this. You can have a 4 to 1 multiplexer in the same way; I am not showing the complete diagram, just giving you an idea. Suppose my four inputs are x0, x1, x2 and x3. On each of these lines I connect 2 such switches, 2 such transmission gates; I show each as a circle like this, and I connect the outputs together, let us call it Z, this is the output.

Now, there are 2 select lines, let us say S1 and S0. The way I connect them is that this first switch is enabled by S0 bar, which means it is selected when S0 is 0, and the other by S1 bar; so when both are 0, this first path is selected. On the second row the controls are S1 bar and S0; on the third row S1 and S0 bar; and on the last row both S1 and S0. So, depending on the values of S1 and S0, you can easily see that exactly one of the paths will be selected and conducting: both transmission gates on that path will be on, and the corresponding input will move to the output. This is how multiplexers can be conveniently implemented using CMOS transmission gates.

Now, later on we shall see some alternate ways of implementing multiplexers. We shall see that using the gates we talked about, the AND gate, OR gate, NOT gate, NAND and NOR, we can design and implement not only multiplexers, but any kind of circuit or function that we want.

(Refer Slide Time: 13:09)

This we will see later. Now let us talk about one kind of emerging technology; here we are talking about all optical implementation. So, what is the basic idea? You have heard that optical fibres are being used for communication; a lot of optical fibres have been laid under the ocean, connecting countries and continents. They can communicate very fast; they have very high communication speed, and so on.

Now, here we are asking: can we exploit photons and optics for carrying out some basic gate operations? If so, then we can also implement circuits by manipulating this kind of light, or photons. As a matter of convention, let us say that if there is light we call it logic 1, and if there is no light we call it logic 0. Suppose I have a torch in my hand and I send you a signal: on, off, on, on, off, off, on, on, off. You can read out the signal: on means 1, off means 0. This is one way of communicating.

So, the idea behind optical communication is very similar: we send digital information using light, its presence or absence. So, let us see. Here we are saying that all computations are carried out using light, and for that we require various kinds of optical or photonic devices. I am not going into the details, just the very basic idea. There are devices called interferometers; beam splitters, where one optical beam is divided into 2 or more beams; and optical couplers, the reverse, where 2 optical beams are combined into a single beam. And as I said, as a matter of convention, the presence of light will denote logic 1, while the absence of light will denote logic 0.

There are several approaches that have been explored; we shall very briefly talk about one such technology, which is based on something called the Mach-Zehnder Interferometer. Let us see what a Mach-Zehnder Interferometer looks like and how we can implement some functions out of it.

(Refer Slide Time: 15:58)

Here we have a very high level schematic diagram of a Mach-Zehnder Interferometer. As you can see, it is a device which works on the basis of the relative phase shift of 2 beams of light. In this diagram, on the left, there are 2 paths shown: one is via here, the other is via here.

So, an incoming optical beam gets split into 2 parts that follow 2 different paths, and between these 2 paths there can be some phase shift. On the other side there is a coupler. Depending on the phase shift between the 2 signals, the coupler produces either constructive interference or destructive interference. Constructive means the intensity of the light will increase; destructive means the beams will cancel out. So, depending on whether the 2 beams arrive in phase or out of phase, at the output we shall get either a strong light or no light; if it is destructive interference, the 2 beams cancel out and we get no light at the output. This is the basic idea.

The figure on the right hand side shows the same diagram in a planar form. This, you can say, is your input, where you send a beam; this is a kind of splitter, where the beam is split into 2 paths, one flowing here and one flowing here. There is also a second input; depending on it, the phase shift is determined. On the other side there is again a kind of coupler, which couples the 2 signals in 2 different ways and generates 2 outputs, output 1 and output 2.

Now, this kind of interferometer can be fabricated on silicon, just like CMOS gates. Now
without going into the details of the optics, let us see functionally how this behaves.

(Refer Slide Time: 18:17)

Functionally it behaves like this; I am showing a schematic diagram. This is my interferometer, this is my incoming signal and this is the control signal; let us call them A and B. Here I have a coupler or a beam splitter, and here I have another coupler, C2; these are the two different paths. One beam will follow this path and one beam will follow that path, and depending on the control signal the phase difference between these 2 beams will be determined. The way the Mach-Zehnder Interferometer works is that there are 2 outputs, traditionally called the bar port and the cross port. In terms of logic, the bar port implements AND, that is A AND B (AB), which means that only if there is light on input A and also light on input B, that is, A equal to 1 and B equal to 1, will this output be 1; that means some light will come out.

But on the cross port it is A AND B bar (AB'), which means that only if A is 1, that is, there is light on A, but there is no light on B, will this output be 1. So, you can see these 2 outputs are generated. Logically speaking, we can say that a Mach-Zehnder Interferometer with inputs A and B implements the pair of outputs AB and AB'. This is what a Mach-Zehnder Interferometer is.
is.

(Refer Slide Time: 20:19)

So, if I treat this as a black box, there are 2 inputs, A and B, and there are 2 outputs, one of which is A AND B (AB) and the other A AND B bar (AB'). See, on the first output, when A and B are applied we get A AND B; that means we can implement the AND function. Let us also assume that I apply a constant 1 on the input A, that is, there is constant light here; then what will be the outputs? The first one will be B, and the second one, since A is 1 (we shall talk about these operations in the next lecture), will be B bar (B').

Because A is always 1, the cross port 1 AND B' is simply B bar (B'). So, we can also
implement the NOT function, NOT of B. And we shall see later that this set {AND, NOT} forms
something called a functionally complete set, which means that if I can implement AND and
NOT, I can implement any function I want. This implies that the Mach-Zehnder Interferometer
can implement a functionally complete set, and hence we can realize any circuit
functionality; this is the basic idea behind the Mach-Zehnder Interferometer. This is one of
the all-optical technologies that I have talked about, where the input data as well as the
outputs are represented as light.
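Putting these observations together, here is a sketch of how the two MZI primitives (AB with data on both inputs, and B' with a constant light on input A) compose into other gates, for instance OR via De Morgan's law; the helper names are illustrative, not standard:

```python
def mzi(a, b):
    return a & b, a & (1 - b)       # (bar port AB, cross port AB')

def AND(a, b):
    return mzi(a, b)[0]             # bar port gives A AND B

def NOT(b):
    return mzi(1, b)[1]             # constant 1 on A: cross port is B'

def OR(a, b):
    return NOT(AND(NOT(a), NOT(b))) # De Morgan: A + B = (A'.B')'

print([OR(a, b) for a in (0, 1) for b in (0, 1)])   # [0, 1, 1, 1]
```

Any other gate can be built the same way once AND and NOT are available, which is exactly what functional completeness means.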

If there is light I say it is logic 1; if there is no light, that is logic 0. So, this is a
futuristic technology, because optics, or light, is already used for long-distance
communication using optical fibres. Even inside a VLSI chip there are high-speed
interconnects that are implemented using optical technologies today. So, this can be the next
step forward: we can also implement some logic circuits, some functions, using all-optical
technologies.

(Refer Slide Time: 23:07)

So, some of the advantages here: because light is very fast, you can have high-speed
computation. Power consumption will be low, because everything depends only on the flow of
light; no other circuits are required. And because we are already doing communication
optically, electrical-to-optical and optical-to-electrical conversions are not required if
you can do everything in the optical domain.

Because, you see, normally you have a processor doing some calculations, on the other side
there can be another processor doing some calculations, and your communication medium can be
all optical. So, here you need to convert electrical signals to optical, and on the other
side you need to convert optical back to electrical. What this says is that if the processing
can also be done in the all-optical domain, then these conversions will not be required.

But things are not that simple; there are a lot of technological challenges still remaining.
Till today these MZI switches are relatively large in size as compared to the MOS switches we
use in our circuits, and connecting many such switches is also not so easy, because the
intensity of the light tends to fade away or decrease as we connect more of them in cascade,
one after the other. And there is the associated circuitry: the switches themselves do not
consume much power, but the circuits that may be required to drive the switches can still
consume significant power. So, these are some of the drawbacks; once these drawbacks are
addressed by researchers, possibly we can have a feasible technology for implementing logic.

So, with this we come to the end of this lecture, where we talked about, firstly, how we can
use CMOS transistors to build something called transmission gates and implement multi-way
switches or multiplexers using such transmission gates; then we talked about one of the
emerging technologies, the all-optical way to implement logic functions, and we discussed one
such method using the Mach-Zehnder Interferometer. In the next lecture we shall be talking
about another such emerging technology, called memristors.

Thank you.

Switching Circuits and Logic Design
Prof. Indranil Sengupta
Department of Computer Science and Engineering
Indian Institute of Technology, Kharagpur

Lecture - 10
Emerging Technologies (Part 2)

So, we continue with our discussion on some Emerging Technologies for implementing logic
gates. So, we talked about all optical implementations in the last lecture.

(Refer Slide Time: 00:33)

In this lecture, which is the second part on emerging technologies, we shall be talking about
a very interesting technology called memristors. Memristor is actually a combination of the 2
terms memory and resistor; let us try to understand what a memristor is.

Well, the existence of the memristor was first predicted in 1971 by Professor Leon Chua, who
considered this device as the fourth fundamental passive circuit element. Now, you know the
fundamental circuit elements: resistor, capacitor and inductor. And if you look at the
relationships: voltage and current are related by v = Ri, and charge and voltage are related
by q = Cv; these are well-known equations we have studied in school. Flux and current are
related by φ = Li, and the cross relationships are also there: dφ/dt is nothing but the
voltage, and dq/dt, the rate of change of charge, is the current.

v = R i
q = C v
φ = L i
dφ/dt = v (voltage)
dq/dt = i (current, the rate of change of charge)

Now, you see, we live in a world where symmetry is seen everywhere. So, Chua's first argument
was that something is missing, there is a missing link; he then theoretically postulated that
there should be another fundamental circuit element which should connect flux and charge. So,
φ = Mq was the relationship that this new device should exhibit, and to this new device he
gave the name memristor. But setting the theory aside, essentially speaking a memristor is a
device whose resistance changes in response to an applied voltage; not only that, it can
memorize the last resistance value even when we switch off the power, when we withdraw the
voltage. This is a very interesting property; let us see.

(Refer Slide Time: 03:03)

Very briefly speaking: Leon Chua predicted the existence of the memristor in 1971, but it was
much later, in 2008, about 37 years later, that a team from HP Labs, USA, fabricated a
device. They did not know what the device was, but they saw that it was exhibiting some very
peculiar properties; later on they looked at the paper by Chua and saw that the properties
very closely matched what Chua had predicted. So, accidentally, they had fabricated a
memristor. So, how did they fabricate the device?

They used materials as shown in the figure: they created a device where on one side there was
a region of pure titanium dioxide, and on the other side there was also an oxide of titanium,
but with some oxygen vacancies. In TiO2 the number of oxygen atoms is double that of
titanium, but here it is TiO2-x; one such oxide is Ti4O7. Something like this was put here,
and on both sides, for the connections, they used platinum. The size of the device is very
small: the cross-sectional area was as small as 9 square nanometers, and the length was 10 to
12 nanometers.

Now, symbolically they represented this device like this; this symbol was actually proposed
by Chua. The side where there is the oxygen vacancy corresponds to the marked end of the
symbol; this is the symbol of a memristor. So, the idea was that if I apply a positive
voltage on this side and a negative voltage on the other side, then what will happen? The
boundary will be moving to the right, because oxygen vacancies are positively charged: they
will be repelled by the positive terminal and move right. And since this is a region of low
resistance and this is a region of high resistance, the low-resistance region will be
expanding and the overall resistance will decrease.

But if I apply a reverse voltage, then the boundary will be moving in the opposite direction:
the vacancy-rich oxide region will be shrinking and the titanium dioxide region will be
expanding, so the overall resistance will be increasing. So, just by applying a voltage of a
suitable polarity, I can either reduce the resistance or increase the resistance. Now,
another interesting property is that these oxygen vacancies have a very unique behaviour: if
you withdraw the voltage, they will remain in the position they were in at the last point in
time.

So, they will not change, which means the change in resistance that has taken place is
non-volatile in nature. What is meant by non-volatile? A change is said to be non-volatile if
it remains even when I have withdrawn the voltage or the power supply. Here it is something
like that: I can change the resistance by applying a suitable voltage, and then, if I
withdraw the voltage, the last resistance value will remain; the device will be remembering
the resistance. So, you can understand that I can implement a very simple memory device
using this; it can memorize the resistance, right.

(Refer Slide Time: 07:30)

So, very briefly, let us talk about the characteristics of such a device; this has been quite
widely studied by many researchers. The voltage-current characteristic exhibits this kind of
behaviour; it is called a pinched hysteresis loop. There are 2 regions, you can see, one here
and one here.

So, this can be approximated by curves like these: one branch with a low resistance R_ON (on
the voltage-current plot, a higher slope means a low resistance) and one branch R_OFF with a
high resistance (a smaller slope means the current is smaller, so the resistance is higher).
So, there are 2 distinct resistive regions, low resistance and high resistance, and I can
switch from one resistance to the other by applying a suitable voltage. If I apply a voltage
V_open, or any voltage beyond that, the device will switch from R_ON to R_OFF; but if I apply
a negative voltage beyond V_close, it will switch from R_OFF to R_ON. So, I can switch from
low resistance to high resistance, or from high resistance to low resistance, by applying a
voltage of suitable polarity.
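This switching behaviour between the two resistive regions can be mimicked with a tiny stateful model; the resistance and threshold values below are purely illustrative, not taken from the slide:

```python
R_ON, R_OFF = 100.0, 100_000.0   # illustrative low / high resistances (ohms)
V_OPEN, V_CLOSE = 1.0, -1.0      # illustrative switching thresholds (volts)

class Memristor:
    def __init__(self, r=R_OFF):
        self.r = r               # the resistance is the remembered state

    def apply(self, v):
        if v >= V_OPEN:
            self.r = R_OFF       # positive pulse beyond V_open: R_ON -> R_OFF
        elif v <= V_CLOSE:
            self.r = R_ON        # negative pulse beyond V_close: R_OFF -> R_ON
        # voltages between the thresholds leave the state unchanged

m = Memristor()
m.apply(-1.5)    # switch to the low-resistance state
m.apply(0.2)     # small read voltage: the state is retained (non-volatile)
print(m.r)       # 100.0
```

The key point the model captures is that the resistance persists between applications of voltage, which is exactly the memory property discussed above.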

Now, this property that a memristor can remember the last resistance value makes it very
useful for implementing memory devices, storage; and we shall see that it can also be used
for implementing some logic gates, for logic design. So, we shall very briefly look at this
technology.

(Refer Slide Time: 09:30)

So, as we had already said, a memristor is very small in size, much smaller than the current
CMOS transistors.

Well, it consumes a little power only during operation, but when you are not operating you
can withdraw the power; so there is very low power consumption. It can remember the last
resistance: it is non-volatile. A crossbar structure means you can have a compact
implementation, with some rows and columns: these are 2 different rows, and you can connect a
memristor between a row and a column like this very conveniently; this is called a crossbar
structure. Just to name one of the applications: we can use this to implement non-volatile
memory systems; this is called resistive random access memory (RRAM).

We can model brains: there is also some work where neural networks, some neurons, were
implemented using memristors. And of course, we can implement gates, some digital circuits.
We are not going into the details of this because it is slightly beyond the scope of the
discussion here; I just wanted to name a few applications.

(Refer Slide Time: 11:08)

Now, let us briefly look at how we can implement logic gates using memristors, without going
into much detail. We shall be talking about 2 broad techniques; one is called memristor-aided
logic, in short MAGIC. Using MAGIC you can implement any of the gates AND, OR, NOT, NAND,
NOR. In this method, recall that for a conventional gate implemented using TTL or CMOS, the
inputs were applied as voltages and the output was also obtained as a voltage.

But for a circuit using memristors it is different; we call it a stateful design style, where
we apply the inputs not as voltages, but as the resistive states of the memristors. The
meaning is something like this: suppose I am implementing a gate with 2 inputs; there will be
2 input memristors. By applying a suitable voltage, I shall be initializing those memristors
to either the high-resistance or the low-resistance state. If it is high resistance I call it
logic 0; if it is low resistance I call it logic 1.

So, the inputs I am not applying as voltages, but as resistance values in the memristors.
This is called stateful logic, ok.

So, this is the convention, and in this stateful logic style, when you carry out a gate
operation you have to initialize another memristor where the output will be stored. We
initialize it to either 0 or 1 depending on the type of the gate, and some voltage has to be
applied on the input side to evaluate the gate; I will just show you how this circuit looks.

(Refer Slide Time: 13:21)

Here I have just taken the example of a NOR gate; this is an n-input NOR gate, with n inputs
and one output. Now, you can very easily implement it using memristors like this: there are n
memristors x1, x2, ..., xn which represent the inputs, and one memristor representing the
output. As mentioned on the previous slide, for a NOR gate this output memristor must be
initialized to the 1 state, which means ON; and depending on the inputs, we will have to
initialize the input memristors.

Suppose the inputs are all zeros; because it is a NOR gate, when the inputs are all zeros the
output has to be 1. The inputs being all zeros means these are all high resistances. So, V0
is connected at this point, but because these are high resistances, very little current will
be flowing; they are all connected on this side, this point is grounded, and so the common
node will be approximately at ground potential. So, through the output memristor there will
be no current flowing, and as it was initialized to 1, it will remain at 1.

But let us say at least one of the inputs is 1, say the last input xn; for a NOR gate the
output should then be 0. Let us see how that happens. If xn is ON, V0 will be connected
through this low resistance, so the common node will no longer be at ground; it will now be
approximately at V0. And this V0 will be applied across the output memristor: this side is at
ground and this node is at V0, so you are applying the higher voltage on one side and the
lower voltage on the Ti4O7 side. This memristor will thus be reverse biased, and the output
will change to 0, that is, to OFF. So, this is how it works.

Now, you can also connect it in a different way, where of course the connections will be
different; I have just shown this one for your convenience. This is one way you can implement
some gates using memristors, just to give you an idea.
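The NOR evaluation just described can be captured behaviourally as follows; this is a logical sketch only, ignoring the actual analogue voltage levels and device physics:

```python
def magic_nor(inputs):
    """Behavioural sketch of an n-input MAGIC NOR gate.
    Each input is the resistive state of an input memristor:
    1 = low resistance (ON), 0 = high resistance (OFF)."""
    out = 1                      # output memristor initialised to ON (logic 1)
    if any(x == 1 for x in inputs):
        # a low-resistance input pulls the common node up to V0,
        # reverse-biasing the output memristor, which switches OFF
        out = 0
    return out

print(magic_nor([0, 0, 0]), magic_nor([0, 0, 1]))   # 1 0
```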

(Refer Slide Time: 16:09)

And there is another kind of logic design style which is also quite widely explored; this is
based on the IMPLY function. The IMPLY function is defined like this: you take the NOT of a
variable A and then OR it with B, that is, A IMPLY B = A' + B. Using memristors you can very
easily implement the IMPLY function as well. Now, it can be shown that the IMPLY function
along with the constant 0 is again functionally complete. For example, if B = 0, then
A IMPLY 0 = A' + 0 = A'; that means you can implement the NOT operation.

So, since you can implement the NOT operation, you can first complement the input A and then
compute A' IMPLY B; this will become A OR B, which is the OR operation. So, you can implement
a NOT and you can implement an OR. This again is a functionally complete set, as we shall see
later, so you can realize any arbitrary function using IMPLY as well. Here I am not going
into detail; I just want to give you the basic idea that the memristor is a very promising
technology and you can use it for designing logic also.
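The functional-completeness argument can be checked in a few lines; imply, NOT and OR below are illustrative names:

```python
def imply(a, b):
    return (1 - a) | b          # A IMPLY B  =  A' + B

def NOT(a):
    return imply(a, 0)          # A IMPLY 0  =  A'

def OR(a, b):
    return imply(NOT(a), b)     # A' IMPLY B  =  A + B

print([imply(a, b) for a in (0, 1) for b in (0, 1)])   # [1, 1, 0, 1]
print([OR(a, b) for a in (0, 1) for b in (0, 1)])      # [0, 1, 1, 1]
```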

(Refer Slide Time: 17:55)

And the IMPLY gate can be realized like this, with 2 memristors and one resistor. The
convention is that the inputs are stored in A and B, and after the operation the output, that
is, A IMPLY B, will get stored in B.

So, the value in B will be overwritten. The voltages we apply here are V_COND and V_SET; I am
not going into the details. So, the basic concept is like this: you have a circuit comprising
2 memristors and one resistor. Whenever you want to carry out the logic operation for some
given values of A and B, let us say A = 1 and B = 0, you will have to initialize the states
of A and B appropriately.

Let us say I initialize A to 1 (ON) and B to 0 (OFF), and then I apply V_COND and V_SET. Here
A bar OR B (A' + B) is 0, so after the computation B will remain 0; this final value stored
in B is the output, because the output is also stored in a memristor. So, actually, this is
how the IMPLY gate works. So, in this lecture we have given a very brief idea, without going
into the details, of how this emerging memristor technology works. If some of you are
interested to know more about memristors, you can read the literature; if you do a Google
search, you will get a lot of material about memristors, and a lot of interesting
applications that people are talking about.

People are predicting that the first kind of application of memristors that you will see in
the market will be non-volatile memory devices, like the pen drives or USB drives that we use
today, but of much higher capacity and much faster as compared to the present-day USB drives.
So, let us hope that we get to see such devices in the market very soon, in the near future.

So, with this we come to the end of this lecture. From the next lecture onwards we shall be
starting our discussion on so-called switching functions and switching expressions: how we
represent such functions, how we manipulate them, how we minimize them. And later on we shall
see how we implement them using gates and various other digital circuit modules to realize
the desired functionality. So, we shall be starting this discussion from the next lecture.

Thank you.

Switching Circuits and Logic Design
Prof. Indranil Sengupta
Department of Computer Science and Engineering
Indian Institute of Technology, Kharagpur

Lecture - 11
Switching Algebra

In this lecture, we shall be starting our discussion on Switching Algebra and Switching
Expressions; so, the title of this lecture is Switching Algebra. Here we shall basically look
at how we can represent so-called switching expressions. Switching expressions are ways in
which you can express the functionality of a circuit in a mathematical way.

Now, once you can express it in a mathematical way, there will be a number of ways to
manipulate, minimize and optimize it, so that the final circuit implementation or realization
will be smaller or cheaper. So, let us look into this switching algebra.

(Refer Slide Time: 01:15)

We start with some basic definitions: what is a switching algebra? You see, an algebra is a
mathematical structure; I am not going into the formal definition of what an algebra is, but
I am mentioning the essential idea. Switching algebra is the mathematical formalism which we
use to express or represent the functionality of digital circuits and gates, with binary
inputs and outputs, 0 and 1. This is the basic formalism, expressing the behaviour in a
mathematical way, and this is what we refer to as switching algebra. So, what are the basic
components of a switching algebra? Let us see that first.
first.

First, the numbers that we are working on: 0 and 1. So, this algebra is defined on the set
{0, 1}; everything is either 0 or 1, nothing outside that. And there are 2 binary operations,
AND and OR, and one unary operation, NOT. So, what is meant by a binary operation? Well, an
operation is said to be binary if there are 2 inputs and 1 output. Take AND: we talked about
a 2-input AND gate. For the AND operation I have 2 inputs A and B, and I have the output
A AND B; this is a binary operation. Similarly, OR is also a binary operation, with output
A OR B. What do you think of NOT? In the case of NOT, there is a single input: NOT takes a
single input A and generates as output NOT of A. So, there is a single input, which is why it
is called unary; unary means 1. Fine.

(Refer Slide Time: 03:50)

So, how do we represent the AND operation? The AND operation is typically represented by a
dot (.) or by the wedge symbol (∧). Let me tell you, sometimes for convenience we can also
omit the dot: we can write A.B, which means AND of A and B, or sometimes we simply write AB
without the dot; the dot is implied, so if you write AB, it means the AND of A and B.
Similarly, we have the OR operator. AND is sometimes called the logical product; OR is
sometimes called the logical sum, and it is denoted by the symbol + or by the vee symbol (∨).

So, as I said, you can express OR as A + B; this + is the OR operator. Finally, the NOT
operation, which is also called complement, is denoted by a single quote ('): A' means NOT of
A. It is sometimes also expressed with a bar, Ā, and sometimes with a tilde, ~A; there are
various symbols used to express this operation, but they all represent the same thing, NOT,
whether it is the single quote, the bar, or the tilde before the variable. So, you see,
switching algebra consists of these few things: the numbers 0 and 1, the 2 binary operators
AND and OR, and the one unary operator NOT.

So, there is nothing outside this. The basic mathematical formalism says AND, OR and NOT are
the 3 fundamental operations. Well, we talked about other kinds of gates also; we talked
about NAND, NOR, exclusive-OR, but those are not fundamental gates, those are derived gates,
you can say. AND, OR and NOT are the basic ones, ok.
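As a tiny executable sketch (the helper names are my own), the three fundamental operations on the set {0, 1} can be written as:

```python
# The three fundamental operations of switching algebra on {0, 1}.
def AND(a, b):     # binary operation: 2 inputs, 1 output
    return a & b

def OR(a, b):      # binary operation
    return a | b

def NOT(a):        # unary operation: a single input
    return 1 - a

print(AND(1, 1), OR(0, 1), NOT(0))   # -> 1 1 1
```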

(Refer Slide Time: 06:27)

Now, we have talked about switching algebra; in any algebra there is the concept of
variables. So, when you learned algebra in school, you talked about variables, A + B, ABCD or
XYZ; you operate on the variables, right.

So, here also, whenever you define this kind of switching algebra, you deal with variables;
these are switching variables. Like, when I write something like A + B, this A and B denote
switching variables. The point to note is that, because we are talking about switching
algebra, these variables A and B can only take the 2 values 0 or 1, nothing outside that. And
a switching expression, as a general definition, is an expression consisting of switching
variables, constants and operators. What does that mean?

(Refer Slide Time: 07:37)

I can write an expression like this:

A.B + A.C' + (1.B')

You see, there are variables, there is a constant (1 is a constant), and there are the
operators AND, OR, NOT; this is an example of a switching expression.

Now, observe that in a switching expression you basically have nothing other than AND, OR and
NOT, because these are the 3 fundamental operators. Well, of course, you can use brackets or
parentheses, and you can also use the constants 0 and 1 in the expression if you require. So,
there can be any number of variables, there can be constants, and there are these 3
operators. The idea here is very simple: that is a switching expression.
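As an aside, such an expression can be evaluated directly once the variables are given values; the function name f below is just for illustration:

```python
def f(a, b, c):
    """Evaluate the switching expression A.B + A.C' + (1.B')."""
    return (a & b) | (a & (1 - c)) | (1 & (1 - b))

# every variable is 0 or 1, so the result is also 0 or 1
print(f(1, 0, 1))   # the 1.B' term makes this 1
```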

Now we shall be talking about a number of rules of switching algebra; but the point to note
first is: suppose I give you a switching identity, how do you prove it?

(Refer Slide Time: 08:59)

Say, for example, I give you the identity

A + A.B = A

Well, how do we say whether this is true or false, whether it is correct or wrong? Both sides
here are switching expressions, and this is an equality; so, how do we prove it? You see, to
prove this there can be multiple ways. The first one is a simpler method, but more tedious,
and right now we shall be talking about this one only: we prove it by verifying the
expression for all possible values of the input variables. This is called the truth table, or
perfect induction, method of verification.

See, we talked about truth tables earlier, when we looked at the different gates AND, OR,
NOT. We said that a truth table lists all possible input values and the expected output
value. Basically, for verification here, we construct the truth table: for the left-hand side
we calculate the output values, for the right-hand side we similarly calculate the output
values, and we see whether they match.

If they match, then the 2 functions are the same, the 2 expressions are the same; they are
verified, alright. And the alternative method, which we shall be learning and using a little
later, is algebraic manipulation: we try to prove the identity by manipulation, and for that
you need to know some basic laws, some basic rules, of switching algebra.
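The perfect induction method itself can be sketched as a small checker (an illustrative helper, not part of the lecture material): it simply enumerates all 2^n input combinations and compares the two sides.

```python
from itertools import product

def equivalent(lhs, rhs, n):
    """Perfect induction: compare two n-variable switching
    expressions on all 2**n input combinations."""
    return all(lhs(*v) == rhs(*v) for v in product((0, 1), repeat=n))

# verify the identity A + A.B = A
print(equivalent(lambda a, b: a | (a & b), lambda a, b: a, 2))   # True
```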

(Refer Slide Time: 10:59)

So, what we do first is look at some of these basic laws, and these basic laws we shall try
to verify using perfect induction. Later on we shall be applying these basic laws in the
algebraic manipulation method for verifying other expressions; let us proceed step by step.
And in the examples that we shall show, we assume that x, y, z denote switching variables.

(Refer Slide Time: 11:53)

(Refer Slide Time: 12:02)

Let us see the basic laws of switching algebra, one by one. The basic identities go like
this: x OR 1 is 1, x OR 0 is x, x AND 1 is x, and x AND 0 is 0. You see, I said that you can
prove them by the method of perfect induction; it is fairly easy. Let us look at the first
one. If we try to construct a truth table, here there is a single variable, only x; so x can
be either 0 or 1. I am listing x + 1, because the left-hand side is x + 1, and plus is OR.
Recall the truth table of the OR gate: inputs 0 and 0 give output 0, 0 and 1 give 1, 1 and 0
give 1, and 1 and 1 also give 1.

So, here the second input is always 1: 0 OR 1 gives 1, and 1 OR 1 gives 1. So, you see that
x + 1 is always 1; so we can write that it is equal to 1. Similarly, we can verify the others
the same way. So, these are quite easy to verify like this, but you should remember these
identities: if you OR something with 1, the value of the expression becomes 1, but if you OR
something with 0, the value remains x (on the slide there is a small mistake here; this
result should be x, let us correct it).
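These four identities can be checked mechanically (a sketch using Python's bitwise & and | for AND and OR):

```python
for x in (0, 1):
    assert (x | 1) == 1     # x + 1 = 1
    assert (x | 0) == x     # x + 0 = x  (note: x, not 0)
    assert (x & 1) == x     # x . 1 = x
    assert (x & 0) == 0     # x . 0 = 0
print("all four identities hold")
```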

So, let us proceed. The next rule is called the idempotent law. What does the idempotent law
say? It says that if you OR or AND a variable with the same variable, the value remains the
same; it does not change.

(Refer Slide Time: 14:53)

So, again, you try to construct such a truth table and see. The value of x can be 0 or 1.
What will x OR x be? Recall the OR gate: 0 OR 0 is 0, and 1 OR 1 is 1. So, you see, whatever
the value of x, x + x remains the same; no change, so the right-hand side is simply x.
Similarly, if you do AND you see the same thing: 0 AND 0 is 0, and 1 AND 1 is 1; so x.x is
also x. These are fairly simple.

The next law, the commutative law, is fairly obvious, but again it is a rule, because you
need laws or rules which you can use to derive more complex expressions. It says that the
order of the variables does not matter when you take AND or OR: x + y and y + x are the same,
and x.y and y.x are the same.

(Refer Slide Time: 16:11)

So, here again you can very easily prove this by the method of perfect induction. With the
inputs x and y there will be 4 input combinations; you can compute x + y and y + x. For
x + y: 0 OR 0 is 0, 0 OR 1 is 1, 1 OR 0 is 1, and 1 OR 1 is 1. But if you do it the other way
round, the definition does not change: 0 OR 0 is 0, 1 OR 0 is again 1, 0 OR 1 is again 1, and
1 OR 1 is again 1. You see, these 2 columns are identical.

So, this is true, and for AND also you can similarly prove that they are the same, right.
These are laws you need to remember, because later on, when you do algebraic manipulation,
you will be using them. Next is complementation. Complementation means you take a variable
and its complement, x and x̄: if you take the OR, the result will be 1; if you take the AND,
the result will be 0.

(Refer Slide Time: 17:42)

Well, this is also fairly obvious. You see, if you take a variable x, x can be 0 or 1. So,
what is x'? It is the NOT of this: 1 and 0 respectively. So, if I take x OR x': 0 OR 1 is 1,
and 1 OR 0 is also 1; this column is always 1. But if you take the AND: 0 AND 1 is 0, and
1 AND 0 is also 0; both will be 0. This is complementation.

And lastly, the associative law. Associativity says that when you have more than 2 variables
and you want to carry out OR or AND, it does not matter in which order you do the ORs or
ANDs.

(Refer Slide Time: 18:47)

For example, you can either take x OR y first and then OR the result with z, or you can do
y OR z first and then OR it with x; similarly for AND. This I suggest you try yourself:
construct the truth table and verify it using the method of perfect induction. Here there
will be 3 inputs, x, y and z, and you can check, for all possible combinations, whether the
left-hand side and the right-hand side are the same or not, fine; I will leave it as an
exercise for you. Now, some more laws.
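As a worked version of the suggested exercise, a sketch that enumerates all 8 combinations:

```python
from itertools import product

for x, y, z in product((0, 1), repeat=3):
    assert ((x | y) | z) == (x | (y | z))   # OR is associative
    assert ((x & y) & z) == (x & (y & z))   # AND is associative
print("associativity verified on all 8 combinations")
```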

(Refer Slide Time: 19:30)

The distributive laws; you see, now you are coming to slightly more complex rules. The first
distributive law looks very similar to the algebra that you have studied in school:
x.(y + z) = x.y + x.z, just like the multiplication and addition we know.

But the second law is slightly different; this is not what you have studied in school.
Remember, now plus is OR and dot is AND, not addition and multiplication:
x + (y.z) = (x + y).(x + z). Let us try to verify this one. With x, y and z there are 8
possible input combinations; let me calculate the left-hand side and the right-hand side for
each.

So, you can see it step by step: first we take y AND z, then we OR it with x. For the rows
with x = 0: y.z is 0, 0, 0, 1 for (y, z) = (0,0), (0,1), (1,0), (1,1), so x + y.z is 0, 0, 0,
1. For the rows with x = 1: whatever y.z is, ORing with 1 gives 1, so all four entries are 1.
This is the left-hand side. For the right-hand side, you take x OR y and x OR z, and then AND
them. With x = 0: (0+0).(0+0) = 0, (0+0).(0+1) = 0, (0+1).(0+0) = 0, and (0+1).(0+1) = 1.
With x = 1: both factors are 1 in every row, so the AND is 1 throughout. You can check that
the two sides match in every row, ok.

So, in this way you can verify it by the method of perfect induction, right; the first one is
even easier, and you can check it the same way, alright. These are the so-called distributive
laws.
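Both distributive laws can be confirmed over all 8 combinations (a sketch in the same spirit as the truth-table walkthrough above):

```python
from itertools import product

for x, y, z in product((0, 1), repeat=3):
    assert (x & (y | z)) == ((x & y) | (x & z))   # x.(y+z) = x.y + x.z
    assert (x | (y & z)) == ((x | y) & (x | z))   # x+(y.z) = (x+y).(x+z)
print("both distributive laws verified")
```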

(Refer Slide Time: 22:53)

Then the absorption law; the absorption law says that if you have an expression like

x + x.y

this will become equal to x, and

x.(x + y)

will also become equal to x.

You can try to prove these by the method of perfect induction, just as I have described, but you can also try it in a slightly different way. Let us see the first one: x + xy (I am dropping the dot in x . y). What I can write is this: x is equal to x . 1, a rule you already know, so the expression becomes x . 1 + x . y.

And now, using the distributive law that you have already studied, you can take x common: x . (1 + y). We already know that 1 + y is 1, and x . 1 is x. You see, these are all rules you have studied so far; by systematic application of them I can prove this. This is a proof using algebraic manipulation. Similarly, I can prove the next one the same way: for x . (x + y), I can write x as x + 0, giving (x + 0) . (x + y). Again apply the distributive law: this is x + (0 . y). Now 0 . y is 0, and x + 0 is again x. Proved; this one is also proved. So, if you know how to apply these rules in a systematic way, you can arrive at these identities.
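The two absorption laws can likewise be checked over all four (x, y) pairs; a small sketch:

```python
from itertools import product

def absorption_holds():
    """Perfect induction: x + x.y = x and x.(x + y) = x for all (x, y)."""
    return all((x | (x & y)) == x and (x & (x | y)) == x
               for x, y in product((0, 1), repeat=2))

print(absorption_holds())  # prints True
```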

(Refer Slide Time: 25:23)

And there is another rule, which does not have a name like the others but is also quite useful in minimization. It says: x + x̄y = x + y (I am not writing the brackets). Here also you can prove this in a number of different ways; I am just giving a hint of how you can proceed. What can you write x as? x . 1, so the expression is x . 1 + x̄y.

And this 1 you can write as y + ȳ; this is also a rule you have studied, y OR y bar is 1. So we have x . (y + ȳ) + x̄y; multiplying this out gives xy + xȳ + x̄y. Now you can combine these terms and see whether you can reduce this; there is a way in which you can do that.

For example, the term xy you can write as xy + xy, because there was a rule, you will recall: x + x = x. So anything can be replaced by that thing OR-ed with itself. Now you have 4 terms: xy + xy + xȳ + x̄y. Combine the first copy of xy with xȳ: taking x common, it becomes x . (y + ȳ). Then take the other copy of xy with x̄y: taking y common, it becomes y . (x + x̄). Now y + ȳ is 1, and x + x̄ is also 1. So x . 1 is x, and y . 1 is y; and that is the right hand side, x + y. You see, you can again apply the rules systematically. Of course, perfect induction is fine; you can prove it by perfect induction, but this is a more algebraic or formal way of proving a thing.
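The rule can also be confirmed by perfect induction over the four (x, y) pairs; writing NOT as 1 − v for a 0/1 value v:

```python
from itertools import product

# Perfect induction for the rule x + x'.y = x + y
# (NOT is written as 1 - v for a 0/1 value v).
results = [(x | ((1 - x) & y)) == (x | y)
           for x, y in product((0, 1), repeat=2)]
print(all(results))  # prints True
```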

(Refer Slide Time: 27:55)

Let us look at a couple more rules which are a little more complex. Again, I leave it as an exercise for you to prove them using the method of perfect induction. This one is called the consensus theorem. What does the consensus theorem say? If you have an expression like xy + yz + x̄z, with the term yz in between, then this yz you can drop: the expression is the same as xy + x̄z. The same thing holds if you just interchange AND and OR: if you have (x + y)(y + z)(x̄ + z), the factor (y + z) you can drop; these two are equivalent.

So, this is the consensus theorem: if you have 3 such terms, one of the terms you can straight away drop. This you can prove; try and prove it. Then involution is fairly simple: NOT of NOT is the same as the variable itself, because when you take NOT of 0 it becomes 1, and if you take NOT again it becomes 0 again.

Similarly, NOT of 1 is 0; take NOT again and it becomes 1 again. So these two are the same. So, we have now seen a number of rules that you can use as the basis for carrying out algebraic manipulation and minimization. We shall be seeing more rules later, and see how we can use them for functional manipulation in various ways.
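Both the consensus theorem (with its dual) and involution can be verified by exhaustive enumeration; a short sketch:

```python
from itertools import product

def consensus_holds():
    """Check the consensus theorem and its dual on all 8 (x, y, z) rows."""
    for x, y, z in product((0, 1), repeat=3):
        nx = 1 - x
        # xy + yz + x'z == xy + x'z   (the middle term yz is redundant)
        assert ((x & y) | (y & z) | (nx & z)) == ((x & y) | (nx & z))
        # dual: (x + y)(y + z)(x' + z) == (x + y)(x' + z)
        assert ((x | y) & (y | z) & (nx | z)) == ((x | y) & (nx | z))
        # involution: NOT(NOT x) == x
        assert (1 - (1 - x)) == x
    return True

print(consensus_holds())  # prints True
```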

(Refer Slide Time: 29:57)

With this, let us just talk a little bit about function minimization and close for today. Function minimization is something we shall take up next. Given a switching expression, how can we simplify it? We already know the basic laws; we saw many of them.

When I say simplify, we want to reduce the number of terms, and we also want to reduce the number of literals. Literal means A, B, C, B̄ and so on, and a term is a product such as AB; in AB + ABC + B̄D there are 3 terms. This is an example: if you minimize it, AB and ABC combine, because taking AB common gives AB . (1 + C), and 1 + C is 1, AB . 1 is AB. So those two terms just become AB, and the expression reduces to AB + B̄D. There are other ways of transforming expressions also; this we shall study in our next lecture. One such very important result is called De Morgan's theorem; this we shall see.

So, with this we come to the end of this lecture; we shall be continuing with this kind of algebraic manipulation in our next lecture or so.

Thank you.

Switching Circuits and Logic Design
Prof. Indranil Sengupta
Department of Computer Science and Engineering
Indian Institute of Technology, Kharagpur

Lecture - 12
Algebraic Manipulation

So, continuing with our last lecture's discussion, we shall be talking about algebraic manipulation in the present lecture. We have already seen some examples of algebraic manipulation: using the basic rules that we had introduced, we can apply them in a systematic way to try and prove some given identities; the left hand side and right hand side are given, and we try to show that they are equivalent.

(Refer Slide Time: 00:47)

So, let us start with this. The first thing we talk about is something called the principle of duality. Knowingly or unknowingly, we have already been introduced to this principle. What does the principle of duality say? It says that, given a switching expression or identity T1, I can transform it into another valid expression or identity T2 by following some rules: I interchange the AND and OR operations, and I interchange 0 and 1. We have already seen some rules; let us very quickly recall one. We talked about a rule, if you recall: x plus x bar y equals x plus y.

x + x̄y = x + y

This was one of the identities we got introduced to in the last lecture. Now, suppose we apply the principle of duality. On the left hand side we interchange dot and plus: the left hand side becomes x . (x̄ + y). On the right hand side, the plus becomes a dot: x . y. So if the original identity is true, the dual identity x . (x̄ + y) = x . y will also be true. This is what the principle of duality says.

(Refer Slide Time: 02:46)

Some examples: x . 0 = 0 is already known. Just following the rules, dot becomes plus and 0 becomes 1, giving x + 1 = 1; this is also true, and it is the dual of the first identity. Similarly, for every rule, dot replaced by plus and plus replaced by dot gives another rule. You see, because of duality, when we presented the rules in the last lecture, most of the rules came in pairs, one being the dual of the other. So if we prove one, the principle of duality says the other is also proved; you need not prove both. You prove one, and the other follows from it. This is what the principle of duality is.

(Refer Slide Time: 03:46)

Now, talking about simplification of switching expressions, you need to remember one thing: here switching algebra behaves differently from our conventional algebra.

In conventional algebra, the algebra you studied in school, if I have x + y = x + z, then I can cancel x from both sides and say y = z. Why was that? It was because the additive inverse of a number exists: for a number x, the inverse minus x exists in conventional algebra. So, when we had an expression like

x + y = x + z

we could add the same thing on both sides: minus x on this side, and minus x also on the other side. The x and minus x cancelled out on each side, and hence we could write y = z. But in switching algebra the inverse of a variable does not exist; there is no concept of minus of a variable. Because of that, if x + y = x + z, you cannot say y = z.

(Refer Slide Time: 05:25)

Just a simple counterexample: let us say x is 1, y is 0 and z is 1. What is x OR y? 1 OR 0 is 1. What is x OR z? 1 OR 1 is also 1. So x + y = x + z is true. But does it imply y = z? You see, y and z are different; you cannot simply cancel x like that from here.

So, in switching expression minimization or simplification you should not cancel variables like this. You should strictly limit yourself to applying the rules that you have learnt, and nothing outside that. Forget the cancellation habit from school algebra; do not try to apply it here. If you do, your result or deduction will be wrong. This you have to remember. As I mentioned, the main reason is that inverse operations are not defined in switching algebra; you cannot cancel variables just like that.
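The counterexample can be stated in two lines of Python:

```python
# Counterexample: cancellation is invalid in switching algebra.
# With x = 1, y = 0, z = 1: x + y equals x + z, yet y != z.
x, y, z = 1, 0, 1
lhs, rhs = x | y, x | z
print(lhs == rhs, y == z)  # prints True False
```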

(Refer Slide Time: 06:39)

Let us work out some examples, very simple ones. Suppose I have the expression AB̄ + AB + BC, which we want to simplify.

Very simple: I can apply the distributive law on the first two terms and take A common, giving A . (B̄ + B), while BC remains. For B̄ + B I have an identity: it is 1. So this is A . 1 + BC, and A . 1, by another rule, is just A. So the expression reduces to A + BC. Let us take some more examples.

(Refer Slide Time: 07:38)

Let us take an example like this, which is slightly more complex: minimize ĀBC + AB̄C + ABC̄ + ABC. Let me first try it like this. Look at the expression: ABC̄ and ABC have AB in common.

So, if you take AB common, you get C̄ + C, which is 1, and AB . 1 is just AB. But can you simplify it any further? You do not see any apparent rule to apply. There is of course another rule you could use, but let me try it in a slightly different way.

(Refer Slide Time: 08:37)

So, what I am saying is that you had a rule, remember: x + x = x. So if you have x, you can repeat it any number of times, x + x + ... + x, and this will still be equal to x: 0 OR 0 OR 0 OR 0 is 0, and 1 OR 1 OR 1 OR 1 is 1. So what I am doing is writing ABC as ABC + ABC + ABC. Because there are three other terms, I will group one copy with each of them. The original term ĀBC I group with one ABC: taking BC common gives Ā + A. Second, AB̄C with the second ABC: taking AC common gives B̄ + B. Last, ABC̄ with the third ABC: take AB common, giving C̄ + C.

Each of these bracketed sums is 1. BC . 1 is BC, the next gives AC, the next AB. So BC + AC + AB will be your minimized expression. There are multiple ways to proceed; this is one way. This will be the minimized expression corresponding to the given one. Let us take another example again.
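The minimization just carried out can be double-checked by comparing the original and minimized expressions on every row; a quick sketch:

```python
from itertools import product

def check_minimization():
    """A'BC + AB'C + ABC' + ABC == BC + AC + AB on every row."""
    for A, B, C in product((0, 1), repeat=3):
        nA, nB, nC = 1 - A, 1 - B, 1 - C
        original = (nA & B & C) | (A & nB & C) | (A & B & nC) | (A & B & C)
        minimized = (B & C) | (A & C) | (A & B)
        assert original == minimized
    return True

print(check_minimization())  # prints True
```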

(Refer Slide Time: 10:25)

Let us take this: ĀB + AB + B̄C̄ + B̄C. Again you can see what you can combine: the first two, and the last two. For ĀB and AB, taking B common, it becomes B . (Ā + A). For the last two, taking B̄ common, it becomes B̄ . (C̄ + C).

So again, Ā + A is 1 and C̄ + C is 1; B . 1 is B, B̄ . 1 is B̄, and B + B̄ is 1. So it minimizes to just 1, a constant expression. In this way you can simplify any given expression using the rules, if you know how to apply them.

(Refer Slide Time: 11:27)

Now, this is a very important law that you can apply for transforming expressions in a variety of ways; this is called De Morgan's theorem. Here is the statement of De Morgan's theorem in 2 variables; you can extend it to any number of variables.

It says: x OR y, NOT of that, is the same as (NOT x) AND (NOT y); and, as its dual, x AND y, NOT of that, equals (NOT x) OR (NOT y). Let us look at the first one; we can easily prove it by perfect induction. Take x and y with the different value combinations 0 0, 0 1, 1 0 and 1 1, and evaluate the two expressions: the complement of (x OR y), and x̄ AND ȳ.

First take the OR of x and y, and then NOT: 0 OR 0 is 0, NOT of that is 1. 0 OR 1 is 1, NOT of that is 0. 1 OR 0 is 1, NOT gives 0. 1 OR 1 is 1, NOT gives 0. For the other side, take NOT of each variable and then AND: 1 AND 1 is 1, 1 AND 0 is 0, 0 AND 1 is 0, 0 AND 0 is 0. You see, these two columns are the same. Similarly you can prove the second one.
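The same perfect-induction table can be generated mechanically for both forms of the theorem:

```python
from itertools import product

# De Morgan's theorem for two variables, by perfect induction
# (NOT v is written as 1 - v for 0/1 values).
ok = all((1 - (x | y)) == ((1 - x) & (1 - y)) and
         (1 - (x & y)) == ((1 - x) | (1 - y))
         for x, y in product((0, 1), repeat=2))
print(ok)  # prints True
```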

(Refer Slide Time: 13:35)

Now, when I say it can be extended to any number of variables, I am just showing one case. For three variables you can write: NOT of (x + y + z), if there is a bar over the whole thing, is the same as x̄ . ȳ . z̄. Similarly, NOT of (x y z) will be equal to x̄ + ȳ + z̄. This can be extended to as many variables as you want.

(Refer Slide Time: 14:07)

This is exactly the truth table I worked out, shown here. For all possible values of x and y the left hand side and the right hand side are shown; these columns are identical. For the second law, the corresponding columns are also identical.

(Refer Slide Time: 14:29)

Now, let us work out a simple example of simplification using De Morgan's theorem. Consider the expression: NOT of (A + B), ANDed with (Ā + B̄). For this kind of expression you have to use De Morgan's theorem, because of the complemented sum (A + B); the other factor is fine. Apply De Morgan's theorem to NOT of (A + B): it becomes ĀB̄, and you still have the factor (Ā + B̄).

Then you multiply it out using the distributive law: ĀB̄ . (Ā + B̄) is ĀB̄Ā + ĀB̄B̄ (the dots, as I said, you can sometimes drop). Look at the first term: there are two Ā's, and by commutativity you can bring them together; ĀĀ is just Ā, so the term is ĀB̄. Similarly, in the second term B̄B̄ is just B̄, so it is also ĀB̄. And something OR-ed with the same thing is that same thing. So ĀB̄ is the final expression. You see, unless you knew De Morgan's law you could not arrive at this expression; it would be quite difficult. So De Morgan's law is also very important.

(Refer Slide Time: 16:12)

Let us take another, slightly more complex example: NOT of (AB̄ + ĀB). I write it without dots, it is easier: AB̄ + ĀB, whole bar. This is "something plus something", complemented, so I apply De Morgan's law: it becomes (AB̄) bar AND (ĀB) bar.

Now I apply De Morgan's law again on each factor: (AB̄) bar becomes Ā + B (the double bar on B cancels, by involution), and (ĀB) bar becomes A + B̄. So we have (Ā + B) . (A + B̄). This you can multiply out again; I am showing it partially. If you proceed, the final result you will be getting for this one is AB + ĀB̄. This will be the final minimized expression.
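The claimed result can be confirmed on all four input rows; a short sketch:

```python
from itertools import product

def check_xnor_identity():
    """NOT(A.B' + A'.B) == A.B + A'.B' (the complement of XOR is XNOR)."""
    for A, B in product((0, 1), repeat=2):
        nA, nB = 1 - A, 1 - B
        lhs = 1 - ((A & nB) | (nA & B))
        rhs = (A & B) | (nA & nB)
        assert lhs == rhs
    return True

print(check_xnor_identity())  # prints True
```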

(Refer Slide Time: 17:55)

This is another expression; I leave it as an exercise for you. You see, there is a term here where you can apply De Morgan's law, and another term where you can apply it again; then simplify further using whatever rules apply, and carry out whatever minimization you can. In this way you apply the rules wherever possible and carry out the minimization. One last example; let us just work this out.

(Refer Slide Time: 18:25)

For the first factor here you need De Morgan's law: NOT of (ABC) becomes Ā + B̄ + C̄. The other two factors are (A + C) and (A + C̄). Let us multiply these two out first, leaving the first factor undisturbed; we do not disturb it. A AND A is just A; A AND C̄ is AC̄; C AND A is AC, or CA, whichever, since AND is commutative; and C AND C̄ is CC̄. I am simplifying as I go: from the first three terms, A, AC̄ and AC, you can take A common, giving A . (1 + C̄ + C); CC̄ is nothing but 0; and 1 OR anything is 1.

So this is A . 1, OR-ed with 0, which is just A. Now AND this with the first factor: (Ā + B̄ + C̄) . A gives AĀ + AB̄ + AC̄, and AĀ is again 0. So your final expression is AB̄ + AC̄; this is your minimized expression.
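This last simplification can also be double-checked by enumeration:

```python
from itertools import product

def check_last_example():
    """NOT(A.B.C) . (A + C) . (A + C')  ==  A.B' + A.C'."""
    for A, B, C in product((0, 1), repeat=3):
        nB, nC = 1 - B, 1 - C
        lhs = (1 - (A & B & C)) & (A | C) & (A | nC)
        rhs = (A & nB) | (A & nC)
        assert lhs == rhs
    return True

print(check_last_example())  # prints True
```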

(Refer Slide Time: 20:34)

So, here we have looked at a number of rules and ways to minimize expressions. One more expression I leave for you to minimize; just try to work it out.

(Refer Slide Time: 20:42)

Now, let us look at some practical problems and see how we can convert them into switching expressions. I am giving you two examples here. Take this one: a safe has 5 locks v, w, x, y, z, and to open the safe you have to unlock all 5 of them. What we assume is that the keys of the locks are distributed among 5 security officers A, B, C, D, E like this: A has the keys for locks v and x, B for v and y, C for w and y, D for x and z, and E for v and z; each officer holds keys for 2 locks. We want to find out the combinations of security officers that must be present so that the safe can be opened.

You see, there are 5 locks and I need all of them. Let us look at them one by one. Who is having v? A, B and E; I write this as (A + B + E). This is the first requirement, for lock v. Then w: only C is having w, so C is a must; C must be there. Then x: A and D have it, so (A + D). Then y: B and C, so (B + C). And finally z: (D + E). You see, you have created a switching expression: (A + B + E) . C . (A + D) . (B + C) . (D + E). Now you can multiply this out; I leave that as an exercise for you. One of the terms you will see is ABCD; I am showing you a couple of terms. Another term is BCD, another is ACE, and so on; there will be more terms.

The term BCD means that if security officers B, C and D are all there, all three of them, then all 5 keys are available: B is having v and y, C is having w and y, and D is having x and z. Similarly for ACE. So if you write down and expand an expression like this, you can find out all combinations of security officers that can open the safe.
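The multiplication-out exercise can be sidestepped by evaluating the derived expression directly for every subset of officers; a small sketch:

```python
from itertools import product

def safe_opens(A, B, C, D, E):
    """Evaluate the expression (A+B+E) . C . (A+D) . (B+C) . (D+E)."""
    return (A | B | E) & C & (A | D) & (B | C) & (D | E)

# Every subset of officers (1 = present) that can open the safe:
working = [combo for combo in product((0, 1), repeat=5)
           if safe_opens(*combo)]

# The groups named in the lecture are among them:
print(safe_opens(0, 1, 1, 1, 0))  # B, C, D present -> prints 1
print(safe_opens(1, 0, 1, 0, 1))  # A, C, E present -> prints 1
```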

(Refer Slide Time: 23:46)

Let us take another such problem. There are 5 soldiers, say A, B, C, D, E, who want to go on some mission, and the conditions are as follows: either A or B or both must go; either C or E, but not both, must go; either both A and C go or neither goes; if D goes then E must go; if B goes then A and D must also go.

Here also you can write the conditions in a very similar way. "Either A or B or both" is nothing but the OR function; I can write (A + B), since A, or B, or both will make the OR true. "Either C or E but not both": how will you write this? As C̄E + CĒ; that means either C is not going and E is going, or C is going and E is not going. "Either both A and C go or neither goes" you can write as AC + ĀC̄. Like this you can write down the whole expression; I leave it as an exercise for you to complete the rest.

If this kind of problem is given, you can map it to a switching expression. Why a switching expression? Because in this problem each of A, B, C, D, E, if you treat them as variables, is a binary variable, either 0 or 1: 0 means that soldier is not going, 1 means the soldier is going. So, for any such problem with a well-defined specification, you can write down a switching expression, minimize it, and from the resulting expression find out which groups of soldiers can be sent for the mission. I also leave this as an exercise for you; you can try it out.
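Completing the exercise by brute force is straightforward; the encoding of the five conditions below is my reading of the problem statement (implication p → q is written as NOT p OR q):

```python
from itertools import product

def valid(A, B, C, D, E):
    """1 = the soldier goes. Implication p -> q is coded as (1 - p) | q."""
    c1 = A | B                              # either A or B (or both) goes
    c2 = ((1 - C) & E) | (C & (1 - E))      # C or E, but not both
    c3 = (A & C) | ((1 - A) & (1 - C))      # both A and C go, or neither
    c4 = (1 - D) | E                        # if D goes, E must go
    c5 = (1 - B) | (A & D)                  # if B goes, A and D must go
    return c1 & c2 & c3 & c4 & c5

solutions = [s for s in product((0, 1), repeat=5) if valid(*s)]
print(solutions)  # prints [(1, 0, 1, 0, 0)] -- only A and C can go
```

Under this encoding the constraints admit exactly one group: A and C go, everyone else stays.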

So, with this we come to the end of this lecture. One thing we saw towards the end is that, given a problem, you can write down an expression for it. Of course, the expression can be pretty large in some cases. And if I give you a large expression and ask you to apply the rules we have learnt to try and minimize it, sometimes it may not be an easy task; you may have to work through pages and pages to simplify a very large expression.

So, there should be some more systematic way to approach this problem. We shall later see that there are more systematic methods for minimizing such logic expressions, switching expressions, which prescribe systematic steps to follow; if you do that, you will get expressions in minimized form. We shall be learning all these things in due course.

Thank you.

Switching Circuits and Logic Design
Prof. Indranil Sengupta
Department of Computer Science and Engineering
Indian Institute of Technology, Kharagpur

Lecture - 13
Properties of Switching Functions

So, in this lecture we shall be discussing some of the properties of switching functions and switching expressions. Recall that earlier we talked about the notions of literals, switching functions, switching expressions and so on, and we looked at a number of basic rules using which you can manipulate such switching expressions. Now, since our ultimate target is to design and implement functions using the basic gates AND, OR, NOT, NAND etc., let us try to move in that direction. In this lecture we shall first introduce some basic concepts about switching expressions, and then we shall see how we can use them in designing circuits or functions.

(Refer Slide Time: 01:10)

The first thing we talk about is something called minterm and maxterm, but before defining them, let us define a literal. What is a literal? As the definition says, a literal is nothing but a variable appearing in either un-complemented form or in complemented form.

So, suppose I have an expression, let us say

A B̄ C + A D̄

Here the literals are A, B̄, C, A, and D̄; there are a total of 5 literals in this expression. A literal is a single variable, appearing either un-complemented or complemented. Some examples: x, x̄, y, ȳ, and so on.

(Refer Slide Time: 02:29)

Now, let us see what a minterm and a maxterm are. Now that we know what a literal is, consider an n-variable switching function f, where the input variables are, let us say, x1 to xn.

A product term means an AND of some of these literals, each in complemented or un-complemented form, because that is the definition of a literal. A product term that contains literals of all the n variables is called a minterm. Consider a function f of 3 variables A, B and C; in this function, some examples of minterms are A B̄ C, A B̄ C̄, A B C, and so on. All the variables must appear, as literals; such product terms are called minterms.

Similarly, maxterms refer to the OR operation: a maxterm is a sum term containing all the variables. Again for this 3-variable function, the sum term (A + B + C̄) is a maxterm; similarly (Ā + B̄ + C) is also a maxterm. These are the basic definitions.

(Refer Slide Time: 04:10)

Some examples are shown here on the slide as well, for the three-variable function f with input variables A, B, C. You see, if some variable is missing, for example in the product term AB, it will not be a minterm; a minterm must contain all the variables, in complemented or un-complemented form. Similarly, maxterms are sums of the literals corresponding to all the variables. These are minterms and maxterms.

(Refer Slide Time: 04:46)

So, some properties of minterms and maxterms: a minterm assumes the value 1 for exactly one combination of the variables, and a maxterm assumes the value 0 for exactly one combination. Let us see what this means. Take a function of three variables A, B, C, and look first at the so-called sum-of-products form; say the function is

A B̄ C + A B C̄ + A B C

According to the definition these are minterms; there are three of them. Now, the claim is that a minterm assumes the value 1 for exactly one combination of variables. Consider the first minterm, A B̄ C: only when the input variable A is 1, B is 0 and C is 1 will it have the value 1, because then A is 1, B̄ (the complement of 0) is 1, and C is 1, and 1 AND 1 AND 1 is 1. Similarly, for the second minterm we require A = 1, B = 1 and C = 0, and for the third one A = 1, B = 1, C = 1. So, for exactly one combination of the input variables a particular minterm will assume the value 1.

(Refer Slide Time: 06:49)

For a maxterm the condition is slightly different. Again take a function of three variables, now written in terms of maxterms: say the maxterms are (A + B̄ + C), (Ā + B̄ + C̄) and (A + B + C); there are 3 maxterms.

Suppose we follow our previous approach and try to see for which combinations a maxterm is 1. A maxterm is an OR function, so (A + B̄ + C) is 1 if A is 1, or B is 0, or C is 1. Say A is 1: then the first maxterm is 1, because 1 OR anything is 1; but the last maxterm also contains A, so it will also be 1. Just taking A = 1 will not serve our purpose of picking out one maxterm. Rather, we will see for which combination a maxterm becomes 0. Look at (A + B̄ + C): if A is 0, B is 1 and C is 0, then all three literals are 0, and 0 OR 0 OR 0 makes the first maxterm 0. For the second one, it is 0 when A is 1, B is 1 and also C is 1; and for the third, when A is 0, B is 0 and C is 0.

So, look again at the statement: a maxterm assumes the value 0 for exactly one combination of variables. For exactly one combination the first maxterm is 0, for exactly one the second is 0, and for exactly one the third is 0. Just remember this.
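Both "exactly one" properties are easy to confirm by enumeration; shown here for one minterm and one maxterm from the discussion:

```python
from itertools import product

# A minterm is 1 on exactly one row; a maxterm is 0 on exactly one row.
# Shown for the minterm A.B'.C and the maxterm (A + B' + C).
rows = list(product((0, 1), repeat=3))
minterm_ones = [r for r in rows if (r[0] & (1 - r[1]) & r[2]) == 1]
maxterm_zeros = [r for r in rows if (r[0] | (1 - r[1]) | r[2]) == 0]
print(minterm_ones)   # prints [(1, 0, 1)]
print(maxterm_zeros)  # prints [(0, 1, 0)]
```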

(Refer Slide Time: 09:15)

Now, minterms and maxterms can be classified as either true or false. So I can call some minterms true minterms and others false minterms. How are they defined? The classification is made with respect to a given function. The minterms for which the function evaluates to 1, the ones that appear when the function is written as a sum of minterms, are called true minterms, and the remaining ones are false minterms.

Similarly for the maxterms: the maxterms that appear when the function is written as a product of sums, those for which the function evaluates to 0, are called true maxterms, and the remaining ones false maxterms. This is how the definition goes; let us look at some examples.

Let us take a simple switching function, f of three variables X, Y, Z: f = X̄Y + XYZ. You see, XYZ is a minterm, all three variables are there; but X̄Y is not a minterm. So what do we do? We multiply X̄Y by (Z + Z̄); multiply means AND. Because Z + Z̄ is 1, and anything AND 1 is the same thing, the function is unchanged. Doing this, X̄Y becomes X̄YZ + X̄YZ̄. So effectively the product term X̄Y corresponds to two different minterms, X̄YZ and X̄YZ̄; this is what is meant here.

Similarly, take a product of sums: f = (X + Y)(Y + Z), a product of sum terms. Here again these factors are not maxterms, because all the variables are not present. Take (X + Y): you can write it as X + Y + ZZ̄, since ZZ̄ is 0 and anything OR 0 is the same thing. Now apply the distributive law: this is equivalent to (X + Y + Z)(X + Y + Z̄). Similarly you can do it for the second factor: (Y + Z) becomes (X + Y + Z)(X̄ + Y + Z). So in this case the true maxterms are (X + Y + Z), (X + Y + Z̄), and the third one, (X̄ + Y + Z), coming from (Y + Z); the maxterm (X + Y + Z) is common to both factors. This is how you can work out which minterms and maxterms are true; the remaining ones will be false.
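The expansion of a product term into minterms can also be computed directly: instead of multiplying in (v + v̄) for each missing variable, list every full assignment that makes the term 1. A small sketch (the helper name is mine):

```python
from itertools import product

def minterms_of(term, variables=("X", "Y", "Z")):
    """Expand a product term into its minterms: a minterm of the term is
    any full assignment of the variables that makes the term evaluate to 1.
    `term` maps a variable name to 1 (plain) or 0 (complemented);
    missing variables are the ones multiplied in via (v + v')."""
    return [values for values in product((0, 1), repeat=len(variables))
            if all(dict(zip(variables, values))[v] == bit
                   for v, bit in term.items())]

# X'Y (X complemented, Y plain, Z absent) expands to X'YZ' and X'YZ:
print(minterms_of({"X": 0, "Y": 1}))  # prints [(0, 1, 0), (0, 1, 1)]
```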

(Refer Slide Time: 12:34)

Now, let us take a small real example and, with its help, see how this concept of minterms and maxterms works out. The example is that of a 3-bit adder. We have already learnt earlier how to carry out binary addition; a 3-bit adder is sometimes known as a full adder. There are 3 input bits A, B and C, and as output you will be having a sum (1 bit) and a carry (also 1 bit). Here we are showing the truth table of the full adder.

You see, for all combinations of A, B and C we have listed what the sum and the carry will be. If you add all 0's, the sum is 0 and there is no carry: 0 and 0. If you add 0, 0 and 1, the sum is 1 but there is no carry. Similarly for 0 1 0 and 1 0 0: there is a single 1, the sum will be 1, no carry. But if there are two 1's (0 1 1, 1 0 1 or 1 1 0), the sum will be 0 and the carry will be 1, because carry and sum together read 1 0, which means 2 in decimal. And lastly, if I add 1, 1 and 1, which is 3, and 3 is 1 1 in binary, the sum will be 1 and the carry will also be 1.

Now, and we shall be coming back to this later, the way we can write down the expressions for the sum and carry is very simple. First look at the column for sum and find out for which rows the sum is 1; there are 4 such rows. Then look at the corresponding input combinations: these are the combinations for which the sum will be 1. If a variable is 0, you use it in complemented form (Ā); if it is 1, you use it in un-complemented form. The first such row is 0 0 1, so you write Ā B̄ C.

The second row is 0 1 0, giving Ā B C̄; then 1 0 0, giving A B̄ C̄; and finally 1 1 1, giving A B C. So sum = Ā B̄ C + Ā B C̄ + A B̄ C̄ + A B C. This is how we can write down the switching expression for the sum directly from the truth table. It is very simple: you see which rows are 1's and write them down. We shall be coming back to this later, as I said.

Now, let us come to the carry; look at the carry function. For the carry also, you see, there are four 1's. So, if you follow the same principle, what will be the expression? The first row with carry 1 is 0 1 1.

(Refer Slide Time: 16:10)

So, it is A bar B C; plus the second one, 1 0 1, A B bar C; plus the third one, 1 1 0, A B C bar; and
1 1 1, plus A B C. Now, I use the same trick as in an earlier example: this A B C I write three times, as A B C
OR A B C OR A B C. I can do this because there is a rule of switching algebra which allows
me to write something as the same thing plus the same thing. Then I combine one copy of A B C with each of the other terms:
with A bar B C, with A B bar C, and with A B C bar. From A bar B C and A B C, if I take B C common,
I get A bar OR A; A bar from the first and A from the second. From A B bar C and A B C, if I take A C
common, I get B bar from the first and B from the second. And lastly, from A B C bar and A B C, if I take A B
common, I get C bar plus C.

Now, A plus A bar is 1, B plus B bar is 1, and C plus C bar is 1. So, finally I get

AB + BC + CA

So, what I am showing here is the same function, but in a minimized form: A B OR B C OR C
A. This is what a full adder is.
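The derivation above can be checked exhaustively. The following is an editor's sketch (the function names are mine, not from the lecture): it encodes the canonical sum and carry read off the truth table, and verifies that the minimized carry AB + BC + CA agrees with the canonical form on all 8 rows.

```python
# Full adder as switching functions; verifies the minimization of carry.
from itertools import product

def sum_bit(a, b, c):
    # true minterms 001, 010, 100, 111: A'B'C + A'BC' + AB'C' + ABC
    return ((1-a)&(1-b)&c) | ((1-a)&b&(1-c)) | (a&(1-b)&(1-c)) | (a&b&c)

def carry_sop(a, b, c):
    # true minterms 011, 101, 110, 111: A'BC + AB'C + ABC' + ABC
    return ((1-a)&b&c) | (a&(1-b)&c) | (a&b&(1-c)) | (a&b&c)

def carry_min(a, b, c):
    return (a&b) | (b&c) | (c&a)    # the minimized form AB + BC + CA

for row in product([0, 1], repeat=3):
    assert carry_sop(*row) == carry_min(*row)       # same function
    assert sum_bit(*row) == (row[0] ^ row[1] ^ row[2])
```

Running the loop raises no assertion, confirming that the canonical and minimized carry expressions are the same function.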

(Refer Slide Time: 17:54)

So, talking about the true minterms: for the sum there are 4, corresponding to the rows where the sum is 1. Similarly, for the carry function the 4 true
minterms are 0 1 1, 1 0 1, 1 1 0 and 1 1 1, corresponding to those rows.

In a similar way you can find the maxterms: for 1 1 0 the maxterm will be A bar
plus B bar plus C, for 1 0 1 it will be A bar plus B plus C bar, for 0 1 1 it will be A plus B bar
plus C bar, and so on.

(Refer Slide Time: 18:37)

Now, let us introduce some more definitions; there is something called a unate function. We
had defined what a switching expression is, and we talked about literals: a literal is
a variable which can appear either in complemented or in uncomplemented form. We say a
function is unate if every variable, say a variable A, appears only in complemented form or only in
uncomplemented form, not in both.

So, let us see the definition: a switching function is called unate if, in its minimized expression, no variable appears in
both complemented and uncomplemented form. Note this term, minimized
expression; I will come back to it. Let me take some examples. Suppose the switching function is A B plus B D. This
is unate, because everything appears in uncomplemented form. Take another example, A B
bar plus B bar D; this is also unate. Let us add another term, say A D, giving A B bar plus B bar D plus A D. Here A appears only in
uncomplemented form, B appears only in complemented form, and D appears only in uncomplemented
form; this is also unate.

Now, consider a function like A B C bar OR A B C. You might say that this is
not unate, because C appears in both complemented and uncomplemented form. But I
will argue that this expression is not minimized: if I take A B common, there will be C bar
OR C, which equals 1, so the C’s cancel out. The expression reduces to just A B, and A B is unate.
So, this word minimized in the definition is important.

(Refer Slide Time: 20:52)

There are two special categories of unate functions. A function is called positive
unate if all variables appear only in uncomplemented form; there are no complemented
literals. Similarly, I can define negative unate: all variables appear only in complemented form.

So, if every variable appears only in complemented form, the function is called negative unate. And if a function is
not unate, it is called non-unate. For the full adder function, the carry is positive
unate, as you can check from

AB+ BC +CA

But in the sum the variables appear in both complemented and uncomplemented form, so the sum
is non-unate.
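Unateness can also be checked mechanically: a function is positive unate in a variable if raising that variable from 0 to 1 can never drop the output from 1 to 0, and negative unate if the reverse holds. The following is my own sketch (the terminology is the lecture's, the code and names are not):

```python
# Classify a boolean function's behaviour in each variable.
from itertools import product

def unateness(f, n, i):
    pos = neg = True
    for bits in product([0, 1], repeat=n):
        if bits[i] == 0:
            lo = f(*bits)
            hi = f(*[b if j != i else 1 for j, b in enumerate(bits)])
            if lo > hi: pos = False   # flipping 0 -> 1 decreased the output
            if hi > lo: neg = False   # flipping 0 -> 1 increased the output
    return "positive" if pos else "negative" if neg else "non-unate"

carry = lambda a, b, c: (a&b) | (b&c) | (c&a)   # AB + BC + CA
s = lambda a, b, c: a ^ b ^ c                   # the full-adder sum

print([unateness(carry, 3, i) for i in range(3)])  # positive in A, B and C
print([unateness(s, 3, i) for i in range(3)])      # non-unate in every variable
```

This matches the observation above: the carry is positive unate, the sum is non-unate.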

Now, let us talk about unique ways of representing functions. We said that when a
switching function is given to you, you can apply the rules to carry out
minimization. Now, minimization can be carried out in a number of ways. If you
can figure out which rule to apply, you may be able to minimize it very well. But it may
also happen that you have minimized it to some extent but missed some
rules.

So, we have a reduced expression, but it may not be the smallest possible expression.
The question, then, is: is it possible to have some kind of unique representation of a
function? Any such unique representation is called a canonical representation. So,
we are talking about canonical representations of functions; let us see.

(Refer Slide Time: 23:01)

Canonical representations of switching functions, or canonical forms: as I said, a canonical form
is nothing but a unique way of representing a function. Given a truth table, we can directly arrive at two different canonical forms. One is the canonical sum of
products, which is sometimes also known as the disjunctive normal form; the other is the canonical product of sums,
a product of sum terms, which is called the conjunctive normal form.

Normal form is a term used in propositional calculus, where it refers to
some kind of canonical, unique form of representation.

(Refer Slide Time: 23:58)

Let us first look at the canonical sum of products. You have the
truth table; from the truth table you identify all the true minterms, that is, the
rows for which the output is 1, and take the sum of all these minterms. This we have already
shown by an example earlier for the full adder: the sum is like this and
the carry is like this. You take all the rows for which the sum is 1 (there are 4) and for which the
carry is 1 (there are also 4).

Now, this is nothing but the canonical representation. I
mentioned that the carry function can be minimized, but here we are not minimizing. Without
minimizing, we have seen from the truth table that there are 4 ones in the sum and
4 ones in the carry. So, if I list out all 4 like this, it will obviously be a unique
representation: there are exactly 4 true minterms, not 3 or 5. So,
if I express a function as the sum of its true minterms, it is obviously a
unique representation.

Now, we can write such a canonical expression in a short form, as a list of minterms, using the sigma
or summation notation. I will explain with the help of the truth table how it is
written; let us check this.

(Refer Slide Time: 25:43)

Let us consider again this truth table for the full adder. What we do is, for every row of the
input, write down the decimal equivalent of the binary input: 0 0 0 is 0, 0 0 1 is 1, 0 1 0 is
2, 0 1 1 is 3, 1 0 0 is 4, 1 0 1 is 5, and 1 1 0 and 1 1 1 complete the rows 0 to 7. For the sum you list out the rows which are 1:
they are 1, 2, 4 and 7. So, I write it like this, in the summation or sigma notation: sigma of 1, 2,
4, 7.

Similarly, the carry is sigma of 3, 5, 6, 7. So, this is a compact way of representing the
canonical sum of products. The numbers 1, 2, 4, 7 or 3, 5, 6, 7 represent nothing but the true
minterms of the function; each of them corresponds to one row of the truth table. This is one way in
which you can represent the function.
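The sigma notation above can be recovered directly in code. This is my own small sketch (not from the slides): enumerate the rows with A as the most significant bit and list the decimal row numbers where each output is 1.

```python
# Recover the sigma (true-minterm) notation for the full adder.
from itertools import product

rows = list(product([0, 1], repeat=3))          # 000 ... 111, i.e. rows 0..7

sum_minterms   = [i for i, (a, b, c) in enumerate(rows) if a ^ b ^ c]
carry_minterms = [i for i, (a, b, c) in enumerate(rows) if (a&b)|(b&c)|(c&a)]

print("sum   = sigma", sum_minterms)    # sigma [1, 2, 4, 7]
print("carry = sigma", carry_minterms)  # sigma [3, 5, 6, 7]
```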

(Refer Slide Time: 27:05)

Similarly, we can have the canonical product of sums form. What is the rule here?
The rules are very similar; the only difference is that you identify the false minterms. Earlier we had
identified the true minterms; now you find the rows for which the output is 0. A false
minterm is a row where the output is 0, not 1.

Then for every false minterm you form a sum term, but now the literal polarities are
reversed: if a variable is 0 in the truth table, you use it in uncomplemented form; if it is 1, you
use it in complemented form. I will take the example of the full adder and explain. So,
the expressions for the full adder are shown. Let me go to the next slide, show the example, and
then come back. The truth table is shown here.

(Refer Slide Time: 28:10)

So, what we are saying is this. Suppose I want to write down the expression for the sum.
Here I look at the false minterms. The first is 0 0 0; since every variable is 0, I use each in uncomplemented form, so the sum term
is A plus B plus C. The second is 0 1 1, giving A OR B bar OR C bar; the third is 1 0 1, giving
A bar OR B OR C bar; and lastly 1 1 1 gives A bar plus B bar plus C bar. What does this actually
mean? A false minterm such as A bar B bar C bar means that when this condition is true,
that is, A is 0, B is 0 and C is 0, the sum is 0.

The corresponding sum term A plus B plus C is its complement: it is 1 whenever A is 1 OR B is 1 OR C is 1, and 0
only when all three are 0. Just apply De Morgan’s law: NOT of A bar B bar C bar is
nothing but A OR B OR C. So, we are actually applying the reverse: the output is 1 except on 0 0 0,
except on 0 1 1, except on 1 0 1 and except on 1 1 1. So, this AND this AND this AND this will
represent my function sum.

Similarly, for the carry I can do the same thing. If you go back to the previous slide,
we have written down the expressions for both the sum and the carry; they are obtained
in a very similar way. I have shown the sum, so you can see that the carry can also be obtained.

Again, when you write this down from the truth table, you can express it in a short form as
a product, using the pi notation, again by noting down the
decimal equivalents.

(Refer Slide Time: 31:16)

So, let us see from the truth table once again: note down the decimal
equivalents 0, 1, 2, 3, 4, 5, 6 and 7. The sum is 0 for rows 0, 3, 5 and 6, and the carry is
0 for rows 0, 1, 2 and 4. So, you see, this is a very simple way in which, starting from
the truth table, I can directly write down a sum of products or a product of sums expression
which is canonical. Canonical means it consists either of a sum of the true minterms, or
of a product of the maxterms.
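The pi notation can be recovered the same way; again a sketch of my own, listing the rows where each output is 0 (the false minterms).

```python
# Recover the pi (false-minterm / maxterm) notation for the full adder.
from itertools import product

rows = list(product([0, 1], repeat=3))

sum_maxterms   = [i for i, (a, b, c) in enumerate(rows) if (a ^ b ^ c) == 0]
carry_maxterms = [i for i, (a, b, c) in enumerate(rows)
                  if ((a&b)|(b&c)|(c&a)) == 0]

print("sum   = pi", sum_maxterms)    # pi [0, 3, 5, 6]
print("carry = pi", carry_maxterms)  # pi [0, 1, 2, 4]
```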

This, as I said, is called a canonical representation. So, now you know, from a
function specification in the form of a truth table, how to write down the expression either in
sum of products or in product of sums form. Later on we shall see how we can manipulate these forms
in various ways.

So, with this we come to the end of this lecture. So, we shall be continuing with our
discussion in the next lecture.

Thank you.

Switching Circuits and Logic Design
Prof. Indranil Sengupta
Department of Computer Science and Engineering
Indian Institute of Technology, Kharagpur

Lecture - 14
Obtaining Canonical Representations of Function

If you recall, in our last lecture we talked about how to obtain the canonical sum of
products and product of sums representations of a function from the truth table. We
continue with that discussion in this lecture: we shall see that, not only from the truth
table, but even from a given expression, we can systematically apply algebraic manipulation
to obtain the canonical expression. So, let us look at what kinds of transformations are
possible.

So, the title is Obtaining Canonical Representations of Functions.

(Refer Slide Time: 00:56)

We first look at the algebraic procedure. What is it? Say I have a
function of 3 variables, A, B and C, and in this function there is a term, let us say A B, among
other things. This is certainly not a minterm, because C is absent. So, what
we are saying is that for every missing variable you multiply the product term by that variable
plus its complement. The reason is very simple: if you multiply it out, A B times (C plus C bar) is nothing but
A B C plus A B C bar.

The same rule is being followed here: we examine each term of the given sum
of products expression. If it contains all the variables, it is already a minterm, and we do not do
anything; if it is not a minterm, some of the variables must be missing. So, for every
missing variable, say xi, multiply that term by (xi bar plus xi). Then, using the
algebraic rules, you multiply out all the terms, and finally you get a sum of
minterms; some of them may be repeated. You eliminate the
repeated, redundant ones, and what you get is your canonical sum of products. Let
us take an example.

(Refer Slide Time: 02:46)

Consider a 3 variable function given like this. The last product term is
obviously a minterm; all the variables are there. But in the first product term c is missing, and in
the second product term both a and c are missing. So, for the first one we multiply
by (c or c bar); for the second one we multiply by both (a or a bar) and (c or c bar); for the
last one we do nothing, because all 3 variables are there. Then you simply multiply them out:
the first term gives a b bar c and a b bar c bar, and the second term (I have skipped a step here) gives
a b c, a b c bar, a bar b c and a bar b c bar; and a b c is
already there.

Now, you see which of them are repeated. This a b c appears twice; other than that
all are unique, there is no further repetition. So, you write down the expression: you have 6
true minterms, and this is your canonical sum of products expression. So, given any
expression, you can apply these rules to expand the function, and using the
multiplication rules of switching algebra you can obtain the canonical sum of products expression.
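The expansion step can be sketched in code (function and variable names here are mine, not the lecture's): a product term fixes some variables, and multiplying by (x + x bar) for each missing variable x simply enumerates every completion, i.e. every minterm the term covers.

```python
# Expand product terms of f = a b' + b + a b c into true minterms.
from itertools import product

VARS = ["a", "b", "c"]

def expand(term):
    """term: dict var -> 0/1 for the literals present; returns minterm indices."""
    missing = [v for v in VARS if v not in term]
    out = set()
    for fill in product([0, 1], repeat=len(missing)):
        bits = dict(term, **dict(zip(missing, fill)))
        out.add(int("".join(str(bits[v]) for v in VARS), 2))
    return out

terms = [{"a": 1, "b": 0}, {"b": 1}, {"a": 1, "b": 1, "c": 1}]
minterms = sorted(set().union(*(expand(t) for t in terms)))
print(minterms)   # [2, 3, 4, 5, 6, 7] -- six true minterms, a b c counted once
```

Using a set automatically discards the repeated minterm a b c, matching the elimination step described above.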

(Refer Slide Time: 04:46)

Similarly, let us look at the reverse, the product of sums. Again take an example: a function of 3 variables A, B, C, with a sum term A
plus B among others. This is certainly not a maxterm, because C
is missing. If C is missing, I add the term C C bar to it; this I can do because C
C bar, the AND of a variable and its complement, is nothing but 0, and ORing anything with 0 does not
change the function.

But once I do it, I can apply the distributive law: (A plus B plus C C bar) equals (A plus B plus C)
times (A plus B plus C bar). So, now I have maxterms; this is
the basic idea. So, we examine each term of a product of sums expression:
if it is a maxterm, we do nothing; if it is not a maxterm, then you look at all missing
variables and for every missing variable you add the term xi xi bar. Then you
apply the simplification rules and the distributive law to obtain the sum terms, and if
any of them are redundant, you eliminate them.

(Refer Slide Time: 06:40)

Let us take an example. Suppose I have a function with 2 sum terms: the
first one contains only a bar, the second one contains b bar and c. For the first one,
b and c are missing, so we add b b bar and also c c bar. In the second one, a is missing, so we add a
a bar. After doing this we apply the distributive law. Here a step has been skipped
and two steps are shown in a combined way: the first term expands to (a bar plus b plus c), (a bar plus b plus c bar), (a bar plus
b bar plus c) and (a bar plus b bar plus c bar), so 4 sum terms are generated; and the second term expands to (a plus b bar
plus c) and (a bar plus b bar plus c).

Here there is a repetition; let us see which term is repeated: (a bar plus b bar plus c)
appears in both expansions. So, you keep only one copy of it,
delete the other, and you get the final form: this is your canonical product of sums
expression. So, even using algebraic techniques you can start with any
given expression and find either the product of maxterms or
the sum of minterms; both of them are canonical expressions of the function.

(Refer Slide Time: 08:44)

Now, let us see how we can transform one form into the other: sum of products to product
of sums, or product of sums to sum of products. I shall show you one direction; the other one you can
try out as an exercise. The idea is to repeatedly apply De Morgan’s
theorem: if you complement a given function twice, the function value
remains the same. Intuitively, if I have A B or C D
and I complement it once, what do I get? By two applications of De Morgan’s law, I get (A bar plus
B bar) and (C bar plus D bar).

Then we do another complementation; let us see what that gives with an
example.

(Refer Slide Time: 10:00)

Let us take an example: suppose I have a given canonical sum of products expression like this,
a 3 variable function. If I consider the truth table, there
will be 8 rows, and out of these 8 rows the output is 1 for 5 of them; these are
the combinations. What I am saying is that the function remains the same if I complement it
twice.

So, this was f; within the inner bracket is f bar, and the whole thing is complemented again. Now,
what is the meaning of the inner complement? The function represents these 5 rows of the truth table
for which the output is 1; NOT of that means the remaining 3 rows, for which the output is 0. So, to take
the NOT, instead of algebraic manipulation I can use a shortcut: I simply look at
which minterms are not there. Here a bar b c was not there, a b c bar was not there, and a b
bar c was also not there. So, to complement this, we simply take the remaining
minterms; it is the same thing. If you want to verify, you can also complement it algebraically with a lot of
multiplications and simplifications, and you will arrive at the same form.

But here we have done a shortcut. Finally, there is the outer NOT, the last complementation. Doing
this, we apply De Morgan’s law straight away: a bar b c becomes (a plus b bar
plus c bar), each plus becomes an AND, a b bar c becomes (a bar plus b plus c bar), and a b c bar
becomes (a bar plus b bar plus c). In this way you can convert a canonical sum of products
expression into a canonical product of sums expression. Notice that for this example the sum of
products had 5 terms, but the product of sums has 3 terms, so it is smaller; but this is not
so for all functions, and for some functions it can be the other way around.
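The shortcut reads nicely in code (my own sketch): complementing a canonical sum of products just selects the remaining rows, and De Morgan then turns each of those minterms into a maxterm with reversed literal polarity.

```python
# Sum-of-products -> product-of-sums by complementing the minterm set.
n = 3
minterms = {0, 1, 2, 4, 7}                     # the five true minterms
maxterm_rows = sorted(set(range(2**n)) - minterms)

def maxterm(i):
    # reversed polarity: a 0 bit gives the plain literal, a 1 bit the complement
    bits = format(i, "03b")
    lits = [v if b == "0" else v + "'" for v, b in zip("abc", bits)]
    return "(" + " + ".join(lits) + ")"

print(maxterm_rows)                                   # [3, 5, 6]
print(" . ".join(maxterm(i) for i in maxterm_rows))
# (a + b' + c') . (a' + b + c') . (a' + b' + c)
```

The three printed sum terms are exactly the three obtained by De Morgan above.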

(Refer Slide Time: 12:51)

Now, this we have already seen earlier; let us look at it once more, there is no harm. What
we are saying is how to obtain the canonical sum of products expression from a truth
table. This we have already shown with the full adder example in the last
lecture. You consider the rows of the truth table for which the output is 1. For each such row you
form a minterm, where the literals are defined like this: if a variable is 0, you complement the
corresponding variable; if it is 1, you leave the variable as it is, do not complement. Then
take the sum; what you get is the canonical sum of products expression. This is what we
discussed in our last lecture; let us work out the example once more.

(Refer Slide Time: 13:55)

Let us take this example again, just to recall what we said. For the sum, we look at
the rows for which the output is 1: for 0 0 1 we write A bar B bar C, for 0 1 0 we
write A bar B C bar, for 1 0 0 we write A B bar C bar, and finally for 1 1 1 we write A B C. So, you can straight away
write it like this; this we already discussed earlier. Now, since our ultimate objective is to
design the circuit, once we have done this, what next?

(Refer Slide Time: 14:54)

So, can we convert this function into a gate level circuit? Let us see how; let us write down
that expression once more.

So, what we saw is that sum is nothing but A bar B bar C plus A bar B C bar plus A B bar C bar plus A B C.
You have a function, and I can straight away convert it into a gate level
circuit. How? I use 4 AND gates, one corresponding to every product term, and I take the outputs of
all the AND gates together and connect them to an OR gate; this will be my final output.

Now let us look at the individual AND gates. The first one computes A bar B bar C: A and B each go
through a NOT gate, and C is connected straight, so this output
gives A bar B bar C. The second one is A bar B C bar: A through a NOT, B straight, C through a NOT. The
third is A B bar C bar: A straight, B and C through NOT gates. And the last one is A B C; no NOT gates are needed here,
just connect A, B and C.

The point to note is that given any sum of products expression, you can directly convert it into a so-called
AND-OR circuit realization. There will be one level containing only AND gates, one
level containing a single OR gate, and at the inputs you may require some NOT
gates, because some of the variables may appear complemented.

So, for this function you need 4 AND gates with 3 inputs each, one OR gate with 4 inputs and
3 NOT gates. This is how we can convert a function into a circuit.
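The two-level AND-OR structure can be sketched directly in code (an editor's sketch, names my own): the inner helper is one AND gate per true minterm, with the NOT gates applied at the inputs, and the outer any() plays the role of the single OR gate.

```python
# Two-level AND-OR realization of the full-adder sum.
from itertools import product

# sum = A'B'C + A'BC' + AB'C' + ABC; one pattern per AND gate
AND_PLANE = [(0, 0, 1), (0, 1, 0), (1, 0, 0), (1, 1, 1)]

def and_gate(inputs, pattern):
    # a 0 in the pattern means that input first passes through a NOT gate
    return all((x if p else 1 - x) for x, p in zip(inputs, pattern))

def sum_and_or(a, b, c):
    return int(any(and_gate((a, b, c), p) for p in AND_PLANE))  # single OR gate

for row in product([0, 1], repeat=3):
    assert sum_and_or(*row) == (row[0] ^ row[1] ^ row[2])
```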

(Refer Slide Time: 17:44)

Similarly, you can do the same thing for a product of sums realization; recall that this also
we discussed earlier. We consider the rows of the truth table for which the output is 0. For each such
row you form a maxterm, where the convention for the literal polarity is just the reverse of
the sum of products case: if a variable is 0, you use it in uncomplemented form; if it is 1, you
complement it. Then take the product of all the maxterms; what you get is the canonical product of
sums expression.

This we already illustrated earlier with the help of an example.

(Refer Slide Time: 18:35)

For this full adder example, let us look at the carry. The convention says you look at the
rows where the output is 0. For 0 0 0, where A, B and C are all 0, you do not complement
them: A plus B plus C. The second row is 0 0 1, so A and B stay and C is complemented: A plus B plus C bar. The
third is 0 1 0, giving A plus B bar plus C, and the last is 1 0 0, giving A bar plus B plus C.

So, this will be the carry. Now, just like for a sum of products expression, if we
have this kind of expression we can directly convert it into a gate level circuit. But
here it will be different: in the first level you will be using OR gates, there will be 4 OR gates
in the first level, and there will be a single AND gate in the second level, which generates the
final output. The outputs of the OR gates are connected to it. Now look at the OR gate
connections: the first one is A plus B plus C, so you connect A, B and C straight away. The second
one is A plus B plus C bar: A and B straight, and C through a NOT gate.

The third one is A plus B bar plus C: A and C straight, B through a NOT gate. The last one is A bar plus B plus C:
A through a NOT gate, B and C straight. So, you have these 4 sum terms appearing here, which you finally AND together using this

AND gate; it is very simple. This is also a 2 level realization, called an OR-AND
realization. A sum of products gets mapped into AND-OR, but here you get a mapping into OR-AND;
this is the basic convention. So, what we have seen is that given a function,
you can either convert it into a sum of products expression and realize the
corresponding circuit from it, or you can obtain a product of sums
expression and convert that into a circuit.

Now, it depends on whether you need an AND-OR or an OR-AND; you will have to start
with either a sum of products or a product of sums. But one thing you should remember:
ultimately our objective is to implement the circuit using gates, and the implementation cost
should be reduced as much as possible. One simple metric or measure of the cost is how
many gates you need. It is not possible to say in general which of the 2 will give you the better
solution, sum of products or product of sums, that is, AND-OR or OR-AND.

This depends on the function and varies from function to function. You will have to
analyze and find out how many true minterms the sum of products has: are they
large in number or small? If there are a large number of true
minterms, then possibly the product of sums will be better; and conversely, if there are
few, the sum of products will be better.
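This rule of thumb is easy to express as a sketch (my own, not part of the lecture): count the 1-rows of the truth table; if most rows are 1, the product-of-sums form has fewer terms, otherwise the sum-of-products does.

```python
# Heuristic: pick the canonical form with fewer terms.
def cheaper_form(f, n):
    ones = sum(f(i) for i in range(2**n))
    zeros = 2**n - ones
    return "product of sums" if ones > zeros else "sum of products"

five_minterm = lambda i: int(i in {0, 1, 2, 4, 7})   # 5 ones, 3 zeros
carry = lambda i: int(bin(i).count("1") >= 2)        # 4 ones, 4 zeros
print(cheaper_form(five_minterm, 3))   # product of sums
print(cheaper_form(carry, 3))          # sum of products (a tie; either works)
```

For the earlier example with 5 true minterms, the product of sums indeed needed only 3 terms against 5.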

So, with this we come to the end of this lecture. In our next lecture, we shall be talking about
another characteristic of these gates and basic functions: the
notion of functional completeness.

Thank you.

Switching Circuits and Logic Design
Prof. Indranil Sengupta
Department of Computer Science and Engineering
Indian Institute of Technology, Kharagpur

Lecture - 15
Functional Completeness

We now talk about something called Functional Completeness. The idea is this:
we have talked about so many different kinds of gates. Now, if I ask you what
kinds of gates are sufficient for designing a circuit, do you need all the different kinds of
gates, AND, OR, NOT, NAND, NOR, exclusive-OR, exclusive-NOR, or is some
small subset of gates sufficient to design every circuit? This
leads to the notion of functional completeness, or a universal set of gates.

So, the title of this talk is functional completeness.

(Refer Slide Time: 01:05)

Let us understand the notion. In switching algebra, from the
basic definition, the operations defined are NOT, AND and OR. What does this mean? That
any switching expression you write is just a combination of AND, OR and NOT, nothing
else. So, if I have a sufficient number of AND, OR and NOT gates available with me, that
should be enough to design any circuit I want.

So, the set {AND, OR, NOT} is by definition functionally complete, because the definition of
switching algebra says that over the variables and constants these three basic
operations are defined. If you have gates to perform these three basic
operations, you can realize any switching expression, because switching expressions are
formed using only the AND, OR and NOT operators. So, the first thing we can say is that
{NOT, AND, OR} is functionally complete.

Let us take a very simple example: a switching expression f equal to A bar B plus B C
bar D. It consists only of NOTs (on A and on C), ANDs and
an OR. If I want to realize it, it is very simple: I need two AND gates and one
OR gate, connected like this. The first term is A bar B: A passes through a NOT to become
A bar, and B connects directly. The second term is B C bar D: B comes directly, C comes through a NOT as C bar, and D connects directly.
This is the realization; I need only AND, OR and NOT. So, any given switching expression
contains nothing but AND, OR and NOT, and the set {AND, OR, NOT} is trivially functionally
complete. A set of gates using which I can realize or implement any function is said to be
functionally complete, right.
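The realization just described maps one-to-one into code. A sketch of my own (the gate helpers NOT, AND, OR are my names, not a library):

```python
# f = A'B + BC'D built only from the three basic operators.
def NOT(x):       return 1 - x
def AND(*xs):     return int(all(xs))
def OR(*xs):      return int(any(xs))

def f(a, b, c, d):
    return OR(AND(NOT(a), b),          # first AND gate:  A' B
              AND(b, NOT(c), d))       # second AND gate: B C' D

print(f(0, 1, 1, 0))   # 1: the first product term fires
print(f(1, 1, 0, 1))   # 1: the second product term fires
```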

With this notion, let us look at what other gates or sets of gates possess this property.
One thing we shall assume: {AND, OR, NOT} is our
basic set. So, if I claim that some other set of gates, say {x, y, z}, is also functionally complete, then I have to
show that using x, y, z I can implement AND, I can implement OR, and I can also implement
NOT.

Only if I can show that can I claim that {x, y, z} is also functionally complete.

(Refer Slide Time: 04:22)

So, we shall show some examples now.

(Refer Slide Time: 04:26)

The first and most important claim: the NAND gate. You know what a NAND gate is; symbolically
it is shown like this. There are two inputs A and B, and the output, let us say, is
f. So, f is nothing but the AND of the two, NOT of that; this is how NAND is defined. Now,
the claim is that NAND is functionally complete. What does this mean? That if I have
only NAND gates, then I can build anything.

So, my circuit can consist of NAND gates only; no other kind of gate is required.
NAND is a universal gate, that is our claim. Let us see. As I said, the NAND
function is given by NAND of A, B equals NOT of (A AND B).

Now, to show that this is functionally complete, as I said, I have to show that I can
realize AND, OR and NOT using NAND; let us see how this is possible. For NOT, just remember
the expression: NAND of two variables A, B is just NOT of (A AND B). I take a
NAND gate, tie the two inputs together, and apply a variable A.

(Refer Slide Time: 05:55)

So, what will be my output? It will be NOT of (A AND A), which is nothing but A bar, which means I
can implement NOT. Next, AND: if I have a NAND gate with inputs A and B,
and at the output I connect a NOT gate (I already know how to build a NOT gate), then NOT of a NAND
is nothing but AND.

So, I get an AND gate. Algebraically you can write it like this: AND is A B, and you
can do a double complementation, NOT of NOT of (A B). Now, NOT of (A B) you can write as
(A B) bar NAND (A B) bar, the same thing twice. The inner (A B) bar is implemented by the first gate,
and the NAND of (A B) bar with itself, which is basically a NOT, is
implemented by the second gate. This implements AND,
so I also have an AND.

OR has a slightly more complex structure. You can implement OR like this: first
you need two NOT gates (NAND gates with tied inputs), to which you apply the two inputs A and B. So, you have A bar and B
bar implemented; then you connect these two to another NAND gate, and
this will implement OR.

The proof is this: by De Morgan’s theorem, A OR B equals NOT of (A bar AND B bar).
Now, A bar is nothing but NOT of (A AND A), and B bar is nothing but NOT of (B AND B). A bar
is implemented by the first gate, B bar is implemented by the second gate, and the final
NAND is implemented by the third gate. So, you can also implement OR.

This shows that the NAND gate is functionally complete, and any
switching expression can be realized using NAND gates alone, because we have
proved that you can build NOT, AND and OR gates using NAND only.
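The three constructions can be checked exhaustively in code. This is my own sketch (the helper names are mine); each gate is built from nand alone, exactly as described above.

```python
# NOT, AND and OR built only from NAND.
def nand(a, b):   return 1 - (a & b)

def not_(a):      return nand(a, a)                    # inputs tied together
def and_(a, b):   return nand(nand(a, b), nand(a, b))  # NAND followed by NOT
def or_(a, b):    return nand(nand(a, a), nand(b, b))  # (A'.B')' = A + B

for a in (0, 1):
    for b in (0, 1):
        assert not_(a) == 1 - a
        assert and_(a, b) == (a & b)
        assert or_(a, b) == (a | b)
```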

(Refer Slide Time: 09:06)

Continuing with the discussion: NOR is also functionally complete. Just like NAND, if
you have only NOR gates, then also you can
design and implement any kind of circuit. In a way similar to NAND, let us try to
prove that the NOR gate is functionally complete.

So, the first thing is that the NOR function is nothing but NOT of (A OR B). Now, for the
realization of the basic gates, in a manner similar to NAND, I am showing the
corresponding circuits one by one. First NOT: you take a NOR gate and tie the two
inputs together, just like for NAND. If we apply A, the output is NOT of (A OR A); A OR A is
just A, so the output will be A bar.

So, NOT can be implemented. Next look at OR, because OR is easier: you take a NOR gate and
apply A and B, so here you get the NOR of that, (A plus B) bar. Now you
connect a NOT (you already know how to build a NOT), and you finally get OR,
A plus B. So, OR can be implemented. Now, the AND structure is similar to that of OR for a
NAND: you start with two NOT gates (NOR gates with the two inputs connected
together) fed with A and B, so here you have A bar and here you have B bar; then you connect them
together using another NOR gate and you get A AND B.

The proof again goes like this: from A B, by De Morgan's law, you can write it as NOT of
(A bar or B bar); and this A bar you can write as NOR of A with A, B bar as NOR of B with B.
This A bar you are implementing by this gate, B bar you are implementing by this gate, and
then finally you are taking the NOR of A bar and B bar with the 3rd gate, right.

So, in a manner similar to NAND, we have shown that NOR is also functionally complete.
If we are given only NOR gates, we can implement any circuit that we want to. Let us
look at some other functionally complete sets also.
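The same kind of exhaustive check works for the NOR constructions just described (again a Python sketch of my own, not from the lecture):

```python
def nor(a, b):
    # 2-input NOR on 0/1 values
    return 1 - (a | b)

def not_(a):
    # tie both NOR inputs together: NOR(A, A) = A'
    return nor(a, a)

def or_(a, b):
    # NOR followed by a NOR-based NOT: (A NOR B)' = A + B
    return nor(nor(a, b), nor(a, b))

def and_(a, b):
    # De Morgan: A.B = (A' + B')' = NOR(A', B')
    return nor(nor(a, a), nor(b, b))

for a in (0, 1):
    assert not_(a) == 1 - a
    for b in (0, 1):
        assert or_(a, b) == (a | b)
        assert and_(a, b) == (a & b)
```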

(Refer Slide Time: 12:13)

Well, this is quite simple to justify. Suppose I do not have OR gates, only AND and NOT.
I am saying this set is also functionally complete. What is my justification? I have an AND
gate and I have a NOT gate. Now, if I connect them together, what gate do I get? This is
equivalent to a NAND gate. Now, we have already proved that the NAND gate is
functionally complete, and I can implement a NAND gate using AND and NOT.

So, this will also be functionally complete; AND together with NOT is a functionally
complete set. This is simple to see, because we have already proved NAND is functionally
complete and using AND and NOT I can build a NAND, right. So, the justification is very simple.

(Refer Slide Time: 13:23)

So, in a similar way you can also say that OR and NOT together are functionally complete,
because if you have an OR gate and a NOT gate and connect them together, you get a NOR gate.

Now, that the NOR gate is functionally complete we have already proved. So, if I have only
OR and NOT, I can build a NOR using them, which is known to be functionally complete; so,
this set will also be functionally complete. So, other than AND, OR and NOT, we have seen
four sets of gates: NAND only, NOR only, AND with NOT, and OR with NOT. Now, these sets
are functionally complete without needing any other kinds of inputs, because you can
realize any function just by using these gates.

(Refer Slide Time: 14:35)

So, what I mean here I will be explaining with the next example. Here we are talking about
another set: AND together with EXOR. Well, you already know what an EXOR gate is; it is
symbolically shown like this, its two inputs are A and B and the output is f. Sometimes
the EXOR is expressed symbolically like this: A EXOR B is written as A bar B or A B bar,
because the definition of EXOR I gave earlier is that EXOR will be 1 if an odd number of
inputs are 1. Odd number means, with respect to A and B, either A is 0 and B is 1, or A is
1 and B is 0; one single 1 is odd, so A bar B plus A B bar, right.

Now, here the point is that only AND and EXOR are not sufficient; we also need the constant
value 1. Why? The reason is very simple. Suppose I have an EXOR gate; I have already shown
what the EXOR function is. Let us say to the first input I apply A, and to the second input
I apply a constant 1. So, what will be my output? Just follow this rule: A bar B, where B
here is 1, OR A B bar, where B bar will be 0, the bar of 1. So, the second term becomes 0
and only the first term remains, which is A bar. This means an exclusive OR gate with one
of its inputs tied to 1 is equivalent to a NOT gate.

So, I can build a NOT gate using an exclusive OR gate and the constant 1, and I already have
the AND gate; we have already shown that AND and NOT are functionally complete. So, I have
AND and I have also seen how to build a NOT; so, this will also be functionally
complete, right. The justification is fairly straightforward.
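This argument is easy to verify directly: EXOR with a constant 1 behaves as NOT, and together with AND that is enough to build a NAND (a Python sketch of my own, not from the lecture):

```python
def xor(a, b):
    return a ^ b

# an EXOR gate with one input tied to the constant 1 acts as NOT
for a in (0, 1):
    assert xor(a, 1) == 1 - a

# so with AND and the constant 1 we can build NAND, a known complete set
def nand(a, b):
    return xor(a & b, 1)

for a in (0, 1):
    for b in (0, 1):
        assert nand(a, b) == 1 - (a & b)
```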

(Refer Slide Time: 17:20)

This is exactly what was said: you can implement a NOT by setting the 2nd input to 1, and we
have already shown earlier that NOT and AND form a functionally complete set.

(Refer Slide Time: 17:35)

Similarly, instead of AND you can go for OR; the logic is similar, so I am not going into
detail. Using exclusive OR with a 1 you can implement a NOT, and the OR gate is already
there; OR and NOT, we have already shown earlier, are functionally complete.

Therefore, OR, EXOR and the constant 1 will also form a functionally complete set. So, these
are the things we have to remember. If you are designing any circuit and someone gives you
some gates to design with, you can check whether those gates are sufficient in terms of
functional completeness. If they are functionally complete, then you know that you can
design any function using those gates only, all right.

(Refer Slide Time: 18:34)

Let us now look at a slightly different kind of functional block, not exactly a gate; well,
we had talked about the multiplexer earlier. So, we consider a 2-line to 1-line multiplexer.
How does a 2-line to 1-line multiplexer look? It is a block with two inputs A and B, a
select input S, and an output, let us say f; this is how a multiplexer looks.

So, how does it work? If the select input is 0 then the output will be equal to A; the first
input will be copied to the output. But if S equals 1 then the second input will be copied.
So, it is like a controlled switch: depending on what I apply to S, either A or B is
switched through. Let us see whether this is functionally complete; note that, as mentioned
here, I need both the constant 0 and the constant 1 to make it functionally complete.

So, talking about the function, you see I can directly write down this expression. The
output function of a multiplexer is S bar A plus S B. What is the meaning? You see, if S is
0 the output is A; S is 0 means S bar, so the term is S bar and A. S is 1 means the term
S and B. You can directly write the expression like this, ok; this is what a multiplexer
is. Now, the realization of the other gates can follow like this. Just look at these
scenarios; first NOT, let us try to build a NOT.

So, what we are saying is that to the first two inputs I apply 1 and 0: in this multiplexer
I apply a 1 here and a 0 here, let us say, and to the select line, the third one, I apply
the variable A. So, if I substitute 1, 0 and A in this expression, it becomes A bar and 1
plus A and 0.

If you simplify, this is only A bar, which means I can implement NOT, but I require a
constant 1 and also a constant 0. I can also implement an AND. How? I apply the input A to
the first input, 0 to the second, and I already know how to implement a NOT.

So, suppose to the select line I apply B bar, obtained through a NOT; the inputs are A, 0
and B bar. Then if I just substitute the values in this expression, the first term becomes
the bar of B bar and A, that is B and A, plus the second term, which is 0. So, this becomes
A B; I have implemented AND. So, I can implement a NOT and an AND, and you have already seen
earlier that NOT and AND are functionally complete. Therefore, this 2-to-1 multiplexer is
also functionally complete, provided we have the constants 0 and 1 available with us, right.
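Both multiplexer constructions can be checked the same way; `mux2` below is my own sketch of the 2-to-1 multiplexer function f = S'A + SB, and the input assignments follow the two cases above:

```python
def mux2(a, b, s):
    # 2-to-1 multiplexer: output is A when S = 0 and B when S = 1
    return a if s == 0 else b

def not_(x):
    # constants 1 and 0 on the data inputs, the variable on the select line
    return mux2(1, 0, x)

def and_(a, b):
    # A on the first input, 0 on the second, B' on the select line
    return mux2(a, 0, not_(b))

for a in (0, 1):
    assert not_(a) == 1 - a
    for b in (0, 1):
        assert and_(a, b) == (a & b)
```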

(Refer Slide Time: 22:30)

Now, let us look at some interesting observations. When you design a circuit, sometimes
we need to transform it from one form to another, changing the kind of gates that we have.
So, let us look at some of the very common transformations that we carry out.

The first observation that we make is very important: it says that a two-level AND-OR
circuit is equivalent to a two-level NAND-NAND circuit. What I mean is that if I have a
two-level AND-OR circuit like this, this observation says it will be equivalent if we keep
the circuit structure the same and just replace all the gates by NAND; the function will
remain the same. You keep the inputs A B C D the same; here also it will be A B C D.
Suppose the function was f here; the same function will remain f here. Now, the proof comes
from De Morgan's law. You see, this circuit realizes f equal to A B or C D. Now, I can do a
double complementation, complement of complement, in two steps let us say.

To the inner one I can apply De Morgan's law: bar of (something or something) means each
something gets a NOT and the plus becomes AND, and then the whole thing keeps an outer bar.
See, (AB) bar is nothing but NAND of A and B, (CD) bar is nothing but NAND of C and D, and
bar of (something and something) is nothing but the NAND of those two. So, straightaway
these two circuits are equivalent.

So, if I have a two-level AND-OR circuit, I can just blindly replace all the gates by NAND
gates; not necessarily two gates, any number of gates can be there in the first level,
because De Morgan's law can be extended to any number of inputs, right.
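This equivalence is easy to confirm exhaustively for the example function f = AB + CD (a Python sketch of my own, not from the lecture):

```python
from itertools import product

def nand(x, y):
    return 1 - (x & y)

for a, b, c, d in product((0, 1), repeat=4):
    and_or = (a & b) | (c & d)                # two-level AND-OR
    nand_nand = nand(nand(a, b), nand(c, d))  # same structure, all gates NAND
    assert and_or == nand_nand
```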

(Refer Slide Time: 25:20)

This is one very important observation you should remember, and a similar observation exists
for the OR-AND circuit: a two-level OR-AND circuit is equivalent to NOR-NOR. Here also the
idea is very similar; I have a two-level OR-AND circuit like this.

So, here also let us say the inputs are A B C D. What I say is this is equivalent to a
similar circuit where all the gates are made NOR. Again this follows from De Morgan's law.
If you say the function output is f, then f will be (A or B) and (C or D). If you do a
double complementation, a complementation and again a complementation, then to the inner
one, bar of (something and something), De Morgan's law gives this something bar or that
something bar, with the whole of it under a bar.

So, (A plus B) bar means this NOR, (C plus D) bar means this NOR, and the bar over the whole
thing means this NOR. So, similarly, a two-level OR-AND circuit you can directly replace by
a two-level circuit with all NOR gates, right. This is very simple.

(Refer Slide Time: 27:08)

Now, in general, you may have any circuit, not necessarily two-level; it can be a
multi-level circuit. If you apply De Morgan's law in a suitable way and also use some basic
transformation rules, then you can convert any multi-level circuit consisting of AND, OR
and NOT gates into one using NAND gates only.

Well, I shall be illustrating this with some examples. Similarly, we can do it for NOR gates
only, but I am not giving those examples; you can try them out yourselves. The examples I
will be giving show that given any arbitrary circuit, we can convert it into NAND gates; it
can be multi-level, not necessarily two-level. But some basic rules are being followed here.

Let us say I have an OR gate with NOT gates at the inputs; sometimes symbolically we write
it like this, showing small bubbles at the inputs, where a bubble means NOT; these are the
same thing. Functionally, with inputs A and B this computes A bar or B bar. What is A bar
or B bar? According to De Morgan's law it is nothing but (A B) bar, which means this is NAND.

So, any OR gate with NOT gates at the inputs is equivalent to a NAND; this you should
remember. In some books you will find NAND gates drawn like this, because they are
equivalent, right.

Let us now take some examples to show how this kind of transformation can be carried out.

(Refer Slide Time: 29:40)

Let us take a simple example; say I have a circuit like this, and I want to convert it into
a pure NAND circuit. It is a multi-level circuit: there is an AND, an OR and also another
AND; let us say the circuit is like this. So, what are the steps? The first step is to look
at this part of the circuit separately; this is like a two-level AND-OR circuit, and a
two-level AND-OR circuit, you already know, can be converted into NAND-NAND blindly,
because we have already shown that these are equivalent.

Then lastly we have an AND gate here. Now, what is an AND gate? We can write it like this:
these gates are already NAND, and this AND gate you can implement using a NAND gate followed
by a NOT gate; you know how to implement a NOT using NAND.

So, you see, you have obtained an all-NAND implementation of the original circuit; simple,
right.

(Refer Slide Time: 31:17)

Let us look at another example which will involve some OR operations also. Suppose I have a
circuit like this, with a NOT gate here whose output goes to an AND; there is an OR gate
here whose output goes here, there is another OR gate and there is a NOT; let us take a
circuit like this.

So, let us go through the transformation step by step. First, if you look at this AND gate
followed by the NOT, this is straightforward: it is a NAND; you replace it by a NAND. Then
you have an OR gate here. You already know how to implement an OR gate using NAND gates, so
follow that rule: you have a NOT, another NOT, and they feed a NAND. Both of these outputs
will be going to the input of this AND gate, and then you have the final OR gate with the
other input coming through a NOT.

So, the first two parts you have taken care of, converting them to all NANDs. Now let us
look at the rest. This is an AND gate. So, what can you do? I am just showing the
equivalent circuit here; this part of the circuit is already there, I am also drawing this
part, and they are going to the input of this AND gate, ok.

Now, what I am saying is, let us make this a NAND gate followed by a NOT gate; a NAND
followed by a NOT is equivalent to AND. On the other input also I have a NOT gate, which
goes to an OR gate. Now, we have already shown that an OR gate with NOT gates at the inputs
is equivalent to a NAND gate.

So, finally, you have a circuit with this NAND gate, these two NOT gates built using NANDs,
and another NAND; they are connected to another NAND, and finally they go to the last NAND,
with the other input coming straightaway here. So, these two circuits are equivalent.

So, you see, just by knowing some of the rules that you already know and this kind of
transformations that I talked about some time back, you can convert any arbitrary circuit,
using systematic methods, into NAND gates only. Similarly, you can do it for NOR gates
also; the rules are pretty similar.

(Refer Slide Time: 34:58)

So, with this we come to the end of this lecture. In our next lectures we shall be looking
at some systematic ways of minimizing functions. So far we talked about algebraic methods,
trying to minimize just by applying rules in an ad-hoc manner; as I said, if you remember
the rules correctly it will be easier for you, but if you do not remember the rules you may
not be able to minimize in an effective way.

So, we shall next look at some of the more systematic ways to minimize and analyze
switching functions.

Thank you.

Switching Circuits and Logic Design
Prof. Indranil Sengupta
Department of Computer Science and Engineering
Indian Institute of Technology, Kharagpur

Lecture - 16
Minimization Using Karnaugh Maps ( Part - I )

So, if you recall, we have been talking about various algebraic methods of manipulating
switching expressions and switching functions, and we also saw that by applying some basic
rules or theorems in a suitable way we can reduce the number of product terms and the
number of literals, which means some sort of minimization.

So, today we start our discussion on some more systematic ways of minimizing switching
functions, for which the algebraic approach is a little complicated. The method that we
shall be starting with uses something called Karnaugh maps; minimization using Karnaugh
maps is what we shall begin with.

(Refer Slide Time: 01:08)

So, first let us try to understand what a Karnaugh map is. The first thing to tell you is
that a Karnaugh map is some kind of visual or graphical representation of a function. Any
switching function that you want to minimize or manipulate, you should be able to express
in some way.

We have already seen some of the methods, like the truth table and the sum of products or
product of sums expression forms. Now, the Karnaugh map is another way of representing a
function which is visual or pictorial, in the sense that all the minterms of the function
can be represented in a matrix form and all the true minterms are marked in that matrix.
Depending on the positions of the true minterms and their relationships, we can apply some
simple rules to create a minimization procedure, so that what we finally get will be a
minimized form of the expression, either sum of products or product of sums.

So, basically speaking, a Karnaugh map is nothing but a graphical method for representing
and minimizing functions. We use the Karnaugh map for simplifying or minimizing switching
functions; we have already seen earlier how we can use algebraic techniques to do the same,
but instead of algebraic methods we will be using this Karnaugh map, which as I said is
some kind of pictorial representation; in short we refer to it as a K-map, ok. So, there
are some characteristics of a Karnaugh map that I would like to talk about.

Let us consider an n-variable function; obviously, there will be 2^n minterms in this
function. In the Karnaugh map there is a concept of cells, and the cells have a one-to-one
correspondence with these minterms; there are 2^n minterms, and the K-map will also have
2^n cells. The minterms are mapped to the cells in a very specific way such that adjacent
cells, let us say a pair of adjacent cells, will differ in only 1 variable. Let me take an
example: suppose in a 3-variable function one cell corresponds to A B Ć and its
neighboring cell corresponds to A B C, where C is the only variable which differs, right.

Similarly, in the Karnaugh map we can also talk about 2^2 or 4 cells; if we have a set of 4
adjacent cells, they will differ in 2 variables. In general, we can have any power of 2,
that is 2^m cells, which will differ in m variables. So, this we shall be seeing through
examples.

(Refer Slide Time: 04:47)

So, what we try to do is group the cells. As I said, we have the Karnaugh map with cells,
where each cell corresponds to a minterm; we try to group the cells together, and each
group will have 2^m adjacent cells, some power of 2: 1, 2, 4, 8, 16 and so on.
like that.

So, we try to group some adjacent cells which are some power of 2 corresponding to the true
minterms and you always try to make the size of these cubes bigger, because bigger the
number of cells in a group more the minimization that we have been able to achieve. So, our
target will be to make this cubes or this groups as large as possible this is our basic objective.

(Refer Slide Time: 05:50)

So, now the question is how do we label the cells in a Karnaugh map such that this property
is ensured. This is something we have already mentioned: we map the minterms to the cells
in such a way that a pair of adjacent cells will differ in only 1 variable, right. For
example, I will show you how a cube looks. Suppose I have a map like this, say 4 by 4; let
us say I have one cell out here and one cell out here; these 2 are adjacent cells. Suppose
one of the cells corresponds to the minterm A B C and the other one corresponds to the
minterm A B́ C, which differs in only 1 variable, B.

So, I am saying that if I have a pair of such neighboring true minterms, I can combine them
into one. The idea of combining is this: if I combine them, then by the rule of
simplification, if you take A C common, B and B́ will cancel out; B + B́ is 1, so this will
be just A C. So, there is a minimization. Similarly, if you have 4 such cells together, let
us say these 2 and in addition this and this.

So, there is a bigger cube I can form, of size 4; these 4 cells correspond to these 4
minterms, and using the rules of switching algebra you can easily see, by combining them in
pairs, that this is ultimately equal to A. So, you see, the bigger the size of the cube,
the more the minimization: if I combine 2 cells I have something like A C; if I combine 4
cells I have only a single literal. So, the number of literals goes on decreasing as we
make the cubes larger and larger.

Suppose originally I have an n-variable function: if I make a 2-cell cube the product term
has n−1 literals, if I make a 4-cell cube it has n−2, if I make an 8-cell cube it has n−3,
and so on. So, the product terms become smaller and smaller, ok.
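The two merging rules just described can be checked over all inputs; the cells chosen below, a pair A B Ć plus A B C and the group of four cells with A = 1, follow the example above (a Python sketch of my own, not from the lecture):

```python
from itertools import product

def pair(a, b, c):
    # two adjacent minterms: A.B.C' + A.B.C
    return (a & b & (1 - c)) | (a & b & c)

def four(a, b, c):
    # group of four: every minterm with A = 1
    nb, nc = 1 - b, 1 - c
    return (a & nb & nc) | (a & nb & c) | (a & b & nc) | (a & b & c)

for a, b, c in product((0, 1), repeat=3):
    assert pair(a, b, c) == (a & b)  # C drops out: the pair reduces to A.B
    assert four(a, b, c) == a        # B and C drop out: four cells reduce to A
```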

(Refer Slide Time: 08:20)

So, this Karnaugh map is fine, but of course there is a drawback also. The main drawback is
that, because it is a pictorial approach, we cannot draw a figure like that with more than
5 or at most 6 variables; even with 5 variables it becomes complicated. So, here all the
examples that we shall be showing will be up to 4 variables only; 5 and 6 variables become
complicated, and beyond 6 it becomes impossible. This is the main drawback of the Karnaugh
map.

(Refer Slide Time: 09:00)

Now let us see what the basic approach is in this method, the Karnaugh map based
minimization method.

So, what we do, as I said: the Karnaugh map is a 2-dimensional matrix of cells, and we try
to fill up the cells with the true minterms of the function; I shall be illustrating this
with an example. Each cell corresponds to a minterm, and we consider only the true
minterms, which are the minterms for which the function is 1. We mark those true minterms
and then try to group them based on their neighborhood.

So, as I said, I can group 2 of them together, 4 of them together, 8 of them together and
so on. This grouping will be carried out in such a way that the cubes are as large as
possible; we shall be trying to maximize the size of each cube, and we should also ensure
that every true minterm of the function is covered by at least one cube, ok.

Once we have done that, every cube will correspond to a product term; we create one product
term out of every cube. So, once you have grouped the true minterms into cubes, every cube
will correspond to one product term, and what we get will be the minimized sum of products
expression for the given function; this is the basic approach behind the Karnaugh map
method.

(Refer Slide Time: 10:52)

Let us take a simple example. We consider 3-variable functions to start with; later on we
shall be extending this to 4 variables and see how it looks. So, in this example I am
considering a function of 3 variables A, B and C.

So, the Karnaugh map of a 3-variable function will obviously contain 2^3 minterms, which is
equal to 8. This is organized as 2 rows and 4 columns, and I am labeling the rows and
columns by the variables. Since there are 3 variables A, B, C, along the rows I am using A
and along the columns I am using B and C. I could do it the other way also, but in this
example I am assuming that A is marked on the row and B and C are marked on the columns.
Now, one thing you see: the rows indicate the value of A; if the value of A is 0 it is the
first row, if the value of A is 1 it is the second row. Similarly, with respect to B C, you
see the 4 combinations of values 0 0, 0 1, 1 0 and 1 1 are given.

But the point to note is that we have not written these values in the binary counting order
0 0, 0 1, 1 0, 1 1; rather we have written 1 1 first and then 1 0. Why have we done this?
Because, as I said, we have to ensure that 2 adjacent minterms differ in only 1 variable.
Suppose I write 0 1 and just to the right of it I write 1 0; then if I consider 2 cells, one
here and one here, this cell corresponds to A equal to 0, B equal to 0 and C equal to 1,
and the cell with 1 0 corresponds to A equal to 0, B equal to 1 and C equal to 0. That
means in one case it is 0 0 1 and in the other case it is 0 1 0; so, you see, they differ
in 2 variables.

So, we cannot do this; instead we label the columns in Gray code order 0 0, 0 1, 1 1, 1 0,
so that they vary in only one bit position across adjacent columns. Not only that, another
interesting point to note is that the rightmost and the leftmost columns are also
neighbors; how?

(Refer Slide Time: 13:42)

You see, this is 1 0 and the leftmost column is 0 0; they also differ in one position, so
right and left will also be considered neighbors. Similarly, top and bottom will be
considered neighbors; here on the rows we have A only, so they will always differ in one
position. Suppose we take this and this: the first one corresponds to 0 1 0 and the second
one to 1 1 0; they differ in one position, right. So, this is how we make the labels, along
with the decimal numbers corresponding to the values of A, B and C.

(Refer Slide Time: 14:25)

So, if I write down the decimal values of A B C in this order, they will be written like
this: 0 0 0 is 0, 0 0 1 is 1, 0 1 0 is 2, 0 1 1 is 3, and similarly 4, 5, 6, 7. So, when
you use a Karnaugh map, you should remember these assignments of the decimal equivalents of
the minterms: 0, 1, 2, 3, 4, 5, 6, 7, ok.
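The labelling just described can be generated programmatically; the sketch below (my own, not from the lecture) builds the 2-by-4 map of decimal minterm numbers and checks the one-bit-change property, including the wrap-around between the end columns:

```python
gray = [(0, 0), (0, 1), (1, 1), (1, 0)]  # Gray-code column order for B, C

# decimal minterm number of each cell is 4A + 2B + C
kmap = [[(a << 2) | (b << 1) | c for (b, c) in gray] for a in (0, 1)]
assert kmap == [[0, 1, 3, 2], [4, 5, 7, 6]]

# adjacent columns, including the wrap-around ends, differ in exactly one bit
for i in range(4):
    b1, c1 = gray[i]
    b2, c2 = gray[(i + 1) % 4]
    assert (b1 != b2) + (c1 != c2) == 1
```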

Now, let us take some examples of actual minimization and see how we can use this to
minimize functions.

(Refer Slide Time: 15:09)

Let us start with a very simple example. You see, here I have shown a function of 3
variables A, B and C. There are 4 ones, meaning there are 4 true minterms.

So, we write it in sigma notation; this means these are the true minterms. What are the
values? 0 0 0 in decimal means 0, 0 1 1 means 3, 1 0 0 means 4 and 1 1 1 means 7. So, this
function I can also express like this: it is a sum of minterms, with 4 true minterms
corresponding to the decimal values 0, 3, 4 and 7. Now, one thing to see before you go to
the Karnaugh map: 0 means what? 0 0 0, which means Á B́ Ć; 3 means 0 1 1, which means
Á B C; 4 means 1 0 0, A B́ Ć; and 7 means 1 1 1, A B C.

Now, if we use normal algebraic methods, you can see that in the first and the third terms
B́ Ć is common; if you take it common you get Á + A, which is equal to 1, so you get only
B́ Ć. Similarly, if you combine the second and the fourth, B C is common and A and Á
cancel; you get B C. So, according to the algebraic technique the minimized form will be
B́ Ć + B C. Let us verify how we can get this same thing from the Karnaugh map.

Now, see, I have shown the true minterms here; let us try to form the cubes. The rule of
cube formation, as I said, is that the number of cells should be some power of 2, they
should be adjacent to each other, and you make the cube as large as possible. So, here you
can see one cube can be like this and one cube can be like this; you cannot make them any
bigger, right.

Because the adjacent cells are empty. Now, when you make these cubes, say for this cube,
you see it is spanning both the rows; it is covering both A equal to 0 and A equal to 1,
so A gets cancelled out, and you have B C equal to 0 0. So, this corresponds to B́ Ć. For
this one, again A cancels out, and it corresponds to B C equal to 1 1, which is B C. So,
these 2 cubes correspond to these 2 product terms.

So, the sum of products expression in minimized form will be B́ Ć + B C, ok. So, from the
Karnaugh map you get the same result, but you do not have to do any algebraic manipulation;
you can get it directly, right. Let us take some more examples, say an example like this;
see, here there are 5 true minterms.
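Before working through this next map, the previous result, f = Σm(0, 3, 4, 7) minimizing to B́ Ć + B C, can be verified against its truth table (a Python sketch of my own, not from the lecture):

```python
from itertools import product

MINTERMS = {0, 3, 4, 7}  # the true minterms of f(A, B, C)

def f(a, b, c):
    # truth-table definition: 1 exactly on the listed minterms
    return 1 if ((a << 2) | (b << 1) | c) in MINTERMS else 0

def f_min(a, b, c):
    # B'C' + BC, the expression read off the Karnaugh map
    return ((1 - b) & (1 - c)) | (b & c)

for a, b, c in product((0, 1), repeat=3):
    assert f(a, b, c) == f_min(a, b, c)
```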

So, instead of going into the algebraic method, let me straightaway come to the map method.
(Refer Slide Time: 18:52)

So, you see, if you want to make the cubes, I can make a cube like this and a cube like
this; every minterm must be covered by at least one cube, so this one is uncovered. So,
maybe I make another cube like this; suppose I do it like that. Now all the minterms are
covered. What will be the expression for the first cube? It will correspond to B́ Ć. For
the second cube, you see it is in the first row; first row means Á, A is 0, and it is
covering 0 1 and 1 1, so B gets cancelled out and C remains: this is Á C. And this one is
Á with C cancelled out and B remaining: Á B.

So, there are 3 terms, ok, but the point to note is that this is not minimal, because I
could have made the cube even larger; this is just an example I have shown, not the right
process. If you want to follow the right process, the cubes will be: this is one cube.

(Refer Slide Time: 20:08)

And this should be one cube, all 4 cells together, because as I told you, you can combine
any power-of-2 number of cells as long as they are adjacent. If you do this, the first one
will again correspond to B́ Ć; the second one is the first row, so there will be Á, and
since it covers all four columns, B and C both cancel out and only Á remains. So, the
expression is B́ Ć + Á; this is the minimized expression.

So, you can get this directly from the Karnaugh map, right. Let us take another example;
here again you can notice one cube like this, these 4.

(Refer Slide Time: 21:00)

Now, you may say that you can see one cube here and one cube here; suppose I do this and
this. Then the first cube will correspond to A getting cancelled out with B C equal to 0 0,
which is again B́ Ć; in the last cube again A cancels out with B C equal to 1 0, so it will
be B Ć; and this big one will be only Á, because B and C both cancel out. It becomes like
this, 3 terms. But what I am saying is that this is not minimized, as you can see: B́ Ć and
B Ć can be further minimized; algebraically you take Ć common and then B vanishes, right.

If you take Ć common, what remains is B́ + B, which becomes 1; so, only Ć will remain. So,
this is again not the correct way of cube formation.

(Refer Slide Time: 22:06)

The correct cube will be this; of course, you make it as large as possible, and remember
that I said the rightmost and the leftmost columns are adjacent. So, you make a cube like
this: these 2 cells here and these 2 cells here; these 4 together are adjacent, right.

If you do this, then in this cube A cancels out, and across 0 0 and 1 0, B cancels out
while C is 0: only Ć remains. And this one is only Á. So, it is Ć + Á; this is the
minimized form, right. So, you should note that when you create these cubes, each cube
should be as large as possible; only if you try to make them as big as possible will you
get the minimized form.

But you see the advantage: I do not have to use any algebraic technique; you just form the
cubes pictorially and directly write down the product terms from there, ok.

(Refer Slide Time: 23:19)

Let us take some more examples; see, here I have an example. Here again I can form a cube
of 4 with these 2 and these 2; two 1s are still remaining. This one is a neighbor only of
this, and this one is a neighbor only of this; I cannot do anything further, ok. So, this
one will be Ć as before; this smaller cube will be Á with B C covering 0 0 and 0 1, so C
cancels out, giving Á B́; and these 2 have A equal to 1, which means A, with B C covering
1 1 and 1 0, so C cancels out and B is 1, giving A B.

So, this is the minimized expression; it cannot be minimized any further, right, ok.

(Refer Slide Time: 24:21)

Let us continue with our examples; let us now look at a circuit which you have already seen
earlier, the full adder. Do you recall what a full adder is? A full adder is a circuit
which takes 3 inputs, two binary digits A and B and one carry input, let us call it C, and
at the output we get a sum and another carry, right.

Earlier we had shown the truth table of a full adder. If you check the truth table and look
at the sum, considering the values of A, B and C: when will the sum be 1? The sum will be
equal to 1 only for the minterms 1, 2, 4 and 7; 1 means 0 0 1, 2 means 0 1 0, 4 means
1 0 0 and 7 means 1 1 1.

So, only under these 4 conditions will the sum be 1; this we have depicted in the Karnaugh
map by marking the cells 0 0 1, 0 1 0, 1 0 0 and 1 1 1, right. Similarly for the carry, if
you look at when a carry will be generated: the carry will be generated for the minterms 3,
5, 6 and 7; 3 is 0 1 1, 5 is 1 0 1, 6 is 1 1 0 and 7 is 1 1 1. This we have depicted in
its Karnaugh map by marking 0 1 1, 1 0 1, 1 1 0 and 1 1 1, right.

Now, if we try to minimize, you see that for the sum there are no 2 adjacent 1s which can be
combined. So, this is a peculiar scenario where no minimization is possible; each true
minterm will be a separate product term.

(Refer Slide Time: 26:43)

So, if we straightaway write it down: for 0 0 1 it will be A'B'C; the second one, 0 1 0, will be
A'BC'; then plus 1 0 0, which is AB'C'; and the last one, 1 1 1, is ABC. This will
be the form, which cannot be minimized. Now, please note that this is a very peculiar case
of a function, which is nothing but the exclusive OR of these 3 variables.

Now, exclusive OR is a kind of function which cannot be minimized any further in the sum of
products form; this example shows that, ok. But the carry you can minimize. You see, the
1s are distributed such that you can make one cube like this, one cube
like this, and one cube like this. You see, some of the true minterms may get
covered by more than one cube; no problem, but you should ensure that all the true minterms
are covered.

So, here what will be the function? For this one, A is cancelled, only BC remains — this is BC;
plus this one, A is 1, and 0 1, 1 1 — B cancels, so it is AC; and for the last 2, again A is 1,
and 1 1, 1 0 — C cancels, it is AB. So the carry is BC + AC + AB; this is the minimized
expression for the carry, ok. So, for a full adder you can try this minimization procedure and
get the minimum possible realization. So, with this we come to the end of the lecture, where we
have basically introduced the Karnaugh map approach and, using examples, illustrated how we can
minimize a 3 variable function.
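The two results derived above are easy to confirm by brute force. The sketch below (plain Python, my own illustration, not part of the lecture) checks that the sum output equals the 3-input exclusive OR, and that the carry matches the minimized form BC + AC + AB, for all 8 input combinations.

```python
# Verify the full-adder expressions derived above:
#   sum   = A xor B xor C    (cannot be reduced further in SOP form)
#   carry = BC + AC + AB     (the three cubes found on the map)
for a in (0, 1):
    for b in (0, 1):
        for c in (0, 1):
            total = a + b + c
            s, carry = total & 1, total >> 1
            assert s == a ^ b ^ c                        # sum is the XOR function
            assert carry == (b & c) | (a & c) | (a & b)  # minimized carry
print("full-adder expressions verified")
```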

Now in the next lecture, we shall be extending this discussion to cover 4 variable functions.
So, we shall be talking about how a 4 variable Karnaugh map looks like and how we can
carry out minimization using that.

Thank you.

Switching Circuits and Logic Design
Prof. Indranil Sengupta
Department of Computer Science and Engineering
Indian Institute of Technology, Kharagpur

Lecture – 17
Minimization Using Karnaugh Maps (Part- II)

So, in the last lecture, if you recall, we talked about the concept of the Karnaugh map method
of minimizing a function, and through examples we showed how we can handle, or minimize, 3
variable functions.

(Refer Slide Time: 00:35)

(Refer Slide Time: 00:36)

So, in this lecture we shall be extending our discussion to handle 4 variable functions. Now,
recall, we mentioned that it is not so easy to extend the Karnaugh map concept to a larger
number of variables. Of course, in books you will see that the authors say you can do it up
to 6; but up to 4 is easy, 5 becomes difficult, 6 becomes quite difficult, and beyond 6 it is
impossible. So, whatever examples I show here will be only up to 4 variables.

So, if you have 4 variables, then how many minterms are possible? 2^4 = 16. So the first thing is
that you need to have 16 cells, and they will be organized as a 4 by 4 array: 4 rows and
4 columns. The basic concept of labeling remains the same; you will see that the way you label
the rows and columns ensures that neighboring cells always differ in a single variable.

There will also be a neighborhood relationship between the leftmost and rightmost columns, and
between the top and bottom rows; this top-bottom and left-right wrap-around adjacency is also
ensured. Not only that, you will see that the four corner cells are considered adjacent to each
other. Why? We shall see a little later.

Let us consider a 4 variable Karnaugh map like this; this is just an extension of a 3 variable
map where we consider a function with 4 variables. Let us say A, B, C, D where in this
diagram I am showing AB along the rows and CD along the columns. And just like a 3 variable
map, where we label the columns in the Gray code order 0 0, 0 1, 1 1, 1 0, here
we do the same thing for both rows and columns: 0 0, 0 1, 1 1, 1 0. Why?

Consider any two cells, let us say these two: the first cell corresponds to 0 1 0 1,
and the second cell corresponds to 0 1 1 1. We check: they differ in a single variable,
in the second last position, right.

Now, take any pair of cells vertically, say this and this: the upper cell corresponds to 0 1
1 0, the lower cell corresponds to 1 1 1 0. You see, these are adjacent — only the first variable
is different, the other 3 are the same, right.

(Refer Slide Time: 03:54)

The same thing you can extend to a cube of size 4. Let us consider these 4; they correspond to —
I am just writing the binary — 0 1 0 1, then 0 1 1 1, then 1 1 0 1, and 1 1 1 1.

So, you see, here the second and fourth variables are the same, 1 in each case. But the first
and the third variables, which are A and C, are changing. So, if you form a cube out of these
4, it will correspond to B and D; the minimized form will be only BD. A and C will be
cancelled out.

(Refer Slide Time: 04:47)

Similar is the case if you take a cube vertically like this. The cells correspond to 0 0 1 0, 0 1
1 0, 1 1 1 0, 1 0 1 0. So, again the 1 0 part is common, while the first two variables are
changing and will be cancelled out. So, C and D will remain: it will be CD', for 1 0, right.

(Refer Slide Time: 05:16)

Now, I talked about the 4 corner cells, right. Suppose there are true minterms in the 4 corner
cells. They are also considered neighbors. So, I can make a big cube of size 4 using the 4
corner cells, which is shown like this. How? Let us write them down and see.

This cell corresponds to 0 0 0 0, this cell corresponds to 0 0 1 0, the left one corresponds to
1 0 0 0, and this one is 1 0 1 0. You see, the last variable is always 0, and the second variable
is also 0; the first and third are changing.

So, in the same way, the first and third will cancel out and this will be equivalent to B'D',
right. So, by the same concept, you can make cubes as you wish.

(Refer Slide Time: 06:08)

If you have this and this, and this and this, you can make a cube like this, right.

(Refer Slide Time: 06:15)

If you have these 4 and also these 4, then you can make a cube of size 8 like this; the same
concept applies, right.

(Refer Slide Time: 06:28)

So, this is the same Karnaugh map where I am showing the cells labeled by the decimal
equivalent of the ABCD values. The numbers run like this: 0, 1, 3, 2 in the first row, then 4,
5, 7, 6; then you skip to the last row for 8, 9, 11, 10, and the third row is 12, 13, 15, 14.
This is because in the row labels binary 1 1 comes before 1 0; similarly, in the columns the
order is reversed at the end, because 1 1 comes before 1 0 — so 3 comes before 2, 7 before 6,
and so on.
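As an aside, this numbering is easy to reproduce programmatically. The helper below (my own sketch, not from the lecture) computes where a given minterm number 0–15 lands in the map, and rebuilds the whole layout row by row.

```python
# Both axes use the Gray order 00, 01, 11, 10, i.e. decimal 0, 1, 3, 2.
gray = [0, 1, 3, 2]

def position(minterm):
    """(row, column) of a minterm in the 4-variable map (bits A B C D)."""
    ab, cd = minterm >> 2, minterm & 0b11
    return gray.index(ab), gray.index(cd)

kmap = [[None] * 4 for _ in range(4)]
for m in range(16):
    r, c = position(m)
    kmap[r][c] = m
for row in kmap:
    print(row)
# [0, 1, 3, 2]
# [4, 5, 7, 6]
# [12, 13, 15, 14]
# [8, 9, 11, 10]
```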

Now, let us take some examples.

(Refer Slide Time: 07:01)

Let us take a function like this, where you can see that there are 6 true minterms. Now,
following the rules, let us try to construct the cubes. This can be 1 cube — I cannot make it
any larger. This is 1 cube; these two 1's are adjacent, so this is 1 cube. Now, one thing you
see: this is also a cube, which I am showing dotted — this and this, you can also make a cube
out of these.

But this one is not required, because I have already covered all the true minterms with these 3
cubes; so, I do not require this cube any more, ok. This will be my best solution — the minimum
number of cubes which covers all of them. And if I include this dotted cube also, it will not
make the solution any smaller; it will be one cube more. But here, what is the solution? These 2
cells: 0 1, which is A'B, and 00, 01 — D cancels out, C is 0 — C', giving A'BC'. Then let us
consider this one vertically.

So, 01 and 11: A cancels out, B is 1, so it is B; and it is 1 0, so CD' — giving BCD'. And the
last one, top and bottom: 00 and 10 — A cancels out, B is 0 — B', and 1 1 is CD, giving B'CD.
This is the minimized form of this function, right. Let us take another example.

(Refer Slide Time: 09:05)

Let us take an example like this. Well, some of the largest cubes I can immediately see: one is
this cube of size 4, and there is another of size 4, which will be these two plus these two.

You see, you can take these two separately, but that will not be the biggest one, because you can
also take these two; this will be the bigger one. And these two 1's are still remaining. You can,
of course, take a cube like this, but again this one will be left out. Instead, take a cube like
this, and this will cover everything; I do not need any further cubes.

So, what will be the expression? For this column, A and B are cancelling out, so AB will not be
there; 1 0, so just CD'. Plus these two and these two, a cube of 4: 00, 01, which is A', and 00
and 10 — C cancels out — D', giving A'D'. And lastly you have these two: 01, 11 means B, and 01
is C'D, giving BC'D. This is your minimized form.

So, once you do it like this, you cannot minimize it any further. About the point I have just
mentioned, let me state it once more.

(Refer Slide Time: 10:41)

Suppose we had constructed a cube like this, and we had also constructed this cube, the
biggest possible. But suppose I had additionally constructed a cube like this; then this 1 would
still remain left out.

So, I would also have to include it. So you see, I need one extra cube here, and this cube —
covering these two cells — is redundant, because the cells that this cube is covering are
already covered by some other cubes. A particular cell, a true minterm, needs to be covered
by at least one of the cubes.

So, if I have 1 cube whose cells are already covered by some other cubes, I do not need to
include that cube at all; you can leave it out. That is why I can remove this cube from my map,
and only cubes 1, 2 and 3 will do, ok. This is what we have done.

Take another example.

(Refer Slide Time: 11:58)

This is a very nice, regular example, which shows four 1's in the corners. This is very
easy: the middle four will be 1 cube, and the four corners will be another cube.

So, what will be the expression? For the four middle ones: 0 1, 1 1 gives B; 0 1, 1 1 again
gives D; so BD. For the four corner 1's: 0 0, 1 0 gives B'; 0 0, 1 0 gives D'; so B'D'. The
function will be BD + B'D', right.
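Reading the figure, the true minterms here are the four middle cells (5, 7, 13, 15) and the four corner cells (0, 2, 8, 10) — that set is my inference from the description. A quick sketch confirms that BD + B'D' matches this map exactly:

```python
# Enumerate all 16 input combinations of A B C D and compare BD + B'D'
# against the set of true minterms read off the map.
true_minterms = {0, 2, 8, 10, 5, 7, 13, 15}   # four corners, four middle cells
for m in range(16):
    b, d = (m >> 2) & 1, m & 1                # only B and D matter here
    f = (b & d) | ((1 - b) & (1 - d))         # BD + B'D'
    assert f == (1 if m in true_minterms else 0)
print("BD + B'D' matches the map exactly")
```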

(Refer Slide Time: 12:48)

Let us look at some more examples. Take an example like this.

So, here again let us try to make the cubes as large as possible. I can see these four I can
make, these four I can make, and these four I can make. Now, you see, these two 1's have
been left out. So, I can group these two together, or I can make the cube of four — these two
plus these two, right; this is the biggest. So, this will be my best cover. If we do it this way:
for this long one, A and B are cancelled, so it will be C'D. Plus this cube: it will be B and
C', giving BC'. And these two: 00 and 10 gives B', and 0 1, 1 1 — C cancels out — D, giving B'D.
This will be the minimized form, right.

So, when we choose these cubes we should be judicious; you should not select something like
this — I select this, I select this, then I select this. This is not minimum, because we can
then get something bigger: these two and these two make a cube of size 4, right.

(Refer Slide Time: 13:02)

So, I shall try here to make the cubes as large as possible, ok. This is another
cyclic kind of structure. You see, I can find this cube of 4, but the other four 1's are
isolated. So, I can have 1 cube with these two, 1 cube with these two, 1 cube with these two,
and 1 cube with these two. So, what does this mean? See, let me again go back.

(Refer Slide Time: 14:05)

So, our first premise or assumption was that we try to make the cubes as large as possible, ok.
The second thing I mentioned is that if I find that a cube covers only cells which are already
covered by some other cubes, then I drop or delete this cube; I do not need it, ok. You see, in
this example, the first thing I said is that this is the largest cube. But the other four — well,
I cannot leave them out, because each of them covers at least one 1 which is not covered by any
other cube.

So, when we include these four, you see that the bigger cube is no longer required, because
the four 1's in the bigger cube are already covered by these 4 smaller cubes. So, even
though this is the bigger one, I do not need it. So, my cover here looks like this, because
the middle one would be redundant; it is not required.

So, what will be the expression? The left one will be A'B, and 00, 01 gives C' — A'BC'. This top
one: 00, 01 will be A', with CD — A'CD. Plus these two: 11, 10 will be A, and 01 is C'D — AC'D.
Plus these two: 11 is AB, and 11, 10 is C — ABC, ok.

This will be my minimized form. But if you also include this middle one, then you also
include another term which is redundant, not required; it will actually be B and D —
BD. BD is not required at all, so you would unnecessarily be using five terms.

These four are sufficient; they will be covering BD automatically, right. This is something
you should remember.
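The redundancy of the middle cube can be checked directly. Below is a sketch using the four cubes just derived (A'BC', A'CD, AC'D, ABC — my transcription of the expression above): OR-ing the extra term BD into the cover never turns any output from 0 to 1, so BD adds nothing.

```python
def bits(m):
    """Unpack minterm number m into bits A, B, C, D."""
    return (m >> 3) & 1, (m >> 2) & 1, (m >> 1) & 1, m & 1

def cover(m):
    """The four-cube cover A'BC' + A'CD + AC'D + ABC."""
    a, b, c, d = bits(m)
    return ((1 - a) & b & (1 - c)) | ((1 - a) & c & d) \
         | (a & (1 - c) & d) | (a & b & c)

for m in range(16):
    a, b, c, d = bits(m)
    # adding the middle cube BD changes nothing: its cells are already covered
    assert (cover(m) | (b & d)) == cover(m)
print("the middle cube BD is redundant")
```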

(Refer Slide Time: 17:19)

Ok, here let us take a slightly bigger and more realistic example. Suppose we are trying to
design a 2-bit adder. So, what does this 2-bit adder do? It takes 2 numbers, 2 bits
each, and adds them up. Let us say one of the numbers is 01 and the other number is 11.

So, if I add them up: 1 plus 1 is 0 with a carry of 1; then 0 plus 1 plus the carry 1 is 0 with a
carry of 1. So, the final carry out will be 1. So, my sum will actually be 3 bits: the 2 sum bits
and the possible carry out.

So, whenever we add two 2-bit numbers, the sum will become 3 bits — one more — right, because
there is a chance of 1 carry bit coming out. Let us look into this example and see how we can
use the Karnaugh map for minimization.

(Refer Slide Time: 18:39)

Well, here first let us look at the truth table; this is the truth table of the adder.

So, just recall, for the adder I said that there will be 4 inputs — the first number A1, A0,
then B1 and B0 — and there will be 3 outputs, S2, S1 and S0. In the truth table I am showing
these 4 inputs A1, A0, B1, B0, and the 3 outputs S2, S1, S0.

So, for 4 inputs there will be 2^4 or 16 rows, from 0 0 0 0, 0 0 0 1, up to 1 1 1 1 — there are
16. So, if you just think in terms of the equivalent decimal values it will be easier: 0 plus 0
is 0, so the sum is 0 0 0; 0 plus 1 is 1, which is 0 0 1; 0 plus 2 is 2, which is 0 1 0; 0 plus 3
is 3, which is 0 1 1.

Then 1 plus 0 is 1, 0 0 1; 1 plus 1 is 2, 0 1 0; 1 plus 2 is 3, 0 1 1; 1 plus 3 is 4, 1 0 0.
Next, 2 plus 0 is 2, 0 1 0; 2 plus 1 is 3, 0 1 1; 2 plus 2 is 4, 1 0 0; 2 plus 3 is 5, 1 0 1.
And finally, 3 plus 0 is 3, 0 1 1; 3 plus 1 is 4, 1 0 0; 3 plus 2 is 5, 1 0 1; and 3 plus 3 is 6,
1 1 0.
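The whole table can be generated mechanically, which is a handy way to double-check the walkthrough above (this snippet is my own illustration):

```python
# Build the 2-bit adder truth table: inputs A1 A0 B1 B0, outputs S2 S1 S0.
rows = []
for a in range(4):            # a is the decimal value of A1 A0
    for b in range(4):        # b is the decimal value of B1 B0
        s = a + b             # 0..6, fits in the 3 output bits
        rows.append((a >> 1, a & 1, b >> 1, b & 1,
                     (s >> 2) & 1, (s >> 1) & 1, s & 1))
print(rows[0])    # (0, 0, 0, 0, 0, 0, 0)   0 + 0 = 0
print(rows[-1])   # (1, 1, 1, 1, 1, 1, 0)   3 + 3 = 6
```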

(Refer Slide Time: 21:01)

Now, in this Karnaugh map we are showing the map corresponding to the output S2, that is, this
column of the truth table. So, how many 1's are there? 1, 2, 3, 4, 5 and 6.

So, you see, there are 6 1's. The first one corresponds to 0 1 1 1, this one; the second one is
1 0 1 0, this one; the third one is 1 0 1 1, this one; the fourth one is 1 1 0 1; then 1 1 1 0;
and finally 1 1 1 1, right. So, these are there.

(Refer Slide Time: 21:59)

Now, if you try to form the cubes in this case, you see that you will get a large cube here,
a smaller cube here, and another smaller cube here; that is all, you cannot minimize any
further.

So, what will be the expression for S2? For this bigger one, A0 cancels out, leaving only A1,
and 11, 10 gives B1 — so A1B1. Let us take this one next: 11, which is A1A0, and 0 1, 1 1 means
B0 — so A1A0B0. And this one: 0 1, 1 1 is A0, and 11 is B1B0 — so A0B1B0. This is the minimized
form, the expression for S2, right, ok.

(Refer Slide Time: 23:10)

Let us now see what will be S1. Similar is the case for S1: if we look at S1, there are 1, 2, 3,
4, 5, 6, 7, 8 — eight 1's, and if you check, the 8 1's are distributed like this.

So, you see, here the cubes will be like this; you cannot minimize it too much. These two, then
these two, then these two and these two; and these two cells are isolated 1's which you cannot
group with anything. So, there will be 1, 2, 3, 4, 5 and 6 — 6 cubes.

So, the expression for S1 will be slightly bigger. Look at this one: it will be — sorry, not
A1A0 but A1'A0' — and 11, 10 gives B1, so A1'A0'B1. Plus this one: 00 and 0 1 is A1', and 10 is
B1B0', giving A1'B1B0'. Plus this one: 11, 10 is A1, and 00 is B1'B0', giving A1B1'B0'.

And this one: 10 is A1A0', and 00, 01 is B1', giving A1A0'B1'. And the two isolated 1's: this
one is 01, 01, which means A1'A0B1'B0; and the last one is 1 1 1 1, which is A1A0B1B0.

So, you see, this expression is a little more complex, but it is the minimum form; you cannot
minimize it any further, all right.

(Refer Slide Time: 25:42)

Now, finally, the expression for S0, the last one. Here you see, for S0 there are 1, 2, 3, 4, 5,
6, 7, 8 — eight 1's, which are distributed like this. Now, these are pretty nicely distributed,
because you can very nicely form the cubes: one cube will be these four 1's, and another cube
will be these four 1's.

So, for S0 you can write: this one will be A0, and 00 and 10 gives B0', so A0B0'; plus this one,
00 and 10 gives A0', and 01, 11 gives B0, so A0'B0. This is the minimum form, right. So, in this
way you can minimize the expressions.
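All three minimized expressions can be cross-checked against plain addition. The sketch below (my transcription of the expressions derived above) confirms they reproduce the 2-bit adder for every input:

```python
def n(x):
    """Complement of a single bit."""
    return 1 - x

for m in range(16):
    a1, a0, b1, b0 = (m >> 3) & 1, (m >> 2) & 1, (m >> 1) & 1, m & 1
    s = (2 * a1 + a0) + (2 * b1 + b0)          # the true sum, 0..6
    s2 = (a1 & b1) | (a1 & a0 & b0) | (a0 & b1 & b0)
    s1 = (n(a1) & n(a0) & b1) | (n(a1) & b1 & n(b0)) | (a1 & n(b1) & n(b0)) \
       | (a1 & n(a0) & n(b1)) | (n(a1) & a0 & n(b1) & b0) | (a1 & a0 & b1 & b0)
    s0 = (a0 & n(b0)) | (n(a0) & b0)           # A0 xor B0
    assert (s2, s1, s0) == ((s >> 2) & 1, (s >> 1) & 1, s & 1)
print("S2, S1 and S0 expressions verified for all 16 inputs")
```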

So, we have seen, in the case of this 2-bit adder, how we can represent the 3 output functions
S2, S1, S0 in the Karnaugh map and then use the cubes to minimize the sum of products
expressions. So, we have seen how to minimize these functions, earlier for 3 variables and now
for 4 variables.

So, the Karnaugh map is very easy to use; working graphically, you can find the cubes and
directly write down the minimized form from there — that is the big advantage.

So, with this we come to the end of this lecture. In the next lecture we shall be looking at a
few other issues and concepts regarding the Karnaugh map method of minimization, before we move
on to another technique, which is more systematic and can be used for minimizing larger
switching expressions.

Thank you.

Switching Circuits and Logic Design
Prof. Indranil Sengupta
Department of Computer Science and Engineering
Indian Institute of Technology, Kharagpur

Lecture - 18
Minimization Using Karnaugh Maps (Part -III)

So, we continue with our discussion on minimizing switching expressions using the Karnaugh map.
If you recall, we talked about how to minimize 3 variable and 4 variable functions. So, let us
continue from that point onward. This is the third part of the lecture.

(Refer Slide Time: 00:38)

So, the first thing that we talk about today: in the examples we have taken so far, we did not
consider don't care inputs. So, what is a don't care input? You recall, we mentioned this
earlier also: a don't care is a kind of input combination which normally will never come, or
appear. For example, think of a circuit where the inputs are BCD numbers. There is a 4-bit
input coming which is a BCD digit. Now, we know that in BCD the 4 bits can go from 0 0 0 0 up
to 9, which is 1 0 0 1. The remaining 6 combinations — 10, 11, 12, 13, 14, 15 — are considered
invalid; they will never come as input.

So, what the output will be under these 6 invalid input combinations is immaterial. We mark
them as don't care: the output can be 0, it can be 1 also — I don't care, because the
corresponding input value will never come in practice. These are the don't cares, ok. So, what
I mean to say is that there exist functions where some of the inputs, which we call don't
cares, will never appear, and so the corresponding output values do not matter.

We refer to them as don't cares, and they are usually denoted by the letter X in the Karnaugh
map. Now, the idea is as follows: when you make the Karnaugh map, of course we mark the true
minterms by using 1s; in addition there will be some X's in the Karnaugh map.

Let us take an example. Say in a typical Karnaugh map I have a 1 here, a 1 here, and a 1 here,
and there is an X here. Normally, if the X was not there, I would make 1 cube like this and
another cube like this. But because X is a don't care — which means I can assume it to be 0 or
1 as per my convenience — in this case, if I consider this X to be a 1, then I can make a
bigger cube like this.

So, I can make a bigger cube including X. But suppose there is another X here; I need not
cover this X. If I cover the 1s, the true minterms, that is sufficient. The only use of an X is
to make a bigger cube whenever required. So, the same thing I am mentioning here: when creating
the cubes, we can include cells marked with X to make the cubes larger. And it is to be noted,
as I said, that it is not necessary to cover all the X-marked cells; they are meant only for the
purpose of making the cubes larger and nothing else, ok — they are really don't cares.

(Refer Slide Time: 04:13)

Let us take an example. Now, the first thing, let me tell you how to represent a function with
don't cares. This is one way: the summation, or sigma (Σ), notation indicates the true
minterms — these are the true minterms — and Σ with a phi (Φ), where Φ denotes don't care,
indicates that these 4 are the don't cares.

In some books you will find that instead of Σφ they denote it like D(3, 10, 14, 15), which
means the same thing — they are don't cares, D meaning don't cares. Now, let us look at this
example: 1, 5, 9, 11, 12, 13 — these 6 are the true minterms, which are noted down here. And
there are four don't cares — 3, 10, 14, 15: this is 3, this is 10, this is 14, and this is 15,
ok.

(Refer Slide Time: 05:41)

Now, with these 1's and X's, let us form the cubes. Well, one large cube I can see is this; you
can make a cube like this. Now, with the X's I can make larger cubes. For example, I can make a
cube like this to cover this 1; and since this single 1 is still remaining, I can make another
big cube, including one of the don't cares here, like this. But you see, I do not need to cover
all the don't cares: this X remains — let it remain. I have covered all the 1s, and that is what
I want. So, this long one will be C'D; plus this one will be AB; and this one will be B'D. This
is the minimized form in the presence of don't cares, right.

So, when there are don't cares, you can use the X's to your advantage like this, right, ok.
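The rule — cover every 1, use X's only when convenient — can be checked for this example. The sketch below verifies that C'D + AB + B'D is 1 on every true minterm and 0 on every cell that is neither a 1 nor a don't care:

```python
# f = Sigma(1, 5, 9, 11, 12, 13) with don't cares (3, 10, 14, 15),
# minimized above to C'D + AB + B'D.
ones = {1, 5, 9, 11, 12, 13}
dont_cares = {3, 10, 14, 15}
for m in range(16):
    a, b, c, d = (m >> 3) & 1, (m >> 2) & 1, (m >> 1) & 1, m & 1
    f = ((1 - c) & d) | (a & b) | ((1 - b) & d)
    if m in ones:
        assert f == 1          # every true minterm must be covered
    elif m not in dont_cares:
        assert f == 0          # no false minterm may be covered
print("C'D + AB + B'D is a valid cover")
```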

(Refer Slide Time: 07:00)

Let us take another example. This is an example where there are 3 true minterms — 0, 7 and 10:
this is 0, this is 7 and this is 10 — and the don't cares are 2, 5, 8 and 15: this is 2, 5, 8
and 15. Well, here you see, there are two 1s in the corners and two X's in the corners. So, you
can make one cube out of these 4, and there is one single 1 left out; to cover it you can either
make this cube or this one.

Let us make this one. This is fine, this is done. So, the 4 corners will be B'D', and this cube
will be 0 1, which is A'B, and 0 1, 1 1 is D — giving A'BD. This is the minimized form. So,
these examples show you how to minimize using the K-map when some of the minterms are marked as
don't cares.

(Refer Slide Time: 08:23)

Now, let us come to some important definitions with respect to Karnaugh maps, which will be
required in the next method of minimization that we will consider after this. Well, we know
what is meant by minterms, true minterms, false minterms. Now we introduce some new concepts
called implicants, prime implicants, and essential prime implicants. Let us see what these are.
We start with the implicant. Well, as the definition says, suppose I have a function of n
variables. Let us take an example: suppose I have a function f, let us say of 3 variables, say
f = A'BC + AC' + B'C'. Let us say this is my function.

Now, a product term (in particular, a minterm) is an implicant if and only if, for all
combinations of the variables, whenever that term is 1, f is also 1. Here, let us see this
A'BC: this A'BC is an implicant. It means that whenever, for some input combination, this term
is 1, the function will also be 1. And think of this AC': this AC' you can write as ABC' +
AB'C' — this AC' you can expand by B like this. This ABC' is a minterm, and AB'C' is also a
minterm.

So, you call these implicants. You see, whenever A = 1, B = 1 and C = 0, this first minterm, or
implicant, ABC' will be 1. And because it is one of the terms in f, f will also be 1. That is
the definition: a term is an implicant if and only if, for all combinations of the variables,
whenever the term is 1, f is also 1, right. This is the necessary condition.

Now, a prime implicant is a special kind of implicant. A prime implicant is an implicant such
that if I delete any literal from it, it does not remain an implicant any more. Let us take the
example given here: F = A'B + AC + B'C'. Here, what we are saying is that this A'B is a prime
implicant. Why? What does this A'B mean — for what input combinations is A'B equal to 1?

(Refer Slide Time: 12:21)

Let us list the A B C values. A'B means what? A is 0, B is 1, and C can be either 0 or 1; these
are the two combinations. This is an implicant, because whenever A'B is 1, the function F is
also 1. But what the definition says is that if you delete one literal from it — like, if I
remove B and make it only A' — then this A' is not an implicant any more.

Because for A' I can also have the input combination 0 0 1, say — A is 0 and B, C are
something — so A' will be true, but F is 0 there; hence A' is not an implicant. Therefore, I
say that this A'B is a prime implicant: it is, in some way, minimal, in the sense that I cannot
remove or delete any literal from it such that the property of being an implicant still holds,
right.
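These definitions translate directly into a brute-force check. The sketch below (my own illustration, for the example F = A'B + AC + B'C') tests whether a product term is an implicant, and whether it is prime by trying to drop each literal:

```python
from itertools import product

def F(a, b, c):
    # F = A'B + AC + B'C'
    return ((1 - a) & b) | (a & c) | ((1 - b) & (1 - c))

def is_implicant(term):
    """term maps a variable to its required value, e.g. {'A': 0, 'B': 1} is A'B."""
    for a, b, c in product((0, 1), repeat=3):
        values = {"A": a, "B": b, "C": c}
        if all(values[v] == bit for v, bit in term.items()) and not F(a, b, c):
            return False          # the term is 1 here but F is 0
    return True

def is_prime(term):
    # prime: an implicant from which no literal can be dropped
    return is_implicant(term) and all(
        not is_implicant({v: bit for v, bit in term.items() if v != drop})
        for drop in term)

print(is_implicant({"A": 0, "B": 1}))   # A'B is an implicant -> True
print(is_implicant({"A": 0}))           # A' alone is not     -> False
print(is_prime({"A": 0, "B": 1}))       # so A'B is prime     -> True
```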

(Refer Slide Time: 13:48)

Now, with respect to the Karnaugh map, what does that mean? It means that a prime implicant is
a cube that is not completely contained in any other implicant. You see, with respect to the
Karnaugh map, what is the meaning of implicant and prime implicant?

Suppose I have a four variable Karnaugh map, and let us say there are four 1s here, and I have
a cube like this: this is a prime implicant. Now, if I consider a smaller cube like this, which
is a proper subset of the larger cube, that will be an implicant, but not a prime implicant,
because from it I can delete one of the variables and it will still remain an implicant. I will
have an example later to show you; this is the basic idea, let us proceed.

(Refer Slide Time: 14:38)

Let us take an example here; here we have a 4 variable map. Now, in this map, if you look at
the cubes: this is one cube, this is one cube, this is one — I am showing all possible cubes,
not a minimum cover. This is one cube, and this is also a cube; so how many — 1, 2, 3, 4, 5.

So, corresponding to these 5 cubes, the product terms are like this, you can check: this one
means A'C'D'; this one means A'BC' — or, it should be, I think, AB'C'; this one is AC'D; the
big one is BD; and B'C'D is this one, ok.
So, here you can add all the implicants like this, these are all set of prime implicants, but
what I am saying is that: if we have let us say I make a cube like this, what does this cube
mean this cube means AB and D , ABD right. But I could have also had a bigger
cube like this, the bigger cube what does that bigger cube mean BD , bigger cube is also a
prime implicant.

It says that ABD is not a prime implicant because, if I delete one of the literal let us say
A , whatever remains BD that is also a prime implicant, So, from the karnuagh map it
means any cube which is not the largest one will not be the prime implicant, if you
considered the largest possible cubes only they will all be the prime implicants ok. From the
karnuagh map this is the meaning ok.

(Refer Slide Time: 17:43)

And there is the notion of an essential prime implicant: some prime implicants may not be
essential, and some may be. The notion is like this: a prime implicant is called essential if
it covers at least one minterm of the function which is not covered by any other prime
implicant. With respect to the Karnaugh map: let us say here, this is one prime implicant, this
is one prime implicant, and this is one prime implicant. Let us call these three prime
implicants P1, P2 and P3.

So, if you look at P1: P1 covers these two cells, which are not covered by P2 or P3; P2 covers
this cell, which is not covered by P1 or P3; and P3 covers these two cells, which are not
covered by P1 or P2. Therefore, all three prime implicants P1, P2, P3 are essential. So, with
respect to the cubes in the Karnaugh map, the condition is that at least one cell of the cube
is not covered by any other prime implicant; this is what I just explained, ok. Such prime
implicants are called essential prime implicants.

(Refer Slide Time: 19:19)

Now, take an example here where the prime implicants are not essential. Why is it so? Look at
the cubes here — this is a 3 variable function: 1 cube is this, 1 cube is this, 1 cube is this,
1 is this, 1 is this and 1 is this. So, this is like a cyclic one; this is called a cyclic prime
implicant chart. You see, they are all connected in a chain, and you cannot identify any one
prime implicant here which covers a cell that is not covered by any other prime implicant.

For example, if you consider this one, both of the 1s it covers are covered by some other cube
also: this 1 is covered by this cube, and this 1 is covered by this cube. So, none of the prime
implicants here is essential. There are 1, 2, 3, 4, 5, 6 prime implicants, but none of them is
essential, ok.
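A classic instance of such a cyclic chart (my assumption here, since the slide itself is not reproduced) is f = Σ(0, 1, 2, 5, 6, 7) on A, B, C: it has exactly six prime implicants, each covering two minterms, and the sketch below confirms that none of them is essential.

```python
minterms = {0, 1, 2, 5, 6, 7}
# each prime implicant, listed with the set of minterms (cells) it covers
prime_implicants = {
    "A'B'": {0, 1}, "A'C'": {0, 2}, "BC'": {2, 6},
    "AB":   {6, 7}, "AC":   {5, 7}, "B'C": {1, 5},
}
# sanity check: together they cover exactly the function's minterms
assert set().union(*prime_implicants.values()) == minterms

for name, cells in prime_implicants.items():
    covered_by_others = set().union(
        *(s for other, s in prime_implicants.items() if other != name))
    # essential would mean: some cell of this cube lies in no other cube
    assert all(cell in covered_by_others for cell in cells), name
print("all six prime implicants are non-essential")
```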

(Refer Slide Time: 20:30)

Now, I am not giving an example, just showing you how a 5 variable Karnaugh map looks, just to
tell you that it is a little more complex. You see, a 5 variable Karnaugh map will have 5
variables: let us say 2 variables in this direction and the other 3 variables in this
direction. Here, as I said, the numbering is similar to the Gray code numbering; adjacent
labels differ in one position: 0 0, 0 1, 1 1, 1 0. Here, similarly, 0 0 0, 0 0 1, 0 1 1, 0 1 0,
1 1 0, 1 1 1, 1 0 1, 1 0 0, and again back to 0 0 0; you see, between adjacent cells it differs
in exactly 1 bit position.

And A B C D if you represent and write down the decimal numbers it will be something like
this you can check, 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15, because grey code is a reflected
code you can see there is a reflection kind of a thing across this, it is a mirror image 0 1 2 3
from other side you count 4 5 6 7, 8 9 10 11, 12 13 14 15 like that ok; 6 variable again there
will be 1 here and 1 here. It will become much bigger; it will become very complicated
forming the cubes like when you forming the cubes, you cannot form a cube like this. For
example, 3 from this side and 1 from this side, this is not a valid cube.

The rule is that you should have power of 2 from this side and the power of 2 from this side,
they will be formed into a single cube. So, the rules are much more complicated here. So, I
am showing you an example in this case 5 variables.
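The reflected ("mirror image") construction just described can be sketched in a few lines of code; this is my own illustrative helper, not something from the lecture (the name `gray_code` is mine):

```python
def gray_code(n):
    """Return the n-bit reflected Gray code sequence as bit strings."""
    if n == 0:
        return [""]
    prev = gray_code(n - 1)
    # Reflect: prefix '0' to the previous list, '1' to its mirror image.
    return ["0" + c for c in prev] + ["1" + c for c in reversed(prev)]
```

For example, `gray_code(2)` gives the column labels 00, 01, 11, 10 used on the map, and `gray_code(3)` gives the 8 row labels; adjacent codes, including the wrap-around pair, differ in exactly one bit, which is what makes neighbouring cells adjacent.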

256
(Refer Slide Time: 22:26)

Now, there are some results which can be summarized with respect to what we have talked
about: prime implicants, implicants, essential prime implicants and so on. These results
are important for understanding what we shall be discussing next. I am just showing a few
of the results here. The first result concerns any irredundant sum of products —
irredundant means it is a minimized form; an irredundant sum of products means all the
product terms are essential, so that if you remove one of the product terms the function
becomes different.

But you may have some function where even if you remove a product term the function does
not change, which means that product term was redundant, it was not required, ok. When I
say irredundant, I mean it is a minimized form. The result says that any such expression
for a function F is a union of prime implicants of F. This is important. It says that in
any minimized sum of products expression — whatever you write, something plus something
plus something — these somethings must all be prime implicants.

This is the first result: in any minimized expression, all the product terms must
correspond to prime implicants, right. The second point is about the essential prime
implicants — those, you recall, which cover some minterm that is not covered by any other
prime implicant. In any minimized expression, those essential prime implicants must
always be present; otherwise it cannot represent the function, since some of the minterms
would not be covered, right.

257
So, the second point says that the set of all essential prime implicants must be present in
any irredundant sum of products expression, and the third point is just a corollary of it.
It says that any prime implicant covered by the sum of the essential prime implicants must
not be contained in any irredundant expression.

(Refer Slide Time: 24:50)

It says that suppose I have some essential prime implicants: let us say I have this one
essential prime implicant, another essential, another essential — say P1, P2, P3. And let
us say there is some prime implicant, which may not be essential, here in the common area
covered by P1, P2 and P3.

Now, if this is the situation, then this one must not be contained in an irredundant
expression. Because P1, P2 and P3 are essential, they will always be there; and because
they are always there, anything which is already covered by them must not be included. So,
any prime implicant covered by the union, or the sum, of the essential prime implicants
must not be present in any minimized form — that is the idea, ok. These are some of the
results.

258
(Refer Slide Time: 25:56)

Now, just one thing let me tell you briefly. We talked about how to generate a minimum sum
of products expression using the Karnaugh map, but what about product of sums? Because of
the principle of duality, if you can do sum of products, you should also be able to do
product of sums. So, using the Karnaugh map, in fact, you can also do that. I shall just
take one example to show you how to do it, without going into too much detail, ok.

So, the point to note is that the process is somewhat similar; there are a couple of
differences. The first difference is that when you form the cubes, you form them using the
0 cells, or the false minterms — not the 1 cells as you do for sum of products. And when
you write down the expression: for sum of products, if there is 1 0 here, you write
A B́ — 1 means A , 0 means B́ .

But here the convention is the reverse: the variable corresponding to 1 will be
complemented, while the variable corresponding to 0 will not be complemented. So, for 1
you will write Á , and for 0 you will write, let us say, B . These are the two changes.
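This reversed convention can be sketched as a tiny helper; a hypothetical illustration of my own (names are mine, and I write complements with a trailing apostrophe, e.g. A' for Á ):

```python
def sum_term(cube, names="ABCD"):
    """Product-of-sums convention: in a cube written over '0', '1', '-',
    a '1' maps to the complemented literal, a '0' to the uncomplemented
    literal, and '-' means the variable is absent."""
    literals = []
    for name, bit in zip(names, cube):
        if bit == "1":
            literals.append(name + "'")   # complemented, e.g. A'
        elif bit == "0":
            literals.append(name)         # uncomplemented
    return " + ".join(literals)
```

For instance, for the cube of four corner cells in the example that follows, `sum_term("-0-0")` gives `B + D`.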

259
(Refer Slide Time: 27:25)

Let us take an example to show how it works. Let us see what this slide means. Consider
that we have a function like this, in sum of products form; the true minterms are 1, 4, 5,
6, 11, 12, 13, 14, 15, as shown by this Karnaugh map. These are the 1s. Now, the remaining
cells will be 0s: there are 16 cells and 9 of them are 1s.

So, the remaining 7 will be 0s. Sometimes these remaining cells which are 0s are written
in this product, or pi ( Π ), notation: Σ of a list means those are the true minterms of
the function, and Π of a list means those are the false minterms.
So, when we talk about product of sums, we need to talk about the false minterms, and you
make cubes just like you did earlier. There are 4 corner cells; make a cube out of them.
There will be one cube like this, and let us say I make one more cube like this — that is
all, I have covered everything.

Now, for the four corner cells, let us see what the term is; I am just writing in terms of
A B C D. For the four corner cells, here and here, B is 0 in both, and D is 0 in both.
I said the convention is just the reverse: B is 0 and D is 0 means they will be in the
uncomplemented form, so I write B+ D ; this is one of my sum terms. Take this one next:
here A is 0, and here C is 1 and D is 1. So, this term will be A + Ć+ D́ . And for the
last one, these 2 cells: A is 1, B is 0, C is 0, and D is not there.

260
So, it will be Á + B+C . This gives the product of sums expression; this is the rule:
you consider the 0s, try to cover the 0s, and when you write down the expression, for a 0
you use the uncomplemented form and for a 1 you use the complemented form. But instead of
sum of products, you write the terms in product of sums form. So, this will be my function
in this case. If you write it in sum of products form you will get some expression; if you
write it in product of sums form you will get some other expression, ok.

So, with this we come to the end of this lecture. In the next lecture we shall see how we
can follow a more systematic approach: there is something called the tabular approach,
which can be carried out more systematically and which you can also use for larger
functions — functions with more than 4, 5 or 6 variables. We shall be discussing that in
our next lecture.

Thank you.

261
Switching Circuits and Logic Design
Prof. Indranil Sengupta
Department of Computer Science and Engineering
Indian Institute of Technology, Kharagpur

Lecture – 19
Minimization Using Tabular Method (Part -1)

So, we have seen how we can use the Karnaugh map method to minimize switching
functions. We looked at 3 and 4 variable functions as examples, and we also mentioned
that we can go up to a maximum of 6 variables, not more than that. So, today we shall be
discussing a method, sometimes called the tabular method, which is more systematic in this
respect.

(Refer Slide Time: 00:42)

Let us first look at the motivation: why are we trying to go for this? We have already
mentioned that with the Karnaugh map method we can go up to a maximum of 5 or 6 variables.
More importantly, it is not easy to automate on a computer. You see, whatever design
minimization we do today, everything is done on a computer, right. This Karnaugh map
method, which you do by hand very nicely — you can visualize it and write down the
expression — but think: if I have to write a computer program to do this, it will not be
so easy. Finding the maximum-size cubes visually and trying to make them as large as
possible is quite complicated.

262
So, the Karnaugh map is not so easy to program and automate, ok; this is one problem. To
remove this drawback — for larger functions, and also from the point of view of
automation — we need a more systematic procedure. There is a systematic procedure
proposed by Quine and McCluskey, known as the Quine-McCluskey method.

It is also sometimes referred to as the tabular method, because it uses a tabular kind of
structure. This method has the advantage that it can be automated easily, and you can also
use it for hand computation, as we shall illustrate with some examples; you can write a
computer program for it in a much easier way. But what is the basic idea behind this
approach? The idea is very simple. We already know an identity: any sub-function A
multiplied by a variable x , plus that sub-function multiplied by x bar — if you take
A common, x and x bar cancel and you get only A ; that means the term has one
variable less.

So, we repeatedly apply this simple rule to all adjacent pairs — pairs which differ in a
single variable. The idea is that whatever you are left with at the end will be precisely
the prime implicants. You start with all your true minterms and go on applying this rule
wherever you can; when you see that you cannot apply the rule any more, you will find that
whatever you are left with are the prime implicants, and the final minimized expression is
made up of some set of prime implicants, which you have to select after that, ok. This is
basically a two-step process: first, generate all the prime implicants; second, select a
minimum set of prime implicants that covers the whole function.

263
(Refer Slide Time: 04:13)

Let us see how it works with a very simple example; let us first illustrate with the help
of an algebraic expression. Consider a 4-variable function like this; there are 4 product
terms, or 4 minterms. You see, there are some minterms which differ in one variable: say
Á B́ Ć D́ and Á B́ Ć D are adjacent, and similarly A B́ Ć D́ and A B́ Ć D
are also adjacent.

So, you can combine the first two and the last two terms by taking the common part out; D
disappears in each case, and you get Á B́ Ć + A B́ Ć . Now you can go on applying the
rule repeatedly: you see, here again the two terms are adjacent, they differ only in A.

So, you can apply the same rule again, and you finally get B́ Ć . Now, what is B́ Ć ?
B́ Ć is a prime implicant. So, the idea is as follows: whenever you have these minterms,
go on applying this rule as many times as possible; ultimately you land up with something,
and those are the prime implicants. We go on combining any pair of product terms that
differ in a single literal — this is the basic idea behind the tabular method, which we
shall illustrate with some examples.

264
(Refer Slide Time: 06:10)

Let us now look at the basic concept of what we do in the tabular method. Two k -variable
terms can be combined, by an application of the rule I mentioned, to form a single
k −1 -variable term. Here an example is given: Á B́ Ć D is a 4-variable term, and
there is another 4-variable term; when you combine them, A disappears and you get
B́ Ć D , a 3-variable term. Here k =4 , and after combining you get a single
k −1 variable term.

This is possible if and only if the 2 terms differ in a single literal; here they were
differing only in Á and A , and B́ Ć D was common, right. For convenience, we use
the binary representation: Á B́ Ć D we write as 0001 — 0 means complemented, 1
means uncomplemented. Similarly, A B́ Ć D we write as 1001 . So, we have 0001
and 1001 . Wherever the bits are common, you keep them unchanged, and where they
differ, you put a − , ok.

So, you get −001 , which means B́ Ć D ; A has vanished. This is the idea. In this
binary representation, we use the symbol − to indicate the absence of a literal. So, in
the tabular method, we use such binary representations with − to manipulate the
minterms and apply these minimization rules.
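The combining rule on this dash notation translates directly into code; a minimal sketch of my own (the name `combine` is not from the lecture):

```python
def combine(t1, t2):
    """Combine two terms written over '0', '1', '-' if they differ in
    exactly one position; dashes must line up.  Returns the merged term
    with '-' at the differing position, or None if not combinable."""
    diff_count, merged = 0, []
    for a, b in zip(t1, t2):
        if a == b:
            merged.append(a)
        elif "-" in (a, b):        # dash against a fixed bit: reject
            return None
        else:
            diff_count += 1
            merged.append("-")
    return "".join(merged) if diff_count == 1 else None
```

For example, `combine("0001", "1001")` returns `"-001"`, matching the example above, while terms whose dashes are in different positions cannot be combined.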

265
(Refer Slide Time: 08:23)

So, for the first half of the tabular method — the identification of the prime
implicants — there are broadly three steps. First, we identify all the true minterms and
arrange them in groups based on the number of 1s in each minterm; the number of 1s in a
minterm is called its index. Suppose I have made two groups: one with index i , and
another with index i+ 1 . Just repeating, index means the number of 1s. Let us take an
example, say i=1 .

So, i=1 means that in the group of index i I will be having minterms like 1000 ,
0100 , and so on, each with a single 1; and in the group of index i+ 1 I will be
having minterms like 1010 , 0110 , each with two 1s. What we do after that is compare
every term of the index- i group with every term of the index- i+ 1 group — every
element from here with every element from there — and check whether they differ in a
single position or not; wherever possible, you apply the rule, ok.

If you are able to apply the rule — for example, 1000 here and 1010 there differ in
one position, so you could apply it — then you put a check mark, a tick, against the two
terms, meaning that you have applied the rule to them, right. This is something you do.
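The grouping-by-index step is easy to express in code; a small sketch (names are mine):

```python
def group_by_index(minterms, nvars):
    """Arrange minterms (given as integers) into groups keyed by
    index, i.e. the number of 1s in the binary representation."""
    groups = {}
    for m in sorted(minterms):
        term = format(m, "0{}b".format(nvars))   # zero-padded binary
        groups.setdefault(term.count("1"), []).append(term)
    return groups
```

For the 10 true minterms of the worked-out example that follows, this reproduces the five groups with indices 0 through 4.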

266
(Refer Slide Time: 10:40)

After you do this, in step two you compare the terms generated in the previous step in the
same fashion: you generate a new term by combining two terms that differ in a single
position and have their dashes in the same positions — I shall show an example that
explains what this means. You go on repeating this combining process until no further
application of the rule is possible; at that point, all the terms which are not checked
will be the prime implicants of the function. The point to note is that at every step you
compare the terms of the index- i group with those of the index- i+ 1 group.

(Refer Slide Time: 11:32)

267
Let us go through this worked-out example, and after that we shall work out an example by
hand. The font size is a little small, but let us see. Here we have a function where the
true minterms are 0, 1, 2, 5, 7, 8, 9, 10, 13, 15 — there are 10 of them. In the first
column, we write down these 10 minterms in the order of the number of 1s in their binary
representation, that is, the index. The first group consists of index 0: minterm 0, which
is 0 0 0 0. The second group has index 1: minterms 1, 2 and 8, each with a single 1. Then
the group with index 2: minterms 5, 9 and 10. Then index 3, with three 1s: 7 and 13. And
finally 15, which has index 4.

So, for an n variable function, the index can go from 0 up to n ; here, for a
4-variable function, you can go from index 0 up to index 4, right. This is what we do
first. Then we compare group 0 and group 1, index 0 with index 1: 0000 and 0001
differ in one position, so you combine them to get 000− ; you put a tick against both,
and here you write 0, 1, saying that 0 and 1 have been combined. You go on comparing:
0000 and 0010 can also be combined, since they differ in one position, giving
00−0 , which is also ticked. 0 and 8 can also be combined, differing in one position:
−000 , tick. So, this comparison is done; now you compare this group with the next.

So, you compare every element of this group with every element of that one, and see which
pairs can be combined. 0001 can be combined with 0101 — there is one position of
difference, ok. So, 1 and 5 can be combined, giving 0−01 ; you put a tick. Similarly, 1
and 9: 0001 and 1001 give −001 , tick. 2 and 10: 0010 and 1010 differ in
one position, giving −010 , tick. 8 and 9: 1000 and 1001 give 100− , and all
of these get ticked. 8 and 10 can also be combined, already ticked. Then the next pair of
groups in the same way: 5 and 7, 0101 and 0111 , one position of difference, giving
01−1 , tick. 5 and 13 can be combined, tick; 9 and 13 can be combined, already ticked;
similarly, 7 and 15 can be combined, and 13 and 15 can be combined.

So, from the first column we generate the second column, and we repeat this process as
long as we can. Now start with the second column — the same concept. You now have these
groups; compare the first and second groups, and again, for 000− , see what it can be
combined with.

268
(Refer Slide Time: 16:13)

You can combine it with 100− here: 000− and 100− have the − in the same place
and differ in one bit. So, combining them gives −00− ; the pairs 0, 1 and 8, 9 are
being combined, so you write 0 1 8 9 and tick both entries. You cannot combine 0, 1 with
anything else. 0, 2 is here; 0, 2 you can combine with 8, 10, so both of those get ticked.
0, 8 you can combine with 1, 9 — so 0 8 and 1 9 get ticked — and 0, 8 also combines with
2, 10, which also gets ticked. So, this is done. Now, similarly, compare this group with
all of the next one: 0−01 and 1−01 can be combined.

So, this will also get ticked. In this way you go on — I am not showing everything here; I
shall work out an example later. After this step, you see that nothing more can be
combined: these terms cannot be combined any further. So, whatever remains unticked —
which we call A , B , C , D — these are the 4 prime implicants: −00− means
x́ ý , −0−0 means x́ ź , −−01 means ý z , −1−1 means x z . These are
all prime implicants. So, let me work out an example by hand; it will then be clear to
you.

269
(Refer Slide Time: 18:21)

Let us take an example like this; it is simpler. There are 7 minterms: 1, 2, 4, 8, 10, 14,
15. So, let me form the table accordingly.

These correspond to the 4 variables A B C D. There is no index-0 term, so index 0 is
blank. For index 1, you have 1, 2, 4 and 8: 1 is 0 0 0 1, 2 is 0 0 1 0, 4 is 0 1 0 0, 8 is
1 0 0 0. Then index 2 has only 10, which is 1 0 1 0. Index 3 has only 14, which is
1 1 1 0, and index 4, lastly, has 15, which is 1 1 1 1. These are the initial groups. Now,
from here, let us form the next-level groups. Compare across: 0001 and 1010 cannot
be combined, but 0010 and 1010 can — they differ in one position. So, I put a tick
here and a tick here, and combining 2 and 10 I get −010 .

Then you continue comparing: 4 and 10 cannot be combined, but 8 and 10 can — they again
differ in 1 position. So, you put a tick, write 8, 10, and it becomes 10−0 . This group
is done. Now compare the next pair: 1 0 1 0 and 1 1 1 0 can be merged, differing in one
position; 10, 14 becomes 1−10 . Similarly, 14 and 15 can be combined into 111− .
These are all ticked, right — but 1 and 4 are not ticked. Then you continue: try to merge
the new terms with each other, but you cannot merge anything, because the dashes are not
in the same positions, ok. So, there are 1, 2, 3, 4, 5, 6 — 6 prime implicants which are
unticked, or unchecked.

270
So, your prime implicants in this case are: first, 0 0 0 1, which is Á B́ Ć D ; second,
0 1 0 0, which is Á B Ć D́ ; third, −010 — A is not there — which is B́ C D́ ;
then 10−0 , A B́ D́ ; then 1−10 , A C D́ ; and lastly 111− , A B C . These
are the 6 prime implicants of this function, and using this tabular method I have
identified all of them, right.
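The whole first phase — group, compare, tick, repeat until nothing combines — can be sketched as a short program. This is my own illustrative version, not the exact procedure on the slides; unticked terms at every level are collected as prime implicants:

```python
from itertools import combinations

def merge(t1, t2):
    """Merge two terms differing in exactly one non-dash position."""
    diff = [i for i, (a, b) in enumerate(zip(t1, t2)) if a != b]
    if len(diff) == 1 and "-" not in (t1[diff[0]], t2[diff[0]]):
        i = diff[0]
        return t1[:i] + "-" + t1[i + 1:]
    return None

def prime_implicants(minterms, nvars):
    """Quine-McCluskey first phase: repeatedly combine adjacent terms;
    terms that never get ticked are the prime implicants."""
    terms = {format(m, "0{}b".format(nvars)) for m in minterms}
    primes = set()
    while terms:
        ticked, next_terms = set(), set()
        for t1, t2 in combinations(sorted(terms), 2):
            m = merge(t1, t2)
            if m is not None:
                ticked.update((t1, t2))     # tick both combined terms
                next_terms.add(m)
        primes |= terms - ticked            # unticked terms are prime
        terms = next_terms
    return primes
```

On the hand-worked example, `prime_implicants([1, 2, 4, 8, 10, 14, 15], 4)` yields exactly the six terms derived above, and on the earlier 10-minterm slide example it yields the four prime implicants A, B, C, D.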

Let us work out another example for your convenience; it will make things clearer.

(Refer Slide Time: 22:44)

Let us take this example; in addition, I am showing some don't cares. These don't cares I
also include in my table when forming the prime implicants — in the first step, the don't
cares are also included. So, I am drawing the table again for A B C D , the same way.
There is one index-0 minterm here, 0 0 0 0, and then the index-1 terms.

With index 1 we have 2 and 8: 2 is 0 0 1 0, with a single 1, and 8 is 1 0 0 0, also with a
single 1. Then with index 2 we have 3, 5 and 10: 3 is 0 0 1 1, 5 is 0 1 0 1, 10 is
1 0 1 0. Then with index 3 we have the remaining three: 7, 11 and 13. That is it. So, you
continue with the same process: go on trying to combine this group with the next. 0 and 2
get combined — you tick here and here — becoming 00−0 , and 0 and 8 get combined into
−000 . This is the first group; then do the same with the next pair, and these get
ticked.

So, for 2, you try to merge: 2 and 3 can be combined, and 2, 3 becomes 001− ; then 2
and 10 can be combined, giving −010 ; and 8 and 10 can also be combined, becoming
10−0 , though it does not add any more ticks. In the next group, 3 and 7 can be
combined: 3, 7 is 0−11 . Then 3 and 11 can be combined: −011 . Then 5 and 7 can be
combined: 01−1 . Then 5 and 13 can be combined: −101 .

So, you have this step; now you continue. In the next step, you see where you can combine
these groups: here you can combine 0, 2 with 8, 10 — that is, 00−0 with 10−0 — and
you get 0 2 8 10, which is −0−0 . Similarly, 0, 8 and 2, 10 can be combined, and you
get the same term; they also get ticked. And similarly, if you combine 2, 3 with 10, 11,
another one is generated: 2 3 10 11, which is −01− , yes.

So, now let us see what is left unticked. Let me check it once: 0 2 8 10 is done; then 2,
10 and 3, 11 can be combined, giving 2 3 10 11 again, which is −01− — that is fine, it
was right. So, how many remain unticked? 1, 2, 3, 4, 5 and 6 — there are 6 prime
implicants here also.

So, you can list them: this one, for example, will be Á B́ C ; this one will be
Á C D ; this one will be Á B D ; this will be B Ć D ; this will be B́ D́ ; and
this will be B́ C . So, in this way, you see, you can follow this procedure
systematically by hand and generate all the prime implicants of the function.

This is the first step of the tabular method. The method is systematic: you can do it by
hand, and you can also write a program to do the same thing, ok.

272
(Refer Slide Time: 29:43)

So, the next step is what we shall be discussing in the next lecture. Once you have
obtained the prime implicants of the function — we know from earlier that among the prime
implicants, some may be essential and some may not be.

So, now our task is to select a minimal set of prime implicants that covers all the true
minterms of the function; of course, we must include the essential prime implicants and,
in addition, some of the non-essential prime implicants. In the next lecture, we shall
discuss a systematic method for selecting the set of prime implicants to cover the
function; that will complete this method, ok. So, with this, we come to the end of the
present lecture; we shall be continuing this discussion in the next lecture.

Thank you.

273
Switching Circuits and Logic Design
Prof. Indranil Sengupta
Department of Computer Science and Engineering
Indian Institute of Technology, Kharagpur

Lecture – 20
Minimization Using Tabular Method (Part – II)

In the last lecture we were talking about the Quine-McCluskey method for minimizing
switching expressions. If you recall, the basic idea was that we considered the binary
representation of the minterms, and then, using a set of systematic rules, we combined
adjacent minterms — minterms whose index values (that is, number of 1s) differ by 1 and
which differ in a single bit position.

We were ultimately creating smaller and smaller terms, and finally, whatever we were left
with were the so-called prime implicants. So, what we have seen so far is how to get all
the prime implicants of a given function. Let us continue from that point onwards. This is
part two of our lecture on minimization using the tabular method.

(Refer Slide Time: 01:12)

So, as I said, what we saw in the last lecture is a method to generate the set of all
prime implicants of a given function, right — we used a systematic, tabular method to do
that. But what comes after that, once you get the prime implicants? What is our next step?
Our next step will be to select the smallest set of prime implicants that covers all the
true minterms of the function.

274
This is very similar to what we were doing with the Karnaugh map method: there, starting
with the prime implicants — the so-called cubes that we were forming on the K-map — we
selected a minimum set of cubes covering all the true minterms, or the 1s, in the map.
Exactly the same thing we are trying to do here.

Now that we have got the list of all prime implicants, let us try to see what is the best
set of prime implicants we can select out of it that will cover all the true minterms of
the function, ok. For doing this, we shall again use a tabular structure, and the table
that we will be using now is called a prime implicant chart. Let us see what a prime
implicant chart looks like.

(Refer Slide Time: 02:50)

So, a prime implicant chart is essentially a tabular kind of structure — basically, a
table. Along the rows we list the prime implicants, and along the columns we list all the
true minterms. Which prime implicant covers which minterm, we indicate by putting a cross
mark; this is how the table looks.

So, let us see what is mentioned here: this pictorially depicts the covering relationship,
that is, which prime implicant covers which true minterms, ok. As I said, the minterms are
listed along the columns and the prime implicants along the rows; we shall see some
examples. And we will be entering cross marks — X marks — in the

275
table whenever a particular prime implicant covers some minterm: we put an X in the
corresponding location.

(Refer Slide Time: 04:14)

So, what we say is that a row of the table — that is, a prime implicant — covers all the
minterms, the columns, where there are X marks. For example, if for some prime implicant
we see in the table that there is an X here, an X here and an X here, then the
corresponding true minterms — those 3 minterms — are covered by this prime implicant,
right. This is the idea; this is how we use the term cover, ok.
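This cover test, and the chart itself, can be sketched in a few lines; a minimal illustration of my own, assuming prime implicants written as strings over '0', '1', '-' and minterms as integers (the names are mine):

```python
def covers(prime, minterm, nvars):
    """A prime implicant covers a minterm when every fixed bit of the
    implicant matches the minterm's binary representation."""
    bits = format(minterm, "0{}b".format(nvars))
    return all(p in ("-", b) for p, b in zip(prime, bits))

def pi_chart(primes, minterms, nvars):
    """Prime implicant chart: each row (prime implicant) maps to the
    list of true minterms (columns) it covers -- the cross marks."""
    return {p: [m for m in minterms if covers(p, m, nvars)] for p in primes}
```

For the four prime implicants of the earlier slide example, this reproduces the rows of the chart, e.g. the first implicant covering minterms 0, 1, 8 and 9.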

276
(Refer Slide Time: 04:55)

The other point to note is this: suppose we again have the table, with several prime
implicants, and you see that some column — say this column — has a single X, maybe
against the middle one.

What does this mean? It means that the particular minterm corresponding to this column is
covered by only this prime implicant and no other. So, I must select this prime implicant;
otherwise, this column will not get covered. Such a prime implicant — one corresponding to
an X which is the only X in its column — is called an essential prime implicant, and it
must be included in any minimal set of prime implicants that we generate.
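This single-cross test can be sketched directly; an illustrative helper of my own, assuming the chart is represented as a dict from each prime implicant to the list of true minterms it covers:

```python
def essential_primes(chart, minterms):
    """Essential prime implicants: those that are the only cover of
    some true minterm (a lone cross in that minterm's column)."""
    essentials = set()
    for m in minterms:
        covering = [p for p, cols in chart.items() if m in cols]
        if len(covering) == 1:              # only one X in this column
            essentials.add(covering[0])
    return essentials
```

On the four-implicant chart discussed in the example that follows, this picks out exactly B and D.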

277
(Refer Slide Time: 06:06)

So, finally, we have to select a minimal subset of the prime implicants, as I mentioned.
But what is the requirement? The requirement is that every column — which means every true
minterm — must contain at least one cross mark corresponding to the selected subset. What
I mean to say is: suppose there is a table with many prime implicants, and suppose I
finally select this one, this one and this one. The cross marks should be such that these
3 prime implicants cover all the columns, that is, all the true minterms.

And we have to select the set of prime implicants in such a way that the total number of
literals in the prime implicants is as small as possible — as small as possible means the
gate realization will be as simple as possible. Say one prime implicant corresponds to
x z and another to x́ y z ; given a choice, we select the first, because it contains
a smaller number of literals. It will require a 2-input gate, while the other will require
a 3-input gate, fine. Let us take an example.

278
(Refer Slide Time: 07:40)

Here we are showing a prime implicant chart — just an example of one — with 4 prime
implicants, which I have called A, B, C and D; just ignore the check marks for the time
being.

And there are 10 true minterms: 0, 1, 2, 5, 7, 8, 9, 10, 13, 15 — so this is actually a
4-variable function. These are the cross marks, which say that x́ ý , this prime
implicant, covers 0, 1, 8 and 9. This can also be seen from the Karnaugh map: for example,
if I draw a Karnaugh map with w x along this direction and y z along this
direction — 0 0, 0 1, 1 1, 1 0 in each — then x́ ý corresponds to this cube, this
whole row, right.

So, what does this correspond to? It corresponds to these four: 0, 1, 8 and 9 — with the
variables taken in the order w x and y z , it is 0, 1, 8 and 9; this you can check,
fine. In the same way, for each of the 4 prime implicants, we list which true minterms are
covered.

279
(Refer Slide Time: 09:42)

Then we look at which of the columns have a single cross mark. I can see that column 2
here has a single cross mark, and so do columns 7, 10 and 15. These are the only cross
marks in their columns, which means the corresponding prime implicants are essential. So,
I must select B and D; for A and C there is a choice — they are not essential, because
their columns have multiple cross marks.
Now, see once we have selected B and D so, all the check marks corresponding to B and D
will get covered B has a cross mark here, a cross mark here, a cross mark here, a cross mark
here, D has a cross mark here and here. So, what are the columns which are still remaining 1
is remaining, 9 is remaining these 2 right. Now, in order to cover 1 and 9 I have a choice, I
can either use A, because A covers 1 as well as 9, or I can use C, C also covers 1 as well as 9.
So, I will have to choose B, I will have to choose D, then I will be choosing A or C this is a
choice right. .

280
(Refer Slide Time: 11:20)

This is the choice we have — exactly what we mentioned: B and D are the essential prime
implicants, which we must choose, and in addition we can choose either A or C, because the
2 minterms 1 and 9 that are left out are covered by both A and C, ok. This is how we get a
set of prime implicants that covers the function.
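The final selection can be sketched as a brute-force search over subsets, which is perfectly adequate at hand-worked sizes; an illustrative sketch of my own, not the branch-and-bound style procedures used in practice:

```python
from itertools import combinations

def minimum_covers(chart, minterms):
    """Return all smallest subsets of prime implicants whose crosses
    together cover every true minterm of the function."""
    primes, need = sorted(chart), set(minterms)
    for size in range(1, len(primes) + 1):
        found = [set(sub) for sub in combinations(primes, size)
                 if need <= {m for p in sub for m in chart[p]}]
        if found:                      # smallest covering size reached
            return found
    return []
```

For the chart in this example it returns the two solutions {A, B, D} and {B, C, D}, matching the two minimal expressions discussed next.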

(Refer Slide Time: 11:45)

So, in this example, as I said, there are 2 minimal expressions possible. B and D are
mandatory: this term corresponds to B and this one to D, so here also I have B and here
also I

281
have D — and I can include either A or C. So, there are two possible minimum expressions
for this particular function, ok.

This is how we proceed with the tabular method: in step 1 we create all the prime implicants of
the function; in step 2 we create the prime implicant chart and try to select a subset of prime
implicants which covers all the true minterms. Now, there are some tricks here,
which can help us in selecting the set of prime implicants; we shall be
looking at some of these rules now ok.
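The two steps just described can be sketched in code. This is a minimal illustration, not the lecture's exact chart: the minterm sets below are hypothetical, chosen so that B and D come out essential and the leftover minterms 1 and 9 can be covered by either A or C, matching the narrative above.

```python
# Hypothetical prime implicant chart: PI name -> set of true minterms covered.
chart = {
    "A": {1, 9},
    "B": {2, 3, 6, 7},
    "C": {1, 9, 11},
    "D": {10, 11, 14, 15},
}
minterms = set().union(*chart.values())

# A prime implicant is essential if some column (minterm) has a single
# check mark, i.e. no other row covers that minterm.
essential = {p for p in chart
             if any(all(m not in chart[q] for q in chart if q != p)
                    for m in chart[p])}

covered = set().union(*(chart[p] for p in essential))
print(sorted(essential))           # ['B', 'D']
print(sorted(minterms - covered))  # [1, 9] -- cover these with A or C
```

After the essentials are removed, only minterms 1 and 9 remain, and either of the non-essential rows A or C covers both, exactly as in the example.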

(Refer Slide Time: 12:46)

First, let us see what happens when we have don't care combinations in a function. Now, we have seen
earlier that when you have don't care combinations, we include the don't care combinations in
the first step, when we are generating the prime implicants.

Because prime implicants can also include the don't cares ok; but when we create the prime
implicant chart, we need not include the don't cares in the columns. Why? Because it is not
necessary to cover all the don't care minterms by the set of prime implicants that we choose;
that is why we do not show the don't care minterms in the columns, because they need not
be covered. We only show the true minterms, the ones of the function, in the columns,
which must be covered by the set of prime implicants ok. Let us take an example; here
I am just showing the prime implicant chart, where you see there are 8 prime implicants
which have already been generated, let us say, in the first step.

Now, these horizontal lines separate the prime implicants by size: the first one is the smallest, with 2
literals; the second one has 3 literals; the remaining ones all have 4 literals. So, the corresponding function here is
this. We have used the don't care minterms also to generate all the prime implicants, but
when we list the columns, we only list the true minterms, not the don't care ones right.

Now, let us see which of the prime implicants are essential. Look at the columns where you
have a single cross: we have a single one here, here, here
and also here, these 4 circled ones. So, correspondingly A is essential, B is essential and
also D is essential. Now, once we have selected the essential ones, you see A covers 17 19 21
23 25 27 29 31, many of them.

B covers 13 15, and 29 31 are already covered; D covers 20 and 21, where 21 is already covered, so
effectively 20. So, what we are left with here is only 18; only minterm 18 is not covered. So, after we select A, B and D,
in order to cover minterm 18 we have a choice, there are 2 cross marks: either we can use E
or we can use G. Since both of them have the same number of literals, their cost is the
same. So, here we will be choosing A, B and D essentially, and either E or G right. This will
be our final selection in this example ok.

(Refer Slide Time: 16:15)

Let us take another example; here we show a similar table. The essential
cross marks, that means the single cross marks, are here, here and here, which means A is
essential and C is essential; these two are essential. Because A is essential, we have a tick mark
here, here, here and here. And because C is essential, we have a tick mark here, here, and at 19 and 23.
So, what we are left with are 0 1 4 20 and 22, these 5.

In the earlier example we had only one minterm left, so just by looking at it we could decide
which prime implicant to select. But in general, for a larger table, there can be many such true minterms
which are still left out. So, now we have to find out, out of the remaining prime implicants
which are not essential, what is the minimum set I have to select to cover 0 1 4 20 and 22.

So, the procedure is: we first create a reduced prime implicant chart. What is this? The
columns which have been covered are deleted; only 0 1 4 20 and 22 are left, so you
only list these 5. A and C have already been taken up, so only the
remaining prime implicants are listed as the rows.

So, now you have a choice: you see, these remaining minterms can be covered by D E F
G H I, as shown by the cross marks. Now, we can very easily find out the minimum
set of these prime implicants that covers all of them. How do we do it? We try to
determine a condition for selection, and the condition for selection is determined in the
following way. Look at minterm 0: 0 is covered by either H or I, so I write (H or I); 1
is covered by either G or I, I write (G or I); 4 is covered by F or H, I write (F or H); 20 is covered
by E or F, I write (E or F); and 22 is covered by D or E, I write (D or E).

So, if I expand this product of sums using the multiplication rule of switching algebra, I get an expression like this.
I can use any one of these product terms to realize my cover, because I have to make this
expression true: if this denotes the condition f, I have to make f equal to 1, and to make f equal to 1 I can make
any one of these product terms equal to 1. The last one will be expensive, there are 4 terms; let us
take EHI for example.

So, we will be selecting E, we will be selecting H, and we will be selecting I. There are other
alternatives: you can select E F I, D F I or E G H. So, our choice will be A and C and, let us say, E
H I. So, we will be selecting these 5 right; this is how we do it.
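The multiplication step just described is Petrick's method, and it is mechanical enough to sketch in a few lines of Python. The clause list below is the one from this example, (H + I)(G + I)(F + H)(E + F)(D + E):

```python
from itertools import product

# One clause per uncovered minterm: the PIs that can cover it.
clauses = [("H", "I"),  # minterm 0
           ("G", "I"),  # minterm 1
           ("F", "H"),  # minterm 4
           ("E", "F"),  # minterm 20
           ("D", "E")]  # minterm 22

# Multiply the sums out: every way of picking one PI per clause is a
# product term; duplicates collapse because X.X = X and X + X = X.
products = {frozenset(pick) for pick in product(*clauses)}
best = min(len(p) for p in products)
minimal = sorted("".join(sorted(p)) for p in products if len(p) == best)
print(minimal)  # ['DFI', 'EFI', 'EGH', 'EHI'] -- any one of these works
```

Together with the essential prime implicants A and C, any one of these 3-element sets, for example E, H and I, completes the cover, matching the alternatives listed above.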

(Refer Slide Time: 20:06)

Let us take another example; this is a slightly bigger example, where you see there are 11
prime implicants, and the circled ones are the essential columns. So, the
corresponding essential prime implicants are A, B, J and finally K. A
covers 4 5 6 7 20 21 22 23; B covers 4 5 6 7 12 13 14 15; J covers 1 and 3, since 5 and
7 are already covered; and K covers 25 and 27. So, what we are left
with are 10 11 18 19 and 26, these five right.

(Refer Slide Time: 21:14)

Now, we present some simple rules using which you can further reduce the size of the table;
there is a concept of domination: a row can dominate another row, or a column can dominate
another column. So, what is the basic concept of row domination? It says a row U of the table
will dominate a row V if U covers every column covered by V. For example, in row U I have
a check mark here, I have a check mark here, I have a check mark here, and I have a check mark
here.

And in V, let us suppose, there is a check mark here and there is a check mark here. So, we say
U dominates V because, wherever V has a check mark, U also has a check mark. In such a
case, you can delete V from the chart, because the idea is that if we select U, these 2
columns automatically get covered. So, we do not need to keep V; V can be deleted
from the chart.

(Refer Slide Time: 22:34)

This is the row domination rule. So, in the same example that you saw earlier, 10 11 18 19 and 26
were remaining, and these were the remaining prime implicants, 7 of them. Now, if you use row
domination, you see D is dominated by C: wherever D has a check mark, C also has one,
so D you can delete.

Similarly, F is dominated by E: wherever F has a check mark, E also has one, so F can also be deleted. H is dominated
by C and also by G; H has a single check mark, so H can be deleted. Similarly, I is dominated by E as
well as G, so I can also be deleted. So, you are left with only C, E and G. The idea is that you
make your table smaller and then do the next step. So, instead of working with a large table,
you minimize its size, and it becomes easier for you; in the minimized
table you can see things immediately.

Now, there is a single check mark in a column here and here; so, C will be essential and E will be
essential. If you take these two, all the columns get covered, so C and E right. In
the previous slide we had selected A, B, J and K, and here we
are selecting C and E in addition; this is how we get our final solution.

(Refer Slide Time: 24:16)

Now, there is a similar rule for dominating columns, which I will
show with this example. It says a column will dominate another column like this: suppose I
have a column C1 and a column C2; C1 will dominate C2 if C1 has a cross in every
row where C2 has a cross. That means, if I have, let us say, a cross here, here and here in C1, and C2
has a cross here and here, then I say C1 dominates C2.

But for column domination the rule is different: the column which dominates is the one that can be
deleted. The idea is that we can cover column C2 either by this prime
implicant or by this prime implicant, and whichever we choose, C1 will automatically also get covered. So,
covering C2 implies that C1 also gets covered; that is why the dominating column C1
can be deleted right. So, here in this example, column 11 dominates column 10, and
therefore column 11 can be deleted ok.

That is why column 11 can be deleted. Similarly, column 19 dominates column 18, so
column 19 can also be deleted. So, you can also reduce the size of the table in terms of
columns. The idea is that when you have this prime implicant chart, you first select the
essential prime implicants and take them out, so the size of the table reduces; then you use the row
dominance and column dominance rules to further reduce the size of the table; and then you try to
find out the set of prime implicants that will cover all the remaining
minterms. This is the basic idea behind the Quine-McCluskey method.
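The whole reduction loop, row dominance, then column dominance, then fresh essentials, can be sketched as follows. The row sets are hypothetical, arranged so that D, F, H and I are dominated and C and E end up selected, as in the example above:

```python
# Reduced chart after the essential PIs were taken out (hypothetical sets).
chart = {"C": {10, 11, 26}, "D": {10, 11}, "E": {18, 19, 26},
         "F": {18, 19}, "G": {11, 19}, "H": {11}, "I": {19}}

# Row dominance: row V may be deleted if some other row U covers a
# superset of V's columns (ties between identical rows are not handled).
dominated = {v for v in chart
             if any(u != v and chart[v] <= chart[u] for u in chart)}
rows = {p: s for p, s in chart.items() if p not in dominated}

# Column dominance: the DOMINATING column is deleted -- covering the
# dominated column automatically covers it as well.
cols = {m: {p for p in rows if m in rows[p]}
        for m in set().union(*rows.values())}
deletable = {c for c in cols
             if any(d != c and cols[d] <= cols[c] for d in cols)}

# Columns left with a single row make that row essential.
selection = {next(iter(ps)) for m, ps in cols.items()
             if m not in deletable and len(ps) == 1}
print(sorted(dominated), sorted(deletable), sorted(selection))
```

With these particular sets, the column for minterm 26 also happens to be dominated; in the lecture's chart only columns 11 and 19 were deleted, but the procedure is the same, and the final selection C and E matches the example.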

So, with this we come to the end of this lecture. Over the last few lectures we have seen
the different ways to minimize a switching function: earlier we looked at the algebraic
methods, then we talked about the graphical approach, the Karnaugh map method, and
also we talked about a more systematic approach, the Quine-McCluskey method, which can be used
even by hand for a function of 7 or 8 variables to create the list of prime implicants
and minimize the function. And as I also said, this method is much easier to automate: you can write a program for
the Quine-McCluskey method much more easily than for the Karnaugh map
method.

Thank you.

Switching Circuits and Logic Design
Prof. Indranil Sengupta
Department of Computer Science and Engineering
Indian Institute of Technology, Kharagpur

Lecture – 21
Design of Adders (Part – I)

If you recall what we have discussed so far: we had looked at the different ways of
realizing a given function in the form of an expression, sum of products or product of sums.
And we talked about several methods using which you can minimize the size of the
expression, so that the resulting gate level realization becomes cheaper in some respect;
the number of gates can be smaller, and the sizes of the gates can also be smaller.

Now, let us look at some real circuits and how you can design those circuits and apply them
in various applications. So, we shall be looking at the design of such basic building blocks
which can be used to build larger systems.

To start with, we look at the most fundamental thing that we need in many, many applications.
Think of an adder: we need to add numbers in almost any application you can think of. So,
we shall be starting by talking about the design of adders, and after that we shall be looking
into some other very commonly used functional blocks ok.

(Refer Slide Time: 01:47)

So, the title of this lecture is Design of Adders. Before we move into the design of adders:
as I had said, we shall be discussing over the next few lectures the design of various
commonly used, so called combinational circuit modules.

So, let us first try to understand what a combinational circuit is. The definition of a
combinational circuit is like this: suppose I have a circuit where there are some inputs and
some outputs. Now, the circuit is combinational if the outputs depend only on the inputs
that I have applied at that point in time, and they have no bearing on the past history of the
inputs.

Past history means what happened earlier: what inputs were applied earlier will not have any
bearing on the output that is coming. Such a circuit is called a combinational circuit;
the output will depend only on what inputs we are applying at this particular point in
time.

So, think of an adder, an addition: I apply 2 inputs to be added, and I get the sum as the output.
This is an example of a combinational circuit ok. We talked about a multiplexer: we have the
inputs and the select lines; we select one of the inputs and that goes to the output; that is also a
combinational circuit. But think of an application like this: you have seen the traffic
light controller at the intersection of roads.

See, the sequence in which the lights glow is not arbitrary: red, orange, green, then again
orange, then red, again orange, then green; there is a particular sequence. So, when the lamp is
orange, what will be the next state, the next lamp that will glow? That will
depend on the past history. If previously red was glowing, then the next will be green, and if
previously green was glowing, then the next will be red. This is an example of a so called
sequential circuit, which we shall discuss later, where the output that we get depends not
only on the present inputs, but also on some previous history of the system ok.

So far, here we are talking only about combinational circuits ok. Some of the very common
examples of combinational circuits that we shall be discussing are: first we shall be talking
about adders and subtractors; the multiplexer, which we have already talked about earlier and shall be
looking into again; decoders and demultiplexers; and encoders. These are some of the
functional blocks that we shall be going into in some detail.

(Refer Slide Time: 05:08)

So, we start with addition ok. We already talked about the problem of adding binary numbers:
we talked about the binary number system and how you can add two binary
numbers; we start from there. From there you will get some idea of how we can design a
circuit that can actually carry out addition ok. Let us see.

First we take a very simple sub-problem: addition of 2 binary digits, or bits. This is the truth
table. I have 2 input bits A and B; I am adding them, and I get a sum and a carry. S is the
sum and C is the carry. What are the rules for addition? You already know this. If we
add 0 and 0, the sum is 0, which in binary you can write as 0 0. If you add 0 and 1, the sum is 1, that is 0 1.
If we add 1 and 0, again the sum is 1, 0 1. If you add 1 and 1, the sum is 2, which in binary is 1 0; that is
why I use 2 bits, because in 1 bit you cannot represent the sum; 1 plus 1 is 2, which is 1 0.

You see, this truth table actually shows this. The sum column is 0 1 1 0, and the
carry column is 0 0 0 1 right. So, we want to design such a circuit, and a circuit which does this is
called a half adder. So, the definition of a half adder is that it takes 2 binary digits or
bits as input, A and B, and it produces a sum and a carry. And this is the truth table of a half
adder.

Now, directly from the truth table, you can write down the expressions, the switching
functions that realize the sum and the carry. First look at the sum. Sum has a true minterm
here and a true minterm here; you see the inputs are 0 1 and 1 0. 0 1 means A'B, and 1 0 means
AB'. You see, here we have written exactly that, A'B + AB', which is nothing but
the Exclusive OR function.

So, the sum is nothing but the exclusive OR of A and B ok. If you look at the carry, the carry has a
single true minterm corresponding to 1 1, which is A·B. So, the carry is nothing but A·B. So,
if you implement such a half adder using gates, the sum you can implement with a single
exclusive OR gate, and the carry with a single AND gate right.
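The two expressions translate directly into code; a minimal sketch using Python's bitwise operators on 0/1 values:

```python
# Half adder: sum is the exclusive OR of the inputs, carry is the AND.
def half_adder(a, b):
    return a ^ b, a & b  # (S, C)

# Enumerate all 4 input combinations to reproduce the truth table.
for a in (0, 1):
    for b in (0, 1):
        s, c = half_adder(a, b)
        print(a, b, "->", s, c)  # (C,S) as a 2-bit value: 00, 01, 01, 10
```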

Now, one thing to notice is that exclusive OR is sometimes not regarded as a basic gate. So, if
you want to implement an XOR gate, maybe you will be using a realization like this: you
will use 2 AND gates, and then there will be an OR gate; the outputs of the
ANDs will go to the inputs of the OR. The first AND will produce A'B: input A goes through a NOT to give A', and B
is fed to the second input. The second AND will produce AB': A is fed directly, and B is fed
through a NOT. This is the implementation of the exclusive OR function A XOR B.

Now let us see: if we say that delta is the delay of a basic gate (we shall also talk about this
later; basic gates means AND, OR and NOT, these are regarded as the basic gates), then for
generating the carry, the delay is only delta, because there is a single AND gate. But for the sum, if you realize it
using this circuit, you see: first there is a NOT, then there is an AND, then there is an OR. There are
three levels.

So, the delay will be thrice delta. You should remember this: if delta is the delay of a
basic gate, then for a half adder the sum will require 3 delta and the carry will require only delta to
generate fine.

(Refer Slide Time: 10:11)

Now, from here let us talk about how we can add multi-bit numbers. This we
have already discussed earlier; let us look at it once more. Suppose I have a number A and I
have a number B; in the example I have taken, these are 7 bit numbers. So, I am
adding 1 and 1: the sum is 0 with a carry of 1. Then I am adding the next bits along with the carry, 1, 1
and 0: the sum is 0 again with a carry of 1. 1, 0 and 0: the sum is 1, but no carry. 0, 1 and 1 gives 0 with a carry.
1, 0 and 0 is 1 with no carry. 0, 1 and 0 is 1 with no carry. 0, 0 and 0 is 0 here ok.

So, you see, when you are actually adding 2 multi-bit numbers, we also need to take
care of the carry: we need to pass the carry from 1 stage to the next. These are called the
stages. For a 7 bit number, I say that there are 7 stages of addition. The first stage of addition
generates a carry which is used in the second stage; the second stage of addition generates a
carry which is used in the third stage, and so on.

Let us take another example, where the numbers are like this: 0 followed by all 1's, and the
second number is just 1. This is an example where you will see that the carry propagates
up to the last position. You see, initially there is no carry, the carry is 0. So,
0, 1 and 1 will generate a sum of 0 and a carry of 1; 1, 1 and 0 gives sum 0 again with a carry of 1; the same thing, sum 0 carry
1, sum 0 carry 1. This will continue right.

So, you see, there is carry propagation from the lowest stage to the more significant stages. The
reason I have taken this example is that later on, when we calculate the delay of
such an adder, we will have to take care of this carry propagation time. If it is an 8 bit adder, the carry may have to
propagate from stage 0 up to stage 7; we will have to take care of this entire carry
propagation time.
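The worst-case example above (0 followed by all 1's, plus 1) can be traced in code; a small sketch, with the bit lists written least significant bit first:

```python
# Add 0111111 + 0000001 bit by bit, recording the carry out of each stage.
a = [1, 1, 1, 1, 1, 1, 0]  # 0111111, LSB first
b = [1, 0, 0, 0, 0, 0, 0]  # 0000001, LSB first
carry, sums, carries = 0, [], []
for ai, bi in zip(a, b):
    total = ai + bi + carry
    sums.append(total & 1)  # low bit is the sum
    carry = total >> 1      # high bit is the carry to the next stage
    carries.append(carry)
print(sums)     # [0, 0, 0, 0, 0, 0, 1] -> result 1000000
print(carries)  # [1, 1, 1, 1, 1, 1, 0] -> carry ripples through every stage
```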

So, one observation from here is that, in every stage shown in this
example, we are now required to add 3 bits: 2 bits are the bits of the numbers, and the third bit
is the carry which is coming from the previous stage. So, 1 bit of number A, 1 bit of number B
and a carry bit coming from the previous stage.

Earlier we talked about a half adder, which can add only 2 bits right; the inputs were A and B,
the outputs were sum and carry. But here, we need another kind of adder which can add 3 bits:
not only A and B, but also a carry input, 3 inputs ok. So, we need another kind of an
adding block, which is called a full adder. A full adder can add 3 bits.

(Refer Slide Time: 13:50)

Let us look at the full adder, what a full adder is. A full adder, as I said, will have 3 inputs: the 3
inputs are the 2 input bits A and B and the carry input, let us say Cin, carry in. And again, there
will be 2 outputs: one is the sum, the other is the carry output. Let us call it Cout.

Now, we shall see later that when we have multi-bit numbers, we can use a cascade of full
adders, because this is exactly the way we did the addition by hand: we added 3 bits, a carry was
generated; again we added 3 bits, a carry was generated; again we added 3 bits. So, if we
have several full adders connected one after the other, the same thing can be implemented.
The idea is like this.

(Refer Slide Time: 14:57)

So, a full adder schematically can be shown like this, where A, B and Cin are the inputs and
S and Cout are the outputs. The truth table for the full adder will look like this; this
was also seen earlier. A, B and Cin are the 3 inputs. If we add them: 0 plus 0 plus 0 is 0, that means 0
0 ok.

When you consider the value of the result, you read it as Cout followed by S, with Cout as the most
significant bit. 0 plus 0 plus 1 is 1, that is 0 1. 0 plus 1 plus 0 is also 1, 0 1. 0 plus 1 plus 1 is 2, that is 1 0.
1 plus 0 plus 0 is 1, 0 1. 1 plus 0 plus 1 is 2, 1 0. 1 plus 1 plus 0 is also 2, 1 0. 1 plus 1 plus 1 is 3, 1 1 right. This is the truth
table of the full adder.

(Refer Slide Time: 15:45)

Now, again from this truth table, you write down the expressions for the sum and the carry.
For the sum, you see there are 4 true minterms. If you just write them down: 0 0 1, which is
A'B'Cin; 0 1 0, which is A'BCin'; 1 0 0, which is AB'Cin'; and 1 1 1, which is ABCin. So, this again is nothing but the
exclusive OR of the 3 inputs: S = A XOR B XOR Cin.

Similarly, when you look at the carry out, here again there are 4 true minterms, corresponding
to 0 1 1, which is A'BCin, then 1 0 1, 1 1 0 and lastly 1 1 1. So, if you minimize this using a
Karnaugh map or any other method, you will see that the minimum form is
Cout = AB + BCin + ACin.

So, when you implement a full adder using these expressions, what we need is: for
the sum, you need a big XOR gate with 3 inputs, and for the carry we will be
needing 3 AND gates, for AB, BCin and ACin, and an OR gate. Each of these AND gates will
have 2 inputs: the first one will be fed with A and B, the next one with B and Cin,
and the last one with A and Cin, and the output of the OR will be Cout. And for the XOR, the inputs will be A, B
and Cin and the output will be S.
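The two minimized expressions give a direct sketch of the full adder, which can be checked against ordinary integer addition over all 8 input combinations:

```python
from itertools import product

# Full adder: S = A xor B xor Cin,  Cout = A.B + B.Cin + A.Cin
def full_adder(a, b, cin):
    s = a ^ b ^ cin
    cout = (a & b) | (b & cin) | (a & cin)
    return s, cout

for a, b, cin in product((0, 1), repeat=3):
    s, cout = full_adder(a, b, cin)
    # (Cout, S) read as a 2-bit number must equal the arithmetic sum.
    assert 2 * cout + s == a + b + cin
print("truth table verified")
```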

Now, again, because this XOR gate is not a basic gate, if you want to implement it using
AND, OR and NOT gates, look at the first expression: then you will need 4 AND gates,
each with 3 inputs, because each minterm has 3 literals. Then there will be a 4 input
OR gate connecting the outputs of these AND gates.

And to generate A', B' and Cin', there will be some NOT gates at the inputs.
I am not showing the exact circuit; you can draw it. So, in terms of delay again, here you
see this XOR realization will have a delay of 3 basic gates: the first stage is the NOTs,
then the ANDs, then the OR. But for the carry out, there is only an AND level and an OR level; there is no
NOT here.

(Refer Slide Time: 18:57)

So, if you look at the delay here: for the sum, the delay will be 3 delta, where delta is the delay of
a basic gate, and for the carry out it will be twice delta right.

(Refer Slide Time: 19:11)

There are other ways of implementing a full adder. I am showing a couple of such designs
where you can possibly reduce the number of gates a little bit. Earlier, for the carry out, we had
used 3 AND gates and a 3 input OR gate; here we are making a shortcut. The first thing is that
the sum earlier was generated by a 3 input XOR, you recall.

Now, a 3 input XOR gate can also be implemented as a cascade of two 2 input XORs: if the inputs are
A, B and Cin, you can have A and B feeding the first XOR and Cin feeding the second; these two realizations are equivalent. So,
here we have done the same thing for generating the sum. And for generating the carry, we are
taking the output from this intermediate point. Well, do not ask me how we arrived at this; this is more like
an art, there is no systematic procedure. Somehow, someone has come up with this
design, which requires a smaller number of gates and can also generate the same carry out.

Let us check. What is the value at this point? It is A XOR B, which is
A'B + AB'. In this AND gate we are ANDing it with Cin (just
for convenience I am writing only C instead of Cin, for simplicity), and the second gate is just the AND
of A and B; then we OR the two. Let us see what this expression is. The first term expands, so we get
A'BC + AB'C + AB.

Now, take the terms A'BC and AB. If you take B common here, it becomes B(A'C + A). Now, recall a rule which we
discussed earlier: this A'C + A is nothing but C + A, which means this gives BC + AB.

So, I can write this part as just BC, since AB is kept separately. Exactly in the same way, combine AB'C and AB: if you
take A common, B'C + B becomes B + C; so it gives AC, and this AB is
already there. So, you see, you get the same expression for the carry out, AB + BC + AC. This is a simpler design
using one extra XOR gate, while the other gates are smaller; the OR gate here is also a smaller 2 input one right.
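The algebraic check just done by hand can also be confirmed exhaustively; a small sketch comparing the shortcut carry (A XOR B)·Cin + A·B against the standard majority form:

```python
from itertools import product

for a, b, c in product((0, 1), repeat=3):  # c plays the role of Cin
    shortcut = ((a ^ b) & c) | (a & b)
    standard = (a & b) | (b & c) | (a & c)
    assert shortcut == standard  # identical on all 8 input patterns
print("same carry out")
```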

(Refer Slide Time: 22:17)

There is another way to implement a full adder, that is, using half adders. I leave it
as an exercise for you to verify whether the Cout and S that are generated are correct or
not, because you already know what a half adder is: if A and B are the inputs, the carry is
nothing but A·B and the sum is nothing but A XOR B.

To the second half adder you are applying A XOR B along with the third input C. So, the sum will be
correct, A XOR B XOR C; the sum is obviously correct. But you have to check this (A XOR B)·C OR
A·B, which is the carry out. Actually, wait, I have just made a mistake in the figure; let me correct it.

(Refer Slide Time: 23:33)

So, for inputs A and B, the first half adder gives sum A XOR B and carry A·B. In the next half adder the inputs are A XOR B and
C. Let us see.

So, the sum will be A XOR B XOR C, and the carry output of the second half adder will be (A XOR B)·C: this
point carries A XOR B, which is one input, and C is the second input. So, the carry out that we
generate finally is the OR of these 2: the OR of A·B and (A XOR B)·C. I leave it as an exercise
for you to verify that this is actually equal to AB + BC + AC ok.

So, if you have half adders as building blocks, you can also design a full adder using
them fine.
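The exercise can be checked mechanically. Here is a sketch composing two half adders and an OR gate, then comparing the result against A XOR B XOR C and AB + BC + AC:

```python
from itertools import product

def half_adder(a, b):
    return a ^ b, a & b        # (sum, carry)

def full_adder(a, b, c):
    s1, c1 = half_adder(a, b)  # first half adder: A, B
    s2, c2 = half_adder(s1, c) # second half adder: (A xor B), C
    return s2, c1 | c2         # final carry is the OR of the two carries

for a, b, c in product((0, 1), repeat=3):
    s, cout = full_adder(a, b, c)
    assert s == a ^ b ^ c
    assert cout == (a & b) | (b & c) | (a & c)
print("full adder from two half adders verified")
```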

(Refer Slide Time: 24:37)

Now, about the delays: we have already talked about this earlier, so here it is mentioned again.
This is the original design of the full adder, where the sum was
generated by the XOR function and the carry was generated as AB + AC + BC.

So, if delta denotes the delay of a basic gate: for the carry, it requires 2 levels of
gates, first AND then OR, which is 2 delta. And for the sum, I said the XOR can be
implemented in 3 levels: first the NOTs, then a set of AND gates and finally an OR gate; so
this will be 3 delta. So, the delay of a full adder you can calculate like this: the carry will be
generated after 2 delta, and the sum after 3 delta right.

(Refer Slide Time: 25:45)

So, in our next lecture we shall see the different ways to design parallel
adders using these full adders, and maybe half adders also. We shall look at several
different kinds of adders: the ripple carry adder and the carry look-ahead adder in some detail, and
we shall briefly look at two other kinds of adders which are a little more complex, a
little more advanced. We shall be looking at them.

This we shall be discussing from the next lecture onward. So, we come to the end
of the present lecture, where, if you recall, we talked about the basic building blocks using
which we are now ready to design a parallel adder: we looked at the half adder, and as the
next step, the full adder. Using these full adders, and also half adders
if you want, you can design a circuit that can add 2 n-bit numbers in general. This is called a
parallel adder.

Thank you.

Switching Circuits and Logic Design
Prof. Indranil Sengupta
Department of Computer Science and Engineering
Indian Institute of Technology, Kharagpur

Lecture – 22
Design of Adders (Part – II)

So, in this lecture we shall be starting our discussion on how to design parallel adders. Earlier
we had seen how we can design a half adder and a full adder; we shall be using these as the
basic building blocks to design these so called parallel adders. So, the title of this lecture is
Design of Adders, the second part.

(Refer Slide Time: 00:38)

So, we mentioned in the previous lecture also that we shall be looking at various designs of the
n-bit parallel adder. What is an n-bit parallel adder? It means that I have a circuit where
I am applying 2 inputs; call them whatever you like, let us say one is X
and one is Y, and both of these are n-bit numbers; we show it like this.
If you put a slash across the line and write n, it means X and Y are n-bit numbers.

This is a shortcut way of showing it. At the output we generate the sum, which will also be n
bits, and the carry, which is a single bit. This is what we mean by an n-bit parallel adder; we want to
design a circuit like this fine.

(Refer Slide Time: 01:44)

So, the first design that we look at is something called the ripple carry adder, and it very naturally
and directly mimics our hand computation, the way we add by hand. We add the bits
stage by stage and accumulate the carry: the carry goes to the next stage, the carry gets added
with the 2 input bits of that stage, again a carry is generated, it goes to the next stage, and so
on ok.

So, the ripple carry adder essentially works on this principle: to design an n-bit adder,
we take n full adders and simply cascade them, connecting them one after the other.
I shall show you how they are connected; the connection should be
such that the carry output from a particular stage, let us say the i-th stage, must be applied as the
carry input to the next stage.

This is how we do addition by hand. In the last lecture we took an example where the
carry ripples through all the stages during the addition; let
us show that example once more. If you have two numbers like this that you are adding:
initially there is no carry, I am assuming 0. So, I am adding the bits up, the carry goes
here; I am adding these 3 bits, again the carry goes here; I am adding them up.

So, the carry goes to the next stage. If I call any one of the stages stage i, then the carry
that is generated at stage i will go as the carry input of stage i plus 1 right. The sum is
generated and the carry is generated. So, you see, if I have a full
adder like this, the 2 input bits are applied here, I
get the sum, the sum bit is generated, and whatever carry is generated, this is my
carry out and this is the carry in right. And if we use another full adder side by side, this carry out
will be connected as the input carry of the next stage.

So, the next pair of input bits comes in and the next sum bit is generated. Exactly what you are
doing by hand, you are trying to do like this. For the first stage, because the carry is 0, we will be
applying 0 here right; and for all other stages, the carry in will come from the carry
out of the previous full adder, the 2 inputs will be applied here, we will be getting the
sum bits here, and the carry bits will be generated here right.

(Refer Slide Time: 05:05)

So, this is how an n-bit ripple carry adder looks. You see, there are n stages, n
full adders FA0, FA1 up to FAn−1; the 2 input numbers are A0 to An−1 and B0 to
Bn−1; C0 is the input carry; the sum is generated as S0 to Sn−1; and the final carry out Cn is also
generated.

Now, just a point to notice: if you are using this n-bit adder in standalone mode,
which means the carry input is always 0, then the first stage of the addition, which is a full
adder, can be replaced by a half adder; because a half adder, if you recall, has only 2 inputs.
So, this stage will only have A0 and B0 as inputs, and it will generate S0 and
produce C1 as the carry out ok. So, if we have an adder design where there is no
carry input, then you can replace the first stage by a half adder, which will be a little less
expensive, because the number of gates in a half adder is less than that of a full adder fine.

(Refer Slide Time: 06:35)

So, what we have said is this: there are two numbers we are adding, one is A, the other is B, both
n-bit numbers; the input carry is C0, the sum is this, and the output carry is Cn. Now let us look at the
delay calculation. First look at the carries. You recall that for a full adder we said the delay for
the carry, that means the carry out delay, is twice delta.

Because there are some AND gates followed by an OR gate, it is a two-level circuit: delta plus
delta. That is why, if you see here, suppose A0, B0 and C0 have been applied
at time t = 0; then when will C1, the carry out of this stage, be generated? After time 2 delta.

(Refer Slide Time: 07:38)

So, if you look at the time axis, suppose this is your time line. At time t = 0
all the inputs have been applied, all the A and B inputs and also C0.

So, at time 2 delta, C1 will be generated, and this new value of C1
will be fed as an input to FA1. So, even though A1 and B1 were applied earlier, the carry input
arrives only at 2 delta. This stage will require another 2 delta to calculate C2, so C2
will be generated at time 4 delta. Similarly, C3 will be generated at time 6 delta, right.

So, if you continue in this way, then Cn−1 will be generated after 2(n−1) delta, and
the final carry output Cn will be generated after 2n delta, right. This is the
total delay for the carry; now let us see what the total delay for the sum will be.

(Refer Slide Time: 08:55)

Now, you recall that for a full adder the delay of the sum is 3 delta, because there is an XOR function,
and XOR is implemented as NOT, AND and OR, a three-level circuit, so it requires 3 delta. Assuming again that all the
inputs arrive at time 0, the sum S0 will be generated after time 3 delta, because it is an
XOR of A0, B0 and C0.

Now, what about S1? You remember, we just said that C1 is generated at
time 2 delta. So A1, B1 and C1 are all available at 2 delta, and the XOR takes another 3 delta,
so at 5 delta S1 will be available. Again for the next
stage, C2 is generated at 4 delta, so S2 will take 4 delta plus another 3 delta.

So, 7 delta; it goes on like this. Finally, for Sn−1, if you expand this (Refer
Time: 10:16), it will be 2(n−1) delta plus 3 delta, which is (2n+1) delta. So, you see, the
delay of the carries is 2n delta, and the delay of the sums is a little more, 2n delta plus one more
delta. So, one characteristic of this kind of ripple carry adder is that the total delay is
proportional to the number of bits: 2n delta, (2n+1) delta, something like that.
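The two delay formulas derived above can be captured in a couple of lines (a worked check, in units of the lecture's unit gate delay delta; the function names are my own):

```python
def ripple_carry_delay(n):
    # each stage adds one 2-level AND-OR delay; final carry Cn at 2n * delta
    return 2 * n

def ripple_sum_delay(n):
    # last carry in, C(n-1), arrives at 2(n-1)*delta; the sum XOR adds 3*delta
    return 2 * (n - 1) + 3       # = (2n + 1) * delta

for n in (4, 8, 16, 32):
    print(n, ripple_carry_delay(n), ripple_sum_delay(n))
```

So a 16-bit ripple carry adder needs 33 unit delays for its last sum bit, roughly double the 17 of an 8-bit adder, which is exactly the scaling problem discussed next.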

So, what it means is that if I want to design a very large adder, say an 8-bit, 16-bit, 32-bit or 64-bit adder,
then my addition time will go up. A 16-bit adder will have double the delay compared to an 8-bit
adder; a 32-bit adder will have 4 times the delay of an 8-bit adder. So, it is not so
easy to increase the number of bits in an adder without sacrificing on the delay; this is the
main drawback here.

(Refer Slide Time: 11:24)

Now, let us see how we can design a subtractor, now that you know how to
design a ripple carry adder. For parallel subtraction, we rely on our old
number system knowledge: the 2's complement number system we talked about.

We said that when you subtract two numbers, it is nothing but taking the 2's complement of the
second number and adding; that is as good as subtracting, ok. So, when we are trying
to compute A minus B, we can do the same thing by adding the 2's complement of B to A. And
what is the 2's complement? It is the 1's complement plus 1. And what is the 1's complement? You
take the NOT of all the bits. Let us say Xi denotes the NOT of Bi. Let us look at an arrangement
like this. What we have shown here is nothing but the same ripple
carry adder; the only difference is that for the second input, instead of applying B0, B1, up to
Bn−1, we are applying X0, X1, up to Xn−1.

And what are these Xs? They are the NOT of the corresponding B bits, which means we are
adding A to the 1's complement of B. But the 1's complement is just the NOT,
and we have to add the 2's complement, right. So, what do we do? We
also apply a 1 to the carry input. So, effectively what is happening? I am adding the 1's
complement of B from the top, plus an additional 1 through the carry input, which means
it becomes the 2's complement. So, whatever sum comes out is actually the same as
A minus B. So, I am able to carry out subtraction just by taking the NOT of the
second number and by feeding 1 into the carry input; it is as simple as that, right.

So, subtraction is very easy; we do not need a separate subtractor for that, we can use the same
addition hardware, the same addition circuit, with small adjustments, ok. Now let us see how
we can make what we have shown here general. General means a circuit which can do both
addition and subtraction. Let us see how that circuit will look.

(Refer Slide Time: 14:33)

The circuit will look something like this, where this is our adder. This can be any kind of
adder; let us say a ripple carry adder, since we have seen only one kind of adder so far.
So, what are we doing? The first input A we are applying directly to this
adder. The second input is B, ok, but we are
not applying B directly; there are some XOR gates in between.

So, there are some exclusive OR gates. To one input of the ith XOR gate we apply the bit Bi, and
to the second input we apply a control input; let us call it add/subtract, or in short P. So,
how does an XOR gate behave? I talked about it earlier. If
P equals 0, the output is nothing but Bi; but if P equals 1,
then the output becomes the NOT of Bi, ok. So, it is a controlled inverter. Now let us see what
happens.

(Refer Slide Time: 16:14)

Now, let us suppose I apply 0 to this add/subtract control input; 0 means the XOR gates
will pass the B bits unchanged.

So, I have A here, I have B coming through directly, and the carry input is also 0. So, this is just
normal addition, and at the output we will be getting A plus B, right.

(Refer Slide Time: 16:44)

Now, suppose I apply 1 to the add/subtract control input. What will happen? First,
the XOR gates will take the NOT of each of the Bi's. So, A will be applied
here, but here the 1's complement of B will come, and the carry
input is also 1. So, basically the 2's complement gets added, and what we get is A minus B.
So, we have a circuit with a control input: if I set it to 0 it does addition, if I set it to
1 it does subtraction, ok. So, this is a very general kind of thing, a combined
addition/subtraction circuit.
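The behavior of this add/subtract unit can be sketched as follows (an illustrative model, not the lecture's circuit; names are mine, and bit lists are LSB first). The XOR gate acts as a controlled inverter on each Bi, and the control bit P doubles as the carry-in:

```python
def full_adder(a, b, cin):
    return a ^ b ^ cin, (a & b) | (a & cin) | (b & cin)

def add_sub(a_bits, b_bits, p):
    """p = 0: compute A + B;  p = 1: compute A - B (2's-complement addition)."""
    carry = p                   # the control bit is also fed as the carry-in
    out = []
    for a, b in zip(a_bits, b_bits):
        s, carry = full_adder(a, b ^ p, carry)   # XOR gate: controlled NOT of Bi
        out.append(s)
    return out, carry
```

With A = 6 and B = 3 as 4-bit LSB-first lists, P = 0 yields 9 and P = 1 yields 3, the difference in 2's-complement form.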

(Refer Slide Time: 17:32)

Now, let us look at the drawback of this ripple carry adder. The drawback is
that the total delay, as we have seen earlier, is proportional to n. So, as we
increase the number of bits, the delay also increases; there is a degradation in performance.
But in reality we need to design adders that are pretty big, 32-bit or 64-bit additions, and not
only that, we want to make them very fast; we want to carry out the addition in a faster way.

So, how do we do this? Well, one alternative may be to generate all the carry
bits in parallel, if possible. Why? Because in a ripple carry adder the main bottleneck was the carry,
which propagated sequentially from one stage to the next. So, if you can somehow
generate all the carry bits in parallel before the actual addition starts, then all the
additions can be done in parallel, because all the carries have been generated and we do not
have to wait any further for the carries to propagate. This is the basic idea behind the next
kind of adder we shall discuss, which is much more efficient in terms of
speed, in terms of delay.

(Refer Slide Time: 19:05)

This is called the carry look ahead adder. Let us see the basic idea of what a carry look ahead adder is;
we mentioned this in the previous slide also: the propagation delay of a ripple carry adder is proportional to n, but in a carry look ahead
adder we generate the carry signals in parallel. O(n) and O(1) here are notation used to denote
complexity; you need not worry about the exact meaning, but O(n)
roughly indicates it is proportional to n, and O(1) means it is constant.

So, in a ripple carry adder the total time taken is proportional to n: if you make a bigger
adder, the time increases. But in a carry look ahead adder, the addition time
is constant irrespective of the value of n. The drawback is that the hardware complexity,
the number of gates and the sizes of the gates, increases very rapidly with n. This actually
limits n to a certain value, beyond which a carry look ahead adder becomes very
complex.

(Refer Slide Time: 20:38)

Let us see how it works. We look at the full adder again, at the ith stage of the full
adder. Just as in the ripple carry adder, imagine that in the ith stage the
inputs are Ai, Bi and the carry input Ci, and the outputs are the sum Si and the carry
Ci+1, which goes to the next stage. Now we define two functions, called carry generate
and carry propagate.

These are defined in terms of the inputs Ai and Bi only, the 2 input bits I am trying to add
in the ith stage. First, carry generate: generate means, under what condition on Ai
and Bi will a carry always be generated? You see, whenever Ai equals 1 and Bi equals 1,
then irrespective of what my carry input is, the carry out Ci+1 will always be equal to 1,
because if the inputs are 1 1, then whether the carry input is 0 or 1, there will
be a carry.

So, I can express this as the AND function of Ai and Bi: only if they are 1 1 is it 1. The
second one is carry propagate. Carry propagate says: suppose there is an input carry, Ci equal to
1; under what condition will this 1 propagate to the output, that is, under what condition will there also be
an output carry? Propagate means exactly this: under what conditions.

You see, if Ai Bi is 1 1, then a carry is generated anyway.
But for propagation, the condition is that exactly one of them is 1, either 0 1 or 1 0. If Ci is 1
and Ai Bi is 0 1, there will be a carry out; if Ci is 1 and Ai Bi is 1 0, there will also be a carry
out. So, this is the condition for carry propagation, and "0 1 or 1 0" is nothing but the exclusive OR function:
the carry will propagate if Ai is 0 and Bi is 1, or Ai is 1 and Bi is 0. So, generate is the AND, propagate is the XOR;
this is what I have just explained.

Now, the output carry Ci+1 we can express in terms of Pi and Gi. How?

(Refer Slide Time: 23:35)

We can simply write: the carry output is 1 if either a carry is generated, or the carry
propagation condition is true and there is an input carry, Ci equal to 1. Only under these conditions
will Ci+1 be 1: either a carry is generated, Ai and Bi both 1, or Pi is 1,
meaning exactly one of them is 1, and Ci is also 1, right.
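The generate/propagate recurrence Ci+1 = Gi + Pi·Ci can be written out directly (a behavioral sketch of my own, assuming LSB-first bit lists):

```python
def cla_carries(a_bits, b_bits, c0):
    """Return [C0, C1, ..., Cn] using Gi = Ai AND Bi, Pi = Ai XOR Bi."""
    g = [a & b for a, b in zip(a_bits, b_bits)]   # carry generate terms
    p = [a ^ b for a, b in zip(a_bits, b_bits)]   # carry propagate terms
    carries = [c0]
    for i in range(len(a_bits)):
        carries.append(g[i] | (p[i] & carries[i]))   # C(i+1) = Gi + Pi*Ci
    return carries
```

For A = 11 and B = 5 (LSB first) with C0 = 0, every stage produces a carry, matching what rippling through full adders would give.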

(Refer Slide Time: 24:08)

So, we have this relation. Now, if you go on expanding: Ci+1 equals this; the Ci I
can again write in the same form, multiply it out and I get this; the Ci−1 I can again write like this and
multiply out; and you can go on continuing in this way.

So, you go for i, i−1, i−2 and so on, up to the last
stage. What happens in the last term? The product Pi Pi−1 and so on keeps growing;
the last factor will be P0, and then you will have C0. That will be the last
term.

So, whatever you are getting, you can check, can be expressed in a concise form like
this: Ci+1 equals Gi, plus the last term, which as I said is C0
multiplied by the product of all the P's from P0 to Pi, plus the intermediate terms,
which are a summation of terms of the form Gk multiplied by the product of the P's above it,
Pk+1 Pk+2 up to Pi.

So, it will be something like this, but you do not really need to remember it
in that form.

(Refer Slide Time: 25:48)

So, in terms of our design, suppose we are trying to design a 4-bit carry
look ahead adder, just using the expressions we have talked about, expanded in the
same way. Looking from the first stage: C1 can
be written as G0 + P0·C0. C0 is already there, you already know it: C0 is the
carry input of the first stage, right, and all the A and B inputs are given to you.

So, C1 you can generate directly using this equation. Now, if you just apply that
expansion, C2 will be like this, C3 will be like this and C4 will
be like this; this you can verify. And the sum, as we know, is nothing but the XOR of the 3 inputs of a full
adder.

Now, A0 XOR B0 is nothing but the carry propagate function. So, the propagate function XOR the
carry is nothing but the sum. So, you can generate the sums like this. Now, let us see what you
need to build this unit; here AND2 refers to a 2-input AND gate, and you need four of them.

You see the 2-input AND gates, one here and one here; three 3-input AND gates, one here, one
here and one here; two 4-input AND gates, one here and one here; and one 5-input AND gate
here. Similarly, for the OR operations, you need one 2-input OR for C1, one 3-input
OR for C2, one 4-input OR for C3 and one 5-input OR for C4, and for the sums you need
four 2-input XORs, right.
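As a sanity check, the expanded two-level carry equations can be verified exhaustively against a ripple carry reference for all 4-bit inputs (an illustrative script of my own, not part of the lecture):

```python
def check_cla_4bit():
    """Verify the expanded 4-bit CLA carry equations over all 16*16*2 cases."""
    for a in range(16):
        for b in range(16):
            for c0 in (0, 1):
                A = [(a >> i) & 1 for i in range(4)]
                B = [(b >> i) & 1 for i in range(4)]
                G = [x & y for x, y in zip(A, B)]
                P = [x ^ y for x, y in zip(A, B)]
                # expanded two-level equations from the slide
                C1 = G[0] | (P[0] & c0)
                C2 = G[1] | (G[0] & P[1]) | (P[0] & P[1] & c0)
                C3 = (G[2] | (G[1] & P[2]) | (G[0] & P[1] & P[2])
                      | (P[0] & P[1] & P[2] & c0))
                C4 = (G[3] | (G[2] & P[3]) | (G[1] & P[2] & P[3])
                      | (G[0] & P[1] & P[2] & P[3])
                      | (P[0] & P[1] & P[2] & P[3] & c0))
                # reference: ripple the carry through four full-adder stages
                c, ripple = c0, []
                for i in range(4):
                    c = G[i] | (P[i] & c)
                    ripple.append(c)
                assert ripple == [C1, C2, C3, C4]
    return True
```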

Now, it is possible to simplify this expression a little if you make the following observation.
You see, C3 is this, ok. Now, in C4, if you consider this part and take P3
common, whatever remains, G2 + G1P2 + G0P1P2 + C0P0P1P2, is
nothing but C3. So, you can write this part as P3 multiplied by C3, and your expression
becomes much simpler.

(Refer Slide Time: 28:42)

You can write it like this; straight away it becomes simpler, and accordingly the sizes of the
gates also become smaller: the 5-input AND gate is no longer required at all. So, your cost
reduces, right.

(Refer Slide Time: 29:04)

So, for this 4-bit carry look ahead adder, if you show the equivalent gate-level realization
of the expressions we have generated here, it looks somewhat like this.

So, I am assuming that the Pi and Gi are already generated: Gi, if you recall, is
nothing but an AND gate with Ai and Bi as the two inputs, and the
propagate function Pi is an XOR gate on Ai and Bi. So, I am assuming all these
propagate and generate signals have already been produced; this can be done in parallel, all
the stages can generate their Pi and Gi at the same time. Then, for the carry
generation, I need this circuit.

Well, this is after minimization, with C3 being fed here, right; with the minimized form the
carry circuit is no longer two levels, so its delay is a little more, 2 delta plus 2 delta.
Without minimization, this carry circuit is a two-level AND-OR circuit requiring a delay of
2 delta. And for the Pi and Gi: Gi is a single AND gate, while Pi is an XOR requiring 3 delta,
so this stage takes 3 delta.

So, the total delay to generate the carries will be 5 delta: 3 delta to generate the Pi and Gi,
and 2 delta for this carry circuit.

(Refer Slide Time: 30:49)

So, the overall picture looks like this; this is the 4-bit carry look ahead adder scheme.
You see, the 4 bits of the two numbers A and B are fed to the Gi and Pi
generator. As I said, Gi is nothing but an AND gate, and there will be 4 such AND gates; Pi
is nothing but an XOR gate, and there will be 4 such XOR gates. Because the XOR gate has the
higher delay, the delay of this stage will be 3 delta.

Then, as in the diagram I showed in the previous slide, the 4-bit carry look ahead
circuit will generate the carries in parallel, and because it is a two-level AND-OR circuit the
delay is only 2 delta. Now, for generating the sums we do not need any full adders anymore,
because as we saw from the expressions, the Si are nothing but the propagate functions
XORed with the carries. So, if you take P0 XOR C0, P1 XOR C1, P2 XOR C2 and so on, you can
directly generate the sums, and for this XOR stage another 3 delta is needed.

So, you see, the total delay is 8 delta, which is independent of the number of bits, right. But
the point to note, as you can see from the expressions we generated earlier, is that
as you go on increasing the number of bits, the complexity of
the expressions also goes on increasing; the sizes and the number of the gates increase
very rapidly. This is the main drawback.

So, the total time, as I said, will be 8 delta. With this we come to the end of this lecture, where
we have talked about two different kinds of parallel adders: the ripple carry adder and the carry
look ahead adder. We shall be continuing this discussion in the next lecture.

Thank you.

Switching Circuits and Logic Design
Prof. Indranil Sengupta
Department of Computer Science and Engineering
Indian Institute of Technology, Kharagpur

Lecture - 23
Design of Adders (Part - III)

So, in the last couple of lectures, if you recall, we talked about the design of adders: the
half adder and the full adder. Then we saw how we can combine them to create parallel
adders. We have seen two kinds of parallel adders so far: the ripple carry adder, where the delay increases
with the number of bits, and the carry look ahead adder, where the delay is constant but the
hardware complexity, the number and sizes of the gates, increases very rapidly with the
number of bits.

So, in practice we need some kind of trade-off. We cannot design a 64-bit carry look ahead
adder because the gates would become too complex, but we also cannot design a 64-bit ripple
carry adder, because the carry would have to ripple through all these stages. So we have to have
some kind of compromise; let us look into it. This is the third part of adder design.

(Refer Slide Time: 01:22)

Here I shall not be going into too much detail; I will just try to give you the basic idea
and the concept. Say we are trying to design a 16-bit adder, and suppose we have
already designed 4-bit carry look ahead adder modules, as discussed in the last lecture.
With such 4-bit carry look ahead adder modules available, we can add
two 4-bit numbers and generate a 4-bit sum. Now, one thing you can always do is
cascade these 4-bit look ahead adders like this. For this 4-bit carry look ahead adder, if you
recall, we need 5 δ time to generate the carry and 8 δ time to generate the
sum.

(Refer Slide Time: 02:18)

So, this sum will be generated after time 8 δ, and this carry is generated here, so
again after some delay the next one will be generated. You can calculate the total time taken:
there is still a rippling of the carry going on across the 4-bit carry look ahead adder stages,
but within each block the time is constant. This is the basic idea.

(Refer Slide Time: 03:01)

So, carry propagation between modules is still there.

(Refer Slide Time: 03:05)

Let us do a simple calculation. One thing we can do is use a second level of the carry
look ahead mechanism. In the previous diagram we had 4-bit carry look
ahead adder modules with carry propagation between the
modules. So, what we can do is have a higher-level carry look ahead scheme,
where all these inter-module carries are again generated in parallel.

So, you do not need this carry propagation anymore; the same concept we used for the
carry look ahead adder is used at the higher level also, to generate C4, C8, C12 and
C16 (in the previous slide, these were the carries between modules). If I have a higher-level
carry look ahead module that can generate these in parallel, then the rippling effect is no
longer required.

(Refer Slide Time: 04:22)

I am showing a simple delay calculation here. This higher-level carry look ahead block, as you
know, requires only two stages, AND followed by OR, so it adds a gate delay of 2 delta. For a
16-bit adder built as 4-4-4-4, the original single-level cascaded carry look ahead scheme would take
14 δ, while the modified two-level scheme reduces this to 10 δ. I am not going into the
details here, but just telling you that with a modified two-level carry look ahead
scheme the delay can be reduced; that is the basic idea.

(Refer Slide Time: 05:05)

So, this table gives a quick comparison for certain typical values of n, from 4 up to 256. If we
have a normal ripple carry adder, the total delay, if you
recall, is (2n+1)δ.

So, it goes up like 9, 33, 65 and so on, up to 513. But for the carry look ahead adder, we build
in terms of 4-bit modules: 4 such give 16 bits, 4 of
those give 64, 4 of those give 256 and so on. If you do the calculation you will see that the
delays are much less, and between two such sizes they do not change. So, above 16 the delay
becomes 12.

(Refer Slide Time: 06:05)

Up to 64 it is 12, and above 64, up to 256, it will be 14; it increases by 2 each time.

So, for every factor of 4, as you go from 4 to 16 to 64 to 256, your delay increases by 2:
for n = 4 it is 8 δ, for 16 it is 10 δ, for 64 it is 12 δ and for 256 it is 14 δ. And any
value in between takes the same delay as the next size up: 128 will also be 14 delta, as you
can see, and 32, for example, is 12 delta.

So, you see, the carry look ahead adder delay is much, much less compared to the
ripple carry adder; this is one point.
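The comparison table can be reproduced with a small script (an illustration under the lecture's assumed delay model: (2n+1) delta for ripple carry, and 8 delta for a 4-bit CLA plus 2 delta per extra level of lookahead; function names are mine):

```python
def ripple_delay(n):
    return 2 * n + 1                 # ripple carry sum delay in units of delta

def cla_delay(n):
    # count the 4-bit lookahead levels needed to cover n bits
    # (integer math to avoid floating-point log issues)
    levels, size = 1, 4
    while size < n:
        size *= 4
        levels += 1
    return 8 + 2 * (levels - 1)      # 8 delta for n = 4, +2 delta per level

for n in (4, 16, 32, 256):
    print(n, ripple_delay(n), cla_delay(n))
```

This reproduces the figures quoted above: ripple delays 9, 33, 65, 513 against lookahead delays 8, 10, 12, 14.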

(Refer Slide Time: 06:58)

Now, let us look at some other kinds of adders. Of course, we shall not go into too much
detail; these are slightly advanced adders that can generate the sum
with an average delay much smaller than conventional adders. Let us
see the basic concept. The first kind of adder we talk about is called a carry select adder. The
idea is very simple. Consider several addition
modules connected in cascade, like the 4-bit adders we have talked about.

For one stage of a 4-bit adder, the carry comes from the previous stage and the carry out
goes to the next stage; let us consider one such stage. Now, in that stage the input
carry can be either 0 or 1, but we do not know which at time t = 0; we
have to wait some time for the carry to be generated, and only
then will we know what it is. The idea is to keep two concurrent
versions of the hardware: we use two adders, one doing the
addition in advance assuming the carry is 0, the other doing it in advance assuming the
carry is 1, even though we do not yet have the actual carry, ok.

So, we carry out the addition twice: once with carry 0, once with carry 1. Later, when the
previous stage has generated the actual carry, these two additions are already
complete and waiting. Now we can have a multiplexer controlled
by this actual carry: either one result or the other gets
selected; the correct sum is selected by the multiplexer, right.

(Refer Slide Time: 09:25)

A multiplexer, if you recall, is a circuit with two inputs, one
output and one select line. It selects one input or the other depending on the value of
the select line: if S is 0 the first is selected, if S is 1 the second.

(Refer Slide Time: 09:51)

I am showing you here how a carry select adder looks; consider a 4-bit adder module like
the one we saw here.

So, now we have two parallel 4-bit adders, one here and one here. One of them
adds assuming the carry in is 0, the other assuming the carry in is 1. Later, when the carry in
actually becomes available from the previous stage, these adders have already
completed their additions, so both sums are already
available here.

So, depending on the actual value of the carry in, you select either the topmost adder or the
bottommost adder; one of the two results gets
selected as the final sum. Similarly, you select one of the two carries
depending on Cin, so you get the final carry out. The idea is that by investing
in hardware, using double the amount, we do some speculation:
we do the addition both for carry 0 and for carry 1,
keep both sums ready, and once the carry is available we select one of the two sums using a
multiplexer.

So, we avoid the addition delay; only the multiplexer delay, which is much smaller, is
incurred, ok. This is an example where we have 4-bit modules. Now, the
second point is that if we have a multi-stage addition like this, the number of full adders in each
stage can be either uniform or variable; let us see.
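One carry-select stage can be sketched like this (a behavioral model of my own; the 2-to-1 multiplexer is just a conditional, and bit lists are LSB first):

```python
def block_add(a_bits, b_bits, cin):
    """A small ripple-carry block; returns (sum_bits, carry_out)."""
    carry, out = cin, []
    for a, b in zip(a_bits, b_bits):
        out.append(a ^ b ^ carry)
        carry = (a & b) | (a & carry) | (b & carry)
    return out, carry

def carry_select_stage(a_bits, b_bits, actual_cin):
    sum0 = block_add(a_bits, b_bits, 0)   # speculative addition with carry-in 0
    sum1 = block_add(a_bits, b_bits, 1)   # speculative addition with carry-in 1
    # 2-to-1 multiplexer: the real carry-in picks the precomputed result
    return sum1 if actual_cin else sum0
```

Both `block_add` calls run "in parallel" conceptually; only the final select depends on the real carry, which is why the stage adds just a multiplexer delay once its inputs are ready.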

(Refer Slide Time: 11:52)

This is an example of uniform-size stages: a 16-bit adder with a
first, second, third and fourth stage, each a 4-bit adder. For the first
stage there is no confusion, because the carry in is directly available.

So, there you do not need two copies of the adder, only one; but for the other stages there are two
adders, adding with carry 0 and carry 1 in parallel. Once the carry becomes
available, say the first stage computes its carry, this carry selects
S4, S5, S6 and S7 and also the correct carry here; this correct carry in turn
selects S8, S9, S10, S11 and the next carry, which then selects S12, S13, S14, S15 and Cout.

So, you see, here also there is a rippling effect, but it is much faster because
it goes only through multiplexers, not through full adders. The total delay of this 16-bit
adder will now be only 4 full adder delays, because each stage takes a 4-bit
addition delay, assuming ripple carry adders inside the stages, plus the 3 multiplexer delays,
right. If the stages are carry look ahead adders, it will be even faster.

(Refer Slide Time: 13:44)

Now, you can improve on this in a very intelligent way by using variable-sized
adders. Variable-sized means we use a 2-bit adder, another 2-bit adder, a 3-bit adder, a 4-bit
adder and a 5-bit adder, because we do not divide the input bits equally.

So, the first stage takes A0 and B0; the second also takes two bits; then come the 3-bit, the
4-bit and the 5-bit stages, and ultimately you get a 16-bit adder. The advantage is this: when the
first stage has completed its 2-bit addition, the second stage's 2-bit additions are also
complete, so after one multiplexer delay the carry gets here; that is two full adder delays plus
one mux delay. By this time the 3-bit adder has also finished its additions, because, assuming
one full adder delay and one mux delay are equal, the delay so far was two full adder delays
plus one mux delay; by the time this carry is generated, that adder is already ready with its
sums and carries.

So, the carry can be applied directly, and by the time it arrives, after one more mux delay, the
4-bit adder is also ready, because 3 plus 1 is 4. Similarly, after another mux delay the 5-bit
adder is ready. So, you never have to wait for the later stages: whenever the
carry arrives, the adders are already ready with their sums. You need only the two full adder
delays of the first stage plus the mux delays.

So, the delay gets reduced, right. I am not going into too much detail about this, as I told you;
I am just trying to give the idea. This is sometimes called the 2-2-3-4-5 configuration; that is
how the number of bits is divided.

(Refer Slide Time: 15:57)

The last kind of adder I talk about here is called a carry save adder. It is
a little beyond the scope of this course to discuss its applications; carry save adders are very
useful in the design of multipliers, but we shall not discuss that here. We shall
just see the basic concept of the carry save adder and how you can design it. A
normal adder adds two numbers, right: you have two input numbers, and you generate a sum
and a carry. But in a carry save adder you add 3 numbers, X, Y and Z.

So, you add 3 numbers and generate a sum and a carry; these are not 1-bit
numbers, they can be n-bit numbers. Now you shall see how it works. There
are 3 operands, but the carry save adder is nothing but a set of independent full adders
without carry propagation. Let us try to understand what is
meant by no carry propagation, and why a parallel adder may still be required at the last stage.

(Refer Slide Time: 17:21)

Let us take an example. Suppose I have 3 numbers X, Y, Z that we are adding; as I said,
for a carry save adder we talk about adding 3 numbers, and we
show the carry and the sum separately.

So, when you add X, Y, Z column by column: 1 1 1 gives 1 with a carry of 1 (we are not
propagating the carry, we add each column independently); 1 0 0 gives 1 with a carry of 0;
0 0 0 gives 0 with a carry of 0; 0 1 1 gives 0 with a carry of 1; and 1 1 0 gives 0 with a carry
of 1. Now, what do we do? You see, we have accumulated the sum bits in parallel, without carry
propagation, and also the carry bits coming out of the different columns. We shift the carry
vector by 1 position and add it to S, and whatever we get is the actual sum of X, Y and Z.

So, the idea is that we are not doing the conventional kind of addition; instead of passing the
carries from one stage to the other, we do parallel addition without carry propagation,
generating the sum vector separately and the carry vector separately. We use a set of
independent full adders: each full adder has 3 inputs and just 2 outputs, sum and
carry, with no carry propagation from one stage to the next.
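The carry save step can be modeled directly (an illustrative sketch of my own; the key identity is that X + Y + Z always equals S plus the carry vector shifted left by one):

```python
def carry_save_add(x_bits, y_bits, z_bits):
    """n independent full adders: reduce 3 operands to (sum_vec, carry_vec)."""
    s = [a ^ b ^ c for a, b, c in zip(x_bits, y_bits, z_bits)]
    c = [(a & b) | (a & d) | (b & d)
         for a, b, d in zip(x_bits, y_bits, z_bits)]
    return s, c

def csa_sum(x, y, z):
    """Integer wrapper: CSA step, then one final carry-propagating addition."""
    n = max(x, y, z).bit_length() or 1
    xb = [(x >> i) & 1 for i in range(n)]
    yb = [(y >> i) & 1 for i in range(n)]
    zb = [(z >> i) & 1 for i in range(n)]
    s, c = carry_save_add(xb, yb, zb)
    s_int = sum(b << i for i, b in enumerate(s))
    c_int = sum(b << i for i, b in enumerate(c))
    return s_int + (c_int << 1)      # last-stage parallel adder, carry shifted
```

For instance, `csa_sum(5, 6, 7)` first produces the sum vector 4 and the carry vector 7, and the final addition 4 + 14 gives 18 = 5 + 6 + 7.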

So, this can be done in constant time, and after that, at the last stage, you need an adder
to add S and C after shifting C by 1. So you do need a conventional adder at the end, but not
before that. But why do we do this? What is the advantage?

(Refer Slide Time: 19:37)

The advantage appears when you have many numbers to add. An n-bit carry save adder looks like
this: a set of n independent full adders, each taking one bit from each of the
inputs X, Y and Z, and generating its carry and sum separately, right.

(Refer Slide Time: 20:01)

So, let us take some examples. Suppose I have 3 numbers, just as shown
previously: X, Y, Z go in, and at the output I generate a sum and a carry. So,
for m equal to 3, I use 1 carry save adder with the 3 numbers applied
at the input.

So, sum and carry are generated, and as I said, at the last stage after shifting I need one parallel adder to get the final sum. Now for m equal to 3 we do not see an advantage; we are unnecessarily complicating things, because ultimately we need an adder at the end. The advantage becomes apparent for higher values of m. Suppose you want to add 4 numbers: the first 3 numbers are applied here and the fourth number is applied here. You see, we use a carry save adder in the first stage to generate a sum and a carry independently; this sum, this carry and the fourth number are fed to another carry save adder, which generates a sum and a carry.

And finally, these are applied to a parallel adder with carry propagation to generate the final result. So these two CSAs require only constant time without carry propagation; only the last stage uses carry propagation. Let us say m equal to 6: there are 6 numbers I want to add. In the first stage I use 2 carry save adders, so 1 2 3 4 5 6 I am adding; so 1 sum and 1 carry, 1 sum and 1 carry.

So, I can use another carry save adder to add 2 of these and 1 of these; I finally have 1 sum and 1 carry, and also this sixth one. You see, ultimately you have to bring it down to 2 operands; that is why we need a tree of such carry save adders. The parallel adder at the last stage takes only 2 inputs, so you go from 6 down to 4 down to 3 down to 2 through a chain of such carry save adders. The advantage is that each carry save adder is just a set of independent full adders; they have a delay of 3 delta each, 3 delta, 3 delta, 3 delta, and only at the end is there the rippling of the parallel adder. So you are able to add 6 numbers together in 9 delta plus the delay of 1 parallel adder; this is the main advantage here. But, as I mentioned, these are slightly advanced kinds of adders, and we do not need to go into too much detail about them; this is just to give you an idea that carry select and carry save adders are also possible, and they have very interesting applications in some other areas.
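As a rough sketch of the tree idea (my own illustration, not the exact structure on the slide): repeatedly replacing any three operands by a sum word and a shifted carry word leaves just two operands for the single carry-propagate adder at the end.

```python
def csa(x, y, z):
    # One carry-save level applied to whole words: bitwise sum, bitwise carry.
    return x ^ y ^ z, ((x & y) | (y & z) | (x & z)) << 1

def csa_tree_sum(nums):
    """Reduce m numbers down to two with carry-save adders; only the final
    addition propagates carries."""
    nums = list(nums)
    while len(nums) > 2:
        x, y, z = nums.pop(), nums.pop(), nums.pop()
        nums += list(csa(x, y, z))
    return nums[0] + nums[1]   # the one carry-propagate addition

assert csa_tree_sum([3, 5, 7, 9, 11, 13]) == 3 + 5 + 7 + 9 + 11 + 13
```

Each `csa` call stands for one constant-time CSA level; only the last `+` models the rippling parallel adder.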

So, with this we come to the end of this lecture. Over the past 3 lectures we basically talked about various kinds of adders. Now in the next few lectures we shall be looking at how

we can design other basic combinational circuit modules and how they can be used in certain applications. Logic design, where we want to realize or implement some function, is of course one application, but there can be other kinds of applications as well. These we shall be discussing in our next lectures.

Thank you.

Switching Circuits and Logic Design
Prof. Indranil Sengupta
Department of Computer Science and Engineering
Indian Institute of Technology, Kharagpur

Lecture - 24
Logic Design (Part -I)

In the last few lectures, if we recall, we have been talking about the design of different kinds of adders and subtractors, which are very important for the design of more complex systems. Addition is an integral part of most components and systems that we design. So from today onwards we shall be discussing some of the other basic building blocks that are available for designing and implementing functions. Today we shall be starting our discussion on general logic design.

So, logic design is general in the sense that we are not talking about specific applications like addition or subtraction, but about some generalized building blocks which we can use to implement any arbitrary function. Let us see what we plan to cover.

(Refer Slide Time: 01:26)

So, as I said, we shall be talking about some of the common functional blocks which we normally use in the design of digital systems. Specifically, the functional blocks that we shall be considering are: the multiplexer, which we have already seen earlier;

when we discussed the CMOS switches and transmission gates, we mentioned how we can very efficiently implement multiplexers using those transmission gates.

So, we shall be revisiting multiplexers and the kinds of applications and designs we can do using them; then we shall talk about demultiplexers and decoders, and then encoders and comparators. All these blocks are quite important in the design of more complex digital systems. For those of you who will be going into the design of complex systems, you will find that these modules are used repeatedly in various scenarios.

(Refer Slide Time: 02:37)

So, we start with our discussion on multiplexers, which are also called data selectors. Why do we use the term data selector for a multiplexer? Recall what a multiplexer is: a block, a black box, which has a number of inputs and 1 output; depending on some select inputs, one of the input lines is selected and it goes to the output.

So, in a sense we are selecting one of the several data inputs at a time; that is why it is also called a data selector. Let us talk in slightly general terms: consider a multiplexer which has n data inputs, call them D0 up to Dn−1, and m select lines. So we have the multiplexer as a box; at the input we have the n data lines D, and then we have the select lines S. And we have a single output f. There is one output line f, and the way the

multiplexer works is this: depending on what we apply on the select lines, one of the data inputs is selected and goes to the output. So we consider the m bit select line as a binary number.

If we consider the decimal equivalent: when S is 0, the first input D0 is selected, and we have f = D0; D0 goes to f. If S is 1 then D1 is copied to f, f = D1; if S = 2, similarly D2 is copied, and so on. Now usually there is a relationship between n and m: the number of select lines should be large enough so that any of the n inputs can be selected. This condition can be expressed as n = 2^m, or alternatively m = log2(n); these two are equivalent. Because the number of inputs is 2^m and there is 1 output, we also refer to this kind of multiplexer as a 2^m-to-1 multiplexer.

(Refer Slide Time: 05:41)

So, this is how the multiplexer will look: there are n data inputs, there are m select lines S0 up to Sm−1, and there is a single output, as I have just said. This is how a multiplexer looks.

(Refer Slide Time: 06:04)

Now, let us talk about specific small multiplexers, for specific values of m and n, and let us see how we can design or implement them. Recall that earlier, when we were discussing CMOS logic, we showed how to design a multiplexer using transmission gates. Here we shall be showing designs using basic logic gates like AND, OR, NOT, NAND, NOR, etcetera. Let us start with the simplest and smallest kind of multiplexer, the 2-to-1 multiplexer: there are 2 inputs D0 and D1 and there is 1 output. Because there are only 2 inputs, n = 2, we require m = 1, since 2^m = n.

So, there will be a single select input. Here I have shown an expression; before coming to it, let us try to see how this multiplexer works.

(Refer Slide Time: 07:12)

Let us try to construct the truth table. There are three inputs: S0, then let us show D1 first and D0; f is the output. So there are 8 possibilities: 0 0 0, 0 0 1, 0 1 0, 0 1 1, 1 0 0, 1 0 1, 1 1 0 and 1 1 1. According to the definition of the multiplexer, if S0 = 0 then we should have f = D0; D0 is selected. So the first four rows show S0 equal to 0, and whatever is in D0 goes to f: 0 1 0 1. Similarly, when S0 = 1, D1 is selected; the last four rows show this case, and D1, 0 0 1 1, is selected. So this is the truth table of the multiplexer.

Now, let us construct a Karnaugh map and try to minimize this. On this side let us show D1 and D0, 0 0, 0 1, 1 1, 1 0, and on this side we show S0, 0 and 1. The true minterms are 0 0 1 here, 0 1 1 here, then 1 1 0 here and 1 1 1 here. So the minimum set of cubes is these two: the first cube indicates S̄0 AND D0, as in this expression, and the other cube, covering 1 1 and 1 0, is S0 AND D1. So after constructing the truth table and minimizing, we have arrived at the expression f = S̄0·D0 + S0·D1. But we could have arrived at this expression without going through the minimization procedure, just by thinking about the functional description of a multiplexer: if S0 is 0 we get D0, if S0 is 1 we get D1.

So, how do we express it? If S0 is 0, which means S̄0, we get D0: the term S̄0·D0; or if S0 is 1, we get D1: the term S0·D1. So we can write down this expression straight away without going through the minimization. Let us move on.
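The derived expression can be checked exhaustively in a few lines of Python; a sketch using 0/1 integers for logic levels:

```python
def mux2(s0, d0, d1):
    # f = S0'.D0 + S0.D1  -- the sum-of-products form of a 2-to-1 mux
    return ((1 - s0) & d0) | (s0 & d1)

# Exhaustive check against the behavioral definition "select d0 or d1".
for s0 in (0, 1):
    for d0 in (0, 1):
        for d1 in (0, 1):
            assert mux2(s0, d0, d1) == (d0 if s0 == 0 else d1)
```

Eight cases cover the whole truth table, so the check is complete.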

(Refer Slide Time: 10:35)

Let us talk about a larger multiplexer, the 4-to-1 multiplexer. For a 4-to-1 multiplexer the number of inputs n is 4; therefore the number of select lines m will be log2(4), which is 2: S0 and S1. In the same way, depending on the values of S1 S0, one of the inputs will be selected.

So, just following the principle I mentioned for the 2-to-1 mux, we can straight away write down the sum of products expression for the output function. How do we write it from the functional behavior? If S1 S0 is 0 0 then we should get D0; we write this as S̄1 S̄0 D0, which equals D0 exactly when both select lines are 0. If it is 0 1, that means S̄1 S0, then D1; if it is 1 0, that means S1 S̄0, then D2; and when both of them are 1, S1 S0, then D3. So f = S̄1 S̄0 D0 + S̄1 S0 D1 + S1 S̄0 D2 + S1 S0 D3. Now you see this is a 6 input function: D0 to D3 and S1, S0.

So, if I construct a truth table in the normal way, there would be 2^6 or 64 rows; the truth table would be too big to draw. But again, because of the functional characteristics of a multiplexer, we can show the truth table in a much more compact way using some entries as don't cares. How do we do it? You see, the first two rows of this truth table have the select lines at 0 0; when they are 0 0, D0 is selected, so we apply D0 = 0 and D0 = 1, and the output is f = 0 and f = 1 respectively. Under this condition the values of D1, D2, D3 are all don't cares; it is only D0 which is copied to f. Similarly, when I apply 0 1, D1 is copied and the others are don't cares; if I apply 1 0, D2 is copied; if I apply 1 1, D3 is copied. So you see we can very concisely

express this truth table in only 8 rows; this is possible because of the nice functional behavior of a multiplexer.
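The same sum-of-products form for the 4-to-1 case can be verified against the behavioral description; again a sketch with 0/1 integers:

```python
from itertools import product

def mux4(s1, s0, d):
    # f = S1'S0'.D0 + S1'S0.D1 + S1S0'.D2 + S1S0.D3
    n1, n0 = 1 - s1, 1 - s0
    return ((n1 & n0 & d[0]) | (n1 & s0 & d[1]) |
            (s1 & n0 & d[2]) | (s1 & s0 & d[3]))

# All 64 rows of the full truth table, checked against "pick d[select]".
for s1, s0 in product((0, 1), repeat=2):
    for d in product((0, 1), repeat=4):
        assert mux4(s1, s0, d) == d[(s1 << 1) | s0]
```

The 64-case loop is exactly the "too big to draw" truth table, which a machine checks effortlessly.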

(Refer Slide Time: 13:36)

Let us now see how we can build larger multiplexers when only smaller multiplexers are available to us. Let us take a couple of examples. In the first example, suppose we have 2-to-1 multiplexers available, but we require a 4-to-1 mux. What is a 4-to-1 mux? Recall again: there will be 4 inputs D0, D1, D2 and D3, the output will be f, and there will be two select lines S0 and S1. Now you see how we have designed this multiplexer network: we have used 3 multiplexers in two levels; in the first level we are using 2 multiplexers, and both of them use S0 as the select line, whereas the next one uses S1 as the select line.

Let us try to see the different scenarios. Say S1 is 0; suppose we have applied 0 and 0. What will happen? Look at the last multiplexer first: if S1 is 0 then its first input is selected, this one. So we need not see what the other multiplexer is doing, because it has not been selected; the first multiplexer is selected. S0 is also 0, and S0 = 0 means ultimately D0 gets selected. So with S1 S0 = 0 0 I am selecting D0. If it is 0 1, the same way: S1 = 0 selects this, and since S0 is 1, D1 is selected, so D1 goes out. If I apply 1 0, S1 = 1 selects not this but this line, and it is the lower multiplexer which gets selected; S0 is 0 here.

So now D2 gets selected and D2 goes out; and if I finally apply 1 1, S0 is also 1, so now D3 is selected. So you see that it works perfectly as a 4-to-1 multiplexer. So if we have only 2-to-1 multiplexers available, we can very easily construct a 4-to-1 multiplexer. Let us take another example.
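That two-level construction can be mirrored directly in code; a sketch where `mux2` is the behavioral 2-to-1 selector:

```python
from itertools import product

def mux2(s, d0, d1):
    return d0 if s == 0 else d1

def mux4_from_mux2(s1, s0, d0, d1, d2, d3):
    # First level: two 2-to-1 muxes, both selected by S0.
    upper = mux2(s0, d0, d1)
    lower = mux2(s0, d2, d3)
    # Second level: one 2-to-1 mux selected by S1.
    return mux2(s1, upper, lower)

# Exhaustive check: behaves exactly like a flat 4-to-1 multiplexer.
for s1, s0 in product((0, 1), repeat=2):
    for d in product((0, 1), repeat=4):
        assert mux4_from_mux2(s1, s0, *d) == d[(s1 << 1) | s0]
```

Note how S0 picks within each pair and S1 picks the pair, matching the network described above.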

(Refer Slide Time: 16:38)

Let us say we now want an 8-to-1 multiplexer, and we assume that we have smaller multiplexers available with us, meaning 4-to-1 multiplexers and also 2-to-1 multiplexers. You see the network looks similar: we have used multiplexers in two levels, with two 4-to-1 multiplexers in the first level, which are selected by S0 and S1, with the first 4 data inputs connected here and the last four data inputs connected here; their outputs feed the second-level multiplexer, which is selected by S2.

Let us try to understand the operation with respect to the truth table; this is the functional truth table with respect to the select lines S2, S1, S0. If it is 0 0 0 then D0 is supposed to be selected, 0 0 1 gives D1, 0 1 0 gives D2, and so on up to 1 1 1 giving D7. Let us look at it one by one. Consider the first scenario, where the select lines are 0 0 0: if S2 is 0 this will be selected, and if S1 S0 are both 0 then this input is selected, which is D0; fine. Let us take another case, say this one.

Because S2 is again 0, this one is selected again, but S1 S0 is 1 1, and 1 1 means 3, so this input is selected, which is D3; fine. Let us take another example; consider this one.

Here S2 is 1, so the second input of the last multiplexer is selected; that means this multiplexer. And on S1 S0 we have applied 1 0, which is 2, so input 2 of that multiplexer is selected, which is D6. So you see this works exactly like an 8-to-1 multiplexer. In this way, if we are given any smaller multiplexers and asked to design a larger multiplexer, we can systematically construct the multiplexer network like this. I have shown 2 small examples; you can work out larger examples similarly.
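The mixed-size construction can be sketched the same way (behavioral selectors standing in for the hardware blocks):

```python
def mux2(s, a, b):
    return a if s == 0 else b

def mux4(s1, s0, d):
    return d[(s1 << 1) | s0]

def mux8(s2, s1, s0, d):
    # Two 4-to-1 muxes on S1,S0 in the first level, one 2-to-1 on S2 after.
    return mux2(s2, mux4(s1, s0, d[:4]), mux4(s1, s0, d[4:]))

d = list(range(8))             # distinct markers so the routing is visible
assert mux8(1, 1, 0, d) == 6   # S2 S1 S0 = 1 1 0 selects D6
```

The S2 bit chooses between the lower half D0..D3 and the upper half D4..D7, exactly as in the truth-table walkthrough.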

(Refer Slide Time: 19:20)

Now, let us talk about how we can implement logic functions using multiplexers. First let us see: if I have a 2^n-to-1 multiplexer, how do we implement an n variable function? It is very simple. Here I am showing an example of a 3 variable function, so I need a multiplexer with 8 inputs and 3 select lines. What we are saying is that we shall connect the input variables directly to the select lines: we have A B C, and we connect them like this, A, B and C. So what will happen? If A B C is 0 0 0 the first line is selected, if it is 0 0 1 the second line is selected, 0 1 0 the third line, and so on.

So what do we do? We look at the output column of the truth table, simply copy it and apply it to the data inputs: 0 1 0 0 1 1 1 and 0; the output is f. You see, the implementation of any function using a multiplexer is trivial, and the same multiplexer can be used to implement any arbitrary truth table if you just change this input 0 1 assignment.
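The "truth table on the data inputs" idea can be sketched in a few lines; the column values are the ones read off in the example, and the row ordering ABC = 000…111 is my assumption:

```python
def mux_function(truth_column, a, b, c):
    """An 8-to-1 mux as a programmable function generator: A,B,C drive the
    select lines and the truth-table output column drives the data inputs."""
    return truth_column[(a << 2) | (b << 1) | c]

# The column 0 1 0 0 1 1 1 0 from the example, rows ordered ABC = 000..111.
col = [0, 1, 0, 0, 1, 1, 1, 0]
assert mux_function(col, 0, 0, 1) == 1   # row ABC = 001
assert mux_function(col, 1, 0, 0) == 1   # row ABC = 100
```

Changing `col` reprograms the "circuit" without touching anything else, which is exactly the point made next.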

Basically, you are applying the truth table, whatever it is, to the inputs of the multiplexer, and the multiplexer implements the required function. You can say this is something like a programmable logic function generator: the circuit stays the same, still that multiplexer; you only change the 0 1 pattern at the inputs, you get another truth table, and the corresponding function gets implemented. So it is very simple. Now in this context let me ask you one thing, something we possibly did not mention earlier; let us do this.

(Refer Slide Time: 21:47)

Let us suppose we have an n variable function. My question is: how many distinct functions are possible? We are raising this question because we are talking about programmability: just by changing the input 0 1 pattern I can change the function, I can go from one function to another. So my question is, in 3 variables for example, how many functions are possible in total? Let us think in this way: think about this truth table of a 3 variable function, n = 3; the output column here specifies the function. So how many bits are there in the output column? 8, since there are 2^3 = 8 rows. Now consider this column as an 8 bit number; how many possible 8 bit numbers can there be? You have learnt about binary numbers: in 8 bits there are 2^8 possibilities.

So, the total possible number of these 8 bit vectors will be 2^8, which means 2^(2^3); in general, for n variables it will be 2^(2^n). This is how you get the total number of functions

that are possible in an n variable function. For the multiplexer case also, with the same 8-to-1 multiplexer you can implement that many functions, 2^8 or 256 functions, because there are 256 different truth tables we can generate by changing this 0 1 pattern.
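The count follows directly from the argument above:

```python
def num_switching_functions(n):
    # An n-variable truth table has 2**n rows, and each row's output bit
    # can be chosen freely, giving 2**(2**n) distinct functions.
    return 2 ** (2 ** n)

assert num_switching_functions(1) == 4    # 0, 1, x, x'
assert num_switching_functions(3) == 256  # the 8-to-1 mux case
```

The growth is doubly exponential: already 65536 functions at n = 4.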

Now, let us try to improve upon this. For an n variable function we required a 2^n-to-1 multiplexer; what we are saying now is that we can do a little better. For an n variable function let us use a smaller multiplexer: a 2^(n−1)-to-1 multiplexer is sufficient.

(Refer Slide Time: 24:17)

Now here again I take the same example of a 3 variable function, so n = 3. How big a multiplexer do I require? 2^2-to-1; that means 4-to-1: there are 4 inputs, 1 output and 2 select lines. Now the question is how to apply the inputs to this multiplexer, because in the earlier case there were 8 data inputs and we applied the output column to them directly.

So, the procedure that we follow is this: connect n−1 of the variables to the select lines. There are 3 variables A B C; say I connect A here and B here, two of the variables, to the select lines; C is still left. Then partition the truth table into groups of 2 rows. Consider the first two rows and look at the output against the variable that we have not yet assigned: if C is 0, f is 0; if C is 1, f is 1. That means f and C are the same. So at the first data input, for the A B = 0 0 case, we directly

connect the variable C; we directly connect C here. Look at the next pair of rows: here the outputs are 0 and 0; irrespective of the value of C it is always 0.

So, you apply the constant value 0 at the second data input, where A B is 0 1. For the next two rows, where A B is 1 0, irrespective of C the output is always 1, 1 1, so you apply a 1 here. And for the last pair, the 1 1 case where D3 is selected: if C is 0, f is 1; if C is 1, f is 0; that means the inverse of C. So I use a NOT gate and apply C̄ here. This will be my design. So if I do this, at most I will require one additional NOT gate to generate C̄, and of course I may have to apply the constants 0 and 1, and C itself. That is why the rule says: apply the remaining variable, its complement C̄ if required, and 0 or 1 at the data inputs. Let us take another example.
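The pairing rule can be automated: for each select combination A B, the two rows of the truth table decide whether the data input is 0, 1, C or C'. A sketch, with row order ABC = 000…111 assumed:

```python
def fold_onto_mux4(col):
    """Map an 8-row truth column f(A,B,C) onto a 4-to-1 mux with A,B on the
    select lines; returns what to wire to each data input D0..D3."""
    wires = []
    for ab in range(4):
        pair = (col[2 * ab], col[2 * ab + 1])        # outputs for C=0, C=1
        wires.append({(0, 0): "0", (1, 1): "1",
                      (0, 1): "C", (1, 0): "C'"}[pair])
    return wires

# The 3-variable example above yields C, 0, 1, C' on D0..D3.
assert fold_onto_mux4([0, 1, 0, 0, 1, 1, 1, 0]) == ["C", "0", "1", "C'"]
```

Only four residue functions of one variable exist (0, 1, C, C'), which is why one NOT gate is the worst case.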

(Refer Slide Time: 27:44)

This is a bigger example; I consider a 4 variable function. For a 4 variable function I will require a 2^3-to-1, that is, an 8-to-1 multiplexer.

So, we apply A, B and C here, and we apply the same rule. Consider the first two rows: the output is 0 0 irrespective of D; D is the last variable, and with A B C = 0 0 0 input D0 is selected, so we apply a constant 0 here. For the next two rows the output is 1 and 1, always 1; apply a constant 1 here. Next two rows: if D is 0 this is 1, if D is 1 this is 0, so apply D̄. Next two: again 0 and 0, apply 0. Next two: if

D is 0, f is 0, and if D is 1, f is 1, so I apply D directly; next two, it is 1 and 1, so I apply 1; next two, again 0 maps to 0 and 1 maps to 1, so I apply D; and the last two, 0 1 1 0, so I apply D̄. This will be my circuit.

So, here we are folding the truth table into the data inputs, applying either 0, 1, the last variable D, or its complement D̄. So I need a multiplexer, and just to generate D̄ I need a NOT gate; that is all. So you see, using a multiplexer we can very conveniently implement any arbitrary logic function. We sometimes call this kind of multiplexer a generalized function generator because of this property.

(Refer Slide Time: 29:50)

Let us look into one more thing: we can implement arbitrary functions using 2-to-1 multiplexers only. There is a principle called Shannon's decomposition theorem, which we shall be looking at again in some later lectures. What does Shannon's decomposition theorem say? It says that if I have an n variable function,

let us say f with input variables x1 to xn, we can decompose this function with respect to any of these variables. Say we are decomposing with respect to x1. Then the function can be written as x̄1 times the same function with x1 replaced by 0, plus x1 times the same function with x1 replaced by 1: f = x̄1·f(0, x2, ..., xn) + x1·f(1, x2, ..., xn). This is Shannon's decomposition theorem. You can do this with respect to any of the variables, and in a compact form we write it as f = x̄1·f|x1=0 + x1·f|x1=1, where f|x1=0 means f with x1 set to 0 and f|x1=1 means f with x1 set to 1.

Now think of it this way: suppose I have a 2-to-1 multiplexer, with 2 inputs, 1 output and a select line. Suppose I connect x1 to the select line and the output represents my function f. Now if I connect f|x1=0 here and f|x1=1 here, does this multiplexer not implement exactly this: if x1 is 0 then f|x1=0, if x1 is 1 then f|x1=1? So every time we do Shannon's decomposition with respect to a particular variable, we can use a multiplexer to realize it. This is the basic idea.

(Refer Slide Time: 32:17)

So, what are we saying? Each application of Shannon's decomposition theorem, as I said, is like a 2-to-1 multiplexer, where the variable is applied to the select input and f|x1=0 and f|x1=1 are applied to the data inputs.

(Refer Slide Time: 32:33)

Let us take a simple example; consider a 3 variable function f = ĀB + BC + AC̄.

Suppose we apply Shannon decomposition with respect to variable A. What is the decomposition? Ā times f with A substituted by 0: the last term AC̄ vanishes and Ā becomes 1, leaving B + BC. Plus A times f with A substituted by 1: the first term ĀB vanishes and AC̄ becomes C̄, leaving BC + C̄. If we apply minimization, B + BC is nothing but B, and BC + C̄ is nothing but B + C̄. So this decomposition you can express like this: you have a mux with A applied at the select input, the output implements f, and the 2 data inputs carry the two cofactor functions, B at the first input and B + C̄ at the other. Now, because the first one is a single variable, I can directly apply B there, no problem; but for B + C̄ I may have to decompose further.

So, similarly, let us decompose B + C̄ with respect to B: we get B̄ times (B + C̄ with B = 0), which is C̄, plus B times (B + C̄ with B = 1), which is 1 + C̄ = 1. So it is as if we are using another multiplexer, decomposing with respect to B: the select line is B and the two inputs are C̄ and 1. So you see, given any function, if you systematically carry out this decomposition with respect to the variables one after the other, you get a network of 2-to-1 multiplexers. This is in fact a very general approach to implement any arbitrary switching function; this we shall be discussing later.

There is a data structure called the binary decision diagram; we shall see that this decomposition can be used in a systematic way to construct that kind of data structure or representation.
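For the example function f = A'B + BC + AC', the decomposition about A and the simplified cofactors B and B + C' can be checked exhaustively; a sketch with 0/1 integers:

```python
from itertools import product

def f(a, b, c):
    # f = A'B + BC + AC'
    return ((1 - a) & b) | (b & c) | (a & (1 - c))

for a, b, c in product((0, 1), repeat=3):
    # Shannon: f = A'.f(0,B,C) + A.f(1,B,C)
    assert f(a, b, c) == ((1 - a) & f(0, b, c)) | (a & f(1, b, c))
    # Simplified cofactors: f(0,B,C) = B and f(1,B,C) = B + C'
    assert f(0, b, c) == b
    assert f(1, b, c) == b | (1 - c)
```

Each assertion corresponds to one level of the 2-to-1 mux network: A at the top select line, then the cofactors on its data inputs.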

So with this we come to the end of this lecture. In this lecture we have basically considered multiplexers: various designs, how to construct larger multiplexers using smaller ones, and also how to implement logic functions using multiplexers.

Thank you.

Switching Circuits and Logic Design
Prof. Indranil Sengupta
Department of Computer Science and Engineering
Indian Institute of Technology, Kharagpur

Lecture – 25
Logic Design (Part - II)

So, we continue with our discussion on Logic Design; this is the second part, logic design part 2. Recall that in the earlier lecture we talked about multiplexers. Continuing with that discussion, we now talk about another basic building block which is similar to a multiplexer but can be considered as its inverse or reverse. This is called a de-multiplexer; let us start with that.

(Refer Slide Time: 00:48)

So, why do we say this is an inverse? It works in the reverse manner compared to a multiplexer. For a multiplexer there are many inputs and one output; depending on the select lines, one of the inputs is selected. But in a de-multiplexer there is one input and many outputs; depending on the select lines, the input is connected to one of the outputs. This is why we call it the reverse or inverse.

So, a de-multiplexer will have n output lines, call them D0 to Dn−1. Just like a multiplexer, there will be m select lines, where n = 2^m, and there will be one data input line, say IN. The way the de-multiplexer functions is

very similar to a multiplexer, just the reverse of it. When S is 0, that means all select bits are 0, IN is connected to D0. If S is 1, IN is connected to D1; if S is 2, IN is connected to D2, and so on. This is called a 1-to-2^m de-multiplexer, because there are n = 2^m outputs.

(Refer Slide Time: 02:34)

So, as I said, a de-multiplexer looks like this: there is one input, n outputs, and select lines which select the output to which the input is connected. Now, you may wonder what the applications of de-multiplexers are; there are many in fact, but one application I would like to highlight is their use in data communication.

(Refer Slide Time: 02:55)

Let us consider a scenario. Suppose I have a multiplexer here, say a 4 input multiplexer, and on the other side I have a de-multiplexer, which here also has 4 outputs. So the multiplexer has one output, the DEMUX has one input, and suppose they are connected, say, over a long distance connection line.

So, the MUX has two select lines, and the DEMUX also has 2 select lines. Say the inputs are connected to some input sources A B C D, and the outputs of the DEMUX are connected to, say, W X Y Z. Now imagine that this is something like a telephone exchange; think of the older telephone exchanges from many years back, where we used to dial numbers and the connection used to get established through switches in the exchange.

So, conceptually this is something like that. Suppose A B C D are subscribers and W X Y Z are also subscribers, and B is trying to make a call to Y. What the telephone exchange will do is apply 0 1 to the select lines here, so that B gets selected, and apply 1 0 to the select lines there, so that this output gets selected. Ultimately B is available on this line and gets connected to Y. So for multiparty communication, this kind of multiplexer de-multiplexer pair can be very useful, as this simple example shows.
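The exchange scenario can be sketched behaviorally; the subscriber names and the 4-way sizes are just for illustration:

```python
def mux4(data, s1, s0):
    # Data selector: route one of four sources onto the shared line.
    return data[(s1 << 1) | s0]

def demux4(x, s1, s0):
    # Data distributor: route the shared line to one of four destinations.
    out = [0, 0, 0, 0]
    out[(s1 << 1) | s0] = x
    return out

# Subscriber B (input 1) calls subscriber Y (output 2) over one shared line.
line = mux4(["A", "B", "C", "D"], 0, 1)   # MUX select 0 1 picks B
assert demux4(line, 1, 0)[2] == "B"       # DEMUX select 1 0 delivers on Y
```

One physical line is time-shared among many source/destination pairs simply by changing the two select codes.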

(Refer Slide Time: 05:25)

Let us move on; let us see how we can implement a small de-multiplexer with 1 input and 2 outputs. First let us consider the truth table approach. This is a very simple case: there are only 2 inputs, IN and S0, and there are 2 outputs, D1 and D0. The inputs can be 0 0, 0 1, 1 0 and 1 1. Now, if the input is 0 and the select line is 0, this 0 goes to D0; if the select line is 1, the input goes to D1; with select line 0 the input 1 goes to D0, and with select line 1 it goes to D1. The outputs which are not selected remain at 0.

So, this is my function. If you construct the truth table, you can see that it cannot be minimized any further. What does D0 indicate? D0 has a single 1 here, which is S̄0·IN; so D0 = S̄0·IN and D1 = S0·IN. So to implement it you just need two AND gates generating D0 and D1, with the input IN connected to both, and S0 connected directly to one and via a NOT gate to the other. This is a simple gate-level implementation of a 1-to-2 DEMUX; just implement these two functions with 3 gates.
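Those two product terms give the three-gate implementation, sketched here with 0/1 integers:

```python
def demux2(s0, x):
    # D0 = S0'.IN, D1 = S0.IN; the unselected output stays at 0.
    return ((1 - s0) & x, s0 & x)

# Exhaustive check of all four input combinations.
for s0 in (0, 1):
    for x in (0, 1):
        d0, d1 = demux2(s0, x)
        assert (d0, d1) == ((x, 0) if s0 == 0 else (0, x))
```

The `1 - s0` stands in for the single NOT gate; the two `&` operations are the two AND gates.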

(Refer Slide Time: 07:30)

Now, think of a larger DEMUX, a 1-to-4: because there are 4 outputs, there will be two select lines. Again, instead of going through the truth table, let us try to write down the expressions directly. D0 gets IN when both S1 and S0 are 0, so we write D0 = S̄1 S̄0 IN. D1 corresponds to S1 = 0, S0 = 1, so D1 = S̄1 S0 IN; similarly D2 = S1 S̄0 IN for 1 0, and D3 = S1 S0 IN for 1 1.

So, how can we implement this? We require 4 AND gates to generate the 4 outputs D0, D1, D2 and D3. One input is common to all of them, the data input IN. For S0 and S1, we also have NOT gates connected, so that S̄0 and S̄1 are available. For the first gate, D0, I connect S̄1 and S̄0; for D1, I connect S̄1 and S0; for D2, S1 and S̄0; and for D3, S1 and S0. Just look at each expression and connect either S0 or S̄0, and S1 or S̄1, accordingly. So you get a de-multiplexer circuit; quite simple.
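The four product terms translate directly into code: one `1 - s` per NOT gate and one 3-input AND per output.

```python
def demux4_gates(s1, s0, x):
    n1, n0 = 1 - s1, 1 - s0          # the two NOT gates
    return (n1 & n0 & x,             # D0 = S1'.S0'.IN
            n1 & s0 & x,             # D1 = S1'.S0.IN
            s1 & n0 & x,             # D2 = S1.S0'.IN
            s1 & s0 & x)             # D3 = S1.S0.IN

# With IN = 1, exactly one output is high: the one named by the select code.
for s1 in (0, 1):
    for s0 in (0, 1):
        outs = demux4_gates(s1, s0, 1)
        assert outs.index(1) == (s1 << 1) | s0
        assert sum(outs) == 1
```

Setting `x = 1` permanently turns this same circuit into a decoder, which is exactly the next topic.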

(Refer Slide Time: 10:20)

Now, let us come to a special kind of de-multiplexer with a different name: it is called a decoder. A decoder, you can say, is a special type of de-multiplexer, but it has very specific and different kinds of applications. What is a decoder? As I said, it is a special case of a de-multiplexer. What kind of special case? Take a de-multiplexer with one input, several outputs and select lines, and assume that the input line is always equal to 1.

So, under that special case I shall call it a decoder, and because the input is assumed to be always 1, it need not be connected explicitly. So I will say that the inputs are just the select lines, and the outputs are just the output lines.

So, a decoder essentially has n inputs and 2^n outputs. Depending on the applied input, exactly one of the outputs is set to 1, while the others are set to 0. Let us take an example.

(Refer Slide Time: 11:43)

Suppose I have a decoder with 2 inputs and 4 outputs, and let us say I have applied 0 1; 0 1
means decimal 1. So, for 0 1 this line will be set to 1 and all other lines will be set to 0.
If I apply 1 1, then the last line will be 1 and the others will be 0. Now, the point to
note, which I have mentioned here, is that some decoders are available where the reverse
convention is followed. What is the reverse convention? It says that the line which is
selected will be set to 0 and all the others are set to 1.

So, you see, the design is the same; it is just as if you are connecting a NOT gate on each
output. Either you follow this convention or that one. There are many applications
of a decoder, like when you have many functional blocks. For example, in a computer
system there can be many memory modules, and depending on the address you have to select one
of the modules; you can use a decoder to select it. So, there are many applications; we are
not talking about them right now, some of the applications we shall possibly be seeing later.
There are other applications like code conversion, data distribution and many others.

(Refer Slide Time: 13:20)

Let us take a specific case, a 2 to 4 decoder. So, you have a 2 to 4 decoder with 2 inputs and
4 outputs; look at the truth table. Depending on the input, if it is 0 0 then f0 is set to 1
(I am following the convention that the line selected is 1 and the others are 0). If it is 0 1,
f1 is selected; for 1 0, f2 is selected; and for 1 1, f3 is selected.

Now, here also you can straight away write down the expressions for the 4 outputs. You see, f0
will be 1 only when D1 D0 are 0 0, so you can straight away write f0 = D1'D0'.
Similarly, f1 will be 1 for 0 1, so f1 = D1'D0; f2 for 1 0 is D1 D0'; and finally, f3 for
1 1 is D1 D0. So, to implement this decoder you need 4 AND gates, all of them 2-input;
these will be generating the 4 outputs f0, f1, f2 and f3. D0 and D1 are there, you will
also be requiring 2 NOT gates, and you apply the inputs accordingly.
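The same four product terms give a quick behavioural model of the 2 to 4 decoder (a Python sketch for illustration; the function name is mine):

```python
def decode_2to4(d1, d0):
    """2-to-4 decoder: exactly one output is 1 for each input pair."""
    f0 = (1 - d1) & (1 - d0)   # f0 = D1' D0'
    f1 = (1 - d1) & d0         # f1 = D1' D0
    f2 = d1 & (1 - d0)         # f2 = D1  D0'
    f3 = d1 & d0               # f3 = D1  D0
    return f0, f1, f2, f3
```

Trying all four input pairs confirms that exactly one output is 1 each time.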

(Refer Slide Time: 14:56)

Now, the point to note is that the decoders that are available and that we use typically have
another input called enable. Normally a decoder has, say, 2 inputs and 4 outputs like a 2 to
4 decoder; there will be another input called enable using which I can disable the decoder if I
want. The idea is that if a decoder is disabled, then irrespective of what we apply at the input
none of the outputs will get selected; only when the decoder is enabled will the respective
output get selected, depending on the input.

Now, this feature you require when you have many decoders (I will take an example later) and
you need to select one of them: you enable one of them and disable all the others; the
concept is like this. In this case the truth table will be very similar, the only thing is
that I have an extra input called G. The normal operation of the decoder, which is shown
in the first 4 rows, is for G = 0; things change when G is 1.

G = 0 meaning active is because I am assuming that this enable is active low; that means G
equal to 0 means the decoder is active and G equal to 1 means the decoder is disabled. So,
when G is 1, irrespective of what we apply at the input, the outputs will always be 0 and
none of them will be selected. The expressions for the outputs will be similar, only you
add a literal G' to all of them.

Because an output can be 1 only if G is also 0, the literal G' gets added. So, you will be
needing AND gates here which are 3-input each, and in the same way you can implement it. Let
us take some examples.
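Adding the G' literal to every product term can be sketched like this (Python, for illustration; the active-low enable is as assumed in the lecture):

```python
def decode_2to4_en(g, d1, d0):
    """2-to-4 decoder with active-low enable G: when g == 1 the
    decoder is disabled and every output is forced to 0."""
    en = 1 - g   # the extra literal G' ANDed into each 3-input gate
    return (en & (1 - d1) & (1 - d0),
            en & (1 - d1) & d0,
            en & d1 & (1 - d0),
            en & d1 & d0)
```

With g = 1 all outputs are 0 regardless of D1 D0; with g = 0 it behaves like the plain decoder.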

(Refer Slide Time: 17:19)

Suppose I have smaller 2 to 4 decoders available to me, each with 2 inputs, 4 outputs and an
enable, but I want to design a larger decoder with 4 inputs and 16 outputs. How do you do it?
We shall be showing you the design; the basic idea is that we will be requiring 5 such
smaller decoders.

(Refer Slide Time: 18:00)

The decoders will be connected somewhat in this fashion; I shall be showing you the detailed
diagram. There are four 2 to 4 decoders, with 2 inputs and 4 outputs each.

So, this will make 16 outputs, and there will be another decoder, also a 2 to 4 decoder,
which will be selecting one of these 4 decoders. Depending on what we apply here, one of
these 4 decoders will get selected, and depending on what you apply at the input of that
decoder, the corresponding output will get selected. This is the basic concept behind the
construction of the 4 to 16 decoder.

(Refer Slide Time: 19:00)

Let us see. So, here I have this schematic; let me complete the connections. There is a
formatting issue, let me correct it: this is 14, this is 15, this is 10 and this is 11, fine.

So, now we have a 4 to 16 decoder; let us consider how it will look. There will be 4 inputs,
let us call them A B C D, where A is the most significant bit and D the least significant
bit. Let us see how we connect the inputs to the decoder. What we say is that we will be
connecting A and B, the most significant 2 bits, here, and C and D we connect to the inputs
of the other 4 decoders.

Now, as I had said for the previous diagram, the 16 outputs are generated by these 4
decoders: f0 to f3 here, f4 to f7 here, f8 to f11 here and f12 to f15 here. And what this
other decoder does is select one of the 4 decoders. This is the first decoder, this is the
second, the third and the fourth, and this will be the enable of the whole decoder, the G
of this 4 to 16 decoder.

Let us see how this works. As I had said, A B C D are the 4 select lines. Let us take some
examples: suppose I have applied 0 0 0 0. Under this condition f0 is supposed to get
selected; that means f0 should be 1 and all others should be 0. Let us see how it works.
Since A B is 0 0, for the first-level decoder only this output will be 1 and all the
others will be 0 (or the reverse, if the active-low convention is followed).

If we follow the convention used here, we will be connecting NOT gates in between; let us
assume the selecting outputs are connected to the enables via NOT gates. So, if this output
is 1, this decoder will be selected; but the other outputs are all 0, so those enables will
be 1 1 1 and those decoders are not selected. And C D is also 0 0; C D being 0 0 means this
f0 will get selected.

Now, let us say I apply 0 0 1 0. Because A B is 0 0, again the first line will be selected; C
D is 1 0, which means the third line of that decoder, so f2 will be selected; that is
correct. Let us take another example: suppose I apply 1 0 1 1. 1011 is decimal 11, so f11 is
supposed to be selected; let us see how. In A B I have applied 1 0, so in the first-level
decoder the third line will be selected and the others will not. Because the third line is
selected, that enable will be 0 and the third decoder will be selected; now in C D I have
applied 1 1, so it will select the last output of that decoder, f11. So, you see this works.

So, in this way we can very systematically design a 4 to 16 decoder using 2 to 4 decoders. As
I had said, there are many applications where you need to design very large decoders, so it
is important to know how such smaller decoders can be connected, we say cascaded, together to
form larger decoders. Using this very small example, I think I am able to show you how
smaller decoders can be used to construct larger ones. In this example we used 2 to 4
decoders to construct a 4 to 16 decoder.
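The cascade can be summarised behaviourally; this sketch (Python, for illustration, with names of my choosing) uses five 2 to 4 decoders with active-low enables, inverting the first-level outputs before they drive the second-level enables, as in the diagram:

```python
def decode_2to4(g, d1, d0):
    # 2-to-4 decoder with active-low enable g
    en = 1 - g
    return [en & (1 - d1) & (1 - d0), en & (1 - d1) & d0,
            en & d1 & (1 - d0), en & d1 & d0]

def decode_4to16(g, a, b, c, d):
    # the fifth decoder looks at A B and picks one of the other four
    row = decode_2to4(g, a, b)
    outs = []
    for sel in row:
        # NOT gate between the selecting output and the enable input
        outs += decode_2to4(1 - sel, c, d)
    return outs        # outs[i] == 1 exactly when A B C D encodes i
```

For the worked example, decode_4to16(0, 1, 0, 1, 1) sets only output f11.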

(Refer Slide Time: 24:40)

The last thing we consider here is how we can implement logic functions: can we implement
logic functions using a decoder also? The answer is yes. It may not be very natural, and
people may not often do it because multiplexers are much more convenient, but decoders
can also be used to implement a logic function, to implement a truth table. Let us see how.
The idea is that you use an n-to-2^n decoder and an OR gate to implement a function
of n variables.

(Refer Slide Time: 25:37)

How do we do it? Let us take an example. Say I have a truth table for a 3-variable function
of A, B, C with output f. These are the inputs, and let us say the output column looks like
this; there are five 1's in the truth table.

In terms of function representation, you can also write it in the sigma notation as
f = Σ(0, 3, 4, 6, 7). Now, how can we implement this using a decoder? We take a 3 to 8
decoder and connect the 3 inputs A B C; there will be 8 outputs, corresponding to the
different input combinations, and we write down the numbers 0, 1, 2, 3, 4, 5, 6 and 7
against them.

Now, look at which minterms are true: 0, 3, 4, 6 and 7. You connect an OR gate here, and
connect decoder outputs 0, 3, 4, 6 and 7 to its inputs; the OR gate output will be your
function. It is easy to see how it works: depending on the A B C value, if one of the true
minterms is applied, the corresponding decoder output will be 1.

Say you have applied 0 1 1; then that output will be 1, and since this is an OR gate, if any
one of its inputs is 1 the output will be 1. So, in this way you can implement the function
very simply. But you may need a larger OR gate with a large number of inputs, depending on
how many ones there are in your function.
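As a quick check, the decoder-plus-OR scheme can be simulated in a few lines (a Python illustration, not part of the lecture):

```python
def f(a, b, c):
    """f(A,B,C) = sum of minterms (0, 3, 4, 6, 7), built from a
    3-to-8 decoder followed by one OR gate."""
    # 3-to-8 decoder: output i is 1 iff the inputs A B C encode i
    dec = [1 if ((a << 2) | (b << 1) | c) == i else 0 for i in range(8)]
    # OR gate over the decoder outputs for the true minterms
    return dec[0] | dec[3] | dec[4] | dec[6] | dec[7]
```

Enumerating all 8 input combinations reproduces exactly the five 1's of the truth table.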

(Refer Slide Time: 28:20)

This is another example which you can similarly work out. If you apply A B C, again you have
the 8 outputs here, and the true minterms are there: 0 0 1, which is minterm 1; 1 0 0, which
is minterm 4; and also 5 and 6. Connect them to the OR gate and this will be your function.

So, this is also a very programmable kind of thing: the decoder remains the same, only the
OR gate and how you connect it will vary, and you can implement any arbitrary function.
With this we come to the end of this lecture. We shall be continuing our discussion in the
next lecture, where we shall be talking about some other kinds of building blocks: some
special kinds of decoders and other building blocks that are also very useful in
logic design.

Thank you.

Switching Circuits and Logic Design
Prof. Indranil Sengupta
Department of Computer Science and Engineering
Indian Institute of Technology, Kharagpur

Lecture – 26
Logic Design (Part III)

So, we continue with our discussion on Logic Design, this is the third part of our logic design
lecture.

(Refer Slide Time: 00:30)

Now, the first thing I would like to talk about is a special kind of decoder design, but
before I go into it let me give the motivation. We talked about the design of decoders;
imagine that you are trying to design a 4 to 16 decoder.

If you design a 4 to 16 decoder directly using gates, not using smaller decoders as you
have seen, then for every output you need a large gate: an AND gate with 5 inputs, where 4
inputs will be some combination of the input variables and one is for the enable. So, you
will be needing 16 AND gates, each with 5 inputs.

Now, one difficulty in designing circuits is that when the number of inputs of a gate
increases, it becomes increasingly difficult to design and manufacture those gates: the
gates become unreliable, they consume more power, they become slower, and there are many
other drawbacks. So, it is always attempted to keep the number of inputs to a gate as small
as possible. Instead of designing a very large decoder in one go, it is always better to
design smaller decoders and use them to build up the larger decoder.

Like the example we showed, using 2 to 4 decoders we can construct a 4 to 16 decoder. Here we
show an alternate approach; here also we are trying to build a 4 to 16 decoder, but the
approach is different. In this schematic we are using two 2 to 4 decoders. The concept is a
row-column concept: there are 4 columns and there are 4 rows.

One of the decoders, depending on two of the inputs, will be selecting one of the rows. The
other decoder, depending on the other two inputs, will be selecting one of the columns.
Suppose you apply W X equal to 1 1, which means the 4th row is selected, and Y Z equal to
1 0, which means the third column is selected.

So, for 1 1 and 1 0, the last row and third column, I am referring to this junction. Now,
what I do at every junction is connect an AND gate: every row line and every column line I
connect as inputs to an AND gate, and the outputs of the AND gates are my decoder outputs.
Just imagine here: if W X is 1 1, then only the last row will be 1, and if Y Z is 1 0, only
the third column will be 1.

Now, if I connect these two lines to an AND gate, both of its inputs will be 1, so the
output will also be 1; but for all the other AND gates both inputs will never be 1, at least
one of them will be 0. So, all the others will be 0 and only this one will be 1. This works
perfectly as a decoder: depending on what I apply to the inputs W X and Y Z, exactly one of
the gate outputs becomes 1, and that is my decoder output. Normally, large decoders are
built or constructed this way, because here, as you can see, the gate fan-ins are all
small; only 2-input AND gates are required.
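The row-column (coincident) scheme can be sketched like this (Python, for illustration; names are mine):

```python
def decode_2to4(d1, d0):
    # plain 2-to-4 decoder, one-hot output
    return [(1 - d1) & (1 - d0), (1 - d1) & d0,
            d1 & (1 - d0), d1 & d0]

def decode_4to16_rowcol(w, x, y, z):
    """Two 2-to-4 decoders drive 4 rows and 4 columns; a 2-input
    AND gate at each of the 16 junctions forms the outputs."""
    rows = decode_2to4(w, x)
    cols = decode_2to4(y, z)
    # output 4*r + c is (row r) AND (column c)
    return [r & c for r in rows for c in cols]
```

With W X = 1 1 and Y Z = 1 0, only the junction of the last row and third column, output 14, is 1.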

(Refer Slide Time: 04:45)

Next, let us move on and talk about another kind of special decoder which is sometimes very
useful, particularly when we work with BCD numbers. We need a decoder to identify which
decimal or BCD digit we have; such a decoder is called a BCD to
decimal decoder.

As you know, BCD is a 4-bit number encoding; here W X Y Z is the BCD input, and the valid
numbers go from 0000 (0) up to 1001 (9); the remaining 6 combinations are
considered invalid. So, we have only 10 outputs, not 16 outputs like in a
normal decoder. Depending on which valid input you apply, one of the outputs
will be 1; again, there is an enable.

The way you construct this decoder is exactly the same as for a normal decoder. Take the
output for 5: 5 is 0101, so for its gate you first connect the enable, then X, then Y', then
Z, and of course W'.

In this way you can connect the inputs for every output, and the final realization we have
shown is after minimization, because in this case, among the 16 input combinations, there
are 10 valid inputs and 6 don't cares.
(Refer Slide Time: 06:47)

When you do the minimization, these don't care inputs will result in the cancellation of
some of the terms. I am showing just the final realization, not the steps of minimization.
So, this is another kind of decoder that we use in some designs: the BCD to decimal decoder.

(Refer Slide Time: 07:17)

There is another kind of decoder which is also very useful. We have all seen digital
displays in many gadgets and places: in our calculators, in our washing machines, in our
refrigerators. The displays that show the digits in this very specific way are called 7
segment displays.

Here I have shown a picture of a typical 7 segment display unit. In a 7 segment display
unit there are 7 lamps, which are typically numbered or named a, b, c, d, e, f, g; by
selectively glowing these lamps you can display any of the digits from 0 to 9. If you want
to display 5, you glow these particular lamps; if you want to display 3, you glow those;
anything from 0 to 9 can be displayed.

So, basically, in these 7 segment display units there are 7 display segments as I have
shown here, and the way they are manufactured can differ: they can be manufactured using
light emitting diodes, liquid crystal displays, neon lamps or any other technology. As I
have said, depending on what digit you want to display, the appropriate segments should be
activated so that they glow. These displays come in two varieties.

(Refer Slide Time: 09:16)

Let us talk about light emitting diode (LED) displays. The first variety is the common
cathode variety, where the 7 segments are actually 7 LEDs; they are called common cathode
as the cathode terminals of the 7 LEDs are tied together, and this common terminal is
connected to ground.

You apply a voltage to the a, b, c, d, e, f, g inputs depending on which segments you want
to glow. Similarly, you can have a common anode display unit, where the LEDs are connected
in the reverse direction: the anode points are tied together and connected to a positive
supply voltage, and when you want to glow a segment you have to ground the corresponding
input so that a current flows.

Now, in the example that we will be showing, we will be assuming common cathode; that means
if you want to light a segment you have to apply a high voltage, say 1, on that line, the
other side being grounded. So, when we talk about a BCD to 7 segment decoder, we are
talking about a circuit like this.

We want to design a decoder where a 4-bit BCD input is applied and 7 outputs are generated,
which will be connected to the 7 segments a to g of the display. Depending on the BCD
input, the correct segments will be activated so that the corresponding digit glows; this
is the basic idea.

(Refer Slide Time: 11:25)

So, the truth table of the BCD to 7 segment decoder looks like this; I have also shown a 7
segment display here side by side for reference. Let us say my digit is 0. When I want to
display 0, all the segments other than g should glow: you see a, b, c, d, e, f are all 1
and only g is 0. If I want to display 1, only b and c must glow and all others should be 0;
you see only b and c are 1. For 2, segments a, b, g, e and d should glow while c and f
should be off; you see for 2 only c and f are 0.

So, like this you can display all the digits. For 8 all the segments should glow, all are 1;
for 9 only e and d should be off, the others should glow. It is easy to construct this table
corresponding to the BCD input code: exactly as you want to display the digit, you activate
those segments, assuming a common cathode display where 1 means the segment will glow and
0 means it will not.
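The truth table can be recorded as a simple lookup table. The sketch below (Python, for illustration) follows the lecture's common-cathode convention; for the digits the lecture does not spell out explicitly, I have filled in a common convention, which may differ slightly from the slide:

```python
# segment order: (a, b, c, d, e, f, g); common cathode, 1 = glow
SEGMENTS = {
    0: (1, 1, 1, 1, 1, 1, 0),   # all but g
    1: (0, 1, 1, 0, 0, 0, 0),   # only b and c
    2: (1, 1, 0, 1, 1, 0, 1),   # c and f off
    3: (1, 1, 1, 1, 0, 0, 1),
    4: (0, 1, 1, 0, 0, 1, 1),
    5: (1, 0, 1, 1, 0, 1, 1),
    6: (1, 0, 1, 1, 1, 1, 1),
    7: (1, 1, 1, 0, 0, 0, 0),
    8: (1, 1, 1, 1, 1, 1, 1),   # all segments
    9: (1, 1, 1, 0, 0, 1, 1),   # d and e off, as in the lecture's table
}
```

A BCD to 7 segment decoder is just the gate-level realization of this table, with the 6 invalid codes left as don't cares.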

Just from this you can write down the expressions as well. I have not shown the exact
process of minimization; I leave it again as an exercise for you. These are 4-variable
functions, and there are 6 don't care states: 10, 11, 12, 13, 14 and 15. So, when you use a
Karnaugh map you can use these don't care states for minimization as well. Let me take one
example. Consider a 4-variable Karnaugh map with inputs x1 x2 x3 x4: x1 x2 along this
direction and x3 x4 along that direction, with the labels 0 0, 0 1, 1 1, 1 0 on both sides.
Let us look at segment a.

You see where segment a is 1: it is 1 for 0, for 2 and 3, for 5, 6 and 7, and for 8 and 9;
it is 0 for 1 and 4. This is your Karnaugh map, and in addition the 6 combinations 10 to 15
are don't cares, which I am showing as x.

So, now you just have to include these ones and go ahead with the minimization. You can make
the cubes, like the 4 corners forming one cube, and then cubes like this one, this one and
this one. If you do the minimization you will arrive at these expressions. I am not working
all of them out; I leave it as an exercise for you, so you can verify whether your minimized
expressions tally with these.

(Refer Slide Time: 16:16)

Next, let us come to a circuit which you can call the reverse of a decoder; I call it an
encoder. There are many inputs and a few outputs, and I am assuming only one of the inputs
is active at a time.

The output will contain the binary code of the input that is active; let us see how it
works. An encoder will typically have 2^n input lines and n output lines. Here is an
example for n equal to 3: there are 8 input lines and 3 output lines, and I am assuming
that only one of the input lines is 1 at a time. Say D3 is 1: the output will contain the
binary code of 3, which is 0 1 1. If D6 is 1, 6 is 1 1 0, so the output will be 1 1 0, and
so on. This is how the encoder works.

(Refer Slide Time: 17:41)

For a normal encoder the truth table will look like this. Again, you need not show all the
rows: there are 8 inputs, so there would be 2^8 possible input combinations, but since this
is a functional block you can show only the meaningful rows. The first row says only D0 is
1 and the others are all 0s; this will mean output 0 0 0. If D1 is 1, the output is 0 0 1;
if D2 is 1, 0 1 0, and so on. The other input combinations are not valid and are all
don't cares.

Now, one issue with this kind of encoder design is that we are assuming that exactly one
of the inputs is at 1; but suppose, by accident or otherwise, another input is also made 1.
Say D4 is 1 and D0 is 1. What will be my output? Will it be 0 0 0, or will it be 1 0 0,
corresponding to D4?

So, for the input combinations where more than one input can become 1, there is some
ambiguity. To remove this ambiguity, the encoders that we normally design are something
called priority encoders, not normal encoders as this table shows. In a priority encoder
the input lines have some priority. Suppose I have applied 1 on D0 and also 1 on D4, and I
am saying that line D4 has higher priority than D0.

So, if both of them are 1, D4 will take priority and the output corresponding to D4 will be
generated: 1 0 0. This is how a priority encoder should work. The basic concept is like
this; let me just repeat.

(Refer Slide Time: 20:00)

Now, let us put it in the proper context. A priority encoder, or for that matter any encoder
circuit, say with 4 inputs and 2 outputs, is used in applications where some sub-circuits,
devices or units generate some kind of requests; the input lines are assumed to represent
units that request some service.

And the output will indicate which one of the inputs, 0, 1, 2 or 3 for the 4 input units,
has generated the request. Now, in a priority encoder, as I said, whenever two of the
inputs, say Di and Dj where i > j, request service simultaneously, you assume that Di has
the higher priority because i > j. So, the output will be the binary code of the highest
priority input that is active. This is how the priority encoder works.

(Refer Slide Time: 21:22)

Let us look at the truth table, it will become clear. Think of an 8-to-3 priority encoder:
there are 8 inputs and 3 outputs. For the first case, where only D0 is 1 and the others are
all 0s, there is no ambiguity: the output is 0 0 0. But when D1 is 1, D1 has a higher
priority than D0, so D0 becomes a don't care: D0 can be 0 or 1, it does not matter; as long
as the higher-priority inputs are 0, the output will be 0 0 1. When D2 is 1, the
lower-priority inputs are don't cares; it is only important that the higher-priority inputs
are 0, and the output is 0 1 0.

Similarly, when D3 is 1 the lower priorities are all don't cares, and so on; the truth
table will look like this, with a lot of don't care entries. When D7 is 1, for example, the
other inputs do not influence the output at all: if D7 is 1 the output will always be 1 1 1,
irrespective of the other inputs.
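Behaviourally, the whole table collapses to a scan from the highest-priority input downwards (a Python sketch for illustration; the function name is mine):

```python
def priority_encode_8to3(d):
    """8-to-3 priority encoder: d is a tuple of 8 input bits,
    with d[7] having the highest priority. Returns (f2, f1, f0)."""
    for i in range(7, -1, -1):        # highest priority first
        if d[i]:
            return (i >> 2) & 1, (i >> 1) & 1, i & 1
    return 0, 0, 0                    # no request active
```

With D0 and D4 both 1, the output is 1 0 0, the code for D4, exactly as in the discussion above.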

(Refer Slide Time: 22:47)

This is how a priority encoder works. I again leave it as an exercise for you to find out
how to generate the expressions; here I am just showing the expressions for the outputs f2,
f1 and f0 after minimization.

So, I leave it as an exercise for you to verify them. If you want to implement this using
gates, you will of course first have to minimize and then implement with gates. f2, for
example, requires just a single 4-input OR gate over D4, D5, D6 and D7, nothing else. f1
will require a couple of 3-input AND gates, some NOT gates for the complemented literals
such as D4' and D5', and an OR gate; D6 and D7 will connect to the OR gate directly.

(Refer Slide Time: 24:00)

Similarly for f0. Now, let us talk about a circuit element called a comparator, which is
also very important. Sometimes we need to compare the magnitudes of two numbers: whether
they are equal, whether one is greater than the other, or whether one is less than the
other. The circuit that does this is called a magnitude comparator, or simply a digital
comparator.

An n-bit comparator compares the magnitudes of two n-bit numbers A and B, and there will be
3 outputs, GT, EQ and LT (greater than, equal to, less than), where GT will be set to 1 if
and only if A is greater than B in magnitude, EQ will be 1 if they are equal, and LT will
be 1 if A is less than B. This is how the comparator works.

(Refer Slide Time: 25:00)

Let us look at the very simplest scenario first, a 1-bit comparator. I want to design a
circuit where the two numbers A and B are just single-bit numbers, and I have my three
outputs: greater than, less than and equal to.

Now obviously, for single-bit numbers, A greater than B means A is 1 and B is 0; this is
the only case, so GT = AB'. A less than B also has only one case, A is 0 and B is 1, so
LT = A'B. For the equal case there can be two scenarios: A is 0 and B is 0, or A is 1 and
B is 1.

This can be expressed as A'B' + AB, which is nothing but the exclusive-NOR function;
symbolically it is written with the XNOR symbol, with A and B as the inputs and EQ as the
output. Because of this property, XNOR basically checks the equality of 2 bits, and is
sometimes also known as the equivalence function, or the equivalence gate. A 1-bit
comparator is thus very easy to design; let us look into larger comparators, say a 2-bit
comparator.
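Before moving to 2 bits, the three 1-bit expressions can be checked with a small sketch (Python, for illustration; the name is mine):

```python
def compare_1bit(a, b):
    """1-bit magnitude comparator; returns (GT, EQ, LT)."""
    gt = a & (1 - b)       # GT = A B'
    lt = (1 - a) & b       # LT = A' B
    eq = 1 - (a ^ b)       # EQ = A'B' + AB, i.e. XNOR(A, B)
    return gt, eq, lt
```

Exactly one of the three outputs is 1 for each of the four input pairs.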

(Refer Slide Time: 26:54)

Here I am assuming that the numbers are 2 bits: A1 A2 is one number and B1 B2 is the other,
and I want to generate the 3 outputs. Let us construct the Karnaugh map with A1 A2 along
this side and B1 B2 along that side, and think of the 3 scenarios separately, starting with
greater than.

Let us take some examples with A1 A2 and B1 B2. When A1 A2 is 0 1 and B1 B2 is 0 0, then A
is greater than B. When A1 A2 is 1 0 and B1 B2 is either 0 0 or 0 1, then also A is greater
than B. And when A1 A2 is 1 1 and B1 B2 is 0 0, 0 1 or 1 0, then also it is greater. So,
there are 6 possible scenarios.

In the map I am marking these 6 cells as GT. Because GT, LT and EQ are mutually exclusive,
I can show them all on the same map: EQ holds along the diagonal, 0 0 equal to 0 0, 0 1
equal to 0 1, 1 1 equal to 1 1, and 1 0 equal to 1 0, and the remaining cells are all LT.

So, directly from this single map you can write down the expressions for all of them. For
GT, the cubes will be this one, this one, and these two, and so on; written out it will be
this. EQ cannot be minimized any further: there will be 4 terms, each containing 4 literals.
For LT also you can combine these 4 cells, these 2, and this one with this one, giving
three terms. So, a 2-bit comparator is also relatively easy to design; you can do it in
this way.

(Refer Slide Time: 29:19)

Now, let us see a 4-bit comparator. Suppose I want to design a 4-bit comparator; there are
two 4-bit numbers, A3 to A0 and B3 to B0, say. Let us introduce a symbol xi, which is the
equivalence (XNOR) of Ai and Bi; that means xi will be 1 if Ai and Bi are equal.

With this notation we can directly write down the expressions for EQ, GT and LT. A and B
will be equal if all corresponding bits are equal: that means x3 is 1, x2 is 1, x1 is 1
and also x0 is 1, so just AND all 4 together. For GT, think of the two numbers A and B. If
A3 is 1 and B3 is 0, then you do not have to look at the other bits, because irrespective
of them, if A is greater than B in the most significant bit then A will always be greater;
this gives the term A3 B3'. But if the most significant bits are equal, 0 0 or 1 1, then
you need to look at the next bit.

So, the remaining terms are: the first bit pair is equal and A2 is 1, B2 is 0; or the first
2 bit pairs are equal and A1 is 1, B1 is 0; or the first 3 bit pairs are equal and A0 is 1,
B0 is 0. LT is just the reverse: instead of A3 B3' the first term is A3' B3 (the 0 1 case),
then the first bits are equal and A2' B2, and so on; in the same way you can straight away
write it down.

So, you see, for these kinds of functional blocks you do not always have to construct the
truth table to write down the expressions. You think mentally about what the block is
supposed to do, and you can write down the switching expressions directly from that
functional view. We often do this for many of the functional blocks. With this we come to
the end of this lecture on logic design; in our next lectures we shall be looking at
various ways of representing Boolean functions and some of the ways in which we can
manipulate them.

Thank you.

Switching Circuits and Logic Design
Prof. Indranil Sengupta
Department of Computer Science and Engineering
Indian Institute of Technology, Kharagpur

Lecture – 27
Binary Decision Diagrams (Part I)

So far we have seen different ways in which we can represent a switching function. We
looked at the truth table; we looked at various algebraic forms, like the sum of products,
the product of sums and so on. Over the next few lectures we shall be looking at some, you
can say, unconventional but very effective ways of representing switching functions which
have many applications. In many applications we do not represent a function just by a truth
table or by an expression, but by using one of the representations that we shall be
discussing.

The first such representation we shall be discussing is called the binary decision diagram.
So, this is the first part of our lecture.

(Refer Slide Time: 01:16)

Let us first try to understand what a binary decision diagram is. As you can see from this slide, a binary decision diagram, in short referred to as BDD, is essentially a data structure: a way to represent some information, here a switching function or Boolean function. A BDD is nothing but some kind of a graph. Now, for those of you who are not familiar with graphs, let me give you a brief introduction.

(Refer Slide Time: 01:57)

As you see, when we refer to a graph, a graph essentially consists of a set of vertices and a set of edges. In the example graph, the vertices are typically represented by circles, and the edges are the lines that connect them. Now, there are many applications where you can represent information as a graph. Let us say these circles indicate some cities or towns, and the lines indicate the roads that connect them. The edges can also have labels; for example, in the example cited, the distance in kilometers can be the label of an edge, and so on. And there are some graphs where the edges have directions; some of the roads may be one-way roads, so you cannot drive in both directions. Such a graph is called a directed graph.

Now, there is another term we use, called an acyclic graph. A cycle in a graph means something like this: suppose I have a graph, and let us add directions to the edges. If you consider these four edges together, starting from vertex A, I can go to B, then to C, then to D, and then come back to A; this is a cycle. An acyclic graph is one where there is no cycle.

Now, a special type of graph is called a tree, which is of course acyclic. I am just giving an example of a tree: this tree does not have any cycle, its edges may or may not have directions, and there is one special node, referred to as the root, which is considered to be at the top of the tree. These are some definitions and concepts that we shall be using in our definition of a binary decision diagram.
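These graph notions are easy to make concrete in code. Below is a minimal Python sketch (the helper name has_cycle and the adjacency-dictionary encoding are my own assumptions) that checks whether a directed graph is acyclic, using the A to B to C to D back to A cycle from the example:

```python
def has_cycle(adj):
    """Detect a cycle in a directed graph given as {vertex: [successors]}."""
    WHITE, GRAY, BLACK = 0, 1, 2          # unvisited / on current DFS path / finished
    color = {v: WHITE for v in adj}

    def dfs(v):
        color[v] = GRAY
        for w in adj.get(v, ()):
            c = color.get(w, WHITE)
            if c == GRAY:                 # back edge: we have closed a cycle
                return True
            if c == WHITE and dfs(w):
                return True
        color[v] = BLACK
        return False

    return any(color[v] == WHITE and dfs(v) for v in list(adj))

print(has_cycle({'A': ['B'], 'B': ['C'], 'C': ['D'], 'D': ['A']}))  # True
print(has_cycle({'A': ['B'], 'B': ['C'], 'C': ['D'], 'D': []}))     # False
```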

Now, let us see: a binary decision diagram is an acyclic graph; there are no cycles, and the edges are directed. Well, we may not be showing the arrows in the diagrams, but as a matter of convention, any edge that connects a node on top to a node below it will be assumed to have a top-to-bottom direction, and there will be one special node designated as the root node.

Now, the vertices of a BDD can be of two types, as we shall see with examples: some vertices are called decision nodes, and some are called terminal nodes. The terminal nodes are marked 0 and 1; there is a terminal node called 0 and a terminal node called 1, and there can be several of each. These are called the 0-terminal and the 1-terminal, and they indicate the logic values 0 and 1.

(Refer Slide Time: 06:21)

Talking about the decision nodes: a decision node is labeled by a Boolean variable. Let us say I have a variable A; then the decision node will be labeled A and will have two child nodes. The nodes connected below it are called its children, and this node is called the parent. I am showing two different kinds of edges, one dotted and one solid; the left one I call the low child and the right one the high child; this is the convention. So every decision node is labeled with a variable and has two children, left and right; the left is reached by the dotted edge and the right by the solid edge.

They are called the low child and the high child, which actually indicates that we go to the left whenever A is equal to 0 and to the right whenever A is equal to 1. So, the two edges represent the assignment of the variable, here A, to either 0 or 1.

Now, this we shall see later. In a BDD we will have several such nodes, labeled with variables, and there is a special node called the root. If, starting from the root, the variables on the decision nodes appear in the same order along whatever path we traverse, then we say that it is an ordered BDD, an ordered binary decision diagram. This we shall see later.

(Refer Slide Time: 08:15)

Let us now look at what a binary decision diagram looks like. Here we consider a three-variable function represented by a truth table: the outputs are 1 0 0 1 0 0 1 1, and a, b, c are the three variables. This is a classic kind of binary decision diagram, where the root node is on top; the root represents the function f. From it I have two paths, left and right: a has a low child and a high child, corresponding to a equal to 0 and a equal to 1 respectively.

Similarly, at the next level I have nodes marked b; on each of them the dotted edge indicates b equal to 0 and the solid edge b equal to 1. And at the lowest level there are nodes marked c, whose edges similarly indicate c equal to 0 and c equal to 1.

So now, starting from the root, you can traverse along any path. For example, if I traverse the leftmost path, I follow the edges a equal to 0, b equal to 0, c equal to 0, and arrive at the terminal node 1; this corresponds to the first row of the truth table, 0 0 0, where the output is 1. Consider another case where the output is 1, say 0 1 1: following those edges I again reach a 1-terminal. And for a 0 case, consider 1 0 1: following those edges I reach a 0-terminal.

(Refer Slide Time: 10:21)

So, you see, essentially what we do here is look at the output column of the truth table, 1 0 0 1 0 0 1 1, and simply copy it as the terminal nodes at the lowest level. This gives us one representation of the function; this is what we refer to as the binary decision diagram.

Now, one problem with this representation is that the size of the truth table is exponential: for an n-variable function the number of rows is 2^n, and for the same reason the number of terminal nodes here will also be 2^n. So, this binary decision diagram will be exponential in size, exponential in n, which is pretty large. We shall see that there exist some techniques using which we can reduce the size of the binary decision diagram; it need not be exponential in size, and we can often reduce it to a great extent. We shall see some examples in this regard. Let us move on.

(Refer Slide Time: 11:40)

Earlier we talked about Shannon's decomposition theorem; here we revisit the same thing, calling it Shannon's expansion of a function. The idea is that we can construct a BDD from a given function by repeated application of Shannon's decomposition (or expansion) theorem.

Now, recall that earlier, when we talked about the multiplexer realization of a function using 2-to-1 multiplexers, we did the same thing there as well: we successively decomposed a function into subfunctions by applying Shannon's decomposition theorem, obtained smaller and smaller functions, and every decomposition was mapped to a 2-to-1 multiplexer. Here the concept is almost identical: every decomposition will correspond to one decision node of the BDD, and by doing this repeatedly we get the complete BDD. Let us see.

(Refer Slide Time: 12:52)

Take an example. Suppose I have an n-variable function f of the variables x1 to xn, and consider in general a variable xi; we talked about this earlier also. We define something called the positive cofactor, denoted f_i^1 (f with 1 in the superscript), which is the same function with xi replaced by the constant 1. In the same way we can define the negative cofactor, which is very similar: it is denoted f_i^0, and here the variable xi is replaced by the constant 0.

Now, Shannon's expansion theorem we have already seen; here we show two forms of the expansion, one of which we have seen earlier. By Shannon's expansion theorem we can write the function, expanded with respect to xi, as f = xi'·f_i^0 + xi·f_i^1; that is, the complement of xi multiplied by the negative cofactor, plus the variable xi multiplied by the positive cofactor. There is also an alternate, product-of-sums-like representation that we have not talked about earlier: the same function can be written as f = (xi + f_i^0)·(xi' + f_i^1). But of course we shall be using only the first representation in our examples and illustrations.

Now, one thing you can see is why we expanded the function this way: we can map the expansion onto a decision node of the BDD, where the node is labeled xi, the left and right edges correspond to xi equal to 0 and xi equal to 1, and the low and high children correspond to f_i^0 and f_i^1.

So basically, this node will represent the function f. Every decomposition step of the Shannon expansion can be mapped onto a decision node, or decision vertex, of the BDD; this is the basic idea.
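Both forms of the expansion can be verified exhaustively for a small function. In this Python sketch, the example function f = x1·x2 + x3 and the helper name cofactor are illustrative choices of mine:

```python
from itertools import product

def cofactor(f, i, val):
    """Restrict variable i of f (a function on bit tuples) to the constant val."""
    return lambda xs: f(xs[:i] + (val,) + xs[i + 1:])

f = lambda xs: (xs[0] & xs[1]) | xs[2]     # f = x1·x2 + x3

for xs in product((0, 1), repeat=3):
    for i in range(3):
        f0, f1 = cofactor(f, i, 0), cofactor(f, i, 1)
        # sum-of-products form: f = xi'·f_i^0 + xi·f_i^1
        assert f(xs) == ((1 - xs[i]) & f0(xs)) | (xs[i] & f1(xs))
        # product-of-sums form: f = (xi + f_i^0)·(xi' + f_i^1)
        assert f(xs) == (xs[i] | f0(xs)) & ((1 - xs[i]) | f1(xs))
```

Both assertions hold for every variable and every input, which is exactly what Shannon's theorem guarantees.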

(Refer Slide Time: 15:48)

As an example, consider the three-variable function f = a'·b'·c' + b·c + a·c. In this step we decompose the function with respect to the variable a. We write a' multiplied by the negative cofactor: replacing a by 0, you get b'c' + bc. Then plus a multiplied by the positive cofactor: replacing a by 1, this becomes 0 + c + bc. Now, c + bc can be simplified to just c, while b'c' + bc cannot be simplified any further. This decomposition, as I said, can be conceptually mapped onto a decision node labeled a, whose low child refers to the function b'c' + bc and whose high child refers to the function c.

This is of course only the first step, and we repeat the process: next we expand using variable b, then using variable c, and so on. Let us see how we do it.

(Refer Slide Time: 17:12)

Well, here the whole procedure is shown; the first step we have already seen in the previous slide, where we expanded the function with respect to variable a and got the two subfunctions b'c' + bc and c. In the next step, suppose we expand with respect to b. For b'c' + bc, Shannon expansion with respect to b works the same way: b' multiplied by the negative cofactor (replace b by 0; the bc term vanishes and only c' remains), plus b multiplied by the positive cofactor (replace b by 1; only c remains).

Similarly, for the subfunction c, you do the same thing with respect to b: b' times the function with b replaced by 0 (there is no b, so it remains c), plus b times the function with b replaced by 1 (again it remains c). So now we have four subfunctions: c', c, c and c'; you can see that the subfunctions get smaller and smaller as we proceed. In the last step we expand by c. Now c' can be written as c' multiplied by the negative cofactor (replace c by 0; it becomes 1) plus c multiplied by the positive cofactor (replace c by 1; c' becomes 0). And if the subfunction is c, then it is c' times 0 plus c times 1. In this way I get the terminal values.

Now, recall that when we do the expansion, here I have used the order a, b, c, but I can do it in any order: I could first use b, then a, then c; or first c, then a, then b, and so on. Here I have illustrated only one particular order, the variable ordering a, b, c. At the end I get the constants, or terminals: 1 0 0 1 0 1 0 1; remember this sequence.
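The terminal sequence can be double-checked by simply evaluating the function at all eight input combinations in the order a, b, c; a small Python sketch, with f being the function reconstructed from the two cofactors above:

```python
# f = a'·b'·c' + b·c + a·c
f = lambda a, b, c: ((1 - a) & (1 - b) & (1 - c)) | (b & c) | (a & c)

leaves = [f(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)]
print(leaves)  # [1, 0, 0, 1, 0, 1, 0, 1], matching the terminals of the expansion
```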

(Refer Slide Time: 19:44)

So, I can directly map this onto a BDD with terminals 1 0 0 1 0 1 0 1. My order of expansion was: first I used variable a, at the next level variable b, and at the third level variable c; this is how we carried out the expansion.

(Refer Slide Time: 20:10)

Let us take another example and work it out. Suppose we have the function f = a'·b·c + b'·c' + a·c', and let us follow a particular ordering, not a, b, c this time: first we expand with respect to the variable c. The function will be c' multiplied by the negative cofactor: replacing c by 0, the first term becomes 0 and we are left with b' + a. Then plus c multiplied by the positive cofactor: replacing c by 1, the last two terms become 0 and we are left with only a'·b. So I have got two subfunctions, b' + a and a'·b.

In the next step let us expand with respect to a. If we expand b' + a with respect to a, it is a' times the function with a replaced by 0, which is just b', plus a times the function with a replaced by 1, which is something plus 1, that is, 1. In the same way, if we expand a'·b with respect to a, it is a' times the function with a replaced by 0, which is just b, plus a times the function with a replaced by 1, which is 0. So now we have four subfunctions, b', 1, b and 0, and we are left with expanding with respect to b.
will be expanding with respect to b.

So, b' can be expanded as b' times the value with b replaced by 0, which is 1, plus b times the value with b replaced by 1, which is 0. Next we have the constant 1: if we expand it by b, there is no b, so it remains 1 on both branches. For b, it is b' times 0 plus b times 1, giving 0 and 1; and for the constant 0, both branches remain 0. So now we have the terminals 1 0 1 1 0 1 0 0.

So now our BDD will look like this: we have the root node c at the top level, representing the function f. At the next level we have two nodes labeled a; the dotted edge from c is the negative (c = 0) edge and the solid edge the positive (c = 1) edge. At the next level we have four nodes labeled b, again each with a negative and a positive edge, and at the last level, since c, a and b have all been used, only terminals remain.

So, at the last level you have the terminal nodes: 1 and 0 under b', then 1 and 1, then 0 and 1, then 0 and 0, exactly the sequence we derived. So we have obtained the binary decision diagram for this function. Given any variable ordering, you can construct the BDD by systematically decomposing the function using Shannon's expansion.
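Again the terminal sequence can be checked by evaluation, now enumerating in the order c, a, b; the function is written here with its third product term read as a·c', which is what the cofactor steps imply (a small Python sketch):

```python
# f = a'·b·c + b'·c' + a·c', leaves enumerated in the variable order c, a, b
f = lambda a, b, c: ((1 - a) & b & c) | ((1 - b) & (1 - c)) | (a & (1 - c))

leaves = [f(a, b, c) for c in (0, 1) for a in (0, 1) for b in (0, 1)]
print(leaves)  # [1, 0, 1, 1, 0, 1, 0, 0]
```

The four pairs (1, 0), (1, 1), (0, 1) and (0, 0) are exactly the b-expansions of the subfunctions b', 1, b and 0.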

(Refer Slide Time: 24:42)

Now, let us talk about variable ordering. The point to note is that the size of a binary decision diagram is determined not only by the function you want to represent, but also, to a great extent, by the ordering of the variables. In general, for most functions a BDD gives a very compact representation; we shall see some examples later. But what we are trying to point out here is that the ordering of the variables plays a great role: we shall take an example where, if we change the ordering, the size of the BDD differs to a great extent. Of course, here we shall be showing the reduced or minimized form of the BDD, which we have not discussed so far; for now we just show pictorially what the reduced version looks like, and later on we shall see how to arrive at it.

Let us see; the size can be proportional to the number of input variables, or it can be proportional to a power of the number of input variables, which is called exponential. Let us take a very classic example. Suppose I have a function in which the variables are paired: the product terms are x1·x2 + x3·x4 + x5·x6 and so on, up to x(2n-1)·x(2n). If I use a variable ordering in which I first choose the odd-numbered variables x1, x3, x5 and so on, and then the even-numbered variables, then it can be shown that I will require an exponential number of nodes in the reduced (minimum) BDD representation.

But if we use the variable ordering x1, x2, x3, x4, and so on, in that order, then we will require only 2n nodes. This is a very classical example, given in all textbooks, which is used to show that variable ordering is very important. 2^n is very large and grows very quickly with n, but 2n is proportional to n.

(Refer Slide Time: 27:13)

Let us take a very specific case of that function, with eight variables x1, x2, x3, x4, x5, x6, x7, x8. First we consider the bad variable ordering; that means we expand first with respect to the odd-numbered variables x1, x3, x5, x7, and then the even-numbered variables x2, x4, x6, x8. The entire BDD is shown here; of course, this is not the conventional BDD but the reduced version, and we shall see later how to reduce a BDD in size.

But the point to note is that there are a large number of nodes in this representation: if you count them, there are 30 decision nodes plus 2 terminal nodes, a total of 32 nodes in the BDD, when we use this variable ordering.

(Refer Slide Time: 28:31)

But if, for this same function, we use the alternate variable ordering x1, then x2, then x3, then x4, then x5, x6, x7 and x8, we get a very compact representation in which the number of nodes is only 10. So this is a very classic example used to illustrate that the ordering of the variables is very important, and most BDD generation and manipulation tools try to find a very good ordering of the variables so as to minimize or reduce the size of the BDD.
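The two node counts can be reproduced with a small Python sketch that builds the reduced BDD by hash-consing; the function name robdd_size and the dictionary encoding of assignments are my own scaffolding, not a standard package:

```python
def robdd_size(f, order):
    """Count the nodes in the reduced ordered BDD of f under the given
    variable order, built bottom-up by hash-consing (a sketch, not a full BDD package)."""
    unique = {}                  # (var, low, high) -> node id
    next_id = [2]                # ids 0 and 1 are reserved for the terminals

    def build(i, env):
        if i == len(order):
            return int(f(env))                      # terminal node 0 or 1
        var = order[i]
        low = build(i + 1, dict(env, **{var: 0}))
        high = build(i + 1, dict(env, **{var: 1}))
        if low == high:                             # elimination rule
            return low
        key = (var, low, high)                      # merging rule (hash-consing)
        if key not in unique:
            unique[key] = next_id[0]
            next_id[0] += 1
        return unique[key]

    build(0, {})
    return len(unique) + 2                          # decision nodes + 2 terminals

f = lambda e: (e['x1'] & e['x2']) | (e['x3'] & e['x4']) | \
              (e['x5'] & e['x6']) | (e['x7'] & e['x8'])
good = ['x1', 'x2', 'x3', 'x4', 'x5', 'x6', 'x7', 'x8']
bad = ['x1', 'x3', 'x5', 'x7', 'x2', 'x4', 'x6', 'x8']
print(robdd_size(f, good), robdd_size(f, bad))  # 10 32
```

The elimination rule is the low == high test, and the merging of isomorphic subgraphs falls out of the shared unique table; the good ordering yields 10 nodes and the interleaved one 32, as counted on the slides.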

(Refer Slide Time: 29:24)

So, the important point to note, as I mentioned just now, is that it is very important to find a good variable ordering, and we have here a binary decision diagram called the ordered binary decision diagram, where the ordering of the variables is defined. Of course, finding the best ordering is not easy; it is NP-hard, a class of problems considered computationally very complex. Why? If there are n variables, the number of possible orderings is n!, and the factorial function grows very rapidly with n; that is why this is a very difficult problem. But there are a number of rules, called heuristics, which are used to find a good variable ordering in general.

So with this, we come to the end of this lecture. In the next lecture we shall be looking at how we can reduce the size of the BDD to the forms in the two examples that we showed. The conventional BDD looks like a tree; how we can reduce the size of that tree and make the BDD more compact is what such rules achieve, and we shall be discussing them in our next lecture.

Thank you.

Switching Circuits and Logic Design
Prof. Indranil Sengupta
Department of Computer Science and Engineering
Indian Institute of Technology, Kharagpur

Lecture – 28
Binary Decision Diagrams (Part II)

So, we continue with our discussion on binary decision diagrams. If you recall, in our last lecture we said what a binary decision diagram is, and we also talked about something called an ordered BDD, an ordered binary decision diagram, where the order in which we expand the variables is fixed along any path that you follow from the root.

Let us say I use an order a, b, c; then we expand along all paths in that same order a, b, c. Now, in this next lecture on binary decision diagrams, we shall be talking about some rules using which we can reduce the size of the BDD,

(Refer Slide Time: 01:02)

because, as I said, the size of the BDD as such is exponential in nature: the number of leaf nodes, the terminal nodes, will be equal to 2^n, the same size as the output column of the truth table.

So, we have to do something to reduce the size, because an exponential is a really large number. Suppose I have a 20-variable function; how much is 2^20? It is about 1 million. For a 30-variable function, how much is 2^30? It is about 1 billion. So that number grows very rapidly with n, and today we talk about functions with 100 variables and even more. So we simply cannot represent a binary decision diagram in the conventional way; we need some rules for reduction.
reduction.

Let us see. A reduced ordered BDD is what we call an ROBDD. We know what a binary decision diagram is; if the ordering of the variables is the same along all paths, we call it an ordered BDD; and if we now apply some rules to reduce the size, what we get is a reduced ordered BDD. So we have this reduced ordered BDD, in which two reduction rules are applied systematically; these are the two rules we shall be illustrating.

First, we shall be merging any isomorphic subgraphs. What are isomorphic subgraphs? They are two subgraphs which are identical in structure. Let me take an example: suppose one part of my BDD looks like this, and in another part of my BDD the same structure appears; I say that these two are isomorphic. If I find two such identical (isomorphic) subgraphs, we can merge them into one.

This is one rule. Second, we can eliminate any node whose two children are the same. What I mean is: suppose I have a decision node b at some level, and both of its children point to the same node below. Both edges point to basically the same thing, so there is no need for the node b: if b is 0 we do this, and if b is 1 we do the same thing. So we can simply eliminate b from the graph; this is the so-called elimination rule, which we shall see in more detail.

Now, the interesting thing about the ROBDD is that it is a canonical form. Recall that when we talked about the sum-of-products and product-of-sums representations, we also talked about canonical representations; canonical means a unique representation of a sort. This reduced ordered binary decision diagram, with respect to a particular variable ordering, is canonical, which means there exists a single, unique ROBDD representation of a function.

Now, because there is exactly one ROBDD for a given function, this is a very useful property that we can exploit in many applications; one such application is functional equivalence checking.

(Refer Slide Time: 05:15)

Suppose we have two functions f1 and f2, and we are not sure whether they are the same function or not; we want to check whether they are equivalent. What we can do is create the ROBDDs of the two functions and then check whether the two ROBDDs are identical or not.

(Refer Slide Time: 05:35)

So, some properties of the ROBDD, as I have just now said. There are two main properties. One is uniqueness: no two distinct nodes can be labeled with the same variable name and have the same low and high successors. Take the example from earlier, which I show once more: suppose there are two nodes, both labeled with the same variable a, where in both cases the low successor is 0 and the high successor is 1. The uniqueness property says you cannot have two such nodes.

So, if you have such nodes, you merge them into one; only a unique copy of such a structure will exist. The other property is non-redundancy: no variable node can have identical low and high successors. A node cannot have both its dotted and solid arcs pointing to the same node at the next level; that would mean the node is not required, since irrespective of its value you reach the same place anyway. These are the two main properties of an ROBDD that need to be satisfied.

(Refer Slide Time: 07:17)

So, this is explained diagrammatically here. First is the merging rule, where we say that if we find isomorphic subgraphs, we merge them. Two cases are shown here; there can be other similar cases as well. Suppose you find two subgraphs like this: two nodes labeled with the same variable x, with the same low and high successor children y and z, and they are reached from two different places.

Then you merge them into a single node; from wherever the edges were coming, there will now be two incoming edges, one from each place, but only a single copy of the node is used. The incoming edges may be solid or dotted. The other case shown is similar: we merge the two nodes in the same way, and the two incoming edges come into the merged node.

(Refer Slide Time: 08:58)

You can have the other cases also; I have not shown them all. For example, one incoming edge can be dotted and the other solid; in that case, after merging, the first one remains dotted and the other solid, or the other way around. So all such cases of merging are possible. The other rule is to remove redundant nodes, as I already mentioned, when you have a scenario like this.

There is a decision node labeled x which says: if x equals 0 you come here, and if x equals 1 you also come here. This decision does not make any sense, because you arrive at y in both cases; so you can altogether eliminate the node x and get a reduced representation. If you apply these two rules repeatedly on a binary decision diagram, the size of the BDD keeps reducing step by step, and finally we get the representation called the ROBDD, the reduced ordered BDD representation of the function.
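The two rules can be applied mechanically to a node table until nothing changes; here is a minimal Python sketch, where the table encoding id -> (variable, low, high) and the name reduce_bdd are assumptions of mine:

```python
def reduce_bdd(nodes, root):
    """Apply the merging and elimination rules until a fixed point.
    nodes: dict id -> (var, low, high); ids 0 and 1 are the terminals."""
    changed = True
    while changed:
        changed = False
        remap, seen = {}, {}
        for nid in sorted(nodes):               # lower ids (children) first
            var, lo, hi = nodes[nid]
            lo, hi = remap.get(lo, lo), remap.get(hi, hi)
            if lo == hi:                        # elimination rule: redundant test
                remap[nid] = lo
                changed = True
            elif (var, lo, hi) in seen:         # merging rule: isomorphic duplicate
                remap[nid] = seen[(var, lo, hi)]
                changed = True
            else:
                seen[(var, lo, hi)] = nid
                nodes[nid] = (var, lo, hi)
        for nid in remap:
            del nodes[nid]
        root = remap.get(root, root)
    return nodes, root

# a small tree with both kinds of redundancy (node ids are arbitrary)
bdd = {2: ('c', 0, 1), 3: ('c', 0, 1),   # isomorphic pair
       4: ('b', 2, 3), 5: ('b', 2, 2),   # become redundant once 3 merges into 2
       6: ('a', 4, 5)}
print(reduce_bdd(bdd, 6))  # ({2: ('c', 0, 1)}, 2)
```

In this example node 3 merges into node 2, which then makes nodes 4, 5 and 6 redundant in turn; the whole diagram collapses to a single test of c, because the function was simply f = c.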

(Refer Slide Time: 10:08)

Let us take a complete example of constructing the reduced ordered BDD. We consider a function of three variables, shown here.

(Refer Slide Time: 10:26)

If you use the method for constructing a BDD that we discussed earlier, and expand the variables in the order x1, then x2, then x3, you get the functional representation; I am not showing the intermediate expansion steps, only the final result. Once you do this, the BDD you get looks like this; this is the initial, non-reduced version of the BDD.

Now, we apply those two rules that I mentioned in a repetitive fashion: we merge nodes whose low and high children are identical, and if we find isomorphic subgraphs we merge them. With respect to the original BDD, one observation you can make immediately is that at the terminal level there are many 0 nodes and many 1 nodes; so why not merge all the 0 nodes into one and all the 1 nodes into one? That is the first step we do.

In this example there are three 0 nodes and five 1 nodes. So the first step is this: we use a single copy of the 0 node and a single copy of the 1 node. The single 0 node is now pointed to by all the edges that previously went to some 0 terminal, namely the dotted edges from these x3 nodes.

And for the 1 node, likewise: the solid arrows from these x3 nodes, and from one x3 both the dotted and the solid arrow, all point to the single 1 node. This is the first step of reduction. In this version you can make some observations. The first is that, if you look at this part, one x3 node has both its dotted and solid edges pointing to the same node.

So, as per our reduction rule, we can eliminate that x3. Another thing: if you look at these two nodes, this x3 and this x3, they are isomorphic, because in both of them the low child points to 0 and the high child points to 1; so they can be merged.

After this reduction step we get something like this. Exactly as I said, the x3 that had two parallel edges pointing to 1 has been eliminated, so its parent x2 now points directly to 1 with one edge. And the two isomorphic x3 nodes have been merged into one: previously one incoming edge came from one x2 and another from the other, and now both the dotted and the solid lines from the x2 nodes point to this single x3. After these reductions, this is what you get.

Now, in this version there is again scope for reduction: look at this part of the graph, where an x2 has both its dotted and solid arrows pointing to the same node. So that x2 can be eliminated, and after eliminating it we finally get a BDD like this. Now you can understand the purpose of carrying out this reduction: the original BDD had 8 terminal nodes at the lowest level, plus 4 + 2 + 1 decision nodes, a total of 15 nodes.

And after reducing, you are left with only five nodes; for larger functions the reduction can be even greater. This is the basic purpose of carrying out reduction and obtaining the reduced ordered BDD. So what we finally get here is the ROBDD for this particular variable ordering x1, x2, x3; if you use some other variable ordering, you can get some other structure for the ROBDD.

(Refer Slide Time: 16:02)

Some benefits of the ROBDD are pretty straightforward to understand. Suppose you have a function f and you want to check whether the function is equal to 1 for all assignments of the input variables; such a function is called a tautology. With an ROBDD it is very easy to check whether a function is a tautology, because the ROBDD of such a function will simply be the 1-terminal: f points straight to the node 1. So checking for tautology is trivial. The second benefit concerns complementation.
directly to 1, the second thing is that complementation well you look at the ok.

(Refer Slide Time: 17:08)

Let us take an example to illustrate. Suppose I have a BDD representation of a function f, with decision nodes x1, x2 and x3, and terminal nodes 0 and 1.

Now, suppose I have the BDD of a function f and I want to generate a BDD for the complement function; do I have to go through the whole process again and construct the BDD from scratch? It turns out that this is not required: if you have the BDD of the function f, just interchange 0 and 1, making the 0-terminal a 1 and the 1-terminal a 0.

So, for all those cases where the function was getting the value 0, now it will get the value 1
and for all cases where it was getting 1, now it will get 0 just the reverse right. And thirdly
these equivalents whether two functions are identical or not it is easy because, here you have
seen that ROBDD is canonical. So, if you construct the ROBDD of the two function and
show that they are the same, then you can say that the two functions are equivalent, or
otherwise the functions are not equivalent ok.

407
(Refer Slide Time: 18:44)

Now, you can use this ROBDD in various ways for synthesizing circuits also, and we
shall be taking some examples. Earlier I mentioned the 2-to-1 multiplexer and the way we
designed functions using such multiplexers; the way we realize a BDD using repeated
Shannon decomposition is very similar. So, a BDD node and a 2-to-1 multiplexer have
a one-to-one correspondence.

Let us take some examples. In the context of synthesis, what I mean to say is this: consider some of the
reduction rules that we were applying. We already mentioned one rule, where if
a node is pointing both edges to the same node, I can eliminate that node. In terms of
switching algebra, what does that mean? You see, if this entire sub-function below
represents a sub-function h, then this BDD represents x'h OR xh; if
you take h common, it becomes h(x' OR x), which is h.1 — the node gets eliminated and only h remains.

So, this kind of algebraic minimization is implicitly carried out during this transformation.
Similarly, for the other rule, there are two identical or isomorphic subgraphs; suppose they
both represent h — one path contributes ah and the other bh. So, if I merge them, it means
(a OR b) AND h, which is the same function. So, again, using some rules of switching algebra we do this kind of
minimization. We are doing the same thing implicitly using this graphical data structure of the
BDD.
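The two identities behind these reduction rules are finite statements over 0/1 values, so they can be checked by brute force; this small sketch (the function name is mine) verifies both.

```python
from itertools import product

def check_reduction_identities():
    """Brute-force the switching algebra behind the two BDD rules."""
    for x, h, a, b in product((0, 1), repeat=4):
        # Node elimination: x'.h OR x.h == h
        assert ((1 - x) & h) | (x & h) == h
        # Merging isomorphic subgraphs: a.h OR b.h == (a OR b).h
        assert (a & h) | (b & h) == (a | b) & h
    return True
```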

408
(Refer Slide Time: 20:54)

So, let us make one point here, which we have already mentioned: variable ordering can
reduce the size of the BDD. Now, this variable reordering is also one way to minimize the
logic implementation implicitly. So, we are not thinking about minimizing gates or circuits;
we are thinking about the function, the BDD representation — we are trying to find
out what kind of variable ordering can give us a BDD which is smaller.

Now, a smaller BDD may mean a smaller circuit. So, we are thinking about a correspondence
like that, and during the construction of the BDD itself, as we have seen, some redundancies are
implicitly removed. So, the idea is that when we use a BDD, we are already carrying out
some function minimization: while constructing the BDD, while
applying the reduction and merging rules to get the ROBDD, we are already applying some
minimization, right.

409
(Refer Slide Time: 22:19)

Now, let us talk about the multiplexer realization of functions. Think of a scenario like
this: a decision node in a BDD labeled x — its low child represents a function f, the high child
represents a function g. Now, if I want to implement this decision node by a multiplexer —
well, a multiplexer is symbolically sometimes denoted like this, like a trapezium; this is your
select line.

This is your multiplexer's select line, this is your multiplexer output, and on this side are the
inputs. The 0 and 1 indicate: if x equals 0, f is selected; if x equals 1, g is
selected. So, this is exactly what the decision node means: if x is 0 you come this side, if x is
1 you come this side. So, this can be directly mapped to a multiplexer, and you can repeat
this: f and g can be further decomposed, and you can have more multiplexers generated for f
and g, fine.
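The node-to-multiplexer correspondence is just this one-liner (a sketch with assumed names):

```python
def mux2(sel, in0, in1):
    """2-to-1 multiplexer: passes in0 when sel = 0 and in1 when sel = 1
    -- exactly a BDD decision node labeled sel whose low child is in0
    and whose high child is in1."""
    return in1 if sel else in0
```

Applying it recursively to the children reproduces the repeated Shannon decomposition.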

410
(Refer Slide Time: 23:43)

Let us consider a scenario like this: you have two sub-functions f and g — well, I am
showing some segments of the BDD — and you can have a decision block here which can be
represented by h. It means if some condition h is true, then you go to g; if h is false, then you
come to f; that means, if h is 0 then f, if h is 1 then g.

Now, if we have a scenario like this and you can identify this h, then you can again realize this
using a multiplexer where the select line is generated from h, because depending on h being equal to
0 or 1, you will be selecting either f or g: if it is 0, f is selected; if it is 1, g is selected, right.

(Refer Slide Time: 24:51)

411
Let us look at a concrete example here. Consider a scenario like this, where you can say this
part indicates h, and c and d are our inputs. So, you can have a multiplexer
like this: c can be fed to one side, d can be fed to the other side, and this h
network, whatever it is, can be used as the select line. So, you can have a mapping for the multiplexer
realization like this.

(Refer Slide Time: 25:37)

Let us work out a complete mapping example with respect to the BDD that we had taken as
an example earlier; this was the ROBDD that was generated from the function. Now, what we
want is a complete multiplexer based implementation for this
function. So, how do we do this? We start from the root side: for x1 we take one
multiplexer here; the output of this multiplexer generates the function f.

The select line is connected to x1. Now there are two edges, 0 and 1, so we will be having two inputs
here: one for select line 0, the other for select line 1 — let us show them like this. Now, if the
select line is 0, where do you go? You come here, to x3; so, there will be another multiplexer
here, which will be selected by x3. So, again there will be a 0 input and a 1 input, and for x3 the
0 input is connected to 0 and the 1 input is connected to 1. So, you can directly connect one
to 0 and the other to 1.

But for the other case, here you have x2; so, here you have a multiplexer which will be
selected by x2, and its 0th input will be connected to x3 — the same x3 node indicated
here, so the same signal goes to this 0 input — and the 1

412
input is connected to 1. So, you see, for this BDD, using a pure multiplexer realization you can
have something like this.

But of course, one thing you should understand is that this may not be the best possible realization. For
example, if you look at this part, what does this function mean? The x3 multiplexer has 0 on its 0 input
and 1 on its 1 input; it means if x3 is 0 you send 0, and if x3 is 1 you send 1 — this is as good as x3 itself. You can
eliminate this multiplexer and directly connect x3 here.

So, such minimizations are possible; but in general, for a large BDD, every decision node
can be directly mapped into a 2-to-1 multiplexer. So, this is a very convenient and easy
way to map a BDD to a multiplexer network. This is one of the ways in which synthesis of
circuits using a multiplexer kind of network can be carried out.
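Reading the walkthrough above off as code — the exact ROBDD is on the slide, so treat the wiring here as an assumption based on the description — the three-multiplexer network and its simplification look like this:

```python
def mux2(sel, in0, in1):
    """2-to-1 multiplexer: in0 when sel = 0, in1 when sel = 1."""
    return in1 if sel else in0

def f(x1, x2, x3):
    n_x3 = mux2(x3, 0, 1)        # x3 node: 0-input tied to 0, 1-input to 1
    n_x2 = mux2(x2, n_x3, 1)     # x2 node: 0-input is the x3 node, 1-input is 1
    return mux2(x1, n_x3, n_x2)  # root multiplexer, selected by x1
```

The x3 multiplexer is redundant: `mux2(x3, 0, 1)` equals x3 for both input values, which is exactly the simplification the lecture points out.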

(Refer Slide Time: 29:04)

So, to summarize: we have talked about BDDs. A BDD has many applications; they have
been traditionally used to represent and manipulate switching functions, or Boolean functions,
in many ways. They are used to generate circuits, which is called synthesis; they are used to
verify the operation of circuits — there is a branch called formal verification where they are
very heavily used.

And there are many software packages already available on the Internet which you can
download and use for manipulation of BDDs. So, you can create a BDD for a function, you
can minimize it in various ways, you can do equivalence checking given two BDDs, you

413
can find the AND of two BDDs, or you can complement a BDD — a lot of such operations can be
carried out, and these are all supported by the BDD tools. So, with this we come to the end of this
lecture. In the last couple of lectures we talked about the binary decision diagram, which is
a very important data structure used to represent and manipulate switching functions.

Thank you.

414
Switching Circuits And Logic Design
Prof. Indranil Sengupta
Department of Computer Science and Engineering
Indian Institute of Technology, Kharagpur

Lecture – 29
Logic Design using AND-EXOR Network

So, in this lecture, we shall be looking at some unconventional ways in which a circuit can be
designed. Normally, when we talk about logic design, we have said
that we can use different kinds of gates to implement logic: AND, OR,
NOT; NAND and NOR gates are functionally complete, so we can use them as well. We have also
seen various ways in which the basic building blocks of design, like multiplexers, decoders,
etc., can be used to implement logic.

Now in this lecture, we shall be talking about something called the AND-EXOR network.
This is not very conventional: we will be using AND gates and EXOR gates to realize
logic functions. Let us look into this in a little more detail.

(Refer Slide Time: 01:23)

So, what we have just now mentioned is that we have already studied various methods of
logic design using basic gates and also using functional modules like the multiplexer. But
there are many applications — think of arithmetic: addition, multiplication, these kinds of
operations; and you had looked at the Hamming error correcting codes earlier. So, error

415
corrections; there are applications in communication — encoding and decoding — various such
applications where we use the exclusive OR operation very heavily.

So, if we allow not only the basic gates but also exclusive OR gates to be used in our final
circuit, then the size and complexity of the circuit can be reduced to a great extent. One
classic example, if you recall, is the circuit to generate the parity of a word: a simple exclusive
OR of all the bits will generate the parity — that was one example where EXOR gates are
very efficient. Not only that, it is also very easy to test such circuits. We shall consider this
AND-EXOR kind of representation in this lecture. Now, coming back to the example of a
parity generator, let us take a very simple
case: suppose I have a 4-bit parity generator and I use EXOR gates to generate it.

So, I can use one large EXOR gate, or I can use a cascade of smaller gates — they are equivalent —
and normally it is much easier to build smaller gates. So, you see, to generate the parity
of 4 bits, I need 3 two-input EXOR gates, right. So, the hardware is not that much. But
now suppose I want to implement the same thing using AND, OR and NOT gates in a
conventional way. What will be my hardware complexity for a 4-input EXOR? Just to recall what this
function is: EXOR is nothing but the OR of the minterms which
correspond to an odd number of ones. So, for a 4-variable function, there will be 8 such
combinations where the number of ones is odd.

(Refer Slide Time: 04:24)

416
So, in an AND-OR realization, we will be requiring 8 AND gates with 4 inputs
each, and in the last stage we will be requiring a large OR gate with 8 inputs;
and of course, at the inputs we will be using some NOT gates in addition — there will be 4 NOT
gates for the 4 inputs.

So, you see, we will be requiring eight 4-input AND gates, one 8-input OR gate and 4 NOT gates for a
conventional AND-OR-NOT realization; but if we use EXOR, you need only 3 two-input
EXOR gates. This is a classic example which shows that EXOR gates, for some
applications, can be very efficient as compared to conventional logic implementations. Let us
move on.
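The gate-count comparison can be made concrete with a short sketch (my own illustration, not from the slides):

```python
from itertools import product

def parity_xor(a, b, c, d):
    """4-bit parity from a cascade of three 2-input EXOR gates."""
    return (a ^ b) ^ (c ^ d)

# The AND-OR-NOT form instead needs one 4-input AND gate for every
# input combination with an odd number of ones:
odd_minterms = [bits for bits in product((0, 1), repeat=4)
                if sum(bits) % 2 == 1]
```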

(Refer Slide Time: 05:30)

Now, because our subject of discussion in this lecture is AND-EXOR implementations, let us
look at three alternative ways of expansion. Well, we already talked about Shannon's expansion
earlier; let us look at it in this way. Consider that we have an n-variable function f with
variables x1 to xn. It can be expanded — just like the Shannon decomposition or expansion we
talked about earlier — with

respect to any of the variables; in this example I illustrate it with x1, in three possible ways.
These are called positive Davio, negative Davio, and our familiar Shannon's expansion. Well,
here we use three notations: f0 indicates the cofactor of the function f where the input
variable x1 is 0 — earlier we expressed it with a subscript notation, but anyway I am showing it only as

417
f0 — and similarly f1, which earlier we also wrote with a subscript; this f1 means the cofactor with the variable at 1. And
we introduce another function f2, which is the exclusive OR of f1 and f0, right.

Now, with this notation, the positive Davio expansion says that the function can be written as f0
EXOR x1.f2; the negative Davio expansion says it can be expressed as x1'.f2 EXOR f1; and the
Shannon expansion says x1'.f0 EXOR x1.f1. Now, recall one thing: earlier, while talking
about the multiplexer realization,

when we introduced the Shannon decomposition theorem, we used an OR here, not
an EXOR; but here we are showing an EXOR. Well, you shall see very shortly that in certain
cases OR and EXOR are equivalent — this is one such case. So, I can either write OR or I
can write EXOR; they will mean the same thing. So, there are three different ways in which I can
expand the function. Let us see what happens if I expand using, for example, Shannon.
Shannon.

(Refer Slide Time: 08:15)

So, what does this expression mean? It means I will be having two AND gates: one is fed with x1'
and f0, the other is fed with x1 and f1, and we take the EXOR of the two — so there is an
EXOR gate. You feed the two AND outputs to it and you
get the function f. This is an AND-EXOR kind of realization, right.
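All three expansions can be verified exhaustively for any sample function. This sketch (function names are my own) checks them against the definitions f0 = f with x1 = 0, f1 = f with x1 = 1, and f2 = f0 EXOR f1.

```python
from itertools import product

def check_expansions(f):
    """Verify positive Davio, negative Davio and Shannon w.r.t. x1
    for a 3-input function f over all input assignments."""
    for x1, x2, x3 in product((0, 1), repeat=3):
        f0 = f(0, x2, x3)                 # cofactor for x1 = 0
        f1 = f(1, x2, x3)                 # cofactor for x1 = 1
        f2 = f0 ^ f1
        v = f(x1, x2, x3)
        assert v == f0 ^ (x1 & f2)                 # positive Davio
        assert v == f1 ^ ((1 - x1) & f2)           # negative Davio
        assert v == ((1 - x1) & f0) ^ (x1 & f1)    # Shannon (EXOR form)
    return True
```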

418
(Refer Slide Time: 08:57)

Now, let us look at some of the properties of EXOR, some of which you already know and some of
which may not be very apparent. You already know the first of these rules:
exclusive OR is associative. So, whether you take the EXOR of x and y and
then EXOR with z, or the other way around, it does not make any difference, because ultimately
EXOR tells whether the number of ones is odd or even — if it is odd the result is 1, if it is even it
will be 0.

The next one is commutativity: whether you take x EXOR y or y EXOR x, it
means the same thing. Now, if you take the EXOR of any function or any
variable with itself, it becomes 0. This is important: when you have an
expression like x y z EXOR x y z, the two terms cancel each other — this is 0. So, this x
need not be a variable only; it can be a function also — any function EXOR with itself is 0,
and anything EXOR with 1 means the complement, right. Now let us look at the other two
rules, which may not be very familiar to you.

419
(Refer Slide Time: 10:30)

This one is some kind of distributive law: AND distributes over EXOR, that is, x AND (y EXOR z) equals
xy EXOR xz. Let us take the right hand side, xy EXOR xz, and expand it: xy EXOR xz = xy.(xz)'
OR (xy)'.xz. If you apply De Morgan's law, this is xy.(x' OR z')
OR (x' OR y').xz. Now, if you multiply out: x.y.x' is 0, so the first part leaves
xyz'; and x'.xz is 0, so the second part leaves y'xz. We are left with xyz' OR xy'z; taking x common,
it is x.(yz' OR y'z), which is nothing but x AND (y EXOR z) — the left hand side.

So, you see that AND and EXOR distribute over each other. So, if you have x AND (y EXOR z),
you can multiply it out as xy EXOR xz, right. Now let us look at another
interesting rule. It involves x and y — well again, x and y need not be variables; they can be any two
functions.

420
(Refer Slide Time: 12:18)

If the AND of these two is 0, then OR and EXOR are equivalent and I can replace one by the other. Well, why is it
so? You see, look at the truth tables of x OR y and x EXOR y:

they differ only in the row where x = 1 and y = 1. But in addition, I am saying that
my condition x.y = 0 is true — the AND of x and y is 0 — which rules out exactly
that row. So, in the remaining rows, x OR y is nothing but x EXOR y; they are the same. As an
illustration, look at the previous example — go back to the previous
slide, x1'.f0 EXOR x1.f1: you see there is x1' and x1, and if you take the AND of these two terms, it is
0.

Therefore, this EXOR and OR are equivalent here; you can replace the EXOR with an OR, right. This
is the basic idea. So, remember these rules.
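All of these rules are finite statements over 0/1 values, so they can be confirmed by enumeration; a small check of my own:

```python
from itertools import product

def check_exor_rules():
    """Enumerate all 0/1 assignments and verify the EXOR identities."""
    for x, y, z in product((0, 1), repeat=3):
        assert (x ^ y) ^ z == x ^ (y ^ z)          # associativity
        assert x ^ y == y ^ x                      # commutativity
        assert x ^ x == 0                          # EXOR with itself is 0
        assert x ^ 1 == 1 - x                      # EXOR with 1 complements
        assert x & (y ^ z) == (x & y) ^ (x & z)    # AND distributes over EXOR
        if x & y == 0:                             # when the AND is 0,
            assert (x | y) == (x ^ y)              # OR and EXOR coincide
    return True
```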

421
(Refer Slide Time: 14:07)

Fine, now let us come to something called the Reed-Muller expansion. The Reed-Muller expansion is
a classic way of implementing a switching expression using AND and EXOR gates. This was
proposed long back; there are many applications of the Reed-Muller expansion, and people have
been using it for many decades. The idea of the classic Reed-Muller expansion is
that we use the positive Davio expansion, say, to expand a given function repeatedly,
recursively.

So, if you do that — we shall take some examples later — then the function f can be written as
the EXOR of a number of AND terms covering all possible products of the variables. Let us
take a very specific example: suppose I have 3 variables x1, x2, x3 (the general
expression is what looks complicated). For 3 variables the function will look like this:

f = a0 EXOR a1.x1 EXOR a2.x2 EXOR a3.x3 EXOR a12.x1x2 EXOR a13.x1x3 EXOR a23.x2x3 EXOR a123.x1x2x3

— not plus; everything is combined using EXOR. So, you see, all possible product terms over x1, x2, x3 are
there: the single variables, the pairs taken two at a time, all three taken together, and the constant term with
none of them.

So, there are 2^3 = 8 such AND (product) terms, all EXORed together, and the coefficients ai
are either 0 or 1 — a term may be present in the expansion or it may not be.
This is what is referred to as the Positive-Polarity Reed-Muller expression;
Positive-Polarity means all the variables appear in uncomplemented form. This is the
basic idea.
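The coefficients ai can be computed from the truth table with the standard XOR "butterfly" transform; this is a generic sketch of my own (the lecture itself derives the coefficients algebraically, not this way).

```python
def pprm_coefficients(truth):
    """Positive-polarity Reed-Muller coefficients from a truth table of
    length 2**n.  Entry k of the result is the coefficient of the
    product of the variables whose bits are set in k."""
    a = list(truth)
    n = len(a).bit_length() - 1
    step = 1
    for _ in range(n):
        for i in range(0, len(a), 2 * step):
            for j in range(i, i + step):
                a[j + step] ^= a[j]   # XOR the x=0 half onto the x=1 half
        step *= 2
    return a
```

For example, the 2-variable OR function (truth table 0, 1, 1, 1) yields coefficients 0, 1, 1, 1, i.e. OR = x1 EXOR x2 EXOR x1x2.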

422
(Refer Slide Time: 16:59)

So, let us take an example. Suppose I have a function like this and I want to express it in
the Positive-Polarity form. What do I do? Well, I already know the EXOR rules we saw
in the last slide: x1' we can write as x1 EXOR 1, x2' we can write the same way, and
x3' likewise. Why are we writing it like this? Because we want to eliminate
the bars — we want to use Positive-Polarity only.

So, this x1' we have made (x1 EXOR 1), because EXOR with 1 means complement. We have
eliminated the bars like this, and if you substitute here it will look like this; and you know that
AND is distributive over EXOR, so you go on multiplying. I have skipped a step; you can do it
like this.

Take the first two factors, for example: (x1 EXOR 1)(x2 EXOR 1) multiplies out to x1.x2 EXOR x1
EXOR x2 EXOR 1 — this is the first factor's product, and the factor (x3 EXOR 1) remains. Now if you
multiply again: x1x2 and x3 gives x1x2x3; x1x2 and 1 gives x1x2;
x1 and x3 gives x1x3; x1 and 1 gives x1; x2 and x3 gives x2x3; x2 and
1 gives x2; 1 and x3 gives x3; and finally 1 and 1 gives 1, right. So, you see, for
this function, whatever you ultimately get is your Positive-Polarity Reed-Muller form — all
variables are uncomplemented, right.

423
(Refer Slide Time: 19:05)

Now, the point is that sometimes you may not require everything in uncomplemented form; you
can have a combination of uncomplemented and complemented forms. But you can enforce
that each variable, say x1, appears in only one form of complementation — say x1 always
uncomplemented, x2 always complemented, x3 always uncomplemented, like that. If
you have that kind of restriction in place, then you have something called the Fixed-Polarity
Reed-Muller expansion.

Here, we can use a combination of positive Davio and negative Davio kinds of expansions,
where, as I said, each variable will appear in either complemented or uncomplemented form,
but not both. Let us take an example: take a function where there is a term x1.x2.x3.x4
and there is a term x1'.x2'.x3'.x4', and let us try to arrive at a mixed kind of
Reed-Muller expansion. For this, the first observation is based on one of our rules: take
the term x1.x2.x3.x4 and the other one, x1'.x2'.x3'.x4'; if you take the
AND of these two, then, because each variable meets its complement, the result is 0.

So, according to our previous rule, if x.y is 0, then x + y and x EXOR y are the
same. So, in the original function the OR can be replaced by EXOR, because the AND of the two
product terms is 0 — the product terms are disjoint.

424
(Refer Slide Time: 21:21)

Now, let us look at the steps of the expansion. What are we doing? We want x1 and x2
to be in uncomplemented form and x3 and x4 in complemented form. So, in the first term,
because x1 and x2 are already uncomplemented, we
leave them as they are; but we want x3 and x4 in complemented form, so x3 we replace by
(x3' EXOR 1), bringing in a bar, and x4 we replace by (x4' EXOR 1). Similarly, for the other term,
x3' and x4' already have bars.

So, we leave them as they are; but x1 we make uncomplemented by writing x1' as (x1 EXOR 1), and
x2 likewise. Then, using the distributive law, we go on multiplying out the factors; the common
term x1.x2.x3'.x4' appears once from each side and cancels, and you finally get terms like
x1.x2.x3' and x1.x2.x4'. So, you get a Reed-Muller expansion where x1 is uncomplemented
everywhere, x2 is uncomplemented, x3 is always complemented, and x4 is always complemented. So, you have a
Fixed-Polarity Reed-Muller expansion, where each variable can be either
complemented or uncomplemented, but not both. This is what you can have.
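The six-term fixed-polarity expansion reconstructed from this derivation can be checked exhaustively against the original function; the term list below is my own reading of the steps, so treat it as an assumption to be verified rather than a transcription of the slide.

```python
from itertools import product

def check_fprm():
    """f = x1.x2.x3.x4 OR x1'.x2'.x3'.x4'  versus its fixed-polarity
    Reed-Muller form with x1, x2 positive and x3, x4 negative."""
    for x1, x2, x3, x4 in product((0, 1), repeat=4):
        n3, n4 = 1 - x3, 1 - x4            # the complemented literals
        f = (x1 & x2 & x3 & x4) | ((1 - x1) & (1 - x2) & n3 & n4)
        fprm = ((x1 & x2 & n3) ^ (x1 & x2 & n4) ^ (x1 & x2)
                ^ (x1 & n3 & n4) ^ (x2 & n3 & n4) ^ (n3 & n4))
        assert f == fprm
    return True
```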

425
(Refer Slide Time: 23:11)

Now, let us talk about two-level AND-EXOR realizations. The function that we just now
arrived at is like this. So, in order to implement it, what do we need? How many
product terms are there? 1, 2, 3, 4, 5, 6. So, we will be needing 6 AND gates: the first AND gate
will have 2 inputs, the second 3 inputs, the third also 3, the fourth
2, this one 3, and this one 3; and I have to take the EXOR of all of them — there are
6.

So, one way is to have a large EXOR gate with 6 inputs and connect the outputs of the AND
gates to it directly; but, as I said, a large EXOR gate can be difficult to manufacture.
Another thing is that for the input stage you may need some
complementing: for x3 and x4 you will be needing two NOT gates. So,
you need two 2-input AND gates, four 3-input AND gates and one 6-input EXOR gate.
Now, what I am saying is that in general — this is a general expression for some
function — you may also have a constant term, 1 EXOR something, right.

So, the general circuit will look like this: you can have a chain of 2-input EXOR gates; this is
an alternate way of implementing it. Here, for our case, the first input of the chain can be set to 0; but
if the constant term 1 is present, we put it to 1. The advantage is that you can use small-size EXOR gates, but the
drawback is that the delay will be more — the total delay will be that of a cascade of the EXOR gates,
right.

426
(Refer Slide Time: 25:59)

This is what I was mentioning: instead of a single large gate, we can have a cascade of
such smaller gates also, in general, right.

(Refer Slide Time: 26:13)

So, let us work out a complete example here. Suppose we want to find the Reed-Muller
expansion for this function; there are 4 true minterms: 0, 3, 5, 6. So, if you carry out
minimization, you will see that the minimized form of the function will be this; there are 4
product terms.

427
Now, one thing you observe is that between each pair of product terms, if you take the AND,
it is 0. So, this OR can be replaced by EXOR — that is the idea. So, the first step works
because the product terms are disjoint — there is nothing in common. But say
you had the two product terms ab OR bc; then you could not have done that, because ab
AND bc have something in common — ab AND bc is not 0. But if you had ab + b'c, then you
can replace the plus by EXOR, right. So, here the first step is: you replace the ORs by EXORs.

(Refer Slide Time: 27:35)

Then, suppose I want the positive polarity form — all variables uncomplemented. So,
wherever that is not the case: x1' I replace by (x1 EXOR 1), x2' by (x2 EXOR 1), and x3' by (x3 EXOR 1).

Similarly here for x1' and x2', and here for x3'. Then I go on multiplying, as in the method I
mentioned earlier — we simply multiply out each product term: the
red first term expands like this, the blue second term becomes x2x3 EXOR x1x2x3, the
green term becomes this, and the brown term becomes this — x1x3 and x1x2x3 — and we have the EXOR of
all these terms. So, now, I have an EXOR of
many terms, and I can apply the rule that any function EXOR
the same function is 0.

So, I can cancel them out. Let us see which terms get cancelled: one
x1x2x3 here and one x1x2x3 here cancel out; another such pair cancels out too;
this x1x2 and this x1x2 cancel; this x2x3 and this x2x3
cancel; and finally this x1x3 and this x1x3 get cancelled. So, most of the terms are getting

428
cancelled out, and what you finally have is only 1 EXOR x1 EXOR x2 EXOR x3. So, in this
AND-EXOR realization you do not need any AND gates for this example — you need only EXOR gates; it is a
very simple form of the expansion.
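The final result is easy to confirm by enumeration (a small check of my own):

```python
from itertools import product

def check_example():
    """f = sum of minterms (0, 3, 5, 6)  equals  1 ^ x1 ^ x2 ^ x3."""
    for x1, x2, x3 in product((0, 1), repeat=3):
        minterm = (x1 << 2) | (x2 << 1) | x3
        f = 1 if minterm in (0, 3, 5, 6) else 0
        assert f == 1 ^ x1 ^ x2 ^ x3
    return True
```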

So, with this we come to the end of this lecture. What we discussed in this lecture are,
as I said, some unconventional ways in which you can represent functions using AND and
EXOR gates. We showed how we can realize an arbitrary function in Reed-Muller
expression form, either with all variables uncomplemented or in the so-called fixed polarity
form, where some variables are complemented and some are uncomplemented.

The advantage is that for some functions this can lead to very small realizations; and also,
later on, we will see that when you design a circuit you may also want to test whether the
circuit is working correctly or not — for this kind of Reed-Muller circuit, testing becomes
very easy. But this we shall discuss later.

Thank you.

429
Switching Circuits and Logic Design
Prof. Indranil Sengupta
Department of Computer Science and Engineering
Indian Institute of Technology, Kharagpur

Lecture – 30
Threshold Logic and Threshold Gates

So far, if you recall, we have discussed various ways of designing logic circuits using
conventional gates like AND, OR, NOT, NAND, NOR, EXOR, and also some functional blocks
like multiplexers, decoders, etc. In this lecture we shall be looking at a slightly different way
of designing logic functions — not using the conventional gates, but using something which is a
little unconventional. The title of this lecture is Threshold Logic and
Threshold Gates. So, in this lecture we shall be talking about threshold logic circuits
and how we can implement them using threshold gates.

(Refer Slide Time: 01:14)

So, let us see the basic idea behind this so-called threshold logic. The first thing is that in
threshold logic, the basic element that we talk about is called a threshold element or a
threshold gate; this is how a threshold gate looks in its schematic form. So, you see, what are
the basic components of a threshold gate? If I show it as a rectangular box, there are
some inputs which are applied — in this example I have shown n inputs x1,
x2 up to xn — and there is one output y.

430
Now, there are a few other parameters. You can see each input is assigned some weight —
W1, W2, …, Wn; these are called the weights — and there is another parameter, a quantity called T; T is
called the threshold. So, unlike the gates we learned earlier — AND, OR, NOT, etc. — where
only the inputs and the output matter, here in addition you have some weights and a threshold.
Now, how do the weights and threshold play a role? Well, in this threshold element, or
threshold gate, the inputs x1, …, xn are binary inputs, meaning they can
be 0 or 1.

(Refer Slide Time: 02:55)

The output y is also a binary output, but the weights Wi and the threshold are not
binary numbers — they are real numbers. Real numbers means they can be integers, they can be
fractions, they can be negative also; they can be arbitrary numbers. For example, the
weight and threshold values can be 1, they can be 2, they can be −3, 0.5, −2.5, etc.
So, you can have any such numbers — not only integers, but also numbers with
fractional parts, both positive and negative.

Now, the way the output gets decided is as follows. The output y will be equal to 1 if the
weighted sum of the inputs — by weighted sum we mean

Σ (i = 1 to n) Wi·xi

which, written out in expanded form, is W1·x1 + W2·x2 + … + Wn·xn — if

431
this weighted sum is greater than or equal to the threshold T, then the output will be 1;
otherwise the output will be 0. This is how a basic threshold gate works: you have the
inputs, you have the weights assigned to the inputs, you have a threshold; you compute the
weighted sum, and if the weighted sum is greater than or equal to the threshold the output is
1, and if it is less than the threshold the output is 0. This is the functional behavior of a
threshold gate.
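The behavior just described can be captured in a few lines; this sketch (the names are mine) also shows how ordinary gates fall out as special cases, e.g. a 2-input AND with weights 1, 1 and threshold 2, or an OR with the same weights and threshold 1.

```python
def threshold_gate(inputs, weights, T):
    """Output 1 iff the weighted sum of the binary inputs reaches the
    threshold T.  The weights and T may be arbitrary real numbers."""
    s = sum(w * x for w, x in zip(weights, inputs))
    return 1 if s >= T else 0

# 2-input AND: weights (1, 1), T = 2.   2-input OR: weights (1, 1), T = 1.
```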

(Refer Slide Time: 04:56)

Now, let us try to understand why threshold logic and threshold gates are considered
important today. The first thing is that threshold logic has a direct connection with
neuromorphic computing. Well, neuromorphic computing is a branch of, you can say, computer
science, where we are trying to mimic the behavior of the brain. The way the brain works, there
are basic building blocks in the brain which are called neurons, and neurons work in a way which
is very similar to a threshold gate: there are weights, there is a concept of a weighted sum, there
is a concept of a threshold — this is how people normally model a neuron.

So, if you have a way to build a threshold gate, then using threshold gates you can possibly
model the brain also — this is what neuromorphic computing is about: you can model the
neurons that are present in the brain. Even if you set aside neuromorphic computing, in
conventional logic design this can be considered as an alternative, because, as we shall see in a
few examples, you can get much simpler circuit realizations — for many functions (not all
functions, of course) using threshold gates you can get very small circuits,

432
very easy to design and implement. This can be one advantage.

And the second thing is that although threshold logic is relatively old — it is not a new
concept — technologies for implementing threshold gates efficiently have become available only recently.
Earlier the technology was not available, which is why people did not build circuits using threshold
gates; but today people are thinking about building them. For example, we talked about the memristor
earlier; using memristors, people are working very actively towards building threshold logic
gates, with applications in neuromorphic computing, fine.

(Refer Slide Time: 07:20)

Let us take an example now; here we consider a three-variable function, and let us look at it from the
other side. Let us say that we want to implement a function like this: a sum-of-products
specification where the true minterms are 1, 2, 3, 6 and 7. So, if you do a minimization
using a K-map or any other method, you will see that the minimum form is x1'.x3 OR
x2.

Now, here I am not yet telling you how, but this is a threshold logic implementation of this function.
Now, look at one thing: if you wanted to implement this function using conventional
gates, then you would require a NOT gate for x1, an AND gate to form x1'.x3,
and this would go to an OR gate with x2 also connected to it.

So, you would be requiring three gates, but using threshold logic this is just an example
single threshold element you can implement this function.

Now, let us see how this single threshold element actually implements this function. The
weights used here are -1, 2 and +1, and the threshold is one half. Let us see how this gate
works.

(Refer Slide Time: 09:07)

Look at this truth table. On one side we have listed all possible input combinations of x1,
x2, x3. Because the weights are -1, 2 and 1, we compute the weighted sum as, as I said, each
weight multiplied by its input; so here the expression is -x1 + 2x2 + x3, evaluated for each
combination of x1, x2, x3.

If you calculate this expression, the values you get are 0, 1, 2, 3, -1, 0, 1, 2. You can
check, for example, that for 0 1 1 it is 0 + 2×1 + 1, which is 3, and for 1 0 0 it is
-1 + 0 + 0, which is -1.

Now, because the threshold is 0.5, look at how many of these weighted sums are greater than
or equal to half and how many are less than half. The sums 1, 2, 3, 1 and 2 are all greater
than half, but 0, -1 and 0 are not; that is why for those rows you get the output 0. You can
verify that this is exactly what we wanted to implement, minterms 1, 2, 3, 6 and 7: the rows
with output 1 are 0 0 1 (which is 1), 0 1 0 (which is 2), 0 1 1 (which is 3), 1 1 0 (which
is 6) and 1 1 1 (which is 7). So, this example shows how, using a single threshold element
or gate, we can implement a fairly complex switching expression.
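The gate just described can be checked mechanically. The following Python sketch (the
function names are my own, not from the lecture) computes the weighted sum -x1 + 2x2 + x3
for all eight input combinations and confirms that comparing it against the threshold 0.5
reproduces exactly the minterms 1, 2, 3, 6 and 7:

```python
from itertools import product

def threshold_gate(inputs, weights, T):
    """Output 1 when the weighted sum of the inputs reaches the threshold."""
    return 1 if sum(w * x for w, x in zip(weights, inputs)) >= T else 0

def f(x1, x2, x3):
    return ((1 - x1) & x3) | x2            # minimized form: x1'x3 + x2

true_minterms = []
for x1, x2, x3 in product((0, 1), repeat=3):
    out = threshold_gate((x1, x2, x3), (-1, 2, 1), 0.5)
    assert out == f(x1, x2, x3)            # gate matches the minimized form
    if out:
        true_minterms.append(4 * x1 + 2 * x2 + x3)

print(true_minterms)  # [1, 2, 3, 6, 7]
```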

(Refer Slide Time: 11:22)

The next thing I want to talk about: we discussed functional completeness earlier. We said
that when we build circuits, we should be confident that the basic building blocks given to
us can be used to design and implement any circuit we want. For instance, AND, OR and NOT
are by definition functionally complete; if we have these three kinds of gates, we can
design any circuit. Similarly, we proved that NAND is functionally complete and NOR is
functionally complete; if you give us only NAND gates, we can design any circuit using only
NAND gates.

Now, here we try to show that the threshold gate is also functionally complete, so that
using threshold gates alone we can design any function we want. Let us see what our logic
is. The first thing, which we have already seen through the example just shown, is that a
threshold gate is some kind of generalization of the conventional gates; it is more
powerful, because a function that would otherwise require several conventional gates can be
implemented using a single threshold element, a single threshold gate. In that sense a
threshold gate can realize a larger class of functions.

Now, the point we want to show here, since we are claiming functional completeness, is that
any conventional gate can be realized with a threshold gate. If we can show this, then we
have proved that threshold gates are functionally complete. The way we give the proof is
through an example: we show a threshold gate with input weights -1 and -1 and a threshold of
-1.5.

You can verify that this threshold gate actually implements the NAND function; and because
NAND is functionally complete, if we can show that this threshold gate implements NAND, we
have proved that the threshold gate is functionally complete.

Now, let us see how this implements NAND; let us quickly draw the truth table again. x1 and
x2 are the inputs, so there are four combinations: 0 0, 0 1, 1 0 and 1 1. The weighted sum
is simply -x1 - x2 (the weights are -1 and -1). If you calculate this weighted sum for the
four combinations, you get 0, -1, -1 and -2.

So, what will be the output y? The threshold here is -1.5; if the weighted sum is greater
than or equal to T, the output is 1. The first three sums are greater than -1.5, while -2 is
less than -1.5. So, for the first three combinations the output is 1 and for the last it is
0; this is nothing but the NAND function. We have shown that a threshold gate can realize a
NAND gate; therefore, threshold gates are functionally complete.
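The same kind of check works here; a small Python sketch confirming that weights (-1, -1)
with threshold -1.5 reproduce the NAND truth table:

```python
def threshold_gate(inputs, weights, T):
    """Output 1 when the weighted input sum reaches the threshold T."""
    return 1 if sum(w * x for w, x in zip(weights, inputs)) >= T else 0

# Weighted sums for (x1, x2) = 00, 01, 10, 11 are 0, -1, -1, -2; only -2
# falls below T = -1.5, so only the input 1 1 gives output 0: that is NAND.
for x1 in (0, 1):
    for x2 in (0, 1):
        assert threshold_gate((x1, x2), (-1, -1), -1.5) == 1 - (x1 & x2)
print("weights (-1, -1) with T = -1.5 realize NAND")
```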

(Refer Slide Time: 15:38)

Now, the next question we ask is: can we implement any arbitrary function using a single
threshold gate? Earlier we saw that a slightly complex function like x1'x3 + x2 can be
implemented with a single gate; the question now is whether, given an arbitrary function, we
can always implement it with a single threshold gate.

Let us try to answer this question. The answer is no; not all functions can be implemented.
Now, how do we prove this? We give a counterexample: we show an example logic function and
give a justification for why it cannot be implemented using a single threshold gate.

So, the first thing, as I said, is that the answer is no; you cannot implement an arbitrary
function with a single threshold gate. Before taking up the example, let me make sure you
understand once more how a threshold gate works. Say the inputs are x1, x2, x3 and x4, with
the weights W1, W2, W3 and W4.

Now, when you calculate the weighted sum, if some particular input, let us say x1, is 0,
then the component x1·W1 contributes nothing to the weighted sum; it becomes 0. So, only the
inputs that are 1 contribute terms. For example, if x1 is 1 and x3 is 1 and the others are
0, then the weighted sum is just x1·W1 + x3·W3; x2 and x4 do not appear, because they are 0.

(Refer Slide Time: 17:55)

Now take the example function whose product terms are x1x2 and x3x4; the output must be 1
for both product terms. If x1 and x2 are both 1, then x3 and x4 can be anything (they are
actually don't cares) and the first product term is already 1. But because we are talking
about a threshold gate, we are interested in the minimum weighted sum for which the output
must still be 1, and the minimum happens when those don't cares are 0 0. So, let us take the
0 0 part: the minterm x1 x2 x3' x4'. Similarly, for the product term x3x4 we take
x1' x2' x3 x4. Let us take these two minterms.

(Refer Slide Time: 18:49)

If we take these two minterms, the output is supposed to be 1. What does this mean in terms
of the weighted sum? For the first minterm the sum is W1·x1 + W2·x2, because x3 and x4 are 0
and contribute nothing; and since x1 and x2 are both 1, it is just W1 + W2. In order for the
output to be 1, this must be greater than or equal to the threshold: W1 + W2 ≥ T.

Similarly, for the second minterm x3 and x4 are 1 1, so W3 + W4 must be greater than or
equal to T, because the condition W3·x3 + W4·x4 ≥ T with x3 = x4 = 1 becomes W3 + W4 ≥ T.
Adding the two inequalities, we get W1 + W2 + W3 + W4 ≥ 2T.

(Refer Slide Time: 19:55)

Now, let us look at the false minterms: for which minterms is the output 0? Just the
reverse: the output is 1 if either x1x2 is 1 or x3x4 is 1, so the output is 0 whenever at
least one of x1, x2 is 0 and at least one of x3, x4 is 0.

Let us look at two such cases: in one, x1 is 0 and x3 is 0 (with x2 and x4 equal to 1); in
the other, x2 is 0 and x4 is 0 (with x1 and x3 equal to 1). You can take the other cases as
well and make a similar justification. For these cases the output is supposed to be 0, so
the weighted sum must be less than the threshold. In the first case x2 and x4 are 1, so
W2 + W4 must be less than T; in the second case x1 and x3 are 1, so W1 + W3 must be less
than T.

Adding these up, their sum is less than 2T. So we have arrived at a contradiction: on one
side we are saying that the sum of the weights must be greater than or equal to 2T, and on
the other side that it must be less than 2T. The conclusion is that we cannot find values of
the weights W1, W2, W3, W4 such that all these conditions are satisfied simultaneously;
hence this function cannot be implemented using a single threshold gate. We have exhibited a
counterexample, so not every function is a threshold function.
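The inequality argument above is the actual proof; as supporting evidence, a brute-force
Python search (my own illustration, not from the lecture) over small integer weights and
half-integer thresholds finds no single-gate realization of f = x1x2 + x3x4:

```python
from itertools import product

def f(x):                                   # f = x1x2 + x3x4
    return (x[0] & x[1]) | (x[2] & x[3])

def realizes(weights, T):
    """Does a single threshold gate with these weights/threshold compute f?"""
    return all((sum(w * v for w, v in zip(weights, x)) >= T) == bool(f(x))
               for x in product((0, 1), repeat=4))

# Search small integer weights and half-integer thresholds; the proof above
# rules out *all* real weights, so this search must come up empty.
found = any(realizes(w, t / 2)
            for w in product(range(-3, 4), repeat=4)
            for t in range(-6, 7))
print(found)  # False
```

Weights can be scaled without changing the gate's behavior, so a search over a modest
integer range already covers a wide family of candidate realizations.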

(Refer Slide Time: 21:49)

Let us take another example, showing a function that requires more than one threshold gate
for its implementation. Here I am just showing the final solution, not going through the
steps. This is a function of six variables; using conventional logic we would require four
gates to implement it, 2 AND gates and 2 OR gates.

What we are showing here is that using threshold logic, two threshold gates suffice to
implement the same function. You can verify that it works; I leave that as an exercise for
you. For the first gate the inputs are x1, x2 and x3, the weights are 1, 1 and 2, and the
threshold is 2. For the second gate the inputs are x4, x5, the output of the first gate, and
x6; the weights are 1, 1, 1 and 3, and the threshold is 3. The function f that you get
corresponds to the given function, as you can check. So, this is one function which can be
implemented using two threshold gates, and this is an example realization.
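The transcript does not spell out the target expression (it is on the slide), but working
backward from the given weights, the two-gate network computes f = (x1x2 + x3)x4x5 + x6,
which does need 2 AND gates and 2 OR gates conventionally. A hedged Python check of this
reading:

```python
from itertools import product

def tg(inputs, weights, T):
    return 1 if sum(w * x for w, x in zip(weights, inputs)) >= T else 0

def network(x1, x2, x3, x4, x5, x6):
    g1 = tg((x1, x2, x3), (1, 1, 2), 2)            # first threshold gate
    return tg((x4, x5, g1, x6), (1, 1, 1, 3), 3)   # second gate, g1 feeds in

# Conjectured target function (my reading of the slide, not quoted in the
# transcript): conventionally 2 AND gates and 2 OR gates.
def f(x1, x2, x3, x4, x5, x6):
    return (((x1 & x2) | x3) & x4 & x5) | x6

assert all(network(*x) == f(*x) for x in product((0, 1), repeat=6))
print("two threshold gates realize (x1.x2 + x3).x4.x5 + x6")
```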

(Refer Slide Time: 23:34)

So, the basic design problem for threshold gates is this: given an arbitrary switching
function, say of n variables, determine whether it can be realized by a single threshold
gate, a single threshold element, and if it can, what the weights and the threshold value
should be. So, the design problem for a threshold element or gate is to determine, or find,
the values of the weights and the threshold; once we have done this, the synthesis problem
is solved.

If a function can be implemented using a single threshold element, we say that it is a
threshold function. So, a threshold function is a function which can be implemented by a
single threshold gate like this.

We shall take an example to show how we can systematically check for this and how to
determine the weights. We construct the truth table, and from every row of the truth table
we obtain an inequality constraint; we shall see how. So, we will have 2^n such constraints,
because there are 2^n rows in the truth table, and we will have to solve those 2^n
inequalities at the end. This is the basic idea.

(Refer Slide Time: 25:15)

Let us take an example to work this out. We consider this function, and for it we have
constructed the truth table, showing the inputs x1, x2, x3, the output f, and also the
decimal equivalent of the inputs, 0 through 7. Now, suppose I claim that I can build a
threshold gate to implement this function: the inputs will be x1, x2 and x3, the weights
will be W1, W2 and W3, there will be a threshold T, and the output will be f.

For the input 0 0 0 the weighted sum is 0; because the output is 1, the weighted sum must be
greater than or equal to T, giving 0 ≥ T. Similarly, for 0 0 1 the weighted sum is just W3
(only x3 is 1), so W3 ≥ T. For row number 2, 0 1 0, only x2 is 1 and the output is 0, so
W2 < T. For 0 1 1 both x2 and x3 are 1 and the output is 1, so W2 + W3 ≥ T. For 1 0 0 the
output is 0, so W1 < T; for 1 0 1, W1 + W3 < T; for 1 1 0, W1 + W2 < T; and for 1 1 1 the
output is again 0, so W1 + W2 + W3 < T.

So, we have got 8 inequality constraints directly from the truth table, and from these we
have to solve for W1, W2, W3 and T. There are systematic ways of solving such inequalities,
but let me show intuitively how to go about it. The row for D = 0 says 0 ≥ T, so the first
conclusion is that T must be negative. Look at the row D = 2: it says W2 < T, and because T
is negative, W2 must also be negative, and less than T. Similarly, the row D = 4 says
W1 < T; because T is negative, W1 must also be negative.

Those you can see directly. For rows 3 and 5: row 3 says W2 + W3 ≥ T and row 5 says
W1 + W3 < T; comparing the two, W3 cancels out and you can deduce W2 > W1. And lastly, from
the row D = 1, W3 ≥ T is another condition.

Combining these, the conditions to be satisfied are: W3 ≥ T, T > W2, T > W1 and W2 > W1;
that is, W3 ≥ T > W2 > W1, with T negative.

You can make any arbitrary choice that satisfies these conditions (and of course T must be
negative). There are infinitely many possible solutions; one solution I am showing here is
W1 = -2, W2 = -1, W3 = +1 and T = -0.5. You can see all the conditions are satisfied:
W3 ≥ T, T > W2, and W2 > W1. So, this is how, given a function (given a truth table), if it
is a threshold function, you can deduce the values of the weights and the threshold.
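The eight inequalities quoted above come from a function whose true minterms, read off from
the rows that were forced to 1, are 0, 1 and 3. A short Python sketch re-deriving each row's
inequality and verifying the lecture's solution W1 = -2, W2 = -1, W3 = +1, T = -0.5:

```python
from itertools import product

# True minterms read off the truth table on the slide: f = sum m(0, 1, 3).
TRUE_MINTERMS = {0, 1, 3}
W, T = (-2, -1, 1), -0.5              # the solution found in the lecture

for x1, x2, x3 in product((0, 1), repeat=3):
    s = W[0] * x1 + W[1] * x2 + W[2] * x3      # weighted sum for this row
    wanted = (4 * x1 + 2 * x2 + x3) in TRUE_MINTERMS
    assert (s >= T) == wanted                  # this row's inequality holds
print("all 8 inequalities hold for W = (-2, -1, 1), T = -0.5")
```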

(Refer Slide Time: 29:38)

Now, we shall look at a couple of properties of threshold gates, or threshold functions. The
first property: any threshold gate, as you know, is characterized by the weights of its
inputs and by the threshold; taken together, these are called the weight-threshold vector.
So, any threshold gate is characterized by its weight-threshold vector.

Consider a function f which can be realized by the weight-threshold vector
{W1, W2, ..., Wn; T}. Take any particular variable xj; its corresponding weight is Wj.

If we complement xj, that is, replace xj in the function by xj', then the resulting function
can be implemented simply by negating that weight (Wj becomes -Wj) and changing the
threshold from T to T - Wj. This can be proved. So, given a realization of a function, if
you want to realize the same function with one of its inputs complemented, you can just
adjust the weight and threshold like this. This is one result I wanted to show.
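This property is easy to verify numerically. Using the earlier gate for x1'x3 + x2 (weights
-1, 2, 1 and T = 0.5), complementing x1 should need the vector (1, 2, 1; 1.5). The following
sketch (the variable names are my own) checks that the adjusted gate applied to x equals the
original gate applied to x with x1 flipped:

```python
from itertools import product

def tg(x, w, T):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) >= T else 0

# Earlier gate: W = (-1, 2, 1), T = 0.5 realizes f = x1'x3 + x2.
# Complementing x1 (index j = 0) gives W' = (+1, 2, 1), T' = T - W1 = 1.5.
W, T, j = (-1, 2, 1), 0.5, 0
W_adj = tuple(-w if i == j else w for i, w in enumerate(W))
T_adj = T - W[j]

for x in product((0, 1), repeat=3):
    x_flipped = tuple(1 - v if i == j else v for i, v in enumerate(x))
    # adjusted gate on x  ==  original gate with x1 complemented
    assert tg(x, W_adj, T_adj) == tg(x_flipped, W, T)
print(W_adj, T_adj)  # (1, 2, 1) 1.5
```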

(Refer Slide Time: 30:55)

And another result: let f be a threshold function with the corresponding weight-threshold
vector {W1, W2, ..., Wn; T}.

Now, suppose we want to implement the complement of f. The result says that the complement
function can be realized directly using the weight-threshold vector in which all the weights
and the threshold are negated: {-W1, -W2, ..., -Wn; -T}. It is very easy to prove this.
Because f is a threshold function, the weighted sum ΣWi·xi must be greater than or equal to
T whenever f is 1, and it must be less than T whenever f is 0. Now, multiply both sides of
these inequalities by -1: ΣWi·xi becomes Σ(-Wi)·xi, T becomes -T, and 'greater than or equal
to' becomes 'less than or equal to' while 'less than' becomes 'greater than'. So, if you
make the weights and the threshold negative, the inequalities get reversed.

Where the output was 1 earlier, it now becomes 0, and where it was 0 it becomes 1; so this
gate realizes exactly the complement function. We have seen a few such properties; there are
many more properties of threshold functions, and we are not going into their details here.
If any of you are interested, there is some very nice literature, and there are books,
available on threshold logic functions that you can go through. I just wanted to give you a
brief overview of threshold logic and threshold gates and how they can be used to implement
switching functions.
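One caveat worth noting: negating the inequality turns '≥ T' into '≤ -T' rather than
'≥ -T', so the rule as stated works when no weighted sum lands exactly on T, which is easy
to guarantee (for example with integer weights and a non-integer threshold, as used
throughout). A quick check on the NAND gate from before, whose negated vector (1, 1; 1.5)
should realize the complement, AND:

```python
from itertools import product

def tg(x, w, T):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) >= T else 0

# NAND gate: W = (-1, -1), T = -1.5.  Negating everything gives (1, 1; 1.5).
# No weighted sum (0, -1 or -2) ever equals T exactly, so the rule applies.
for x in product((0, 1), repeat=2):
    assert tg(x, (1, 1), 1.5) == 1 - tg(x, (-1, -1), -1.5)
print("negated weight-threshold vector gives the complement (AND)")
```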

Thank you.

Switching Circuits and Logic Design
Prof. Indranil Sengupta
Department of Computer Science and Engineering
Indian Institute of Technology, Kharagpur

Lecture - 31
Latches and Flip-Flops (Part I)

We now start our discussion on sequential circuits. Let me try to explain what we are going
to talk about in the next few lectures. So far, the logic circuits, the design and the
optimization that we have talked about were basically concerned with combinational circuits.
If you recall, in a combinational circuit the outputs depend only on the inputs that we
apply at that particular point in time.

In contrast, in a sequential circuit the output also depends on some kind of previous
history, on what inputs were applied earlier; we shall be looking at examples later. So, for
this kind of sequential circuit we need to store, or memorize, some information. The topic
of our discussion in this lecture is latches and flip-flops, the first part of it. Latches
and flip-flops essentially constitute the basic building blocks using which we can store
information; the information in this case is bits.

(Refer Slide Time: 01:45)

We want to store 0s and 1s; so, let us see. As I have just mentioned, for a sequential
circuit the outputs depend not only on the inputs that we have applied, but also on some
past history of the system; which in other words means that the system has to memorize which
state it is in. Let me take a very simple example to illustrate what we are trying to say.

Imagine that you have built an automated system whereby whenever someone enters the room,
the light turns on automatically. It is not a very big thing: you can have a sensor which
gives a pulse (a one) whenever a person entering interrupts a light beam, and there will be
a circuit which turns on the switch of the lamp whenever that signal comes.

Now, what you want is that whenever that person comes out of the room, the lamp should
automatically be turned off. But there can be multiple persons in the room; so, the system
should remember, or memorize, how many persons have entered the room so far. Suppose 10
persons have entered; unless and until all 10 persons come out of the room, the lamp should
not be switched off.

So, for a circuit like this, the system has to remember how many persons are currently in
the room, and that count constitutes the state of the system. This is the basic idea behind
the design of sequential circuits. There are 2 types of sequential circuits, which we shall
talk about later: synchronous and asynchronous; mostly we shall be discussing synchronous
sequential circuits.

So, in order that the circuit can memorize its state, we have to have some basic memory
elements, called latches and flip-flops. These latches and flip-flops will be our basic
memory elements, using which we can store the state of the system. They come in various
types; we shall be discussing the various types and how they are designed in the subsequent
discussions.

(Refer Slide Time: 04:37)

Now, what we are trying to see is how to design a circuit in which we can store some
information: the value of a bit, whether it is 0 or 1. Storing a bit is the most basic form
of information storage; with multiple bits we can store a word. So, let us see what the
basic requirement is for storing a bit electronically; what do we mean by storing, and where
do we store?

When you talk about a device like a magnetic disc, there we have an idea of storing
something magnetically: we create tiny magnets on the surface of the disc, and which side is
the north pole and which side is the south pole determines whether we are storing a 0 or a
1. But in an electronic circuit, how do we store? Let us try to understand that.

What we are saying here is that a cascade of 2 NOT gates (inverters) with feedback
constitutes the most basic form of storage; it gives a stable state of the system. What do
we mean by that? Suppose we have one inverter, and another inverter; the 2 NOT gates are
connected in cascade with feedback.

Now, in such a circuit, assume that somehow this input is at logic 0; then the output of the
first NOT gate will be at logic 1, the output of the second NOT gate will again be at logic
0, and that same 0 is fed back to the input. You see, this is a stable state of the system.

Whenever we set the system into this state, the values of those 0s and 1s will never change;
the circuit will memorize the state as long as the power is on. Say we take the output from
this point, which is at state 0. Now, somehow, if you are able to change it, if you are able
to make this point 1, that will make the next point 0, which again makes this point 1, and
the same 1 is fed back; then we say that the output is 1.

So, this configuration, a pair of NOT gates (inverters) connected in a loop like this, is
the basic memory element, and it constitutes the stable states. As you can understand,
whenever the circuit holds such a value, this 1 will remain a 1 because no other circuit is
trying to make it 0; the other NOT gate's output is also 1, and the same 1 is again applied
as an input here.

So, it constitutes a stable system whose contents are never deleted or modified unless we
change something externally. But in practice we need something more. What we saw is that a
stable system can be designed using 2 cross-coupled NOT gates; what we have not talked about
so far is how to set the output value to 0 or 1 as per our wish, as per our requirement.
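The inverter loop described above can be sketched in a couple of lines; either initial value
simply reproduces itself around the loop, which is exactly what "stable state" means:

```python
def settle(a, steps=4):
    """Run the two-inverter feedback loop a few times from initial value a."""
    b = 1 - a
    for _ in range(steps):
        b = 1 - a                # first NOT gate
        a = 1 - b                # second NOT gate, fed back to the start
    return a, b

# Either stored value is a stable state: the loop reproduces it forever.
print(settle(0))  # (0, 1)
print(settle(1))  # (1, 0)
```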

So, we also need a circuit such that from outside we can set the circuit's state to either 0
or 1 as needed. For that purpose we need some additional circuitry; and depending on what
kind of additional circuitry is used and what the exact functionality is, we can distinguish
between various kinds of latches and flip-flops. This we shall be discussing in the
subsequent slides.

(Refer Slide Time: 09:17)

Let us look into this cross-coupled inverter pair once more, in a little more detail. In the
diagram at the top we are showing the 2 NOT gates connected in a chain. The input is Vin1;
the output of this gate is Vout1, which is applied as the input of the next gate (Vin2); and
the output of that gate, Vout2, is fed back and applied as Vin1. Now, consider a single
inverter, a single NOT gate.

If we plot the input voltage against the output voltage, the curve typically shows a
behavior like this: when the input is at 0 volts, the output is at the high level; there is
a point where the output starts to change; and beyond that point the output is more or less
at the low level. This is how an inverter works.

Now, look at this diagram: here we have plotted Vout1 on one axis and Vin1 on the other, and
the solid black line shows the characteristic of the first inverter, just as on the slide I
showed. For the second inverter, Vin and Vout are reversed: Vout of the first gate is Vin of
the second, and Vout of the second gate is Vin of the first.

So, to show the characteristic curve of the second gate, we just reverse the axes: along the
y-axis we show its input Vin2 and along the x-axis its output Vout2, which gives the similar
blue curve. We show both plots on the same graph because the output of one gate is actually
the input of the other.

What we see here is that there are 2 regions, one out here and one out here, where the
system is stable. Stable means the levels are consistent around the loop: where the input of
a gate is at logic 0, its output is at logic high, and vice versa. If Vin is high the output
is low, and if Vin is low the output is high; the two crossing points at the ends are the 2
stable states.

The middle region, however, is not stable, because there, if the input changes a little, the
output changes very sharply; if you look at the curve, in the middle part the slope is very
high. That region is considered a metastable, or unstable, state. If the circuit is in a
metastable or unstable state, it will try to move into one of the stable states as quickly
as it can.

In terms of a pictorial representation: over the input voltage range, at the 2 ends we have
the 2 stable states, and whatever is in between is the unstable or metastable region. It is
like a ball: if it sits in the metastable state, it will always try to roll down into one of
the stable states. Let us move on.

(Refer Slide Time: 13:34)

Now, let us come to what we are actually trying to discuss here: how to design the storage
elements. We first talk about the design of storage elements called latches. A latch is
nothing but a storage element, a storage device, that can store a single bit of information;
it can store either a 0 or a 1, which means it has 2 stable states, 0 and 1.

Because there are 2 stable states and it can switch from one to the other, it is sometimes
also called a bistable device. And since it switches between 2 states, it is also called a
bistable multivibrator; it "vibrates" between 2 states, something like that. So, bistable
multivibrator is another name for it.

And latches are level sensitive. Suppose I am applying some input, a 0 or a 1: when I apply
a 0 I apply a continuous 0, and when I apply a 1 I apply a continuous 1. The circuit reacts
to the voltage level, that is, to the level I am holding the input at; it does not look at
exactly when I change it from 0 to 1 or from 1 to 0.

(Refer Slide Time: 15:28)

Level sensitive means the circuit's operation depends only on the voltage levels that I
apply at the inputs. The different kinds of latches (and also flip-flops) that we shall be
talking about are: Set-Reset (S-R); Delay (D); J-K (this is also said to be short for Jack
and King, but that is not very commonly referred to, so we simply call it J-K); and T,
meaning Toggle. These are the 4 most widely used flip-flops and latches, and we shall be
discussing them.

(Refer Slide Time: 15:56)

We start with the most basic kind of latch, the set-reset or S-R latch; this is the most
fundamental of the storage elements. Let us try to understand how it works. A set-reset
(S-R) latch consists of a pair of cross-coupled NOR or NAND gates.

In this diagram I am showing cross-coupled NOR gates. Why do I call them cross-coupled?
Because there are 2 NOR gates, and the output of each is fed to an input of the other. Now,
recall what I mentioned earlier: 2 inverters connected in cascade with feedback constitute a
stable storage element. Suppose I modify each of those inverters into a 2-input gate; let us
say I make this a NOR gate and this also a NOR gate, and I apply one extra input here and
another extra input here.

One of them I call S and the other R. The purpose of S and R is to forcibly set these lines
to 0 or 1, whatever we want, from outside, which for the simple inverter chain was not
possible because there was no external input. So, this S-R latch is just an extension of
that inverter pair with feedback, where we have replaced each inverter by a 2-input gate, in
this case a NOR gate.

So, these 2 circuits are actually identical; here we have just drawn the same thing in a
slightly different way. The NOR gate with the R input has its output going to an input of
the other NOR gate, whose other input is S; the output of that gate in turn comes back to an
input of the first gate, where R is applied. These 2 circuits are the same.

Now, let us look at the functional behavior: how does an S-R latch behave? This I shall
explain in more detail later, but let us look at it at face value for now. Suppose I apply S
equal to 0 and R equal to 0; in this circuit both S and R are 0. The outputs are Q and Q̄,
one the complement of the other, and the latch will be storing some value.

Suppose Q is storing 0 and Q̄, its complement, is 1. If we apply S = 0 and R = 0: this 0
comes here, 0 and 0 makes the NOR output 1, and 1 and 0 makes the other NOR output 0; so
there is no change. Similarly, if Q is 1 and Q̄ is 0, then also there is no change; the 1
remains 1, because NOR of 0, 0 is 1 and NOR of 1, 0 is 0.

Therefore, if you apply 0 0, there is no change in the outputs; this is the first row. Now
suppose I apply 0 1, S equal to 0 and R equal to 1. For a NOR gate, applying a 1 to one of
the inputs forces the output to 0, and then 0 and 0 makes the other output 1. So, for this
case Q is 0 and Q̄ is 1; whenever I have to reset the output Q to 0, I apply S equal to 0
and R equal to 1.

Similarly, if I apply the reverse, S equal to 1 and R equal to 0, then this 1 at the input
of the other NOR forces its output to 0, and 0 and 0 makes Q equal to 1; so the output is 1.
Whenever I want to make the output 1, I apply 1 0. The last row I shall explain a little
later: it says that if I apply 1 1, this is an invalid combination.

You see, our basic purpose is fulfilled: we have been able to set the output to 0, and we have been able to set the output to 1. And if you do not want any change, we also have an input combination with which you can keep it in the same state as it was. So, we have designed a very basic storage element which can store either a 0 or a 1, and it can also remember whatever it was: if I apply 0 0 on S and R, there will be no change in the outputs, fine.
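The no-change, reset, and set behavior traced above can be checked with a small simulation. This is a behavioral sketch in Python (not from the lecture; the function names are my own): the two cross-coupled NOR gates are re-evaluated until the outputs settle.

```python
# Cross-coupled NOR S-R latch: Q = NOR(R, Qbar), Qbar = NOR(S, Q).
def nor(a, b):
    return 0 if (a or b) else 1

def sr_latch_nor(s, r, q, qbar):
    """Re-evaluate both NOR gates until the outputs stop changing."""
    for _ in range(10):
        new_q, new_qbar = nor(r, qbar), nor(s, q)
        if (new_q, new_qbar) == (q, qbar):
            break
        q, qbar = new_q, new_qbar
    return q, qbar

print(sr_latch_nor(0, 0, 0, 1))  # -> (0, 1): S R = 0 0 holds the state
print(sr_latch_nor(0, 0, 1, 0))  # -> (1, 0): no change again
print(sr_latch_nor(0, 1, 1, 0))  # -> (0, 1): S R = 0 1 resets Q to 0
print(sr_latch_nor(1, 0, 0, 1))  # -> (1, 0): S R = 1 0 sets Q to 1
```

With S = R = 0 the latch simply reproduces whatever state it was given, which is exactly the "no change" row of the state table.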

(Refer Slide Time: 21:30)

Now, this was the circuit diagram using NOR gates; using NAND gates also you can design a very similar circuit. Before that, let me tell you just one thing: as I mentioned, for a combinational circuit, when we talked about the input output behaviour in the form of a table we called it a truth table; but for a sequential circuit, such a table is referred to as a state table.

Because it also captures the state of the system, not only the input output behavior; so, this table that I was showing here is actually a state table, right. Well, here for the NAND version, the only difference is that we are applying S̄ here and R̄ here; the NOT of S and R. Now you can verify the operation. Suppose S is 0 and R is 0, the first row, which means S̄ is 1 and R̄ is 1. Say the output Q was 0 and Q̄ was 1; then 1 and 0 at one NAND gate keep its output at 1, and 1 and 1 at the other keep its output at 0; so, same state, no change.

Similarly, for Q equal to 1 and Q̄ equal to 0, if I apply S 0, R 0 there will be no change, right. Now if I apply S equal to 0, R equal to 1: S equal to 0 means S̄ is 1, and R equal to 1 means R̄ is 0. For a NAND gate, any input 0 will force the output to be 1; and then 1 and 1 will make the other output 0; so, you see, the output Q is 0. Similarly, if we apply the reverse, S equal to 1 and R equal to 0, this 0 will make Q become 1, and this 1 and 1 will make Q̄ become 0; so the output is 1, right. So, now let us see why the fourth combination is considered to be invalid.

(Refer Slide Time: 23:54)

So, let us try to explain this with respect to this NAND gate level diagram. Now what you have seen so far is that for the S equal to 0 and R equal to 0 combination, the behavior of the circuit is that there is no change, right. And we are saying that S equal to 1 and R equal to 1 is an invalid combination; you should not apply it. Now let us see what will happen if I apply S equal to 1, R equal to 1 here. S equal to 1 means S̄ is 0, and R equal to 1 means R̄ is 0.

So, because both the gates have a 0 at their inputs, both the outputs will be forced to become 1. Now, one way of arguing is: well, I am calling these outputs Q and Q̄, and because both of them are 1, that is inconsistent; that is why this combination is invalid. But I can argue in a different way; suppose I have designed the circuit in such a way that only Q is available to you, and Q̄ is not available.

Then I can say that if I apply S equal to 1, R equal to 1, Q will always be 1, as the circuit shows. So, there is no ambiguity here; so, where is the ambiguity coming in, or why are we calling it invalid? Well, we are calling it invalid because, you see, the S equal to 0, R equal to 0 combination is supposed to be a combination which will not cause any change. So, what is the meaning of S equal to 0, R equal to 0? It means I have applied a 1 here and a 1 here, since the NAND latch takes S̄ and R̄.

So, my circuit state was 1 1. Now if I apply these inputs, you see, each gate has 1 and 1 at its inputs, so both outputs will become 0. Now, if both of these gates are exactly of the same speed, then both outputs will change to 0 at the same time. Then each gate again sees a 1 and a 0, so both outputs change back to 1 at the same time, and this goes on oscillating.

But in reality, two gates are never of exactly the same speed; one will be slightly slower than the other because of fabrication differences and differences in characteristics. Suppose this gate is faster and this gate is slower. The outputs were 1 1 earlier; now the 1 and 1 inputs will cause the faster gate's output to become 0 first, and this 0 will be fed back to hold the other output at 1.

So, now we have a stable state 0 and 1; but if the other gate were faster, the reverse would have happened: we would get 0 here and 1 here. So, the confusion is this: after applying S R as 1 1, if we then apply S R as 0 0, you cannot definitely say whether the output will become 0 or 1; it will depend on the relative speeds of the 2 gates. This phenomenon is called a race condition, ok.
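This dependence on relative gate speed can be mimicked by choosing which NAND gate is evaluated first. A rough Python sketch of the idea (my own illustration; a real race is an analog timing effect, and this only models the evaluation order):

```python
def nand(a, b):
    return 0 if (a and b) else 1

def release_after_11(faster):
    """After S = R = 1 both NAND outputs sit at 1 1. Now apply S = R = 0
    (i.e. Sbar = Rbar = 1) and let the faster gate switch first; the
    slower gate then sees the result through the feedback."""
    q, qbar = 1, 1
    sbar = rbar = 1
    if faster == "q":
        q = nand(sbar, qbar)      # faster gate flips to 0 first
        qbar = nand(rbar, q)      # feedback holds the other at 1
    else:
        qbar = nand(rbar, q)
        q = nand(sbar, qbar)
    return q, qbar

print(release_after_11("q"))     # -> (0, 1)
print(release_after_11("qbar"))  # -> (1, 0): a different stable state
```

The same input sequence lands in two different stable states depending only on which gate is faster, which is exactly the ambiguity described above.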

(Refer Slide Time: 27:31)

So, this is exactly what I explained; a race condition is a scenario where the final result depends on the relative speeds of the gates. So, I said, if I apply S R as 1 1 and then apply S R as 0 0, you cannot definitely predict what the output will be; it can be either Q 0, Q̄ 1 or Q 1, Q̄ 0, depending on the speeds.

(Refer Slide Time: 27:54)

Now, one more thing: we want to extend the design of the S-R latch to a gated latch. So, you see, here we have made a very small change; we have added an enable input.

We have added this third input E and added 2 NAND gates here in front of the basic S-R latch (the NAND implementation with S̄ and R̄ that was already there); and we call these inputs S and R. We say that when E equal to 1 the latch is active, and when E equal to 0 the latch is not active. So, now the state table will look like this: when E equal to 0, then irrespective of what you are applying on S and R, there will be no change; NC means No Change.

The next 4 rows indicate the case when the latch is active, E is 1: if S is 0 and R is 0, just like before there is no change; 0 1 will cause the output to become 0; 1 0 will cause the output to become 1; and 1 1 is an invalid combination. Now let us work out one combination. Let us say enable is 1 and take the fourth row, S is 1, R is 0. These are NAND gates: 1 and 1 will make one gating output 0, while 1 and 0 will make the other gating output 1; this 0 will then make this output 1, and 1 and 1 at the last gate will make the other output 0. So, the output Q will be 1, right.

So, this circuit works perfectly, and using the enable input you can either activate or deactivate it by setting E to 1 or 0.
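The worked combination above (E = 1, S = 1, R = 0 giving Q = 1) can be traced through the four-NAND structure. Below is a hypothetical Python model of the slide's circuit (gate naming and structure are my own):

```python
def nand(a, b):
    return 0 if (a and b) else 1

def gated_sr_nand(e, s, r, q, qbar):
    """Four-NAND gated S-R latch: two input NANDs gate S and R with E,
    feeding the cross-coupled NAND pair. Iterates until stable."""
    for _ in range(10):
        sb, rb = nand(s, e), nand(r, e)       # gating stage
        nq, nqbar = nand(sb, qbar), nand(rb, q)
        if (nq, nqbar) == (q, qbar):
            break
        q, qbar = nq, nqbar
    return q, qbar

print(gated_sr_nand(0, 1, 0, 0, 1))  # E=0: inputs ignored -> (0, 1)
print(gated_sr_nand(1, 1, 0, 0, 1))  # E=1, S=1, R=0 -> (1, 0): set
print(gated_sr_nand(1, 0, 1, 1, 0))  # E=1, S=0, R=1 -> (0, 1): reset
```

When E = 0 both gating NANDs output 1, which is the "no change" input of the inner S̄ R̄ latch, so S and R are indeed ignored.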

(Refer Slide Time: 29:48)

Now, you can have an alternate design: instead of the NAND latch, you can have a NOR latch. The only difference is that in front of the cross-coupled NOR gates you need AND gates for the gating, instead of the NAND gates used earlier; here you will have AND gates.

So, I leave it as an exercise for you to verify that this circuit also works as a gated S-R latch and that the same state table is satisfied here, ok; these are just 2 alternate designs.

(Refer Slide Time: 30:22)

Now, let us come to another kind of latch, a much simpler type, which is called a D-latch. What is a D-latch? As compared to an S-R latch which has 2 inputs S and R, a D-latch has a single input called D. And basically, when you enable the latch, the value you are applying at D will get stored inside the latch; very simple. This is how you can design a D-latch; you see, the second design is easier.

The first design which you see here is just an S-R latch, where this is your S input and this is your R input. You take an inverter: D you connect directly to S, and D̄ you connect to R; you get a D-latch. Well, if you start with the NAND based design, then your design can be like this: you can save one inverter here, which means one gate less is required. When enable is 1, the output of this gate becomes D̄, and this D̄ can be fed here and connected directly here.

So, these are 2 alternate designs of a D-latch. And how does a D-latch behave? If the enable is 0, irrespective of D there is no change in the output. If you are enabling the latch, then if D is 0 the output will become 0, and if D is 1 the output will become 1; whatever is the input, that same value will get stored at the output. This is how the gated D-latch works.
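At the behavioral level the gated D-latch is just "Q follows D while enabled, holds otherwise". A minimal sketch of that behavior (an assumption-level model, not the gate-level circuit):

```python
def d_latch(enable, d, q):
    """Transparent D latch: while enabled, Q follows D;
    while disabled, Q holds its previous value."""
    return d if enable else q

q = 0
trace = []
for enable, d in [(1, 1), (1, 0), (0, 1), (0, 0), (1, 1)]:
    q = d_latch(enable, d, q)
    trace.append(q)
print(trace)  # -> [1, 0, 0, 0, 1]: D is ignored while enable is 0
```

Notice the two middle steps: with enable at 0 the stored 0 survives even though D changes, which is the "no change" row of the D-latch state table.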

(Refer Slide Time: 32:14)

Just as I mentioned in the diagram, if you have an S-R latch available with you, you can construct a D-latch just by using a NOT gate: D you apply directly to S, and through a NOT you apply it to R. And of course, we shall see later how any arbitrary kind of flip flop or latch can be converted into another type of flip flop.

But in this lecture we stop here; in our next lecture, we shall be looking at the other kinds of designs of latches and flip flops, what their characteristics are, and how they can be converted from one form to the other.

Thank you.

Switching Circuits and Logic Design
Prof. Indranil Sengupta
Department of Computer Science and Engineering
Indian Institute of Technology, Kharagpur

Lecture – 32
Latches and Flip-Flops (Part II)

So, in the last lecture we talked about 2 different kinds of latches, the S-R latch and the D latch; we also considered the variation of the latch with an enable input. So, we continue our discussion today in the second part of this lecture on Latches and Flip-Flops.

(Refer Slide Time: 00:40)

So, we shall be talking about clocks: the notion of a clock, and how we can modify our design of the latch to make it a flip flop which can be operated by a clock. Now let us try to understand why we need this kind of a clock and what a clock is.

See, a latch, as I mentioned in our last lecture, is level triggered; for example, in an S-R latch, whenever you apply S R as 0 1 or 1 0 and apply enable equal to 1, the output will be set to 0 or 1 accordingly; similarly for a D latch.

So, whenever the inputs are changing, the output changes immediately, after the gate delay of course. But for a flip flop we talk about something called an event; the outputs do not change immediately when the inputs change. We have the notion of time; there is something called a clock. Only when there is a clock signal are the outputs supposed to change, in response to whatever we have applied at the inputs. So, the clock gives some kind of synchronous operation to a sequential circuit. It tells you: well, the inputs I can change any time I want, but when the outputs are going to change, that will be determined by the clock.

So, let us see formally what a clock is. A clock is a periodic rectangular pulse train, just like what is shown in this diagram below. It is a repetitive pulse train: the signal goes up and down, up and down; well, it depends of course on what kind of logic system you are having. The low level can be your 0 volts; the high level can be 3 volts or 5 volts; it can be anything.

So, there are just 2 levels of voltage; and you see, whenever a clock signal comes like this, there are 2 events: one event is when the clock is going from low to high, and the other is when it is going from high to low. So, I can say that my circuit will be triggered by some edge of the clock; I call it clock triggered.

So, I can say it is triggered by the leading edge, or positive edge, of the clock, or by the falling edge, or negative edge: positive edge triggered or negative edge triggered. In contrast, latches, as I mentioned earlier, are triggered by voltage levels, and this clock concept is not there in a latch, fine.

(Refer Slide Time: 03:36)

So, with this background, we shall try to explain what so-called edge triggered flip flops are. You see, a flip flop is nothing but an extension of the latch; the same kind of logic applies. Say for an S-R latch, depending on the S R values the output will be set to 0 or 1. For an S-R flip flop, the same concept is there: you apply some S R values, but the output will change only when there is an active clock signal coming; only when there is a clock will the output be set to 0 or 1, right.

So, an edge triggered flip flop basically changes its state in synchronism with a clock signal or a clock pulse. We have already seen how a clock signal looks. Now, a clock signal is characterized by the time during which it is high, which is called the on period; the period during which it is low, called the off period; and the total sum of the on period and the off period, which is called the time period.

So, the clock going up here, coming down, and again going up: this total time duration is called the time period. And if T denotes the time period, then its reciprocal 1/T gives the frequency f; or the other way round, if f is the frequency, then 1/f will be the time period.

So, I can say, for example, if my time period T is 1 millisecond, then my frequency will be 1/T, which is 1 kilohertz; in 1 second the clock will go up and down 1000 times. And we use clocks to synchronize operations in a sequential circuit. This synchronization can happen either at the positive edges of the clock, which are the points where the clock is going from 0 to 1, or at the negative edges of the clock, where the clock is going from 1 to 0, right. So, when we design a circuit, all output changes will occur in synchronism with this clock edge, positive or negative, fine.
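The period/frequency relation can be checked with a line of arithmetic. A small sketch of the lecture's example (the 40/60 on/off split below is just an assumed illustration):

```python
# f = 1/T: a time period of 1 millisecond gives a frequency of 1 kilohertz.
T = 1e-3            # time period in seconds
f = 1.0 / T         # frequency in hertz
print(f)            # -> 1000.0, i.e. 1 kHz

# The on period and the off period together make up the time period.
on_period, off_period = 0.4e-3, 0.6e-3
assert abs((on_period + off_period) - T) < 1e-12
```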

(Refer Slide Time: 06:28)

Let us now look at an edge triggered S-R flip flop; you already know what an S-R latch is, so let me try to explain what is shown in the slide. First is the symbolic notation: how you show an S-R edge triggered flip flop in a diagram. As in an S-R latch you have S and R inputs and Q, Q̄ outputs; here, in addition, you have another input called the clock, and this is how you show a clock input, just by this angular (triangle) symbol here.

Now, if we show it like this, it indicates positive edge triggered: whenever the clock is going from 0 to 1, the output will change during that time. But if we use a small circle or bubble, as shown here, a small bubble before this triangle, it indicates negative edge triggered, where the output will change whenever the clock is going from high to low.

So, in a circuit diagram, whenever you show a flip flop, you either show this bubble or do not show it; this will indicate whether it is positive edge triggered or negative edge triggered, right. Let us assume that we have a positive edge triggered S-R flip flop; so, what will the state table be like? You see, here instead of enable, we have used a signal called clock.

What does this table say? It says the clock can be either 0 or 1, but with no leading or positive edge; it can be continuously 0 or continuously 1; no problem, there will be no change in the output. The outputs will change only when there is a positive going transition in the clock; that means, only when the clock is going from 0 to 1 will the circuit operation take place and the output change.

So, for a latch we had the enable signal: whenever enable was high, the latch was open. Here, only exactly at the point when the clock edge comes will the output be evaluated; this is the idea. So, whenever there is an edge, the rest is exactly like an S-R latch, the same behavior: 0 0 means no change, 0 1 means the output is 0, 1 0 means the output is 1, and 1 1 is invalid.

Now, from this behavior we can write down the characteristic equation of the S-R flip flop, that is, the expression for the next output. See, for a flip flop there is a notion of a clock; clock pulses are coming 1, 2, 3, 4, 5, 6; I can count how many clock pulses have come. So, we use this notation: Q(t+1), the value of the output at time t+1, indicates what the value of the output will be when the (t+1)-th clock pulse comes, just after the t-th clock pulse.

The characteristic equation says two things. First, it says S AND R must be 0, which is an indirect way of saying that S and R can never both be 1; because if S and R are both 1, then their AND will also be 1, and it says this AND must be 0. Second, it also tells what the next Q will be in terms of the present state and inputs.

You can draw a Karnaugh map and derive this expression; let me show how we come to it. In the Karnaugh map, on this side I am showing the S and R inputs, and on this side I am showing the output at time t; that means, what the output was in the previous state. Why do I need this? You see, if my S R input is 0 0, it is supposed to be no change.

So, if the previous output was 0, it will remain 0; if the previous output was 1, it will remain 1. But if my input S R is 0 1, then the output is definitely 0; and if it is 1 0, it is definitely 1. And because 1 1 is not a valid input, we can mark those entries as don't cares, right; so we have the Karnaugh map.

So, in this Karnaugh map, you see, I can make one group like this and another group like this, which together cover all the 1s. This group is nothing but S, and the other group, which covers Qt, will be R̄Qt. So, S OR R̄Qt will be the expression for the output; when the clock edge comes, the output will be evaluated according to this expression, Q(t+1) = S + R̄Qt, right.

(Refer Slide Time: 12:30)

Now, there is another thing we define for a flip flop, called the excitation table. We know we can set a flip flop to 0, and we can set it to 1.

Beyond that, we also want to see how we can change the output of a flip flop from 0 to 0, 0 to 1, 1 to 0, or 1 to 1; all 4 possible transitions. This is captured by something called an excitation table. The excitation table says: for each of the 4 possible output transitions, what are the required S R values I have to apply? Say the output was 0 and I want it to remain 0; that means no change.

So, I can either apply 0 0 or 0 1; so, I write 0 and don't care. If I want to change from 0 to 1, there is only one way: I have to set S equal to 1 and R equal to 0. Similarly, from 1 to 0, I have to set S equal to 0 and R equal to 1. And if it is 1 and remains at 1, then I can either apply 0 0 or 1 0, since 0 0 means no change; that means, don't care and 0. This is the excitation table of the S-R flip flop, right.

For any flip flop we can write down such an excitation table. Now let us look at a simple timing diagram for an S-R flip flop.

(Refer Slide Time: 14:30)

Suppose I have a clock signal; let us consider that there are 4 such clock pulses coming. And the flip flop is positive edge triggered, I am assuming; so it is triggered on this edge, and on this edge.

Now let us assume some values on S and R, and I am also showing the value of Q. Suppose initially my S value was 0 and R value was 1. So, when the first clock edge comes, it sees that S is 0 and R is 1; so, the value of Q will become 0. Earlier, Q could have been anything; I do not know what Q was earlier, so I am showing it like this, as unknown.

But after the clock edge comes, because S is 0 and R is 1, Q will definitely be 0. Now suppose S remains 0 and R also becomes 0; so, both of them are 0 0 when the next clock edge comes. Now, 0 0 means no change; so, since the output Q was 0, it will remain 0 even after this edge. Next, let us say S becomes 1 here and R remains 0.

So, now S is 1 and R is 0; so, at the third clock edge this output will become 1, because of S 1, R 0. And let us say S again becomes 0 here and R remains 0. So, 0 0 is no change; the output will again remain at 1.

So, this is a simple example, just a timing diagram with respect to the S-R flip flop: as the inputs change and as the clock edges come, this is how the output goes on changing, right.

(Refer Slide Time: 17:10)

Now, let us talk about the edge triggered version of the D flip flop; it is exactly similar to the S-R flip flop. First, the symbolic notation is exactly the same: I can show it either as positive edge triggered like this or negative edge triggered like this, with the single input D. The state table also looks similar: if there is no leading (positive) edge on the clock, there is no change.

So, when there is a clock edge, whatever is there on D, that same value gets copied to the output. Here you do not have to do any calculation; the characteristic equation is very simple: Q(t+1) is nothing but whatever has been applied to D at that point in time. So, whenever the clock edge comes, whatever the value of D was at that point will go to the output.

So, just to tell you one thing: suppose this is your clock, your D input was 1, and your Q was 0. Only when the clock edge comes will Q become 1. Q will not become 1 immediately, as soon as you make D 1; not like that. It will wait for the clock edge to come; only then will this change occur, alright.

(Refer Slide Time: 18:49)

So, in a similar way we can construct the excitation table for the D flip flop. For a D flip flop, if you want to change from 0 to 0, I have to apply 0 at the D input, and that 0 will go in; for 0 to 1 I have to apply 1, for 1 to 0 I have to apply 0, and for 1 to 1 I have to apply 1.

So, that means, whatever value I want to change to, I have to apply that same value on D, irrespective of what the output was earlier, right.

(Refer Slide Time: 19:30)

Now, let us talk about another kind of flip flop, called the J-K flip flop. J-K is the most versatile of the flip flops; versatile means it is more powerful than the other kinds: you can implement some functions with it that the other flip flops may not be able to implement directly.

Let us see what it looks like and how it works. The first thing is that there are 2 inputs, J and K, just like the S-R flip flop. There is a clock; here I am showing only the positive edge version; the negative edge version will be similar, with a bubble here. Now, let us see how the J-K flip flop behaves.

Now, in the state table, you see, we have shown it in a slightly different way: instead of just writing Q and Q̄, we have written Q(t+1) and Q̄(t+1), because we need to keep track of time. As per our notation, Qt will indicate the value of the output at the t-th time unit; that means, after the t-th clock pulse has come.

And Q(t+1) will be the output after the (t+1)-th clock pulse has come; that means, it keeps track of the time. So, I am saying what the values of Q(t+1) and Q̄(t+1) will be, depending on what Qt was in the previous state; Qt is the previous history. So, if there is no edge, it will remain at Qt and Q̄t; no change. And when there is an edge: 0 0 means no change, again Qt and Q̄t; 0 1 and 1 0 are exactly like an S-R flip flop.

So far it is exactly like an S-R flip flop; the only change is the last row. For an S-R flip flop the 1 1 combination was supposed to be invalid, but for a J-K flip flop we are not saying it is invalid. If J K is 1 1, the output value will be complemented; that means, 1 will become 0 and 0 will become 1; that is the difference. So, when the inputs are 1 1, Q(t+1) will be the same as Q̄t and Q̄(t+1) will be the same as Qt; that means, it will be the complement of the previous state; this is the only change.

This is why we say that J-K is the most versatile and most general kind of flip flop: it is an S-R flip flop plus this additional functionality which the S-R flip flop does not have. Now let us talk about the characteristic equation.

(Refer Slide Time: 22:38)

So, here we are again showing J, K and Qt in a Karnaugh map: if J K is 0 0, there is no change; that means, if the output was 0 it remains 0, and if it was 1 it remains 1. 0 1 means the output is set to 0; 1 0 means it is set to 1; and 1 1 means complement: if the output was 0 it will become 1, and if the output was 1 it will become 0. So, if you minimize this Karnaugh map, one group will be this and the other group will be this. If you write down the minimized expression, the characteristic equation will be Q(t+1) = JQ̄t + K̄Qt.

The term JQ̄t corresponds to this group, and the other group gives K̄Qt. Regarding the excitation table: suppose we want to go from 0 to 0; for a J-K flip flop you can apply either 0 0 or 0 1, just like an S-R flip flop; so, it will be 0 and don't care. If you want to go from 1 to 1, this is also like the S-R flip flop: you apply either 0 0 or 1 0.

That means don't care and 0. But the differences are here: if we want to go from 0 to 1, then you either apply 1 0, which will set it definitely, or you apply 1 1, which will complement it; the output was 0, so it will change to 1. So, it is 1 and don't care. And 1 to 0 will also be different: for 1 to 0 you either apply 0 1 or you complement with 1 1; so it is don't care and 1. These are the excitation values for the J-K flip flop.
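The J-K characteristic equation Q(t+1) = J·Q̄t + K̄·Qt can be checked against all four rows of the state table, including the 1 1 toggle row. A short Python verification (my own sketch):

```python
# Verifying Q(t+1) = J·Q'(t) + K'·Q(t) against the whole state table.
def jk_next(j, k, qt):
    return (j & (1 - qt)) | ((1 - k) & qt)

for qt in (0, 1):
    assert jk_next(0, 0, qt) == qt        # 0 0: no change
    assert jk_next(0, 1, qt) == 0         # 0 1: reset
    assert jk_next(1, 0, qt) == 1         # 1 0: set
    assert jk_next(1, 1, qt) == 1 - qt    # 1 1: toggle
print("all four rows of the J-K state table are satisfied")
```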

(Refer Slide Time: 24:56)

Now, let us look at a gate level implementation of the J-K flip flop. Here I am just showing this diagram where the clock is applied, but we have not shown any edge detection circuit; we shall discuss it later. Assume that somehow some edge detection mechanism is there, so that only when the clock edge comes does a 1 appear here.

Now let us see how it works. Let us say I have applied J 1 and K 1, and the clock, which works like an enable here, is also 1. And let us say the output Q was 0 and Q̄ was 1. You see, there is a feedback from Q̄ to here and from Q to here. So, Q̄ was 1, so this input is 1; Q is 0, so this feedback input is 0. So, 1 and 1 and 1: this NAND output will be 0; and 1, 1 and 0: this one will be 1.

So, in the cross coupled NAND pair, this 0 will make this output 1, and then this 1 and 1 will make the other output 0. So, you see, whatever the output was, it gets complemented; J 1, K 1 should complement it. I will leave it as an exercise for you to verify that the other combinations of the J-K flip flop state table also get satisfied, ok; this is the gate level implementation.

(Refer Slide Time: 26:40)

Now, there is another kind of flip flop, which you can say is a special case of the J-K flip flop; this is called the toggle or T flip flop. A T flip flop has a single input T, as this diagram shows; this is the positive edge version and this the negative edge version. And what does the T flip flop say? If T is 0, there will be no change.

If T is 1, the output will toggle; toggle means it will get complemented, a NOT operation; there is no set or reset kind of thing, only toggle. So, this is equivalent to the J equal to 1, K equal to 1 combination of the J-K flip flop: the output toggles, just that. So, you see the state table: whenever there is an active clock edge, if T is 0 there is no change; if T is 1 there is a toggle, Q(t+1) becomes Q̄t and Q̄(t+1) becomes Qt.

And the characteristic equation you can write straight away: Q(t+1) = T EXOR Qt. Because, you see, if T is 0, then 0 EXOR Qt is Qt itself, the same value; and if T is 1, then 1 EXOR anything is its NOT, so you get the complement. This is the characteristic equation of the T type flip flop.
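The T flip flop's characteristic equation Q(t+1) = T EXOR Qt can be verified over all four input/state combinations (function name is my own):

```python
# Q(t+1) = T XOR Q(t): T = 0 holds the state, T = 1 toggles it.
def t_next(t, qt):
    return t ^ qt

assert t_next(0, 0) == 0 and t_next(0, 1) == 1   # T = 0: no change
assert t_next(1, 0) == 1 and t_next(1, 1) == 0   # T = 1: toggle
print("T flip flop characteristic equation verified")
```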

(Refer Slide Time: 28:25)

And talking about the excitation table: whenever you want to go from 0 to 0, there is no change, no toggle.

So, T is 0. For 0 to 1 there is a toggle, a change, so you set T equal to 1; for 1 to 0, again a toggle, set T equal to 1; for 1 to 1, no change, set T equal to 0. So, whenever there is a change you set T equal to 1, and when there is no change you set T equal to 0. And just as I said, T is a special case of the J-K flip flop: if we simply tie J and K together and call it T, we can implement a T flip flop. Because if we apply T equal to 0, this is equivalent to J 0, K 0, which means no change; and if we apply T equal to 1, this means J 1, K 1, which means toggle the output; that is exactly what a T flip flop does, right.

So, with this we come to the end of this lecture. In the next lecture we will be looking at some other aspects of flip flop design. In particular, we did not mention one thing: how this edge triggering is implemented. I have said that the flip flop will work whenever the edge of the clock comes, but there must be some circuit which detects the edge and activates the flip flop during that time, right. So, we shall be seeing that in the next lecture.

Thank you.

Switching Circuits and Logic Design
Prof. Indranil Sengupta.
Department of Computer Science and Engineering
Indian Institute of Technology, Kharagpur

Lecture - 33
Latches and Flip-Flops (Part III)

So, in this lecture we continue with our discussion on Latches and Flip Flops; this is the third part. Now, if you recall, in our earlier 2 lectures on this topic we talked about some of the latch designs and some of the different kinds of flip flop designs. Here we shall first be talking about some additional facilities that you can have in a flip flop.

In particular, we shall see how this edge triggering can be implemented; then we shall be seeing how we can convert from one flip flop type to another in a general sense, fine.

(Refer Slide Time: 01:05)

The first thing we look at is how we can implement a flip flop with asynchronous preset and clear. What do we mean by this? See, we have seen so many different types of flip flops: S-R, D, T, J-K; four types.

Now, we have seen how, by applying the various input combinations, we can set the output to 0 or set the output to 1, and so on; for a T flip flop we can toggle it. But sometimes there is a requirement that before you start the circuit operation, you should initialize the flip flop to either 0 or 1, and that need not happen along with the clock. It can be a so-called asynchronous operation. Asynchronous operation means it does not depend on the clock; you just apply some input and the output will be set to 0 or 1. So, here we are talking about the design of a flip flop with asynchronous preset and clear features. The motivation I have already mentioned: sometimes we need to initialize the flip flop to a known state; it may be 0 or it may be 1.

So, we define a preset operation, which means setting the output to 1, and a clear operation, which means setting the output to 0; and we shall see how these can be implemented very easily. These are typically asynchronous operations, which means they do not depend on the clock; whenever you activate preset or clear, they take effect immediately. Now let us see how this works. Well, we look at one of the designs you have seen earlier, namely the cross coupled NAND gates used for designing a latch or the flip flop output stage. So, let us look at that cross coupled pair.

So, what do we see? For an S-R latch, for example, we said that we are applying S̄ here and R̄ here. Now here we are saying: let us have 2 additional inputs, one applied here and another applied here. So, we are having 2 additional inputs to these two NAND gates, which are directly generating the outputs.

Let us call this input preset bar and this input clear bar. Bar indicates that they are active low; that means, if they are set to 0, then you are activating that feature, and if set to 1, you are not activating it. So, suppose you are setting preset bar equal to 0, which means you are trying to preset this latch.

Well, if we apply 0 here, you see what will happen. These are NAND gates; the clock circuitry is before this stage (I am not showing the clock circuit; this is the last stage of the latch). Whenever you apply a 0 to a NAND gate, its output is forced to 1; so, this Q will immediately become 1, and because it is 1, through the feedback this Q̄ will become 0. So, we are presetting the flip flop, or this latch, to 1. And similarly, if you set clear bar to 0, then this Q̄ output will become 1; this 1 will be fed back, and since preset bar is not active (it is 1), 1 and 1 and 1 at the other gate will make Q become 0.

So, this is how you preset this latch to 0 or 1 by applying the preset and clear asynchronous inputs. The clock circuitry, which I am not showing, generates S̄ and R̄ in synchronism with the clock. So, when there is no clock, both S̄ and R̄ are set to 1. Under this condition, whenever you apply 0 to either preset or clear, the output is asynchronously initialized to 0 or 1; this is what asynchronous preset and clear means.
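
The behaviour just described can be sketched with a tiny fixed-point simulation of the two output NAND gates. This is a hedged behavioral model assumed from the description (the names and gate equations are mine), not a timing-accurate circuit:

```python
# Output stage of the latch: two cross-coupled NAND gates, each with an
# extra active-low asynchronous input (preset bar on the Q gate, clear bar
# on the Q-bar gate).  Gate equations assumed from the description:
#   Q     = NAND(S_bar, PRE_bar, Q_bar)
#   Q_bar = NAND(R_bar, CLR_bar, Q)

def nand(*inputs):
    return 0 if all(inputs) else 1

def latch(s_bar, r_bar, pre_bar, clr_bar, q=0, q_bar=1):
    for _ in range(4):                 # iterate the feedback until it settles
        q = nand(s_bar, pre_bar, q_bar)
        q_bar = nand(r_bar, clr_bar, q)
    return q, q_bar

# No clock activity: S_bar = R_bar = 1.  Pulling preset bar low forces
# Q = 1; pulling clear bar low forces Q = 0, asynchronously.
preset_result = latch(1, 1, 0, 1)      # presetting the latch to 1
clear_result = latch(1, 1, 1, 0)       # clearing the latch to 0
```

With both asynchronous inputs inactive (1, 1) and no clock, the model simply holds whatever state was stored.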

(Refer Slide Time: 06:22)

Now, let us look at how we can implement edge triggering; but before that, let us see why we need it. Why do we talk about edge-triggered operation? Think of a latch, say a D latch, where there is an enable input and an output. Suppose I set enable to 1 and keep it 1 for a long time; what happens is that the latch remains open for the entire duration that enable is 1.

For example, if I make enable 1 and leave it at 1 like this, and in the meantime D changes, 1 0 1 0, then Q will also go on changing as long as enable is active. This is the problem when you enable a latch for a long time: any change in the input causes a corresponding change in the output. This is unlike edge-triggered operation, where the output changes only when an edge arrives; at all other times, even if the input changes, the output will not change.

Now, this matters for applications where the flip-flop outputs drive some other circuit, that is, you connect the output as input to another circuit. If the output goes on changing, that other circuit may start doing some wrong operations, some wrong calculations, which you want to avoid. There can be several solutions; the first solution we will look at is to somehow generate a very narrow enable signal.

(Refer Slide Time: 08:43)

What I am saying is that the enable signal will be a very narrow, very thin pulse, and exactly at that point, whatever the value of D is, that value will be captured. This is one solution, but the question is how we can do this. Doing it is not difficult; a simple solution can be like this.

(Refer Slide Time: 09:04)

Let us try to understand this circuit. We are applying some kind of clock signal; let us say this is my clock signal, and I am showing 2 such pulses. Let us call this signal A, this one B, and this one C. Now, for A, there will be a small delay through this NOT gate. So, if I plot A, it will be the NOT of whatever clock you apply, but shifted by a small delay equal to the delay of this gate.

Now, you take the NAND of CK and A to generate B. What does NAND mean? Only when both inputs are 1 is the output 0; otherwise the output is 1. Now see, there is a period when both of them are 1: this period, then again this period, where both the clock and A are 1.

So, B will be a pulse like this. What will be the width of this narrow pulse? It will be equal to the delay delta of this gate, because the separation between this edge and this edge is nothing but delta. And because the pulse is negative-going, I use another NOT gate to make it positive. So, I am generating a very narrow enable-like signal from the clock signal.

Now, because it is so narrow, it will allow only one change in the output to happen, and this avoids the earlier problem where, with enable set to 1 for a long time, the output goes on changing continuously as the input changes. One more point: instead of one NOT gate we can use any odd number of NOT gates, say 3. In that case delta will be equal to the delay of the 3 NOT gates, but the waveform will be very similar.
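
A unit-delay simulation makes this easy to check. The sketch below models the inverter chain as a pure delay-plus-inversion; it is an idealized illustration only (gate delays in a real circuit are analog quantities, not integer time steps):

```python
# Unit-delay sketch of the narrow-pulse generator: the clock CK drives a
# chain of an odd number of inverters (signal A), CK and A feed a NAND
# (signal B), and a final inverter gives the enable pulse EN = NOT(B).
# Each inverter contributes one time step of delay in this model.

def pulse_generator(ck, inverter_delay=1):
    # A = inverted clock, arriving inverter_delay steps late
    a = [1] * inverter_delay + [1 - v for v in ck[:len(ck) - inverter_delay]]
    b = [0 if (c and x) else 1 for c, x in zip(ck, a)]   # B = NAND(CK, A)
    return [1 - v for v in b]                            # EN = NOT(B)

# A clock with two rising edges: each edge yields one narrow EN pulse
# whose width equals the inverter-chain delay (1 step here, 3 below).
ck = [0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1]
en1 = pulse_generator(ck, inverter_delay=1)
en3 = pulse_generator(ck, inverter_delay=3)
```

Running this shows one enable pulse per rising clock edge, with the pulse width growing from 1 step to 3 steps as the inverter chain lengthens, just as described.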

(Refer Slide Time: 12:19)

Now, one problem here is that, although we get a very narrow enable pulse, the width of this pulse is not under our control; it depends on the delay of a gate, which is circuit dependent.

If the width becomes too narrow, the circuit may not respond to such a narrow pulse, and we may have to make it a little wider; but this is difficult to predict beforehand, because when a circuit is fabricated you are never sure what the exact delay of a gate will be; there is always a plus or minus tolerance in the fabrication process.

(Refer Slide Time: 13:01)

So, a better way to obtain an edge-triggered flip-flop is a mechanism like this.

Here we show the complete diagram of a positive edge-triggered D flip-flop, where this is your D input and this is your clock input; let us try to see what is happening. Suppose the clock is 0. You see there is a cross-coupled latch on top, one in the output stage, and another here; there are 3 cross-coupled NAND latches in all. If the clock is 0, then this output will be 1 and this output will be 1, which makes this stage work as a latch: there is no change, and whatever Q and Q̄ hold is retained. If they are 0 and 1 they remain 0 and 1 (0 NAND 1 gives 1, and 1 NAND 1 gives 0), so there is no change.

(Refer Slide Time: 14:18)

Now, let us see what happens when the clock becomes 1; it has just changed from 0 to 1. Suppose at that time I have applied D = 1. When this happens, this D = 1 comes here, and the clock has become 1. What actually happens is that this cross-coupled latch stores the value 1: whatever is stored here triggers this, the edge information gets stored here, and it does not change subsequently. In the next stage, the value of D that was present at that point in time also gets stored, and depending on that, the operation of the flip-flop happens here.

So, without going into the detailed signals on all the lines, what I am saying is that there are 3 cross-coupled latch stages used here. In one of the stages we capture the edge: when the edge arrives, that latch is set to 1 immediately.

There is another D-type stage where, whenever the D value is present and the clock is 1, that value gets stored; and the third stage is the actual D flip-flop, to which the value is transferred whenever the clock is active. So, this part is the edge-detection circuit, and this part is the actual D flip-flop or D latch. As you can see, this circuit is a bit complicated, but it works well: it remembers the clock edge whenever it comes, and it does not depend on the delays of the individual gates as the previous approach did. I am not going into a detailed description of this circuit, but it is a much better and more robust design, and it is used in most implementations of edge-triggered flip-flops.
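
At the behavioral level, the net effect of those three latch stages is simply "capture D on the rising clock edge, ignore it otherwise". The sketch below models only that external behaviour, not the gate-level circuit:

```python
# Behavioral model of a positive edge-triggered D flip-flop: Q changes
# only on a 0 -> 1 clock transition, capturing D at that instant; D is
# ignored at all other times.

class EdgeTriggeredDFF:
    def __init__(self):
        self.q = 0
        self._prev_clk = 0

    def sample(self, d, clk):
        if self._prev_clk == 0 and clk == 1:   # rising edge detected
            self.q = d                         # capture D at the edge
        self._prev_clk = clk
        return self.q

ff = EdgeTriggeredDFF()
trace = [ff.sample(d, clk) for d, clk in
         [(1, 0), (1, 1), (0, 1), (0, 0), (1, 0), (0, 1)]]
# Q picks up D only at the two rising edges; the change of D while the
# clock stays high has no effect on the output.
```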

(Refer Slide Time: 16:44)

Now, there is another kind of flip-flop in use; as opposed to edge triggered, this is called the master-slave flip-flop. As the name implies, there is something called a master and something called a slave, and there are 2 latch stages: one latch we call the master latch, and the other the slave latch.

The inputs are applied to the master latch, the output of the master goes to the input of the slave, and the final output of the slave is your circuit output. The clock or enable is applied directly to the master, and the same signal, after a NOT gate, enables the slave. This means that you never enable both the master and the slave at the same time.

Say it is a D flip-flop and you are applying a D. Whenever this control signal is active, the master is active first, so the value of D is transferred to the output of the master; when C becomes inactive, C' becomes active, the slave becomes active, and the output of the master is transferred to the slave. So, even if D changes multiple times in this process, there is no harm: whatever value is at the output of the master when the slave opens is the value that finally goes to the output of the slave. It will be a clean transition, with none of the multiple transitions that were possible with a single latch.
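
This two-stage behaviour can be sketched with two level-sensitive D latches; the class names are mine, and the model is behavioral only:

```python
# Sketch of a master-slave D flip-flop from two level-sensitive D latches:
# the master is transparent while C = 1 and the slave while C = 0, so the
# two are never open at the same time and the output sees one clean change.

class DLatch:
    def __init__(self):
        self.q = 0

    def step(self, d, enable):
        if enable:              # transparent: output follows D
            self.q = d
        return self.q           # otherwise: output holds its value

class MasterSlaveDFF:
    def __init__(self):
        self.master = DLatch()
        self.slave = DLatch()

    def step(self, d, c):
        m = self.master.step(d, c)          # master enabled when C = 1
        return self.slave.step(m, 1 - c)    # slave enabled when C = 0

ff = MasterSlaveDFF()
outs = [ff.step(1, 1), ff.step(0, 1), ff.step(1, 1),  # D glitches, C high
        ff.step(0, 0)]                                # C falls: slave opens
# While C is high the output stays put; when C falls, the last value held
# by the master (1) is transferred to the output in a single transition.
```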

So, as I mentioned, this aims to address the problem in latches described earlier, where the enable signal may be active for a long time and the output may change multiple times. But there is a problem that can occur with master-slave flip-flops in certain cases, which we are not discussing here: the so-called 0-catching and 1-catching problem, where for certain input combinations and certain applications the output of the master might be reset to 0 or set to 1 depending on certain conditions.

For many applications this is not an issue, and you can use a master-slave flip-flop without any problem; but for those applications where it is an issue, you have to use edge-triggered versions and not master-slave flip-flops.

So, the point to note is that master-slave and edge-triggered are 2 alternatives; you do not use both together. You use either edge-triggered flip-flops or master-slave flip-flops, and both are actually used in practice.

(Refer Slide Time: 20:03)

So, I am showing some examples of master-slave flip-flop designs; this is an SR master-slave flip-flop. You see there are 2 stages, master and slave, both SR flip-flops, and the clock or enable, as I said, is applied directly to the master and, after an inversion, to the slave; it is the same architecture.

So, when the clock is 1, the master is enabled, and the output Q is set according to whatever you apply at the input; when the clock becomes 0, then, after inversion, the slave is enabled, and whatever was here gets transferred here. This is how it works.

(Refer Slide Time: 21:00)

D is similar; I have already shown it earlier. Again there are a master and a slave, D comes here, and clock and clock-bar are connected the same way.

(Refer Slide Time: 21:13)

And this is JK. JK is also similar: there is one stage here and another stage, master and slave. The only point to note is that the feedback comes from the final output of the slave back to the first gates: from Q̄ to the J input gate, and from Q to the K input gate. This is your master latch and this is your slave latch; the way they work is very similar.

(Refer Slide Time: 21:55)

Now, let us look into some ways of converting one flip-flop type to another, through a number of illustrative examples. There may be many designs where you are given one particular type of flip-flop, say a JK flip-flop, but you actually require a D-type flip-flop; you should know how to make the conversion. So, let us see this through some examples, one by one. First, we look at building a JK flip-flop using an SR flip-flop.

So, we are given an SR flip-flop; how do we build a JK flip-flop? You have an SR flip-flop with you, this is given, and of course the clock is there; I am showing the clock here. What do you do? You use 2 AND gates and apply the J and K inputs to them; then from Q̄ you take a feedback to here, and from Q you take a feedback to here. So, you get a JK flip-flop. This is how you build a JK flip-flop using an SR flip-flop. The essential idea is this: you take a feedback from Q̄ to the gate that feeds J, and from Q to the gate that feeds K.

I suggest you work out the behaviour for the different input combinations and check that this actually works as a JK flip-flop; I leave it as an exercise for you. So, this was JK using SR.
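
The exercise can be checked mechanically with a behavioral sketch of the next-state functions (function names are mine):

```python
# Conversion sketch (behavioral): JK from a clocked SR flip-flop with two
# AND gates and cross feedback, S = J AND Q', R = K AND Q.

def sr_next(s, r, q):
    # next state of an SR flip-flop; S = R = 1 never occurs below,
    # because S = 1 needs Q = 0 while R = 1 needs Q = 1
    if s and not r:
        return 1
    if r and not s:
        return 0
    return q

def jk_next(j, k, q):
    s = j & (1 - q)     # feedback from Q-bar gates the J input
    r = k & q           # feedback from Q gates the K input
    return sr_next(s, r, q)
```

Checking all eight (J, K, Q) combinations confirms the JK truth table: hold, reset, set, and toggle.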

(Refer Slide Time: 24:43)

Next, let us look at a simpler problem: suppose we want to build a D flip-flop using an SR flip-flop. Again, suppose we have an SR flip-flop with Q and Q̄. This is very simple: you connect D directly to S, and through an inverter (a NOT gate) you connect it to R. The idea is: when you apply D = 0, that is, you are trying to store 0, you make S = 0 and R = 1, which makes Q = 0; and when D = 1, you want to store 1, so you make S = 1 and R = 0, which makes the output equal to 1. Fine.
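
The same wiring expressed behaviorally (a sketch, with the helper names mine):

```python
# Conversion sketch (behavioral): D from SR, with D wired to S directly
# and to R through a NOT gate, so S and R are always complementary.

def sr_next(s, r, q):
    if s and not r:
        return 1
    if r and not s:
        return 0
    return q

def d_next(d, q):
    return sr_next(d, 1 - d, q)   # S = D, R = NOT D
```

Whatever the stored value Q, the next state is simply the applied D.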

(Refer Slide Time: 25:58)

Next, let us look at SR using JK. Here you do not have to do anything: if you have a JK flip-flop, you simply call J and K as S and R; Q and Q̄ are there, and you do nothing else. Why? Because, as I mentioned, the SR flip-flop is functionally a proper subset of the JK flip-flop: it uses only some of the functionality of JK, since the J = 1, K = 1 combination is not used.

But it does not matter, because you will never apply S = 1, R = 1 when you are using an SR flip-flop. So, it is a JK flip-flop where you never apply that input combination; the remaining combinations are identical to those of an SR flip-flop. So, here you do not have to make any change; it is straightforward.

(Refer Slide Time: 27:17)

Let us look at another one: a T flip-flop using a D flip-flop. In a T flip-flop we talk about toggling. So, you have a D flip-flop with D, Q, and Q̄.

Here you use an exclusive-OR (EXOR) gate: you apply the T input here, and the Q output is fed back here. See what this means. If T is 0, there is not supposed to be any change: the same Q value comes here and gets stored again by the D flip-flop; if Q is 0, a 0 comes here, and if it is 1, a 1 comes here. So, no change. And if T = 1, there is supposed to be a toggle: if Q is 0, then 0 XOR 1 = 1, and if Q is 1, then 1 XOR 1 = 0. There is a toggle, and the toggled value gets stored. So, it is a T flip-flop.
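
As a one-line behavioral sketch (the function name is mine):

```python
# Conversion sketch (behavioral): T from D with one XOR gate, D = T XOR Q.
# T = 0 re-stores the present Q (no change); T = 1 stores Q' (toggle).

def t_next(t, q):
    return t ^ q        # this value of D is stored at the next clock edge
```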

(Refer Slide Time: 28:55)

Let us look at another: T using JK. I already mentioned this earlier, but let us talk about it once more. If we have a JK flip-flop, you simply tie J and K together and call it T; that is done. Because for a T flip-flop you use only 2 of the rows of the JK flip-flop: the combinations J = 0, K = 0 and J = 1, K = 1. When T = 0 you select the first, and when T = 1 you select the second. Just that.

(Refer Slide Time: 29:47)

Then, let us look into a slightly more complex one. Suppose you have a D flip-flop and you want to build an SR flip-flop. Here you have a D flip-flop given to you, and you use an OR gate whose output you connect to D. You connect S to one input of the OR gate; then you have an AND gate whose output goes to the other OR input, with R connected through a NOT gate to one AND input, and Q brought to the other. I leave it as an exercise for you to check that this actually works as an SR flip-flop. You will see: if S = 0 and R = 1, this makes D = 0, that is, you are storing 0; if S = 1 and R = 0, it makes D = 1, that is, you are storing 1.

But if S = R = 0, then the previous value of Q comes through, and that same value gets stored.
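
The gate network just described reduces to one Boolean expression for D; here is a behavioral sketch of it (the function name is mine):

```python
# Conversion sketch (behavioral): SR from D using an OR, an AND, and a
# NOT gate, D = S OR (NOT R AND Q); S = R = 1 is assumed never applied.

def sr_from_d(s, r, q):
    return s | ((1 - r) & q)   # S forces 1, R forces 0, else Q is held
```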

(Refer Slide Time: 31:27)

And the last kind I am showing is JK using T. Suppose you have a T-type flip-flop and you want to build a JK flip-flop. You use an OR gate here, connected like this: you apply the K input here and the J input here, and you connect Q to the K gate and Q̄ to the J gate; this makes a JK flip-flop. Here again, I leave it as an exercise for you to verify, for all of these conversions, that the circuit actually behaves as a JK flip-flop, by applying all possible input combinations and checking that the state table is satisfied. Fine.
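
That exercise, too, can be checked with a behavioral sketch (function name is mine):

```python
# Conversion sketch (behavioral): JK from T, with T = (J AND Q') OR
# (K AND Q); the T flip-flop then toggles Q exactly when T = 1.

def jk_from_t(j, k, q):
    t = (j & (1 - q)) | (k & q)
    return q ^ t        # the T flip-flop toggles Q when T = 1
```

Enumerating all (J, K, Q) combinations shows the full JK state table: hold, reset, set, and toggle.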

So, with this we come to the end of this lecture, where we talked about various features of flip-flops, such as edge triggering and how it can be implemented, and lastly how to convert the various kinds of flip-flops from one type to another. In the next couple of lectures we shall be discussing some of the other clocking and timing issues in sequential circuits, which will be very useful to understand when you design sequential circuits, something we will take up in later discussions. These timing issues will be very important for you to understand and appreciate some of the concepts that we shall be discussing later.

Thank you.

Switching Circuits and Logic Design
Prof. Indranil Sengupta
Department of Computer Science and Engineering
Indian Institute of Technology, Kharagpur

Lecture - 34
Clocking and Timing (Part I)

In the last few lectures, if you recall, we discussed the various types of latches and flip-flops and how they work. We talked about both the edge-triggered and master-slave variations of flip-flops, which are very useful in designing synchronous sequential circuits. But there are some timing issues which we have not discussed so far; for instance, what should the clock frequency be, what is the maximum frequency at which a circuit can operate without any errors, and so on. We shall try to address some of these issues in this lecture. The title of this lecture is Clocking and Timing.

(Refer Slide Time: 01:08)

Fine, let us first try to understand why timing issues are important in a synchronous sequential circuit.

Let me repeat: a sequential circuit is said to be synchronous if there is a clock, and all state changes and outputs are generated in synchronism with the clock. So, a synchronous sequential circuit means that there is a clock which is fed to the flip-flops, and the flip-flops change their states in synchronism with the clock edges.

Now, as I said, there are several timing issues that need to be considered, as we shall see. These timing issues relate to some properties of the flip-flops. For example, given a flip-flop, when the clock edge arrives, the question is: when should we apply the data inputs? Consider a D flip-flop: some value should be applied to D, and then the clock edge should come. What is the minimum time before the clock edge by which the data must arrive? Such timing constraints should be considered. Also, after the clock edge comes, for how much more time should I hold my input data steady so that the data can go into the latch without any problem? These are issues that relate to the properties of the flip-flops, how fast they are, and so on.

And of course, there are some issues which relate to the delays of the circuits outside the flip-flops. We shall be considering all of these; taken together, they determine the maximum speed of operation of the circuit, and we normally designate the maximum speed of operation in terms of the clock frequency.

(Refer Slide Time: 03:34)

So, let us now look at some basic definitions with which we can explain these timing issues in a much better way. The first definition we talk about is clock width. Suppose this is my clock pulse; I could even make the pulse very narrow, like this. The clock-width constraint specifies the minimum time duration for which the clock signal must be high: it is this duration, this time t_w. So, there is a minimum clock pulse width which should be maintained in order for the flip-flop to work properly; this is the clock-width constraint.

(Refer Slide Time: 04:40)

Then we have something very important, called the setup time; I shall be giving some examples later.

Let us look at the definition. We designate the setup time by t_setup. It is defined as the amount of time the inputs to a flip-flop (the D input for a D flip-flop, the S and R inputs for an SR flip-flop, J and K for a JK flip-flop, and so on) must be applied and held stable before the clock transition comes. On the axis of time: suppose this is the time from which all the inputs are stable, and this is the time when the clock edge comes; for a positive edge-triggered flip-flop, the positive edge comes here.

So, the duration for which my data remains stable before the clock edge is defined as the setup time. The setup time is a characteristic of the flip-flop you are using: depending on the flip-flop, it has a specified setup time, and the input must be applied at least that amount of time before the edge.

(Refer Slide Time: 06:07)

In a similar way, you have another timing constraint called the hold time, denoted by t_hold; this is on the other side of the clock edge. It is the amount of time the inputs of the flip-flop must remain stable after the clock edge comes; that means after the clock goes high for a positive edge-triggered flip-flop, or goes low for a negative edge-triggered one.

Again, on the axis of time: for a positive edge-triggered flip-flop, say my clock edge comes at this point. What is the minimum duration for which I must hold my input stable after this? Let us say this much; this duration is my hold time t_hold. So, the setup time specifies how long before the clock edge I must apply the inputs, and the hold time specifies how much longer, after the clock edge, I must maintain those inputs stable so that the flip-flop works correctly. And of course, there is another delay to be considered: the propagation delay.

(Refer Slide Time: 07:27)

You see, these are the constraints we talked about, setup time and hold time; but the flip-flop, as a circuit, will have its own delay. There are so many gates, and all the gates have some delay. So, what is the minimum time after the clock edge has come at which the output starts to change? That is the propagation delay of the whole circuit, in this case the flip-flop.

The propagation delay we can define in 2 ways: for a low-to-high transition and for a high-to-low transition. After the clock edge has appeared, the output signal either was low and becomes high, or was high and becomes low. What is the total delay? Suppose the clock edge came here; with respect to the clock, the delay after which the output starts changing is denoted t_p-lh (low to high) or t_p-hl (high to low), as the case may be. Let us take an example.

(Refer Slide Time: 08:42)

Consider a scenario like this: here we have a simple D flip-flop with a D input and a clock. In the clock input I am applying a clock pulse like this; I am showing only a single pulse. Suppose the flip-flop is positive edge-triggered, as the symbol shows, so the circuit will be triggered by the positive edge of the clock, that is, this edge.

Now, the input data, the single input D, is coming. What I am saying is that, before the clock edge comes, my input data must remain stable for at least a time equal to t_setup; this is my setup time. And after the clock edge has come, I must keep the data stable for another duration of time, my hold time t_hold. If you maintain these t_setup and t_hold constraints, your flip-flop will work correctly; otherwise, if you apply the inputs too late or remove them too early, the data that finally gets stored in the flip-flop may be wrong. So, you need to satisfy the setup and hold constraints of a flip-flop, as I have shown you.
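
These two constraints amount to a pair of simple inequalities around the clock edge; the sketch below checks them for one edge, with all the nanosecond figures made up purely for illustration (they are not from any particular datasheet):

```python
# Sketch: checking the setup and hold constraints for one clock edge.
# The data must be stable from (edge - t_setup) to (edge + t_hold).

def meets_timing(stable_from, stable_until, clock_edge, t_setup, t_hold):
    setup_ok = clock_edge - stable_from >= t_setup    # data arrived early enough
    hold_ok = stable_until - clock_edge >= t_hold     # data kept long enough
    return setup_ok and hold_ok

# D stable over [10 ns, 22 ns], edge at 15 ns, t_setup = 3 ns, t_hold = 2 ns.
ok = meets_timing(10, 22, 15, 3, 2)
# Data arriving only at 14 ns gives 1 ns of setup margin: a violation.
late = meets_timing(14, 22, 15, 3, 2)
```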

(Refer Slide Time: 10:25)

Let us take another diagram to show this; the other delays are also shown here. Again we have a D flip-flop with a D input, a clock input, and output Q; I am not showing Q̄, only Q. Let us assume that in the clock input I apply a clock pulse; because the flip-flop is positive edge-triggered, it will be triggered on the positive edge of the pulse. This is my triggering edge.

Now, suppose in the D input I apply some signal like this. When the clock edge comes, the value of D is 1 at this point. So, t_setup is the minimum duration before this edge by which I must apply D = 1 and leave it like that; similarly, after the clock edge comes, I must maintain D for the hold time t_hold.

And after the clock edge comes, say at this red vertical bar, how much time does the output take to change? This is my propagation delay. Since D was 0 and becomes 1, Q will also become 1; this is a low-to-high transition, so the delay is denoted t_p-lh. Later, when the next clock edge comes, my D is 0; the value of D is 0 here.

So, again, I must apply this 0 a setup time before the edge and maintain it for a duration t_hold after it. Since the output now goes from 1 to 0, the corresponding delay is t_p-hl. For a flip-flop these 2 delays are typically equal, but in case they are unequal, 2 different symbols, lh and hl, are used.

So, in this diagram we show all the relevant delays; and of course the width of the clock pulse, t_w, which I mentioned, is also shown.

(Refer Slide Time: 13:01)

Now, the next question is: what happens when we connect more than one flip-flop, one after the other, in what we call a cascade of flip-flops? Let us look at a very simple scenario where 2 D flip-flops, the simplest kind of flip-flop, are connected in cascade. An input signal In is applied here; the first flip-flop generates an output Q0, which is fed as the input to the next flip-flop, and Q1 is the final output. The clock signal is applied to both; let us assume these are both positive edge-triggered.

Now, there are some considerations to look at here. The first thing we ask is: what if the flip-flop propagation delay exceeds the hold time? What do we mean by this? Look at the clock signal. Suppose the clock signal comes like this, and this is the active edge; just assume that the hold-time and setup-time constraints are being satisfied. After some propagation delay, which I am showing after the clock edge, the output Q0 will be available; this delay equals the propagation delay t_p, either low-to-high or high-to-low. So, it is at this point that the output Q0 is available, after the delay of the flip-flop.

Now, what I am saying is: what if the flip-flop propagation delay t_p-lh exceeds the hold time? You see, when the clock edge comes like this, the same clock signal is applied to both flip-flops. Consider the second flip-flop when the next clock pulse comes: the value of Q0 should be held stable at least a setup time before that clock edge. Now, if t_p-lh is too large, so that it eats into the setup time, then the setup-time constraint will be violated; this can be a problem. So, you should be careful that the clock frequency is not so fast that such a thing happens.

(Refer Slide Time: 15:51)

Now, let me show a scenario, the correct scenario, where all the timing constraints are met. This is my clock signal; this is my first edge, and this is my second edge. I apply some input value here. When the clock edge comes, here, I have applied the input at least t_setup before it, and the input is stable until t_hold after it; this is fine. The output Q0 will be available after the clock edge, after a delay t_p-lh, because Q0 is changing from low to high.

Now consider the second flip-flop: it takes Q0 as its input and stores it. This new value of Q0 will be captured at the next clock edge, which arrives at the D input of the second flip-flop. For the second flip-flop, you see, the setup-time constraint is satisfied, because Q0 is available already here; there is a sufficient margin, and of course the hold constraint is also met.

So, after that time, Q1 goes from low to high: after the delay t_p-lh, the value of Q0, which is 1, gets stored here. This shows the basic timing requirement; look at it once more carefully. This is your total time period of the clock, from this edge of the first stage up to the next edge. Now, within the time period, what must be included? You must have one setup time, and you must have this propagation delay; the hold time and the propagation delay can overlap. So, the setup time and the propagation delay are the most important delays that must be accommodated within the period.
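
This timing budget is just an addition; the sketch below computes the resulting minimum period and maximum clock frequency, with the nanosecond figures chosen only for illustration:

```python
# Sketch of the cascade timing budget: one clock period must accommodate
# the flip-flop propagation delay, any combinational delay between the
# stages, and the setup time of the next flip-flop.

def min_clock_period(t_p, t_comb, t_setup):
    # all arguments in nanoseconds; returns the minimum period in ns
    return t_p + t_comb + t_setup

def max_frequency_mhz(t_p, t_comb, t_setup):
    # 1 ns period corresponds to 1000 MHz
    return 1000.0 / min_clock_period(t_p, t_comb, t_setup)

# t_p = 4 ns, no logic between the flip-flops, t_setup = 2 ns:
# the period must be at least 6 ns, i.e. at most about 166.7 MHz.
period = min_clock_period(4, 0, 2)
fmax = max_frequency_mhz(4, 0, 2)
```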

(Refer Slide Time: 18:11)

So, when we talk about edge-triggered clocking, the triggering may be either on the positive edge or on the negative edge; it works both ways. To summarize: data that is generated at the output of a flip-flop, say at time t, must arrive at the next flip-flop one setup time before t + T, where T is the clock period. This is exactly what I showed in the previous diagram: it must appear one setup time before the next edge. The second point is that the delay from each input flip-flop, through a combinational block, to each output flip-flop should be less than T minus t_setup.

So, if a flip-flop is too slow, it will eat into the setup time of the next flip-flop, and a timing violation will result. There is another issue, which we shall talk about only very briefly later: the clock signal does not arrive at exactly the same time at all the flip-flops. There will be some delay, because when the wires are laid out on a chip, the flip-flops may be spread all around the chip. The clock is generated at some point and connected to all the clock inputs of the flip-flops.

Now, because the wires may be of unequal length, their delays will also be unequal. If you are not careful about this fact, the clock edge might reach the flip-flops at slightly different times, either a little early or a little late, and this may lead to incorrect behaviour of the circuit. This is referred to as clock skew, but we are not discussing clock skew in detail here.

(Refer Slide Time: 20:26)

Let us now look at a couple of examples of how the flip-flops we have designed can be used to carry out certain very useful things. The first example we consider here is frequency division. What do we mean by frequency division? Suppose we already have a clock signal of f hertz; what is a hertz? f hertz means f oscillations per second; that is the definition of hertz.

So, we want to divide this frequency by 2; we want to generate another clock signal whose frequency will be f/2. How do we do it? Let us say I have a clock of 2 kilohertz and I require another clock of 1 kilohertz, half of it. This is extremely easy to do using a special kind of flip-flop.

Let us take a T flip flop with output Q, and let us say this is my clock. What do I do?
I apply a constant 1 on the T input, and on the clock input I apply the f hertz
clock signal. What I claim is that whatever is generated at the output

503
will be an f/2 hertz clock signal. How does this happen? Let us look into it with
respect to a timing diagram.

So, I am showing a clock with 4 pulses, coming continuously. Now think of a T flip flop
that is positive edge triggered; the 0-to-1-going edge is the active edge of the clock.
Now, what will be the value of Q? Suppose initially Q was 0. What does a T flip flop do?
A T flip flop says that if the T input is 1 and a clock edge comes, the output will be
toggled: if it was 0 it will become 1, if it was 1 it will become 0. Here I am applying
T continually as 1, so every time a clock edge comes the output will toggle, because
T is always 1.

So, when the first clock edge comes, Q will toggle and become 1; when the next edge comes
it will again toggle and become 0; at the next it will again become 1, and at the next it
will again become 0. So you see, over the time duration which I have drawn, the original
clock generated 4 pulses, while at Q only 2 pulses are generated: for every 2 input pulses
one output pulse is generated, which means I have reduced the frequency to half. So, a
simple T flip flop like this can be used to divide the frequency by 2. There is another
advantage in the output Q.

If you call the on period and the off period of the output clock t_on and t_off, they
will be exactly equal, and each will be equal to the time period of the original clock
signal. So, I will be getting an exact square wave with equal on and off durations.
So, this is one example.
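As a sketch (not part of the lecture), the divide-by-2 behaviour of a T flip flop with T tied to 1 can be simulated in a few lines; the function name is illustrative.

```python
# Sketch: a positive-edge-triggered T flip flop with T tied to 1,
# showing the divide-by-2 behaviour at Q.
def t_flipflop_divider(n_edges):
    # n_edges: number of active (rising) clock edges applied.
    q = 0                  # assume Q starts at 0
    outputs = []
    for _ in range(n_edges):
        q ^= 1             # T = 1, so Q toggles on every active edge
        outputs.append(q)
    return outputs

# Q completes one full cycle (0 -> 1 -> 0) for every 2 clock edges: frequency f/2.
print(t_flipflop_divider(8))  # [1, 0, 1, 0, 1, 0, 1, 0]
```

Eight input edges produce only four complete output cycles at Q, which is exactly the halving of frequency described above.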

504
(Refer Slide Time: 25:24)

Let us extend this example further. Again we are given a clock of frequency f, and again
we are doing frequency division, but we are not dividing by 2; rather, we are dividing
by some arbitrary power of 2, say 2^k where k is some integer. Let us take an example
again here.

Let us assume k is 3, which means I am trying to divide the frequency by 8, since 2^3 is 8. So,


the required circuit here will be very simple. Again we are using T flip flops, but not
1; we use k of them, that means 3 T flip flops. All the T inputs I am setting to 1, just
like before. The clock input of frequency f I am feeding to the first flip flop; the
output Q of the first T flip flop I am feeding to the clock of the second; the output of
the second I am feeding to the clock of the third, and its output is my final output. So,
what do I claim? That if my clock input frequency is f, then here the frequency will be f/8.

Let us call the outputs Q1, Q2 and Q3, and let us work out how this works. Suppose I have
clock pulses here: 1 2 3 4 5 6 7 8 9 10. Let us say 10 pulses, and since the flip flops
are leading edge triggered, this is the active edge of the clock. If you plot Q1, Q1 will
toggle every time the active edge of the clock comes, because T is 1. Suppose Q1 was
initially 0.

So, it will toggle once here, again here, again here; it will go on toggling like this.
So, where the clock frequency was f, for Q1 it will be f/2, just as we discussed in the
previous example. Now think of Q2: for Q2 the clock input is Q1; Q1 is fed

505
to the clock. So, the same thing will happen, but with respect to the active edge of Q1.
If Q2 was also initially 0, the first toggling will be here, the next toggling here, then
here, then here, and so on.

So, you see, for every 2 pulses of Q1, 1 pulse of Q2 is generated; the frequency here
will be f/4. Again, for Q3, Q2 is fed as the clock of the last flip flop. Similarly, if
Q3 is 0 initially, there will be one toggling here, one toggling here, one toggling here
and so on. So, for every 2 pulses of Q2, one pulse of Q3 will be generated.

So, this will be f/8. You see, just by using a chain of T flip flops we can very easily
divide an input clock frequency by any arbitrary power of 2. These are very interesting
features of the T flip flop in particular; they are very useful whenever we want to
generate specific timing signals with some frequency relationships with respect to the
original clock.
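The ripple chain just described can be sketched the same way (an illustrative simulation, not the lecture's circuit): each stage toggles on a rising edge of the previous Q, and counting rising edges of the final output shows the division by 2^k.

```python
# Sketch: a ripple chain of k T flip flops (all T inputs tied to 1), each
# stage clocked by the Q of the previous one.
def ripple_divider(k, n_edges):
    q = [0] * k                    # assume all flip flops start at 0
    rising_last = 0
    for _ in range(n_edges):       # each iteration is one rising input edge
        for i in range(k):
            q[i] ^= 1              # T = 1: toggle on the active edge
            if q[i] == 0:
                break              # falling edge: the next stage is not clocked
            if i == k - 1:
                rising_last += 1   # rising edge seen on the final output
    return rising_last

# 16 input edges, k = 3: only 2 rising edges at the last Q, i.e. frequency f/8.
print(ripple_divider(3, 16))  # 2
```

With k = 1 the same function reproduces the divide-by-2 case (8 output edges for 16 input edges).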

So, with this we come to the end of this lecture, where we discussed some of the
timing and clocking issues that arise in circuits when you use flip flops, and we shall
continue our discussion in the next lecture as well.

Thank you.

506
Switching Circuits and Logic Design
Prof. Indranil Sengupta
Department of Computer Science and Engineering
Indian Institute of Technology, Kharagpur

Lecture - 35
Clocking and Timing (Part II)

So, we continue with our discussion on Clocking and Timing issues in flip flops and
synchronous sequential circuits; the title of this talk is Clocking and Timing, Part II.
In this part, we shall be talking about several other important aspects of timing.
For example, we shall discuss the maximum clock frequency that we can
use in a circuit such that correct operation is guaranteed; then we shall look at
something called hazards, and at clock skew, which we have already mentioned; and finally,
we shall look at an example synchronous sequential circuit and see how it works.

507
(Refer Slide Time: 01:10)

So, we consider a very simple example. Any synchronous
sequential circuit, if you look at it at the block diagram level, will generally consist of 2
blocks: one block contains one or more flip flops, and the other block
is a combinational circuit, just gates. The outputs of the
combinational circuit feed the flip flops, and the flip flop outputs can come back to the
inputs of the combinational circuit. In addition, you can have some more external inputs
coming in, and some outputs can be generated. Of course, a clock
will be there, applied to the flip flops. In this example, however, we do not have external
inputs coming to the combinational circuit, or outputs generated from the combinational
circuit.

So, for the flip flops we have a single D flip flop here, and for the combinational circuit
we have a single inverter, or NOT gate. Let us look into this example and try to do an
analysis. The assumptions regarding the delays are as follows: we will
assume that the propagation delay of the inverter (NOT gate), tp-inv, is 2 nanoseconds.

The delay of the flip flop, that is, how much time after the clock edge the output
appears, is tp-FF = 5 nanoseconds, and the setup time of this flip flop is 3 nanoseconds.
Now, as I said in our last lecture, the hold time and the propagation
delay overlap; the hold time need not be considered when we are talking about the maximum
clock frequency or the minimum time period.

508
(Refer Slide Time: 03:34)

So, for this example, as I said, we are now trying to determine the minimum clock
time period, and hence the maximum clock frequency, for which the circuit will work
correctly. Let us look at the clock: the clock signal is coming, and I am showing just
2 pulses, positive edge triggered, so this is the active edge. Let us concentrate on
the second edge.

So, with respect to the first edge: when the first edge comes, after how much time will
D be generated? There will be a delay here equal to the delay of the inverter before the
new D is generated. And once D is generated, you have to hold it; before the next clock
pulse you must also have the mandatory setup time. And you should not forget the delay of
the flip flop either: the flip flop output itself appears only after its propagation
delay, and only then does the inverter start producing the new D.

So, there will also be a propagation delay of the flip flop. These 3 delays get added
up, and the clock time period must be at least their sum. What I am saying is that Tmin
in this example will be the sum of these 3 times, 2 + 5 + 3, which comes to 10
nanoseconds. Now, because frequency is the reciprocal of the time period, the maximum
frequency will be 1 divided by the minimum time period: 1 / (10 ns) = 1 / (10 × 10⁻⁹ s)
= 10⁸ Hz, which is 100 megahertz (100 × 10⁶ Hz).
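The arithmetic above can be written out as a short sketch (delays from the example, in nanoseconds; the variable names are illustrative):

```python
# Sketch of the minimum-clock-period calculation for this example.
tp_inv  = 2   # propagation delay of the inverter
tp_ff   = 5   # clock-to-Q propagation delay of the D flip flop
t_setup = 3   # setup time of the flip flop

t_min_ns = tp_ff + tp_inv + t_setup   # minimum usable clock period, in ns
f_max_hz = 1e9 / t_min_ns             # reciprocal of T_min (ns converted to Hz)

print(t_min_ns)        # 10
print(f_max_hz / 1e6)  # 100.0 (MHz)
```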

509
(Refer Slide Time: 06:25)

Now, suppose we choose a clock with time period 9 nanoseconds. When designing a circuit
we have to make this calculation and see whether 9 nanoseconds is enough or not. In this
case, because we see that we need a minimum of 10 nanoseconds, this will lead to a timing
violation; we cannot have a clock with time period 9 nanoseconds. But if we use a clock
of 12 nanoseconds or 15 nanoseconds, this is certainly greater than the minimum of 10,
so it is perfectly permissible. In practice, whatever minimum we calculate, we also add
some margin, you can say a buffer.

So, if the calculation comes to 9, we say, well, let us keep a minimum of 10; and in
that way we can choose the maximum clock frequency with which we can operate our
circuit.
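The check just described can be sketched as a simple comparison against the 10 ns minimum computed for this example:

```python
# Sketch: checking candidate clock periods against the 10 ns minimum.
T_MIN_NS = 10

def timing_ok(t_clk_ns):
    # A clock period is acceptable only if it is at least T_min.
    return t_clk_ns >= T_MIN_NS

print(timing_ok(9))    # False: 9 ns causes a timing violation
print(timing_ok(12))   # True
print(timing_ok(15))   # True
```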

510
(Refer Slide Time: 07:38)

And one thing to note: as we have said, the hold time does not affect the calculation
here. The hold time specifies for how long after the clock edge the inputs must be held,
but the propagation delay is already measured from the clock edge; the propagation delay
includes the hold time. So, we need not add the hold time again; it does not affect the
calculation.

(Refer Slide Time: 08:10)

Well, let us take another example, drawn like this: suppose we have a D flip flop, call
its input D1 and output Q1, and Q1 has been fed to a

511
chain of gates, let us say; here I am showing 3 gates. Some external
inputs are coming in, and Q1 itself can come back here; this chain feeds another
D flip flop, call its input D2 and output Q2, and there is a clock
signal which is fed to both these flip flops.

Consider a scenario like this, and assume that the propagation delay of a NAND gate, tp-NAND, is
3 nanoseconds, the propagation delay of a flip flop is 5 nanoseconds, and the setup time is also 3
nanoseconds. For this circuit, let us look at the timing once more; I am showing 2
clock pulses, and these are the active edges.

If I look at Q1: when the clock edge comes, the propagation delay is 5 nanoseconds.
Whatever Q1 was earlier, after a delay of 5 nanoseconds (tp-FF) from the clock edge I
will get the new value of Q1, and this value will remain unchanged after that. Now, let
us talk about D2: what will happen with D2? D2 will additionally face the delay of the 3
NAND gates, which will be 3 × 3 = 9 nanoseconds.

So, there will be another delay of 9 nanoseconds after this, and the new value of D2
will be generated here. In addition, before the next clock edge comes, you must have the
setup time of 3 nanoseconds maintained. So 5, 9 and 3 add up to the minimum time that
you must have between 2 successive clock edges; in this case your minimum time period
will be 5 + 9 + 3, which is 17 nanoseconds.

In this way, given any circuit, if you have an estimate of the delays of the different
components, the gates and flip flops, then you can make a calculation which tells you
the maximum frequency at which your circuit can work. You should not blindly select a
clock frequency and see whether the circuit works, because every circuit component has
some delay, and the clock must be slow enough that all gates get sufficient time for
evaluation. These 2 examples try to show this.
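The path-based calculation used in both examples can be captured in one small helper (a sketch; the function name is illustrative, delays are those assumed in the lecture):

```python
# Sketch: minimum clock period = clock-to-Q delay of the source flip flop
# + sum of gate delays along the combinational path
# + setup time of the destination flip flop (all in nanoseconds).
def min_clock_period_ns(tp_ff, gate_delays, t_setup):
    return tp_ff + sum(gate_delays) + t_setup

# Example 1: D flip flop feeding back through a single inverter (delay 2).
print(min_clock_period_ns(5, [2], 3))        # 10
# Example 2: a chain of three NAND gates (delay 3 each) between flip flops.
print(min_clock_period_ns(5, [3, 3, 3], 3))  # 17
```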

512
(Refer Slide Time: 12:40)

Now, talking about clock skew, which I mentioned earlier in the last lecture: the
main concern is the clock signal. Suppose we have a chip; the clock signal may be fed
to an input of the chip, and there can be several flip flops spread all around the chip.
So, this clock signal has to be connected here, and here, and here, and so on.

The lengths of the wires will be unequal, and because of the resistive and capacitive
effects of these wires there will be delays (RC time-constant delays), which will in
general be unequal. Due to these unequal propagation delays along the various paths,
there will be clock skew, which means the clock signal may not reach all the flip flops
simultaneously.

Of course, when we design the clock network we try to reduce clock skew as
much as possible, ideally to 0; but in practice we cannot make it exactly 0, so we
try to reduce it as much as possible. There are methods for doing that, but they are
slightly beyond the scope of this course; we shall not discuss how to reduce skew here,
only what skew is. Now, the point to note: let us assume t_skew is the maximum possible
clock skew, meaning for 2 flip flops, if the first clock arrives like this and
the second clock is delayed like this, the maximum difference
between them is defined as t_skew.

If you also consider clock skew, then the calculation of the minimum time period for the
clock will be a little different. As we have seen earlier, for the Tmin

513
calculation we have to add the delay of the flip flop and the delay of the
combinational part (the 3 NAND gates in the previous example), and of
course the setup time. Now, in this case, we also have to add the clock skew.

We do not know exactly how much the clock skew is, but we know what the maximum clock
skew can be, which means for a given flip flop the clock edge might get delayed or
advanced by that amount. So, you must make provision for that too when estimating the
minimum time period of the clock. Because of this, the usable clock frequency can reduce
with respect to the case where you ignore clock skew; this is the idea.
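The skew-extended formula can be sketched as follows; note that the 2 ns skew figure is an assumed illustrative value, not one given in the lecture:

```python
# Sketch: T_min = clock-to-Q delay + combinational path delay
#               + setup time + worst-case clock skew (all in ns).
def t_min_with_skew(tp_ff, tp_comb, t_setup, t_skew):
    return tp_ff + tp_comb + t_setup + t_skew

# The previous example (5 + 9 + 3 = 17 ns) with an assumed 2 ns of skew:
print(t_min_with_skew(5, 9, 3, 2))  # 19
```

The extra term lengthens the minimum period, which is exactly why skew reduces the usable clock frequency.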

(Refer Slide Time: 15:51)

Now we consider another property of circuits related to delays; these are called
hazards. The idea is: given a circuit, I apply some inputs and I expect some
outputs. Let us say the output was 0 and now the output is supposed to be 1. It may so
happen that instead of the output going from 0 to 1 in a clean transition, it varies
several times before settling at 1; that means there can be unnecessary transitions for
very short durations. These are called glitches, and such glitches are referred to as
hazards in circuit design.

So, very loosely, a hazard can be defined as a momentary transition in the value of a
signal line to the opposite logic value. I am showing some examples; 2 types of hazards
are possible, static and dynamic. A static hazard is a momentary transition when the
initial and final values are the same; for example, the initial value

514
was 0 and the final value is 0, but the output makes a momentary transition to 1
before settling at 0; that means 0, 1 and back to 0. Or the other way round: the output
is supposed to stay at 1, but it makes a very temporary transition to 0 before settling
back to 1. A dynamic hazard is similar, but here the initial and final values are different.

Let us say the initial value is 0 and the final value is 1. Instead of a clean
transition, the output may make transitions like 0 to 1, back to 0, and again to 1
before settling; or the reverse, 1 to 0, then 0 to 1, then 1 to 0. Now, if the circuit
outputs are directly connected to some other circuit, these glitches may affect the
operation of that circuit, and there can be a problem. So, you should be aware of such
glitches or hazards, and design circuits in such a way that the downstream logic is
only activated at the edges of the clock, with the hazards stabilized before the clock
edge comes.

(Refer Slide Time: 18:56)

Let us take some examples to illustrate the occurrence of such hazards. Here we
consider a circuit comprising four gates; let us assume the delay of the NOT gate is
1 unit and the delay of each of the other gates is 2 units, and let us show a typical
scenario. Suppose input A is held constant at 0, and B is also held constant at 0.
As for S, let us say it was 0 and we make it 1 here, then leave it at 1. What will
happen to S'? S' is the NOT of S, and the delay of this NOT gate is 1.

515
So, S' will not become 0 immediately, but after a delay of 1; the grid lines I have
shown are one unit apart. After one unit S' will become 0, and then it will remain 0.
Now, if you calculate F, what will its value be? One gate computes the OR of A and S,
with a delay of 2; so that point becomes 1 two units after S rises. The other gate
computes the OR of S' and B; since S' starts at 1, its output also changes only after
a delay of 2.

So, if you plot it, F will come out like this; I suggest you also draw the waveforms of
these 2 OR outputs and then AND them, and it will be clear to you. What you see is that
F was 0, and it is supposed to be 0 again in the stable state, but because of the delays
of the gates it temporarily moves to 1 and remains 1 for one time unit before coming
back to 0. This is an example of a static hazard.
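Before moving on, the glitch just described can be reproduced with a small unit-delay simulation (a sketch under the stated delay assumptions, not part of the lecture); the circuit is read as F = (A OR S) AND (S' OR B):

```python
# Unit-delay simulation of F = (A OR S) AND (S' OR B),
# with the delays assumed above: NOT = 1 unit, OR and AND = 2 units each.
STEPS = 12

def gate(fn, delay, *waves):
    # Output waveform of a gate with the given delay; before the first
    # input values have propagated, hold the initial steady-state output.
    return [fn(*(w[max(t - delay, 0)] for w in waves)) for t in range(STEPS)]

A = [0] * STEPS                  # A held constant at 0
B = [0] * STEPS                  # B held constant at 0
S = [0, 0] + [1] * (STEPS - 2)   # S goes 0 -> 1 at t = 2

S_bar = gate(lambda s: 1 - s, 1, S)              # NOT gate, delay 1
or1   = gate(lambda a, s: a | s, 2, A, S)        # A OR S, delay 2
or2   = gate(lambda sb, b: sb | b, 2, S_bar, B)  # S' OR B, delay 2
F     = gate(lambda x, y: x & y, 2, or1, or2)    # final AND, delay 2

print(F)  # [0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0]: a one-unit glitch at t = 6
```

The steady-state value of F is 0 both before and after the change in S, yet the simulated waveform shows a momentary 1, which is exactly the static hazard.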

(Refer Slide Time: 21:53)

This is a slightly more complex circuit. Here we assume that each of the 3 NOT gates
has a delay of 1, and each of the four NAND and NOR gates has a delay of 2. Again let
us take a scenario where input A is held permanently at 0; the input B, let us say,
was 0, and we make it 1 here and leave it at 1.

The C input, let us say, is continuously 1; these are the 3 inputs A, B, C we are
applying. Now let us see what the value of X will be. You see, the value of B gets
inverted twice by 2 NOT gates, so the same value should come out at X, but after a
delay of 2 units (1 plus 1). So,

516
whatever B is, X will follow it after a delay of 2. And what will Y be on the other
side? Here A is 0 and B is becoming 1, so the output Y will become 1; initially Y
was 0, because A and B were both 0.

So, Y was 0 and is supposed to become 1, but the delay from the inputs will be 2 plus
1 = 3; counting 1, 2, 3 from where B changes, Y will also change, but after a delay of
3. Now, if you look at F (I am again jumping some steps; you can also plot the
intermediate values, which may be easier), you will see that in this part of the
circuit, after X has arrived there is a delay of 2 before the next gate output changes,
and again a delay of 2 before F changes: 1, 2, 3, 4.

So, you will get a waveform at F like this. The first edge is because of the transition
in B (shown with an arrow), the second edge is due to the transition of X, and the third
edge is due to the transition at Y. You see, all these edges arise because of the delays
in the gates and some signal value changing somewhere; this is an example of a dynamic
hazard, which we have demonstrated. We are not going into the theory behind static and
dynamic hazards; I have just shown you examples, and with their help we have seen how
these hazards show up.

(Refer Slide Time: 26:05)

517
Lastly, we shall show you one example of a synchronous sequential circuit; this is an
example of a serial adder. Before going into the serial adder: earlier you have already
seen how a parallel adder works. Whenever you add 2 binary numbers, say 0 1 0 1 1 and
another number 0 0 1 0 1, you add the bits column by column while propagating the
carry; this gives the sum, and for the first stage you typically assume that the
carry is 0.

So, in the parallel adder design we had a full adder in every stage. Now we are saying
we have a serial adder; that means I want to design a circuit where these 2 numbers,
let us call them A and B, are fed serially. Serially means least significant bit first:
first I feed 1 and 1, then 1 and 0, then 0 and 1, then 1 and 0, then 0 and 0; and at
the output I am expected to generate the sum bits in that same order: first I will get
0, then 0, then 0, then 0, and finally 1.

So, there is a single input A and a single input B, where I send 1 bit at a time, and
at the output I generate 1 bit at a time. If you think about how this circuit should
work, imagine yourself to be the serial adder, with the bits coming in one by one. If
I tell you that the next pair of bits is 1 and 0, what should the output be? Your
question will be: I cannot say what the output will be unless I know my carry, because
I also have to add the carry to A and B to get the next sum bit. So, you will have to
memorize the carry; there has to be a flip flop inside the circuit which memorizes
the last carry.

518
(Refer Slide Time: 29:02)

So, to implement this circuit (we shall go into formal ways of designing such circuits
later), we shall have a D flip flop and a full adder. The 2 serial bits A and B are fed
to the full adder, the carry input comes from the memorized carry stored in the flip
flop, the carry output of the full adder is stored back into the flip flop, and the
sum is generated as the serial output; all of this works in synchronism with a clock.
The only point to note is that this flip flop must have an asynchronous clear input,
because we have to initialize Q to 0 before starting the addition. So, every time you
add 2 bits, the previous carry comes in, those 3 bits are added, you generate a sum
bit and a carry, the new carry gets stored, and the process continues.
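The behaviour just described can be sketched in a few lines (an illustration of the data flow, not the lecture's hardware): the loop variable `carry` plays the role of the D flip flop, updated once per clock pulse.

```python
# Sketch of the serial adder's behaviour: a full adder plus a D flip flop
# that remembers the carry between clock pulses (carry cleared to 0 first).
def serial_add(a_bits, b_bits):
    # a_bits, b_bits: equal-length bit lists, least significant bit first.
    carry = 0                    # asynchronous clear: Q initialised to 0
    sum_bits = []
    for a, b in zip(a_bits, b_bits):
        s = a ^ b ^ carry                             # full-adder sum output
        carry = (a & b) | (a & carry) | (b & carry)   # new carry, stored back
        sum_bits.append(s)
    return sum_bits

# The lecture's example, fed LSB first: 01011 + 00101 = 10000
print(serial_add([1, 1, 0, 1, 0], [1, 0, 1, 0, 0]))  # [0, 0, 0, 0, 1]
```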

So, you see, the advantage of a serial adder is that you need so little hardware: only
one full adder and one flip flop. But the drawback is that if you are adding two 16-bit
numbers, you will require 16 clock pulses; so it is very slow, being serial. We have
just shown one example; later on we shall look into more formal ways of designing or
synthesizing a circuit given a specification, starting from our next lecture onwards.
So, with this we come to the end of this lecture; over the last 5 lectures, we discussed
various latch and flip flop types and clocking and timing issues.

Thank you.

519
Switching Circuits and Logic Design
Prof. Indranil Sengupta
Department of Computer Science and Engineering
Indian Institute of Technology, Kharagpur

Lecture - 36
Synthesis of Synchronous Sequential Circuits (Part I)

So, if you recall our last few lectures, we have been discussing the various sequential
circuit components which help us in designing synchronous sequential circuits. In particular
we discussed the design of the various kinds of flip flops, how they can be excited, and
how the flip flop states can be changed from the present state to the next state. And of
course, we have seen the different ways in which we can convert one kind of flip flop to
another.

So, now with that background, let us see how we can design or synthesize general
synchronous sequential circuits starting from a specification; our target is to obtain the final
gate-level and flip-flop-level circuit from the specification. The title of today's talk is
Synthesis of Synchronous Sequential Circuits, and we are starting with Part I.

(Refer Slide Time: 01:23)

So, we have already mentioned earlier that circuits are of 2 types: combinational and
sequential.

520
Let me repeat it once more, because this difference is extremely important. A
combinational circuit is one where the outputs depend just on the applied inputs and
nothing else. In contrast, in a sequential circuit you apply some inputs, but the
outputs generated depend not only on the inputs, but also on some internal history of
the system. This history maintained by the system is called the
state of the system. So, whatever output you get will be
depending not only on the inputs you apply, but also on which state the circuit is
presently in.

We mentioned very briefly earlier that the flip flops you have studied help in
maintaining or remembering the state of such a sequential circuit, or sequential machine
as we sometimes call it. So, what I have just said is summarized here: in a combinational
circuit the outputs depend only on the applied inputs, but in a sequential circuit the
outputs depend on the applied inputs

and also on the internal state, as I just mentioned. Another point to note is that as
we go on applying inputs in some sequence, the circuit also goes on changing its state;
the state of the system changes with time in response to the applied inputs. This is an
important point: the internal state of the machine or circuit changes with time. And
one thing to remember: the history that the system or circuit is remembering is the
state of the system.

Now, from a practical point of view, just think: if the number of possible states in the
system is infinite, can you really build such a system? The information about the state
is maintained in flip flops, or a set of flip flops. If you have n flip
flops, they can be in 2^n possible combinations, so a maximum of 2^n distinct states can
be represented. But if we talk about infinite states, this indirectly means that we require an
infinite number of flip flops, which practically you really cannot build.

So, from a practical point of view we talk about something called Finite State Machines,
or FSM in short. A finite state machine is nothing but a sequential circuit where the
number of states, the internal history information, is finite in number; that is why it
is called a finite state machine. And of course, as I have mentioned repeatedly, most of
the practical circuits we see

521
around us are sequential in nature; very rarely will you find a circuit whose output
depends only on the inputs and not on the history.

(Refer Slide Time: 05:20)

Let us look into these finite state machines in slightly more detail. The first thing
is that whenever we talk about a finite state machine or a sequential circuit, we have
to start from somewhere; somehow we have to specify the behavior of the circuit. For
example, for a combinational circuit or function we specify the behavior either by using
a switching expression or by using a truth table and so on. In exactly the same
way, for a sequential circuit there must be some systematic way to specify the behavior
of the circuit or the system.

The two ways you can do that are either in the form of something called a state table
or a state transition diagram. Note that the state table and state transition diagram
are equivalent; they represent the same information, but in two different ways. The
state table is a tabular representation; the state transition diagram is a diagrammatic
representation. So, it is fairly straightforward to convert from one form to the other.

So, when actually designing a sequential circuit, it is good enough to start with any
one of the two; you do not need both. But in the examples that we shall see in
subsequent lectures, we shall show you both representations, just for your convenience.
There are some other ways also to represent finite state machines, particularly more
complex ones; one of them is the algorithmic state machine, or ASM, which we shall
discuss
522
briefly later. But right now we shall not be considering the ASM chart in this
discussion; we shall be talking about state tables and state transition diagrams.

Let us take a very simple example to illustrate. Consider that we want to build a
circuit with a single input, let us say X, an output Z and of course a clock. On this
input X we apply a sequence of bits, that is, a serial bit stream. For example, we
apply a bit stream like 0 1 1 0 1 1 1 1 0 1 0 1 1 1 and so on.
The circuit should detect 3 or more consecutive 1s in this
stream. These bits are applied in synchronism with the clock: we apply a bit, apply a
clock pulse, apply the next bit, apply a clock pulse, and so on.

So, the output will become 1 whenever it detects 3 consecutive 1s, 3 or more in fact.
In this case the output starts as 0; out here it becomes 1 because there are 3 1s, in
fact 4 1s, so here also it remains 1; then it becomes 0 again, and again it becomes 1
out here because there are again 3 1s. So, we want to design a circuit like this.

The first thing you realize is that this is not a very trivial or simple kind of
specification. Somehow we have to represent or capture this information in the form of
a table or, as I said, in the form of a state transition diagram. Let us see how, from
this specification of detecting a consecutive sequence of 3 or more 1s, we can formally
capture the behavior and represent it in a systematic way.

(Refer Slide Time: 09:56)

523
Let us look at the diagram on the right first. Here I have shown a diagram called a
graph, where there are some nodes or vertices represented as circles. You see there
are 4 such vertices, A, B, C and D; these vertices represent states of the system.

So, there are 4 states. How many states are needed for a given problem is something
the designer has to figure out; in this problem, as you will see, 4 states are
sufficient. These are A, B, C, D, and state A is the starting state: when you start
this system it begins in state A. There can be a reset input, so whenever you reset
the system it initializes itself into state A.

Now what I am saying is that whenever there are 3 consecutive 1s the output has to be 1. So, how do you capture this in this diagram? Well, we shall be discussing this in more detail later, but let us try to explain with this example how we have constructed this diagram. While in state A, see, there are some arrows; you can see these are edges, and these arrows connect one state to another.

And these arrows are labeled with 2 numbers, let us say alpha and beta; this alpha indicates the input, that means what input is coming, and beta indicates what output will be generated. You see, when you are in state A you are looking for 1 1 1, right, but if you see there is a 0, well, you will have to remain in state A because you have not yet started the sequence; it is a 0. So, you have no information about 1 1 1; you remain in A and your output is 0 because you have not yet got 1 1 1. But as soon as you get the first 1, let us see this edge, you go to state B. State B indicates that a single 1 has been found; that is the meaning of state B, and still the output is 0 because we have found only a single 1.

Now, while in state B, that means you have got a 1; if you find there is again a 0, that means again you have to start from the beginning. So, in state B, if you see that the next bit is 0, you again go back to state A, right. But if you find that there is another 1, which means now I have seen 2 1s, 1 1, I go to state C; so state C indicates that I have seen 2 1s.

Similarly, while in state C, if there is another 0 coming, you go back to state A; you have to search from the beginning again. But while in state C, if you get another 1, which means 1 1 and 1, then you go to state D with an output of 1. Because now we have found 3 consecutive 1s through B, C and D; D means at least 3 consecutive 1s have been found, and here you see the output is also becoming 1.

Now, while you are in state D, which means already 3 1s are there, if there are more 1s coming you remain in state D (this edge), and the output continues to be 1. But whenever you find that there is a 0 coming, that means you again have to start from the beginning; you go back to A, right. This is how you capture the specification of the problem in the form of a state transition diagram, where the nodes indicate the states and the edges represent the transitions.

Now the same information you can capture in the form of a truth-table kind of a structure; this is called a state table. Well, there is another, more concise way of representing this, but I am first showing the simpler one. Let us assume that there is a reset input; so, consider when the reset input is 1. See, here we are talking about the present state, the next state, the applied input and the output.

So, whenever reset is activated, irrespective of what the present state is, the next state will be A; that means you will be going to A and the output is 0. When the reset is 0, which is the normal mode of operation, the inputs are coming one by one. Now you see this table; when you are in A and the input is 0, according to this diagram you remain in A; you remain in A with output 0, written 0 slash 0.

Well, if you are in A but the input is 1, that means you are taking this other path. So, the input is 1 and the output is 0, and you go to state B; so state B with output 0. Then from state B with input 0 you come back, this one; from B with input 1 you go to C, this one. In C, similarly, if it is 1 you go to D; in D, if it is 0 you go back to A, and if in D it is 1, you remain in D, right.

So, these 2 are equivalent actually, ok; you can either show it in a diagram form or in a table form. Now what I have said is that this table can be expressed more concisely, or in a more compact way, in the form I am showing here.

(Refer Slide Time: 16:33)

So, the state transition diagram I will be showing side by side for convenience, and on the right-hand side, whatever I am showing is the concise state table, the more concise form of the state table. So, let us see how we have done it; you see there are broadly 2 columns. The first column specifies the present state of the system, PS; it can be A, B, C or D. And the next part of it specifies 2 things: the next state and Z; the next state is where you are going, Z is the output value that is being generated.

Now as you can see there are 4 columns in the second part; there is something called RX. What is RX? R represents the reset input; X represents the actual bit that is coming, the actual input. So, whenever R is 1, that means you are resetting; you can see the third and fourth columns where R is 1.

So, your next state is becoming A because it is reset to A, and the output is also 0. So, you are resetting the system here. But when R is 0, depending on the value of X, you are taking a decision; here X is 0, ok. So, when X is 0, you see from A if the input is 0, you remain in A with output 0; you write A comma 0, this is the next state and output. And if it is 1, you go to B with output 0.

From B if it is 0 you go to A with output 0; from B if it is 1, you come to C with output 0; in this way you go on. From C you go either to A or to D, with output 1 in the latter case; from D again you either go back to A or stay in D with output 1, ok. So, this is a way to express the state table in a concise way; in all our examples subsequently, we shall be using this concise notation of the state table to represent the functional behavior.
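To make the diagram-table equivalence concrete, the concise state table can be written down and stepped through in code; this is only an illustrative Python sketch (the run_fsm helper name is mine, and the reset input is omitted for brevity):

```python
# Concise state table for the "3 or more consecutive 1s" detector:
# state_table[present_state][input_bit] = (next_state, output)
state_table = {
    'A': {0: ('A', 0), 1: ('B', 0)},   # A: no 1s seen yet
    'B': {0: ('A', 0), 1: ('C', 0)},   # B: one 1 seen
    'C': {0: ('A', 0), 1: ('D', 1)},   # C: two 1s seen; a third 1 outputs 1
    'D': {0: ('A', 0), 1: ('D', 1)},   # D: three or more 1s seen
}

def run_fsm(bits, start='A'):
    """Apply one input bit per clock and collect the outputs."""
    state, outputs = start, []
    for x in bits:
        state, z = state_table[state][x]
        outputs.append(z)
    return outputs

bits = [0, 1, 1, 0, 1, 1, 1, 1, 0, 1, 0, 1, 1, 1]
print(run_fsm(bits))
# → [0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 1]
```

Running it on the bit stream from the example reproduces exactly the output trace described above: the output goes to 1 at the third and fourth consecutive 1s, and again at the end.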

(Refer Slide Time: 19:06)

Now, let us look into some formal definitions of finite state machines; there are broadly 2 kinds of finite state machines, called the Mealy machine and the Moore machine. But before coming to the distinction between these 2 kinds of machines, let us try to formally, mathematically define what a finite state machine is.

Well, from a designer's point of view it is a circuit; it has some memory, some state. Inputs are coming, outputs are generated, states are changing; this is the intuitive specification, but mathematically, when you capture this information, how should you specify it? It is specified in this way. A deterministic finite state machine is deterministic because whenever you are applying some inputs, the output is deterministically defined; there is no ambiguity.

It is mathematically defined as a 6-tuple (Σ, Γ, S, s0, Δ, Ω). What do these 6 things mean? This Σ means the set of input combinations. Suppose this circuit has 3 input lines; there will be 8 input combinations, so Σ will denote the set of all possible 8 input combinations.

So, sometimes we call it the input alphabet: the input combinations that are possible, ok. Similarly, Γ is the set of output combinations in the same way; let us say if there are 4 output lines, there can be sixteen possible output combinations, 2^4; that is Γ, the set of all possible output combinations. Capital S denotes the set of states; well, once you have drawn the finite state machine, the state transition diagram, you have some idea how many states there are.

So, S denotes the set of states; like in the example that I have given, there are 4 states A, B, C, D, ok. And small s0, which is a member of S, is the initial state, because you have to mention from which state you have to start, alright. And of course, this is not enough; you also have to specify something called Δ, which is the state transition function. That means it will tell you that if you are in a state si and some particular input comes, which state sj you will be going to; that is defined by this function Δ.

And there is another function called Ω; this is called the output function. It says that when you are in a certain state and some input comes, what should the output be. So, if we specify all these 6 things, your specification of the FSM is complete, right. So, Δ can be mathematically expressed like this: given a state and given an input from the input alphabet, it determines the next state.

Mathematically, in function notation, you specify it like this: the Present State, in short PS, together with the present input from the alphabet determines the next state NS, ok. Now, let us distinguish between Mealy and Moore; you see, Mealy and Moore are identical with respect to Δ; it is Ω that distinguishes these 2 types. For a Mealy machine, the output depends on the present state and also on the present inputs; that means the output in Γ depends on S as well as the input alphabet.

But in a Moore machine, which you can say is a special case of a Mealy machine, the output depends only on the present state; it does not depend on the input. So, the output is determined only by the present state. There are many systems where the output that is being generated is not determined by what input you are applying, but rather by which state you are in. A classic example is a traffic light controller, where there are some states which we go through, and depending on which state you are in, either the red or the yellow or the green light glows.

So, it is not that you are applying some input and depending on that you decide which light to glow; not exactly like that, ok. So, Mealy and Moore machines are two different types of machines; these are the formal definitions.
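The Ω difference between the two types can be sketched as two run loops in Python; this is only an illustrative sketch, and the helper names run_mealy and run_moore are my own:

```python
def run_mealy(delta, omega, s0, inputs):
    """Mealy: the output function omega(s, x) sees the present input."""
    s, outs = s0, []
    for x in inputs:
        outs.append(omega(s, x))   # output from state AND input
        s = delta(s, x)
    return outs

def run_moore(delta, omega, s0, inputs):
    """Moore: the output function omega(s) depends on the state only."""
    s, outs = s0, []
    for x in inputs:
        s = delta(s, x)
        outs.append(omega(s))      # output from state alone
    return outs

# A tiny Mealy example: output 1 exactly when the previous and the
# current bits are both 1 (the state remembers the previous bit).
zs = run_mealy(delta=lambda s, x: x, omega=lambda s, x: s & x,
               s0=0, inputs=[1, 1, 0, 1, 1])
print(zs)  # → [0, 1, 0, 0, 1]
```

Note how the only structural difference is whether the output is computed from (state, input) or from the state alone.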

(Refer Slide Time: 24:40)

So, pictorially speaking, the Moore machine and the Mealy machine can be depicted like this. For a Mealy machine, the primary inputs are coming like this; PI is the Primary Inputs, PS is the Present State, NS is the Next State. Now you see, in any circuit realization there will be a set of flip-flops which will be storing the present state; the outputs of the flip-flops define the PS, ok.

And there will be some combinational circuit here which will take the primary inputs and also this present state, and it will determine the next state. Well, one thing I have not shown here is the clock; the clock is also there. So, whenever the clock comes, the next state will get stored in the flip-flops and it will now become the present state, ok.

And there is another combinational circuit called the output logic, which again takes the PI and the present state as inputs and generates the primary outputs PO. For a Moore machine, the first part is the same, but the output logic takes only one input, the PS; it does not take PI as the other input. The output depends only on the present state; this is how you distinguish between these 2 types of machines.

(Refer Slide Time: 26:12)

Now, when we talk about a synchronous sequential machine: a synchronous sequential machine is a sequential machine which is synchronized by a clock. So, whenever the clock signal comes, the active clock edge comes, the states keep on changing. The flip-flops are triggered by some leading edge or falling edge of the clock; only then will some changes take place.

So, pictorially speaking, a synchronous sequential machine can be modeled like this: there are n input lines, let us say the input variables X1 to Xn; there are m output lines, Z1 to Zm; and there are state variables, let us say there are memory elements or flip-flops, these are flip-flops.

There are k flip-flops; so, I am saying that there are k state variables, which are y1 to yk, small y1 to small yk; these are called state variables. Now, according to our definition of FSM, the input alphabet consists of all possible input combinations; because there are n inputs, there will be 2^n such combinations. Similarly for the output alphabet: there are m outputs, so all 2^m combinations. And for the states, there are k state variables, so there will be 2^k states.

So, as this diagram shows, this small y1 to yk is my present state, and capital Y1 to Yk, which is fed to the input of the memory elements, will be my next state. The clock is not shown; when the clock comes, the next state will become my present state, it is copied to the present state, ok.

(Refer Slide Time: 28:17)

So, now let me just tell you about the steps that are required when you synthesize a synchronous sequential circuit. These steps are as follows. First you start with the description of the problem; for any problem you have to start with the description. You will be forming the state diagram or the state table; this is the first step, and this is one way to formally capture the specification of the system.

Then there is a second step which we shall not be considering now; this we will discuss later, because this table or the diagram may contain some redundant states; that means you can remove them or reduce the number of states in some way. So, we shall learn later how to reduce a given state table or state transition diagram, in other words how to minimize it; we shall see it later.

So, this is an optional step; we skip it for the time being. Then we do something called state assignment. What is state assignment? In the example that we have seen so far we named the states A, B, C and D, but ultimately these states will be represented by flip-flops. So, the 4 states can be represented in binary, let us say using 0 0, 0 1, 1 0, 1 1; that is called state assignment. Each state is assigned a binary code, and you also decide what kind of memory elements you will be choosing: J-K, D, T or S-R, ok.

And of course, the final circuit complexity will depend on this selection, and we shall see how to do it. From this state assignment and selection of memory elements you first determine something called the transition and output tables, and then the excitation table. And from the excitation table you use some minimization technique; you start with something called the excitation and output functions, minimize them and finally obtain the circuit diagram.

Now, we shall be looking at a number of examples to illustrate this process, so that it becomes quite clear to you, right. But these are the overall steps for any kind of synthesis of synchronous sequential circuits that you are trying to attempt. So, with this we come to the end of this lecture, where we have talked about finite state machines, FSMs, and the different types of FSMs.

And finally, we talked about the steps that are required to synthesize a circuit starting from a
FSM specification. In the next few lectures, we shall be looking at some examples and see
how this synthesis is actually carried out.

Thank you.

Switching Circuits and Logic Design
Prof. Indranil Sengupta
Department of Computer Science and Engineering
Indian Institute of Technology Kharagpur

Lecture - 37
Synthesis of Synchronous Sequential Circuits (Part II)

So, we continue with our discussion on the Synthesis of Synchronous Sequential Circuits, which we started in the last lecture; this is the second part, Part 2. Now, before we start, recall that in the last lecture we mentioned the different steps that you need to go through to synthesize or design a synchronous sequential circuit.

The first step, the most important one, is to capture the behavior from some specification. It can be a textual specification, it can be any kind of specification; you may be having some idea in your head that you are trying to capture, or there is some text which is written, and from there you are trying to capture the behavior.

So, you are trying to capture the behavior and you are trying to formalize it by expressing it in the form of a state table or a state transition diagram. Now, in this lecture we shall be spending some time specifically on this part of the entire flow: how to construct state transition diagrams and state tables starting from specifications.

(Refer Slide Time: 01:40)

So, we concentrate on state transition diagrams and state tables in this lecture, going into a little more detail. Although we have mentioned this already, I am repeating: a state transition diagram, when you see it, is actually a graph; a graph contains some vertices and edges. It is a graph which specifies the different states of the Finite State Machine, the different conditions or inputs under which the state changes will occur, and also the output values that will be generated.

So, this state transition diagram captures the whole behavior of the FSM. Now, notation-wise, the states are denoted as circles and they are typically labeled with unique symbols, so that you can distinguish one state from the other. In the earlier example you saw in the last lecture, we labeled the states as A, B, C, D, fine.

Now, besides states there are also transitions. For example, here are two states labeled A and B; there can be a transition between A and B, which is denoted by a directed edge, an edge with an arrow. These are called transitions, and they are represented as directed arrows between a pair of states. Well, these arrows can be self loops also, from a state back to itself. Each transition is labeled by 2 numbers α and β, where α denotes an input combination and β denotes an output combination.

So, if I write here α, β, it indicates that if you are in state A and input α has come, then you go to state B and generate the output β, right; this is what the interpretation is. Now, as we have said, we shall again see some examples. The state table is an alternate depiction of the state transition diagram; it is not something separate, they represent the same information. Instead of showing it in diagrammatic form, you show it in the form of a table; that is the only difference, ok. So, in this table, just like in the diagram, for every value of the present state the next state is specified, and also the output values for each input combination are specified, right. These we will see; we will take a number of examples.

(Refer Slide Time: 04:44)

So, for the first example let us take this; this is a very simple example, and it goes like this. This is a very simplified form of a traffic light controller; it is nowhere near a real traffic light controller, but something similar. The only similarity is that there are 3 lamps: red, yellow and green; nothing else is similar to a traffic light controller, ok. Here the problem specification is like this: there are 3 lamps red, green and yellow that glow cyclically; that means first red, then green, then yellow and again back to red, with a fixed time interval, let us say 1 second. Now, this time interval we are not really bothered about, because we will assume that our circuit is like this: there is a clock, and if I need a 1 second period then I will be applying a 1 hertz clock.

So, changes will happen every 1 second; there are no other inputs in the circuit, and the outputs are red, green and yellow, ok. Now the first thing is how to represent this problem in the form of a finite state machine. First, let us define the states: there will be 3 states corresponding to the 3 lamps that can glow, red, yellow and green; the arrows will indicate the state transitions, red to green, green to yellow and yellow to red. The other thing we are assuming is that there are 3 outputs red, green, yellow, let us assume in this order.

So, when you are in the red state, the red line will be 1 and the other two will be 0, which means red is glowing; if you want to glow green the outputs will be 0 1 0, and if you want to glow yellow they will be 0 0 1. So, now you can understand; you see there are no separate inputs, that is why the input is shown as empty; ϕ means empty.

So, whenever you are in the red state and a clock comes, you go to green; the input is ϕ and the output is 0 1 0, which means green is glowing, ok. When you are in green and a clock comes, you go to yellow, 0 0 1, which means yellow is glowing; and when you are in yellow you go to red, 1 0 0 means red; and this goes on cyclically, right. And you see, because there are no inputs, the states directly determine what the outputs will be. So, this is like a Moore machine: the output depends only on the state; it really does not depend on any applied input.

(Refer Slide Time: 08:01)

So, talking of the state table: this is a very simple problem. The state transition diagram is shown on the left, and on the right side I have shown the equivalent state table. Broadly there are 2 columns, as I had shown in the earlier example: one is for the present state, the other is for the next state and outputs. Because there are no inputs, there is a single column in the second part. If the present state is red then the next state will be green and the output will be 0 1 0. If the present state is green, the next state will be yellow with output 0 0 1; if it is yellow, the next state is red with output 1 0 0; so it is exactly what the state transition diagram specifies, right. So, let us move on to slightly more complex problems to see how the state transition diagram and the state table can be constructed.
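The red, green, yellow cycle can itself be captured in a few lines; an illustrative Python sketch of the Moore machine, assuming the (R, G, Y) output order used above (the run_lights helper is mine):

```python
# Moore machine for the simplified traffic light: no input, and the
# output (which lamp glows) depends only on the present state.
next_state = {'red': 'green', 'green': 'yellow', 'yellow': 'red'}
output     = {'red': (1, 0, 0), 'green': (0, 1, 0), 'yellow': (0, 0, 1)}

def run_lights(n_clocks, state='red'):
    outs = []
    for _ in range(n_clocks):       # one state change per clock tick
        state = next_state[state]
        outs.append(output[state])
    return outs

print(run_lights(4))
# → [(0, 1, 0), (0, 0, 1), (1, 0, 0), (0, 1, 0)]
```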

(Refer Slide Time: 09:04)

Let us see. This is a slightly more difficult problem: that of a serial parity detector. So, what is this problem? First look at the circuit; the circuit has an input X and an output Z.

So, X is applied with a serial bit pattern, let us say 0 0 1 1 1 0 1 0 0 1, and the output will also be continuously generating a bit pattern. So, what will it indicate? It will indicate the parity of the input bit stream. Let us just write down this bit stream, one bit after the other; let us say this is 0 0 1 1 1 0 1 0 0 1.

Now, parity refers to the number of 1s, whether it is odd or even. If it is odd I will be outputting 1; if it is even I will be outputting 0; let us say that is our convention. So, we will be looking at the bits that have been seen so far, and whatever the parity is, the output is set accordingly. Let us start; this is my input X. So, what will be my Z? When a 1 has come, it is now odd parity.

Then again a 1 has come, it has become even; again a 1 has come, it has become odd; a 0 has come, it remains odd; a 1 has come, it has become even again; a 0, again even; again even; a 1, again odd; so this will be the output, right. So, in this problem, X is the single input, Z is the output, and of course there is a clock. If you think of the state, what are the things you need to remember? You only need to remember one thing: what was the parity till now. Was it odd or was it even? This you can represent by a single bit, with 2 states; let us name the 2 states even and odd.

So, the corresponding state transition diagram will look like this; let us say even is the initial state. If a 0 comes you remain in the even state, and the output is 0. But if a 1 comes you go to the odd state with an output of 1; and while in the odd state, if a 0 comes you remain odd with output 1, and if a 1 comes you go back to even with output 0, right.

(Refer Slide Time: 12:12)

So, from here you can similarly construct the state table. Here you see the second part is a little more complex; the first part as usual is the present state, the second part is the next state and the output. For the states, there are 2 states, even and odd.

For the next state and output, there will be a number of columns depending on all possible input combinations; because in this circuit there is a single input X, and X can be either 0 or 1, there are 2 columns in the second part: one corresponding to X equal to 0, the other corresponding to X equal to 1. Whatever is listed here is actually the NS and Z, the next state and the output. So, if you are in even and X is 0, then you remain in even and the output is 0; that means this edge. If the input is 1, you go to odd with output 1; that means this edge. Like this, for all the edges you can see the entries, right; this is actually quite easy to construct.
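The even/odd table just described can be simulated directly; an illustrative sketch using the bit stream from the example (the serial_parity helper name is mine):

```python
# Serial parity detector: two-state Mealy machine.
# table[present_state][input_bit] = (next_state, output)
table = {
    'even': {0: ('even', 0), 1: ('odd', 1)},
    'odd':  {0: ('odd', 1),  1: ('even', 0)},
}

def serial_parity(bits):
    state, zs = 'even', []
    for x in bits:                  # one bit per clock
        state, z = table[state][x]
        zs.append(z)
    return zs

print(serial_parity([0, 0, 1, 1, 1, 0, 1, 0, 0, 1]))
# → [0, 0, 1, 0, 1, 1, 0, 0, 0, 1]
```

The printed trace matches the odd/even sequence walked through above.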

(Refer Slide Time: 13:28)

Now, let us look at a slightly more difficult problem: that of a sequence detector. So, what is a sequence detector? Well, it is something similar to the previous problem in the sense that there is a single bit stream X as input and a bit stream Z as output; of course there is a clock, which I am not showing. Now on X I am applying a bit stream, and whenever the pattern 0 1 1 0 appears, the output will be 1; otherwise it is 0.

So, let us take an example. Suppose my input is like this; you see in which places this 0 1 1 0 is appearing. One place is here, one place is here, and there is an overlapping pattern here. I am assuming that overlapping patterns should also be considered: before one 0 1 1 0 finishes, the second pattern starts.

So, in the output there will be a 1 out here, a 1 here and a 1 here; all the rest of the bits will be 0. This is the circuit that we want to design; this will be my X and this will be my Z, right. This is a Mealy machine, because there is an input and the output will of course depend on both the input and the state, right; so this is not a Moore machine, this is an example of a Mealy machine.

(Refer Slide Time: 15:31)

So, here is another example: you see 0 1 1 0, there is a 1 generated here; 0 1 1 0, a 1 generated here; 0 1 1 0, a 1 generated, but here there is no overlap, fine. Now let us see how we can construct the state transition diagram for such a circuit. Before I show it, let us try to work out how it should be.

(Refer Slide Time: 16:13)

This is the final solution that is shown; let us try to understand why we have gone for this, because we want to detect the pattern 0 1 1 0, right. Well, we do not know in advance how many states will be required; as we construct the diagram, more states get added. The first thing is that we start with an initial state, let us say S0; S0 is my initial state. So, I look for the first 0; whenever I get the first 0, I go to state S1.

So, S1 indicates that I have seen the first 0; this is my state S1. While in S1, if I see the next 1, I go to state S2; similarly, if I see another 1, I go to state S3, ok. Now, if I see the final 0, that means I have detected the pattern. So, from S3, if I get a 0, I will naturally have to output a 1, because I have got the pattern. Now the question is: where should I move back? Should I move back to S0 or somewhere else?

Now, the point is that because we are allowing overlapping patterns, let us consider an overlapping pattern like this: this was 0 1 1 0, and there is an overlapping 0 1 1 0. Being in S3 means you are at this point, right. Now a 0 has come; this 0 is also the first 0 of the next pattern, so instead of moving back to S0 we actually move back to S1, because you also need to check for the next overlapping pattern, and it may start here.

So, you are actually in S1 for the next pattern. For the other transitions, I think it is quite simple: in S0, if you get a 1, since you are trying to detect 0 1 1 0 and you get a 1 in the beginning, you remain in S0.

(Refer Slide Time: 18:53)

Now in S1 you have detected a 0, and if you get another 0, no problem; you are still at the first 0, so you remain in S1. But in S2, that means you have got a 0 1, and if you again get a 0, this can be the start of a new 0 1 1 0, so you have to go back to S1. Similarly from S3, on a 0, you go to S1; and in S3 if you get a 1, which means the sequence is 0 1 1 1, it is not the pattern, so you have to go back to S0 and start from the beginning. So, you see, drawing this diagram is not trivial; you may have to think a little bit, consider all possibilities, and then come up with the diagram and the edges, fine. Now, once you have drawn the edges, the rest is fairly straightforward.

(Refer Slide Time: 19:46)

The state table I have not shown there; just one thing let me tell you about it before we move on to another example. For this sequence detector, if you want to construct the state table, there will be 2 parts to it: the first part will be the present state, with 4 states S0, S1, S2, S3; and in the next-state part, because there is only 1 input, there will be one column corresponding to X equal to 0 and one corresponding to X equal to 1, ok. Just like the serial parity detector, the same kind of table; there will be 4 rows here, right.
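Putting the four states and their transitions together, the overlapping 0 1 1 0 detector can be simulated as a sketch (the table encoding and the detect_0110 helper are mine, not from the lecture):

```python
# Overlapping "0110" detector (Mealy):
# S0 = nothing matched, S1 = "0" seen, S2 = "01" seen, S3 = "011" seen.
table = {
    'S0': {0: ('S1', 0), 1: ('S0', 0)},
    'S1': {0: ('S1', 0), 1: ('S2', 0)},
    'S2': {0: ('S1', 0), 1: ('S3', 0)},
    'S3': {0: ('S1', 1), 1: ('S0', 0)},  # final 0: output 1, and the same
}                                        # 0 can start the next pattern

def detect_0110(bits):
    state, zs = 'S0', []
    for x in bits:
        state, z = table[state][x]
        zs.append(z)
    return zs

# Two overlapping occurrences (indices 0-3 and 3-6) plus a third one:
print(detect_0110([0, 1, 1, 0, 1, 1, 0, 0, 1, 1, 0]))
# → [0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 1]
```

Note how the S3-on-0 transition going to S1 (rather than S0) is what makes the overlapping match at index 3 work.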

Now we consider an example which is a serial adder. I took this example earlier when we were discussing flip-flops and some simple applications of flip-flops. Now let us look into the design of the serial adder in a little more formal way. As I mentioned earlier, if you recall, a serial adder is a circuit which has 2 serial inputs X1 and X2, where the bits come in serially, and the sum Z is generated serially, in synchronism with a clock.

So, consider adding 2 numbers, say 0 1 0 1 1 and 0 0 1 0 1, bit by bit starting from the least significant bit: at each position a sum bit is generated along with a carry that is passed on to the next position, and initially the carry is 0. So you see, for a serial adder, if you think about what the states should be: for addition, the only thing that I have to remember is the carry from the previous stage. So, that will be my state, the carry; there will be one state variable, so there will be 2 values, 0 and 1.

So, in this diagram I am representing them as 2 states A and B, where A represents carry 0 and B represents carry 1. Now let us see how this state diagram works. When you are in state A and you have got the input 0 0, with no carry, what will happen? The sum Z will be 0 and you remain in A, meaning the carry remains 0. If you apply 0 1, the sum is 1 with no carry, so you remain in A; 1 0 is the same, the sum is 1 with no carry. But when you get the inputs 1 1, the sum will be 0 and there will be a carry of 1; so now the state changes from A to B, where B indicates carry 1. While you are in state B, things are similar.

So, if the input is 0 1 with a carry of 1, then the sum will be 0 and the carry will remain 1; for 1 0 also the sum will be 0 with carry 1; and if it is 1 1, the sum will be 1 and the carry will also be 1. Only when the inputs are 0 0 will there be no outgoing carry, so you move back to state A, but the sum will be 1, ok. So, just think through how the process of addition goes on, and you can get the behavior of this serial adder in terms of the state transition diagram, fine.

(Refer Slide Time: 24:25)

Now, from this state transition diagram you can construct the state table; it is the same thing. Here you see there are 2 inputs, X1 and X2. So, in this case, in the second part there will be 4 columns, one for each input combination 0 0, 0 1, 1 0 and 1 1; the rest is the same, exactly what the diagram shows. So, you are in state A, the input 0 0 has come, you remain in state A and the output is 0, ok; like that, the same information you capture in the table.
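The same state table can be exercised in code; a sketch, with bit pairs applied LSB first and the states A (carry 0) and B (carry 1) as above (the serial_add helper is mine):

```python
# Serial adder: table[state][(x1, x2)] = (next_state, sum_bit)
table = {
    'A': {(0, 0): ('A', 0), (0, 1): ('A', 1),
          (1, 0): ('A', 1), (1, 1): ('B', 0)},   # 1+1 generates a carry
    'B': {(0, 0): ('A', 1), (0, 1): ('B', 0),
          (1, 0): ('B', 0), (1, 1): ('B', 1)},   # carry-in of 1
}

def serial_add(x1_bits, x2_bits):
    """Add two equal-length bit streams, least significant bit first."""
    state, sum_bits = 'A', []
    for pair in zip(x1_bits, x2_bits):
        state, z = table[state][pair]
        sum_bits.append(z)
    return sum_bits

# 01011 (11) + 00101 (5) = 10000 (16), streamed LSB first:
print(serial_add([1, 1, 0, 1, 0], [1, 0, 1, 0, 0]))
# → [0, 0, 0, 0, 1]
```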

(Refer Slide Time: 25:12)

Let us take another example; this is an example of a counter. Well, we shall be looking into the design of counters in more detail later; we shall see a lot of different kinds of counters. But let us look into one example right here. This is a 3-bit binary counter. So, what does this counter do? It is supposed to count in the sequence 0 0 0, 0 0 1, 0 1 0, then 3, 4, 5, 6, 7, and then back to 0. Now, the counter is a circuit like this: it does not have any external input; there is only a clock, and the outputs are generated in synchronism with the clock; the count values 0 0 0, 0 0 1 and so on will be generated sequentially.

So, there is no separate input, and the state transition diagram is very simple; there will be 8 states, which I am calling S0 to S7. Just for convenience, I am using the decimal equivalent of each binary combination to name and encode the states; for example, S6 means 1 1 0, that is, 6.

So, when you are in S0 and a clock comes (there is no input, so the transition is labeled ϕ), you move to state S1 and your output is 0 0 1; that means you are now at count 1. From S1 you move to S2 with output 0 1 0; from S2 to S3 with 0 1 1; similarly S3 to S4, S4 to S5, S5 to S6, S6 to S7 and S7 back to S0 with output 0 0 0.

So, the state table is also quite similar. These are the present states, there are 8 of them, and since there is no input there is a single column here. You show what the next state will be, S0 will go to S1, S1 will go to S2, and what the corresponding outputs will be; you are depicting the same information.
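The counter's behavior can be sketched in a few lines (function names are mine): there is no input, each clock pulse advances the state, and the output is just the state code.

```python
# 3-bit binary counter FSM: no external input, each clock pulse
# moves S_i to S_(i+1 mod 8), and the output is the 3-bit count.

def next_state(s):
    return (s + 1) % 8             # S7 wraps around to S0

def output(s):
    return format(s, '03b')        # the output equals the state code

state, seen = 0, []
for _ in range(8):                 # eight clock pulses walk the full cycle
    state = next_state(state)
    seen.append(output(state))
print(seen)   # ['001', '010', '011', '100', '101', '110', '111', '000']
```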

(Refer Slide Time: 27:25)

So, let us take one last example; this is a slightly more complex kind of counter. It is also a binary counter, but it does not count in the usual binary sequence. What it says is that this is a

545
4-bit counter that counts in the sequence 0 0 0 0, then 0 0 1 1, then 0 1 1 0, then 1 1 0 0, then 1 0 0 1, then 1 0 1 0. So there are 6 states, and then it goes back again to 0. You see, I have again used the decimal equivalents of the states: S0, S3, S6, S12, S9, S10 and then back to S0; this is my regular sequence of counting, and the state transitions are labeled in exactly the same way as in the previous example.

So, S0 to S3 is labeled by 3, that is 0 0 1 1 (there are no inputs), S3 to S6 is labeled by 6, that is 0 1 1 0, and so on, and S10 goes back to S0; this label is of course missing in the figure, it should be ϕ, 0 0 0 0. The specification also mentions that whenever you switch on the power, if the counter starts in any of the other states, then at the next clock it should go to 0 0 0 0.

So, if it is in any of the other 10 states out here (I have shown only 2 of them, with dots indicating the rest), there are arrows from all of them going to S0. So, if you have a specification like this, from there you can also construct the state table in the same way, and from the state table you will be moving into the next step.
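The self-correcting behavior just described can be sketched as follows (names are mine): the legal cycle is 0, 3, 6, 12, 9, 10, and every other power-on state falls back to 0 at the next clock.

```python
# Non-binary 4-bit counter: legal cycle 0 -> 3 -> 6 -> 12 -> 9 -> 10 -> 0;
# each of the remaining 10 states recovers to 0000 at the next clock.

SEQUENCE = [0, 3, 6, 12, 9, 10]

def next_state(s):
    if s in SEQUENCE:
        return SEQUENCE[(SEQUENCE.index(s) + 1) % len(SEQUENCE)]
    return 0                       # illegal power-on state: recover to 0000

# From any of the 16 possible power-on states, one clock pulse is
# enough to land inside the legal counting cycle.
assert all(next_state(s) in SEQUENCE for s in range(16))
print(next_state(10))  # -> 0, the wrap-around transition labeled phi
```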

So, here we have looked at a number of examples of constructing the state table and the state diagram. With this we come to the end of this lecture. From the next lecture onward we shall look at some examples and try to go through the entire flow of synthesizing a synchronous sequential circuit. So far we have only seen how, starting from the behavior, we can construct the state transition diagram and the state table. The remaining steps we shall be learning in the next few lectures.

Thank you.

546
Switching Circuits and Logic Design
Prof. Indranil Sengupta
Department of Computer Science and Engineering
Indian Institute of Technology Kharagpur

Lecture - 38
Synthesis of Synchronous Sequential Circuits (Part III)

So, in the last lecture we were talking about the method of synthesizing a synchronous sequential circuit, and you have seen the first step of it: starting from the specification, how to construct the state transition diagram and the state table. Today we shall be looking at a complete worked-out example: starting from the state table or state transition diagram, how we can go through the other steps and arrive at our final circuit diagram. So, this is the third part of our lecture on Synthesis of Synchronous Sequential Circuits.

(Refer Slide Time: 00:56)

547
So, let us recapitulate what we have said about the synthesis of FSMs. In the last lecture we have seen, through a number of examples, how we can construct the state transition diagram and also the state table starting from the problem description.

But now the question arises: from the state table, what next? Once you have constructed the state table, there are a few steps left to be done; these I shall be illustrating with the help of examples from this lecture onward. The first step that we shall go through now is something called state assignment. State assignment means that you assign a unique binary code to each of the states.

Now, the way you do state assignment can differ. For example, if you have 3 states, let us say A, B, C, then you can just assign the binary codes 0 0, 0 1 and 1 0; or you can use something called one-hot encoding, where exactly one of the bits in the state representation is 1, but of course there you require 3 state variables. There can be many other alternatives, but in the examples that we shall be showing we shall use the most compact representation: for 3 states, we need 2 bits, that is, 2 state variables.
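The two encoding styles can be contrasted with a small hypothetical helper (both function names are mine, not from the lecture):

```python
# Compact binary state assignment versus one-hot assignment.

from math import ceil, log2

def binary_codes(states):
    width = max(1, ceil(log2(len(states))))        # bits needed for n states
    return {s: format(i, '0%db' % width) for i, s in enumerate(states)}

def one_hot_codes(states):
    n = len(states)                                # one state variable per state
    return {s: ''.join('1' if i == j else '0' for j in range(n))
            for i, s in enumerate(states)}

print(binary_codes(['A', 'B', 'C']))   # {'A': '00', 'B': '01', 'C': '10'}
print(one_hot_codes(['A', 'B', 'C']))  # {'A': '100', 'B': '010', 'C': '001'}
```

For 3 states the compact assignment needs 2 state variables while one-hot needs 3, which is exactly the trade-off mentioned above.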

Now, after state assignment we will be constructing something called the transition and output table. This can be constructed directly from the state table, as we shall see: from the state table, after state assignment, we shall be going to the transition and output table. Once the transition and output table is done, we will be selecting the type of memory element, that is, what

548
kind of flip flop we are choosing, and accordingly we will be constructing something called the excitation table.

Now, recall that when we discussed the various kinds of flip flops, we talked about the excitation requirements of each flip flop; for example, for a T flip flop, if you want to go from state 0 to state 1, you have to apply T equal to 1. So, for every pair of present state and next state, you know exactly what you have to apply to the inputs of the flip flop.

So, accordingly you create or construct something called the excitation table, and from that table you can obtain some functions: one kind is called the excitation functions, the other the output functions. Once you get them you can minimize them, and once you have minimized them you can realize them using gates or using any other modules as you feel like. Now let us illustrate these steps through some examples.

So, the example that we take in this lecture is that of the serial adder, which we have already discussed in the last lecture. Whatever we discussed there is shown in the slide. We said that this is our functional depiction of the serial adder: there are 2 serial inputs X1 and X2, this is the serial output Z, and of course there is a clock. We also have the state transition diagram and the state table; for the synthesis steps we shall be using the state table representation.

So, we are not looking at the state diagram right now; we shall be looking at the state table. Let us recapitulate it once. In the state table we have the present state specified in the first column, and the next state and the output specified in the second part for all possible values of the inputs, because here we have 2 inputs X1 and X2, and there can be 4 different input combinations. Capital X denotes the combination X1 X2; it can be 0 0, 0 1, 1 0 or 1 1. So, the state table depicts the whole behavior. Now, let us see how we can proceed with the next steps.

549
(Refer Slide Time: 06:12)

So, as I said, the first step will be state assignment. Because in this example there are 2 states, we can use a single bit: let us say state A is represented by 0 and state B by 1. Remember, this is not unique; it is just one state assignment we can try out. You can have other state assignments also; you can have the reverse, 1 for A and 0 for B, as well.

Now, with this state assignment your state table becomes like this. You see, it is exactly the same as the state table; what we have done is replace A by 0 and, wherever B was there, replace it by 1. Everything else remains the same: here A and B were there and we replaced them by 0 and 1; here also A was there; here A and B; here B and B; the rest we have not touched.

550
(Refer Slide Time: 07:37)

Now just note one thing: in every entry of this state table we are specifying 2 things, the next state as well as the output, separated by a comma. For convenience, let us separate these out into 2 different tables; let the NS (next state) be in one table and Z in another. It is not anything different, but instead of writing them in a single table separated by commas, for convenience we separate them out, and this is called the transition and output table.

You see, the first part of the table is for NS and the second part is for the output Z; it is exactly the same information, entry by entry. And instead of the symbol X, I am writing the actual input values here: 0 0, 0 1, 1 0, 1 1.

551
(Refer Slide Time: 08:56)

These are actually the values of X, which means the pair X1 X2, because you recall that the 2 inputs of the circuit, the 2 serial inputs that are coming in, are X1 and X2.

(Refer Slide Time: 09:16)

Now, the next step is to select the memory elements. It is a choice; we do not know which one will be best. I can try out D, T, JK or SR flip flops. Let us start with D, because it is the simplest kind of flip flop. Now, our model of the FSM was like this: there are 2 inputs X1 and X2, a single output Z, the next state capital Y, and the present state small y.

552
Now, when we are using a D flip flop, what we are actually doing is this: instead of the pink box we will be using a D flip flop, with input D and output Q, where we apply some value at D and Q generates the present state small y. So, in the original table we had capital Y, but now we convert the table into something called the excitation and output table; this was the transition and output table which we had just seen in the last slide.

But from it, after we have selected the D flip flop, we come to the excitation and output table. The excitation and output table will look exactly the same, because in the case of a D flip flop whatever you apply at the input is what comes out at the next clock, just as in the model here; so there will be no change.

So, the entries shown here actually indicate the values of D. Instead of Y, you can read each entry as the D value you apply, and it is the same thing, because capital Y becomes small y whenever the clock comes; for a D flip flop, whatever you apply to D goes inside when the clock comes. So, you have the excitation and output table like this.

(Refer Slide Time: 11:58)

Now, look at the excitation and output table again (let me go back once); let us see what the inputs here are.

553
(Refer Slide Time: 12:18)

Here my inputs are on two sides: on one side I have the present state small y, and on the other side I have the inputs X1 and X2 as 0 0, 0 1, 1 0, 1 1. If you just interchange the last 2 columns, the order becomes 0 0, 0 1, 1 1, 1 0 and the table becomes just like a Karnaugh map; similarly, the output part will also be like a Karnaugh map. In the first part you are actually generating the function for the next state, capital Y, and in the second part you are generating Z. So, if you work this out, the Karnaugh maps will look like this.

So, if you try to minimize the first map, there will be 3 cubes, one like this, one like this and one like this; so there will be 3 terms. Here I have shown X1 X2 on this side and small y on this side. For the second part, the output function, the 1's fall in such a way that no cubes are possible; you cannot minimize it, because this is actually the exclusive-OR function. I have shown it in compact exclusive-OR form, but if you want to write it in expanded form it will be like this: the four 1's correspond to X1'X2'y, X1'X2y', X1X2'y' and X1X2y, and their OR is nothing but the EXOR of X1, X2 and y.

554
So, you see, once you have generated these functions Y and Z (with a D flip flop chosen here), you have as good as designed the circuit, because from these functions you can directly realize the outputs: Z will be nothing but an exclusive-OR (EXOR) gate over X1, X2 and y, and for generating capital Y you will use a circuit with 3 AND gates and an OR gate.

(Refer Slide Time: 15:44)

So, the inputs to the AND gates will be X1 X2, X1 y and X2 y. For generating the output Z you need an EXOR function; you can either use an EXOR gate or break it up into AND, OR and NOT gates. And for generating the next state capital Y you need 3 AND gates and an OR gate. This is actually how you do the synthesis; the basic idea is this.
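As a quick sanity check (a sketch, not part of the lecture), we can confirm that the minimized functions Y = X1 X2 + X1 y + X2 y and Z = X1 EXOR X2 EXOR y are exactly the carry and sum outputs of a one-bit full adder, for every input combination:

```python
# Verify the minimized serial-adder functions against binary addition:
# for each (x1, x2, y), next carry Y and sum Z must satisfy 2Y + Z = x1+x2+y.

from itertools import product

for x1, x2, y in product((0, 1), repeat=3):
    Y = (x1 & x2) | (x1 & y) | (x2 & y)   # D input of the carry flip flop
    Z = x1 ^ x2 ^ y                        # serial sum output
    assert 2 * Y + Z == x1 + x2 + y        # value check: carry*2 + sum
print("Y and Z match the full-adder truth table")
```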

555
(Refer Slide Time: 16:27)

Now, for the same design let us try an alternative: suppose instead of a D flip flop we use an SR flip flop. If we use an SR flip flop, what are we actually doing? Earlier we had the D flip flop with a single input and a single output; but now, if you consider an SR flip flop, there will be not 1 but 2 inputs, S and R, on one side, and the output will be y. The combinational circuit will now have to generate both S and R: in the FSM model there was a single next-state signal capital Y, but when you map this memory element to an SR flip flop, capital Y gets split into the 2 inputs S and R.

So, see how the table gets constructed when we map the transition and output table onto this. The second part is identical; there is no change, these are just the outputs. But for the next state, the values shown are now the values of S and R; look at it carefully. Your present state was 0 and the next state is 0; for an SR flip flop, recall the excitation requirements, that is, what you have to apply.

Suppose I want to go from 0 to 0: for an SR flip flop I can apply either 0 0 or 0 1, which means 0, don't care. But if I want to go from 0 to 1, there is only one way, 1 0. If I want to go from 1 to 0, I have to apply 0 1; and if I want to go from 1 to 1, 0 0 works (in fact S is a don't care there, since 1 0 also works). So now you see: here you are going from 0 to 0 three times, and all three entries are 0 X; the 0-to-1 transition becomes 1 0; then there is the 1-to-0 entry.

556
Actually, for 1 to 0 the entry should be 0 1, not the 1 0 shown; and the rest are 1-to-1 transitions, which become 0 0, 0 0, 0 0. So again you have constructed a specification for the maps, from which you can minimize; you can again minimize using Karnaugh maps from here.

(Refer Slide Time: 19:39)

Just one thing: you can rectify that error, because this 1 0 should actually be 0 1; so we make it correct. Then what do we do? The first part of the table, which contains the pairs, we split up into S and R: the first bit of each pair is used for S and the second for R.

So, make that correction; I was showing the table without it just to illustrate what is to be done. You see where the ones are, you put them in the Karnaugh map and minimize: you will get S, you will get R, and you will get Z, the output, and these are the functions you can implement. But to clear up the confusion that was there, let us go back for a second and construct the maps from the corrected table.

557
(Refer Slide Time: 20:53)

Let us construct the map: y is on this side and X1, X2 on the other, in the order 0 0, 0 1, 1 1, 1 0. First take S, the first bit of each pair. With the corrected table, S has a 1 in the cell y = 0, X1 X2 = 1 1; combining it with the don't cares available for the 1-to-1 transitions, S minimizes to X1 X2. And what will happen for R? For R there is a single 1, in the cell y = 1, X1 X2 = 0 0, which together with the don't cares of the 0-to-0 transitions gives R = X1'X2'. The output Z you handle in the same way with its own Karnaugh map; we have already seen that it is the EXOR function. So, in this way you can generate the functions, minimize them, and after minimization realize the circuit; this is the basic idea of how you do it.
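The SR excitation rule used above can be captured in a small helper (the function name is mine); None stands for a don't-care entry:

```python
# SR flip flop excitation requirements, with None meaning don't care.

def sr_excitation(ps, ns):
    """Return the (S, R) pair that takes an SR flip flop from ps to ns."""
    if ps == 0 and ns == 0:
        return (0, None)     # 0 -> 0 : S=0, R=d (0 0 or 0 1 both work)
    if ps == 0 and ns == 1:
        return (1, 0)        # 0 -> 1 : only S=1, R=0
    if ps == 1 and ns == 0:
        return (0, 1)        # 1 -> 0 : S=0, R=1 (the entry corrected above)
    return (None, 0)         # 1 -> 1 : S=d, R=0 (0 0 or 1 0 both work)

print(sr_excitation(1, 0))   # -> (0, 1), not (1, 0)
```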

So, here I have worked out just one example, that of the serial adder, and some more examples I shall be explaining in the next lecture. So, we come to the end of this lecture. If you recall, we have worked out the complete synthesis flow with the help of a simple example, that of a serial adder.

We had seen earlier, in our last lecture, how we can construct the state transition diagram and the state table; and from the state table we looked at the various steps that you need to carry out, starting from state assignment, the transition and output table and the excitation table, then minimization using

558
the Karnaugh maps and so on, to get the final circuit. So, we shall be working out some more examples in the next couple of lectures.

Thank you.

559
Switching Circuits and Logic Design
Prof. Indranil Sengupta
Department of Computer Science and Engineering
Indian Institute of Technology, Kharagpur

Lecture - 39
Synthesis of Synchronous Sequential Circuits (Part IV)

So, we continue with our discussion and take up another example of synthesis of a synchronous sequential circuit. If you recall, in our last lecture we talked about the serial adder, and in this lecture we shall be talking about a sequence detector. So let us look into this lecture, the fourth part of our discussion. The sequence detector is something which we have already seen earlier; just to recall:

(Refer Slide Time: 00:38)

In this sequence detector we want to detect the bit pattern 0110 in the input stream X, which is a serial bit stream that is coming in. The circuit generates a serial output stream Z: whenever this pattern appears, the output bit becomes 1, and at all other times the output is 0. We are allowing overlapping occurrences of 0110, as we mentioned when this was discussed earlier.

560
(Refer Slide Time: 01:31)

So, this was one example where the outputs were generated. Now, this was the state transition diagram that we worked out earlier; from that point let us try to proceed. Here we show the corresponding state table; the first step is to go to the state table. There are 4 states, S0 to S3, which I show here; and because there is a single input, there are 2 input columns, one corresponding to X equal to 0 and one to X equal to 1.

So, whenever you are in state S0 and the input X is 0, you go to S1 with output 0; if it is 1, you remain in S0 with output 0. Similarly, you can check the other combinations: say, from S3, if the input is 0 you go back to S1 with output 1, and if it is 1 you go back to S0 with output 0. So, this was the state table. Now let us do state assignment; this is the table after state assignment, and the convention that we have followed is that state S0 is encoded as 0 0, S1 as 0 1, S2 as 1 0 and S3 as 1 1.

So, you see, these 2 tables are the same, but wherever S0 was, that is replaced by 0 0; wherever S1 was, by 0 1; and so on. Then we have constructed the transition and output table, as we discussed: we separate out the states and the outputs, and instead of showing them in the same table separated by commas, we put them into 2 separate tables, one here and the other here. The first one is for the states and the second one is for the output.

The output, you see, was 0 0 0 1 in one column and all 0 in the other. So, this is how we have proceeded up to the transition and output table; it is fairly simple.

561
(Refer Slide Time: 04:27)

Now, from there, here is the transition and output table again. Suppose we now select the memory elements, and let us suppose we are using T flip flops. Let us see what we are trying to do here: because there are 4 states, our sequential circuit module will look like this, with 2 memory elements.

There will be the input X, the output Z, and the present state variables small y1 and small y2. Now, since we shall be using T flip flops, the flip flop inputs will be T1 and T2; instead of capital Y1 and capital Y2, the combinational circuit will now generate T1 and T2 directly. So, how will we generate T1 and T2? Again, think of the excitation requirement of a T flip flop: whenever I want to go from 0 to 0, T will be 0; from 0 to 1, T will be 1; from 1 to 0, T will be 1; that means, whenever there is a change, T will be 1; and from 1 to 1, T will again be 0. This is how the T flip flop works.

Now, instead of the next state, let us list the T1, T2 values here. From 0 0 I want to go to 0 1: the first bit is not changing and the second is changing, so T1 T2 is 0 1. From 0 0 to 0 0 there is no change, so 0 0; from 0 1 to 0 1, no change, 0 0; from 0 1 to 1 0, both bits are changing, so T1 T2 is 1 1; from 1 0 to 0 1, both are changing, 1 1; from 1 0 to 1 1, only the second one is changing, so 0 1; from 1 1 to 0 1, the first one is changing, 1 0; and from 1 1 to 0 0, both are changing, 1 1.

562
So, this is how you are constructing the excitation table corresponding to T flip flop right this
is what you have done here.
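The rule just used is simply T = present bit XOR next bit, applied to each state variable. As a sketch (names are mine), we can derive the whole T1 T2 column mechanically from the transition table:

```python
# T flip flop excitation for the 0110 detector, with the assignment
# S0=00, S1=01, S2=10, S3=11. NEXT maps (present state, X) to next state.

NEXT = {
    ('00', 0): '01', ('00', 1): '00',
    ('01', 0): '01', ('01', 1): '10',
    ('10', 0): '01', ('10', 1): '11',
    ('11', 0): '01', ('11', 1): '00',
}

def t_inputs(ps, x):
    ns = NEXT[(ps, x)]
    # T is 1 exactly where the state bit changes: XOR of present and next.
    return ''.join(str(int(p) ^ int(n)) for p, n in zip(ps, ns))

for ps in ('00', '01', '10', '11'):
    for x in (0, 1):
        print(ps, x, '->', t_inputs(ps, x))
```

Running this reproduces the excitation entries listed above, for example 1 0 with X = 0 gives T1 T2 = 1 1 and 1 1 with X = 0 gives 1 0.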

(Refer Slide Time: 07:36)

Now, once we have constructed this excitation table (I am showing it again here), from it you can directly generate the functions and construct the Karnaugh maps. Here I have the one input X, which I show in this direction, and y1, y2 I show in the other direction. First let us consider T1: where are the 1's for T1? There is a 1 at X = 0, y1 y2 = 1 0; at X = 0, y1 y2 = 1 1; at X = 1, y1 y2 = 0 1; and at X = 1, y1 y2 = 1 1.

So, in this case there will be 2 cubes, one like this and one like this, and the expression for T1 will be X'y1 + Xy2. Similarly, if you now look at T2, see where T2 is 1; it is 1 in 5 places: at X = 0 with y1 y2 = 0 0, at X = 0 with 1 0, and at X = 1 with 0 1, 1 0 and 1 1.

So, here one cube will be X'y2', covering the two 1's at X = 0; one cube will be y1y2', which I have taken for the 1 at X = 1, y1 y2 = 1 0; and the remaining 1's at X = 1 are covered by the cube Xy2. So T2 = X'y2' + y1y2' + Xy2. And finally, for Z there is a single 1, giving Z = X'y1y2.

563
So, once you have done this, what will your circuit look like? If I look into the circuit as a big black box, inside it all these functions will be there: one block generating the function T1 = X'y1 + Xy2, another functional block generating T2, and another circuit generating Z.

So, this circuit will be generating your output Z, and there will be 2 flip flops; T1 will be connected to one of them and T2 to the other, their outputs will be fed back, and X will be your input. This is how your circuit will look. So, now you understand how you can arrive at the final circuit: you have your state table, you make a state assignment, and after you have the transition and output table you select the flip flop type and obtain the excitation table based on it.

And from the excitation table you can directly construct the Karnaugh maps (if the number of variables is 3 or 4, of course) and straightaway minimize the functions. Once you have minimized the functions you have the circuit specification, and you can directly arrive at the circuit.
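To tie the pieces together, here is a sketch (the function name is mine) that simulates the detector built from the minimized functions T1 = X'y1 + Xy2, T2 = X'y2' + y1y2' + Xy2 and Z = X'y1y2, using the T flip flop toggle rule:

```python
# Gate-level simulation of the 0110 detector realized with two T flip flops.

def detect(bits):
    y1 = y2 = 0                                    # reset state S0 = 00
    out = []
    for x in bits:
        z = (1 - x) & y1 & y2                      # Mealy output before the clock
        t1 = ((1 - x) & y1) | (x & y2)
        t2 = ((1 - x) & (1 - y2)) | (y1 & (1 - y2)) | (x & y2)
        y1, y2 = y1 ^ t1, y2 ^ t2                  # T flip flops toggle on T=1
        out.append(z)
    return out

# 0110110 contains two overlapping occurrences of 0110.
print(detect([0, 1, 1, 0, 1, 1, 0]))  # -> [0, 0, 0, 1, 0, 0, 1]
```

The second 1 in the output shows the overlap being handled: the 0 that ends the first occurrence also starts the second.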

(Refer Slide Time: 11:59)

Now, here we have used T flip flops; suppose instead we say, well, let us use JK flip flops, not T. How will it look now? See, in the previous case we had 2 flip flops; the inputs were T1 and T2 and the outputs were generated as small y1 and small y2. But now, if you are using JK flip flops, your requirement will be that there will

564
be 4 inputs that need to be generated: J1, K1 and J2, K2, because for every flip flop we will be generating 2 inputs; the first flip flop will be generating small y1 and the second will be generating small y2.

Now, again for a JK flip flop, think of the excitation requirements. For 0 to 0, you can apply either 0 0 or 0 1, so J K is 0, don't care. For 0 to 1, you can apply either 1 0 or 1 1, so 1, don't care. For 1 to 0, you can apply either 0 1 or 1 1, which means don't care, 1. And for 1 to 1, you can apply either 0 0 or 1 0, which means don't care, 0.
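The JK excitation rule just stated can be captured in a small helper (the function name is mine); None again stands for don't care:

```python
# JK flip flop excitation requirements, with None meaning don't care.

def jk_excitation(ps, ns):
    """Return the (J, K) pair that takes a JK flip flop from ps to ns."""
    if ps == 0:
        return (ns, None)    # 0 -> 0 gives J=0, K=d ; 0 -> 1 gives J=1, K=d
    return (None, 1 - ns)    # 1 -> 0 gives J=d, K=1 ; 1 -> 1 gives J=d, K=0

print(jk_excitation(0, 1))   # -> (1, None)
print(jk_excitation(1, 1))   # -> (None, 0)
```

Note that every transition leaves one of the two inputs a don't care, which is why JK realizations often minimize well.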

So, now you see the J1, K1, J2, K2 values that we will have to generate; this was our transition and output table. Let us go one by one. The present state was 0 0 and you are going to 0 1: the first bit goes 0 to 0, which means 0, don't care; the second goes 0 to 1, which means 1, don't care. Then 0 0 to 0 0: both bits go 0 to 0, so 0, don't care and 0, don't care. Then from 0 1 you are going to 0 1: the first bit is 0 to 0, so 0, don't care, and the second is 1 to 1, so don't care, 0. Like this you can fill in all of them. Now, the point to note is that once we have obtained this table, you can get the Karnaugh maps for these 4 functions; you will now require 4 Karnaugh maps.

(Refer Slide Time: 15:24)

We need 4 maps for J1, K1, J2 and K2 and 1 for Z, 5 in total. Let us put X in this direction and y1, y2 in the other direction. First take J1, the first value of each entry: it has a 1 at X = 1 with y1 y2 = 0 1, 0's at the other y1 = 0 cells, and

565
don't cares at all four y1 = 1 cells, because for the 1-to-0 and 1-to-1 transitions J is a don't care.

So, here you can take a cube covering the 1 together with an adjacent don't care, and minimizing gives J1 = Xy2. Similarly, what will happen for K1, the second value? K1 is don't care over the whole y1 = 0 half; at y1 = 1 it is 1 for X = 0 with y1 y2 = 1 0 and 1 1 and for X = 1 with 1 1, and it is 0 for X = 1 with 1 0. Here the don't cares let you take bigger cubes, giving K1 = X' + y2. Similarly you can do J2, the third value: the 1's are at X = 0 with y1 y2 = 0 0 and 1 0 and at X = 1 with 1 0, the entry at X = 1 with 0 0 is 0, and all the y2 = 1 cells are don't cares.

So, here you will have one cube like this and one cube like this, giving J2 = X' + y1. Then you go for K2, the last value: the 1's are at X = 1 with y1 y2 = 0 1 and 1 1, the 0's are at X = 0 with 0 1 and 1 1, and the y2 = 0 cells are don't cares. So you have a single cube, K2 = X. And for Z there is no need of minimization; there is a single 1, giving Z = X'y1y2 as before.

So, you see, you can directly get the functions from here. Now what do you do? You have your combinational circuit, which you can design according to these functions. It will take the input X and the inputs y1, y2 coming from the flip flops, and it will generate Z as well as J1, K1 and J2, K2; and you will have the 2 flip flops here.

One flip flop gets J1, K1 and generates y1; the other gets J2, K2 and generates y2. So, the examples we have worked out show that, however complex your circuit may be, the process remains manageable: of course, for D and T flip flops there is a single input per flip flop to be generated, while for JK and SR the number of input functions becomes larger, but you can still work them out and minimize them.

And there are functions which can be minimized better using JK or T flip flops, and other functions which can be minimized better using D flip flops. So, there is no hard and fast rule about which flip flop is better, that is, which will give you the smaller circuit; this all comes from experience.
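As a closing check, here is my reconstruction (a sketch, not verbatim from the slide) of one consistent minimization of the four JK maps: J1 = Xy2, K1 = X' + y2, J2 = X' + y1, K2 = X. The snippet confirms that, together with the JK next-state rule, these equations reproduce the detector's transition table:

```python
# Verify the reconstructed JK equations against the 0110-detector table.

def jk_next(j, k, y):
    return (j & (1 - y)) | ((1 - k) & y)   # JK flip flop: y+ = J y' + K' y

NEXT = {                                    # (y1, y2, X) -> (Y1, Y2)
    (0, 0, 0): (0, 1), (0, 0, 1): (0, 0),
    (0, 1, 0): (0, 1), (0, 1, 1): (1, 0),
    (1, 0, 0): (0, 1), (1, 0, 1): (1, 1),
    (1, 1, 0): (0, 1), (1, 1, 1): (0, 0),
}

for (y1, y2, x), (Y1, Y2) in NEXT.items():
    j1, k1 = x & y2, (1 - x) | y2
    j2, k2 = (1 - x) | y1, x
    assert jk_next(j1, k1, y1) == Y1 and jk_next(j2, k2, y2) == Y2
print("JK equations reproduce the 0110-detector transition table")
```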

So, here we have worked out a number of examples; we shall look at some more examples later. So, let us come to the end of this lecture. What we will see later is that we shall work out a couple of counter design examples, because counter designs are interesting. So, in our next lecture we shall be looking at some designs of

566
counters; we shall be going through the entire process of counter design. And after that we shall be looking into the design of registers and counters from a slightly different perspective. Those discussions we shall go through in our later lectures.

Thank you.

567
Switching Circuits and Logic Design
Prof. Indranil Sengupta
Department of Computer Science and Engineering
Indian Institute of Technology, Kharagpur

Lecture – 40
Minimization of Finite State Machines (Part- I)

If you recall, in the last few lectures we discussed a number of things regarding the design or synthesis of finite state machines. We talked about the broad categories of finite state machines, the Mealy machine and the Moore machine, and we showed the different steps that we typically follow in the synthesis or design of such machines: the creation of the state table or the state transition diagram, then the creation of the transition and output table and the so-called excitation table, and finally the minimization of the functions and the synthesis of the combinational circuit part of the sequential circuit.

Now, one thing we just mentioned but did not discuss is how to do the following: given the specification of a finite state machine, let us say in the form of a state table, how do we minimize it? There may be some states which are redundant, or there may be states which are equivalent; we can possibly merge 2 such states into a single state, which will make the state table smaller, and in the process the final sequential circuit or FSM will also be more compact. So, the topic of our discussion today is Minimization of Finite State Machines, the first part of it.

(Refer Slide Time: 01:49)

568
Now, let us see the basic idea. One thing let me tell you: the kind of sequential circuits or sequential machines that we have been discussing so far are the so-called synchronous sequential circuits. Synchronous means there is the concept of a clock; the clock is fed to all the flip flops that implement the state variables, and all state changes occur in synchronism with the clock pulse.

So, when the clock pulse comes only then the states will change and it is expected that the
inputs that are coming from outside; they will also be applied in synchronism with the clock.
So, we are continuing with that assumption that our sequential circuits or FSM’s are
synchronous in nature ok. So, here we are talking about the issue of elimination of redundant
states. Now the concept of elimination of redundant state is to identify something called
equivalent states.

So, the basic concept is that we want to identify equivalent states. For example, in a finite state
machine we may find that 2 states, let us say A and B, are equivalent.
Then we can merge them together into a single state; that will make our FSM smaller and is
expected to lead to a more compact representation.

Now, when we talk about equivalence of states, let us try to understand what exactly we
mean by that. We are saying that 2 states are equivalent; recall first what we have in a state
transition diagram.

(Refer Slide Time: 03:52)

569
In a state transition diagram we have states and transitions; the transitions are labeled by
some inputs and also specify some outputs. There may be some other state which possibly
goes to the same state, with some other inputs and some other outputs.

Now, we are trying to answer the question of whether these 2 states A and B are
equivalent or not. So, let us try to answer this question in a slightly formal way; we
talk about something called state equivalence. Here we are saying that 2 states Si and Sj
are said to be equivalent if and only if, for every possible input sequence,

Let us call it X: the outputs will be the same and the next states are equivalent. So, suppose
I have a state Si here and another state Sj here, and let us assume for the moment that there
is a single transition only. Say for the input combination 0 0 the output from Si comes as 1,
and here also for the input combination 0 0 the output from Sj comes as 1.

So, this is what is mentioned here: for every input sequence X (here 0 0 is our X), the output
is the same, and let us say the next state is also some Sk. If the next state is also the same,
then we say they are equivalent. But the definition says that the next states may also merely be
equivalent, which means, say, instead of Sk the first one goes into a state Sk1 and
Sj goes to a state Sk2, and we have earlier found out that Sk1 and Sk2 are equivalent.
So, even if they go to 2 states which are known to be equivalent, then also we can say
that Si and Sj are equivalent.

Now, mathematically speaking, if λ denotes the output function, we are saying that the
output when you are in state Si with input X must be the same as the output
when you are in state Sj with input X.

570
(Refer Slide Time: 07:01)

Here λ is the output function; λ or ω are just variable symbols used to denote the output
function. And δ is the next state function: the next state when you
are in present state Si with input X, and the next state when you are in Sj with input X,
must be equivalent. Note that I have not shown this as equal but as equivalent; these 2 next
states should be equivalent.
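
The two conditions just described can be written compactly, in the notation of this lecture, as:

```latex
\lambda(S_i, X) = \lambda(S_j, X)
\qquad \text{and} \qquad
\delta(S_i, X) \equiv \delta(S_j, X)
\qquad \text{for every input sequence } X .
```

Note that the second condition uses equivalence rather than equality: the next states need not be identical, only equivalent.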

So, when this kind of condition holds, we say that states Si and Sj of the machine
are equivalent. Just try to understand: if there are 2 states, and for all possible
inputs applied from the 2 states we go to the same or equivalent next states and the
outputs are also the same, what does that mean? It means that we cannot
distinguish between these 2 states; they are basically the same. So, we can merge Si and Sj
into a single state and make our FSM smaller; this is the basic idea.

571
(Refer Slide Time: 08:29)

So, let us now go through the steps of the minimization; then we shall illustrate them with the help
of an example. The first thing to note is that we use something called an implication table or
an implication chart. An implication chart is basically a 2-dimensional
structure which shows the relationship between every pair of states,

say Si and Sj: whether they are equivalent or not, and if they are not equivalent, how they
differ. So, every entry of this 2-dimensional structure will capture the information
regarding the similarity of a pair of states; this table or chart is called an implication
table or an implication chart.

Let us see how this table looks. The chart will have a square for every pair of states, because
we want to compare every pair of states Si and Sj; what we put in these squares
is explained later. So, the first step is to compare each pair of rows in the
state table; a pair of rows actually means a pair of states.

We are comparing every pair of states Si and Sj in the implication chart, but that is reflected
from the state table: 2 rows of the state table indicate two
different present states. That is why we say we consider two different rows
of the state table, and with respect to that we mark the entry in the implication chart. How?

572
So, we compare each pair of rows in the state table, say the rows corresponding to Si and
Sj; in short I will say i and j instead of writing Si and Sj. If the outputs in the state table
associated with states i and j are different, the states cannot be equivalent: say
in the state table I find that when some input X comes, the first state gives an output 0
and the second state gives an output 1. They are not equivalent, so I put a cross in that
entry of the implication chart indicating that the 2 states are not equivalent.

So, if the outputs are different, straight away we can conclude that the states are not
equivalent, and we place a cross in the corresponding square i-j, the square corresponding
to Si and Sj, to indicate that they are not equivalent. But if we find that the outputs are the
same, then we place something called implied pairs in the square i-j. What is the meaning of
an implied pair?

Suppose in the state table we find that for state i, for a particular input
X, the next state is m, and for state j, for the same input X, the next state
is n. So, although the outputs are the same, the next states are
different: i goes to m and j goes to n. So, in the implication table, in that square, we note
down m and n, indicating that they go to two different states; we still do not
know whether m and n are equivalent or not. We note this as m-n.
We call this an implied pair: for a pair of
states i-j and an input X, m-n is an implied pair. If the outputs are the same, we put the implied
pairs in the corresponding square.

But if both the outputs and also the next states are the same, which means m and n are
identical (or the pair implies itself, with i and j each going back to the pair), then we can say that definitely
these 2 states are equivalent, and we put a tick mark in that square. So, you see, we are tentatively marking 3 things: if the states
are not equivalent at all, we put a cross; if they are definitely equivalent, we put a
tick; and if we do not know yet, we note down the implied pairs m-n, because we will have to check
m-n again later to see whether m and n are equivalent or not.

If they turn out to be equivalent, we will replace them by a tick; if they are not equivalent, we will replace
them by a cross. This is how it works. So, we repeat this for all the squares: go through
the table square by square; if a square i-j contains the implied pair m-n, and the square m-n

573
in the implication table carries an X, which means m and n are known to be not
equivalent, then you can place an X in the square i-j and mark the pair as not equivalent.
After you have done this once, you will have to repeat it.

(Refer Slide Time: 14:58)

If you find that in the previous step you have added some X, then you will have to repeat the
process until no more Xs can be added.

So, you understand this is an iterative process; it is not a one-time process. You will have to
repeat it several times, making changes in the implication table until no further changes
are possible. And finally, for all those squares that do not contain an X, you can say that the 2
states are equivalent; this is how the process works. Now let us illustrate this with the help of
an example; that will be easiest.
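
The steps above can also be sketched in code. Below is a minimal sketch in Python for a Moore machine, assuming the machine is given as a next-state table and an output table; the dictionary layout and the function name `find_equivalent_pairs` are my own choices for illustration, not anything from the lecture slides.

```python
# A sketch of the implication-chart procedure: first cross out every pair
# whose outputs differ, then repeatedly cross out any pair that has a
# crossed implied pair, until no more crosses can be added.
from itertools import combinations

def find_equivalent_pairs(states, inputs, next_state, output):
    # Step 1: a pair whose outputs differ can never be equivalent.
    crossed = {frozenset(p) for p in combinations(states, 2)
               if output[p[0]] != output[p[1]]}
    # Step 2: propagate crosses through the implied pairs, iterating to a
    # fixed point exactly like the repeated passes over the chart.
    changed = True
    while changed:
        changed = False
        for i, j in combinations(states, 2):
            pair = frozenset((i, j))
            if pair in crossed:
                continue
            for x in inputs:
                implied = frozenset((next_state[i][x], next_state[j][x]))
                if len(implied) == 2 and implied in crossed:
                    crossed.add(pair)
                    changed = True
                    break
    # Squares that are never crossed correspond to equivalent state pairs.
    return [tuple(sorted(p)) for p in combinations(states, 2)
            if frozenset(p) not in crossed]
```

Self pairs (a state pair implying itself, or m equal to n) collapse to a single element in the `frozenset`, so `len(implied) == 2` skips them, which matches cutting them out of the chart.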

574
(Refer Slide Time: 15:43)

Let us take the example of a Moore machine whose state table is shown on the left
hand side. You see there are 8 states A B C D E F G H; there is a single input capital X
and a single output z. The table shows, for both input values 0 and 1, what the
next state will be, and because it is a Moore machine the output depends only on the state
and not on the inputs. So, I am showing the output only once; otherwise I could have written,
say, D comma 0, C comma 0, F comma 0, H comma 0, E comma 1, D comma 1, which is
also fine, but because it is a Moore machine I have shown the output as a separate column.

Now, the implication chart looks like this, as I have shown here. I am not showing
it as a full square matrix, because I need to keep one square cell for every pair. Along
the columns I label A B C D E F G and along the rows B C D E F G H; so,
the cells correspond to A and B, A and C, A and D, A and E and so on, then B C, B D,
and so on. Because we do not have to check A with A or B with B, the number of cells
reduces as we go from left to right.

Now, this will be our implication chart; let us fill it from the table one by one, remembering
that it is a Moore machine. Between A and B: look at the rows corresponding to A
and B, the first 2 rows; the output is the same, 0 and 0. The next states are, for
X equal to 0, D and F, and for X equal to 1, C and H; so, we can write D F, C H. Between A and C:
the outputs are different, 0 and 1, so straight away you can
write a cross here; they are not equivalent. Then A and D: the outputs

575
are the same, but the next states are D A and C E; these are the implied pairs. A and E:
the outputs are different, 0 and 1; so, there will be a cross. A and F: again the
outputs are different, a cross. A and G: the outputs are the same, so we write the implied
pairs D B, C H. Then A and H: again the outputs are different, 0 and 1; so, a cross.

So, like this you will be filling up all the cells. B and C: the outputs are
different, a cross again. B and D: the outputs are the same, so we write the implied pairs
F A, H E. B and E: the outputs are different. B and F: again the outputs are different. B and G:
the output is the same; the implied pairs are F B, and H which maps to itself, so we are not
writing H; we write only F B, the one that is different. B and H: the outputs are different.

Similarly, C and D: outputs different. C and E: the output is the same, implied pairs
E C, D A. C and F: implied pairs E F, D B. C and G: outputs are different. C and H:
implied pairs E C, D G. Similarly, D and E: outputs different. D and F: outputs different.
D and G: implied pairs A B, E H. D and H: outputs different. E and F: same output, implied
pairs C F, A B. E and G: different. E and H: C maps to itself, so only A G. And F and G:
outputs different. F and H: implied pairs F C, B G. And finally, G and H: outputs are
different.

So, this is the first iteration of the implication chart that we have found. Now the
process will repeat; let me give an example. Say the first cell contains D F and C H; we look
at D F. You see D and F are marked not equivalent, so you can say
that this cell will also be not equivalent. Similarly for D A and C E: D A we do not know yet
(in fact it is this cell itself), and C E also carries no X, so this cell you cannot delete yet. So,
wherever an implied pair carries an X, in the next step we will be removing that cell. I am just showing
this.

576
(Refer Slide Time: 22:27)

So, here I am showing the result of the first step, and after that,
wherever there is a mismatch, I am crossing the cells out. For example, in the cell for A and D
the implied pair A D maps to the cell itself; such self pairs, like C mapping to C or
C E mapping to C E, I am just cutting out.
This is my initial table; after that I will have to check in the next step what other
entries get cancelled out.

Like, as I said, for D F: you see between D and F there is a cross mark. So, this cell will also get cancelled
out; I am cutting it out, meaning thereby that I am putting a cross here also. Similarly, look
at the cell containing A F and E H: for A F there is a cross; so, this also gets cancelled out. So, if
there is at least one cross among the implied pairs, the cell gets cancelled out.

Then look at E F: E F is not cancelled, and B D is not cancelled either.
Then B F: B and F carry a cross, so the cell containing B F is getting cancelled out. For the cell with A B and E
H: E and H is still open, but A G is already cancelled out. So, in this way you systematically
cancel out the pairs which are not compatible. This is the process you go on repeating:
this is the state after the first pass, and you continue pass after pass;
wherever some changes have been made there may be further changes.

577
(Refer Slide Time: 24:37)

After another pass some more entries get cancelled out. So, what we see is that after
all these cancellations have been made, there are still some entries which are
not cancelled, as we can find here. So, here there are 8 states A B C D E F
G H; now you see that some states are identified as equivalent: C and E. So, we
are merging the states C and E.

Then, for example, A and D: we are merging the states A and D. Well, this A G also has
to be cancelled out; there is a mistake here on the slide, it will also be cancelled out. So,
there will be 2 pairs of equivalent states, C E and A D; this you can check once. So, C and E
you merge together, and A and D you merge together; so, from the initial table you arrive at this
final table, where you see that from 8 rows we have come down to 6 rows. This is the basic
idea behind state minimization.
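
The result of this example can be cross-checked with an alternative technique, partition refinement, which computes the same equivalence classes as the implication chart: start by grouping states with the same output, then repeatedly split any group whose members lead to different groups. Below is a small self-contained Python sketch; the table is the 8-state Moore machine of this example as I have read it off the slide, and the function name `minimize` is my own choice.

```python
# The 8-state Moore machine of the example:
# state -> ((next state for X=0, next state for X=1), output z)
table = {
    'A': (('D', 'C'), 0), 'B': (('F', 'H'), 0),
    'C': (('E', 'D'), 1), 'D': (('A', 'E'), 0),
    'E': (('C', 'A'), 1), 'F': (('F', 'B'), 1),
    'G': (('B', 'H'), 0), 'H': (('C', 'G'), 1),
}

def minimize(table):
    # Initial partition: states with the same output share a block.
    block = {s: z for s, (_, z) in table.items()}
    while True:
        # A state's signature is its own block plus the blocks of its next
        # states; states in one block with different signatures must split.
        sig = {s: (block[s],) + tuple(block[n] for n in nxt)
               for s, (nxt, _) in table.items()}
        order = sorted(set(sig.values()))
        new_block = {s: order.index(sig[s]) for s in table}
        if len(order) == len(set(block.values())):
            break  # no block was split, so the partition is stable
        block = new_block
    # Collect the final groups of mutually equivalent states.
    groups = {}
    for s, b in block.items():
        groups.setdefault(b, set()).add(s)
    return sorted(sorted(g) for g in groups.values())
```

Running `minimize(table)` groups A with D and C with E while leaving the other four states alone, i.e. the same 8-to-6 reduction obtained with the implication chart.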

578
(Refer Slide Time: 26:06)

Let us take another, smaller example, of a Mealy machine where there are 6 states
and one input X. Here the next state and the output are both shown in the table, and the
implication chart is constructed in a similar way. All the state pairs are shown. You see A and B
are not compatible, because the outputs 1 and 0 are different; so, a cross. Similarly A
and D are not compatible, 1 and 0; then A and F are not compatible, 1 and 0; similarly B and C are
not compatible, 0 and 1. You see for B D I have put a tick. Why? Because for B and D, in one
case the next state is F with output 0 for both, and in the other case the next states are
B and D themselves, the self pair.

So, I can definitely say they are equivalent, and I put a tick there. Similarly I have filled in
the other cells, and whichever implied pairs are self pairs, like B F in the cell for B and F, or
C E, I am cutting them out. So, this is the first step of the implication table, and we can continue
like this. In the next step we will find that some of these get cancelled out. For example, take
C D: you see between C and D there is a cross.

So, C D gets cancelled out. Now the cell for B and F contained C D, which is already cancelled
out; so, this cell will also get cancelled out. And since B F has now been cancelled, the
cell containing B F is also cancelled; similarly B C is already cancelled, so the cell containing
it is cancelled too. So, there will be 3 additional crosses added. Some entries, B D,
C E and D F, remain; this is how you continue.

579
(Refer Slide Time: 28:31)

So, this is the implication chart after the pass, which shows B D, C E and D F remaining. But if you
check once more, between C and E there is a cross, so C E also goes out; and between D
and F there is also a cross, so D F also goes out. So, only B D remains and no further changes
can be made. From this chart you can conclude that the pair of states B and D is
equivalent; no other implied pair survives here.

(Refer Slide Time: 29:09)

So, this is the final implication chart, and B and D you merge together. Of course, from
earlier, A and C had also been marked equivalent, so the 2 pairs A C and B D are there. The A and C you

580
merge together and B and D you merge together; so, you get the final reduced table. Like
this you can reduce the size of the table. So, in this lecture what we have actually discussed is:
given a state table, how we can reduce its size; this is called state
minimization.

Now, the process of state minimization is very important before you actually do state
assignment and then synthesize in terms of flip flops, as you have seen earlier. State
minimization is an essential step which can drastically reduce the size of the FSM
specification. So, we will be continuing with our discussion in the next lecture.

Thank you.

581
Switching Circuits and Logic Design
Prof. Indranil Sengupta
Department of Computer Science and Engineering
Indian Institute of Technology, Kharagpur

Lecture - 41
Minimization of Finite State Machines (Part- II)

So, here we continue with the discussion on minimization of the number of states in a Finite
State Machine. If you recall, in our last lecture we discussed how, given a finite state
machine, we can reduce the number of states. We talked about the notion of an
implication chart, how we can fill up the entries in the implication chart, and using that how we
can decide whether 2 states are equivalent or not.

So, in this lecture we continue with the discussion; this is the second part of the lecture on the
minimization of finite state machines. Here we first discuss the concept of
equivalence of 2 finite state machines. Earlier we talked about a single finite state machine
and explored whether 2 states Si and Sj are equivalent or not. Now we are saying that we
have 2 different finite state machines, say N1 and N2, and we do not know whether the 2
machines are equivalent or not, meaning whether they represent the same specification or not.

So, we will have to find out in some way whether some state x of the first machine and some state
y of the second machine are equivalent or not. Only if we can find such a
one to one correspondence can we say that the 2 machines are equivalent. So, the
procedure we follow is very similar to that of the implication chart, and we shall be explaining
it with the help of examples.

582
(Refer Slide Time: 02:12)

Now, first let us look at the notion of equivalent FSMs. We have two finite state machines;
one I am calling N1 and the other N2.
They are defined to be equivalent if for every state p in N1 there exists a
state q in N2 such that these 2 states are equivalent; as I said, there will be a one to one
correspondence. The first requirement is that the number of states in N1 and the number of
states in N2 must be the same, and secondly, there has to be a one to one
correspondence between the states of the 2 machines.

So, for every state p in N1 there must be an equivalent state q in N2, and conversely, for
every state s in N2 there must be a corresponding state t in N1; it has to hold both ways.
Only if we can show this can we say that the 2 FSMs are equivalent, and
we shall be using a process very similar to the implication table for verifying the
equivalence.

583
(Refer Slide Time: 03:44)

Let us take an example to illustrate the process, because that will be easiest. We consider 2
FSMs N1 and N2 as shown; both of them contain four states. Here the states are A, B, C, D,
and here the states are S0, S1, S2 and S3. Now we want to do a pairwise checking between
the states of one machine and the states of the other; therefore our implication chart will look like a
square. It will be something like this: we will list the states of the first machine along
one side and the states of the other machine along the other.

So, our implication chart will look like this, and we will consider every cell of this chart
pairwise. For example, between A and S0 what do we see? We have a state A here and
a state S0 here. You see that if the input is 0 the output is 0, and if the input is 1 the output is 0.
So, the outputs are 0 0; here also the outputs are 0 0. So, we will have to write some entries here; we
cannot put a cross. But between A and S1, A gives 0 0 while S1 gives 0 and 1; so, we can
definitely put a cross here. In this way we will be filling up this table in a manner very similar to
an implication chart. Let us see.
an implication chart let us see.

584
(Refer Slide Time: 05:38)

So, here, on the top, I am showing the 2 state tables:
this is the first state table and this is the second state table. Let us see how we have filled
this up; this is the first iteration. We have shown A B C D along the columns and S0 to S3
along the rows. Looking at the entries, between A and S1 the slide shows the implied pairs
B S3, A S0. Just a second; for A and S2 I think
this should be 1; there is a typographic error here on the slide that will be corrected.

Between A and S0 there is a cross because of 0 1 versus 0 0; for A
and S3 also, 0 0 versus 0 1, there is a cross.

First look at the crosses: B and S1, B and S2 with 0 1, C and S1, and so on. Like this you just
compare pairwise and fill up the table in a very similar manner. Now, I have not
actually verified this table; you can check it yourself. Let me do one thing: let me just take
the original state tables, forget this chart, and work out the chart here separately.

Let me work out this chart separately. On this side we have A B C D, and on this side we have
S0, S1, S2, S3. From the tables, between A and S0 the outputs are 0 0 and 0 0, so we can put
the implied pairs B S3 and A S1 (I am writing only the digit 3 or 1 for the S states, not the
full names). Between A and S1, 0 0 versus 0 1, there will be a cross; between A and S2, 0 0
versus 0 1, also a cross. Between A and S3 the outputs match, so the implied pairs are B S2
and A S3. Then B and S0: 0 1 versus 0 0, a cross. B and S1: 0 1 and 0 1, so the implied pairs
are C S3 and D S0. B and S2: 0 1 and 0 1, so C S0 and D S2. Then B and S3: 0 1 versus 0 0,

585
that will be a cross. C and S0: 0 1 versus 0 0, a cross. C and S1: implied pairs A S3 and C S0.
C and S2: implied pairs A S0 and C S2. C and S3: outputs different, a cross. Similarly D and
S0: implied pairs C S3 and B S1. D and S1: different. D and S2: also different. D and S3:
implied pairs C S2 and B S3.

So, this is the correct implication chart corresponding to this table; in the tables shown on the slide
there are some typographic errors somewhere, which you can try to figure out, but for this table
the chart will be like this. Now, in the next step you
explore this further. Take the cell with A-1 and B-3: you look at A-1; A and S1 is already crossed, so in the
next step this cell will also get crossed. The cell with B-2 and A-3: B-2 is fine and A-3 is also fine;
in fact A-3 is the cell itself, so you can cut it out and only B-2 remains. Then the cell with C-3 and
D-0: C-3 is a cross, so this will also get crossed. C-0 and D-2: C-0 is a cross, so this will also get crossed.

Look at the cell with A-3 and C-0: A-3 has some entry there, but C-0 is a cross, so this will also get crossed. A-0 and
C-2: A-0 is a cross, so this will also get crossed. And lastly here, C-3 and B-1: C-3 is a cross,
so this will get crossed; C-2 and B-3: C-2 is a cross, so this will also be crossed. So, you are finally left with
the single entry B-2; but since the square for B and S2 has itself been crossed, this entry too
gets cancelled. This is how you can find out which pairs of states are actually
equivalent. Here no pair survives, so there is certainly no one to one
correspondence for every state, and we can say that
machine N1 and machine N2 are not equivalent.

But the chart that was shown on the slide was for some other state table. So, I leave it as an
exercise for you to find out where the mistake in the state table was, which state
table the slide's chart corresponds to, and what the equivalent states would then be. The process,
in any case, is exactly as shown: just using the implication chart you work it out, and finally
whatever is left in the table tells you which state pairs are equivalent; and if you can
find four such correspondences for the four states, as such a chart would show, then you can
say that the machines are equivalent.
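
The same chart-style procedure can be run across two machines in code. Below is a minimal Python sketch; the two Mealy tables are my reconstruction of N1 and N2 from the corrected chart worked out above (the printed slide had typographic errors, so treat the exact entries as an assumption), and the function names are my own choices for illustration.

```python
# Each machine: state -> ((next for x=0, next for x=1), (out for x=0, out for x=1))
n1 = {'A': (('B', 'A'), (0, 0)), 'B': (('C', 'D'), (0, 1)),
      'C': (('A', 'C'), (0, 1)), 'D': (('C', 'B'), (0, 0))}
n2 = {'S0': (('S3', 'S1'), (0, 0)), 'S1': (('S3', 'S0'), (0, 1)),
      'S2': (('S0', 'S2'), (0, 1)), 'S3': (('S2', 'S3'), (0, 0))}

def equivalent_state_pairs(n1, n2):
    # Cross out every (p, q) whose output rows differ, then propagate:
    # (p, q) is crossed if any implied pair (next(p,x), next(q,x)) is crossed.
    crossed = {(p, q) for p in n1 for q in n2 if n1[p][1] != n2[q][1]}
    changed = True
    while changed:
        changed = False
        for p in n1:
            for q in n2:
                if (p, q) in crossed:
                    continue
                for x in (0, 1):
                    if (n1[p][0][x], n2[q][0][x]) in crossed:
                        crossed.add((p, q))
                        changed = True
                        break
    return [(p, q) for p in n1 for q in n2 if (p, q) not in crossed]

def machines_equivalent(n1, n2):
    pairs = equivalent_state_pairs(n1, n2)
    # Every state of N1 must have an equivalent partner in N2, and vice versa.
    return (all(any(p == s for p, q in pairs) for s in n1) and
            all(any(q == s for p, q in pairs) for s in n2))
```

With this reconstruction every cross-machine pair ends up crossed, so `machines_equivalent(n1, n2)` is `False`, matching the conclusion of the lecture.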

586
(Refer Slide Time: 13:15)

Let us take another example; I am not working it out, but telling you how to do
it. Suppose I have a problem like this: there are 2 machines, one with 7 states
and the other with 3 states, and someone asks you to check whether the
machines are equivalent or not. At first look, you may say that because the numbers
of states are not the same, they cannot be equivalent.

But the 2 machines given to you may themselves contain redundant states, and
you have not done any minimization. So, your first step will always
be to reduce each individual machine as much as possible, and then check whether the
numbers of states are the same or not. If they are not the same, there is no need to check further; only if the
numbers of states are the same do you follow this process and test whether there is an
equivalence corresponding to state pairs in the 2 machines.

So, here the process will be like this: for the machine N1 you will have to first carry out
state minimization, and the same has to be done for the machine N2. After you have minimized
them, you get 2 state tables. If
you find that the numbers of rows are the same, then you follow the process and test whether
they are equivalent or not. So, I leave this as an exercise for you; you can try
it out and check whether you get the equivalence or not.

587
(Refer Slide Time: 15:22)

So, whatever you have seen so far concerns the equivalence of states within an FSM or, in general,
the equivalence of 2 FSMs as a whole. Now, what we have not mentioned so far is that in some
problems there can be the notion of don't cares: some
inputs or some output values can be don't cares. So, while we are doing state minimization,
how do we handle don't cares? Let us take an example to illustrate this. Such
specifications, where there can be don't cares, are in general referred to as incompletely
specified machines.

So, the specification is not complete; some information, like inputs and outputs, is not
mentioned. These are called incompletely specified machines. Now what do we mean by
incompletely specified machines? There are two aspects. The first is that the set of input
combinations that can be applied may be a subset of all possible input combinations. For
example, suppose the FSM we are trying to design has 3 inputs. Because there
are 3 inputs, there can be 2^3 = 8 possible input combinations; but let us assume that out of these
8, only 3 input combinations are valid, which means that during normal operation only one of these 3 input
combinations can come.

So, when you are designing the FSM, the remaining five input combinations will be regarded as don't
cares. I can suitably fix them depending on my convenience when I am doing state
minimization; they can be set to 0 or 1 as per my convenience. The other input

588
combinations are treated as don't cares. So, in general we say that an FSM is
incompletely specified when the next state and also the output values are not specified for all
input combinations; for some of them they are left unspecified. But we have seen that during
state minimization we look at the outputs and also the next states. The more states
you can club together the better for us, because our FSM will get smaller as we find
equivalent states.

So, the unspecified next states and outputs you can suitably set during the process of minimization,
so that more states can be merged together and the FSM can become smaller. This is the basic
idea; let us illustrate it with the help of an example.

(Refer Slide Time: 18:49)

This is a very simple example; I am just illustrating it. Suppose we are trying to design a
sequential circuit B. The input of the circuit comes from some other circuit, say
A, and this input we call X; the output goes to some other circuit C, and this
output we call z. Now, how does this circuit work? Let us say that circuit
A generates a serial bit stream. So, X is not a parallel number; the bits come
serially, one by one, 1 bit at a time in synchronism with the clock.

So, I am assuming that this circuit A can generate only 2 possible output sequences or bit
patterns, 1 0 0 and 1 1 0. And this sequential circuit is designed in such a way that when 1 0 0
is received z will be set to 0, and when 1 1 0 is received z will be set to 1. Now you understand

590
the situation: I am trying to design a circuit which is fed 3 bits in sequence, and
out of the 8 possible combinations only 2 of them are valid, namely 1 0 0 and 1 1 0. In
one case, when 1 0 0 is received, the output should be 0, and in the other case the output should be 1.

So, for all the remaining 6 cases, because such inputs will never come, I can set the output
to 0 or 1 as per my convenience; it will not matter. The other thing is that the output
becomes meaningful only once the entire sequence is received; before that it does not matter, and the output
can be either 0 or 1, which I can also set according to my convenience. Only at the end of the
third cycle does the output denote, as 0 or 1, whether one of those 2 strings has been received.

So, what I am saying is: assume that the circuit C ignores the value of z at all other times;
only when all 3 bits have been received will the output be either 0 or 1, depending
on which of the 2 sequences was received. The possible input-output sequences I can
depict like this: the first circuit generates x, which can be either 1 0 0 or 1 1 0; the others
can never come. The corresponding z output is either 0 or 1 only at the end; t0, t1, t2
indicate the clock times, and before the end these 2 values are don't cares. I do
not care what comes there; I only care what the final value is.

Now, depending on this I can create the state table, or first, I think it will be easier to show the state transition diagram. So, here I require four states because I need to identify a 3 bit sequence. I start with an initial state; suppose I get a 1, I move to state S1, but when I get the first 1 the output can be don't care; that is why the output is a dash, and dash means don't care.

Now, I am in state S1 when the first 1 has been received. If the next bit is 0 then possibly 1 0 0 is coming, so I move to state S2; again the output is don't care. But if the next bit is 1, I move to S3 because possibly 1 1 0 is coming; again the output is don't care. If 1 1 0 has actually come, only then does my output become 1, and if 1 0 0 has come then the output will be 0, right.

So, the output is being specified only in these 2 cases; all the others, you can see, are don't care combinations. For example, from S2 I am only showing an outgoing transition when there is a 0; what happens if the input is 1 is don't care. So, I can put any next state there as per my convenience; these are the places where I can play. And for this state transition diagram the corresponding state table is this. Directly, this is a one

590
to one mapping. For S0, if x is 0 nothing is mentioned; that is why the entry is dash comma dash, and it is not shown in the diagram.

The next state is undefined, don't care, and the output is also don't care. But for x equal to 1 the entry is S1 comma dash; that means next state S1 with a don't care output. Similarly, from S1 there are 2 transitions, for 0 and for 1, so there are 2 entries; from S2 there is a single entry for 0, and for 1 it is not specified; similarly for S3.

(Refer Slide Time: 24:58)

Now, from this state transition diagram or state table we can fill up the don't cares suitably, so that we can carry out the minimization effectively. All those dashes which were there, the entries in blue that I have shown in bold, are the ones we have filled up suitably: this dash we have filled up with S0, this dash with S1, this dash with S3, so that some states can be made equivalent.

Similarly, this output has been made 0 and this output has been made 1. So, these you can fill up arbitrarily, and your objective will be to make as many states equivalent as possible. Accordingly, the next states and outputs can be set to either 0 or 1. I am just showing an example here; in an ad hoc way I have filled it up as per my convenience.

So, with this assignment let us see how the implication chart will look. There are four states: S0, S1, S2 along here and S1, S2, S3 along here. Let us see S0 and S1: you can see that they differ, 0 and 1, in their outputs, so we have put a cross. Next, S0 and S2: since S0

591
and S2 were made equivalent deliberately, I have put a tick. You see, this entry was S0, 0, and I had made the other one S0, 0 just to make them equivalent; similarly the S1, dash entries were made identical. Because they are now identical, you can mark a definite tick here. S0 and S3 have outputs 0 and 1, which differ, so this is also a cross.

S1 and S2: their outputs 1 and 0 are also incompatible, so a cross. S1 and S3: their outputs 1, dash and 1, dash are compatible, but their next states must be equivalent, so I have marked the implied pair S2, S0 in that cell. And S2 and S3 are also different, 0 versus 1, so a cross. This is how we obtain the implication chart. Now, after we have obtained it, look at the entry S2, S0: S0 and S2 are equivalent, so this cell also you will be changing to a tick. So, finally, S0 and S2 will be equivalent, and S1 and S3 also will be equivalent. So, S0 and S2 you can merge together into a single state, and S1 and S3 you can merge together into a single state.

So, first let us look at the state table. If you merge S0 and S2 together, you see the S0 row remains the same, S0, 0 and S1, dash, and S2 is merged with it, so the S2 row is gone. In the S1 row, the entry S2, 1 now becomes S0, 1, since S2 has become S0; and the entry S3, dash becomes S1, dash, since S3 has become S1. The S3 row is likewise gone. So, this is the reduced table, and the reduced state transition diagram will look like this: there will be only 2 states. From S0, if the input is 0 you remain in S0 with an output 0; if the input is 1 you go to S1 with output don't care. When you are in S1, if the input is 0 you go to S0 along this edge with the output 1, but if the input is 1 the output is don't care and you remain in S1.

So, for this specification of the problem, the FSM can be reduced drastically in size, as you can see, and the final design will become much smaller. So, essentially what we have seen is that in the process of state minimization, given an FSM state table, you can reduce its size; we can use the same principle to identify whether 2 FSMs are equivalent or not. And lastly, we have mentioned that if the FSM is incompletely specified (there are many examples where we do have such incomplete specifications), then we can suitably set the don't care entries in the state table such that as many of the states as possible can be marked as equivalent. In the example that we cited, 2 pairs of states were made equivalent by suitably filling them up. The net result is that our final FSM specification becomes simpler and our circuit becomes much smaller, right.
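The steps above can be checked with a small simulation. Below is a minimal Python sketch of the reduced two-state Mealy machine; the concrete fills for the don't-care entries are one arbitrary choice consistent with the lecture's reduced diagram, and `None` marks an output the environment never looks at. Only the output at the third clock matters.

```python
# Reduced 2-state Mealy machine for the incompletely specified detector:
# x = 1 0 0 should give z = 0 and x = 1 1 0 should give z = 1 at the
# third clock; intermediate outputs are ignored by the environment.
TRANS = {
    ("S0", "0"): ("S0", 0),     # filled don't care: stay in S0, output 0
    ("S0", "1"): ("S1", None),  # first 1 of either valid sequence
    ("S1", "0"): ("S0", 1),     # a 0 ends the sequence; output read here
    ("S1", "1"): ("S1", None),  # second 1 of 1 1 0
}

def run(bits):
    """Clock the machine through a bit string; return the last output."""
    state, z = "S0", None
    for b in bits:
        state, z = TRANS[(state, b)]
    return z
```

Running `run("100")` gives 0 and `run("110")` gives 1, matching the original specification even though the machine now has only two states.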

So, with this we come to the end of this lecture, and we have completed our discussion on the minimization of finite state machines. In our next sequence of lectures we shall be

592
looking at the design of some very specific finite state machines or sequential circuits, namely registers and counters, which are very useful building blocks in designing complex systems. This discussion we shall be starting in our next lecture.

Thank you.

593
Switching Circuits and Logic Design
Prof. Indranil Sengupta
Department of Computer Science and Engineering
Indian Institute of Technology, Kharagpur

Lecture - 42
Design of Registers (Part - I)

So, in this lecture we start our discussion on the design of certain sequential circuit building blocks. Specifically, we shall be looking at the design of registers and counters: the different types of registers that there can be, the different types of counters, and some typical applications. So, the title of today's lecture is Design of Registers, the first part.

(Refer Slide Time: 00:47)

So, let us first try to understand what a register basically means. We have seen what a flip flop is: a flip flop is a storage element which can store 1 bit of data. But there are many applications where we need to store not just 1 bit of data but an entire word of data, and that word size can be anything: 8, 16, 32, 64 bits.

Suppose I need to store a 16 bit word; then I can use 16 such flip flops to store all those 16 bits of data, and we call it a register. A register is nothing but an array of flip flops, but how the flip flops are interconnected distinguishes the different types of registers.

So, broadly speaking, a register is a group of flip flops. Typically there is a common clock

594
input coming from outside which feeds all the flip flops in the group, and the register is used for storing binary data. Now, not only just storing: depending on how you connect the flip flops, there can be several different variations of registers. Specifically, registers can be of four different types depending on how you use them: parallel in parallel out, serial in serial out, parallel in serial out and serial in parallel out.

Now, we shall be discussing these variations in due course of time: what the main differences are, and how we can design registers that support these features. Just one thing to remember: if the register is such that either or both of the serial in and serial out modes are supported, which means one of the last 3 types, then we call it a shift register. A shift register is a special kind of register which supports serial in or serial out or both of these, fine. Now, let us look at these different types of registers one by one.

(Refer Slide Time: 03:24)

First, the simplest one: the parallel in parallel out register. I have a group of bits, a word; I just want to store that word in a register, and when required I want to read the value that is stored. So, as you can understand, I can take an array of flip flops; D flip flops will be the simplest, because in a D flip flop whatever you give on the input gets stored after the clock comes. So, if I have an array of D flip flops, I can very easily implement a parallel in parallel out or PIPO register.

So, the basic PIPO register is shown in this diagram. This is a four bit register; you see we are using four D flip flops. The parallel data input coming from outside is fed to the D inputs of these flip flops: these are D1, D2, D3 and D4.

595
Now, as you can see, the clock input is fed in parallel to all the flip flops. So, when the clock comes, suppose I have a clock pulse coming like this: if the flip flops are positive edge triggered, then whenever the positive edge of the clock comes, whatever I have given on D1, D2, D3, D4 will get stored in the flip flops and will be available on Q1, Q2, Q3 and Q4. This is how it works. Of course, after the clock comes there will be a small delay, determined by the propagation delay of the flip flops, after which the outputs Q will change state: whatever you are applying on D1 will go to Q1, and so on.

And these flip flops, as you can see, also have a clear input, which is active low, indicated by the bubble we are showing here; active low means that whenever clear is 0 the flip flops will be cleared. So, if you connect this clear input to all the four flip flops, then whenever you want to clear them to all zeros, you can do it. So, the basic parallel in parallel out register works like this.
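The behavior just described can be sketched in a few lines of Python. This is only a behavioral model, not a gate-level one; the class and method names are illustrative.

```python
# Behavioral model of a PIPO register built from D flip-flops: on every
# active (rising) clock edge Q takes whatever is on D, and an active-low
# clear forces all outputs to 0.
class PIPORegister:
    def __init__(self, width=4):
        self.q = [0] * width          # flip-flop outputs Q1..Q4

    def clock_edge(self, d, clear_n=1):
        if clear_n == 0:              # active-low clear wins
            self.q = [0] * len(self.q)
        else:                         # Q <- D on the clock edge
            self.q = list(d)
        return self.q
```

For example, `PIPORegister().clock_edge([1, 0, 1, 1])` stores and returns `[1, 0, 1, 1]`, and a subsequent edge with `clear_n=0` returns all zeros.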

(Refer Slide Time: 06:00)

So, as I said, when the active clock edge arrives (here it is the leading edge, the rising edge), the input word (this is a four bit register) will get stored in the register and will be available on the outputs. So, the PIPO register is very simple in concept, right.

596
(Refer Slide Time: 06:23)

Now, a PIPO register can be represented symbolically like this. We do not always have to draw all the four flip flops to show that it is a four bit register; we simply say that this is a PIPO register: there is a clock input (this symbol is the clock), there is a clear input, again with the active low symbol, and D and Q are shown in vector notation. You see this 4 here: on this line we have made a cross and written 4 by the side of it. This means that the line is actually a collection of four different lines, not 1.

So, the number of bits or number of lines I can represent just like this, and this is called vector notation. In vector notation, D actually means D1, D2, D3, D4 (there are four lines) and Q means Q1, Q2, Q3, Q4. This is how a PIPO register looks symbolically.

597
(Refer Slide Time: 07:35)

Now, let us add some features to this PIPO register. In the register that we have seen so far, the data gets stored whenever the clock comes. Now, you know the clock is a continuous signal; it keeps coming, but I may not always have data to be stored in the register from outside. So, there should be some kind of control or facility available to me as a designer, so that I can say: let the clock come, but only when I say load do you load the data, and at all other times do not disturb the content of the register; whatever was stored, let it be there.

So, here we are talking about the addition of a load signal. What we are saying is that we have a separate signal called load, and the idea will be like this: suppose the clock is coming, let us say this is my clock, and these are the active edges of the clock, the rising edges; and suppose I am giving the load signal like this. So, when the first clock edge comes, load is 0, so there will be no loading; when the second clock edge comes, load is high, so there will be loading; and when the third clock edge comes, load is again 0, so there will be no loading.

So, I can selectively specify when to load by suitably applying the load signal. But remember, the load has to be made high a little before the clock edge comes, because if you make them high together then the circuit may not accept the high level of load. Now, there are 2 approaches in which you can implement the load signal, as we shall see; the first is something called a gated clock.

598
(Refer Slide Time: 09:57)

What do we mean by a gated clock? We use an AND gate where clock and load are AND-ed together. The idea is that the clock signal may be coming continuously, but load may be high only for some time. So, only when both load and clock are high will the gate output go high; at other times it will be 0. This output of the gate is fed to the clock inputs of all the flip flops, so that they get loaded only when required.

Now, the issue is that this method may be very simple (you can implement it with a single AND gate), but there is a problem whenever the load and clock signals change at the same time. You see, load is a signal which you are applying from outside, so accidentally it may happen that you start to change the load signal exactly when the clock edge is coming. During that time there can be some error in loading or not loading: it might get loaded, or it might not.

So, this solution has a timing issue; if you do not apply the load signal with the exact timing, it can cause timing problems. But let us look at another solution, which is better in this respect.

599
(Refer Slide Time: 11:36)

Here we are separating out clock and load; we are not combining them using an AND gate as before. What we are doing here is using a multiplexer kind of circuit.

The multiplexer will select what is applied to the D input of the flip flop: is it the new data coming from outside, or something else? So, this approach is the better and recommended solution, as I said, where each flip flop in the register gets replaced by a circuit like this. Let us try to understand this circuit. You have a 2 line to 1 line multiplexer and you have this load control. If load is 0, which means you are not trying to load, then whatever is stored in this flip flop should remain.

So, in a register there will be a number of such flip flops; I am just showing one flip flop, but the clock is coming continuously, and whatever is on D will get loaded with every clock anyway. So, when load is 0, what do I do? I select whatever is already stored: the output Q1 is the 0 input of the multiplexer, so when load is 0, Q1 gets selected and comes to D, and the same value gets stored again; no change. But when load equals 1, I want to load the external data D1 which is coming; this is the 1 input of the multiplexer, so when load is 1, D1 comes in and gets stored. This is a very simple approach that you can use.
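The multiplexer scheme for one bit can be written down directly. This is a sketch of the recirculating behavior only; the signal names `load`, `d_ext` and `q` are illustrative.

```python
# One bit of the multiplexer-based load scheme: a 2-to-1 mux in front of
# the D input. With load = 0 the flip-flop recirculates its own Q, so the
# stored value survives every clock; with load = 1 the external bit wins.
def next_d(load, d_ext, q):
    return d_ext if load else q       # the 2-to-1 multiplexer

q = 0
history = []
for load, d_ext in [(0, 1), (1, 1), (0, 0), (0, 1), (1, 0)]:
    q = next_d(load, d_ext, q)        # clock edge: Q <- mux output
    history.append(q)
# history == [0, 1, 1, 1, 0]: Q changes only on cycles where load = 1
```

Notice that in cycles 1, 3 and 4 the external data is ignored and the old value recirculates, which is exactly the "do not disturb" behavior described above.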

600
(Refer Slide Time: 13:32)

And a register which has this kind of facility can be represented schematically very much like the previous one, with the addition of an extra input called load. So, clock is there, clear is there, but in addition there is a load. Whenever the clock comes, depending on the value of load, either the same value gets loaded back or the external value gets loaded.

(Refer Slide Time: 14:02)

Now, there is another kind of input which is required in many applications, as we shall see very shortly: the so called enable input. The idea is as follows. I have a register, and the

601
register output lines give me the data that is stored in the register; but there are applications where I sometimes want to electrically isolate the outputs. Electrically isolate means I want to send them to the high impedance or tri-state condition. So, there can be a separate enable input to the register as well: if I enable the register, only then will the outputs appear, but if I do not enable it, the outputs will be in the high impedance state. This is the idea.

So, as it states here, using the enable input the register outputs can be put in the high impedance state; but for this you need some kind of tri-state buffer connected to every flip flop output. I am showing the schematic here for one flip flop again. Suppose I have one flip flop of the register; instead of sending the output directly to Q1, we are using a tri-state buffer. What does a tri-state buffer do? If your enable is 1, meaning you are enabling it, then the output Q1 gets whatever is on Q; this is the normal mode of operation. But if you are not enabling it, enable equal to 0, then the output Q1 will be in the high impedance state, which we denote as Z. So, it is electrically isolated; this is the idea behind the enable.
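As a tiny behavioral model, a tri-state buffer is just a function of Q and enable; here the string "Z" stands in for the high impedance state (an illustrative convention, not electrical reality).

```python
# Behavioral model of a tri-state buffer: enable = 1 passes Q through,
# enable = 0 electrically isolates the output, modelled as the string "Z".
def tristate(q, enable):
    return q if enable else "Z"
```

So `tristate(1, 1)` is 1, `tristate(0, 1)` is 0, and `tristate(1, 0)` is "Z" regardless of the stored value.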

(Refer Slide Time: 16:11)

Now, let us take a concrete example where this kind of enable input is actually required. There are numerous situations where you do not have a single register but many registers: you can take the data from any one of the registers and you want to send it over a common bus to some other place. So, you may need to tie the outputs of several registers together; but in a

602
normal circuit you just cannot tie them together, right? Let us say one of the outputs is 1 and another output is 0; if you tie them together there will be a short circuit, a high current may flow, and the logic level may be 0 or 1 or something in between. So, there can be some error.

So, in such a situation, when you have a bus based architecture, which is very common in microprocessors and computer systems, we can use the enable input to resolve bus conflicts. I am showing a very simple example scenario. Just look at this: here RA, RB, RC and RD are all k bit registers; they are storing k bit numbers, and k can be anything: 8, 16, 32, 64. Now, the 3 registers RA, RB, RC have separate enable inputs, enable A, enable B and enable C, and the outputs of these registers I am tying together. Well, what do I mean by tying them together? Let us say k equals 3; I am showing it only for RA and RB. Tying them together means the first output of RA and the first output of RB I am tying together.

The second output of RA and the second output of RB I am tying together, and the third output of RA and the third output of RB I am tying together; similarly for RC. So, these are k bit registers, and the output you get after tying them together is also a k bit value. Then I have another register RD and there is an adder. One input of this adder comes from one of these registers over the bus, the other input comes from RD, and the output of the adder goes back to RD.

Now, here I am assuming that exactly one of the 3 enable inputs can be active at a time: exactly one of enable A, enable B, enable C will be made 1. Let us say I want to enable RA: I make its enable 1 and make the other two 0, so exactly one of them is 1, and the clock is fed to all the registers in parallel. So, what will happen? If enable A is active then the output of RA will come here, because RB and RC are electrically disconnected; they are in the high impedance state. So, RA will come to this adder input, the other input is RD, RD will get added to RA, and the result will be stored in RD.

Similarly, if enable B is 1 then RB will get added, and if enable C is 1 then RC gets added. So, here I have shown a very simple example, but in a practical scenario there can be numerous such examples of data coming from various places which you want to combine

603
and send over a single common bus; this common line that I am showing here is called a bus, and the 3 registers are feeding their data onto the common bus.

So, this is an example where the enable signal of the register can be very useful.
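The accumulation scenario above can be sketched as a simulation. The register names and the exactly-one-enable rule follow the lecture's figure; the helper `bus_value` and the concrete register contents are illustrative assumptions.

```python
# Sketch of the bus scenario: RA, RB, RC drive a common bus through
# tri-state outputs, exactly one enable is active per clock, and an
# adder accumulates the selected register into RD.
def bus_value(drivers):
    """Resolve a bus: exactly one driver may be enabled at a time."""
    active = [val for val, en in drivers if en]
    assert len(active) == 1, "bus conflict or floating bus"
    return active[0]

RA, RB, RC, RD = 5, 7, 2, 0            # illustrative register contents
for en_a, en_b, en_c in [(1, 0, 0), (0, 1, 0), (0, 0, 1)]:
    selected = bus_value([(RA, en_a), (RB, en_b), (RC, en_c)])
    RD = RD + selected                 # adder output loaded back into RD
# After three clocks RD == 5 + 7 + 2 == 14
```

The `assert` inside `bus_value` mirrors the physical constraint: enabling two drivers at once would be a short circuit, and enabling none leaves the bus floating.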

(Refer Slide Time: 20:42)

So, we have seen how we can implement flip flops using gates, and gates can be implemented using any kind of technology; we talked about TTL, CMOS and some other technologies as well. Now, let us look specifically at how we can implement registers using CMOS logic, because most of the circuits designed today are built using CMOS, and there are some very simple tricks you can use in CMOS design to reduce the complexity of the circuits. Let us see how.

What we are talking about here is how we can implement a register in CMOS using something called dynamic logic. So, what is the idea behind dynamic logic? Well, you all must have heard about dynamic memories in computer systems: the main memory that comes with computer systems is almost universally dynamic in nature. Dynamic memory is based on a very similar principle to the one we are going to discuss now, dynamic logic.

So, in dynamic storage we store information not in flip flops but as charge on tiny capacitors. But where do these capacitors come from? Let us take a very specific example: a NOT gate connected like this. This is a CMOS NOT gate. So, how does a

604
CMOS gate look? If you recall, there are 2 transistors, one a p type transistor and one an n type transistor, connected together; this is your input, from this common point you take the output, this is the power supply VDD and this is ground.

Now, if you look at this point, this line is feeding the gate inputs of the 2 transistors. For those of you who know how a transistor is fabricated: there is a substrate, the gate is formed on top of the substrate like this, there is an insulating region in between, and the substrate is typically connected to ground. So, you see, this looks like a parallel plate capacitor: one terminal on this side, one terminal on that side, and an insulating region in between. As an equivalent circuit, I can say that the line is effectively driving a capacitance.

So, when I change the input from 0 to 1, it is as if I am charging this capacitance, and when I change it from 1 to 0, it is as if I am discharging it. Now, the idea behind CMOS dynamic logic is that by either charging or discharging a capacitance, I can store a 0 or a 1 there, and you know a capacitor has the capacity to hold some charge for a certain period of time. So, once I charge a capacitor to the required voltage, either high or ground, representing 1 and 0, it will retain that charge for a certain amount of time. Of course, the charge will slowly tend to leak away; this is not a permanent kind of storage as in a flip flop, where the data remains stored as long as the power supply is there. Here the data will remain only for a short duration, of the order of milliseconds.

Now, in terms of circuit speeds today, milliseconds is a very long time. So, the charge you store on the capacitor can be retained for this period of time, after which it will tend to decay or disappear; this is the basic idea behind dynamic storage. The charge tends to leak away with time, and we need to refresh it if we want to keep the data stored.

605
(Refer Slide Time: 25:15)

So, let us see how we can implement a simple register using dynamic logic. This is the register; each of these rows represents one flip flop, or rather the equivalent of one flip flop.

You see, my data input comes in here and my stored output comes out there; this X is a CMOS transmission gate. So, when I set clock (or load) to 1, this gate is open, and after this X there are 2 inverters connected one after the other. I am interested in the capacitor here. When this gate is open, that is, when the control of the transmission gate is set to 1, whatever I applied on D1, a 1 or a 0, that voltage gets charged onto the capacitor; and when the control comes back to 0, the transmission gate closes again. Now this capacitor does not have any path to discharge; of course, there will be some leakage path through which it will discharge very slowly, but the data remains stored on the capacitor. And there are 2 inverters because whatever is stored there, the same value is wanted at the output;

the 2 inversions cancel each other. So, this is a simple storage cell. If you want a k bit register there will be k such rows, for D1, D2 up to Dk. This is a very simple PIPO register using CMOS dynamic logic.

606
(Refer Slide Time: 27:01)

Now, if we want to also add tri-state control, as we discussed for normal registers earlier, what we can do is add another set of transmission gates on the output side and connect their controls to an enable signal. So, if enable is 1, these transmission gates will be enabled and the outputs of the NOT gates will come through to Q1 to Qk; but if it is not enabled, the outputs will be in the high impedance state. Simple.

So, you see, for a normal conventional design we have to use gates to construct flip flops (D flip flops, master slave flip flops), and so many gates are required; but here the design is very simple. In CMOS technology every inverter requires 2 transistors, so the 2 inverters need 2 + 2 = 4; each transmission gate also requires 2 transistors, so the 2 gates need another 4. A total of 8 transistors is required to implement a single bit of a register with tri-state control.

But now the question comes: these charges tend to decay or dissipate, so how do we refresh the charge? Because we want to store data over longer periods of time.

607
(Refer Slide Time: 28:23)

So, there are simple ways available for this; here we are showing one possible solution, again for a single flip flop stage. Here we have our flip flop as already discussed earlier: there is a clock or load control and there is a tri-state enable control. What we are adding is another transmission gate here, which is fed with the NOT of the clock; you see, the clock is coming like this, and NOT of clock is just its complement. So, when clock is high this gate is open and whatever is on D1 gets stored as charge on this capacitor here.

But when clock is 0, new data is not being stored; however, this other transmission gate is now open, because it is activated by clock-bar, and clock-bar is high. So, what will happen? Whatever is stored at the output of the second NOT gate comes back to the input of the first NOT gate and recharges the capacitor. If it was 1, this node will be 0, this one will again be 1; that 1 is fed back and the same value gets recharged; and if it is 0, the same 0 will be fed back. In this way, in every clock period, when clock equals 0, the data gets automatically refreshed.
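Extending the toy cell model from before, the feedback gate can be modelled as a rewrite of the capacitor on every clock-low phase, which resets the leakage clock; again `RETENTION` and all names are illustrative.

```python
# Same toy cell with the refresh transmission gate added: when clk = 0,
# the feedback path controlled by NOT(clock) rewrites the capacitor from
# the inverter pair's output, so the charge never has time to decay.
RETENTION = 3  # illustrative retention limit, as before

class RefreshedCell:
    def __init__(self):
        self.cap, self.age = 0, 0

    def tick(self, clk, d):
        if clk:                        # clk = 1: sample new data
            self.cap, self.age = d, 0
        else:                          # clk = 0: feedback gate open
            self.age = 0               # recharge resets the leakage clock
        return self.cap if self.age <= RETENTION else None
```

Unlike the unrefreshed cell, this one holds its value indefinitely while the clock keeps running, because every clock-low phase recharges the capacitor.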

So, by the addition of just another 2 transistors, you also get this refresh mechanism. With this we come to the end of this lecture, where we have discussed the design of simple parallel in parallel out registers, talked about the different kinds of inputs that may be useful, like load and enable, and specifically looked at how to design a PIPO register using CMOS dynamic logic.

608
Thank you.

609
Switching Circuits and Logic Design
Prof. Indranil Sengupta
Department of Computer Science and Engineering
Indian Institute of Technology, Kharagpur

Lecture – 43
Design of Registers (Part – II)

So, in the last lecture we started our discussion on registers and looked at the design of parallel in parallel out registers. We continue with the discussion on the design of registers in this lecture as well.

(Refer Slide Time: 00:34)

Now, in this lecture we shall specifically be talking about shift registers. I mentioned during the last lecture that a shift register is a register which has serial in and serial out shifting facilities.

So, let us try to understand what these serial in, serial out and shifting facilities mean, and how we can design a register with them. Talking about a shift register: a shift register is first of all a register, which means it can store data; a k-bit shift register will be able to store k bits of binary data. But additionally, you can shift the data either to the left or to the right whenever required; there is a corresponding shift signal that can be used to do that. Now, what do we mean by shifting?

610
This I will illustrate with the example shown here. Here I am showing a 4-bit shift register, which is actually a right shift register; data will be shifted to the right. Let us see how it works. Here also we have used D flip flops, just like in a PIPO register, and the clock as usual is connected in parallel to all the flip flops; but the way the inputs are applied to the D inputs of the flip flops is different. The first flip flop gets its input from an external input called serial in.

The second flip flop gets its data input from the output of the previous one; similarly, D3 gets the output Q2, D4 gets the output Q3, and the output of the final flip flop is what we call the serial output. So, let us see how it actually works. Let us assume that the data stored in this register at a particular point in time is 1, 0, 1, 1, and on the external serial input I have applied a 0.

Now, a clock comes. Understand what happens: the external 0 is being fed to input D1, the stored 1 is being fed to input D2, the 0 is on D3 and the 1 is on D4. So, when the clock comes, these values get stored in the flip flops in parallel: the 0 gets stored, this becomes 0; the 1 gets stored, this becomes 1; the 0 gets stored, 0; and the 1 gets stored, this remains 1. So, earlier it was 1, 0, 1, 1; now the external data 0 has come in, and it becomes 0, 1, 0, 1.

Now, you see, effectively initially I had 1 0 1 1 and there has been a shift to the right: this 1 has moved here, the 0 has moved here, this 1 has moved here, a new bit has been shifted in, and the previous rightmost bit has been shifted out. So, in this way I can apply a new data bit on the serial input once every clock, and new data gets shifted in one bit at a time. This is essentially how a shift register works. Now, in this design I showed how to build a shift register using D flip flops, but you can also use JK or SR flip flops.
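The shift just described can be captured in one line of Python per clock edge. This is a behavioral sketch; the list ordering (Q1 first, Q4 last) is an assumption matching the figure.

```python
# Behavioral model of the 4-bit right shift register: on each clock edge
# every bit moves one stage to the right, the serial input enters at Q1,
# and the old Q4 appears as the serial output.
def shift_right(q, serial_in):
    serial_out = q[-1]                 # bit shifted out of the last stage
    return [serial_in] + q[:-1], serial_out

q = [1, 0, 1, 1]                       # the lecture's starting contents
q, so = shift_right(q, 0)              # apply serial input 0, one clock
# q == [0, 1, 0, 1] and so == 1: exactly the shift described above
```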

611
(Refer Slide Time: 04:53)

Like, for example, if you are using JK flip flops, this will be J2 and there will be another input K2. So, there will be Q1 here and its complement Q1-bar; Q1 will be connected to J2 and Q1-bar will be connected to K2. Like this the connections will be made.
connecting to K2 like this the connection will be there.

(Refer Slide Time: 05:15)

Similarly for SR flip flops. Now, there are some shift registers which have a separate shift enable signal SHIFT, just like the LOAD we used in a PIPO register. So, either you can use a gated clock or you can use a multiplexer. In a very similar way, the shift signal can be used in conjunction with the clock signal to enable the shifting: I can use a gated

612
clock (an AND gate of clock and SHIFT) to generate the clock signal for the flip flops, or I can use the multiplexer kind of circuit that we discussed earlier.

Now, for the shift register circuit that we have shown earlier, I am showing a so-called timing diagram. For sequential circuits, the timing diagram is very important for understanding how things work. Here I am showing the timing diagram of the 4 bit shift register whose circuit you saw earlier, where we have assumed that the initial state of the register is 0 1 0 1. So, what am I showing in this diagram? The clock signal, which is coming continuously; the red marked edges are the active edges (the flip flops are rising edge triggered); SI is the serial input; and the 4 flip flop outputs are also shown.

So, as I said, the initial state is 0 1 0 1; so it is 0, 1, 0 and 1. Let us see: when the first clock
edge comes here, the value of SI was 1, high. So, this 1 will get shifted in, this 1 will come
here, this 0 will come here, this 1 will come here; this 0 will come here. When the next clock
comes, this 1 will come here, the previous value of Q1 will come here, the previous value of
Q2 will come here and the previous value of Q3 will come here.

Similarly when the next clock comes here; now SI is 0. So, this 0 will come here, this 1 will
come here, this 1 will come here and this 0 will come here. So, there is a shifting that is going
on, bits are getting shifted Q1 to Q2, Q2 to Q3, Q3 to Q4 right.

So, if I want to show this diagram as a sequence, it will be like
this. I am showing the 4 outputs; initially it was 0 1 0 1. After the first clock it
becomes 1 0 1 0, after the second clock 1 1 0 1, after the third clock 0 1 1 0, and after the
fourth clock 1 0 1 1. Now see, there is a shifting going on: this 0 is getting shifted like this,
this 1 is getting shifted like this, and this 0 and this 1 likewise. That is why we call it
a shift register; this is the axis of time. So, with time the bits are getting shifted, this
is Q1 and this is Q4: Q1 to Q2, Q2 to Q3, Q3 to Q4, and finally one of the bits goes out. This is
essentially what a shift register is, and this is how a simple shift register works.
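
The shifting just described can be replayed in a few lines of Python (a sketch of my own; bit lists are written in the order Q1 to Q4):

```python
def shift_right(state, si):
    """One active clock edge of a SISO right-shift register: SI enters at Q1,
    every bit moves one stage to the right, and Q4 is shifted out."""
    return [si] + state[:-1]

state = [0, 1, 0, 1]            # initial state Q1 Q2 Q3 Q4 = 0 1 0 1
for si in (1, 1, 0, 1):         # SI values sampled at the four active edges
    state = shift_right(state, si)
    print(state)
# prints [1, 0, 1, 0], [1, 1, 0, 1], [0, 1, 1, 0], [1, 0, 1, 1],
# matching the sequence 1 0 1 0, 1 1 0 1, 0 1 1 0, 1 0 1 1 above
```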

613
(Refer Slide Time: 09:19)

Now, there are several variations of shift registers that are possible depending on how you are
connecting them, ring counter, twisted ring or Johnson counter, bidirectional shift register,
universal shift register and something called linear feedback shift registers. So, some of these
we shall be discussing in this lecture.

(Refer Slide Time: 09:46)

Let us first look at the ring counter. Well, the shift register circuit that you saw some time back
is essentially a serial in serial out register. Here the data is not loaded in parallel;
it is fed serially, 1 bit at a time, and on the other side 1 bit at a time comes out. So,

614
we call it serial in serial out, data is being fed 1 bit at a time and data is also coming out 1 bit
at a time.

So, this was a SISO register, Serial In Serial Out. Now, a ring counter is a special kind of SISO
register where, instead of applying an external serial input SI, what do we do? We connect the
Q output of the last flip flop to the D input of the first flip flop. So, how does it look?
You see, there are 4 stages; just recall, there will be multiple stages of flip flops, and
what we are saying is that the shift register is connected as usual.

But the output of the last stage we are feeding back as the input of the first stage; this is how a ring
counter is actually designed. Typically, we will load the ring counter with a single 1
and the remaining 0s, let us say 1 0 0 0 or 0 0 0 1, because if you load it with all 0s it will
always remain 0: this 0 will be fed back again and 0 will go in. But here this 1 will
cyclically rotate; the 1 will come here, then here, then here, and again come back. We shall see
some typical applications of such ring counters: we can generate something called a multiphase
clock or a sequence of synchronizing pulses. But there is one thing to notice about a k-bit ring
counter.

(Refer Slide Time: 11:59)

Let us say I have a 4 bit ring counter holding 1 0 0 0; after 1 clock it will become 0 1 0 0, after 2
clocks it will become 0 0 1 0, after 3 clocks, 0 0 0 1, and after 4 clocks it again becomes 1 0 0 0.

615
So, the sequence repeats after every 4 patterns; so we call it a modulo 4 counter, a mod 4 counter.
In general, with a k-bit ring counter we can implement a modulo k counter; after k
patterns, the sequence again starts with 1 0 0 0 and repeats.
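
The modulo-k behaviour can be checked with a short sketch (my own illustration; the feedback connection from the last Q to D1 is the only change from the plain shift register):

```python
def ring_step(state):
    """One clock edge of a ring counter: the Q output of the last
    flip flop is fed back to the D input of the first flip flop."""
    return [state[-1]] + state[:-1]

state = [1, 0, 0, 0]
history = [state]
for _ in range(4):
    state = ring_step(state)
    history.append(state)
# history: 1000 -> 0100 -> 0010 -> 0001 -> 1000, so a 4-bit
# ring counter repeats after 4 patterns: a modulo 4 counter
```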

(Refer Slide Time: 12:43)

So, this is how a ring counter looks. Here I have shown the diagram of a 3 bit ring
counter; there are 3 flip flops, and the output Q3 is connected to D1. We assume that initially
we load the flip flops with 1 0 0; so this has 1, this has 0 and this has 0.

So, if you again look at the timing diagram for this: initially it was 1 0 0, as I said.
After the first clock comes there will be a shifting: this 1 will come here, this 0 will come
here, and this 0 will come back to Q1. So, 0 1 0. At the next clock, this 1 will come here, 0
will be here and 0 will be here. At the next clock, the 1 again comes back: 1 0 0. At the next
clock the 1 again shifts: 0 1 0, then 0 0 1, and again 1 0 0; this repeats.

Now, if you look at the kind of pulses that are generated on Q1, Q2 and Q3: you see, on Q1
there is a pulse generated here, and for the remaining time it is 0; for Q2 the pulse is here
and for Q3 it is here. So, if I consider this whole thing as my time period, then in this time
period first Q1 is active, then Q2 is active, then Q3 is active.

So, I can say that Q1, Q2 and Q3 are generating non overlapping pulse trains, because after
Q1, Q2 and Q3, again Q1 starts, then again Q2 and Q3. So, in many applications we need something

616

called multiphase clocks, and using a ring counter we can very easily generate such
multiphase clocks. If you need a 4 phase clock, use a 4 bit ring counter and apply a clock of
sufficiently high frequency; you will be generating the clocks from the 4 outputs, and they will be
shifted in phase and non overlapping.

And just another thing I did not mention: you can see from this timing diagram that whenever a
clock edge comes, there is a small delay, a small gap, before the output of the flip flop
changes. This small gap is due to the delay of the flip flops: a value has been applied, a clock
edge comes, and only after some time does the output, say Q2, start changing.

So, this gap that you see here, for example, actually indicates that delay. So,
when you draw a timing diagram like this, this delay should also be shown, because the
flip flops are not ideal flip flops; whenever the clock changes, the outputs do not change
immediately.

(Refer Slide Time: 16:14)

There is a small delay, after which the outputs change; so this is how a ring counter works.
Next, let us come to a very small variation of the ring counter, called a twisted ring or a
Johnson counter. In a ring counter, what did we do? We fed the Q output of the last flip flop
to the D input of the first flip flop. In a Johnson counter, instead, we take the
Q' (complemented) output of the last flip flop, and that Q' we feed back to the D input of the
first flip flop. This is the only change; this is called a twisted ring or Johnson counter.

617
So, as I mentioned, it is specified here: we connect the Q' output of the last flip flop to the
D input of the first flip flop. And a Johnson counter can very easily be initialized to the
all 0 state, unlike a ring counter; if you initialize a ring counter to the all 0 state, it
will always remain in the all 0 state, because none of the flip flops can ever become 1, but
here they can.

And you will see that in the patterns a Johnson counter generates, the 0s and 1s
are consecutive, which may be useful in some applications. Another feature is that a
k-bit Johnson counter counts modulo 2k; for example, a 4 bit Johnson counter
will generate 8 unique patterns before repeating, twice of k. I will
illustrate this with an example.

(Refer Slide Time: 18:06)

Let us take a 4 bit Johnson counter, as I have shown here. There are 4 flip flops, and as you can
see, I have connected the Q' of the last flip flop to the D input of the first flip flop. And I
am showing, clock pulse by clock pulse, what is happening.

And I am assuming that the initial state is all 0. So, in this table, initially the
outputs are all 0s: 0 0 0 0. The only thing to remember is that the complement of the last
bit is fed to the first bit, D1. So, after 1 clock pulse: Q4 was 0, so 1 was fed back; this
becomes 1 and the other zeros are shifted, giving 1 0 0 0.

618
At the second clock pulse this bit was again 0, so the NOT of that, a 1, will be fed here, and
the bits will shift: 1 1 0 0. Similarly, at the third clock pulse this is again
0, so 1 will be fed and the bits will all shift: 1 1 1 0. At the fourth clock pulse it was again
0, so again 1 is fed in while shifting: 1 1 1 1.

Now, the last bit is 1, so now a 0 will be fed in while shifting: 0 1 1 1. The last bit is again
1, so again 0 is fed: 0 0 1 1; again 0 is fed: 0 0 0 1; and you finally land up again at the all
0 state. So, you see, there are 8 unique states, 8 unique patterns which are followed, at clock
pulses 0 up to 7, before the sequence repeats.
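
The 2k-state cycle can be generated with a small sketch (the helper name is my own, not from the slides):

```python
def johnson_step(state):
    """Twisted ring / Johnson counter: the complement of the
    last flip flop's Q is fed to the D input of the first."""
    return [1 - state[-1]] + state[:-1]

state = [0, 0, 0, 0]            # a Johnson counter may start at all 0
cycle = []
while True:
    cycle.append(tuple(state))
    state = johnson_step(state)
    if tuple(state) == cycle[0]:
        break
print(len(cycle))   # 8 unique states for k = 4 flip flops, i.e. 2k
```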

(Refer Slide Time: 20:26)

So, this is something you have to remember: for a k-bit Johnson counter, as I
said, there are 2k unique patterns that you can generate. Another point is
that the 1s and 0s are consecutive. If you look at any one of the outputs, let us say
Q2, and at the patterns that are generated on it, you see that all the 1s
are consecutive, grouped together.

The 0s will be consecutive, again there will be 1s, again there will be 0s. So, the blocks of 1s
and 0s are all consecutive. In a cycle, say starting from here up to 7, you see the ones are all
consecutive; there are 4 ones, all consecutive. This is a property of the Johnson counter.

619
(Refer Slide Time: 21:21)

Let us now look at a more flexible kind of shift register, the bidirectional shift register.
The shift register circuit that we saw earlier was a right shift register. The idea was
something like this: there are 4 flip flops, and we connected the output of this one to the input
of the next, the output of this one to the input of the next, and so on; so it was a right shift
register.

But if I want to implement a left shift register, what do I do? For implementing a left shift
register, the output of this one should be fed to the input of this one, the output of this one
to the input of this one, the output of this one to the input of this one, and the output of
this one will be the final output. So, this will be shifting left; this bit will go here, this
bit will go here, this bit will go here.

So, we will have to connect the Qs and Ds in the reverse order. Essentially the
requirement is this: we need to use multiplexers so that the proper inputs are applied to the
D inputs, and I am assuming that there is a separate control input called L/R'.

620
(Refer Slide Time: 22:42)

L/R': if it is 1, it means left shift; if it is 0, it means right shift. So, let us see how the
design looks.

(Refer Slide Time: 22:59)

This is the design of the bidirectional shift register. You see, there are 4 flip flops, just
like a normal register. Now, I have not connected Q1 to D2 or Q2 to D3 directly; rather, I am
using 4 multiplexers. Just see: all these multiplexers have this L/R' as the select
input, and I have suitably applied the inputs to the multiplexers.

621
Let us assume I am trying to do a right shift, which means L/R' is 0; right shift means the 0
inputs of the multiplexers are selected. So, this SI will be coming here,
Q1 will come here, Q2 will be here and Q3 will be here. This is just like
how we connected the register earlier: Q1 was connected here, Q2 was here and Q3 was
here; this is the normal right shift.

(Refer Slide Time: 24:18)

But if you want to do a left shift, then my L/R' would be 1. Now, if L/R' is 1, let
us see what will happen; now it will start from the right side. My SI will be coming
here, because my data will now be fed to the rightmost stage; Q4 is coming here just like I
showed, Q3 is coming here and Q2 is coming here.

So, the shifting will be happening in the reverse direction: Q4 will be coming
here, Q3 will be coming here, Q2 will be coming here, Q1 will get lost, and a new value will be
entering at Q4. So, this is how a bidirectional shift register can be designed: a normal
register where the D inputs are properly selected by using multiplexers, under the control
of this L/R'.
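
The multiplexer selection can be summarized in one function (a sketch of my own; the argument l_r stands for the L/R' control line):

```python
def bidir_step(state, si, l_r):
    """Bidirectional shift register clock edge.
    l_r = 0: right shift, SI enters at Q1 and Q4 drops out.
    l_r = 1: left shift, SI enters at Q4 and Q1 drops out."""
    if l_r:
        return state[1:] + [si]
    return [si] + state[:-1]

bidir_step([1, 0, 1, 1], si=0, l_r=0)   # right shift gives [0, 1, 0, 1]
bidir_step([1, 0, 1, 1], si=0, l_r=1)   # left shift gives [0, 1, 1, 0]
```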

622
(Refer Slide Time: 25:18)

Now, the last kind of shift register that we talk about today in this lecture is the universal
shift register. A universal shift register has some added functionality: it is a bidirectional
shift register plus something more. What more? The inputs that we apply can be
either in serial form or in parallel form, and likewise the outputs.

A universal shift register looks like this: there are serial inputs available for right shift
and left shift; there can be 2 separate serial inputs. But in addition, you can also load the
data in parallel. Now, in the shift register circuits that we have seen so far, there were no
parallel data inputs. In this universal register, facilities for parallel data input and also
parallel data output are both available.

So, there is a D input which can be fed from outside, and there is a Q output that is
available outside. But for loading, you see, there is no separate load control line; rather,
there are 2 control signals, S1 and S0.

Now, I can use these S1 S0 combinations as I am showing; this is S1 and this is S0. If they
are 0 0, it means no change; 0 1 means right shift, 1 0 means left shift, and 1 1 means
parallel load. So, I can load it in parallel, I can do a shift left, I can do a shift right, and
the output is also available anyway. Let us see how I can implement this; it can be designed
in a way similar to the bidirectional shift register.

623
(Refer Slide Time: 27:12)

But we need bigger multiplexers; there are 4 modes to be selected, so I need to use 4 to 1
multiplexers. Recall that the multiplexer controls are S1 and S0. For 0 0,
meaning no change, the 0 input of each multiplexer is selected: this Q1 I am connecting back
here, Q2 I am connecting here, Q3 I am connecting here. So, the same value is getting stored:
no change.

0 1 means shift right. So, for 0 1, the right serial input is fed here, Q1 is fed here,
Q2 is fed here; it is just like a serial shift right. And 1 0 means serial shift left: here the
left serial input is fed here, Q3 is fed here, Q2 is fed here; this is shift left. And 1 1 is
parallel load: you see, D1 is fed here, D2 is fed here, D3 is fed here from outside, and the
data will get stored. So, this is how you can design a universal shift register; a universal
shift register is a circuit using which you can do all the kinds of register operations:
parallel load, serial input, serial output, everything.
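
The four modes selected by S1 S0 can be captured in one function (a sketch of my own; the argument names si_r, si_l and d are mine, not from the slides):

```python
def universal_step(state, s1, s0, si_r=0, si_l=0, d=None):
    """Universal shift register clock edge.
    S1 S0 = 00: no change, 01: shift right, 10: shift left, 11: parallel load."""
    if (s1, s0) == (0, 0):
        return state[:]                 # hold: each Q feeds back to its own D
    if (s1, s0) == (0, 1):
        return [si_r] + state[:-1]      # serial shift right
    if (s1, s0) == (1, 0):
        return state[1:] + [si_l]       # serial shift left
    return list(d)                      # parallel load from the D inputs

universal_step([1, 0, 1, 1], 1, 1, d=[0, 0, 1, 0])   # loads [0, 0, 1, 0]
```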

But in some specific application you may need just a PIPO register or a simple shift
register, in which case you do not need all these multiplexers; only if you really need all the
operations do you go for this kind of complex universal shift register. So, with this we come
to the end of this lecture.

So, in this lecture we talked about shift registers and the various different types of shift
registers that are useful in applications, like the ring counter, the Johnson counter, the
bidirectional shift register and the universal shift register. We shall be continuing with our

624
discussion in the next lecture, where we shall be talking about a fifth type of shift register
called the linear feedback shift register. We shall also discuss some of the practical
applications of these registers and shift registers.

Thank you.

625
Switching Circuits and Logic Design
Prof. Indranil Sengupta
Department of Computer Science and Engineering
Indian Institute of Technology, Kharagpur

Lecture – 44
Design of Registers (Part – III)

So, in this lecture we continue the discussion on registers that we carried out over the last
couple of lectures. If you recall, we discussed some of the different kinds of registers,
namely the parallel in parallel out or PIPO registers, and several varieties of shift
registers. So, we continue with our discussion in this lecture, Design of Registers, the third part.

(Refer Slide Time: 00:47)

So, we start with another variation of the shift register. We discussed four different
types of shift registers in the last lecture, namely the ring counter, the Johnson counter, the
bidirectional shift register and the universal shift register.

Now, here we consider another kind of shift register which is very useful in many
applications, namely the linear feedback shift register. Pictorially, a linear feedback shift
register looks like this. If you look into it, there is one part which looks like a conventional
shift register: forgetting the EXOR gate for the time being, the four D flip-flops connected in
a chain, in cascade, look like a normal right-shift shift register. And we call the whole
circuit a linear feedback shift register.

626
Now, I am not going into the formal definition of linearity, but it would suffice if you
remember that the exclusive-OR function is linear. So, in a Linear Feedback Shift Register, or
LFSR in short as we call it, the idea is that we have an exclusive-OR gate, and to the input of
this exclusive-OR gate we feed some of the outputs of the LFSR. Here, for example, we have
taken Q1 and Q4; these are called the taps. We take some taps from the outputs of the
shift register and feed them to the inputs of the EXOR gate, and the output of the EXOR gate
feeds D1, the input of the first flip-flop. So, the input is not coming from outside; rather,
the exclusive-OR gate is generating the next input that is to be shifted in. Basically, a linear
feedback shift register looks like this.

(Refer Slide Time: 03:01)

Let us look at the kind of patterns that are generated by this LFSR. I am showing the same
example LFSR here, but one thing to observe: if the initial state of the LFSR is all 0, let
us say 0000, then the inputs of the EXOR gate will also be all zeros, and if they are all zeros
the output of the EXOR gate will also be 0, and 0 will get shifted in.

So, this 0000 after one clock will remain 0000; it will never change. Hence, in an LFSR we
should not initialize the register to the all zero state, because if you do so, the state will
never change. What do we do instead? We initialize it with some suitable nonzero state,
as in the example that I have shown here. Here I have shown the clock pulses; pulse 0 means
this is the initial state. I am assuming that initially Q4 is 1 and the other three are zeros.

627
(Refer Slide Time: 04:22)

Now, if we look at the way we have connected it, basically D1 is nothing but Q1 exclusive-OR
Q4. You take the exclusive-OR of Q1 and Q4, and whatever it is, that is the next bit that
will be shifted in. The first thing about this LFSR is that it is a shift register, so the
shifting is taking place just like in a normal shift register; with the clocks, the bits are
getting shifted like this.

Now, the point to notice is what the next bit shifted in is, out here. Let us look into
this. The first new bit will be the exclusive-OR of Q1 and Q4, that is of 0 and 1, which is 1.
So, 1 gets shifted in and the other three bits are simply shifted. The next one is the
exclusive-OR of 1 and 0, which is 1; then again the exclusive-OR of 1 and 0 is 1, and again 1;
then the EXOR of 1 and 1 is 0, the EXOR of 0 and 1 is 1, and like this it goes on, until near
the end 0 and 0 is 0, 0 and 0 is 0, and so on.

Now, see that we started with the state 0 0 0 1, and after 15 clock
pulses we again get back 0 0 0 1.

628
(Refer Slide Time: 06:13)

So, in this example there are 15 unique patterns that are generated. Now,
one property of this LFSR is that the patterns which are generated bear good randomness
properties.
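
The cycle can be verified with a short sketch of this same LFSR, where D1 = Q1 EXOR Q4 (my own code, not from the lecture):

```python
def lfsr_step(state):
    """4-bit LFSR with taps Q1 and Q4: the EXOR of the
    two taps is the next bit shifted in at Q1."""
    return [state[0] ^ state[3]] + state[:-1]

state = [0, 0, 0, 1]            # seed Q1 Q2 Q3 Q4 = 0 0 0 1
count = 0
while True:
    state = lfsr_step(state)
    count += 1
    if state == [0, 0, 0, 1]:
        break
print(count)   # 15 = 2**4 - 1: every nonzero state is visited once
```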

(Refer Slide Time: 06:46)

Like, for instance, if you treat the 4-bit output of this LFSR as a binary number and
look at the decimal equivalents: 0 0 0 1 means 1, 1 0 0 0 means 8, 1 1
0 0 means 12, 1 1 1 0 is 14, 1 1 1 1 is 15, 0 1 1 1 is 7, 1 0 1 1 is 11, then come 5, 10, 13,
6, 3, 9, 4 and 2. You see, these numbers are quite random. There are some standard tests of

629
randomness, and it can be shown that these LFSR generated patterns exhibit good
randomness properties, with the exception of one test: because the bits are shifted, the
pattern generated at one stage and that at the next stage are just
shifted by 1 bit in time. Other than that, all the other randomness properties are very well satisfied.

(Refer Slide Time: 07:47)

So, in a practical application, an LFSR is typically initialized with a single 1 and
the rest 0s, as in the example. And the point to notice is that in the example that we took,
the 4-bit LFSR generated 15 distinct patterns, that is, 2^4 - 1 patterns. If we choose the
tapping points in a suitable way, an n-bit LFSR can generate 2^n - 1 distinct patterns,
everything except the all-0 pattern. As we have said, the all-0 pattern cannot be a
state in the cycle, because if we initialize the LFSR to the all-0 pattern it will indefinitely
remain in the all-0 state; it will never come out.

So, by suitably choosing the taps we can generate a very large number of distinct patterns.
For example, if I have a 16-bit LFSR, then 2^16 - 1 is 65535; that many distinct
random patterns can be generated.

Now, there are many applications where such random patterns are used. We shall be
discussing one such application later in this course, namely testing
a digital circuit for faults, fault testing. There also, these randomness properties are very
well exploited in some of the techniques; this we shall see later.

630
Now, let us look at some of the typical applications of registers, you have seen the different
kinds of registers. Registers are used almost everywhere in digital system design. In any
digital system design which are sufficiently complex you will find one or more registers
inside the system. Let us take a couple of examples, a broad examples.

(Refer Slide Time: 10:06)

The first example that we take is that of parallel-to-serial and serial-to-parallel conversion;
this is used for data transmission. Let us try to motivate why we need this. Imagine that you
have a computer here, computer system A, and there is another computer system out here; this
is B. We want to transmit some data from A to B. Well, B can be a computer, or it can be some
other peripheral device like a printer, but you are required to transfer some data.

Now, inside A or B the data representation is parallel; data are stored in parallel in
registers. But when you transfer the data, because of various cost considerations, you
normally transmit data in a serial fashion. Serial fashion means there is a single wire over
which the bits are transmitted, 1 bit at a time, serially. The advantage is that the cost of
the cable gets reduced: for transmitting 16 bits at a time we would need 16 wires, but
here you will be needing only 1 wire to carry the data. So, the cable becomes
cheaper, the protocol becomes simpler, and the interfacing hardware also becomes simpler.

So, although inside A and B you have a parallel representation of data, what we are saying is
that while you are communicating, you communicate in a serial fashion. So, there is a
necessity to convert the parallel data into serial form, and again at the receiving end from serial

631
to parallel form. The motivation I have already mentioned: many applications
require serial communication. The advantages are fewer wires, and because of the fewer
wires, the chances of loose connections and other faults are less, which leads to
higher reliability and, of course, lower cost. And this data conversion, parallel to serial and
serial to parallel, can be very conveniently done using shift registers. How? Let us look at it.

(Refer Slide Time: 12:43)

Let us say at the transmitting end, A, the data is stored in parallel in a register, and
similarly at the receiving end, B, the data will be received into a register, because inside A
and B all data processing is done in parallel. But now what we are saying is: let the register
at A be a parallel in serial out register, and at the other side, at B, a serial in parallel out
register. What does this mean? Here, this data you can load into the register in parallel;
so, this will be something like a universal shift register. You can load it in
parallel, and then you will be shifting the bits out serially, bit by bit: parallel in, serial
out. So, the bits will be transmitted serially.

So, this kind of interfacing hardware is present in almost any communication system inside
the computers that we see today, and these devices are normally called universal
synchronous asynchronous receiver transmitters. If you look at the literature, you will see
that there is something called a USART: Universal Synchronous Asynchronous Receiver
Transmitter. Now, inside a USART, this kind of PISO and SIPO register is there; for
transmission you need the PISO mode, and for receiving, the SIPO mode.

632
This is how serial data transmission and reception take place during data communication.
So, this is one of the very important applications of registers, specifically shift registers.
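
The PISO/SIPO pairing can be sketched in Python (hypothetical helpers of my own; each next() on the generator stands for one clock of the serial line):

```python
def piso(word):
    """PISO transmitter: load the word in parallel, then shift bits out serially."""
    for bit in word:
        yield bit

def sipo(line, k):
    """SIPO receiver: shift k serial bits in, then read the register in parallel."""
    reg = [0] * k
    for _ in range(k):
        reg = reg[1:] + [next(line)]    # one bit shifts in per clock
    return reg

word = [1, 0, 1, 1, 0, 1, 0, 0]
assert sipo(piso(word), 8) == word      # B recovers A's parallel word exactly
```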

(Refer Slide Time: 15:10)

Let us look at another very important application of registers; but here we are not talking
about shift registers, rather normal parallel in parallel out registers. There are many designs
where we need registers to store some temporary data, and there this kind
of PIPO register is required. Let us take a couple of examples to illustrate how such designs
can be carried out and how these registers can be used there. So, this application talks about
something called a data path.

Registers can be used in a data path for performing calculations; we shall explain what is
meant by data path. The basic idea arises while you are designing a relatively complex
system. Normally, for small systems, as we have seen, if it is a sequential
circuit you break it up into some kind of specification in terms of a state table or a state
transition diagram, and then, following some systematic steps, you arrive at the final design.
But let us take an example: suppose I want to implement a circuit that computes the greatest
common divisor, GCD, of two numbers. You see, designing such a circuit using this FSM
mechanism will be extremely difficult. There should be some other, simpler method to
approach this kind of complex problem. So, any kind of higher level description like this
requires a different approach.

633
The approach says that you need to separate out the data path and the control path of the
design. So, what are the data path and the control path? The data path will contain all the
basic hardware units that are required for the calculation. For example, for the GCD
calculation that I mentioned, you will be requiring some registers to store the input
numbers, some registers to store intermediate data, some adders and subtractors, maybe
some comparators; this is the kind of hardware you will be requiring to carry out the step by
step calculation for computing the GCD, and it will constitute the data path of the design.

So, it will contain the basic hardware units that are required for carrying out calculation
and also storage. Some examples are mentioned here: combinational circuit modules
like adders, multipliers, subtractors, comparators and multiplexers, and storage elements like
flip-flops and registers. And you need a separate unit called the control path. The idea is
like this: the data path contains the basic circuits that are required to carry out the
calculation, but someone has to tell it what calculation has to be done in what sequence of
time. So, there must be some other circuit which will be generating the signals in the proper
sequence to carry out the required operations; that is the role of the control path.

So, the control path is nothing but a finite state machine. The control path is a finite state
machine that will be generating control signals which will be activating the data path. For
example, if there is a register, the control path can say: clear the register, load the
register, and so on. This we shall be explaining with the help of some examples, so that you
have an idea of how these things are actually done.

634
(Refer Slide Time: 19:26)

So, let us take one example which is quite simple in terms of complexity. Here we are
saying: let us try to multiply two numbers by repeated addition. What do we mean by
this? Suppose I want to multiply 6 by 5, 6 × 5; the simple approach is that I can add 6 to
itself 5 times, 6 + 6 + 6 + 6 + 6. That is one way to carry out multiplication. Of course, it
may not be very efficient; if the numbers are very large we will have to add many times. But
this is a very simple method.

Let us try to see how this can be designed. Here I am assuming that the second
number is nonzero, because we are not checking at the start whether the second number is 0 or
not. The steps are explained in flowchart form like this: the two numbers are A and B, these
are first given, and P is the final product, which is initialized to 0. And in a loop we
repeatedly add A to P, P plus A, and the result we put back into P; after every addition we
decrement B and test whether B has reached 0 or not. If it is not 0, we go back.

So, as I said, if A is 6 and B is 5, then first P is initialized to 0. We make P equal to P plus
A; A is added, so P becomes 6. B is decremented by 1 and becomes 4. Is B 0? No, so we again
go back; again P equal to P plus A, so it becomes 12; decrement B,
it becomes 3; is B 0? No, go back; again add A, 18; decrement B, not 0, go back. Again
add, 24; decrement B, not 0; again add, 30; decrement B, and now B becomes 0. So, B has
become 0, you come out, and whatever is in P is your final result.
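
The flowchart translates directly into code (a sketch; like the lecture, it assumes the second number is nonzero, since B is tested only after decrementing):

```python
def multiply(a, b):
    """Multiply by repeated addition: P <- P + A, B <- B - 1,
    repeat until B reaches 0. Assumes b > 0."""
    p = 0
    while True:
        p = p + a       # P = P + A
        b = b - 1       # decrement B
        if b == 0:      # is B 0? if not, go back
            break
    return p

print(multiply(6, 5))   # 30
```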

635
Now, it is up to the designer to decide what things are required to implement this
flowchart. Intuitively speaking, what do you need? You need two registers
to store A and B, one register to store the product P, and an adder circuit. Well, for
B equal to B minus 1 you can use a subtractor, but because you are just subtracting
1, we shall see later that we can use a counter to do it: a counter that can count up or
count down; a down counter can decrement by 1. So, the register B that we use
can be a down counter. Along with a comparator that checks B against 0, all of this
constitutes the data path, and the control path will be generating signals such that the
computation is carried out in the sequence specified by the flowchart.

(Refer Slide Time: 23:10)

Let us see how the data path will look. Just see: you have a register A here and a
register B here; let us look at it in a little more detail. This register A has one control
input, loadA, which means it is a parallel in parallel out register. So, from the outside
data-in lines you can load new data into A. Similarly, you can load data into B; for that also
a load signal, loadB, is there. And you may also be required to decrement B by 1; for that
there is another control signal, decrementB. P holds the partial product, which initially we
will have to set to 0; so there is a clear control, clearP, which sets it to 0. Every time A
plus P is computed, the result goes back into P, and for that you will have to load P again,
loadP. And there is a comparator circuit which will be generating an output telling whether B
is equal to 0 or not.

This is basically the data path you require to carry out the computation of multiplication;
but in what sequence? That is where the control path comes into the picture.

(Refer Slide Time: 24:45)

So, for the control path the idea will be like this: there will be a data path like this and
a control path like this. The signals I have already mentioned: these are the control
signals for the data path, which the control path will be generating, and this equal-to-0
signal will be an input for the control path. And these are some external signals: start,
done, and an incoming clock.

(Refer Slide Time: 25:07)

For the control path I am not showing the complete design; I am showing you that it will be
an FSM. This is the same flowchart drawn in a slightly different way: these are the
different steps of computation that you need to carry out, and S0 to S4 are the 5 states.
And this you can encode as a finite state machine like this, where you go sequentially
through S0, S1, S2, S3, and you remain in the S3 loop until the equal-to-0 condition becomes
true; then you come out.

So, here I have shown a simplified state transition diagram. The idea is that whenever you
are in a particular state, for example S2, you have to generate the control signals for
loading data into B, loadB, and for clearing P, clearP (P equal to 0). Similarly, when you
are in state S3 you will have to load the value of P, that is, loadP, and decrementB, and so
on; this is how it will work.
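The state-to-signal idea can be tabulated in a small Python sketch. Only the S2 and S3 rows come from the lecture; the signals shown for S0, S1 and S4, and the helper next_state, are an assumed completion for illustration only.

```python
# Control-signal table for the multiplier FSM (states S0..S4).
# S2 and S3 follow the lecture; the other rows (S0 idle, S1 loading A,
# S4 done) are a plausible completion and should be treated as assumed.
CONTROL_SIGNALS = {
    "S0": [],                       # idle, waiting for start
    "S1": ["loadA"],                # load operand A   (assumed)
    "S2": ["loadB", "clearP"],      # load B, clear P  (as in the lecture)
    "S3": ["loadP", "decrementB"],  # P <- P + A, B <- B - 1
    "S4": ["done"],                 # computation finished (assumed)
}

def next_state(state, b_is_zero):
    """S3 loops on itself until the comparator reports B == 0."""
    if state == "S3":
        return "S4" if b_is_zero else "S3"
    order = ["S0", "S1", "S2", "S3"]
    if state in order[:-1]:
        return order[order.index(state) + 1]
    return state
```

So in each clock cycle the control path looks up the signals for its current state, asserts them in the data path, and moves to next_state.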

(Refer Slide Time: 26:19)

Let us take another example. This one is somewhat similar but a little more complex: we want
to compute the GCD of two numbers, the greatest common divisor, by repeated subtraction. So,
what is the basic approach? The two given numbers are A and B, and we repeatedly keep
subtracting the smaller number from the larger. That means you have two numbers A and B; you
compare them, and if A is less than B then you do B equal to B−A. If A is greater than B you
do A equal to A−B. But if they are equal, you are done; you can output either A or B as the
final result.

So, you see, for this you again need two registers A and B, you need a comparator, and you
need a subtractor. And of course you need some other circuitry, because sometimes you are
doing B−A and sometimes you are doing A−B.
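The repeated-subtraction rule can be written as a short Python sketch; this is a software model of the behaviour, not the hardware design itself, and the function name is ours.

```python
def gcd_repeated_subtract(a, b):
    """GCD of two positive integers by repeated subtraction."""
    while a != b:          # comparator: continue while A != B
        if a < b:
            b = b - a      # A < B: do B <- B - A
        else:
            a = a - b      # A > B: do A <- A - B
    return a               # A == B: either register holds the GCD
```

For example, A = 24 and B = 16 goes 24,16 then 8,16 then 8,8, giving 8.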

(Refer Slide Time: 27:27)

So, a possible data path can be like this: the two registers A and B, and this multiplexer
selecting whether you are doing A minus B or B minus A. The first input of the subtractor
can be either A or B; this multiplexer selects that. Similarly, the second multiplexer
selects the second input of the subtractor, which again can be either A or B. This
comparator compares the values of A and B, whether one is less than, greater than, or equal
to the other, and this multiplexer will either load data into A and B from outside, via
data in, or, during operation, load the output of the subtractor back into A or B,
depending on whether A was less than or greater than B. So, this is the data path: you can
see you require two registers in this case plus some other combinational blocks.

(Refer Slide Time: 28:30)

In a similar way, the block-level diagram will look like this: the data path is here, these
are all the control signals which are required, and these are the signals generated by the
comparator (less than, greater than, equal to), based on which the control path takes its
decisions.

(Refer Slide Time: 28:56)

So, the flowchart for the control path is again shown here; it is the same flowchart drawn
in a slightly different way, in terms of the operations on the data path I have shown. Here
you can see there are 6 states. I recommend you compare this with the data path and verify
whether these operations are being carried out in the correct sequence or not. And the FSM
will now look like this: S0, S1, S2, then depending on the comparison it can go to S3, S4 or
S5; S5 means you are done. From S3, after the subtraction, when you go back, the next
comparison could take you to S5; similarly, from S4 you can also go to S5. So, these red
lines show the different paths indicating that your computation is over, ok.

So, the idea is very simple: you start with the flowchart and decide what hardware you need
to implement the basic operations mentioned there; that is your data path. Then you
translate the same flowchart into some kind of finite state machine specification. Here I
have not shown how to design that FSM; it is fairly simple because the FSM structure is
fairly simple, and you already know how to do it. From there you design the FSM, and the FSM
will generate the control signals for your data path, which will allow you to execute the
final operation.

So, with this we come to the end of this lecture. Over the last three lectures we discussed
the various kinds of registers, and in this lecture we looked at a couple of applications.
We shall continue the discussion in the next lecture, where we shall be starting on the
design of counters.

Thank you.

Switching Circuits and Logic Design
Prof. Indranil Sengupta
Department of Computer Science and Engineering
Indian Institute of Technology, Kharagpur

Lecture – 45
Design of Counters (Part – 1)

In this lecture we shall be discussing the Design of Counters, specifically a kind of
counter called the ripple counter; we shall explain what a ripple counter is in due course.
So, the title of this lecture is Design of Counters, Part 1, ok. The kind of counter that we
are considering here is the binary counter.

(Refer Slide Time: 00:47)

So, what do we mean by a binary counter? The first thing is that a binary counter is
designed using a set of flip-flops whose states change in response to pulses applied at the
input.

Say I have a counter, this is my counter, and I am applying some pulses at the input; well,
we can call it a clock, we can call it anything else, but some pulses are coming at the
input. And this counter is supposed to count how many pulses have come, in the combined
state of the flip-flops. For example, if this is a 4-bit counter, then at the output I will
have a 4-bit number coming out; this is what I refer to as the combined state of the
flip-flops.

This will be the binary equivalent of the total number of pulses that have been applied; if
it is 4-bit, it will start with 0, then 1, then 2, then 3 and so on. This is what a binary
counter essentially is, right.

(Refer Slide Time: 02:10)

Talking about the types of counters: broadly speaking, counters can be either asynchronous
or synchronous. Let us first try to see the differences between these two.

(Refer Slide Time: 02:23)

First, the asynchronous counter; this kind of counter is sometimes also referred to as a
ripple counter. Let us explain why we call it a ripple counter and why it is asynchronous, ok.

If you recall, we encountered this term ripple earlier in the context of the design of
adders, the ripple carry adder. In a ripple carry adder the carry propagates from one full
adder stage to the next; that is what we referred to as the rippling effect. Here also a
similar rippling effect comes in, as we shall see.

The first thing is that this kind of asynchronous or ripple counter is the easiest to
design; it is the simplest and requires the minimum amount of hardware for implementation.
Let us see how it can be designed. Now let us try to explain these 2 terms, asynchronous and
ripple. We call this kind of counter asynchronous because the flip-flops (it can be a 4-bit
counter, there will be flip-flops) are not driven by the same clock, unlike a register. In a
register design, if you recall, all the flip-flops were driven by the same clock signal, but
in an asynchronous counter we are not using the same clock to drive the flip-flops, which
means the flip-flops are not changing state at the same time; they are not synchronous. So,
asynchronous means not synchronous, and we shall see that this kind of counter is very
conveniently designed using T flip-flops.

Next, let us see why they are called ripple counters. Take an example: for a 4-bit counter
the current state is 1 1 1 1, which means 15. Now, if another clock pulse comes, the count
is supposed to be 16, but in 4 bits you can only store a maximum of 15.

So, if another pulse comes, the counter will reset to 0; from 15 you go back to 0, from
1 1 1 1 to 0 0 0 0. Now, in a ripple counter what happens is that these four flip-flops do
not change state simultaneously. What will happen? The first flip-flop will change state
first, then the second flip-flop, then the third, then the last one. So, this was the
initial state, this was the final state, and there are 3 intermediate or temporary states.
When you apply a clock pulse or some signal at the input, the flip-flops do not change state
simultaneously; there is a rippling effect: the first flip-flop changes, which causes the
second flip-flop to change, which causes the third one to change, which causes the fourth
one to change.

So, this is similar to the rippling effect that we talked about; this is why it is called a
ripple counter. Now, the problem with such a ripple counter, as you can see, is that the
transition from 1 1 1 1 to 0 0 0 0 is not smooth; some intermediate glitches or pulses may
come in between. So, if the output of the counter is driving some other circuit, these
unwanted transitions may cause errors because of the glitches. But if such glitches are not
really a problem, then you can use this kind of counter, because designing it is very
simple, right. This is the asynchronous counter.

(Refer Slide Time: 06:58)

Now, in contrast, a synchronous counter is characterized by the fact that the same clock
signal feeds all the flip-flops; and because the same clock signal feeds all the flip-flops,
whenever the active edge of the clock comes, all the flip-flops change state simultaneously.
The intermediate states I just mentioned for the ripple counter will not appear, ok. So, all
flip-flops change state simultaneously, which means no glitches at the outputs. Such
counters are typically faster than asynchronous counters because there is no rippling
effect, but the drawback is that they need more hardware. Now, we already know how to design
a synchronous counter; we have seen it earlier.

When we talked about the formal design procedure for FSMs, we took an example of counter
design also. So, when you want to design a synchronous counter, that is the way to go about
it: you start with the state transition diagram or a state table, go through those steps,
and finally arrive at the design using your chosen flip-flop; it can be a T flip-flop, SR,
JK or D. As I said, this kind of counter can be designed using the methodology we discussed
earlier, fine.

(Refer Slide Time: 08:35)

Now, here we are concerned with the design of ripple counters, or asynchronous counters.
Let us look at a 4-bit ripple counter which counts up, meaning 0 1 2 3 4 up to 15 and then
again back to 0 1 2 3. We make an interesting observation here which will help us decide how
to design this kind of counter. The observation is as follows: during counting, whenever a
bit position changes from 1 to 0, the next higher bit position changes state. Let us try to
explain this with this counting sequence.

So, I am showing the decimal values 0 up to 15 and the corresponding binary values. Now, let
me try to validate this claim: whenever a bit position changes from 1 to 0. Look at the
least significant bit: 1 to 0, there is a change here; 1 to 0, there is another change here;
1 to 0, then again 1 to 0 here, and again 1 to 0, 1 to 0, 1 to 0.

Now, see, whenever there is a 1-to-0 change, the next higher bit also changes: it was 0, it
becomes 1; here it was 1, it has become 0; here it was 0, it has become 1; this is changing,
this is changing, and so on, right. So, whenever the bit in one stage goes from 1 to 0, the
next higher bit position changes state.

Let us verify for the next bit position. In the next position, 1-to-0 transitions are coming
here, here, here, and the final one is coming here. Verify: 1 to 0, so the next higher bit
changes; it was 0, it has become 1. Then here this bit is changing, it was 1 and has become
0; here this bit is changing, it was 0 and has become 1; and here also it was 1 and has
become 0. So, this is true for all the bit positions, and it is a very interesting
characteristic of the binary counting process.

Now, because of this we can have a very intuitive method of designing this kind of counter.
Think of a T flip-flop that is triggered by the negative edge of the clock. Whenever there
is a 1-to-0 transition on the clock, the flip-flop changes state; exactly the behaviour
captured in this table, right. Whenever there is a change from 1 to 0, the next stage is
supposed to change. So, if I connect the output of one stage to the clock input of the next
stage, that makes my counter very simple. What we are saying is that this feature can be
directly mapped onto a T flip-flop circuit with a negative-edge-triggered clock. Let us see
how the counter will look; it will look like this.

(Refer Slide Time: 12:27)

This is a 4-bit binary counter. I use 4 T flip-flops, whose T inputs are permanently set to
1, which means that on every clock edge the flip-flop toggles, it changes state. The input
clock is fed only to the first flip-flop; this is your least significant bit, the LSB, and
the rightmost one is your most significant bit. The output of the first flip-flop is
connected to the clock of the second flip-flop, output Q1 is connected to the clock here,
and output Q2 is connected to the clock here, so that whenever there is a 1-to-0 transition
here this flip-flop changes state, whenever there is a 1-to-0 transition here this flip-flop
changes state, and so on, right; this will continue to happen.
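The behaviour of this cascade can be modelled in Python to confirm that it really counts in binary; this is a behavioural sketch with a name of our own choosing, not a gate-level model.

```python
def ripple_up_count(n_bits, n_pulses):
    """n-bit ripple up counter built from negative-edge-triggered T
    flip-flops with T tied to 1: each input pulse toggles Q0, and a
    1->0 change on a stage toggles the next stage."""
    q = [0] * n_bits                  # q[0] is the LSB
    for _ in range(n_pulses):         # one falling edge of the input clock
        i = 0
        while i < n_bits:
            q[i] ^= 1                 # this stage toggles
            if q[i] == 0 and i + 1 < n_bits:
                i += 1                # 1->0 transition ripples onward
            else:
                break
    return sum(bit << k for k, bit in enumerate(q))
```

After 5 pulses a 4-bit counter reads 5, and after 16 pulses it rolls over to 0, as the lecture describes.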

This is how a simple binary up counter can be designed: just connect a number of T
flip-flops in cascade, triggered by the negative edge of the clock. Now, one point to note:
suppose instead of negative-edge-triggered I have positive-edge-triggered T flip-flops; then
what do you have to do?

(Refer Slide Time: 13:50)

This means these are not negative-edge-triggered; they are positive-edge-triggered. So, the
only change I make is this: I take the Q0′ (complement) output and connect it here, I take
the Q1′ output and connect it here, and so on; Q2′ I connect here. We simply connect the
complement output of each stage to the clock input of the next stage. Because if Q0 changes
from 1 to 0, Q0′ changes from 0 to 1, and since this is a positive-edge-triggered T
flip-flop, this will also work, right.

(Refer Slide Time: 14:38)

Now, let us work out the timing diagram for this example. Here, for the time being, I am
ignoring the delays; I am assuming that all changes happen simultaneously. Let us assume
that initially Q0, Q1, Q2, Q3 are all in the 0 state, 0 0 0 0, and the clock is coming; the
activation of the clock, the falling edge, is shown in red. Since the clock is connected to
T0, at every falling edge Q0 changes state: at the first falling edge Q0 changes, at the
next falling edge it changes again, and this continues; let me draw Q0.

So, Q0 will look like this. Now let us look at Q1: the Q0 output is fed to the clock input
of the next flip-flop. So, whenever there is a falling edge on Q0, Q1 changes state: Q1
changes here, again here, again here, and again here. Similarly for Q2: whenever there is a
falling edge on Q1, Q2 changes; it changes here, next here, and so on. And finally Q3
changes here.

So, at each falling edge, if you read the four outputs together, you see it is exactly
counting in binary: 0 0 0 0 is 0, 0 0 0 1 is 1, 0 0 1 0 is 2, then 3, then 0 1 0 0 is 4,
then 5, right. In this way it will go on, fine.

(Refer Slide Time: 17:23)

Now, there is an interesting observation from the timing diagram we have drawn. If you
recall, the input clock was coming and the output Q0 was changing at the falling edge of the
clock; that is, the clock was like this, and Q0 changed at each falling edge.

So, one thing you can say: if the frequency of the input signal is f, the frequency of the
Q0 output will be f/2, because for every 2 pulses of the input you get 1 pulse on Q0. So, at
Q0 the frequency gets divided by 2; similarly Q1, with respect to Q0, is again divided by 2,
so it will be f/4; at Q2 it will be f/8, and at Q3, f/16. So, a binary counter can be very
nicely used for dividing the input frequency by some power of 2; this is one interesting
characteristic.
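The frequency-division observation can be checked with a similar sketch that counts how many times each output toggles over a run of input pulses (the function name is ours).

```python
def output_toggles(n_bits, n_pulses):
    """Toggle counts of each ripple-counter output: Q0 toggles on every
    input pulse (a square wave at f/2), Q1 on every second pulse (f/4),
    and so on down the cascade."""
    q = [0] * n_bits
    toggles = [0] * n_bits
    for _ in range(n_pulses):
        for i in range(n_bits):
            q[i] ^= 1
            toggles[i] += 1
            if q[i] == 1:       # only a 1->0 change clocks the next stage
                break
    return toggles
```

Over 16 input pulses the four outputs toggle 16, 8, 4 and 2 times, i.e. they run at f/2, f/4, f/8 and f/16.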

Now, another point to note: suppose my input clock is not square; it is something like this,
with very narrow pulses coming in.

(Refer Slide Time: 18:59)

But if you look at the Q0 output, it changes state at every falling edge. So, what you get
is a perfect square wave; the ON period and the OFF period are exactly equal. So, the
signals you get from the outputs of the counter are perfect square waves, with equal ON and
OFF periods. Whenever you require such a signal, you can use a flip-flop or a counter to
generate square waves, right; these are the 2 observations. So, an n-bit binary counter in
general (this was a 4-bit counter, which divided by 16) divides the frequency by 2^n, and
such a counter is called a modulo-2^n counter.

(Refer Slide Time: 19:52)

Now, one observation we have not discussed yet. I mentioned in the beginning that whenever a
ripple counter is counting, there will be some intermediate transient states, because all
the flip-flops are not changing state simultaneously; there will be some delay. So, here I
am showing the real state transition diagram of a ripple counter, where the blue states are
the permanent states and the pink-marked states are the temporary states. Whenever a single
bit changes, as from 0 0 0 to 0 0 1, the counter goes directly; but from 0 0 1 to 0 1 0
there are 2 bits which change.

So, here the first bit will change first, and then the second bit. From 0 1 1 to 1 0 0 all
3 bits change: first the LSB changes, then the next bit, then the most significant bit.
Similarly for 1 0 1 to 1 1 0, and similarly for 1 1 1 to 0 0 0: the first bit changes, the
second bit changes, the third bit changes.

Now, let us see how and why this happens; let us illustrate it for the transition from
0 1 1 to 1 0 0, right.

(Refer Slide Time: 21:25)

Let us look at this 0 1 1 to 1 0 0 transition. Let me show you the timing diagram to explain
why this happens. This is the falling edge; I am showing only 1 clock pulse, and these are
the 3 outputs Q0, Q1 and Q2.

The state was 0 1 1: Q2 was 0, Q1 was 1, Q0 was 1. Now an active edge of the clock has come.
What will happen? You see, each flip-flop has a delay, so this time I am showing the timing
diagram with the delays included, because earlier I had ignored the delays in the timing
diagram that I drew.

So, Q0 will change after a short delay, like this, and this Q0 is fed to the clock input of
the next flip-flop. So, Q1 will change after again a little delay, and then Q2 will change,
again after some delay. Now look at the intermediate states: the initial state, of course,
was 0 1 1. Then look at the state just after this point: it has become 0 1 0. Then look at
the state here: from 0 1 0 it has become 0 0 0, and finally, out here, it has become 1 0 0.
So, 0 1 0 and 0 0 0 are intermediate states that get generated; 0 1 1 was the initial state,
1 0 0 the final state, and the intermediate states arise because of the delays of the
flip-flops, right. This is what we mean by saying that there are transient states during
counting.
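The transient-state sequence described here can be reproduced by a small model in which each flip-flop toggles one delay unit after its clocking edge; the equal unit delays are an assumption made for illustration.

```python
def ripple_transition(state_bits):
    """Visible state sequence of a 3-bit ripple counter over one clock
    pulse, with one unit of delay per flip-flop.
    state_bits is (Q2, Q1, Q0); returns the list of observed states."""
    q2, q1, q0 = state_bits
    seen = [(q2, q1, q0)]
    q0 ^= 1                        # Q0 toggles first
    seen.append((q2, q1, q0))
    if q0 == 0:                    # its 1->0 edge clocks Q1 one delay later
        q1 ^= 1
        seen.append((q2, q1, q0))
        if q1 == 0:                # which may in turn clock Q2
            q2 ^= 1
            seen.append((q2, q1, q0))
    return seen
```

Starting from 0 1 1 it returns the sequence 011, 010, 000, 100, with 010 and 000 as the transient states, matching the timing diagram above.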

(Refer Slide Time: 24:25)

Now, if you want to design a down counter, the observation is very similar, but the
direction of the changes is just reversed: when counting down, whenever a bit position
changes from 0 to 1, the next higher bit position changes state. Sometimes we need a counter
to count down, 5 4 3 2 1, like that; this is called a down counter. Suppose we want to
design one: for a 4-bit counter it will start from 15, then 14, 13, down to 0, and again
back to 15.

So, I am showing some examples: here it is not a 1-to-0 but a 0-to-1 transition, and after
the 0-to-1 transition the next bit changes; here also, after a 0-to-1 transition, the next
bit changes. And here too you see 0-to-1 transitions, and the next bit changes; it was 1, it
has become 0.

So, just the polarity is reversed; otherwise there is no basic change. To implement this you
can again use a T flip-flop; earlier we used a negative-edge-triggered clock, but here we
require a positive-edge-triggered clock. Whenever there is a change from 0 to 1, the
flip-flop should toggle, right; this is the requirement.
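A matching sketch for the down counter only reverses the rippling condition: a stage clocks the next one on a 0-to-1 change instead of a 1-to-0 change (a behavioural model with a name of our own).

```python
def ripple_down_count(n_bits, n_pulses):
    """n-bit ripple down counter: a 0->1 change on a stage toggles the
    next stage (Q driving a positive-edge clock input, or equivalently
    Q' driving a negative-edge one)."""
    q = [0] * n_bits                  # q[0] is the LSB
    for _ in range(n_pulses):
        i = 0
        while i < n_bits:
            q[i] ^= 1
            if q[i] == 1 and i + 1 < n_bits:
                i += 1                # 0->1 transition ripples onward
            else:
                break
    return sum(bit << k for k, bit in enumerate(q))
```

From 0 0 0 0 the first pulse wraps the counter to 15, then 14, 13, and so on downward.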

(Refer Slide Time: 26:04)

So, to summarize: if we use negative-edge-triggered flip-flops, then we have to connect the
Q′ output to the clock input of the next stage; but if we use positive-edge-triggered
flip-flops, then we can directly connect the Q output to the clock input. This results in a
down counter. So, with this we come to the end of this lecture.

Now, in the next lecture we shall continue our discussion on counters; we shall see some
other issues with respect to counter design, and we shall talk about general counter
designs, not necessarily a counter which counts up to some power of 2 and then goes back to
0. This we shall discuss in the next lecture.

Thank you.

Switching Circuits and Logic Design
Prof. Indranil Sengupta
Department of Computer Science and Engineering
Indian Institute of Technology, Kharagpur

Lecture – 46
Design of Counters (Part – II)

In this lecture, we continue our discussion on Counters. If you recall, in the last lecture
we talked about the design of binary ripple counters, in both up-counting and down-counting
modes, where the counters were counting modulo some power of 2. If it was a 3-bit counter it
was modulo 8; modulo 8 means it counts from 0 to 7 and back to 0. Modulo means divide by 8
and take the remainder, so it goes from 0 to 7.

Modulo 16 for a 4-bit counter: 0 to 15 and then back to 0, and so on. Now, in this lecture,
to start with, we shall be discussing the design of counters of arbitrary modulus, not
necessarily a power of 2.

(Refer Slide Time: 01:12)

So, in the second part of counter design, we start with the design of a binary ripple
counter of arbitrary modulus. What do we mean by arbitrary modulus? We are saying that, in
general, we are trying to design a modulo-M counter for any value of M.

Let us say M is 6. In this scenario the counter will count 0, 1, 2, 3, up to M−1 and then
again back to 0. As I said, modulo M means divide by M and take the remainder. So, if M
equals 6, I take the number of pulses that have come, divide by 6, and take the remainder;
the value will be something from 0 to 5, and that will be the final value in the counter,
right.

Now, let us look at the general design principle for such a counter. We make some
assumptions here, based on which we arrive at the design. The first thing is that we are
trying to design an up counter; a down counter can be designed in a very similar way, but
for the time being we assume it is an up counter, counting in the upward direction 0, 1, 2,
3, 4, up to M−1. Next, we assume that the flip-flops have individual asynchronous clear
inputs, which are active low; this means that if the clear input is set to 0, all the
flip-flops will be reset to 0.

The idea is very simple. I do not want the counter to reach M; after M−1, I want it to come
back to 0. So, whenever it tries to reach M, I forcibly bring the counter to 0 by clearing
the flip-flops. For M equal to 6, the counter goes 0 1 2 3 4 5, and as soon as it tries to
become 6, I activate the clear inputs and all the flip-flops become 0 again, ok; this is
the basic idea.

Now, the first thing is that for a modulo-M counter we need ⌈log2 M⌉ flip-flops, where the
ceiling means the smallest integer greater than or equal to the number. If M equals 6, log2 6
is 2 point something, so the ceiling is 3 and I need 3 flip-flops. If M equals 14, log2 14
is 3 point something, so the ceiling is 4 and I need 4 flip-flops.

So, first you decide how many flip-flops are required. Then we use a simple gating circuit,
as I will show you how to design, which takes inputs from the counter outputs and generates
a 0 at its output whenever the count value reaches M; that means I do not allow the counter
to reach M, I make the counter 0 immediately. Whenever the output of this gating circuit is
0, being connected to the clear inputs of the flip-flops, the flip-flops become 0 again.
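The clear-on-M scheme can be modelled abstractly in Python; the NAND gate detecting the 1-bits of M is represented simply by the test count == M, and the momentary glitch state M is never returned. A behavioural sketch with our own naming, not a gate-level design.

```python
import math

def mod_m_counter(m, n_pulses):
    """Mod-M ripple counter: ceil(log2 M) flip-flops plus an asynchronous
    clear that fires the moment the count reaches M. Returns the number
    of flip-flops needed and the count left after n_pulses."""
    n_flip_flops = math.ceil(math.log2(m))
    count = 0
    for _ in range(n_pulses):
        count += 1
        if count == m:        # gating circuit detects M: clear to 0
            count = 0
    return n_flip_flops, count
```

A mod-6 counter needs 3 flip-flops; a decade (mod-10) counter needs 4, and after 23 pulses it reads 3.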

(Refer Slide Time: 04:52)

So, let us take the example of a modulo-6 counter; how do we design it? For M equal to 6,
the number of flip-flops, as I said, will be 3: log2 6 is 2 point something, and the ceiling
is the next higher integer, 3. Now, for a mod-6 counter we want the counter to count in the
sequence 0 1 2 3 4 5 and then back to 0, which means I am not allowing the natural next
count, which is 6, that is 1 1 0, to appear.

So, whenever it tries to become 1 1 0, I forcibly make it 0 again. How do I do it? As I
mentioned, it can be done very simply, like this. Ignore this NAND gate for the time being;
the 3 flip-flops by themselves are just like a modulo-8 counter.

3 flip-flops, with the output of each flip-flop feeding the clock input of the next,
triggered on the active (falling) edge. Now, what is this NAND gate doing? It watches for
the count reaching 1 1 0. What does 1 1 0 mean? This bit is 1, this bit is 1 and this is 0;
so you take the outputs which are supposed to be 1 in this invalid state and feed them as
the inputs of the NAND gate. Whenever these 2 bits try to become 1 1, the output of the NAND
gate becomes 0, and this 0 immediately clears the flip-flops.

So, the idea is that when you are in state 1 0 1 and you apply a clock, the counter
momentarily goes to 1 1 0 and is immediately cleared; this 1 1 0 is only a temporary state,
right. This is how we can design a counter with arbitrary modulus, right.

(Refer Slide Time: 07:26)

Now, let us work out the timing for this divide-by-6 counter, as we have seen here. Just
remember this: whenever the count value becomes 1 1 0, it is immediately cleared to 0 0 0.
Now let us look at Q0; Q0 keeps changing at every falling edge.

There is no issue with Q0; let us start with all 0s, 0 0 0. The first clock pulse comes and
it becomes 0 0 1; the next clock pulse comes, there is a falling edge, this changes state
and it becomes 0 1 0, which is 2. The next clock comes, this continues, 0 1 1, which is 3.
The next clock comes, there is again a falling edge, this changes again; here there is a
falling edge, this goes up, yes, so this will be 1 0 0, which is 4. Then again there is a
falling edge here; this does not change, this also does not change: 1 0 1, which is 5.

Now, here it will momentarily try to become 6: there is a falling edge here, this goes up,
and this one is already high, so it is 1 1 0. But as soon as this tries to go up, the
counter is reset; it becomes 0 0 0 again, right. And this continues: on the next clock it
becomes 0 0 1 again, then 0 1 0, and so on, right.

Now, the point to note is that if you look at the Q2 output, you see it starts from 0, goes
up here, and comes down again here. How many clock pulses were required? 1 2 3 4 5 6. So,
after every 6 clock pulses Q2 completes 1 pulse, which means that at Q2 the frequency gets
divided by 6.

So, the basic observation is: for any mod-M counter that we design, if we take the last
output, in this case Q2, the input frequency gets divided by M (not by 2M; it is a modulo-M
counter). In this case it was a mod-6 counter and the frequency was divided by 6, right; the
timing diagram also shows that. Let us continue.

(Refer Slide Time: 11:35)

Let us now look at a very widely used kind of mod-M counter, the mod-10 counter; this is
also called a decade counter, a decade being nothing but 10.

Now, if M equals 10, then the ceiling of log2 10 is the ceiling of 3 point something, which
is 4; you need 4 flip-flops. And what is 10 in binary? 10 is 1 0 1 0. So, whenever the
counter tries to reach 1 0 1 0, you must reset it to 0. In 1 0 1 0, Q3 and Q1 are the bits
trying to become 1; so to this NAND gate you have to connect the outputs Q3 and Q1.

And the output of this NAND gate you connect to the clear inputs; this makes your decade
counter. So, just by selectively connecting outputs to this NAND gate, you can design a
counter of any arbitrary modulus M. In this case I required mod 10, so I connected the
outputs corresponding to the 1s in the number 10, 1 0 1 0: one is here, one is here. So I
get a mod-10 counter, right; simple.

(Refer Slide Time: 13:19)

Now, the next point is: how do I cascade counters, and what happens if I cascade 2 counters?
Cascading counters means connecting them like this: suppose I have a mod-M counter; its last
stage, meaning the most significant bit, is connected to the input clock of the next
counter.

If I have a mod M counter or an mod N counter, the point to remember is that you already
know; I have seen, I have shown for a mod 6 counter that if I have a mod M counter; if I
input clock has a frequency f, then in the more significant bit output the frequency will get
divided by M. But if this signal is now fed to the clock of another counter mod N this will
also do a frequency division by N. So, in the final output the frequency will be divided by
MN; so, you should remember this.

If counters are cascaded, the resulting modulus is the product of the individual moduli: mod M and mod N connected in cascade become a modulo-MN counter. Now this principle can be used in the design of arbitrary kinds of counters; let us take some specific examples.

Let us say I want a modulo-1000 counter. We have so many kinds of equipment where digits are displayed in decimal, like a frequency counter or any kind of digital display. Numbers are normally displayed in decimal; so, how do I implement a 3-digit counter in that case? Because each decimal digit counts 0 to 9 and then back to 0, for 3 digits I need a modulo-1000 counter.

(Refer Slide Time: 15:32)

So, what do I do? We use 3 mod-10 counters; mod 10, mod 10, mod 10, which here I am showing from the other side.

Suppose my input clock is coming like this. A mod-10 counter will have 4 outputs; suppose this is the most significant bit, the MSB. This MSB is connected to the clock of the next stage, whose MSB is in turn connected to the clock of the stage after it. So, if I have a circuit like this, it will be a modulo-1000 counter where you are counting in decimal; that means, you can say, this is a BCD counter.

Because each of these stages counts from 0 to 9 and then back to 0, this is a binary coded decimal (BCD) counter; this is not a binary 1000 counter, this is a BCD 1000 counter. Now, if I want to display the digits, what can I do? You know the BCD to 7-segment decoder we discussed earlier. So, you can use that kind of decoder circuit here, a BCD to 7-segment decoder, and use it to drive some 7-segment display units. So, in this way you can have a 3-digit display, where it counts the number of clock pulses in decimal.
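The cascade of three decade counters can be sketched as follows (a behavioral model of mine, not from the lecture; each list element plays the role of one BCD stage, advanced when the previous stage wraps from 9 back to 0):

```python
# Sketch of three cascaded mod-10 (decade) counters: each stage holds one
# decimal digit, and a stage is clocked only when the previous one wraps
# from 9 back to 0 -- giving a modulo-1000 BCD counter.

def bcd_count(pulses):
    digits = [0, 0, 0]           # [units, tens, hundreds]
    for _ in range(pulses):
        for i in range(3):
            digits[i] = (digits[i] + 1) % 10
            if digits[i] != 0:   # no wrap -> next stage is not clocked
                break
    return digits[2], digits[1], digits[0]

print(bcd_count(247))            # (2, 4, 7): displayed as the digits 2-4-7
print(bcd_count(1000))           # (0, 0, 0): wraps around after 999
```

Each returned digit would drive one BCD to 7-segment decoder in the display circuit.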

This kind of application is quite common in many equipments; internally they are implemented like this. There are a number of decade counters, which are actually counting in BCD; essentially they are cascaded together, and you can directly drive the digits. So this is one example. Let us take another very interesting example which you see everywhere, a digital clock; first, let us see what kind of output we get.

(Refer Slide Time: 18:24)

We have 2 digits that indicate hours, 2 digits that indicate minutes, and again 2 digits that indicate seconds. But the point to note is that the way the counting goes on is not the same for the 3 different parts.

For seconds you are counting from 0 to 59 and then back to 0, which means it is a mod-60 counter; for minutes you are also counting from 0 to 59, so this is also a mod-60 counter. And for hours, let us assume a 24-hour format; it will count from 0 to 23, so it will be a mod-24 counter. And again, because the numbers are all displayed in decimal, they will have to be designed using decade counters or BCD counters only, because we want the display in decimal, not in binary; this is the basic requirement.

(Refer Slide Time: 19:58)

So, to do this, basically I will be using a cascade of 3 counters. As I said, again showing from the right side: first modulo 60, then modulo 60, then modulo 24. At the input I am applying a 1 hertz clock; 1 hertz means 1 pulse every second, so it will be counting the number of seconds. This output will give you seconds, this output will give you minutes, and this output will give you hours; this is the overall block diagram. So, I need to design mod-60 counters, 2 of them, and one mod-24 counter; let us see how I can design these counters separately.
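The block diagram above amounts to a divide chain of 60, 60 and 24; behaviorally (my sketch, assuming the 24-hour format as in the lecture):

```python
# Sketch of the digital-clock divide chain: a 1 Hz clock feeds a mod-60
# (seconds) counter, whose rollover clocks a mod-60 (minutes) counter,
# whose rollover clocks a mod-24 (hours) counter.

def clock_after(pulses):
    sec = pulses % 60
    minute = (pulses // 60) % 60
    hour = (pulses // 3600) % 24
    return hour, minute, sec

print(clock_after(1))          # (0, 0, 1)
print(clock_after(3661))       # (1, 1, 1): one hour, one minute, one second
print(clock_after(24 * 3600))  # (0, 0, 0): a full day wraps back to zero
```

The modulo and integer-division operations mirror the cascaded-counter rollovers.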

(Refer Slide Time: 21:17)

First we talk about the modulo-60 counter design, where we are designing using BCD. So, what do we do? We use a mod-10 counter and a mod-6 counter; you see, I need 60, and if you multiply 10 by 6 it becomes 60. I said that a mod-M and a mod-N counter cascaded together become mod MN.

So, since I require mod 60, I take a mod 10 and a mod 6, because 10 × 6 is 60; this will serve my purpose. I connect them in cascade: the mod-10 counter has 4 outputs, of which the most significant bit is connected here as the clock of the mod-6 stage, and the mod-10 counter itself is fed with the input clock. On the outputs I will be using BCD to 7-segment decoder circuits, and at the output of each decoder one digit will appear. So, the mod-60 counter I can design like this; it will be driven through BCD to 7-segment decoder drivers, and finally they will drive the 7-segment displays. I will be requiring 2 such mod-60 counters; then I need a mod-24 counter.

(Refer Slide Time: 23:11)

So, how do I design a mod-24 counter? You see, 24 is not a multiple of 10, so I cannot generate 24 by multiplying 10 by an integer. So what do I do? I take a mod-10 counter, because I am counting in decimal, and for the next stage I take mod 3, which gives the next higher multiple of 10, i.e., a mod-30 counter. The input clock comes here, the most significant bit of the mod-10 stage is fed as the clock of the mod-3 stage, and the two stages generate the digit outputs.

On this side it is similar: BCD to 7-segment decoder drivers and 2 displays. Now, the point to note is that I need a mod-24 counter, but as built this is a mod-30 counter. Now, what does 24 mean here? The count is in BCD: at 24 the tens digit is 2, which is 0 0 1 0, and the units digit is 4, which is 0 1 0 0.

So, you see, at the count 24 exactly one bit of each digit is 1; so, similarly to before, I use a NAND gate: I connect the output of weight 4 of the first (units) stage as one input, and the output of weight 2 of the next (tens) stage as the other. And the NAND output I connect to the active-low clear inputs of all the flip flops.

So, whenever the count becomes 24, I reset it to 0; this counter will thus count from 0 to 23. In this way I can design a mod-24 counter; so you see, in this way I can design arbitrary counters. For many practical applications, as I have shown, I need to count in decimal so that I can display the results in decimal; for this purpose I need decade counters, or BCD counters as they are also sometimes called. So, I have taken 2 examples, a mod-1000 counter and the very common digital clock, to illustrate this process.
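A behavioral sketch of this mod-24 stage (my own model, not lecture code; it assumes the BCD detection of tens digit = 2 together with units digit = 4, i.e. the NAND inputs taken from the weight-2 bit of the tens stage and the weight-4 bit of the units stage):

```python
# Sketch of the mod-24 hour counter: a decade (units) counter cascaded
# with a tens counter, cleared by a NAND-style detection the moment the
# count reads 24 in BCD (tens = 2 -> its weight-2 bit high, units = 4 ->
# its weight-4 bit high).

def mod24_sequence(clocks):
    units, tens, seen = 0, 0, []
    for _ in range(clocks):
        units = (units + 1) % 10
        if units == 0:
            tens += 1
        if (tens >> 1) & 1 and (units >> 2) & 1:   # tens=2 and units=4 -> clear
            units = tens = 0
        seen.append(10 * tens + units)
    return seen

seq = mod24_sequence(48)
print(max(seq), seq[22], seq[23])   # 23 23 0: counts up to 23, then resets
```

The count 24 never appears on the display; the clear fires as soon as it is reached.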

So, with this we come to the end of our discussion on registers and counters. Let me re-emphasize that registers and counters are very important building blocks in system design; whenever we design complex systems, you need to design registers and you need to use counters. Depending on the situation, you will have to design a register or a counter with a specific functionality; based on whatever we have discussed, I believe we will now be in a position to design any kind of register or counter as per the requirement.

Thank you.

Switching Circuits and Logic Design
Prof. Indranil Sengupta
Department of Computer Science and Engineering
Indian Institute of Technology, Kharagpur

Lecture - 47
Digital-to-Analog Converter ( Part I )

So, if you recall our discussions over the last few weeks, we have been discussing the design of digital circuits, both combinational and sequential. Now, the thing is that in the world we live in, most of the signals that we see around us are not digital in nature; rather, they are continuous, or analog as we call it. Let us take some examples: temperature, pressure, humidity, light intensity; everything through which we interact with the environment is continuous in nature. And there are many applications, many systems, which directly interact with the environment.

So, we now discuss some devices or building blocks using which we can interface digital systems, which you have already seen how to design, with the so-called analog world. Our first lecture in this regard is on something called the digital to analog converter, or DA converter in short.

(Refer Slide Time: 01:48)

So, let us look into the overall picture. Look at this schematic diagram. In the center we have our digital system, where all calculations and all processing are carried out. And from the environment, input is coming from some physical variable. Now, as I have said, the physical variable can be temperature, pressure, humidity, anything; these physical variables are not digital in nature.

So, the first thing is that you have to use some kind of sensor or transducer to convert this physical variable into a voltage, let us say V. Now, a digital system cannot accept a voltage directly; it only understands 0 and 1, two discrete or digital levels. So, this continuous-valued voltage that we have got needs to be converted into an equivalent or proportional digital data. For that reason, we require a building block called an analog to digital converter, which we shall be discussing later.

So, this ADC takes the voltage as input and generates a proportional digital data as output. Then this digital data comes to the digital system, where whatever processing you want to do can be carried out. And the result of the processing, which is again some digital data, let us call it D, is fed to something called a digital to analog converter, the reverse of the ADC. The digital to analog converter takes this digital word and converts it into an equivalent analog voltage A. And there is an actuator which uses this analog voltage to control the physical variable of the environment.

Let us take an example: suppose we are trying to build a temperature controller, meaning we want to set the temperature of an environment to some preset value, let us say 35 degrees. What do we do? First, we sense the temperature; the transducer can be some kind of temperature sensor, and there are so many kinds of temperature sensors available. Any of them will convert the temperature into a voltage. Then the digital system will see whether the equivalent digital data for that voltage is below or above the preset value, and accordingly it will generate an actuation signal; the actuator here can be either a heater or an air conditioning machine.

So, this actuator will activate the heater or the air conditioning machine, so that the physical variable, here temperature, is controlled in a proper way. In this overall picture, we require the transducer, which converts the physical variable into an electrical signal; then an analog to digital converter; then the digital system, which can be a computer or any kind of digital subsystem; then a digital to analog converter at the output side; and then the actuator. Now, in this lecture, we shall first be looking into some aspects of the digital to analog converter.

(Refer Slide Time: 05:53)

Let us move on. So, what is a digital to analog converter? Let us try to understand this first. Broadly speaking, it is a building block which takes a digital data D as input and generates a voltage Vout as output. This voltage is continuous in nature; that is why we call it an analog output. Now, the relationship between D and Vout is that Vout should be proportional to D. So, whatever digital data we apply, the output voltage should be proportional to its decimal equivalent; this is how a DA converter works.

Let us take the example of a 4-bit DA converter; we call it 4-bit because it takes a 4-bit digital data as input, D0 to D3. And there is a reference voltage which we typically apply to a DA converter; the reference voltage determines the maximum output voltage of the converter. In the example shown here, let us assume that our reference voltage is 15 volts. These are the different input combinations, which I am showing by decimal value: 0 0 0 0 means 0, 0 0 0 1 means 1, then 2, 3, 4, up to 15. The DA converter works in such a way that, depending on the input value, Vout ranges from 0 volts up to 15 volts; the output voltage is proportional to the value of the digital input. This is how a DA converter basically works.
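The ideal transfer function just described can be written in a couple of lines (a sketch of mine; the step size Vref / (2^n − 1) is implied by the table above):

```python
# Minimal sketch of the ideal n-bit DA converter: Vout is proportional to
# the digital input D, with full scale set by the reference voltage Vref.

def dac_out(d, n_bits, v_ref):
    step = v_ref / (2 ** n_bits - 1)   # smallest output change (one step)
    return step * d

# 4-bit converter with Vref = 15 V: the step size is exactly 1 V.
print(dac_out(0, 4, 15.0))    # 0.0
print(dac_out(10, 4, 15.0))   # 10.0
print(dac_out(15, 4, 15.0))   # 15.0
```

With Vref = 15 V and 4 bits, the output in volts equals the decimal value of D.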

(Refer Slide Time: 08:03)

Let us move on. So, this figure shows the input-output relationship of a DA converter. Along the x-axis, again taking the example of a 4-bit DA converter, I am showing the 4-bit digital value: 0, 1, 2, 3, 4, shown here up to 11, but it continues till 15; and along the y-axis I am showing Vout. You see, at the input I cannot apply continuous data; I can apply only discrete values: 0, or 1, or 2, or 3, and so on.

So, on the output side also we get only discrete voltages. For example, when the input is 0, I get 0 volts; when the input is 1, let us say this voltage is 1 volt; when the input is 2, it is 2 volts; when the input is 3, it is 3 volts, and so on. So, it will be a staircase kind of waveform. As I increase the value of D, the output voltage jumps in steps; it does not increase continuously, because the input cannot increase continuously, only by discrete amounts. This is how the characteristic of a DA converter looks.

(Refer Slide Time: 09:57)

Now, we talk about some of the parameters of a DA converter. The first important parameter is called the resolution or step size. I said the output increases in steps; the height of one such step is the resolution. Suppose I specify the reference voltage as 15 volts, so the maximum output voltage is 15. If it is a 4-bit DA converter, there are 15 steps, for inputs 0 to 15; if it is a 5-bit DA converter, the input goes from 0 up to 31, giving 31 steps.

So, for a 4-bit converter, every step height will be 1 volt; for 5 bits it will be about half a volt; for 6 bits, about a quarter of a volt. As we increase the number of bits, the height of the step decreases, which means I can generate the output voltage with finer accuracy. This is what is defined as resolution or step size: the smallest change in the output voltage as a result of a change in D. The minimum change in D is 1 (0 to 1, 1 to 2, 2 to 3), so you can say it is equivalent to the weight of the least significant bit; as if the least significant bit changing produces one step.

Now, typically we express resolution as a percentage. If you divide the step size by the reference voltage and multiply by 100, this is defined as the percentage resolution:

% resolution = (step size / Vref) × 100 = 1/(2^n − 1) × 100

since for an n-bit DA converter, counting from 0, there are 2^n − 1 steps in total. For a 4-bit DA converter, n = 4, the resolution will be 100/15, which is approximately 6.67 percent. So, the resolution depends on the value of n. Let us move on.
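The percentage-resolution figure can be checked numerically for a few values of n (a quick sketch; the rounding is mine):

```python
# Percentage resolution of an n-bit DA converter:
# (step size / Vref) * 100 = 100 / (2^n - 1).

def pct_resolution(n_bits):
    return 100.0 / (2 ** n_bits - 1)

print(round(pct_resolution(4), 2))   # 6.67 -> about 6.7 % for n = 4
print(round(pct_resolution(8), 2))   # 0.39 -> an 8-bit converter is much finer
```

Doubling the number of bits makes the resolution dramatically finer, as the formula predicts.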
Let us move on.

(Refer Slide Time: 13:23)

Now, suppose I connect a DA converter directly to the output of a binary counter driven by a clock. The binary counter counts 0, 1, 2, 3, 4 and so on. If you observe the output of the DA converter on an oscilloscope, you will see a staircase kind of structure: the output voltage increases in steps. Suppose it is a 4-bit counter; it will count from 0 0 0 0 up to 1 1 1 1 and then again back to 0. So, the staircase will start from 0, go up to the maximum at 1 1 1 1, and then drop back to 0.

So, it will be a staircase waveform: it goes up in small steps, then drops down, goes up again, drops down again, and so on; it is not a continuous ramp. This is how the output waveform will look.

(Refer Slide Time: 14:34)

Now, there are some errors that can occur in the DA conversion process. The first is gain error. We said that the output voltage Vout is proportional to D, that is, some constant of proportionality K multiplied by D. Now, due to design variations or other issues, the value of K might change. Here I am showing the characteristic as a straight line, but actually it consists of steps; the straight line is an approximation.

So, what happens is that the slope of this straight line changes if the value of K changes. This is called gain error. Suppose the black line is the ideal output; with a positive gain error you move in this direction, and with a negative gain error you move in the other direction; the slope changes.

(Refer Slide Time: 15:51)

The other kind of error is offset error. Here the slope does not change, but there is a DC shift: instead of starting from 0 volts, the output starts from, say, 1 volt. So, there is a DC shift all along the curve, and the separation is the same everywhere; that means the curves are still parallel. This is called offset error: a constant offset between the actual output and the ideal output. Suppose the middle curve is the ideal output; then either of the shifted curves is an actual output with a constant offset.
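Gain and offset errors can be modeled together in one line (my own sketch, not lecture code): gain error scales the slope K, while offset error adds a constant DC shift.

```python
# Simple model of the two errors: gain error changes the slope of the
# transfer curve, offset error shifts the whole curve by a constant.

def dac_with_errors(d, step=1.0, gain_err=0.0, offset=0.0):
    k = step * (1.0 + gain_err)     # gain error changes the slope K
    return k * d + offset           # offset error adds a constant DC shift

ideal = [dac_with_errors(d) for d in range(4)]
gain = [dac_with_errors(d, gain_err=0.5) for d in range(4)]
shift = [dac_with_errors(d, offset=1.0) for d in range(4)]
print(ideal)   # [0.0, 1.0, 2.0, 3.0]
print(gain)    # [0.0, 1.5, 3.0, 4.5] -- slope changed, same starting point
print(shift)   # [1.0, 2.0, 3.0, 4.0] -- constant 1 V shift, same slope
```

Note that the gain-error curve diverges from the ideal as D grows, while the offset-error curve stays parallel to it.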

(Refer Slide Time: 16:49)

Then comes non-linearity. For a DA converter we said that the step heights should all be the same; this is the ideal scenario, where all step heights are exactly equal. But you will see later that DA converters can be designed using resistances. Now, if the value of some resistance changes due to a fabrication defect or some design error, the step heights may become different, as shown on the right side: this step is larger, this one smaller, this again smaller, this larger, this even larger. This is called the non-linearity error.

(Refer Slide Time: 17:51)

Then you can have another kind of error, called monotonicity error. Normally, as the digital input increases, the output should increase in steps; but sometimes you may see the output decrease instead of increasing. You may ask why this should happen, since whenever the input increases the output should always increase; it can happen due to some other problem. Let us take a very specific example: consider a 3-bit DA converter, a simple example you can work out, with inputs D2, D1, D0 and output Vout.

Now, suppose that while making the connections you forgot to make the D1 connection, or there is a disconnection; D1 is disconnected. As you increase the input and note down the values of D2, D1, D0, the input will go 0, then 1, then 2, then 3, then 4, then 5 and so on, but because D1 is not connected, it will always read 0. So, effectively 0 1 0 becomes 0 0 0, 0 1 1 becomes 0 0 1; similarly 6, which is 1 1 0, becomes 1 0 0. So, if you look at the output, it goes 0, 1, then 0, 1 again, then jumps to 4, 5, then again 4, again 5, then back to 0. Your waveform will be like this; you see there is a monotonicity error, the output sometimes decreasing. And it can happen because of such an error in connection, something somewhere becoming an open circuit.
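The stuck-bit example above can be reproduced directly (a sketch; a unit step size is assumed):

```python
# Reproducing the monotonicity example: a 3-bit DAC whose D1 line is open
# (always read as 0), modeled as an ideal DAC with 1 V per step.

def vout(d, broken_bit=None):
    if broken_bit is not None:
        d &= ~(1 << broken_bit)       # the disconnected bit always reads 0
    return float(d)                   # ideal DAC: 1 V per input step

outputs = [vout(d, broken_bit=1) for d in range(8)]
print(outputs)   # [0.0, 1.0, 0.0, 1.0, 4.0, 5.0, 4.0, 5.0] -- not monotonic
```

The output falls at inputs 2 and 6 even though the input is strictly increasing, exactly the monotonicity error described.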

(Refer Slide Time: 20:07)

And the last kind of error I will talk about is settling time and overshoot error. When you apply a digital input D to a DA converter, the analog output Vout is not obtained instantaneously; there can be a delay before the output stabilizes. This is called the settling time, and during this time there can be an overshoot. Suppose this is the ideal output level; sometimes the output goes above it, comes down, goes up and comes down again before settling; the excursion above is called overshoot, and the excursion below, undershoot.

The settling time is the time after which, in spite of any overshoot or undershoot, the output stays within plus or minus half of the step size (the LSB-equivalent voltage) of the final value. In this example, the settling time extends up to here. So, these are the different kinds of errors that may occur in a digital to analog converter.

So, with this we come to the end of this introductory lecture. From the next lecture onwards, we shall first talk about how to design digital to analog converters, and then we shall talk about analog to digital converters.

Thank you.

Switching Circuits and Logic Design
Prof. Indranil Sengupta
Department of Computer Science and Engineering
Indian Institute of Technology, Kharagpur

Lecture - 48
Digital-to-Analog Converter (Part II)

So, we continue with our discussion on the design of the digital to analog converter. In the last lecture, if you recall, we talked about the basic characteristics of a DA converter, and the different kinds of errors that may occur in such a converter. Today, in this lecture, we shall be talking about some typical designs of a DA converter, how we can actually build one. This is the second part of our lecture.

(Refer Slide Time: 00:47)

So, here we shall be talking about two different types of DA converter, as you can see. The first one is called the weighted resistor type; the second one is called the resistive ladder type. It will become clear when we discuss the details of these designs: the first one is simpler to understand and easier to analyze, while the second one may be a little more difficult to analyze but is more practical, because it is easier to implement. Why? The reason will become clear as we move on to the design details. Let us see how we can implement a digital to analog converter.
(Refer Slide Time: 01:41)

The first kind of a digital to analog converter that we talk about is called weighted resistor
type DA converter. Let us, talk in general about an n-bit digital to analog converter, where we
use n different resistance values and the resistance values are of magnitude R, 2R, 4R up to
2n−1 R. Let us, say if n=4 , there will be n different resistance values and the resistance
values will be R, 2R, 4R and 8R. Similarly, if n=5 , then will be having R up to 16R. if
n=6 , we will have up to 32R.

So, the idea is that you see we require so many different values of resistances. Let us say we
design a 16 bit DA converter. See, for a 4 bit DA converter, 8 is how much 24−1 that is 8.
16−1 15 15
So, for a 16 bit DA converter, we need 2 or 2 , 2 is 32768, 32,000
approximately. Now, the trouble is that when you require so many different values of
resistances, it becomes difficult to manufacture.

If you say I require only 1 value, or 2 or 3 values, of resistance, it is much easier to manufacture them accurately; but if you say I require 10 different values, it becomes difficult. So, this is one of the major drawbacks of the weighted resistor type digital to analog converter.

(Refer Slide Time: 04:02)

Let us see how this kind of DA converter looks; its main drawback, as I said, is that resistances of many different values are required.

(Refer Slide Time: 04:12)

First, see this schematic diagram. Before explaining how it works, let me talk a little about the building block called the op amp; then I will come back to this diagram and explain. Now, op amp is the short form of Operational Amplifier; symbolically we draw it like this, with two input terminals and one output terminal. An op amp is supposed to be an ideal amplifier: its gain is ideally infinity (in practice not infinite, but very large), its input impedance should be as high as possible, ideally infinite, and its output impedance should be as low as possible.

Let us look at a typical connection of an operational amplifier. Suppose I apply a voltage V here through a resistance R1 to the minus input, with a feedback resistance R2 from the minus input to the output Vo, and the plus input connected to ground. Now, because the gain is infinity (the output is the difference of the two inputs multiplied by the gain) and the output voltage is finite, the difference between the two input voltages must tend to 0. Since the plus input is connected to ground, the minus input must also be very close to 0 volts; this is the virtual-ground characteristic.

If that is so, then consider the current flowing through R1; let us call it I1. What will be the value of I1?

I1 = (V − 0) / R1 = V / R1

And because the input impedance of the op amp is very high, no current flows into the op amp itself; this current flows entirely through R2, so

I1 = (0 − V0) / R2 = −V0 / R2

If you solve these, the output becomes

V0 = −(R2 / R1) × V

This is the expression for the output, and this configuration is called the inverting amplifier configuration. Now, let us come back to the DA converter diagram; here the op amp has been used in the inverting amplifier configuration, with the plus input connected to ground, exactly as above.

And there is a resistance Rf connected in the feedback path, from the minus input to the output. At the input, instead of a single voltage, we have connected several resistances to the digital inputs D3, D2, D1, D0. Since these are digital voltages, each can be either 0 or 1; let us say 0 means 0 volts and 1 means 5 volts. And because the minus input is at 0 volts, just as I have said, the input currents can be computed directly.

So, calculating the value of I1 here: the input resistances are R, 2R, 4R and 8R; in the example they are 1 kilo-ohm, 2 kilo-ohm, 4 kilo-ohm and 8 kilo-ohm. The total current flowing into the summing point is

I1 = D3/R + D2/2R + D1/4R + D0/8R

and this must equal −V0/Rf. So V0, or Vout here, on simplifying by taking R outside, becomes

Vout = −(Rf/R) × (D3 + D2/2 + D1/4 + D0/8)

And if you rearrange a little and take 8 outside as well,

Vout = −(Rf/8R) × (8·D3 + 4·D2 + 2·D1 + D0) = −(Rf/8R) × D

You see, for a 4-bit binary number the weights are 8, 4, 2, 1, that is, 2^3, 2^2, 2^1, 2^0; so 8·D3 + 4·D2 + 2·D1 + D0 is nothing but the decimal equivalent D of the number. So, Vout is proportional to D, where the constant of proportionality is −Rf/8R. By controlling the values of R and Rf, the exact voltage scale can be fixed, but the output will always be proportional.
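A numerical check of this analysis (the component values are my assumptions, not from the lecture: R = 1 kilo-ohm, Rf = 1 kilo-ohm, logic 1 = 5 V):

```python
# Weighted-resistor DAC: I1 = D3/R + D2/2R + D1/4R + D0/8R (scaled by the
# logic-1 voltage), and Vout = -Rf * I1 = -(Rf/8R) * 5 * D.

R, RF, VLOGIC = 1e3, 1e3, 5.0   # assumed values: 1 kohm, 1 kohm, 5 V logic

def vout_weighted(d3, d2, d1, d0):
    i1 = VLOGIC * (d3 / R + d2 / (2 * R) + d1 / (4 * R) + d0 / (8 * R))
    return -RF * i1               # inverting current-to-voltage stage

for bits in [(0, 0, 0, 1), (0, 0, 1, 0), (1, 0, 1, 0)]:
    d = bits[0] * 8 + bits[1] * 4 + bits[2] * 2 + bits[3]
    print(d, round(vout_weighted(*bits), 4))
    # Vout = -(Rf/(8R)) * 5 * D = -0.625 V per unit of D
```

Each unit increase in D lowers the output by the same 0.625 V step, confirming the proportionality.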

(Refer Slide Time: 10:48)

So, this is a very simple way to build a digital to analog converter using different values of resistances. As I said, for a 4-bit number you need different weights for the inputs. To generate these weights, we use resistances that are multiples of 2: R, 2R, 4R, 8R, so that the current through the D3 resistance is maximum, the current through the D2 resistance is half of that, D1 half again, and D0 half again. These resistances automatically generate the binary weights. The weighted sum of currents is formed at the summing point, and the op amp basically converts this current into a voltage; it works as a current to voltage converter.

(Refer Slide Time: 11:50)

Let us move on to the second kind, the resistive ladder DA converter, which as I said is more practical. Here also I take the example of a 4-bit DA converter. D3 down to D0 are the 4 bits of the input, and as you can see, only two values of resistance are required: 2R and R. There are a total of 8 resistances; in general, for an n-bit converter I require 2n resistances, of which n+1 are of value 2R and n−1 are of value R.

Now, there is another point to note about this resistive ladder circuit. Here we are not generating a current as in the weighted resistor type we saw earlier, where a current proportional to the input digital word was generated and the op amp converted it into a voltage. Here, what we get at this node is a voltage, and this voltage is proportional to the digital word. Before going into why this voltage is proportional, let me work out another kind of op amp connection.

(Refer Slide Time: 13:48)

Let us look at another kind of an op amp connection. Here what I am saying is that the input
voltage I am applying to the plus input, let us say I am applying a voltage V here. And the
resistances are connected as previously and this point I am connecting to ground. Let us, say
we have R1, here we have R2 here and this is V0. Now, again we have an op amp, because it
is, gain is infinity, virtually infinite and the output voltage should be finite.

Therefore, these two input points must be approximately at the same voltage, because in the
plus input I am applying V, so here also it should be V right. Now, if I again try to compute

V −V
the current flowing here in this direction, it is ground minus , so it will be .
R1 R1

V −V 0
This current should be the same as the current flowing through R2, .
R2

Now, if you solve this (I give it as an exercise for you; I am jumping some steps), V0 will be
given by V0 = (1 + R2/R1) × V, and there is no minus sign here, ok. So, this is called a
non-inverting amplifier. And another property you see in this expression: if I set R2 = 0,
then V0 becomes equal to V. You will see that in this configuration we have done exactly
that; in the feedback path there is no resistance, which means R2 is 0.
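As a quick numeric check of this expression, here is a small sketch (not part of the lecture; the function name and resistor values are mine):

```python
def noninverting_output(v, r1, r2):
    """Non-inverting amplifier: V0 = (1 + R2/R1) * V, with no sign inversion."""
    return (1.0 + r2 / r1) * v

print(noninverting_output(2.0, 1000.0, 3000.0))  # -> 8.0
# Setting R2 = 0 gives the voltage follower: V0 = V.
print(noninverting_output(2.0, 1000.0, 0.0))     # -> 2.0
```

With R2 = 0 the gain becomes exactly 1, which is the voltage follower configuration used in the ladder circuit.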

And this kind of a configuration, where the output voltage is the same as the input voltage, is
called a voltage follower. So, in this circuit we have used a voltage follower built with an op
amp, fine. Now, let us see step by step how the voltage generated at this plus input becomes
proportional to the input digital word. Now, as I said earlier, for this case it is a little more
difficult to analyze.

(Refer Slide Time: 17:00)

So, what we do is start by assuming that exactly one of the inputs is at 1 and the remaining
inputs are at 0. Let us say, to start with the first case, D3 is 1; that means the most significant
bit is 1 and the others are 0, right. Let us see what will happen in this case. So, your
equivalent circuit will be like this: D3 is 1, which means it is at +V (1 means +V, let us say,
and 0 means ground), and the others are at 0.

Now, you know that if from any point you have 2 resistances connected to ground, let us say
2R and 2R in parallel, this is equivalent to a single resistance of value R. Here the same thing
will happen: you see, this 2R and this 2R are in parallel and connected to ground. So, at this
point you equivalently have a single resistance to ground of value R; again, this R and this R
are in series, so it will be equivalent to 2R; again 2R and 2R will be in parallel giving R;
again this R and R will be in series. So, in this way it will go on; I urge you to work this out.
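The repeated parallel-and-series collapse described above can be checked numerically. This is just an illustrative sketch (not from the lecture), using a normalized R = 1:

```python
def parallel(a, b):
    """Equivalent resistance of two resistors in parallel."""
    return a * b / (a + b)

R = 1.0  # normalized resistance value

# Collapse the ladder from the grounded end toward the driven node:
# the terminating 2R in parallel with a rung's 2R gives R, and the
# series R restores 2R, so the pattern repeats at every rung.
req = 2 * R                        # terminating resistor
for _ in range(3):                 # remaining rungs of a 4-bit ladder
    req = parallel(req, 2 * R) + R

print(req)  # -> 2.0, i.e. the rest of the network always looks like 2R
```

This is exactly why the ladder works for any number of bits: at every stage the network already collapsed reduces back to 2R.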

(Refer Slide Time: 18:51)

And finally, what you will get is an equivalent circuit like this. You will reach the last point,
where +V through 2R is here. And on the left side, this entire network becomes equivalent to
a single 2R resistance. Again there is a voltage divider: +V, 2R, 2R, and this is equivalent to
V/2, because the total current will be V/4R, and the voltage Vx will be (V/4R) × 2R, so V/2.
So, you see, if the MSB, the most significant bit, is 1, then the output will be V/2, where V is
the voltage corresponding to that 1, right.

(Refer Slide Time: 19:47)

Next, let us see what will happen if the next bit is 1, the next MSB: 0 1 0 0. So, the
equivalent circuit is this. Here, the MSB is on the right-hand side and the least significant bit
is on the left-hand side. So, this is the circuit. In the same way, these two resistances in
parallel give R; R and R in series give 2R again; this 2R and 2R in parallel give R; and R and
R in series give 2R again.

So, you will be getting an equivalent circuit like this when you reach this +V point: 2R on
the left, and this entire circuit is equivalent to 2R. But now you cannot directly use this
parallel combination, because one end is not at ground; one point is at ground, but the other
point is at a voltage.

Here what you do is apply something called Thevenin’s theorem; this you must have studied
in your school, if you recall. So, let me tell you what Thevenin’s theorem says. I am applying
Thevenin’s theorem at this point. What it says is that if we have a circuit like this, with V,
ground, and 2R, 2R, this entire thing can be replaced by a single resistance in series with a
voltage source. The resistance value can be calculated by setting all the voltage sources to
ground; here, when we do that, 2R and 2R in parallel become R, so it is R.

And what will this voltage value be equal to? The voltage here: if this V is applied here, you
now forget this ground; with +V, 2R, 2R, ground, at this point it will be V/2, so it is V/2 in
series with R. This is what Thevenin’s theorem gives. So, once you do it, you again have 2R,
this V/2 through 2R, and 2R to ground. So, the output voltage is again a resistance divider,
2R and 2R to ground, taken at the middle point; so it will be V/4, Vx will be V/4. So, you
will see, earlier we said when the MSB is 1 the output was V/2, but when the next MSB is 1
it is V/4, right.

(Refer Slide Time: 22:36)

So, if now the next MSB is 1, 0 0 1 0: in a similar way you combine these two, then apply
Thevenin’s theorem at this point, and it will become like this. Then again apply Thevenin’s
theorem at this point, and it will become like this; you have V/4, 2R, 2R, and it becomes
V/8. You see, it was V/2, then V/4, then V/8.

(Refer Slide Time: 23:08)

Lastly, when the least significant bit is 1, the circuit will look like this. Again, similarly, you
start by applying Thevenin’s theorem from the beginning and continue like this: apply
Thevenin’s theorem once here, once here, and again once here; the first time you get V/4
here, the next time V/8, and now with V/8, 2R, 2R, ground, here it will be half of V/8, that is
V/16. So, what we have seen is that for a 4-bit DA converter, if individually one of the 4 bits
is 1, the output voltage comes out as V/2, V/4, V/8, and V/16 respectively.

(Refer Slide Time: 23:55)

Now, what happens when all the four inputs are applied simultaneously? Then we can apply
another principle, called the principle of superposition. The principle of superposition applies
only to linear circuits; I am not going into the detailed definition, but just remember that any
circuit designed using resistances is a linear circuit. The principle of superposition says that
the total output voltage will be the same as the sum of the output voltages obtained with each
of the inputs applied individually, one at a time.

What I am saying is that when all 4 inputs are applied, you calculate it separately. First D3 is
applied, all others are grounded; what is the voltage? Then D2 is applied, all others are
grounded; what is the voltage? Then D1, then D0. You add all of them up, and that will be
the net VA, right.

So, this you have already seen: D3 contributes V·D3/2 (because, we assumed, for D2 it
becomes V/4, then V·D1/8 and V·D0/16). So, VA = V(D3/2 + D2/4 + D1/8 + D0/16); if you
take 1/16 common, it becomes (V/16)(8·D3 + 4·D2 + 2·D1 + D0), which is proportional to
the decimal equivalent of the number. So, VA is proportional to D, and because this is a
voltage follower, Vout will also be proportional to D, right.
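The superposition sum just described can be written out directly. A minimal sketch (the function name is mine, not from the lecture):

```python
def r2r_dac(bits, v=1.0):
    """Output of the 4-bit R-2R ladder by superposition:
    VA = V*(D3/2 + D2/4 + D1/8 + D0/16), bits given MSB first."""
    return sum(d * v / 2 ** (k + 1) for k, d in enumerate(bits))

print(r2r_dac([1, 0, 0, 0]))  # -> 0.5, i.e. V/2 for the MSB alone
print(r2r_dac([0, 0, 0, 1]))  # -> 0.0625, i.e. V/16 for the LSB alone
print(r2r_dac([1, 1, 1, 1]))  # -> 0.9375 = (V/16) * 15
```

Each bit contributes its individually derived value V/2, V/4, V/8, or V/16, and the sum is (V/16) times the decimal equivalent of the input word.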

(Refer Slide Time: 25:46)

This is how this works; this is the calculation, as I said. So, the common factor finally
becomes V/16, and therefore VA is proportional to D. And just one thing to note for an n-bit
DA converter: in general, if you look at the k-th input Dk, the corresponding contribution
will be V/2^(n−k), because for the 4-bit converter it was V/2, V/4, V/8, V/16; it comes out
just like this.

So, with this we come to the end of this lecture. In this lecture, we talked about two different
designs of a DA converter, the digital-to-analog converter. In the first one, the weighted
resistor type, we had many different values of resistances, because of which implementation
and accuracy become a problem. For the resistive ladder type DA converter, although the
number of resistances required is larger, only two different values of resistance are needed,
and these can be fabricated very accurately. So, from the implementation point of view, the
resistive ladder type DA converter is the one most commonly used. In the next lecture we
shall be starting our discussion on the reverse kind of conversion, namely the
analog-to-digital converter.

Thank you.

Switching Circuits and Logic Design
Prof. Indranil Sengupta
Department of Computer Science and Engineering
Indian Institute of Technology, Kharagpur

Lecture – 49
Analog-to-Digital Converter (Part I)

So, in this lecture we start our discussion on the design of the Analog-to-Digital Converter.
Earlier we had seen how we can design a digital-to-analog converter; now we are looking at
the reverse: given an analog voltage, which is continuous-valued, how to convert it into an
equivalent digital word.

(Refer Slide Time: 00:39)

So, this is what is meant by A-to-D conversion; let us see. An analog-to-digital converter
takes an analog voltage VA as input; that means a continuous-valued voltage. Well, it can be
the temperature of an environment: you are sensing the temperature and using a transducer
you are converting it into a voltage; that can be VA, for example. And it will generate a
digital word D as output, such that there is a proportional relationship between D and VA.

For the DA converter we only looked at a couple of designs, but here we will see that for an
AD converter many different designs are possible; we shall be looking at 5 different designs:
flash type, counter type, tracking type, successive approximation, and dual-slope type. But
before going into the designs, there are some basic issues regarding analog-to-digital
conversion that need to be thought about. You see, when you talk about a DA converter, you
are applying a digital input and getting an analog output; now the digital input may be
coming from some register or from the output of some gates. So, during the conversion, the
digital input can be held constant. But now think of an AD converter; think of the example I
talked about: suppose we are measuring the temperature, or any kind of analog signal from
the environment, maybe the temperature of an oven.

Now, it is possible that because of some issue the temperature of the oven is changing or
fluctuating very fast. So, it may so happen that while the conversion is going on, your input
is changing. You would expect the input to remain fixed while the conversion is going on,
since only then does the conversion take place correctly; but the input continues to change.
Then there can be some error in the conversion, ok. This is one issue for AD conversion you
need to look at.

(Refer Slide Time: 03:16)

So, let us talk about the overall schematic here. We have the analog-to-digital converter, but
we need another block before it, a sample and hold circuit, to address the problem that I
mentioned just now. Sample and hold means you sample the input voltage, that is, you record
the voltage value at this point in time; suppose it is 5.3 volts, that is your sampled value.
Then you hold it: somehow keep that 5.3 volts fixed or constant while the conversion is
going on, so that whatever digital output D is generated will be proportional to this 5.3 volts,
ok. If the input voltage changes, then you really do not know what the value of D will be,
fine. So, just as I have said, if the input signal changes during the process of conversion, the
final digital output value may be erroneous; we use this kind of a sample and hold circuit to
sample the input voltage and hold it at a constant value while the conversion is in progress.

You see, the idea is like this: suppose my input signal is varying like this, and let us say I am
sampling the values periodically, here and here. So, I will be getting one sample here
corresponding to this voltage, another sample a little higher corresponding to this voltage, a
third sample somewhere here, a fourth sample somewhere here, and a fifth sample
somewhere here. So, with respect to the D values, I will be getting values like this. I will not
be getting the original waveform, but discretized values of the waveform, these five values;
these are called the sampled values.

(Refer Slide Time: 05:47)

So, the sample and hold circuit: I am not going into its detailed design, which is a little
beyond the scope of this course, but conceptually let me tell you how it works. Inside there is
a capacitor, there are some buffers (these triangle symbols are the two buffers), and there is
some kind of electronic switch which can be turned on and off. While you are sampling, in
the sample phase, the switch is on; switch on means whatever your analog input voltage is,
the capacitor will get charged to that voltage. So, the voltage is retained in the capacitor, and
during the hold phase the switch is turned off. Switch off means whatever voltage or charge
was there on the capacitor will remain; the charge cannot leak or discharge through any path.

So, the input of the buffer will remain at that voltage, and this analog output will contain the
sampled value. Sampling is carried out by closing the switch and charging the capacitor, and
hold is carried out by opening the switch and using the buffer to transfer the voltage on the
capacitor to the output, right. This is how sample and hold works.

(Refer Slide Time: 07:35)

Now, there are some interesting observations and theorems. The natural question to ask is
how fast we should sample the signal. I have a signal and I know some characteristics of it; I
know the frequencies present in the signal. Suppose it is an audio signal, and I am sampling
some speech. I know the frequencies; typically for audio signals, if we capture frequencies
up to 5 or 6 kilohertz, we get good quality of reproduction, ok. So, let us see.

The question of how fast we must sample is answered by a theorem referred to as the
Nyquist theorem. It goes like this; it talks about a band-limited analog signal. Band-limited
means the following: in a signal there can be many frequency components; there may be
some minimum frequency and some maximum frequency, and the signal is a composition of
all these frequencies. Maybe I have a minimum frequency fmin and a maximum frequency
fmax. This range of frequencies is called a band, and band-limited means all frequencies are
lying within this range.

A band-limited analog signal that we are sampling can be perfectly reconstructed from its
samples if (this is important) the sampling rate fs exceeds twice fmax, where fmax is the
highest frequency in the original signal. That means, if we know the maximum frequency
component in the original signal, we must sample at a rate greater than twice that; the
Nyquist theorem specifies this condition, ok. So, if I know my maximum frequency is 6
kilohertz, I must sample at a frequency greater than 12 kilohertz, ok.

If you do not do this, then there will be something called aliasing error; I shall show some
examples of what aliasing is. Aliasing means that because of this error, the sampled values
will appear to have a different frequency than the original signal that you are sampling.

(Refer Slide Time: 10:32)

And there is another kind of theorem, or you can say postulate, by someone called Valvano.
Valvano’s postulate is just a rule of thumb; it says that if we know the maximum frequency
fmax in a signal, then if you sample the signal at a rate at least 10 times fmax, the sampled
values will approximately look like the original. That means you can quite accurately guess
the shape of the signal from the sampled values.

The Nyquist theorem says that you can reconstruct the signal; Valvano’s postulate says that
just by plotting the sampled values you can get a very good idea of the original shape of the
signal, ok. So next I am showing some sampling examples.

(Refer Slide Time: 11:30)

This is with respect to Valvano’s postulate: a 200 hertz signal, let us say a single frequency
component, which is sampled at 10 times that rate. So, here I am showing a sine wave of 200
hertz; these blue dots are the samples, and you are sampling at 2000 hertz.

So, if you do not have the original waveform, if you only have the blue dots, then just by
joining the blue dots with straight lines you will have a very fair idea of how your original
signal looks. This is what Valvano’s postulate talks about.

(Refer Slide Time: 12:26)

Let us take another example where we have a 1000 hertz signal. For the sake of example I
am taking a single frequency, because real signals are a combination of several frequencies,
like an audio signal, say. When I am speaking I am not generating a single frequency; it is a
combination of many frequencies with different amplitudes, all combined together.
Normally, when we do a Fourier transform we can find out the frequency components that
are present in a signal; you need not go into the details.

But you see, here we are sampling this 1000 hertz signal at exactly double the frequency, not
greater than double; let us say exactly double. What will happen? Well, it may so happen
(this is an extreme case) that we are sampling at the dot points which are exactly in the
middle. You see, the signal values are ranging from 1000 to 3000; the minimum is 1000, the
maximum is 3000. But if you sample at the blue points, at double the frequency, you will see
all the sample values are 2000. So, you will have the illusion that your signal is constant,
always at 2000. That it is a sine wave, you are losing that information totally. This is
happening because you are not sampling at a frequency greater than twice fmax; you are
sampling at exactly twice fmax, but it should be greater than that, ok.

This is one nice example.

(Refer Slide Time: 14:18)

Now, let us say we are sampling at a frequency which is much less than required. This is
again a worked-out example: a 2200 hertz signal, which you are supposed to sample at
greater than 4400 hertz, but let us say you are sampling at only 2000 hertz. The dots show
the sampling points. If you just join the dots, you will see that the result also looks like a sine
wave, but the frequency of that sine wave is entirely different from your original waveform.

Your original waveform was at 2200 hertz, but here the reconstructed waveform has a much
smaller frequency, about 200 hertz, right. This is called aliasing: because of an improper
sampling frequency, you may be getting sample values whose apparent frequency is entirely
different from that of the original waveform.

(Refer Slide Time: 15:43)

Next, a 100 hertz signal sampled at a sufficiently high frequency, more than 10 times, in fact
16 times. On the right side I am showing the Fourier transform results. If you take these
sampled values and use something called the fast Fourier transform, you will get the
amplitudes of all the frequency components. Here you see that at 100 hertz you have the
maximum amplitude value coming; you are getting the correct result, and for all the other
frequencies the amplitudes show as 0. So there is no other component; there is only one
frequency component.

(Refer Slide Time: 16:34)

Let us take another example. This is a composite signal which consists of three frequency
components: a frequency of zero, which is a DC component (that means the average value is
not 0; that is called DC), 100 hertz, and 400 hertz, and you are again sampling at a
sufficiently high frequency. So, if you do a Fourier transform, you see at DC, that is 0
frequency, you are getting a component; at 100 hertz you are getting a component; at 400
hertz you are getting a component; and all others are 0. So, you are correctly retrieving the
frequency components, ok, if you sample at a sufficiently high frequency.

(Refer Slide Time: 17:24)

Let us take an example where you are not sampling at a sufficiently high frequency: a 1500
hertz signal sampled at 1600 hertz. You see, if you just join the sampled values, they appear
to have a much lower frequency. And this will be very apparent if you compute the fast
Fourier transform of the sampled values: you will see that it is not 1500 hertz (1500 hertz is
much towards the right); you are getting a very high amplitude at 100 hertz, which is wrong,
right. In your original waveform there is no 100 hertz component; it is a single waveform of
1500 hertz. So, here aliasing is taking place.

So, you should be very careful when choosing the sampling frequency, when you are
designing the input circuit to an AD converter. When you are getting your input ready by
sampling, you should be very much aware of the frequencies of the signals, so that you know
what sampling rate to use, and so that whatever you are sampling and converting corresponds
well to your original waveform, right.

(Refer Slide Time: 18:53)

So, let us look at a complete picture here. Suppose I am showing a schematic of a sample and
hold circuit like this: an input voltage is coming, the switch is turning on and off, and let us
say your input voltage looks like this. And here I am showing a sampling clock kind of thing
which tells you where I am sampling: I am sampling here, again here, here, and here, right.
So, with respect to the input waveform, I was sampling at this point, then again after a gap at
this point, then here, then here.

Now, in between, I am not sampling anything, so I am assuming that the signal is not
changing. If I reconstruct the signal from the sampled values, it will look like this: a sampled
value, then the signal assumed not to change; then again a sampled value, again assumed not
to change; the next sample value is here, again not changing; the next sample value is here,
so there is a drop, again not changing; and the next sample value is here, not changing. So,
you see, your original waveform is like this, and your reconstructed waveform will be
something like this, right.

So, the more frequent your sampling, the more your reconstructed signal will look like your
original waveform, right. This is what you should remember.
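The reconstruction just described, holding each sampled value flat until the next sample arrives, is commonly called a zero-order hold; a small illustrative sketch (values and names are mine):

```python
def zero_order_hold(samples, repeat):
    """Hold each sampled value constant until the next sample is taken."""
    held = []
    for s in samples:
        held.extend([s] * repeat)
    return held

# Five sampled values, each held for 3 time steps:
print(zero_order_hold([1.0, 2.5, 2.0, 0.5, 1.5], 3))
# -> [1.0, 1.0, 1.0, 2.5, 2.5, 2.5, 2.0, 2.0, 2.0, 0.5, 0.5, 0.5, 1.5, 1.5, 1.5]
```

Increasing the sampling rate (more samples held for shorter intervals) makes this staircase track the original waveform more closely.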

(Refer Slide Time: 20:46)

And now that you are trying to design an analog-to-digital converter, your ADC transfer
characteristic will look something like this: along the x axis you plot your input voltage,
which is analog, which is continuous, let us say from 0 to 12 volts; and on the output side
you will be plotting the digital value, starting from 0. So, even if you apply your input
continuously, the output value can only be discrete; there will be some discrete levels. There
will be jumps, 0 to 1, 2, 3, 4; this kind of a staircase waveform will be there, right.

This we shall be discussing in detail later, when we talk about the design of AD converter
circuits, which we will start discussing from the next lecture onwards. So, we come to the
end of this lecture, where we talked about a very important facet of analog-to-digital
conversion: that of sampling the input signal and holding it at a stable value while the
conversion is in progress.

Also, you should be aware of the frequency components of the input signal in order to decide
how fast to sample. You cannot do this blindly; if you do, then aliasing and the other
problems I talked about can occur, which may corrupt the sample values that you are
converting. Ultimately these sampled values will be fed to your digital circuit, or to some
other subsystem where some computation will be carried out; if the data you are feeding is
not consistent, then the computation may also be wrong.

So, I talked about several different types of AD converters; from the next lecture we shall be
discussing the designs of those AD conversion techniques.

Thank you.

Switching Circuits and Logic Design
Prof. Indranil Sengupta
Department of Computer Science and Engineering
Indian Institute of Technology, Kharagpur

Lecture - 50
Analog-to-Digital Converter (Part II)

So, we continue with our discussion on Analog-to-Digital Conversion in the present lecture.
This is part 2 of our lecture on the AD converter. Let us look at some of the general
characteristics first; there is something called “Resolution” that we want to talk about for an
AD converter first. Recall that for a DA converter, where we give a digital input and obtain
an analog output, we said that the step height was a measure of the resolution.

(Refer Slide Time: 00:39)

So, here also: what is the minimum change in the input voltage that will result in a one-step
jump in the output? Because here the output is digital. You see, here our input is the analog
voltage VA, but our output will be digital. Digital means there will be some discrete voltage
or digital levels; the output cannot change continuously.

So, as you change your input continuously, your output will change in steps like this. The
resolution tells you what this one step height is, and what input voltage change corresponds
to it. So, if A is the full scale voltage of the AD converter, then the resolution is the full scale
voltage divided by the total number of steps, A/2^n. Let us take an example: suppose I have
an 8-bit AD converter with a full scale voltage of 1 volt. Then the resolution is 1/2^8 volts,
which comes to about 3.9 millivolts; and if you multiply it by 100, you get it as a percentage,
right. This is what your resolution talks about.

(Refer Slide Time: 03:17)

Then comes the dynamic range. What is the dynamic range of an AD converter? Suppose I
have designed an AD converter for some signal; there is an analog signal which is coming as
input. I know the minimum voltage level and the maximum voltage level in my analog
signal, and I am designing my AD converter accordingly. So, for the minimum and the
maximum amplitudes to be measured at the input, the ratio between the two defines the
dynamic range. Now there are a few things, let me tell you.

(Refer Slide Time: 04:15)

So, for a linear AD converter, meaning the output and input are proportional, the dynamic
range can be calculated from the number of bits; an 8-bit AD converter will have a dynamic
range of 256, because there are 256 steps, so when you divide the maximum measurable
voltage by the minimum, that will be equal to 256, right. This you can calculate for a linear
converter. But for some applications where the input range is very large, you may need to
add some nonlinearity to the AD converter; we are ignoring that for the time being, just
mentioning that for a large dynamic range, some nonlinearity may be introduced.

Now, some terms: a resolution of 8 bits, as I mentioned earlier, and an n-bit dynamic range.
Let us take an example: suppose we talk about an 8-bit resolution and a 12-bit dynamic
range. Well, 8-bit resolution means 1/2^8, which we just talked about earlier; if you multiply
it by 100, it comes to 0.39 percent, right: a resolution of 0.39 percent. And a 12-bit dynamic
range means how many steps? 2^12, which is 4096, so approximately 4000. So the range is
1 to 4096.

So, when you combine the resolution with the dynamic range, it means that you specify a
range: I want to measure over a range of 1 to 4096 with an accuracy, or resolution, of 0.39
percent. Both are important: one says what is the range over which you are measuring, and
the other says what is the accuracy with which you are measuring it, ok. Dynamic range and
resolution are both important in that respect.
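Putting the two figures together, a sketch of the arithmetic only:

```python
n_resolution, n_range = 8, 12

resolution_percent = 100.0 / 2 ** n_resolution  # step size as % of full scale
dynamic_range = 2 ** n_range                    # number of distinguishable steps

print(resolution_percent)  # -> 0.390625, about 0.39 percent
print(dynamic_range)       # -> 4096, so the range is 1 to 4096
```

So the specification "8-bit resolution, 12-bit dynamic range" reads as: measure over a 1-to-4096 range with about 0.39 percent accuracy.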

(Refer Slide Time: 06:53)

Conversion time and bandwidth. Conversion time says the following: suppose I have an AD
converter; how fast it can convert depends on the design of the AD converter. Suppose I am
applying an analog voltage VA, and at the output I am getting a digital word D; the
conversion takes a time Δ. Now, depending on the technology in which you are building
this, the conversion time can be a few nanoseconds; it can also be as high as a few
milliseconds, right.

Secondly, as I said earlier, you will be using a sample and hold circuit at the input; that will
determine the input bandwidth, that is, the maximum rate at which you can apply the inputs;
the conversion frequency will depend on that. So, these are some of the parameters of an AD
converter; I am not going into much more detail on this.

(Refer Slide Time: 08:11)

Fine, let us talk about ADC transfer characteristics for the analog-to-digital converter. I said
that on the input side you have your input voltage, and at the output you are getting a digital
word; ideally it will be a staircase. Now there can be possible errors. The first is an offset
error, which I already mentioned earlier with respect to the DA converter; it is very similar:
instead of the ideal staircase, the curve is either shifted up or shifted down. There is a steady
offset, a positive offset or a negative offset; there can be such an offset error. Why this offset
error occurs depends on the actual technology with which you are building it.

(Refer Slide Time: 09:17)

709
Fine, then comes linearity, or rather nonlinearity. Linearity means, you see, quite similar to
the DA converter, the step heights should be equal. But in this case you may find that some
of the step heights are unequal. Which means, you see, when you show the steps like this: if
you join the middle points of these steps, in the ideal case it will be a straight line, as this
blue line shows. Because of the nonlinearity error, if you join the middle points, the result
may look like a curve, as this red line shows; this shows the nonlinearity error, right. So, this
kind of nonlinearity error can be there.

(Refer Slide Time: 10:35)

And there can be differential nonlinearity; this is another kind of error. Differential
nonlinearity says that the step width corresponding to the least significant bit should be
constant across the output range, but in a real AD converter it is not; for an acceptable
converter, the difference from the ideal value should not exceed half of the LSB value. Now,
typically, how do we check this? We apply a large number of random inputs covering the
whole range; the frequency histogram of the output codes should then be equal across all
values.

But what we will see, if there is differential nonlinearity, is that it will not be the same. The
ideal frequency histogram should be flat, as the black line shows, but with differential
nonlinearity you will see that it varies, ok. These are some of the errors related to AD
converters; you do not have to worry too much about these things now, just remember that
this kind of errors can take place.
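The random-input histogram check described above is often called a code-density test. The sketch below models only an ideal converter (the ADC model and all names are mine, not from the lecture), so its histogram comes out nearly flat:

```python
import random

def code_density_test(adc, n_bits, trials=100000):
    """Apply uniform random inputs over the full scale and count how
    often each output code occurs; a flat histogram means no DNL."""
    counts = [0] * (2 ** n_bits)
    for _ in range(trials):
        counts[adc(random.random())] += 1
    return counts

random.seed(1)
ideal_adc = lambda v: min(int(v * 8), 7)   # ideal 3-bit ADC, 0..1 V full scale
counts = code_density_test(ideal_adc, 3)
print(max(counts) / min(counts))  # close to 1.0 for an ideal converter
```

For a real converter with differential nonlinearity, some codes would collect noticeably more or fewer hits than the others, and the ratio would move away from 1.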

(Refer Slide Time: 11:47)

Now, let us come to the design of the different types of AD converters; this is more
interesting. The first kind of AD converter design we talk about is something called a flash
type AD converter. Let me tell you, the flash type is the fastest type of AD converter that is
known, but the problem is that it requires too much hardware. Let us see how. As I said, it
provides the fastest conversion, but it requires a large amount of hardware. In what way?
Suppose I have an n-bit AD converter: I need 2^n − 1 comparators, 2^n resistances, and one
priority encoder with 2^n − 1 inputs. Let us talk a little bit about this; well, we talked about a
comparator here, right.

What is a comparator? This is not a digital comparator; what we are talking about here is an
analog comparator, ok. Symbolically it looks similar to an op amp, but this is not an op amp;
this is a comparator. The way it works is that the inputs are two voltages V1 and V2, and the
output is a digital output f. The comparator works as follows: if V1 is greater than V2, that
means the plus input is greater than the minus input, then the digital output will be 1. If V1 is
less than V2, the output is 0. But remember that when the inputs are equal, the output is not
defined; it may be 0 or 1, it does not matter. It is defined only when one input is greater than
or less than the other, ok; this is how a comparator works. And the priority encoder you have
already seen earlier. Let us see how these things can fit in place to create an AD converter.

(Refer Slide Time: 14:19)

Well, on the left-hand side, let us look at the design of a 3-bit AD Converter. So, according
to what we showed in the previous slide, we need 2^3 − 1 = 7 comparators; there are 2^3 = 8
resistances; and we need a priority encoder with 7 inputs (one less than 8), whose output will
be three bits. Let us see how this works. The first thing you see is that I have a long
resistive network starting from a reference voltage.

(Refer Slide Time: 15:17)

I have a resistive network like this. So, how does it work? Count the resistances: 1, 2, 3, 4,
5, 6, 7, 8. Talk about this bottom point: what will be the voltage here? Below it is ground,
and this resistance is R; the total resistance on top is 7R. And the voltage applied across
the whole chain is Vref.

(Refer Slide Time: 15:39)

This is a resistance divider. So, the voltage here will be (R/8R) × Vref. So, the voltage at
this point will be Vref/8; I am just writing only the voltage, V/8. For the next point, below
it there is 2R and on top there is 6R. So, now, the voltage will be (2R/8R) × Vref, that is,
2V/8, right; that is the voltage at this next point. So, what I mean to say is that the
voltage at this lowest point will be Vref/8.

(Refer Slide Time: 16:43)

The voltages at the successive points will be 2V/8, 3V/8, 4V/8, 5V/8, 6V/8, and at the topmost
point 7V/8, where this V is actually Vref, right. Now, what are these comparators doing? It
depends on the input voltage. Suppose the input voltage is less than Vref/8. The analog input
Vin is connected to the plus inputs of the comparators, and these reference voltages are
connected to the minus inputs. So, if the plus input is less than all of them, the outputs
A, B, C, D, E, F, G will all be 0. In such a case, the priority encoder generates a digital
output of 0 0 0; that is the lowest voltage level. Suppose now my voltage is greater than V/8
but less than 2V/8; then only the G comparator output will be 1, all others will be 0, right.
You see, G is 1, all others are 0, and this is encoded as 0 0 1.

Now, if the input is greater than 2V/8 but less than 3V/8, then G equals 1 and F will also be
1, but all others are 0. You see, well, if F is 1, obviously G will be 1; that is why I am
showing it as a don't care. Because it is a priority encoder: if the input is greater than
2V/8, obviously it is also greater than V/8, ok. So, in this case the output is 0 1 0.
Similarly, if E is 1, it means the input is greater than 3V/8, and the output is 0 1 1. If D
is 1, it means greater than 4V/8: 1 0 0. Then C means greater than 5V/8: 1 0 1. Similarly, B
means greater than 6V/8: 1 1 0. And if the input is greater than 7V/8, A is 1, and the output
is 1 1 1. And this is a priority encoder, meaning that if a particular bit is 1, then the
lower-priority ones you need not look at; you can directly encode it. So, you see, just using
a resistive network I have generated all the reference voltages. And using so many
comparators, I am comparing the input voltage in parallel with all these reference voltages.
And whatever output comes, using a priority encoder I am encoding it into a digital word.
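The ladder-and-encoder behaviour just described can be sketched in a few lines of Python (an illustrative model with names of my own, not a circuit simulation):

```python
def flash_adc_3bit(vin, vref):
    """3-bit flash ADC model: 7 comparators against the ladder taps
    k*vref/8, followed by priority encoding of the highest tripped one."""
    taps = [k * vref / 8 for k in range(1, 8)]    # Vref/8 ... 7Vref/8
    comparators = [vin > t for t in taps]         # G (lowest) ... A (highest)
    code = 0
    for k, tripped in enumerate(comparators, start=1):
        if tripped:                               # highest tripped tap wins
            code = k
    return code

# Between 2Vref/8 and 3Vref/8, F and G trip: code 0 1 0, i.e. 2.
assert flash_adc_3bit(2.5, 8.0) == 2
# Above 7Vref/8, all comparators trip: code 1 1 1, i.e. 7.
assert flash_adc_3bit(7.9, 8.0) == 7
```

Note that in hardware all seven comparisons happen in parallel; the loop here only stands in for the priority encoder.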

So, you see, this conversion is fast because it involves just a single pass: the delay of the
comparators plus the delay of the priority encoder, that is it. There is no other delay in
this conversion. So, this is very fast, but the problem is that, suppose I want to design a
16-bit AD converter: you cannot afford to use about 65000 comparators, not possible, ok. So,
this can be used only for very small values of n, right.

(Refer Slide Time: 20:47)

Let us see more practical kinds of converters which you can use for larger values of n. Let us
look at something called a counter type AD converter. So, I have shown the schematic diagram
on the left; here again we are using a comparator, but only one comparator. And we

are using a binary counter; there is a clock which is coming continuously. Let us see what
happens, this counter value is initialized to 0. Suppose it is a four bit counter, we initialize to
0 0 0 0 and we are using a DA converter also. We already know how to design a DA
converter. So, in this approach we are using a DA converter to design an AD converter ok.

The idea is very simple: we initialize the counter with 0 and feed it to the input of the DA
converter. The DA converter generates the output voltage equivalent to 0, the lowest voltage.
You compare this lowest voltage with the input voltage that you want to convert. If the input
voltage is larger, the comparator output will be 1. So, when the clock comes, the counter will
count; the counter will become 1. So, the DA converter will increase the output voltage by one
step. Again that voltage will be compared; if Vin is still larger, the output of the
comparator will still be 1, so again the counter will count by 1 more, to 2.

So, in this way the counter will go on counting 0, 1, 2, 3, 4, and the output of the DA
converter will also go step by step; it will go on increasing. And continuously the comparator
will be comparing that increasing voltage with your input voltage. As soon as the output of
the DA converter crosses the input analog voltage, it will stop, because the comparator output
will become 0, the AND gate output will become 0, and the counter will stop counting.

(Refer Slide Time: 23:13)

So, schematically you can show it like this: the output of the DA converter will increase step
by step, and as soon as it crosses Vin, let us say this is the input voltage level you have
applied, the counter stops. And at this point, whatever value is in the counter is the
equivalent digital value; let us call it D, this makes more sense. Vin will be proportional to
D. So, you are getting the value of D, right: 0, 1, 2, 3, 4. But the problem here is that in
the worst case, for an n-bit conversion, the counter will have to count 2^n times; it is an
n-bit counter. So, the number of clock pulses required can be as large as 2^n.

So, you can see why I said that the flash type converter was faster: everything was done in a
single step. But here there is a clock you are applying, the clock a maximum of 2^n times; for
a 16-bit converter, 2^n can be about 65000. So, 65000 cycles may be required to complete the
conversion, right. So, whatever we have said is actually mentioned here.

(Refer Slide Time: 25:03)

So, we use a DA converter, a counter, and a comparator. Before the conversion starts, the
counter is reset to 0. The counter output is fed to the input of the DA converter, and the DA
converter output is compared with the input voltage that you are applying. If the input
voltage is greater than the DA converter output, the counter increments by 1; if it is less,
the counter does not increment anymore and the conversion is complete. As I said, the
worst-case conversion can take 2^n clock pulses.
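The counting loop just described can be modelled in Python (an idealized DA converter with one step per LSB; the names are mine):

```python
def counter_adc(vin, vref, n):
    """Counter-type ADC model: count up from 0 until the ideal DA converter
    output reaches vin. Returns the digital code and clock pulses used."""
    lsb = vref / (1 << n)                  # one DA converter step
    count, clocks = 0, 0
    while count < (1 << n) - 1 and count * lsb < vin:
        count += 1                         # comparator = 1 -> keep counting
        clocks += 1
    return count, clocks

# Vin = 5 V with Vref = 8 V and 3 bits: 5 clock pulses are needed;
# the worst case would be 2**3 - 1 pulses.
assert counter_adc(5.0, 8.0, 3) == (5, 5)
```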

(Refer Slide Time: 25:27)

Now let us modify the design a little bit, as we move to a method called the Tracking Type AD
converter. Let us see: in a simple counting type AD converter, what are we doing? We have a
counter starting from 0; we count up, up, up, and we stop as soon as we cross the level of the
input voltage. That count value will be my digital output.

Now, suppose what I do: instead of a normal binary up counter, we use an up/down counter. Here
we say that we want to continuously track the input voltage; we do not want to start counting
from 0 every time. One problem with the counting type converter was that you always start
counting from 0: 0, 1, 2, 3, 4. Suppose your input voltage is equivalent to a count of 1000;
you will have to count 1000 times, 1000 clock pulses. For the next data you sample, maybe it
was 1005; again from 0 you will have to count to 1005. The next sample value was, let us say,
1001; again from 0 you have to count to 1001.

So, it was very time consuming. So, what I am saying is that suppose the input waveform is
slowly changing. We know that from 1000, it can go up to say 1005, or down to say 995; not
much change. So, why start from 0 every time? If you remember the last count value, why not
start from there and either go up or go down? So, you use an up/down counter instead of a
normal up counter. Your modified schematic will look like this: you use an up/down counter,
and the rest is very similar. The output of the DA converter goes to the comparator, and the
comparator output, which can be either 0 or 1, decides the direction of counting.

718
So, when will it be 0? It will be 0 if Vin is smaller, ok, that is, the DA converter output is
larger. So, if the DA converter output is larger, I will have to reduce it. So, 0 means I will
be counting down; I will reduce the value of the DA converter output. But if it is 1, which
means Vin is larger and the DA converter output is smaller, I have to increase the DA
converter value.

So, if it is 1, I have to count up. So, there is no concept of stopping the count; I am
continuously doing it. If it is 0, the counter counts down; if it is 1, it counts up. This is
called Tracking Type, because the DA converter output tries to track the variation of the
input waveform continuously. So, as this example shows, suppose this is my input voltage, this
straight line, just an example: suppose the input voltage is changing linearly, in a straight
line. Well, the first time you start with 0: 0, 1, 2, 3, 4, 5, and here you cross. So, here
you have one conversion. After that, if you go up, you see that the input voltage is less than
that; that is why you decrease, here is a down count. Next step, it is higher; you increase.
So, again it is crossed. Next step, you decrease. Next step, your input voltage has increased,
so it will be increasing here; it crosses again, it decreases again, it increases two steps.

So, it is continuously trying to track; it is not going back to 0 and searching again from 0,
right. So, this tracking type of AD converter works much faster as compared to a simple
counter type. For those of you who know programming, the counter type is like a linear search:
you are searching for a certain value of Vin starting from 0: 0, 1, 2, 3, 4, 5, 6, going up.
Now the tracking type says, once you have found it, for the next time you start from there and
either go up or go down. The tracking type works well for signals which are changing
relatively slowly, ok, because if the signal changes very fast, it may not be able to track.
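The up/down tracking behaviour can be sketched the same way (a toy model that clocks once per input sample; a real tracker keeps clocking between samples; names are mine):

```python
def tracking_adc(samples, vref, n):
    """Tracking ADC model: an up/down counter follows the input, moving one
    LSB per clock instead of restarting from 0 for every sample."""
    lsb = vref / (1 << n)
    count, codes = 0, []
    for vin in samples:
        if count * lsb < vin and count < (1 << n) - 1:
            count += 1                     # comparator = 1 -> count up
        elif count > 0:
            count -= 1                     # comparator = 0 -> count down
        codes.append(count)
    return codes

# A slowly rising input is followed one step at a time.
assert tracking_adc([0.5, 1.5, 2.5, 3.5], 8.0, 3) == [1, 2, 3, 4]
```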

So, whatever we have said is here: we use a binary up/down counter; if the comparator output
is 1, we increment the counter, and if it is 0, we decrement the counter. As I have said, this
is suitable for relatively slowly varying waveforms. The reason is that if Vin changes too
fast, the counter may not be able to track the changes, because we are basically using a clock
to either count up by 1 or count down by 1.

(Refer Slide Time: 31:25)

If it changes in a jump by 100 steps, again you will have to count up 100 steps one by one. It
will not be so efficient, right. But in the worst case, the conversion time is still 2^n,
right. So, we come to the end of this lecture. In the next lecture, we shall be looking at a
similar kind of technique, but a scheme which works much better in the worst case, because
here also we said that in the worst case I may have to start from 0 and go up to a maximum of
2^n.

But in the next method, which we shall be talking about in the next lecture, we will see that
for an n-bit conversion, a maximum of n clock cycles will be required for one conversion.
Suppose I have a 16-bit converter: I do not need 65000, that is 2^16, clock cycles; I need a
maximum of only 16 clock cycles. It will be much faster and scalable; for larger values of n
you will be able to use it. This will be discussed in the next lecture.

Thank you.

Switching Circuits and Logic Design
Prof. Indranil Sengupta
Department of Computer Science and Engineering
Indian Institute of Technology, Kharagpur

Lecture – 51
Analog-to-Digital Converter (Part III)

So, we continue with our discussion on Analog to Digital Converters. If you recall, in the
last lecture we looked at various techniques for AD Conversion. We started with the flash AD
Converter, which was very simple in concept and very fast, but requires a lot of hardware: a
lot of comparators, a lot of resistances, and also a priority encoder.

Then we looked at the counter-based AD Converters: one was based on an up counter, the other
on an up/down counter. Now, in both of these methods, if the total number of bits is n, the
number of steps can be 2^n. So, in the worst case, the counter has to count from 0 up to 2^n,
and the worst-case conversion complexity was 2^n clock cycles.

(Refer Slide Time: 01:27)

Now, today, in the third part of our discussion, we shall be discussing a new technique,
another technique of counter-based AD Conversion. This is called the Successive Approximation
Type ADC. Let me first intuitively tell you what we are trying to do in this kind of
converter. See, here we have 2^n steps; the basic idea is as follows: we do not start with 0
and count up, up, up till we reach the level of the input voltage. Rather, we start with the
middle of 0 to 2^n; we start with 2^(n−1), that is, the middle value, and check whether that
middle value is greater than or less than the input voltage. If it is less, we have to move
further up; if it is greater, we have to move below.

So, in this one comparison, we have reduced the size of the total list to half. So, either we
have to look at the upper half or the lower half. So, we repeat the same process. Suppose we
see that we have to go to the upper half, we again look at the middle of the upper half. So,
again with respect to the upper half, we have to look into the upper half of that or the lower
half of that. So, the size of the list is becoming half at every step. So, the good thing here is
that, instead of 2^n comparisons or trials, we require only n trials; this is a very big
advantage.

So, let us look into this in detail. This method is called the Successive Approximation Type
AD Converter. Now, for those of you who are familiar with the method of binary search in
software, this method does something similar to that. If you have a sorted list of numbers and
you want to search for a number, you start with the middle; you take a decision either to look
into the left half or the right half, again look into the middle of that half, and so on. So,
for n numbers, the complexity of searching becomes log n (log to the base 2); the same thing
happens here, ok.

So, here, as an analogy, you can assume that the number we are searching for is the input
voltage, ok. So, we start with the middle of the list, as I have said, and after comparing
with the middle of the list, we have to look at either the upper half or the lower half. And
as I said, if there are 2^n steps, the advantage is that the number of steps required is
reduced to only n. So, what modifications are required to have this? See, earlier we used a
counter, either an up counter or an up/down counter. Here we use a modified version, not
exactly a counter; this is something called a successive approximation register, which
simulates this binary search in hardware. Let us see how it works.

Let us take an example of a four-bit successive approximation register. We start by making the
most significant bit 1; that means, 1 0 0 0. So, in decimal, what does 1 0 0 0 mean? It is 8.
We have a range from 0 to 15, so 8 represents approximately the middle point; we search the
middle. Then we check the output of the comparator, whether the input value which you want to
convert is less than or greater than this value. If it is less, we have to go down; what do we
do? We change this 1 to 0 and set the next bit to 1, giving 0 1 0 0. But if we find that the
input value is greater, we have to move up; so we leave this bit as 1 and set the next bit to
1, giving 1 1 0 0.

So, we always set the next bit to 1, but the current bit we either reset to 0 or leave as it
is. You see, 0 1 0 0 means 4, which is the middle point of the lower half, and 1 1 0 0 is 12,
which is the middle point of the upper half. This process we go on repeating, ok. So, if there
are n bits in this word, the number of steps required will be n only. So, instead of 2^n, we
have a technique here where you require only n steps, right.
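The bit-by-bit search above can be written as a short Python sketch (an idealized unit-step DA converter; function and variable names are my own):

```python
def sar_adc(vin, vref, n):
    """Successive approximation model: tentatively set each bit from the MSB
    down, and keep it only if the trial DA output does not exceed vin.
    Exactly n trials for n bits."""
    code, trials = 0, []
    for bit in range(n - 1, -1, -1):
        trial = code | (1 << bit)                 # set the next bit to 1
        trials.append(trial)
        if trial * vref / (1 << n) <= vin:        # comparator: DA <= Vin?
            code = trial                          # keep the bit
    return code, trials

# 4-bit example: Vin corresponding to 1 0 0 1 (9) is found in 4 trials,
# visiting 1000, 1100, 1010, 1001.
code, trials = sar_adc(9.2, 16.0, 4)
assert code == 0b1001 and trials == [8, 12, 10, 9]
```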

(Refer Slide Time: 06:45)

Let us pictorially show it in terms of a decision diagram, as shown on the left. As I said,
for a 4-bit converter, I am taking the example of a 4-bit converter. We start with 1 0 0 0.

Suppose the input voltage Vin that we have applied corresponds to the digital output, let us
say, 1 0 0 1. So, 1 0 0 1 is my expected output. The searching will proceed as follows: you
first make a comparison here; there will be a comparator which will be comparing the current
voltage from the DA Converter with the input voltage. If we see that the input Vin is greater,
then we have to move upwards; you have to follow this link.

And the next value you have to compare is 1 1 0 0, which is 12; this will be the middle of the
upper half, right. This is the lower half; this is the upper half. Here we check and find that
the input voltage is less, because it is lower, so you will be following this path. What do
you do? Since you were moving up in the first case, the first bit remained 1 and the second
bit was made 1: 1 1 0 0. But at the next step, if you see that the input is less, what do you
do? You leave the first 1 as it is, the current bit you reset to 0, and the next bit you set
to 1, so it becomes 1 0 1 0. Then again there is a comparison; you find that the input is
less, so you follow this path. Again the previous bits remain, the current bit which was set
to 1 is reset to 0, and the next bit is set to 1: 1 0 0 1. And here you finally see it is
greater, and you arrive at the final value.

(Refer Slide Time: 09:03)

So, you see, you require 1, 2, 3, and 4: only four comparison steps, right. Only four
comparison steps are required to find the final digital value. Let us see the overall
schematic now. The schematic looks very much similar to a counter type AD Converter; the only
difference is that here, instead of a counter, we use a successive approximation register.
Well, note the convention showing which bit is the MSB and which is the LSB; these are the 4
bits. As I said, we start with 1 0 0 0. The corresponding output of the DA Converter is
compared by a comparator with the input voltage. Depending on whether the output of the
comparator is 0 or 1, the successive approximation register takes the decision. You start with
1 0 0 0; if the comparator output is 0, meaning Vin is smaller, you reset this bit to 0 and
make the next bit 1.

(Refer Slide Time: 10:21)

If it is 1, meaning Vin is greater, then you keep this bit 1, make the next bit 1, and you
continue this. There are two additional signals, as you can see: Start-of-Conversion and
End-of-Conversion. At the beginning, when you want to start a conversion, you set this
Start-of-Conversion signal to 1; the Successive Approximation Register will be initialized to
1 0 0 0 and the process will start. After the conversion is over, the End-of-Conversion, which
is an output signal, will be activated.

(Refer Slide Time: 10:43)

So, you know that the conversion is over, right. Conceptually this is very simple. So, let us
take a very small example of a 3-bit converter and see how the signals change with time while
the conversion is going on.

Suppose I am showing the input-output curve: along the X-axis I am showing the number of
iterations, and along the Y-axis the output voltage of the DA converter. The successive
approximation register is feeding the data to the DA converter, and this DA Converter output
is compared with Vin, right. So, this output of the DA converter we are plotting along the
Y-axis. And suppose this is my input voltage Vin.

So, in the first iteration we shall be starting with 1 0 0. You see, this is the level of the
voltage: it is (4/8)Vref, half of the full scale; this corresponds to 1 0 0. Then you compare
with Vin and find that, well, Vin is greater, so you have to move up. So, the next value will
be 1 1 0; in the second iteration, here you put 1 1 0.

So, 1 1 0 will be equivalent to a level of (6/8)Vref. Here you see Vin is smaller; Vin is
lower than this. So, in the 3rd iteration you reset this 1 to 0 and make the next bit 1: 1 0 1,
which is 5, the lower level. Comparing with this, you find Vin is greater. So, in the last
step this bit is kept, and the final result will be 1 0 1; this 1 0 1 is the final result.

So, you see, during the conversion the output of the DA Converter will fluctuate widely, ok.
Unlike a simple counter type, where the voltage rises steadily from 0 up to the input voltage
level, here, since you are doing something like a binary search, the voltage level will
fluctuate like this and then stabilize, ok. This is how this successive approximation AD
converter works.

(Refer Slide Time: 13:31)

Fine. Now, the last kind of converter that we want to talk about here is something called the
Dual-Slope Type AD Converter. Now, the Dual-Slope type AD Converter is quite accurate, but the
problem is that, because of the op-amp based circuits you are using here, the speed of
operation is a little slower, ok. Let us try to understand how it works.

These are the components required; there are six things, which you can identify in this
diagram: an op-amp based integrator, which I shall talk about (this is your operational
amplifier); a binary counter; a clock which you are applying; some control logic; a voltage
comparator; and some electronically controlled switches. There are two switches, one is S1,
the other is S2. The switch S1 works as follows: this input of the op-amp is either connected
to the input voltage from here or connected to a reference voltage out here.

(Refer Slide Time: 15:33)

So, it switches like that. And the second switch S2 is just an on-off switch: either it is
connected or not connected, ok. Let us very briefly talk about how an op-amp based integrator
works, without going into the mathematical details.

Well, you have already seen how an op-amp can be used as an amplifier, non-inverting and
inverting. Now, in an integrator, in the feedback path, instead of a resistance we use a
capacitance. So, this looks very much similar to an inverting amplifier: this is Vin, this is
Vout. So, what happens here? Again, without going into the derivation, look at the output
voltage. Suppose initially the capacitor is discharged; there is no charge, because this point
is at 0 volts (the other input is grounded), so V0 will also be 0. So, let us just plot V0.

This is the 0 level, so it starts at 0. Now, with time, with a time constant of RC, the
capacitor will go on charging, and because the circuit is inverting, the output will go in the
negative direction, ok. This is how the integrator works, right. And just one point to note:
the slope of this curve, that is, how far the voltage drops within a particular time T, will
be proportional to the input voltage. This is something you should keep in mind. Whatever
voltage you are connecting, the rate of change of the output voltage will be proportional to
that. So, we are basically exploiting this principle, ok.
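Written as a formula, the ideal integrator output with a constant input is V0(t) = −(Vin/RC)·t, so the amount the output drops in a fixed time T is proportional to Vin. A one-line check (component values here are arbitrary):

```python
def integrator_output(vin, r, c, t):
    """Ideal op-amp integrator with constant input: V0(t) = -(vin/(r*c)) * t."""
    return -(vin / (r * c)) * t

# Doubling Vin doubles the drop after the same time T.
v1 = integrator_output(1.0, 10e3, 1e-6, 0.01)   # Vin = 1 V, RC = 10 ms
v2 = integrator_output(2.0, 10e3, 1e-6, 0.01)   # Vin = 2 V
assert abs(v2 - 2 * v1) < 1e-12
```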

So, let us move on; we will explain how it works.

(Refer Slide Time: 17:37)

So, just move back once: here we are applying either a reference voltage or the input voltage,
ok. Now, the idea is as follows. We are showing the output voltage; well, this is actually
negative, but we are showing it in the positive direction in our plot. So, this is the output
of the integrator; we are calling it V-INT. The operation proceeds like this.

Initially, at time t equal to 0, we connect the unknown voltage to be converted at the input.
We connect Vin and allow the integrator to operate for a fixed time duration T-INT; T-INT is
fixed, ok. So, the voltage will change and will reach a certain point here. After this T-INT,
we connect the input to the reference voltage; the reference voltage is fixed and is given.
And we measure after how much time the voltage comes back down, because the reference voltage
has the opposite polarity to the input.

So, if one is positive, the other will be negative, right; the polarities are different.

So, the output voltage of the integrator will change in the reverse direction, and we measure
after how much time it reaches 0, right. Now, the calculation is very simple. In the first
part, the level reached is proportional to the input voltage Vin times the time T-INT; in the
second part, the voltage falls at a rate proportional to the reference voltage. So, if you
solve, the time T-DE-INT will be proportional to Vin/Vref; now, because Vref is constant, this
will be proportional to Vin. So, if we can measure this time, that will give our corresponding
digital output. If we use a counter to count this time in some way, the count will be
proportional to Vin, ok. This is how it works.
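The relation just described can be checked with a tiny model: since Vin·T-INT/RC = Vref·T-DE-INT/RC, the RC term cancels, and counting T-DE-INT in clock cycles gives a digital value proportional to Vin (the numbers below are arbitrary):

```python
def dual_slope_count(vin, vref, t_int, f_clk):
    """Dual-slope model: integrate vin for a fixed t_int, then de-integrate
    with vref until zero. t_deint = t_int * vin / vref; RC never appears."""
    t_deint = t_int * vin / vref
    return int(t_deint * f_clk)        # counter value = digital output D

# D is proportional to Vin: doubling Vin doubles the count.
d1 = dual_slope_count(1.0, 8.0, 0.01, 100e3)    # 10 ms integrate, 100 kHz clock
d2 = dual_slope_count(2.0, 8.0, 0.01, 100e3)
assert (d1, d2) == (125, 250)
```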

(Refer Slide Time: 20:19)

Let us see, with respect to the diagram again, how the thing works. We proceed step by step,
starting from time t equal to 0, and we see how the two electronic switches S1 and S2 are
actually controlled, right.

Initially, before the conversion starts (t less than 0), S1 is set to ground and S2 is closed,
which means the capacitor is totally discharged and the output voltage V0 out here is at the 0
level. At time t equal to 0, the conversion starts. What do we do? We open S2; the S2 switch
is open, we disconnect it, so now the circuit works as an integrator. And S1 is set to Vin; we
connect this input to Vin out here. And there is a control logic and a counter; the counter
will count up to a certain count value, which measures this T-INT.

So, we allow this to happen till T-INT elapses; in time T-INT, the voltage will rise up to a
level which is proportional to my input voltage Vin, right. The output of the integrator will
actually be (Vin × T-INT)/RC, which is proportional to Vin, because T-INT is constant and R
and C are also constant. At this point, at time T-INT, we do two things. We set the switch to
the reference voltage, which has a negative value; Vref is negative, we apply −Vref.

So, the output voltage of the integrator will start de-integrating, will start reducing, and
this reduction will be carried out with a slope of magnitude Vref/RC. The comparator out here
checks when the output crosses 0. In the meantime, the counter is reset to 0 and starts
counting. So, as soon as the comparator output indicates the zero crossing, the current value
of the counter will be proportional to T-DE-INT. And this count value is the required digital
output D. That is how the dual slope AD Converter basically works, right.

(Refer Slide Time: 23:21)

Now, just one last thing here. Here I am showing what happens if the input voltage is changed.
In the first part we use a constant time, which we call T-INT, right; the difference is that
the slope of the curve will be different, which means the final value at the output of the
integrator will be different. But in the second part we apply a constant reference voltage at
the input, so the slope will be the same. So, the time it takes to cross 0 will be different:
either this, or this, or this. And if you use a counter to count how long it takes to cross 0,
that count D will be proportional to the applied input voltage. So, I have an Analog to
Digital converter: starting with Vin, I have obtained a corresponding proportional digital
output, right. So, this is how this dual slope AD Converter works.

So, with this, we come to the end of this lecture. If you recall, in this lecture we looked at
two different methods of AD Conversion: one was the successive approximation type, which is
basically a classic example of implementing binary search in hardware, and the second was the
Dual-Slope AD converter. Now, let me tell you why, as I mentioned, Dual-Slope is more
accurate: well, in an integrator, in an op-amp circuit, there can be errors in the values of
the resistances and capacitances; you cannot get a component with precisely the value you
want.

But even if there are variations, you are doing the conversion in two phases, integration and
then de-integration, using the same components, so the tolerances in the component values
cancel out. The final result is independent of the variation in the component values; that is
why we say that the Dual-Slope method is more accurate.

So, with this, we come to the end of this lecture.

Thank you.

Switching Circuits and Logic Design
Prof. Indranil Sengupta
Department of Computer Science and Engineering
Indian Institute of Technology, Kharagpur

Lecture - 52
Asynchronous Sequential Circuits (Part I)

So, in this lecture, we start our discussion on Asynchronous Sequential Circuits. You see, the
kind of sequential circuits that we have discussed so far are basically synchronous in nature.
What is meant by synchronous? There is a clock; all operations in the circuit are carried out
in synchronism with the clock. When I say in synchronism, it means that there are flip flops
which store the state of the circuit (the machine), and with the clock these state changes
occur in synchronism. But in an asynchronous circuit, there is no concept of flip flops or a
clock; everything happens with respect to the delays of the circuit elements, gates, etc.,
right.

So, naturally speaking, if we have this kind of a scenario, where there is no clock to
synchronize, the design of such circuits is much more difficult in general. Let us look at
some of the aspects of asynchronous sequential circuits, fine.

(Refer Slide Time: 01:31)

So, as I have said, asynchronous circuits in general do not rely on a clock. What do they rely
on? They exploit the delays of the gates and other circuit elements. Let us take an example.
Well, one asynchronous circuit element you have already seen, without explicitly knowing about
it: think of a 1-bit latch. A 1-bit latch you can implement using cross-coupled NAND gates or
cross-coupled NOR gates; both ways you can implement it.

While we are discussing flip flops, we saw this kind of 1 bit latch designs. Now, you see
this circuit I means this itself is a sequential circuit, because it can store some data. So,
once I store some data 0 and 1, so, as long as I am applying 1 1 here, it remembers this
value. Because 1 and 1 comes 1 1 is 0, it remains 0, 0 and 1, it is 1; it remains 1. So, it
can memorize something which means it is a sequential circuit. But there are no flip
flops, only gates. The way this 0 and 1 are remembered is based on the delay of these
gates. So, after some delay, this is computed again 1 is generated, after some delays this
0 is generated. So, this is a classic example of an asynchronous circuit where operation is
determined by the delays of the circuit element, here gates. Let us move on. But as it said
design of asynchronous circuit is difficult.
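The memory behaviour of the cross-coupled NAND latch can be seen in a small sketch, where the gate delays are mimicked by repeated relaxation steps (the function names are illustrative):

```python
def nand(a, b):
    return 1 - (a & b)

def latch(s_bar, r_bar, q=0, q_bar=1, steps=4):
    """Cross-coupled NAND latch: iterate both gates until the outputs settle."""
    for _ in range(steps):
        # each gate recomputes from the other gate's previous output
        q, q_bar = nand(s_bar, q_bar), nand(r_bar, q)
    return q, q_bar

q, qb = latch(0, 1)           # S'=0: set the latch
print(q, qb)                  # -> 1 0
q, qb = latch(1, 1, q, qb)    # S'=R'=1: hold; the latch remembers the 1
print(q, qb)                  # -> 1 0
q, qb = latch(1, 0, q, qb)    # R'=0: reset
print(q, qb)                  # -> 0 1
```

With both inputs held at 1 1, the outputs simply keep recomputing their own values, which is exactly the "memory" described above.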

(Refer Slide Time: 03:43)

So, in general, when we design large systems, they are synchronous in nature. But sometimes, as part of a large synchronous system, smaller subsystems can be asynchronous, just like the cross-coupled latches inside a flip-flop. The reason is that in general asynchronous circuits run faster, because they do not have to wait for the clock to come. So, whenever the input changes, depending on the gate delays, the output can be computed immediately; this is what is done.

So, in a large system like this, as I said, certain smaller subsystems can be made to operate asynchronously. And if you look at the general structure of an asynchronous circuit, there are two differences you will find in general. First, there are delay elements in place of the flip-flops.

So, we will come to this. Second, the combination of the signals on the primary inputs and the delay outputs determines the state: just like in a synchronous sequential circuit the flip-flops determine the state, here the primary inputs as well as the delay outputs together determine the state. This is called the total state.

(Refer Slide Time: 05:13)

Let us look into this; let us look at a structure diagram first, like this. The diagram you see on the left is very much similar to what we have seen for a synchronous sequential circuit. But the difference is that in place of these delay elements you see out here, there you had flip-flops, and the flip-flops were triggered by a common clock signal. We said for synchronous circuits that whatever you are storing in the flip-flops determines the state of the machine.

But an asynchronous circuit does not have any separate flip-flops; everything is determined by the gates. The delays that we are talking about here can simply be gate delays; a delay element can be just a NAND gate, say, just a gate. The inputs are coming in, and the inputs are also going into some gates. So, there are such delay elements everywhere in the circuit. So, when you talk about the total state I mentioned, it is not only the primary inputs but also the outputs of the delay elements: the total set of inputs being applied to the combinational circuit determines the state of the system. Now, there are a few terminologies; let us look at them.

(Refer Slide Time: 06:53)

There is something called the input state. Let us say there are l input variables, X1, X2 up to Xl. So, there can be 2^l possible combinations; they determine what is called the input state. So, some input is applied to the combinational circuit. Then you have some internal state, which is sometimes also referred to as the secondary state. This refers to the combination of the outputs of the delay elements, the small y1 to small yk. There are k delay elements here, as I am showing schematically. So, there can be 2^k such internal states. And the total states, when you talk about them, number 2^l multiplied by 2^k, that is, 2^(l+k).

Now, y1 to yk are sometimes also called the secondary or internal variables. And the signals that are fed to the inputs of the delay elements (gates, say, as I gave as an example) are denoted by capital Y1 to capital Yk; these are referred to as the excitation variables. Now, the point to note here is that for an asynchronous sequential circuit there are no clocks; there is no signal coming from outside that tells you when this capital Y1 will be copied to small y1.
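As a quick sanity check on these counts, here is a tiny enumeration, with l and k chosen purely for illustration:

```python
from itertools import product

l, k = 2, 3  # number of primary inputs and delay elements (illustrative)

input_states = list(product((0, 1), repeat=l))      # combinations of X1..Xl
internal_states = list(product((0, 1), repeat=k))   # combinations of y1..yk
total_states = list(product(input_states, internal_states))

print(len(input_states))     # -> 4   (2^l)
print(len(internal_states))  # -> 8   (2^k)
print(len(total_states))     # -> 32  (2^(l+k))
```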

(Refer Slide Time: 08:35)

For a synchronous circuit the clock did that. But here this will happen after the delay of the gate; I showed a gate here, let us take that same example. After this gate delay, the copy will happen. So, when some input is applied, the circuit will go through some transition region, some temporary states, before it stabilizes into y1 to yk. This is where the concept of a stable state comes into the picture.

(Refer Slide Time: 09:13)

So, let us go into the definition. We define something called a stable state. For a given input combination, we say that the circuit is in a stable state when the capital Y's have all been copied to the small y's, for all the secondary state variables. Because, you see, for each of the delay elements, the input was capital Yi and the output was small yi.

So, it says that when all these excitation values that you are computing based on a given input have been moved to the yi's, you say that the circuit is now in a stable state. But temporarily, when there is a change in the input, the excitation variables (the capital Yi's) change, while those changes have not yet been reflected in the small yi's because of the delays; that we refer to as an unstable state, because the circuit may pass through unstable states before stabilizing.

And the next point is, when these capital Yi's are copied into the yi's, we say that we have reached the next stable state. So, basically, transitions take place from one stable state to another whenever the input changes. In a synchronous sequential circuit, state changes occur in synchronism with the clock; but in an asynchronous circuit, the state changes occur whenever the inputs change. For example, think of that 1-bit latch: only when I apply 0 1 or 1 0 at the inputs will the output of the latch change. When you apply 1 1, that is the stable state; no change, right.
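The stability condition can be stated compactly: for the current total state, compute the excitations Y1..Yk and compare them with y1..yk. A small sketch, where the excitation logic itself is made up purely for illustration:

```python
def is_stable(y, excitation, x):
    """A total state (x, y) is stable iff Y_i == y_i for every delay element."""
    return excitation(x, y) == y

# Hypothetical excitation logic for k = 2: Y1 = x OR y2, Y2 = x AND y1
def excitation(x, y):
    y1, y2 = y
    return (x | y2, x & y1)

print(is_stable((1, 1), excitation, x=1))  # -> True  (Y = (1, 1) = y)
print(is_stable((1, 0), excitation, x=0))  # -> False (Y = (0, 0) != y)
```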

(Refer Slide Time: 11:45)

Such an asynchronous circuit has something called the fundamental mode of operation. In the fundamental mode of operation, what we assume is that when we have made some change in the input variables, we must be very careful: no other changes in the input values are permissible before the circuit enters a stable state. Because here there is no concept of a clock; everything happens depending on the delays of the gates.

So, whenever there is a change in the input, you should allow the circuit to stabilize before you apply the next input; you should not change the inputs before the circuit has stabilized. And there are 2 different kinds of changes that you can talk about: one input changing at a time, called Single Input Change, or SIC in short; or, in general, Multiple Input Change, or MIC. These are the 2 kinds of input changes we talk about under the fundamental mode of operation.

(Refer Slide Time: 13:07)

Now, in asynchronous circuits, the most important thing that you need to worry about is something called a hazard. Hazards are also sometimes called glitches. Let me just explain with the help of a general example. Suppose I have a circuit with some inputs. I apply some input I1, and then I change it to some other input I2. Suppose there is an output; for I1 the output is supposed to be 0, and for I2 also the output is supposed to be 0. So, it is supposed to be a 0 to 0 transition, meaning no change.

But actually, because of the delays of the gates, what you may see is that the output temporarily goes to 1 before stabilizing back to 0. Or, if it was 1 and remains 1, it may temporarily go to 0. This is what is called a glitch, and what I mentioned here is sometimes also referred to as a static hazard.

Let us take another example, where the output was 0 and is going to 1; for I1 it is 0, for I2 it is 1. But instead of making a clean transition, it may so happen, because of the delays, that the output makes a temporary extra transition before settling at 1. Similarly, for 1 to 0, instead of a clean transition it may make extra transitions. These are also glitches; this is called a dynamic hazard.
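These definitions can be made concrete by classifying an observed output waveform against its intended start and end values. A small illustrative sketch:

```python
def classify(waveform):
    """Classify a hazard from an output waveform (list of 0/1 samples).

    The first and last values are the intended levels; any extra
    transitions in between are glitches.
    """
    # collapse consecutive duplicates to count actual transitions
    levels = [waveform[0]]
    for v in waveform[1:]:
        if v != levels[-1]:
            levels.append(v)
    start, end, changes = levels[0], levels[-1], len(levels) - 1
    if start == end:
        return "clean" if changes == 0 else f"static-{start} hazard"
    return "clean" if changes == 1 else "dynamic hazard"

print(classify([1, 1, 0, 1, 1]))  # -> static-1 hazard (1 to 1 with a dip)
print(classify([0, 0, 1, 0, 1]))  # -> dynamic hazard (0 to 1 with a bounce)
print(classify([0, 0, 1, 1]))     # -> clean
```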

So, in this kind of asynchronous circuit design, we need to worry about these hazards. The reason is that the output of one circuit may be going to the input of another circuit. Suppose I have a circuit C1 here whose output goes to the input of another circuit C2. Now, if there is a glitch like this here, because of this glitch C2 can start malfunctioning and the output may be wrong. So, it is always good to avoid such hazards when you are designing asynchronous circuits. There are 2 types of hazards we normally talk about.

(Refer Slide Time: 16:03)

One is the kind in the example I gave; this is called a logic hazard. Logic hazards are caused by changes in the circuit signals: because of the delays in the gates, the changes take some time; they are non-instantaneous.

And in general there can be function hazards, which do not depend so much on the delays of the gates, but on the functional specification itself. But hazards in general, as I said, can result in erroneous behavior; the glitches may be fed to some other circuit, and that can lead to errors.

(Refer Slide Time: 16:45)

Let us now take an example. Let us look into the design of a single input change (SIC) hazard free circuit. Take this function T, given in sum of products, sum of minterms representation: 2, 3, 5, 7. In the Karnaugh map the 4 minterms are shown here: 2 is 010, and similarly this is 3, this is 5, and this is 7.

Now, suppose I minimize this function like this; the 2 cubes are shown by solid lines, and the minimized form of the expression is x'y + xz. So, I can implement it like this: the first gate implements x'y (I feed x' and y into the first gate), and the second gate implements xz (I apply x and z); their outputs go to an OR gate. Now let us take a scenario: I have applied y equal to 1 and z equal to 1, these are fixed, x was 0, and I make a single input change, making x equal to 1. So, when x changes from 0 to 1, x' changes from 1 to 0. Now, you see, the delays of these 2 gates can be different, right?

Suppose the delay of gate G2 is more than that of G1; then the transition at G1 occurs first, and G1's output becomes 0 first. But G2 is an AND gate with a larger delay, so its output will become 1 only after some time. So, there will be a small period when both G1 and G2 are 0. That will result in the static hazard, as I have shown here. This will happen if the delay of G2 is greater than the delay of G1. I suggest you draw the timing diagram and verify that a glitch like this actually happens.
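This scenario can be reproduced with a tiny time-stepped simulation. The sketch below models each gate output as lagging its inputs by an integer number of time steps (the delay values are chosen arbitrarily for illustration), with y = z = 1 and x stepping from 0 to 1 at t = 0:

```python
def simulate(d1, d2, steps=10):
    """Simulate f = x'y + xz with gate delays d1 (G1 = x'y) and d2 (G2 = xz).

    y = z = 1 throughout; x switches 0 -> 1 at t = 0.  Each gate output at
    time t is its function evaluated on inputs from time t - delay.
    """
    x = lambda t: 0 if t < 0 else 1
    g1 = lambda t: (1 - x(t - d1)) & 1   # x' AND y, with y = 1
    g2 = lambda t: x(t - d2) & 1         # x  AND z, with z = 1
    return [g1(t) | g2(t) for t in range(-2, steps)]

print(simulate(d1=1, d2=3))  # dips to 0 while G1 has fallen but G2 not yet risen
print(simulate(d1=3, d2=1))  # stays at 1: no glitch when G2 is the faster gate
```

The first run shows exactly the static-1 glitch described above; the second shows that with the opposite delay ordering the output stays clean.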

(Refer Slide Time: 20:01)

Now, coming back to the Karnaugh map: this glitch happens because we are changing a single variable, here x, between 2 adjacent minterms, x'yz and xyz. You see, looking at the map, these 2 minterms lie in 2 different cubes; no single cube contains both. And whenever x changes, temporarily you make this transition; the arrow shows this temporary transition taking place.

Now, in order to avoid this hazard, this kind of a glitch, what do you do? You look at such adjacent pairs, and you include one more cube, covering both, in your implementation. Well, this may not be minimal, you are adding one more gate, but you are avoiding the hazard.

So, what does adding this cube mean? yz. So, you add this third gate here. And what do you get? You can check that now, even if there are delays in these gates, the output will be perfect; no glitch. This is a hazard free circuit.
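Extending the earlier delay sketch, adding the redundant consensus term yz keeps one OR input high throughout the x transition, whatever the relative gate delays (the delay values are again arbitrary):

```python
def simulate_fixed(d1, d2, d3, steps=10):
    """f = x'y + xz + yz with y = z = 1; x switches 0 -> 1 at t = 0."""
    x = lambda t: 0 if t < 0 else 1
    g1 = lambda t: (1 - x(t - d1)) & 1   # x'y
    g2 = lambda t: x(t - d2) & 1         # xz
    g3 = lambda t: 1                     # yz = 1 regardless of x and delay d3
    return [g1(t) | g2(t) | g3(t) for t in range(-2, steps)]

# With the covering cube, every delay assignment gives a clean output:
print(simulate_fixed(d1=1, d2=3, d3=2))  # all 1s, no dip
```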

(Refer Slide Time: 21:35)

Talking about the static logic hazard: as I said, in the example, a transition between a pair of adjacent input combinations which correspond to identical output values (in the example it was 1 and remained 1) may temporarily generate a spurious output value, a glitch. And, as I just now said, in terms of the Karnaugh map this occurs when no cube in the map contains both combinations, which was shown by the dotted arrow. The solution is to cover both combinations with a cube, which is what we actually did. And in the final solution, there is no hazard.

(Refer Slide Time: 22:23)

So, let us continue with the discussion. There is something called a transition cube and a required cube. The transition cube is defined with respect to 2 minterms, m1 and m2. It refers to the set of all minterms that can be reached on the way from m1 to m2. Let us take an example.

Suppose m1 is 010 and m2 is 100. So, I want to go from 010 to 100, but I can change only one input at a time. So, in addition to 010 and 100 themselves, the minterms 000 and 110 can appear. You see, to change 010 to 100, one possibility is: from 010 I can first go to 000, and then to 100. Or I can start with 010, first go to 110, and then go to 100; single input transitions each time.

So, I try to find all such possible additional minterms; they will be part of the transition cube. And a required cube is a special kind of transition cube that must be included in some product term to get rid of a static-1 logic hazard (static-1 means the output was 1 and remains 1).
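The transition cube [m1, m2] is simply the smallest cube containing both endpoints: positions where m1 and m2 agree stay fixed, and the differing positions range over all values. A small sketch, with minterms written as bit strings:

```python
from itertools import product

def transition_cube(m1, m2):
    """All minterms reachable while going from m1 to m2 one bit at a time."""
    free = [i for i in range(len(m1)) if m1[i] != m2[i]]  # differing positions
    cube = []
    for bits in product("01", repeat=len(free)):
        m = list(m1)
        for i, b in zip(free, bits):
            m[i] = b
        cube.append("".join(m))
    return sorted(cube)

print(transition_cube("010", "100"))  # -> ['000', '010', '100', '110']
```

This reproduces the example above: going from 010 to 100 may pass through 000 or 110.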

(Refer Slide Time: 24:15)

Now, here is a required cube like the one I have shown, for the transition 011 to 101; this cell is 011, and this is 111. This is just an example of a required cube.

(Refer Slide Time: 24:37)

And the last thing: static-0 and dynamic hazards. Static-0 means the output was 0 and remains 0. Now, in a sum of products realization of a function, we do not keep track of the minterms for which the function output is 0; no gates are generated for them. That is why for static-0 hazards we need not worry: there is no product-term cube that can momentarily pull the output to 1 during a 0 to 0 transition, because the cubes cover only 1's.

Only in a very hypothetical case, if we have a gate where I apply xi as one of the inputs and also xi' as another input, can this kind of hazard arise; but there is no reason why I should apply inputs like this. The only problematic situation is when both xi and xi' are input literals of one of the cubes. But normally we do not do that, there is no reason to do that; that is why such hazards can be avoided.

(Refer Slide Time: 25:55)

And during a 0 to 1 transition, I said that there can be a dynamic hazard, a temporary extra change; this is called a dynamic 0-to-1 logic hazard. The dynamic 1-to-0 hazard, which I also mentioned earlier, is this kind of a transition. And for the single input change scenario we can never have dynamic hazards; this is the point to note. When only one input changes at a time, you can only have static hazards.

So, with this we come to the end of this lecture. We have tried to give you a very basic overview of the concept of hazards in asynchronous circuits, and of some of the simpler kinds of hazards and how they can be avoided. We shall be continuing our discussion in the next lecture.

Thank you.

Switching Circuits and Logic Design
Prof. Indranil Sengupta
Department of Computer Science and Engineering
Indian Institute of Technology, Kharagpur

Lecture - 53
Asynchronous Sequential Circuits (Part II)

So, we continue with our discussion on Asynchronous Sequential Circuits and Hazards. In our last lecture we discussed the different kinds of hazards, and particularly for SIC hazards we saw how to avoid them with respect to the Karnaugh map. So, we continue with that discussion in this lecture. This is Asynchronous Sequential Circuits, part II.

(Refer Slide Time: 00:39)

Now, here we start by talking about multiple input change kinds of situations, and how such hazards can be handled. Now, one thing let me tell you here: when multiple inputs are changing, then in addition to logic hazards you can also have function hazards. Suppose I have realized a function; there are some inputs, and more than one input can change.

Now, when more than one input can change, the order in which the inputs finally change will depend on the delays and their variations. So, there can be multiple scenarios. So, if you look at the functional level, when multiple inputs are changing, the function value itself can change more than once, which appears as a hazard with respect to the function. So, even if you forget about the gate delays, if the input changes are not happening simultaneously, such hazard situations can occur. Like in this diagram, I have shown that for multiple input changes there can be a hazard scenario. Now, we talk about a method for avoiding hazards in such an MIC scenario, when multiple inputs are changing at the same time.

(Refer Slide Time: 02:35)

Let us see. Several inputs can change values monotonically; monotonically means each input changes at most once, but they may change one after the other. Because of this, as I have just now said, the function value may change more than once, which may lead to a function hazard. Here in this Karnaugh map I am showing a 4-variable function; the true minterms are shown, and some of the changes are shown here. You see, in this realization there are some cubes and their implementation. You see here, the value of x is changing from 0 to 1.

The value of z is also changing from 0 to 1. So, 2 inputs are changing, and the transitions are shown. Now here again, looking at the unequal delays of the gates (I suggest you work out under what scenario it can happen), there can be a glitch occurring in the output, which is a static-1 logic hazard. But in this case there will be no function hazard; a function hazard would occur in the case of the dotted line, which we are not showing here. We are only showing a static-1 logic hazard, where the output was 1 and remains 1, but in between there is a glitch.

(Refer Slide Time: 04:31)

So, how do we avoid such a static-1 logic hazard? Here in the Karnaugh map it is represented by this arrow; both x and z are changing from 0 to 1. So, what we do is just like the single input change case. You should avoid any product term that contains both xi and xi' as inputs; this is the trivial case I mentioned earlier. And what we do here is cover the solid arrow by a cube to get rid of the hazard.

(Refer Slide Time: 05:21)

You see, here we have used a cube like this; this is an additional cube we have added. And what does this cube indicate? It indicates w y, this last gate. So, if you add this one additional gate, where in the earlier case a static-1 hazard was coming, you will now see a clean output; no glitch.

So, for a static-1 hazard this kind of fix is needed, but for static-0 hazards you need not worry, just like in the single input change case, because a static-0 hazard will occur only in that hypothetical scenario, which normally you will never create.

(Refer Slide Time: 06:13)

Now, talking about dynamic hazards under multiple input changes: here again we show a function in a Karnaugh map, and this arrow corresponds to the transition. The input was 1 1 1 0, and it is changing to 0 1 0 1. Now, for such a dynamic transition to be hazard free, there are certain necessary conditions to satisfy.

See, the function is 0 there, because this was not a true minterm; so, if you make this transition, for 1 1 1 0 the output should be 0. So, you look at the sub-transitions, and you have to ensure that all these sub-transitions are hazard free. Look at this implementation: here you have w going from 1 to 0, and z going from 0 to 1; w and z are both changing. Now, for this, we look at the sub-transitions. As I said, to go from 1 1 1 0 to 0 1 0 1, you can, for example, go from 1 1 1 0 to 1 1 1 1, then to 0 1 1 1, and then to 0 1 0 1; this is one possibility. Or you can go through 0 1 1 0 first, and then on to 0 1 0 1, one input changing at a time.

So, there are possible intermediate temporary states you can go through; those are the sub-transitions I am talking about. So, what we are saying is that to make an MIC transition dynamic hazard free, you identify all these sub-transitions, like I have identified here: one sub-transition can be this, another sub-transition can be this, each leading to a temporary state. And all these sub-transitions must be hazard free separately; they must correspond to required cubes. So, if you ensure this, then your implementation will be dynamic hazard free.
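The sub-transitions are just the monotone single-input-change paths from the start minterm to the end minterm. A small sketch enumerating them, using the simpler 3-variable example from the last lecture (minterms as bit strings):

```python
def monotone_paths(m1, m2):
    """All orders in which the differing bits can flip, one at a time."""
    if m1 == m2:
        return [[m2]]
    paths = []
    for i in range(len(m1)):
        if m1[i] != m2[i]:
            nxt = m1[:i] + m2[i] + m1[i + 1:]   # flip bit i toward m2
            for tail in monotone_paths(nxt, m2):
                paths.append([m1] + tail)
    return paths

for p in sorted(monotone_paths("010", "100")):
    print(" -> ".join(p))
# -> 010 -> 000 -> 100
# -> 010 -> 110 -> 100
```

For an MIC transition with more changing inputs, the same enumeration yields every sub-transition that must be checked for hazard freedom.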

(Refer Slide Time: 09:07)

As I said, this 1 1 1 0 to 0 1 0 1 transition passes through different sub-transitions; when a product term cuts across such a transition improperly, this is sometimes called an illegal intersection. And illegal intersections you avoid, as I just said, by looking at the sub-transitions and adding additional cubes; I will show this in an example to make it easier.

(Refer Slide Time: 09:43)

So, here, as you can see, we have added the cubes. This was your original implementation, which was resulting in a dynamic hazard in the output. Now, what am I doing here? Instead of this w z, I have modified the cube. With this modification, you see, I am getting a clean transition in the output. So, the idea is this: you must prevent some of the intermediate transitions from happening; if you can do this, then the dynamic hazard in the output will be avoided.

Well, I am not going into too much detail of this, just trying to give you a basic idea of how it is done. But one thing I think you are appreciating now: designing asynchronous circuits and making them hazard free is much more complicated. You have to look at all possible scenarios for hazards, and then you have to add additional gates. You see, I am looking at Karnaugh maps here; but think of larger functions. How do you handle larger functions? Very difficult, right.

(Refer Slide Time: 11:09)

So, that is the trouble here. To summarize: for a multiple input change transition, if we have a 1 to 1 transition, the cube must be completely covered by a product term, as we had said. A 0 to 0 transition, in the practical case, will never cause a hazard, so forget it. But for 1 to 0 or 0 to 1 transitions, as in the last example we took, we have to ensure that every product term that intersects with the transition must also contain its starting point (for the 1 to 0 case) or its ending point (for the 0 to 1 case). Then such hazards can be avoided.

(Refer Slide Time: 11:55)

So, to obtain a hazard free implementation of a function f (let us call the hazard free implementation H), the condition is that each required cube must be contained in some implicant of H, and no implicant of H must illegally intersect the dynamic transitions. Such an implicant, which does not illegally intersect, is called a dynamic hazard free implicant; in short I call it a DHF implicant. A DHF prime implicant, just like the definition of a prime implicant, is a DHF implicant not properly contained within any other larger DHF implicant.

So, here we require that we make use of DHF prime implicants only, and we need to cover all the required cubes, which is quite similar to the Quine-McCluskey tabular method of minimization. We are giving one example just to illustrate the idea, again without going into too much detail.

(Refer Slide Time: 13:29)

So, let us take an example. Suppose we have a function where the true minterms are shown and also the required cubes; the required cubes are all shown here. And from the required cubes you identify the privileged cubes: like, there was a transition, and you include that transition also in the cube, making it a larger cube.

Then there was a transition here, and a transition here; you include everything. The transition itself can be smaller, but to include everything you make a larger cube containing all the transitions inside it. And this last transition was already inside one, so there is no problem; these are the privileged cubes. And then the prime implicants with no illegal intersections: you can compute the prime implicants from the original true minterms. You see, 3 prime implicants are shown; they have no illegal intersections with these transitions, as per the definition.

Now, if you look at a cover of this, the prime implicant x z has an illegal intersection: this is x z, and it has an illegal intersection with one of the privileged cubes. Such illegal intersections are not allowed. So, if we have such an illegal intersection, what do you do? You reduce the size of the cube: this x z you shrink to x y' z, so that the intersection with the privileged cube is not there anymore. This is the basic idea.

(Refer Slide Time: 15:51)

Now, once you do this, you have the DHF prime implicants; there are 5 in fact, you can check: w y', w y, x y z', w' y z and w' x' y. And these are the coverings; these are the cubes, and the required cubes are covered by the implicants like this.

Now, you see, all of them are essential; in these columns all of the check marks are single. So, all these product terms must be there in the hazard free sum of products implementation. So, all these product terms: w (this large cube was w), then y z (y z was this one), x' y (x' and y, this was x' y), and finally x y' z, the one which was added separately. These are the cubes, and you need to include all of them. If you implement it like this, it will be a hazard free realization; this is ensured.

(Refer Slide Time: 17:41)

And there is another thing, something called hazard non-increasing transformations. Suppose you have a 2-level hazard free realization. There are some rules, Boolean algebra or switching algebra rules, such that if you apply them, you can convert this 2-level realization to a multi-level realization with the guarantee that the multi-level realization will also be hazard free.

Without any proof, I am just mentioning some of the transformations that can be used. The first one is the associative law, (x + y) + z = x + (y + z), and also its dual. You can apply DeMorgan's theorem and its dual. The distributive law: x y + x z = x (y + z), taking x common. The absorption law: x + x y = x; and you can also use the law x + x' y = x + y. And additionally, you can insert NOT gates at the primary inputs and the circuit outputs if you want. These all ensure that if your original circuit was hazard free, then after applying these rules it will remain hazard free.
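Each of these identities (the logical equivalences themselves, not the hazard guarantee) is easy to confirm exhaustively over all input combinations; a small sketch:

```python
from itertools import product

def equivalent(f, g, nvars):
    """Check two Boolean expressions agree on every input combination."""
    return all(f(*v) == g(*v) for v in product((0, 1), repeat=nvars))

NOT = lambda a: 1 - a

# associative law
assert equivalent(lambda x, y, z: (x | y) | z, lambda x, y, z: x | (y | z), 3)
# distributive law
assert equivalent(lambda x, y, z: (x & y) | (x & z), lambda x, y, z: x & (y | z), 3)
# absorption laws
assert equivalent(lambda x, y: x | (x & y), lambda x, y: x, 2)
assert equivalent(lambda x, y: x | (NOT(x) & y), lambda x, y: x | y, 2)
print("all identities hold")
```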

(Refer Slide Time: 19:07)

Let us take an example. Suppose I have a function which was implemented in sum of products form like this: there are 5 product terms, and these are the 5 AND gates implementing the product terms. And this implementation is free from the dynamic hazard for 1 1 1 0 to 0 1 0 1; this is the same example I took earlier. So, in the output there will be a clean transition. Now I am saying that, well, I want a multi-level realization; the objective may be to use a smaller number of gates. So, I can do some factoring: I can apply the distributive law and take y common from these 3 terms, and write the expression like this.

So, I can use an OR gate in the first level to implement this sum term, and then AND it with y; the other 2 terms remain as they are. So, what the rule says is that if I apply these rules and modify this realization into a multi-level realization, the result is also guaranteed to be free from the dynamic hazard. These are some circuit design techniques or rules available to the designer. So, once a hazard free realization is obtained, it can be extended or reused for multi-level circuits.

So, with this we come to the end of our discussion on hazards and asynchronous sequential circuits. Again, I am repeating: I have not gone into very much detail of this, and I have not taken too many examples; I have just tried to give you a flavor of the problem. The design of asynchronous circuits, making a circuit hazard free, is quite complicated.

Using the Karnaugh map I have given you some techniques, but again, the Karnaugh map can be used only for up to 4, 5 or 6 variables. What happens when the number of variables is more? The problem becomes really difficult. So, the applications of asynchronous circuits are rather limited, even though there are some interesting benefits like higher speed, lower power consumption and so on; there are some genuine design problems. So, we shall be talking about a higher level sequential circuit description called ASM in our next lecture, which is also quite useful in designing complex or higher level sequential systems.

Thank you.

Switching Circuits and Logic Design
Prof. Indranil Sengupta
Department of Computer Science and Engineering
Indian Institute of Technology, Kharagpur

Lecture - 54
Algorithmic State Machine (ASM) Chart

In this lecture, we shall be talking about something called algorithmic state machines, or ASM charts as they are sometimes called, which are a very useful tool for designing sequential circuits, particularly complex circuits. So, the lecture title is Algorithmic State Machine, or ASM Chart.

(Refer Slide Time: 00:41)

Let us see what an ASM chart is. An ASM chart is some kind of a tool, you can say; it is a pictorial tool, just like a flowchart or the state transition diagram that we use to represent finite state machines. It is something similar, but it captures a lot of information with respect to the FSM, as we will see. It is quite useful for specifying the detailed logic of sequential circuits, and as I said, it is somewhat similar to a flowchart.

Now, it specifies two things together: the sequence of events that is reflected in the sequential circuit behavior, and also the timing relationship between the various states of the finite state machine. Now, in a finite state machine, you recall, there are two kinds of pictorial elements: the states, which we represent by circles, and the transitions, represented by arrows. Along a transition we specify the inputs and the expected outputs. Now, in an ASM chart there are three kinds of elements: the state box, the decision box and the conditional box. We shall see these things; let us look at the function of each.

(Refer Slide Time: 02:25)

First, the state box. The state box is a rectangular box, as shown in this example, which
represents a state of the machine. If you think of an FSM where states S1, S2, and so on
are drawn as circles, with arrows indicating transitions from one state to another, then
each of these states is represented in an ASM chart by a state box.

In the state box you have a state name. You can either write the name inside, followed by
a slash (like S1/) and, optionally, an output list; or you can show the name outside the
box on the left. There are several conventions, and you can use any one of them. The state
box can also contain register operations, for example to initialize the values of
registers. And each state box carries a state assignment.

When you specify a finite state machine, you only mention the states S1, S2, S3, and so
on; but when you synthesize the machine, in the process of implementing it in hardware,
you also carry out a state assignment, for example deciding that state S1 will use the
code 00. So, in the state box the state assignment is also specified, typically in the
upper right corner of the box; this is called the state code. We will take an example
later to show how state boxes are created, but this is the information a state box
contains. There can also be an optional output list.

(Refer Slide Time: 04:56)

Then comes the decision box. It is a diamond-shaped box, just like the conditional box in
a flowchart: if the condition is true you follow one path, and if it is false you follow
the other. But it can be more general; instead of two outcomes there can be multiple exit
nodes, depending on how many possible conditions there are. In the simplest case it is a
binary decision, as shown in this diagram: yes or no, 1 or 0. The conditional box is the
last one; it is an oval-shaped box which contains an output list. For example, if I create
a conditional box and mention Z1, Z2 in it, it means that the two outputs Z1 and Z2 must
be set to 1. The input arc of a conditional box must come from an exit path of a decision
box; this is the constraint.

(Refer Slide Time: 06:20)

Let us look at a reasonably complete ASM block and see what it looks like. First of all,
what is an ASM block? An ASM block corresponds to one state of the corresponding finite
state machine. Suppose there is a state S1 in the FSM going to a state S2, or to a state
S3, maybe with a self loop, depending on the input combinations. Every such state is
mapped to an ASM block, which is a structure consisting of one state box corresponding to
that state.

In this diagram the state box is shown here. You see, the state name is given as S1 with
an optional output list Z1, Z2; it says that these are the outputs of the state box. This
list you may or may not give; it is optional. Connected to the state box there can be
decision boxes and conditional boxes leading to so-called exit paths. There can be more
than one exit path, as shown here; in general there can be n exit paths, 1, 2, 3 up to n.
An ASM block has exactly one entry (or entrance) path, but it can have multiple exit
paths.

Now, what kinds of things can it contain? You can have a decision box like this: you check
X1. If X1 is 0 (false), you come here, which means you set Z3 and Z4 to 1, and then check
X2; if X2 is 0, you follow this exit path, and if it is 1, you follow that exit path. If
X1 is true, you can check the condition X3 along this path; if X3 is 0, you set Z5 to 1
and exit through path 3, and if it is 1, you continue with something else. This is how an
ASM block generally looks.

762
(Refer Slide Time: 09:12)

Let us explain the same example. When the system enters state S1 through the entrance
path, the two outputs Z1 and Z2 mentioned in the state box are immediately set to 1. Then
condition X1 is checked. If X1 is 0, you follow that path, and Z3 and Z4 are also set to
1. But if X1 is 1 and X3 is 0, you follow the other path; then Z5 becomes 1 and the
system exits.
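To make the block's semantics concrete, here is a small Python sketch of the behaviour just described. It is a plausible reading of the figure, not a definitive one: the exit-path numbering and the fourth exit are my assumptions.

```python
def asm_block_S1(x1, x2, x3):
    """One pass through the ASM block for state S1: returns the set of
    outputs asserted during this state time and the exit path taken."""
    outputs = {"Z1", "Z2"}           # listed in the state box: asserted on entry
    if x1 == 0:
        outputs |= {"Z3", "Z4"}      # conditional box on the X1 = 0 path
        exit_path = 1 if x2 == 0 else 2   # decision box on X2
    elif x3 == 0:
        outputs |= {"Z5"}            # conditional box on the X1 = 1, X3 = 0 path
        exit_path = 3
    else:
        exit_path = 4                # assumed: the remaining exit path
    return outputs, exit_path

outs, path = asm_block_S1(1, 0, 0)   # X1 = 1 and X3 = 0, as in the lecture
print(sorted(outs), path)            # ['Z1', 'Z2', 'Z5'] 3
```

All the outputs asserted in one block occur in the same clock period; the exit path chooses which ASM block (state) comes next.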

(Refer Slide Time: 10:16)

Now, there are some rules you need to follow when you are creating an ASM chart; you
should remember these. For the same system specification you can have multiple possible
ASM charts, drawn in several different ways, but certain rules must be followed. The first
one says that for every valid combination of input variables there must be exactly one
exit path. For example, suppose there are three input variables X1, X2, X3, with X1 = 0,
X2 = 1, and X3 = 1, that is, the combination 0 1 1. For this combination the ASM block
must always follow the same exit path; there must not be multiple exit paths for it.

And inside the ASM block, internal feedback is not allowed; transitions should only go
from top to bottom. Parallel paths leading to the same exit path are allowed; there can be
multiple paths going to the same exit path. More than one parallel path can be active at
the same time; such things are allowed. And, as shown in the previous diagram, there can
be multiple exit paths. These rules should be followed when you are drawing or creating an
ASM chart.

(Refer Slide Time: 11:58)

Let us take a small example. Here we shall show how an FSM can be converted into an
equivalent ASM chart. We start with a simple FSM description of a sequence detector, which
detects the sequence 1 0 1 in the input stream. Look at this state diagram; the top one is
the starting state. If the input is 1, you move to this state.

If the next input is 0, you move to this state. And if you get a third input 1, you move
back to this state with the output 1; that means you have found the sequence. Now, this
third 1 can also be the start of an overlapping sequence: you have detected a 1 0 1 here,
but it can be the start of the next 1 0 1. So, you move back to this state, and if another
0 and another 1 follow, the second sequence is detected as well. On the other hand, from
here, if the input is 0, you have to start from the beginning, so you go back here. Now,
let us see.

Since we are trying to detect the sequence 1 0 1, we have an FSM where, I am assuming,
there is a single input X and a single output Z. X denotes the serial bit stream, and Z is
the final output, 0 or 1. The output will be 1 only when 1, 0, 1 is detected. There are
three states in this FSM; these will correspond to three ASM blocks. Let us also make the
state assignments. Say this state is S0, this is S1, and this is S2, with the state
assignments 0 0, 0 1, and 1 1 respectively. Now let us show how the ASM chart may be
constructed from this FSM.

(Refer Slide Time: 14:39)

Here some of the symbols are not clearly visible, so I shall point them out. Let us again
talk about the three states: this is S0, this is S1, and this is S2. Let us see them one
by one. This is the ASM block corresponding to state S0; the state encoding 0 0 is also
shown here. There is no output list; the box just shows the state name S0. Look at what
this state does: if the input is 0 it remains in S0, and if the input is 1 it goes to S1;
in both cases the output is 0.

So, you see, if X is 0 on the left side, you set Z equal to 0, because the output is 0,
and you go back to the same state; you remain in S0. But if X is 1, then also Z is 0, and
you take this transition and move to S1. This is your state S0, exactly as in the FSM.
Now let us look at state S1. This is state S1; the state encoding 0 1 is shown here, along
with the state name. In state S1, if the input is 0 it goes to S2, and if it is 1 it
remains in S1. So, if X is 0, you set the output to 0 and move to state S2; if X is 1,
the output is again 0 and you remain in state S1. You are mapping exactly from the FSM.

Now, the last state S2. This is S2, and this is its state encoding, 1 1. In S2, if the
input is 0 you go to S0, and if it is 1 you go to S1, and here the output is set to 1.
Let us see what we have done: if the input is 0, you set Z to 0 and move to state S0; but
if the input is 1, you set Z equal to 1 and move to state S1.

So, you see, from the FSM you have a one-to-one mapping to the corresponding ASM chart.
Now, you may ask why you need the ASM chart at all, since the FSM was good enough. But the
ASM chart is one step forward towards implementing the hardware circuit, because you have
also done the state assignment. From the ASM chart you can normally design the hardware
circuit directly: the state boxes will correspond to flip-flops, and after that you can
add some gates and implement the circuit directly. So, it is really easy to map it to
hardware; that is why you sometimes create the ASM chart from the FSM, and from the ASM
chart you create the hardware.
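The state diagram just described can also be written down directly as a transition table and simulated. Below is a minimal Python sketch (the helper name `detect101` is mine, not from the lecture) of the three-state Mealy machine, including the overlapping behaviour discussed above:

```python
# Mealy FSM for the overlapping "101" sequence detector.
# States: S0 = nothing matched, S1 = seen "1", S2 = seen "10".
# Transition table: (state, input X) -> (next state, output Z)
FSM = {
    ("S0", 0): ("S0", 0), ("S0", 1): ("S1", 0),
    ("S1", 0): ("S2", 0), ("S1", 1): ("S1", 0),
    ("S2", 0): ("S0", 0), ("S2", 1): ("S1", 1),  # "101" found; overlap via S1
}

def detect101(bits):
    """Return the output bit Z produced for each input bit X."""
    state, outputs = "S0", []
    for x in bits:
        state, z = FSM[(state, x)]
        outputs.append(z)
    return outputs

# The input 1 0 1 0 1 contains two overlapping occurrences of 101:
print(detect101([1, 0, 1, 0, 1]))  # [0, 0, 1, 0, 1]
```

Each dictionary entry corresponds to one labelled arrow of the FSM, and each state to one ASM block; the state assignment (S0 = 00, S1 = 01, S2 = 11) enters the picture only when the table is mapped onto flip-flops and gates.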

So, in this lecture we have given you a very brief introduction to the ASM chart. As we
said, the ASM chart is useful for the complete description of a system and is more
powerful than an FSM, because it can also capture the state encoding. Particularly for
more complex systems, for larger FSMs, the ASM chart is often considered a useful tool. If
you are a system designer using these tools for designing complex systems, the FSM is of
course one way, and you have already seen earlier how, starting from an FSM, you can
formally synthesize a sequential circuit; but the ASM is another route, and using the ASM
you can also generate the hardware. Of course, the hardware may not be that efficient, but
for more complex systems it is a fairly simple approach.

So, we have discussed almost all the different aspects that we wanted to cover regarding
digital circuit design and synthesis, particularly at the gate level and the flip-flop
level. We talked about synchronous circuits; we also talked about asynchronous circuits.
In the next few lectures, we shall be discussing something slightly different: once you
design the circuits and fabricate them in the form of chips, there can be some errors in
the design and some defects that occur during the implementation. So, we need to test our
circuits. How to test, and what are the different ways of testing: these are a few things
we shall be discussing during the next set of lectures.

Thank you.

Switching Circuits and Logic Design
Prof. Indranil Sengupta
Department of Computer Science and Engineering
Indian Institute of Technology, Kharagpur

Lecture - 55
Testing of Digital Circuits

So far, in the lectures that you have gone through, we have basically discussed different
ways in which you can design a digital circuit. But today, in this lecture, we will be
discussing a slightly different issue related to digital circuits; namely, once you have
designed a digital circuit, whether it is a circuit built using gates on a breadboard or a
VLSI chip, how do you test it? During the process of manufacturing, fabrication, or
prototyping, many different kinds of errors can creep in. So, a very important step in the
entire design cycle is testing the manufactured hardware. The topic of today's lecture is
Testing of Digital Circuits.

(Refer Slide Time: 01:16)

So, let us first try to motivate ourselves and understand why we need testing. I have just
said that errors arise during the process of manufacturing. When we say manufacturing, it
can be the manufacturing of a chip, or of a circuit on a breadboard where available chips
are interconnected using wires; it can be at different levels. It can also be a printed
circuit board, where many chips are soldered together.

Once you have such manufactured hardware, various kinds of errors can come into the
picture. Here I have highlighted three different kinds of errors, but there can be other
kinds as well. The first is that there may be errors in the design process itself.

When you are designing the circuit, there may be a problem because of a human error, or
maybe because the specification was not correct. Also, we normally use software tools for
design and synthesis; these are called Computer Aided Design or CAD tools.

There can be errors in the tools themselves, because the tools are huge pieces of
software, and testing a huge software system is an enormous task; there can always be some
bugs sitting inside those tools. And finally, during fabrication or packaging there can be
faults. For example, when you connect two points using a wire, there can be a loose
connection, or there can be a short circuit; these kinds of errors.

Because of such error occurrences, it is necessary to test each and every chip or
manufactured device before it can be used. If you talk about modern-day VLSI chips, there
are of the order of billions of transistors inside them. So, you can understand how
complex the circuits are, and the chance of faults creeping into such a complex VLSI chip
is also pretty high; because of that, the problem of testing is important. Now, what are
the basic objectives of testing, the objectives we are trying to meet?

(Refer Slide Time: 04:08)

The point to note is that we use testing to determine the presence of faults, if there are
any. There is a fallacy where someone thinks that we do testing to guarantee that a
circuit is fault free; but the issue is that no amount of testing can give you this
guarantee. For example, you can test and find that your circuit works perfectly at room
temperature, but as soon as you increase the temperature by 5 degrees Celsius the circuit
fails.

So, it is not possible to test the circuit against all possible environmental
circumstances: temperature, humidity, fluctuations in the supply voltage; there are so
many parameters. Thus, "testing guarantees that a circuit is fault free" is a false
statement; no amount of testing can give this guarantee. What testing can do, however, is
increase our confidence in the correct working of the circuit. That is why we use testing.
Along with testing, we use another term called verification; let us briefly talk about the
main differences between testing and verification.

(Refer Slide Time: 05:37)

Verification is about the design: we want to ascertain whether the design is correct or
not, not the actual hardware. A design can be done on a piece of paper; say, a function
minimized using a Karnaugh map is an example of a design. So, verification concentrates on
the design, and it can guarantee the correctness of a design. In contrast, testing, as I
just mentioned, tries to guarantee the correctness of the manufactured hardware (circuits,
chips, anything), but it can never give a 100 percent guarantee.

Verification, because it concerns the design, possibly done on paper, is performed only
once, before you start the actual manufacturing. In contrast, faults can occur in each and
every device; therefore testing must be performed on every manufactured device. This is
one important difference. Verification assures the quality of the design, that the design
is free of errors; testing assures the quality of the devices, that the devices are
probably free of errors.

Again, I repeat, you can never give a 100 percent guarantee. Verification is typically
carried out using mathematical techniques: formal methods, theorem proving, simulation;
these are things we shall not be discussing here. For testing we normally use a two-step
process. Given a circuit with some inputs and outputs, first we find out what inputs we
have to apply for testing; that is called test generation. Then, for every circuit or
chip, we actually apply those inputs and verify whether the outputs come out correctly;
that is called test application.

(Refer Slide Time: 08:05)

Now, the question is when we can do testing. Let us talk about a chip, because the chip is
the most basic of the devices that we use today. So, when do we do the testing? The point
to note is that testing can be carried out at various levels. When the chips are
manufactured, we can do testing at the chip level. When several such chips are integrated
on a printed circuit board, we can carry out testing at the board level.

And when you build a more complex system, there can be several such printed circuit boards
that make up the system; so, when several boards are assembled together, we can finally
test at the system level. But the point to note is that as we move up the hierarchy, as
the system becomes more and more complex, the problem of testing becomes more difficult.
Testing a single chip may be easy; but if I have a system consisting of 1000 chips and it
is not working, testing that thousand-chip assembly is a much more complex problem. This
is something you have to understand.

So, the rule of thumb says that you should detect a fault as early as possible, which
reduces the cost of testing. There is an empirical rule that as you move from one level of
the hierarchy to the next (chip level, board level, system level), it becomes 10 times
more expensive to test the device, whether you go from chip to board or from board to
system. So, it is always good to test circuits at the chip level, as soon as they are
manufactured.

(Refer Slide Time: 10:11)

Now, talking about faults: how many faults may be there when you are doing testing? There
are some important points to note. In a circuit, the number of physical defects can be
enormously large. When we say physical defects, there can be a short circuit, or there can
be an open circuit.

A transistor may not be working properly; the gain of the transistor may vary. These are,
you could say, continuous parameters: the gain of a transistor can vary, but by how much?
You cannot quantify it; it can be 1.1 times, 1.15 times, 1.155 times; there are infinitely
many possibilities.

So, you really cannot count how many such physical failures there are; they can be quite
enormous. How, then, do you judge the quality of a test? The accepted solution in the
industry is to use something called logical fault models, where you abstract the physical
defects. You do not think about the actual physical defects, such as a short circuit, an
open circuit, or a change in the gain of a transistor. Rather, at a much higher level, you
abstract the failures and think about faults occurring at that level. For example, for a
gate you may assume that one of its inputs is permanently at 0; that is one kind of fault
model, just as an example.

The advantage is that if you assume such a fault model, then it is easy to count the total
number of faults, and you can also analyze how good your testing process is; that is, how
many of those faults are getting detected, how many faults are getting tested.

(Refer Slide Time: 12:23)

There is one parameter we use to assess the quality of testing; it is called fault
coverage. Fault coverage is determined by two quantities. The implicit assumption here is
that we already have a fault model, so we already have a mechanism using which we can
count how many faults are there in the circuit. The denominator is the total number of
faults, and the numerator is the number of those faults that the testing was able to
detect.

The ratio of these two is defined as the fault coverage: fault coverage = (number of
faults detected by T) / (total number of faults). Sometimes you multiply this ratio by
100 and express the fault coverage as a percentage. So, it is essentially the percentage
of the total number of logical faults that can be tested using a given set of test vectors
T. Suppose for a circuit I say that these are the 10 test vectors I have to apply to test
the circuit; how many faults can be detected by these 10 test vectors is what is captured
by this parameter called fault coverage.
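As a sketch, the computation looks like this; the per-vector detection sets below are made-up numbers purely for illustration (in practice, a fault simulator would supply them):

```python
def fault_coverage(test_set, detected_by, total_faults):
    """Percentage of modeled faults detected by the test set T.

    detected_by maps each test vector to the set of fault IDs it detects."""
    covered = set()
    for t in test_set:
        covered |= detected_by.get(t, set())   # union of all detected faults
    return 100.0 * len(covered) / total_faults

# Hypothetical example: 10 modeled faults, 3 test vectors.
detected_by = {"v1": {1, 2, 3}, "v2": {3, 4, 5, 6}, "v3": {7, 8}}
print(fault_coverage(["v1", "v2", "v3"], detected_by, 10))  # 80.0
```

Note that faults detected by more than one vector (like fault 3 above) are counted only once, which is why a set union is used.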

(Refer Slide Time: 14:00)

Now, why is testing considered difficult? Consider a simple combinational circuit with a
certain number of inputs and one or more outputs; let us say there are N inputs. One could
say: let us follow a simple process and verify the truth table. That is the best possible
testing we can do; we apply all possible inputs and see whether the outputs match the
truth table. So, at the inputs we have to apply 2^N combinations. Now, think of practical
circuits; the value of N can be pretty large. Here I am showing values of 25, 50, and 100,
but practical circuits can have of the order of 1000 inputs as well.

But if we just compute the value of 2^N, you see that 2^25 is about 33 × 10^6, that is 33
million; 2^50 is about 10^15; and 2^100 is about 10^30. The numbers become enormously
large; you really cannot apply so many test vectors in practice. It is simply out of the
question; you would require years and centuries to complete the test. This is why we say
testing is a difficult problem: exhaustive testing is not feasible as N increases.
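To get a feel for these numbers, the following sketch computes how long exhaustive testing would take, assuming an optimistic rate of one billion vectors per second (a figure of mine, not from the lecture):

```python
def exhaustive_test_time_years(n_inputs, vectors_per_sec=1e9):
    """Time to apply all 2^N input combinations, in years."""
    seconds = 2 ** n_inputs / vectors_per_sec
    return seconds / (365.25 * 24 * 3600)

for n in (25, 50, 100):
    print(n, exhaustive_test_time_years(n))
# N = 25 finishes in a fraction of a second, N = 50 takes about 13 days,
# and N = 100 needs on the order of 10^13 years.
```

Even a million-fold speedup in the tester only shaves a constant factor off an exponential; that is why practical test generation aims at a small, targeted set of vectors instead.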

(Refer Slide Time: 15:37)

(Refer Slide Time: 15:40)

We have talked about combinational circuits, but if you consider sequential circuits the
problem becomes even more complex. Suppose there are N inputs and S flip-flops or state
variables. Then there can be 2^N primary input combinations and 2^S states, so the machine
can be in 2^N multiplied by 2^S possible configurations.

The output depends on all these combinations, so the complexity becomes of the order of
2^(N+S). The problem is thus even more complex than for a combinational circuit; verifying
the state table of a sequential circuit is also infeasible. So, we need some mechanism for
handling this complexity. We shall see later a method called design for testability, which
gives us a practical and feasible solution to address this issue.

(Refer Slide Time: 16:53)

Now, let us briefly talk about the various processes or steps that we carry out during the
testing of a circuit. Some of these we shall be discussing; one or two of them we shall
not be discussing as part of this lecture series. The first point I have already
mentioned, and it is the most important: modeling of the faults. In fault modeling, we
abstract the physical defects, as mentioned earlier, and define a suitable logical fault
model. We shall see examples of this later. This simplifies the scope of test generation,
because now you can count how many faults there are; for a given circuit, you may say, for
example, that there are 20 faults.

So, you try to generate a test set which can detect these 20 faults. Unless this number 20
was given to you, it would be very difficult to generate the test set; you would be
testing for what? That question would arise. So, after fault modeling, the next natural
step is test generation: you are given a circuit and, of course, after fault modeling, a
set of faults.

You have to generate a set of test vectors T that can detect these faults; this is called
test generation, and we shall also talk about it briefly. Then comes fault simulation. In
fault simulation, we have a circuit, a set of faults, and also a set of test vectors; a
fault simulator is a software tool that will analyze the circuit and tell you how many of
these faults are tested by this set of test vectors. Fault simulation is a very useful
tool; however, in this lecture series we shall not be talking about it. Just remember that
fault simulation is also a useful step in testing.

(Refer Slide Time: 19:02)

Then you have something called design for testability. We have already seen that the
problem of testing is quite difficult: a combinational circuit is difficult, and a
sequential circuit is even more difficult. Design for testability, or DFT in short,
consists of a set of design rules which, if you follow them during design, will make the
circuit easily testable. This is the basic idea; of course, if you do this, some
additional overheads will come in.
We shall look at one of the techniques later. And finally, you have something called
built-in self-test, where, when you fabricate a chip, some testing hardware is also
fabricated inside the chip itself. The chip can test itself; from outside you need not do
anything, and the chip will tell you "I am good" or "I am bad". These are the different
processes that we typically talk about in the context of testing.

(Refer Slide Time: 20:23)

So, this is the overall picture. Here you have a circuit that you want to test. You apply
some input patterns and get some outputs; you know what the expected outputs are, and
these are called the golden response. You compare the output response against the golden
response: if they match, you say the circuit is good; if they do not match, you say the
circuit is bad, it is not working.
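That compare-against-golden-response step can be sketched in a few lines. The inverter "circuit" and the helper name `apply_tests` below are hypothetical stand-ins:

```python
def apply_tests(circuit, vectors, golden):
    """Apply each test vector and compare the response with the golden one.
    Returns True (pass) only if every response matches."""
    return all(circuit(v) == g for v, g in zip(vectors, golden))

good_inverter = lambda x: 1 - x
stuck_at_1 = lambda x: 1          # faulty device: output stuck at 1

vectors, golden = [0, 1], [1, 0]
print(apply_tests(good_inverter, vectors, golden))  # True
print(apply_tests(stuck_at_1, vectors, golden))     # False
```

In real testers the golden response comes from simulating the fault-free design, and the comparison is done by automatic test equipment rather than software.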

This is a very simplified overall schematic diagram. With this we come to the end of this
lecture, where we have introduced you to the basic problem of testing. Over the next few
lectures, we shall be talking about the different important steps that we mentioned,
starting with fault modeling, that are required in the process of testing.

Thank you.

Switching Circuits and Logic Design
Prof. Indranil Sengupta
Department of Computer Science and Engineering
Indian Institute of Technology, Kharagpur

Lecture - 56
Fault Modeling

In this lecture, we shall be starting the discussion on fault modeling. If you recall, we
mentioned in the last lecture that fault modeling is a very important step that we should
carry out at the very beginning, before we go through the other steps like test
generation, fault simulation, etc. So, the topic of this lecture is fault modeling.

(Refer Slide Time: 00:47)

Let us start with the basic motivation once more: why do we need fault models? We
mentioned in the last lecture that the number of physical defects in a chip can be too
many to count, because most of the parameters are continuous in nature and can take
infinitely many possible values. So, you cannot test the circuit against all possible
values of the parameters.

It is not just difficult, it is impossible, to count and analyze all possible faults;
because of this we do some kind of abstraction. We abstract the defects and define
so-called logical fault models. If we do this, the advantage we get is that the number of
faults to be considered can be drastically reduced; we shall see some examples which
support this statement. And if we can reduce the number of faults, the subsequent steps of
test generation and fault simulation become that much easier. Moreover, if we can count
the number of faults, then we can also compute the value of the fault coverage.

We can analyze a circuit with respect to testability: how many faults are getting
detected, which faults are not so easy to detect, and so on. We shall be discussing a few
of the fault models which are very important; just remember that there are many other
fault models as well. We shall be discussing fault models at the functional and structural
levels.

(Refer Slide Time: 02:49)

We start with functional level fault models. A functional level fault model means that the
description of the circuit we are trying to test is given at the functional level. What do
we mean by functional level? The functional level is sometimes also called the register
transfer level, or RTL. Here the basic building blocks of the circuit are not gates but
slightly higher-level units: registers, adders, multipliers, multiplexers, and so on. A
circuit described at that level is what we call a functional level description.
The point to note is that fault models defined at the functional level are very specific
to the blocks: for a multiplexer I will have one fault model, for an adder a different
fault model; they are not general. They are specific to the blocks being used. But, say
for a multiplexer, if you have such a strategy it can be very efficient; I will take the
multiplexer as an example, and you will see that the test set can also be generated fairly
directly. So, I shall take this 4-to-1 multiplexer as an example, which has 4 data inputs,
2 select inputs, and one output.

(Refer Slide Time: 04:37)

Let us take this example. Look at the multiplexer schematic diagram here: the 4 inputs A0
to A3, the select lines S0 and S1, and the output F. The corresponding truth table is
shown here. The assumption is that if S1 S0 is 0 0, then A0 is selected. If it is 0 1,
that is, S1 is 0 and S0 is 1 (S0 being the least significant bit), then A1 is selected. If
it is 1 0 (S1 is 1, S0 is 0), then A2 is selected; and if it is 1 1, then A3 is selected.
For test generation, what we do is apply the 4 possible values of the select lines: 0 0,
0 1, 1 0, and 1 1.

Now, we know that when we apply 0 0, A0 is supposed to be selected. So, we apply both
conditions: if A0 is 0, the output is supposed to be 0; if A0 is 1, the output is supposed
to be 1. But we also do something more. Since we are talking about a functional fault
model, we know how a multiplexer works: if S1 S0 is 0 0, A0 is supposed to be selected,
but because of a fault at the functional level, instead of A0 some other input may be
getting selected, maybe A1, maybe A2, maybe A3. To safeguard against that, when A0 is 0 we
apply the reverse value 1 on A1, A2, and A3, so that if any other input is accidentally
selected, the output will not be 0; it will become 1, and the fault can be detected.

Similarly, when A0 is 1, we apply the reverse value 0 on the others. The same is done for
the other select combinations: for 0 1, A1 carries the test values 0 and 1 while the other
inputs carry the reverse values 1 1 1 or 0 0 0; for 1 0, the same is done with A2; and for
1 1, with A3. So, just by looking at how a multiplexer works at the functional level, we
require only eight test vectors; but if you look at the total number of inputs with
respect to the truth table, there were 6, and 2^6 is 64. So, instead of applying 64
possible inputs, we need to apply only 8 inputs to test the functional behaviour of a
multiplexer.

This is the basic idea. We need not apply all possible inputs. We have to understand the functional behaviour of the circuit; since we want to test with respect to that, we can generate the test vectors directly. So, in general, for a 2^n-to-1 line multiplexer with n select lines, we need 2^(n+1) test vectors. Here n was 2, and that is why we need 2^3 = 8 test vectors.

And in this scheme all the functional faults can be detected. What are the functional faults? We are talking about some input line being incorrectly selected, as I had said; or some line may be permanently fixed at 0 or 1, so that whatever you apply from outside, the line is always at 0 or always at 1. Some line may be accidentally short-circuited, or a connection may be broken, in which case the line may be treated as 0. Because of those kinds of physical failures, this kind of fault can occur.

So, this example shows that a functional fault model can be very effective. And once you have a functional fault model, you can have a very efficient way of generating the tests. But whatever we talked about for a multiplexer cannot be applied to an adder; for an adder the logic and the behaviour will be entirely different. So, these techniques are very specific to blocks; they are not general.
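The recipe described above can be written out mechanically for any 2^n-to-1 multiplexer. The sketch below (the function names are mine, not from the lecture) generates the 2^(n+1) functional test vectors and checks that every "wrong input selected" fault is caught by some test:

```python
def mux_functional_tests(n):
    """Generate 2**(n+1) functional tests for a 2**n-to-1 multiplexer.
       Each test = (select value, data inputs): the selected input gets v,
       every other input gets the reverse value 1 - v, for v = 0 and 1."""
    k = 2 ** n                         # number of data inputs
    tests = []
    for s in range(k):                 # all select combinations
        for v in (0, 1):
            data = [1 - v] * k         # reverse value everywhere ...
            data[s] = v                # ... except on the selected input
            tests.append((s, data))
    return tests

def mux(select, data):                 # fault-free multiplexer behaviour
    return data[select]

tests = mux_functional_tests(2)        # 4-to-1 MUX
print(len(tests))                      # 8, instead of 2**6 = 64 exhaustive

# Functional fault: when the select value is s, input w != s is picked.
# Every such fault flips the output on one of the generated tests.
for s in range(4):
    for w in range(4):
        if w != s:
            assert any(sel == s and data[w] != mux(sel, data)
                       for sel, data in tests)
```

Because every non-selected input carries the reverse value, any wrongly selected input is guaranteed to produce the opposite output.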

(Refer Slide Time: 09:45)

So, let us now look at structural-level fault models, which are more common. Structural level means we view the circuit as a netlist. A netlist means some basic blocks and their interconnections; typically the blocks are gates or flip-flops. For combinational circuits there will be only gates; for sequential circuits there will also be flip-flops, along with their interconnections. Let us say I have a circuit like this. The structural fault model that we are talking about rests on a very important assumption.

The assumption says that the gates or blocks are fault-free: there cannot be any fault inside the blocks; rather, faults can be there only on the interconnection lines. But you may argue: why this kind of an assumption, when there can be a fault inside a gate also? The logic is this: there can of course be a fault inside a gate, but the fault may be such that, because of it, one of the inputs is permanently being treated as if it were at 0.

So, we abstract it in a way that the gate is assumed to be fault-free, but that input is said to be permanently at 0. This is the kind of fault we are talking about. Now, although it seems overly simplistic, this model has been found to be very effective, and it can cover a very large percentage of the actual physical faults.

(Refer Slide Time: 11:51)

Now, we shall be discussing one very popular structural-level fault model, called the stuck-at fault model, which is widely used in industry as well.

(Refer Slide Time: 12:07)

Let us see what the stuck-at fault model is. In the stuck-at fault model the basic assumption is this: we assume that, because of failures or faults, some of the circuit lines are permanently fixed at either logic 1 or logic 0, and we call these stuck-at-0 and stuck-at-1 faults. The faults are denoted like this: A stuck-at-0 (A s-a-0), or in short A/0; and A stuck-at-1, or in short A/1. So, suppose this is one line of

the circuit; I can say that there can be a fault A stuck-at-0 and a fault A stuck-at-1 on it.

(Refer Slide Time: 13:11)

Now, suppose there is a fault in this first gate because of which its output F is always at 0, that is, F stuck-at-0. This fault will equally affect F1 and F2, because the value of F goes to both F1 and F2. But imagine instead that there is a fault in this second gate because of which this line, let us call it F1, is stuck-at-0. This fault behaviour will be exhibited only on F1: it is not an electrical fault on F itself, but because of a fault this gate input is being treated as if it were always at 0, while F and F2 still work correctly. There will be no stuck-at-0 fault on F or F2. Because of this, when you count the number of lines, fanout stems and fanout branches are considered as separate lines; just remember this.

(Refer Slide Time: 14:57)

First, let us talk about the single stuck-at fault, which is the most widely used stuck-at fault model. This assumes that only one line of the circuit has a stuck-at fault at any given time. If a circuit has k lines, each line can have 2 faults, stuck-at-0 or stuck-at-1; because only one of them occurs at a time, the total number of faults is 2k. Let us take a gate-level netlist; this is the NAND realization of the XOR function. You see there is a fanout here, and here also there is a fanout. Count the number of lines: 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12. So k is 12, and because k is 12, the total number of single stuck-at faults is 2k = 24. Simple.

(Refer Slide Time: 16:21)

Now, people have also talked about multiple stuck-at faults, which means more than one line having a stuck-at fault at the same time. It was mentioned that for a circuit with k lines the total number of multiple stuck-at faults is 3^k − 1; where does this come from? Imagine it like this: the circuit has k lines; consider each individual line one at a time. The first line can be in 3 different states: it can be stuck-at-1, it can be stuck-at-0, or it can be fault-free.

The second line can also be in one of the 3 states: stuck-at-0, stuck-at-1, or fault-free. So, all k lines can be in three states each. If you look at all possible combinations, it will be 3 multiplied by 3 multiplied by 3, k times, that is, 3^k. And out of these 3^k combinations there is one that says all the lines are fault-free. Subtracting that one, it becomes 3^k − 1.

So, for the same example there are twelve lines, and the number of multiple stuck-at faults, 3^12 − 1, comes to 531,440, about 5.3 lakh. Just imagine: for this very small circuit with 4 gates, the number of multiple faults exceeds five lakh. So, what about a large circuit with 1000 gates or more? Counting multiple stuck-at faults may become impractical; that is why most people concentrate on single stuck-at faults only.
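The two counts are easy to check directly; a small sketch for k = 12 lines (the helper names are mine):

```python
def single_stuck_at_count(k):
    # each of the k lines can be stuck-at-0 or stuck-at-1, one at a time
    return 2 * k

def multiple_stuck_at_count(k):
    # each line independently: stuck-at-0, stuck-at-1 or fault-free
    # -> 3**k combinations, minus the single all-fault-free one
    return 3 ** k - 1

print(single_stuck_at_count(12))       # 24
print(multiple_stuck_at_count(12))     # 531440, about 5.3 lakh
```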

(Refer Slide Time: 18:25)

Now, let us look at some examples of single stuck-at fault testing. Think of a single 2-input AND gate. There can be 6 faults: A stuck-at-0, A stuck-at-1, B stuck-at-0, B stuck-at-1, F stuck-at-0, F stuck-at-1. I am showing the set of tests that can detect all the faults. So, I tell
you that 0 1, 1 0 and 1 1 are sufficient. If we apply 0 1, then under the fault-free condition the output will be 0.

So, if the fault A stuck-at-1 is there, the output will change to 1 and it can be detected; or if the fault F stuck-at-1 is there, then also the output becomes 1 and it can be detected. Similarly, 1 0 can detect B stuck-at-1 and F stuck-at-1. And for 1 1, if we apply 1 and 1, the output is supposed to be 1. So, if there is a fault A stuck-at-0, it can be detected, because one of the inputs becomes 0 and the output becomes 0. The second one, B stuck-at-0, can also be detected; that will also change the output. And the third one, F stuck-at-0, can also be detected.

So, you can detect a fault if the fault-free output F and the faulty output, let us call it F_alpha, are different, which means their exclusive OR is equal to 1. The exclusive OR equals 1 when the values are 0 1 or 1 0, that is, when they differ. This is the necessary condition for the detection of a fault. Take a larger AND gate, a 4-input AND gate. Following similar logic, you can verify that you require only 5 test vectors for testing all the faults: 0 1 1 1 will detect A stuck-at-1 and F stuck-at-1; 1 0 1 1 detects the stuck-at-1 faults on B and F; the next two those on C and F, and on D and F. And 1 1 1 1 will detect all the stuck-at-0 faults. So, these 5 test vectors taken together can detect all the 10 faults.
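This claim is easy to check by brute force; the sketch below (helper names are my own) enumerates all 10 single stuck-at faults of a 4-input AND gate and verifies that the 5 vectors leave none of them undetected:

```python
def and4(bits):
    # 4-input AND gate
    return int(all(bits))

def detects(vec, line, val):
    """Does test `vec` detect `line` stuck-at-`val`?
       Lines 0..3 are the inputs A..D; line 4 is the output F."""
    good = and4(vec)
    if line == 4:                  # output stuck-at fault
        bad = val
    else:                          # input stuck-at fault
        faulty = list(vec)
        faulty[line] = val
        bad = and4(faulty)
    return good != bad             # necessary condition: outputs differ

tests = [(0, 1, 1, 1), (1, 0, 1, 1), (1, 1, 0, 1), (1, 1, 1, 0), (1, 1, 1, 1)]
all_faults = [(line, val) for line in range(5) for val in (0, 1)]
undetected = [f for f in all_faults
              if not any(detects(t, *f) for t in tests)]
print(undetected)                  # [] -> the 5 vectors cover all 10 faults
```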

(Refer Slide Time: 21:03)

Let us take some more examples; take a 3-input exclusive OR (XOR) gate. I am saying that 2 test vectors are sufficient. Why? Let us say you apply 0 0 0; the output is 0. The property of an exclusive OR function is that whenever one of the inputs changes, the output also changes. So, if the first input changes from 0 to 1, that means there is a stuck-at-1 fault, and it will be detected. A stuck-at-1 fault on the second line will also be detected, likewise on the third line, and a stuck-at-1 on the output will of course be detected.

Similarly, if we apply 1 1 1 at the inputs, the output will be 1, so any stuck-at-0 fault will make the output 0. Hence these 2 test vectors are sufficient. In general, for an m-input XOR gate, this works whenever m is odd: if m is odd, you need only the 2 test vectors all 0s and all 1s. But suppose you have an even number of inputs, say 4. You will need 0 0 0 0, for which the output is 0; this will detect all stuck-at-1 faults on the inputs, and also stuck-at-1 on the output. If we apply 1 1 1 1, that is an even number of 1s, so the output will again be 0.

This will detect all stuck-at-0 faults on the inputs and the stuck-at-1 fault on the output, but the stuck-at-0 fault on the output is still missing. For that you have to add another test vector: any test vector with an odd number of 1s, which makes the output 1, can detect a stuck-at-0 fault on the output. So, for an XOR gate with an even number of inputs you need 3 test vectors. These examples illustrate that in the process of test generation, the number of test vectors can be drastically reduced: even for a 4-input XOR gate, or a 100-input XOR gate, you need only 2 or 3 test vectors. These are very simple examples to illustrate the point.
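The parity argument above can also be verified exhaustively. The sketch below (my own helper functions) computes, for a 4-input XOR gate, which single stuck-at faults a given test set catches, and shows that the all-0s and all-1s pair misses exactly the output stuck-at-0 fault:

```python
from functools import reduce
from operator import xor

def xor_gate(bits):
    return reduce(xor, bits)

def detected_faults(tests, m):
    """Single stuck-at faults of an m-input XOR caught by `tests`.
       A fault is (line, value); lines 0..m-1 are inputs, line m is the output."""
    caught = set()
    for vec in tests:
        good = xor_gate(vec)
        for line in range(m):                      # input stuck-at faults
            for val in (0, 1):
                bad = xor_gate(vec[:line] + (val,) + vec[line + 1:])
                if bad != good:
                    caught.add((line, val))
        for val in (0, 1):                         # output stuck-at faults
            if val != good:
                caught.add((m, val))
    return caught

m = 4
all_faults = {(l, v) for l in range(m + 1) for v in (0, 1)}
two = [(0,) * m, (1,) * m]
three = two + [(1, 0, 0, 0)]                       # odd number of 1s
print(all_faults - detected_faults(two, m))        # only output stuck-at-0 left
print(all_faults - detected_faults(three, m))      # empty: all 10 faults covered
```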

(Refer Slide Time: 23:43)

Now, there is one other thing I want to talk about: the problem of reducing the number of faults. There are two properties, called fault equivalence and fault dominance, that can actually reduce the number of faults we need to consider in a circuit. Let us take some examples; it will then be clear.

(Refer Slide Time: 24:17)

First, let us talk about fault equivalence. I am taking a couple of examples; the others you can verify similarly. Think of an AND gate; let us say the inputs are A and B, and the output is F. If there is a fault A stuck-at-0, what happens? One of the inputs is always at 0, so the output F is always 0. If there is a fault B stuck-at-0, then also the output is 0. And if there is the fault F stuck-at-0 on the output, then also the output is 0. So, you cannot distinguish between these 3 faults; they are equivalent.

The idea is that when you are counting the number of faults, you are counting A stuck-at-0, B stuck-at-0 and F stuck-at-0 three times, but you can delete two of them if you want. You can keep only A stuck-at-0, because they are equivalent, and generate only a test vector that detects a stuck-at-0 fault, which is 1 1 (the output is 1). This test vector then automatically also detects B stuck-at-0 and F stuck-at-0.

The case is similar for the other gates, as you can verify: for a NAND gate, the input stuck-at-0 faults and the output stuck-at-1 fault are equivalent; for an OR gate, all stuck-at-1 faults are equivalent, since input stuck-at-1 and output stuck-at-1 all make the output F equal to 1. Now, this process, where out of a set of equivalent faults you keep one and delete the others, is called equivalence fault collapsing.
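Equivalence can be checked mechanically: two faults are equivalent exactly when their faulty truth tables are identical, so no test can tell them apart. A brute-force sketch for a 2-input AND gate (the helper names are mine):

```python
from itertools import product

def faulty_table(line, val):
    """Output column of a 2-input AND gate under `line` stuck-at-`val`.
       Lines: 'A' and 'B' are the inputs, 'F' is the output."""
    rows = []
    for a, b in product((0, 1), repeat=2):
        fa = val if line == 'A' else a
        fb = val if line == 'B' else b
        out = val if line == 'F' else fa & fb
        rows.append(out)
    return tuple(rows)

# Group the 6 single stuck-at faults by their faulty truth tables:
# identical tables -> indistinguishable -> equivalent faults.
classes = {}
for line in 'ABF':
    for val in (0, 1):
        classes.setdefault(faulty_table(line, val), []).append((line, val))

for members in classes.values():
    print(members)
```

The grouping shows one class of three faults, {A/0, B/0, F/0}, so 6 faults collapse to 4 representatives.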

(Refer Slide Time: 26:17)

Let us take an example; take a circuit like this, where we are considering only AND gates and OR gates. These are fanout stems, so the stuck-at-0 and stuck-at-1 faults on the stem and on the branches are counted separately. For this first AND gate, as I have said, the stuck-at-0 faults are all equivalent: the stuck-at-0 faults on the two inputs and the stuck-at-0 fault on the output. So, using fault collapsing, I keep only one of them and delete the other two; the red ones are the ones I have deleted.

Similarly, for the second gate I keep one and delete the two input stuck-at-0 faults, and the same for the third. For an OR gate the stuck-at-1 faults are equivalent, so I keep only one stuck-at-1 fault and delete the other two; the same for the next one. For the last AND gate, again I keep one and delete two. So, how many have I deleted? 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12: twelve in all.

(Refer Slide Time: 27:45)

So, in the original circuit the number of faults was 32. After deleting the red-marked ones, I am left with 20. This is what is meant by fault collapsing: after collapsing using equivalent faults, I am able to reduce the number of faults. This is one technique.

(Refer Slide Time: 28:05)

The other technique is called fault dominance. Let us try to understand the definition; then we will take an example. It says: if all the tests for some fault fi also detect another fault fj, then we say that fj dominates fi, and we denote it as fj → fi. I will show an example: the output stuck-at-1 fault of an AND gate dominates an input stuck-at-1 fault. Whenever such a dominance relation is there, we keep fi and delete fj. This is called dominance fault collapsing. This may look a little confusing, so let me take an example to explain; that will be easiest.

(Refer Slide Time: 29:05)

Let me take an example: consider a 3-input AND gate, and take the two faults fi and fj as follows. fi is the stuck-at-1 fault on one of the inputs, say A stuck-at-1; this is my fi. And fj is the stuck-at-1 fault on the output, F stuck-at-1. Now, for this AND gate, see what tests can detect fi. For A stuck-at-1 there is a single test vector, 0 1 1, because under the fault-free condition F will be 0, and if A is stuck-at-1 the output will be different, so the fault will be detected.

But for the output fault F stuck-at-1 there are many possible test vectors: any test which makes the output 0 will be a test for fj, because normally the output will be 0 and with F stuck-at-1 it will become 1. Among these tests, you see, 0 1 1 is also there. So, what I am saying is: if such a pair of faults is there, you remove fj from the list. If you keep fi, the test 0 1 1 will be included anyway, and 0 1 1 is also a test for fj, so fj will also get detected. This is the idea behind dominance fault collapsing: you keep fi and delete fj.

(Refer Slide Time: 31:03)

So, we remove fj and keep only fi, because a test that detects fi will automatically also detect fj.

So, with this we come to the end of this lecture, where we have briefly covered the problem of fault modelling. We have looked at some of the important and commonly used fault models: a functional fault model for multiplexers, and then the stuck-at fault model, both single and multiple. We shall continue the discussion in the next lecture, where we shall talk about the problem of test generation.

Thank you.

Switching Circuits and Logic Design
Prof. Indranil Sengupta
Department of Computer Science and Engineering
Indian Institute of Technology, Kharagpur

Lecture - 57
Test Pattern Generation

We now talk about the problem of Test Pattern Generation. Earlier we have seen that, given a circuit (a gate-level circuit, for example), we can define a fault model. The stuck-at fault model, we have said, is the most widely used fault model, and in all the examples we show now we shall be considering the single stuck-at fault model. So, the title of the lecture is Test Pattern Generation.

(Refer Slide Time: 00:50)

So, let us see what the scope of test pattern generation is. We have a circuit C, and we have a list of faults F, which we call the fault list. Then there is a set of test vectors that we want to generate; this will be our output, while our inputs are the circuit and the fault list. The objective of test pattern generation is to generate a set of test vectors that detects all the faults in F. For example, as we saw earlier, if I have a 3-input XOR gate, there are 4 lines, so the size of the fault list will be 8 (there will be 8 faults), but the test set will consist of only 2 patterns, 000 and 111. So, given the circuit, this should be our output; this is the problem we want to solve.

(Refer Slide Time: 02:09)

This motivation we have already seen through some examples earlier, but let us repeat it. Test generation can drastically reduce the number of required test vectors. Like the example of the exclusive OR gate that we took: any odd-input XOR gate requires only 2 test vectors for testing all single stuck-at faults. Take an AND gate with 4 inputs and 1 output: normally the number of input combinations is 16, so for truth-table verification you would require 16 test patterns, but we have seen that only 5 test patterns are sufficient to detect all single stuck-at faults.

So, there can be a drastic reduction in the number of tests. These are some of the examples I am showing here: for a 4-input AND gate these are the 5 test vectors you need, and for any odd-input XOR gate you need only 2 test vectors. So, from 16 you come down to 5, and from 32 down to 2; that is a drastic reduction.

(Refer Slide Time: 03:40)

So, let us take an example netlist with 3 gates. Count the lines: 1, 2, 3, 4, 5, 6, 7; there are 7 lines, so there are 14 single stuck-at faults.

I am just showing the result here, not the process by which the tests are generated. Consider the test vector 1011; what are the faults it can detect? With 1011, I get 1 here, 1 here, and 1 here. A fault will be detected if "the output G in the absence of the fault" exclusive-ORed with "the output in the presence of the fault" is 1, which means the two differ; this is the necessary condition for the detection of a fault.

Now, for 1011 the output G is 1, so obviously G stuck-at-0 will be detected, and E and F are both 1. If either E is 0 or F is 0, then G will be 0, that is, G will change; so E stuck-at-0 and F stuck-at-0 will also be detected. Similarly, if either C or D becomes 0, then F will be 0 and in turn G will also change, so C/0 and D/0 will also be detected. On the other side, if A is 0, the output of the OR gate becomes 0 and hence G will be 0, so A/0 will also be detected.

Like this, if you check, you will find that these test vectors are able to detect these faults, and that the five test vectors together are sufficient to detect all the 14 faults in the circuit. This is just an example to illustrate; I suggest you verify the other test patterns and check whether the listed faults are getting detected.
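The netlist itself is on the slide, so the sketch below assumes the gate structure implied by the discussion (E = A OR B, F = C AND D, G = E AND F) and fault-simulates the vector 1011 against all 14 single stuck-at faults:

```python
def simulate(vec, fault=None):
    """Evaluate the assumed netlist E = A|B, F = C&D, G = E&F.
       `fault` is (line, stuck_value), or None for the fault-free circuit."""
    val = dict(zip('ABCD', vec))

    def v(line):                       # read a line, honouring the fault
        if fault is not None and fault[0] == line:
            return fault[1]
        return val[line]

    val['E'] = v('A') | v('B')
    val['F'] = v('C') & v('D')
    val['G'] = v('E') & v('F')
    return v('G')

faults = [(l, b) for l in 'ABCDEFG' for b in (0, 1)]   # 14 single faults
t = (1, 0, 1, 1)
detected = {f for f in faults if simulate(t, f) != simulate(t)}
print(sorted(detected))   # A/0, C/0, D/0, E/0, F/0 and G/0 are caught
```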

(Refer Slide Time: 06:15)

Now, let us talk about some methods of generating tests in a systematic way. Let us consider that I have a circuit which implements the function f = ab ∨ ac. Here I am showing a method based on the truth table. For the inputs a, b, c, these are the output values of f = ab ∨ ac. Now, suppose we consider a stuck-at-0 fault on this line, the line producing the ab term; we call the fault α. If this line is stuck-at-0, it is always 0, so the term ab disappears, and effectively the function in the presence of the fault will be only ac. So, the faulty function f_α is only ac.

So, f_α is 1 only when a and c are both 1. From the truth table, the idea is as follows: look at all the rows and find the rows where f and f_α are different; you see that they differ here. So, 110 will be the required test vector. If we apply 110, then in the absence of the fault the output will be 1, and in the presence of the fault the output will be 0; so you can detect it. But the problem with this method is that you have to start with the truth table, and constructing a truth table for a large function is not easy.

Suppose I tell you that I have a 30-variable function. How much is 2^30? It is about 1 billion; it is huge. You really cannot construct a truth table of that size, compare the fault-free and faulty outputs, and see where they differ. So, this method is good for understanding, but it cannot be used for large functions.
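For a small function the truth-table comparison can be done by brute force. A sketch for f = ab + ac with the fault α that makes the ab term vanish:

```python
from itertools import product

def f(a, b, c):
    return (a & b) | (a & c)       # fault-free function: f = ab + ac

def f_alpha(a, b, c):
    return a & c                   # the ab term disappears under the fault

# Tests are exactly the rows where the two truth tables differ.
tests = [v for v in product((0, 1), repeat=3) if f(*v) != f_alpha(*v)]
print(tests)                       # [(1, 1, 0)]: 110 is the only test
```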

(Refer Slide Time: 09:17)

So, let us talk about a systematic method called the method of Boolean difference. This is an algebraic method, meaning that you have to carry out some algebraic calculation in order to find out what test vectors can detect a fault. Let us first try to understand the basic principle behind the method.

Now, we define something called the Boolean difference. Consider a function f of n variables x1 to xn. The Boolean difference is defined with respect to some variable xi. The idea is very simple: I have the function, and I am talking about a variable xi. First I set xi equal to 0 and look at what the function becomes; then I set xi equal to 1 and look at what the function becomes.

Then we take the exclusive OR of these two functions; that is what is defined as the Boolean difference. Notationally, it is expressed in derivative notation as df/dxi: in the function you first set xi equal to 0, then you set xi equal to 1, and take the XOR,

df/dxi = f_i(0) ⊕ f_i(1),

where f_i(0) is shorthand for the function with xi set to 0, and f_i(1) for the function with xi set to 1.

So, what does the Boolean difference really mean? We are taking the exclusive OR of two functions. Now, when will the exclusive OR of two functions be equal to 1? It will be 1 if their values are 0 1 or 1 0, which means the condition says that whenever xi changes, the function also changes. This is an indirect way of saying it: Boolean difference equal to 1 is the condition under which a change on the variable xi makes the output change as well. This is what the Boolean difference means.

(Refer Slide Time: 12:35)

This is what I just said: the Boolean difference specifies the condition under which a change on line xi will make the output change; we say the change will propagate to the output. Now, consider a fault xi stuck-at-c, where c can be either 0 or 1; we are talking about either xi stuck-at-0 or xi stuck-at-1. To detect the fault, two things must be satisfied. Let us take a very simple example to illustrate: suppose this is my line xi and I am trying to detect the fault xi stuck-at-0.

To detect xi stuck-at-0, the first thing I have to do is apply the reverse logic value on the line where I am trying to detect the fault, because if I apply a 0, then with the line stuck-at-0 it is 0 and without the fault it is also 0, so I cannot see any change. So, the first condition is that the reverse logic value c̄ must be applied to xi; we say that we are exciting the fault. Second, the change we are forcing on line xi (normally it will be 1; if the fault is there, it will be 0) must be propagated to the output. And how do we express this propagation of the change? By the Boolean difference. So, these two conditions must be combined together.

(Refer Slide Time: 14:17)

So, combining the two things together: if we want to detect xi stuck-at-0, we have to set xi to 1, and the change must be propagated to the output, which we express by the Boolean difference. Both must hold, so the condition is xi · df/dxi = 1: xi should be 1 and the Boolean difference should also be 1. Similarly, for xi stuck-at-1, I have to set xi equal to 0, so here I write x̄i; the condition is x̄i · df/dxi = 1, that is, xi has to be 0 and the Boolean difference has to be 1.

That means I am exciting the fault and propagating the change to the output; both conditions must be true simultaneously. This is the basic idea behind the method of Boolean difference: excite the fault with either xi or x̄i, and propagate the fault effect using the Boolean difference. Let us take an example.

(Refer Slide Time: 15:34)

Let us take a slightly more complex circuit like this; here there is a circuit with 5 gates. If you compute the function realised, you see it is F = (A + B·C̄) ⊕ (C·D). Let us say we want to find tests for faults on line C, so we compute the Boolean difference dF/dC. First we set C equal to 0: the second term C·D disappears and C̄ becomes 1, so F_C(0) = A + B. Then we set C equal to 1: C̄ is 0, so B·C̄ disappears, and C·D becomes just D, so F_C(1) = A ⊕ D. Taking the XOR and expanding (I am not showing the intermediate steps), the final expression is

dF/dC = (A + B) ⊕ (A ⊕ D) = Ā·B·D̄ + A·D + B̄·D.

So, when you are trying to detect C stuck-at-0, the condition is C · dF/dC = 1, which gives

C · dF/dC = Ā·B·C·D̄ + A·C·D + B̄·C·D.

From here you can directly read off the test vectors that will be generated. The term Ā·B·C·D̄ gives A = 0, B = 1, C = 1, D = 0, that is, 0 1 1 0. In the term A·C·D there is no B, so B is a don't care: it gives 1 0 1 1 and 1 1 1 1. The term B̄·C·D has A as a don't care and gives 0 0 1 1 and 1 0 1 1, the latter already included. Similarly, for C stuck-at-1 you have C̄ in the condition, C̄ · dF/dC = Ā·B·C̄·D̄ + A·C̄·D + B̄·C̄·D, which gives the vectors 0 1 0 0, 1 0 0 1, 1 1 0 1 and 0 0 0 1.

So, the good thing is that for a given fault you get all the test vectors that can detect it; a don't care means the input can be either 0 or 1. For C stuck-at-0 there are 4 possible test vectors, 0 1 1 0, 0 0 1 1, 1 0 1 1 and 1 1 1 1; similarly there are 4 for C stuck-at-1. This is the method of Boolean difference: with respect to a line, if you take the Boolean difference, you can directly generate all the possible test vectors that can detect the stuck-at-0 and stuck-at-1 faults on that line. It is an algebraic method and it generates all the tests, but because it is algebraic it is difficult to automate, that is, difficult to write a program for. So, practical test generation tools are normally not written using the Boolean difference method, because of this problem.
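The derivation can be cross-checked by brute force, taking the circuit function as F = (A + B·C') ⊕ (C·D), which is my reading of the slide; the helper names are my own:

```python
from itertools import product

def F(a, b, c, d):
    # Assumed circuit function: F = (A + B.C') xor (C.D)
    return (a | (b & (1 - c))) ^ (c & d)

def dF_dC(a, b, d):
    # Boolean difference w.r.t. C: F with C=0, XOR, F with C=1
    return F(a, b, 0, d) ^ F(a, b, 1, d)

# C stuck-at-0: excite with C=1 and require dF/dC = 1
tests_sa0 = sorted((a, b, 1, d) for a, b, d in product((0, 1), repeat=3)
                   if dF_dC(a, b, d))
# C stuck-at-1: excite with C=0 and require dF/dC = 1
tests_sa1 = sorted((a, b, 0, d) for a, b, d in product((0, 1), repeat=3)
                   if dF_dC(a, b, d))
print(tests_sa0)   # [(0, 0, 1, 1), (0, 1, 1, 0), (1, 0, 1, 1), (1, 1, 1, 1)]
print(tests_sa1)   # [(0, 0, 0, 1), (0, 1, 0, 0), (1, 0, 0, 1), (1, 1, 0, 1)]
```

The enumeration produces exactly the four test vectors for each polarity of the fault on line C.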

(Refer Slide Time: 18:54)

So, let me give an idea of the kind of methods that are used in practice. There is something called path sensitization; I will give you the basic idea behind it. For testing, we still follow the same principle as in Boolean difference: we have to excite the fault, meaning we apply the reverse logic value, and we have to propagate the fault effect to the output. The difference from Boolean difference is that there we started with the function expression and did algebraic manipulation, whereas in the method of path sensitization we start with the gate-level circuit, and whatever we do is carried out on the gate-level circuit only. So, let us see how it works.

Another thing concerns what we mean by propagating; let us take an example. Suppose I have a gate here whose output goes to some other gate, say a NAND gate; this is the output, and I am talking about a fault α on this line. Some change on this line must be propagated to the output. So, whenever a gate is encountered in between, like this NAND gate, what should the other input be? If I apply a 0 on the other input, the output will be permanently 1 and the change will not be propagated.

So, I have to apply a 1 on the other input; this is called the non-controlling value. Similarly, for an AND gate it is also 1, while for an OR gate it should be 0, and for a NOR gate it should also be 0; only then will the change propagate. This is what is meant by non-controlling value. We will take an example.

(Refer Slide Time: 21:07)

The basic procedure works in two phases; I will illustrate the steps with an example, so do not worry. First, at the site of the fault, that is, on the line where the fault we want to detect is located, you assign a logic value that is the reverse, or complement, of the polarity of the fault. For example, on a line f, if you want to detect f stuck-at-0, you apply f equal to 1, the reverse value.

Then comes the forward drive phase: from the site of the fault you select a path to one of the outputs, and you sensitize the path by assigning non-controlling values (we just saw what non-controlling values are), so that the change at the site of the fault propagates to the output. After this is done, you have to do a backward trace phase, because whatever logic values you have assigned must be traced back so that all the primary inputs are assigned appropriate values.

Once we have done that, we know what inputs have to be applied. Let me explain these steps with the help of an example; then it will be clear.

(Refer Slide Time: 22:41)

Take a circuit like this, with 5 gates, and suppose we want to detect a fault on line E; this E is the output of the OR gate, and the fault is E stuck-at-1. In the first step, as we said, the reverse logic value must be applied at the site of the fault: because we are talking of E stuck-at-1, we apply a 0 at the site of the fault. This is the first step.

Then, from the site of the fault, we have to select a path to the output. Here there is only one path, through H to Z, so there is no choice, and in the next step we propagate the fault effect along it by applying non-controlling values: a change on line E must propagate as a change on Z. On the path there is an AND gate here and an OR gate here; I told you that for an AND gate the non-controlling value is 1 and for an OR gate it is 0. So, what happens is, you apply a 1 here and a 0 here.

If we do this, then it is guaranteed that any change on line E will propagate as a change on line H and then as a change on the output Z. So, the forward drive phase is done. Now you have to do the backward trace. The basic idea is this: you have set line E to 0, line F to 1 and line G to 0, but what you have not yet determined is what values have to be applied to A, B, C and D. So, from E equal to 0 I have to propagate back, from F equal to 1 I have to propagate back, and from G equal to 0 I have to propagate back.

This is called back propagation; finally, I have to assign some values to A, B, C and D.

(Refer Slide Time: 25:20)

Back trace towards the primary inputs and assign values to the gate inputs. To make the output of the OR gate (line E) 0, both its inputs must be 0; so A = 0 and B = 0, done. Now, F is 1, so the input of this NOT gate should be 0, which means C is 0; and if C is 0, the value of G is automatically 0, so nothing more needs to be done there, and D can be a don't care. So, you have got a test vector: A = 0, B = 0, C = 0, D = don't care. This is the basic idea behind the method of path sensitization.

This is very simple in concept: from the gate-level netlist, you can simply propagate forward and backward and generate a test. So, the test vector will be this.
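
The example can be checked with a small simulation. The gate structure below is inferred from the description (E = OR(A, B), F = NOT C, G = AND(C, D), H = AND(E, F), Z = OR(H, G)); it is a sketch of the idea, not the exact figure from the slide:

```python
# Inferred 5-gate circuit from the example (an assumption, not the slide itself):
# E = OR(A, B), F = NOT C, G = AND(C, D), H = AND(E, F), Z = OR(H, G).
def circuit(a, b, c, d, e_stuck_at_1=False):
    e = a | b
    if e_stuck_at_1:      # inject the fault E stuck-at-1
        e = 1
    f = 1 - c             # NOT gate
    g = c & d             # G must be 0 (non-controlling for the OR gate)
    h = e & f             # F must be 1 (non-controlling for the AND gate)
    return h | g          # output Z

# Test vector from path sensitization: A = 0, B = 0, C = 0, D = don't care.
good = circuit(0, 0, 0, 0)
bad = circuit(0, 0, 0, 0, e_stuck_at_1=True)
print(good, bad)   # the two outputs differ, so the fault is detected
```

Running both the fault-free and faulty versions on the derived vector shows the fault effect reaching Z, and changing D does not matter, confirming the don't care.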

(Refer Slide Time: 26:17)

But the thing is not so simple. Remember very clearly that this path sensitization method is not as simple as this example shows, because during back tracing there can be conflicts. Some path may require that an input A be set to 0, but some other path may require that A be set to 1, which will not work. You will have to go back, make some changes and try again; a lot of backtracking may be required in such cases. Also, more than one path may need to be sensitized together, but these are things I am not discussing here.

But you should remember this, because practical tools are much more complex than what I have shown in the example. Very good Automated Test Pattern Generation (ATPG) tools exist; sequential ATPG is more time consuming, and we shall see later how sequential circuit test pattern generation can be handled. As I had said, we shall not be discussing fault simulation in this course, but you should remember one thing: the process of test generation, because of its complexity, is typically slower than fault simulation.

So, with this we come to the end of this lecture. In the next lecture we shall be talking about some of the techniques that we can follow for generating tests for sequential circuits; namely, a generic strategy called design for testability. This we shall be discussing in the next lecture.

Thank you.

Switching Circuits and Logic Design
Prof. Indranil Sengupta
Department of Computer Science and Engineering
Indian Institute of Technology, Kharagpur

Lecture - 58
Design for Testability

We now talk about Design for Testability techniques. Design for testability broadly refers to design principles or guidelines that make the problem of testing easier. There are many such methods and techniques available, but in this lecture we shall be concentrating on one of the most widely used and popular techniques, which is targeted towards the testing of sequential circuits, a problem that is inherently very difficult. So, the title of our lecture here is design for testability. Let us try to understand why we need this.

(Refer Slide Time: 00:58)

Design for testability, or DFT in short, constitutes a set of design techniques or rules that make both test generation and test application easier and also cost effective. As I said, here we primarily target the problem of testing sequential circuits. I mentioned earlier that in a sequential circuit, because of the presence of the internal state variables (the flip-flops), the number of possible input combinations can be even larger than for combinational circuits: not only the primary inputs but also the state variables must be considered.

You have to consider all possible combinations of them; there can be very many input combinations that may come to the input of the combinational circuit part. And because the flip-flops are inside the circuit, from outside it may not be so easy to initialise them to known values or to read them out; we call this controlling and observing them.

The DFT techniques, particularly the one that we will be talking about, make the flip-flops easily controllable and observable. Controllable means you can initialise them to any value you want; observable means you can read out the value of the flip-flops whenever you want. The most important thing that we gain is the last point: we said that the sequential circuit test generation problem is very difficult, and we want to simplify it into a combinational circuit test generation problem. We shall see how.

(Refer Slide Time: 03:13)

So, we look at the standard model of a synchronous sequential circuit: there is a combinational circuit part, and the state variables in the form of flip-flops, which are fed by a clock. The present state and the primary inputs feed the combinational logic, which generates the primary outputs and the next state.

Now, as I said, the main problem lies in the fact that the flip-flops are inside the circuit and are not directly connected to the inputs or outputs. You cannot directly initialise the flip-flops, and you cannot directly observe their values; this is the main problem.

(Refer Slide Time: 03:57)

So, we will look at a very widely used and popular design for testability technique called scan path; I will explain what it means. A non-scan sequential circuit is the conventional sequential circuit we know of. We start with a conventional synchronous sequential circuit and apply some rules to make it scan enabled (you can say scan-path enabled). What is actually done involves a few steps, as follows.

First, the flip-flops are modified into something called scan flip-flops. Then these scan flip-flops are connected in a certain way such that you can use them as a configurable shift register; we shall see how. One additional primary input pin is added, and a couple of other pins, the scan-in and scan-out pins for the shift register, are also added. Let us see how it works.

(Refer Slide Time: 05:21)

First, let us see what a scan flip-flop is. One part of it is a conventional master-slave D flip-flop, with input D, output Q and a clock. At the input side, I add a multiplexer, whose select line is driven by an extra input TC, which stands for Test Control; we need this during testing. Let us say the 0 input and the 1 input of the multiplexer carry the scan data SD and the normal data D respectively.

So, a conventional flip-flop along with a multiplexer is called a scan flip-flop. A scan flip-flop is nothing but a 2-to-1 multiplexer added at the input of a normal master-slave D flip-flop.

(Refer Slide Time: 06:50)

In the normal mode, TC is set to 1 and D goes inside; in the test mode, when TC is 0, it is not the D input but SD that goes inside. This is the only change. Earlier, the normal master-slave flip-flop required 10 gates; because of this addition we need 4 more gates, so it becomes 14. Just remember this.
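
As a rough behavioural sketch of this multiplexer-plus-flip-flop arrangement (the class and signal names are mine, and the model ignores master-slave clocking details):

```python
# Minimal behavioural model of a scan flip-flop: a 2-to-1 multiplexer in front
# of a D flip-flop. TC = 1 selects D (normal mode); TC = 0 selects SD (scan mode).
class ScanFlipFlop:
    def __init__(self):
        self.q = 0
    def clock(self, d, sd, tc):
        # multiplexer: select line TC picks D when 1, scan data SD when 0
        self.q = d if tc == 1 else sd
        return self.q

ff = ScanFlipFlop()
print(ff.clock(d=1, sd=0, tc=1))   # normal mode: captures D -> 1
print(ff.clock(d=1, sd=0, tc=0))   # test mode: captures SD -> 0
```

The single `if` on TC is the whole behavioural difference between a scan flip-flop and an ordinary D flip-flop.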

(Refer Slide Time: 07:27)

Now, how do I modify our circuit? We have replaced the normal flip-flops by scan flip-flops. The outputs of the combinational logic were going to the inputs of the flip-flops; these lines remain as they are, and go to the D inputs of the scan flip-flops. For each scan flip-flop, the upper input is D, the middle input is the test control TC, and the lower input is the scan data SD. And the outputs of the flip-flops are fed back to the inputs of the combinational circuit as usual.

But we have added some additional circuitry. We have added another input, scan-in, which goes to the SD of the first flip-flop. The output of the first flip-flop goes to the SD of the second flip-flop, which in turn goes to the SD of the third, and the output of the last flip-flop is also brought out as one extra output pin called scan-out.

(Refer Slide Time: 09:03)

So, what are we actually doing in this circuit? Just think: if TC equals 1, this is the normal mode of operation; the combinational outputs go in through D, and the flip-flop outputs go back, the normal circuit operation. But when TC equals 0, the lower inputs of the scan flip-flops are selected, so the flip-flops become a shift register.

Once it becomes a shift register, you have 3 stages connected in a chain, with the scan-in input connected at one end and the scan-out at the other. So, whatever values you want to initialise the flip-flops with, you can shift in through the scan-in input, and whatever data is in the flip-flops you can observe through the scan-out pin. This is the basic idea: just by adding these additional pins, you can easily control the flip-flops and easily observe their values.

Now, another thing: we now have a mechanism to completely control the flip-flops. This means I can apply any value I want not only at the primary inputs but also at the state inputs, because I can initialise the flip-flops. Similarly, on the output side, I can observe not only the primary outputs but also the next-state lines. How? I can store them in the flip-flops and then shift them out. This means that for test pattern generation, I can ignore the flip-flops.

I can look at the combinational circuit as if I can apply all its inputs directly and observe all its outputs directly. So, I do not need a sequential circuit test generator; I need only a combinational circuit test generator. This is the big advantage that we get here.

(Refer Slide Time: 11:15)

So, now the question is: how are the test vectors applied? Go back to the previous diagram once more. The PI and the present state are the inputs of the combinational circuit; the PO and the next state are its outputs. So, the test generator will generate data where the PI and present state are the inputs of the combinational circuit and the PO and next state are its expected outputs.

Let me give a symbolic example. Suppose the test generator has generated the inputs (I1, S1), (I2, S2), (I3, S3), and the expected output values are (O1, N1), (O2, N2), (O3, N3). Because PI is available from outside, I1, I2, I3 can be applied directly; but to apply the states S1, S2, S3 you have to shift them in serially through the shift register, after setting TC equal to 0 (not 1). Similarly, to observe the next states you have to shift them out serially, treating the flip-flops as a shift register. How they are shifted in and shifted out is shown pictorially in this diagram.

(Refer Slide Time: 12:43)

As shown, this is the axis of time. In the beginning, you set TC equal to 0 and shift in S1 bit by bit through the scan-in input. After S1 has been shifted in (suppose it is 7 bits long), you apply I1, which can be applied directly through PI; now you are in the normal mode, TC equal to 1.

After doing this, you can read the outputs through PO; and to observe the secondary output N1, the next state, you again set TC to 0 and shift it out through scan-out. Here you can do some parallel operation: while N1 is being shifted out, the next serial data S2, the present state of the next vector, can be shifted in at the same time. Only in the last step there is no S to shift in; only the last next state has to be shifted out. The dark places in the diagram are don't cares during shifting, so the PI are don't care there and you can apply anything you want. This is just to show you pictorially how the data input and output operations take place.

(Refer Slide Time: 14:26)

But the point to note is: how much time does it take? You can make a simple calculation of the total number of clock cycles that are required. The number of clock cycles is (ns + 1) · nc + ns, where ns denotes the number of scan flip-flops and nc the number of test vectors. For every test vector, ns clock cycles are required to shift in the state, and one more cycle is required to apply the PI and observe the output; this is repeated for all nc test vectors. For the last test vector you will also have to shift out the next state, which needs another ns cycles. This is how the equation comes about.

So, one drawback of this design for testability technique is that the number of clock cycles required is roughly proportional to the number of flip-flops multiplied by the number of test vectors; this is something you have to remember. Your testing time increases, but you get complete controllability and observability of the flip-flops, which is the biggest advantage you gain from this technique.
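
The clock-cycle count above can be written as a tiny helper (the function name is mine, not from the lecture):

```python
# Scan test length from the lecture: for n_s scan flip-flops and n_c test
# vectors, each vector needs n_s shift-in cycles plus one capture cycle,
# and the final response needs n_s extra shift-out cycles.
def scan_test_cycles(n_s, n_c):
    return (n_s + 1) * n_c + n_s

# Small example: 3 scan flip-flops and, say, 3 test vectors.
print(scan_test_cycles(3, 3))        # (3 + 1) * 3 + 3 = 15 cycles
print(scan_test_cycles(2000, 500))   # grows as flip-flops x vectors
```

The dominant term is ns · nc, which is exactly the "flip-flops multiplied by test vectors" proportionality mentioned above.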

(Refer Slide Time: 16:02)

So, a very specific example: 3 inputs, 2 outputs and 3 state variables, to which we can just apply this. Suppose these are the tests generated by the test generator: these are the inputs and these are the expected outputs.

(Refer Slide Time: 16:22)

And clock cycle wise you can apply it like this. For the first 3 clock cycles TC will be 0, and you shift in the value of the present state; after that you set TC to 1, apply the primary input I1 and observe the primary output. Then you again set TC to 0 and shift out the next state, while in parallel you also feed in the present state of the next vector. In the diagram you can identify the present state of the first vector, the primary input part of the first vector, the present state of the second vector, the output of the first vector, the TC waveform, and the next state of the first vector. In this way it goes on; this gives you a rough idea of how the thing works.

(Refer Slide Time: 17:42)

Now, regarding the total scan testing time, let me very briefly mention another thing. We have tested the combinational circuit part, but what about the scan flip-flops themselves? There can be faults there also. There is a standard technique for testing a shift register: you shift in a pattern like 0 0 1 1 0 0 1 1 0 0 1 1 ... of total length ns plus 4. When you apply such a pattern, every flip-flop goes through all possible transitions: 0 to 0, 0 to 1, 1 to 1 and also 1 to 0. So, you verify that all the flip-flops are able to carry out all possible transitions, and this way you can test the scan chain.

So, the total scan test length will be the expression we have already seen, (ns + 1) · nc + ns, plus this ns + 4. As an example, if there are 2000 scan flip-flops and 500 test vectors, the test length becomes about a million cycles. But it is not really much: if during testing you apply clock pulses at, say, 100 megahertz, then 10^6 cycles is nothing; it will hardly take about 10 milliseconds. So, this is not a really big deal, because nowadays the clock frequency is very high; even if you have to apply a large number of test vectors, you can afford to do that.
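
As a sketch of the arithmetic above (function names are illustrative), the flush pattern and total test length can be computed as:

```python
# Shift-register flush test: the pattern 0 0 1 1 0 0 1 1 ... of length n_s + 4
# exercises all four flip-flop transitions (0->0, 0->1, 1->1, 1->0).
def flush_pattern(n_s):
    return [(i // 2) % 2 for i in range(n_s + 4)]

# Total scan test length: the scan test proper plus the flush test.
def total_scan_test_cycles(n_s, n_c):
    return (n_s + 1) * n_c + n_s + (n_s + 4)

cycles = total_scan_test_cycles(2000, 500)
print(cycles)             # about a million cycles
print(cycles / 100e6)     # at 100 MHz, roughly 10 ms
```

For 2000 flip-flops and 500 vectors, the total comes to just over a million cycles, which at 100 MHz is on the order of 10 milliseconds, matching the rough estimate in the lecture.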

(Refer Slide Time: 19:31)

Talking about scan overheads: you need one mandatory extra input pin, TC. Regarding scan-in and scan-out, if you can afford additional pins that is fine; otherwise you can share them with your PI or PO pins, because PI and scan-in are never used together, and similarly PO and scan-out are never used together. So, you can share some of the pins if you want.

And talking about area overhead: for every scan flip-flop, we said that 4 additional gates are required. If ng denotes the total number of gates in the combinational logic and ns the number of flip-flops, the original gate count is ng plus 10 · ns, since each normal flip-flop contains 10 gates, while the scan version adds 4 · ns more gates; this is your gate overhead. A simple numerical example: with 100k gates and 2k (2000) flip-flops, the overhead will be about 6.7 percent. These are rough estimates.
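
The overhead arithmetic can be sketched as follows (the function name is illustrative; the 10-gate and 4-gate counts are the ones quoted in the lecture):

```python
# Rough gate-overhead estimate: each scan flip-flop adds 4 gates on top of
# the 10 gates of a normal master-slave D flip-flop.
def scan_area_overhead(n_g, n_s):
    extra = 4 * n_s                 # added multiplexer gates
    original = n_g + 10 * n_s       # combinational gates + normal flip-flops
    return extra / original

# 100k combinational gates, 2000 flip-flops.
print(round(100 * scan_area_overhead(100_000, 2000), 1))   # about 6.7 percent
```

Here 8000 extra gates sit on top of 120,000 original gates, giving roughly the 6.7 percent figure quoted above.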

(Refer Slide Time: 20:50)

Another thing is that some additional delays are introduced into the paths, which you should also consider. Because the flip-flops have been replaced by scan flip-flops, there is an additional multiplexer delay (approximately two gate delays) after the output of the combinational circuit. Also, the fanout of each flip-flop increases: earlier the flip-flop output was only feeding the input of the combinational circuit, but now it also drives the input of the next flip-flop in the chain. An increase in fanout means extra delay.

Roughly speaking, this leads to a 5 to 6 percent degradation in the clock frequency; these are a few things you should keep in mind. With this we come to the end of this lecture, where we have given you a brief overview of some of the design for testability (DFT) techniques that can be used to tackle the problem of testing synchronous sequential circuits.

Thank you.

Switching Circuits and Logic Design
Prof. Indranil Sengupta
Department of Computer Science and Engineering
Indian Institute of Technology, Kharagpur

Lecture – 59
Built-in Self-Test (Part I)

Continuing with our discussion on testing, in this lecture we shall be talking about something called Built-In Self-Test. In some sense, you can say that this is one kind of extreme in the area of testing, because here the circuit that you want to test has the ability to test itself.

So, a circuit can do something called self-testing; this is the basic idea, and let us see how it works. This is the first part of this topic on built-in self-test.

(Refer Slide Time: 00:57)

Let us first try to understand the basic concept, what the idea is all about. We want to test a chip, or some circuit inside the chip. The first important thing to note is that we put some extra hardware into the chip, and this extra hardware has two responsibilities: it will automatically generate the test vectors, and it will evaluate the responses of the circuit. Both these things will be carried out by the extra hardware.

So, from outside we need not apply any test vectors, and outside we do not have to evaluate the circuit responses. Everything happens inside the chip or within the circuit.

As I said, this will be done on the chip, and because of the extra hardware there will be some additional overhead involved. To support this kind of built-in self-test (BIST) operation, we need two additional pins: an input pin that acts like an activation signal (test control), by which I can tell the chip that it should now test itself, and an output pin by which the chip tells us whether it is good or bad. This is how it works.

(Refer Slide Time: 02:35)

Now, this is a very high-level schematic of a chip, with the two additional pins I mentioned: one called test control, and the other, good/bad, by which the chip reports to the outside world whether it is good or bad. If you look inside the chip, in addition to the circuit you want to test, there is a test generator circuit and also a response evaluator, call it a response compactor, inside it.

So, we will look into the details of this test generator and response compactor in the next few slides.

(Refer Slide Time: 03:27)

But before that, let us try to answer the basic question: why do we need BIST? What additional advantages do we get if we have a built-in self-test facility in a chip? Let us try to understand these advantages.

The first and most important thing is that we can do field test and diagnosis. What is meant by field test? The field refers to the place where the chip is actually being used: it can be inside a mobile phone, inside a computer system or laptop, or in specialised circuitry deployed, say, in an industrial control plant, anywhere. Wherever it is used is called the field. Conventionally, chips are tested in the laboratory beforehand, then put into the circuit and used.

When I say that they will be tested in the field, it means that, say, while the chip is inside a laptop, the chip can test itself. We do not have to take the chip or the laptop to the lab for the purpose of testing.

So, we do not require Automated Test Equipment, which is normally used to test circuits or chips and is typically very expensive.

You may also want to compare this BIST approach to the software tests for field test and diagnosis that we normally see in our PCs or laptops. Whenever you turn on a PC, you might have seen that some self-testing goes on and some messages appear. In case there is a problem somewhere, in the motherboard, in the hard disk interface, or a failure in the memory system, an error message will come, either in the form of an audio beep or a textual message. These are called software tests for testing and diagnosis, because there is a piece of software running that tries to test the various subsystems.

But the problem is that the fault coverage is typically not that high, and you cannot have good diagnosis: you can say that the motherboard is bad, but you cannot pinpoint exactly which chip on the motherboard is bad. And of course, it is time consuming; it takes many seconds to complete the test. But if you do it with BIST, in hardware, the advantage is that you can have much better diagnosis. Why? Diagnosis means locating the source of the fault.

If every chip can test itself, then the faulty chip can tell you that it is bad, and you can replace only that chip. Another advantage is that, because of this, you can have much improved system maintenance and repair capabilities. These are some of the additional advantages.

(Refer Slide Time: 07:03)

This is a little more detailed schematic of the BIST architecture. In the centre you have the circuit that you want to test, the circuit under test. Normally, the circuit takes its input from some primary input lines, call them PI, coming from outside. But in the test mode, there is a pattern generator inside the chip; the input to the circuit then comes from the pattern generator, and there is a multiplexer which selects either the PI or the pattern generator outputs to be fed to the circuit.

Similarly, on the output side, normally the output goes to some primary output lines; but here there is a response compactor circuit, which compresses or compacts the output into a small signature. We will talk about this.

The good signature is stored in a small memory, a ROM, against which you compare; at the end, if it matches you say the chip is good, and if it does not match you say the chip is bad. And there is a finite state machine, you can call it a test controller, that controls the operation of the multiplexer, the pattern generator, the response compactor, everything. It is activated whenever the test control pin is asserted. This is how the whole thing works.

But the point to note is that there are some drawbacks. Some paths in the circuit you cannot test, like the path from the primary input pins to the input multiplexer: you are testing with patterns from the pattern generator, so if there is a fault on the path from the primary inputs, you are not able to test it. Similarly, on the output side, the circuit output is compacted in the response compactor, but any fault on the path to the primary output pins cannot be tested. These are some of the drawbacks here.

(Refer Slide Time: 09:34)

Now, with respect to test pattern generation inside the chip, the standard technique we follow is some kind of random pattern generation. We talked about test pattern generation earlier: we can spend a lot of time and effort to generate an optimum set of test patterns, but when you want to generate these test patterns automatically in hardware, how will you do it? If the test patterns have no relationship between them, then the only way is to store them in some memory and apply them from the memory one by one; but the trouble is that the overhead of the memory can become high.

Let us say we need 1000 test patterns: you need a memory large enough to store 1000 patterns, and on the output side you will also require the 1000 expected circuit responses. So, your hardware requirement will be large. The standard way, therefore, is to use random patterns; we call them pseudo-random patterns because these random patterns can be repeatedly generated. And how do we generate them? We already discussed a shift register structure called the linear feedback shift register, which can generate very good random patterns. So, using an LFSR you can generate these random patterns.

And you know what is meant by fault coverage. In BIST, I may require, say, 95 percent fault coverage, but I cannot predict beforehand how many random patterns are required to achieve 95 percent coverage. This has to be determined a priori through fault simulation: you generate the patterns from the LFSR, carry out fault simulation, and find out how many faults are getting detected.

As soon as it reaches 95 percent, you know how many patterns need to be generated, and that way you can configure the test generation inside the chip.

There is one consequence of LFSR-based test generation: the test length may be much larger. You may have required only 10 deterministic test patterns, but the LFSR generates patterns in some random order.

So, to achieve 95 percent coverage, you may need to generate, let us say, 200 test patterns. The number of test patterns required may be larger, but you really do not care, because you are generating tests at a very high frequency; it will hardly take a fraction of a second.

Test generation is much faster this way; this is normally how it is done. Sometimes you can also combine random patterns with automated test pattern generation based testing, but here we are not discussing this.

(Refer Slide Time: 12:53)

This is the typical behaviour of a circuit. As you increase the number of random test vectors and calculate the fault coverage through fault simulation, you will find that the coverage increases like this: very rapidly at first, but then it slowly levels off. Only in very rare cases will it reach 100 percent; normally it levels off in the range of, say, 80, 90 or 95 percent. So, you can fix some acceptable level, see how many test patterns are required to reach that acceptable level, and run the LFSR for that many clock cycles. This is how you proceed.
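
As an illustration of this procedure, suppose fault simulation has produced the coverage points below (the numbers are made up for the sketch); the required test length is then the first point at or above the target:

```python
# Pick the smallest test length whose simulated fault coverage reaches the
# acceptable level. The (pattern count, coverage %) points are hypothetical,
# standing in for real fault-simulation results.
def patterns_needed(curve, target):
    for n, coverage in curve:
        if coverage >= target:
            return n
    return None   # target never reached with this LFSR sequence

curve = [(50, 62.0), (100, 81.5), (200, 90.2), (400, 95.1), (800, 96.0)]
print(patterns_needed(curve, 95.0))   # run the LFSR for this many patterns
```

Note the diminishing returns in the curve: doubling the pattern count from 400 to 800 buys less than one percentage point, which is exactly the levelling-off behaviour described above.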

(Refer Slide Time: 13:46)

Now, coming back to the linear feedback shift register, which we talked about very briefly earlier, let us have another look at it. A linear feedback shift register is a simple hardware circuit based on a shift register. Recall that there is a feedback circuit consisting of exclusive-OR gates, and earlier we said that exclusive-OR is a linear function; that is why we call it a linear feedback shift register. It has been found that an LFSR can generate very good pseudo-random patterns.

Talking about applications: LFSRs are used not only for testing but in many other places. They are used for error checking with cyclic redundancy checks; later on, for response compression or compaction, we shall see that again an LFSR is used; and in data communication applications where errors can occur, you can again use an LFSR for error detection, and so on. So, there are many applications, but for the time being we are interested in test pattern generation.

(Refer Slide Time: 15:06)

An LFSR looks like this. There can be two types of configuration; earlier we looked at only the first kind. The plus symbol is actually an exclusive-OR gate, shown like this: it is a two-input XOR gate, with one input coming from a tapping point, the other input coming from the output of D4, and the output of the XOR gate feeding the input of the first flip-flop.

So, there are 4 D-type flip-flops connected as a shift register, and we use a linear feedback circuit built from exclusive-OR gates, taking the feedback connections from some tapping points. This is called a type-1 LFSR.

You can also have another kind of LFSR design, where the exclusive-OR gates are not outside in the feedback path but inside the forward shift register path. For example, an exclusive-OR can be connected with one input coming from the output of D3 and the other input from the output of D4, with its output going to the input of D4. You can place such XOR gates at various stages, depending on where you want the tapping points. This is type 2.

We shall see later that the type-2 LFSR is more suitable for response compaction, but for test generation purposes we normally use the type-1 LFSR.

(Refer Slide Time: 16:52)

Let us take an example of pattern generation using this kind of type-1 LFSR. I mentioned it briefly earlier, but let me tell you again that the behaviour of an LFSR depends on the points from which you take the feedback connections. Now, regard each flip-flop output as corresponding to a term of a polynomial: x^4, x^3, x^2 and x, with the constant term x^0 = 1. Then look at where you are taking the feedback from: here, you are taking feedback from x^4 and x.

So, we define something called the characteristic polynomial of the LFSR. It contains the terms corresponding to the tapping points, x^4 and x, and because the output of the XOR feeds back into the register, the x^0 = 1 term is always present. So, here the characteristic polynomial of the LFSR is x^4 + x + 1.

Now, let us assume we initialise this LFSR with 1 0 0 0. Note one point: if you initialise it with all 0s, it will remain in the all-0 state, because the exclusive-OR of 0 and 0 is 0, and 0 will be fed back; it will never come out of the all-0 state. So, let us seed it with 1 0 0 0.

I show it like this. At first the state is 1 0 0 0, and clocks are applied one by one. What happens in the next state? The bits 1 and 0 are fed to the XOR, so its output is 1; in the next clock cycle this 1 is fed back and everything else is shifted, so the next state is 0 0 0 1. Continuing like this, if you just check, you will see that it generates 15 different patterns, and after the 15th pattern the state 1 0 0 0 comes back again. And as we said, if the all-0 pattern is applied, the LFSR will never come out of it.

So, in 4 bits there are 16 possible patterns, of which 15 are the remaining non-zero patterns. Now,
for this example, if you start from any non-zero state like 1 0 0 0, the LFSR will
go through all 15 possible states, and in an apparently random order. If you look at the decimal
equivalents you will have an idea: 8, 1, 3, 7, 15, 14, 13, 10, 5, 11, 6, 12, 9, 2, 4, and
then 8 again. These are apparently random.

So, you see this LFSR is able to generate all non zero patterns in some random order. This is
one very good property of a LFSR that it can generate all the patterns for a given size if you
choose a characteristic polynomial in a proper way. So, we will have some discussion on this.
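The state sequence above can be reproduced with a short simulation. This is my own minimal
sketch (Python, not part of the lecture), with the convention that bit i of an integer state
holds the output of the stage for x^(i+1), for a type 1 (external-XOR) LFSR with characteristic
polynomial x^4 + x + 1:

```python
def lfsr4_next(state):
    """One clock of a type 1 LFSR for x^4 + x + 1.

    Feedback is the XOR of the x^4 stage (bit 3) and the x stage (bit 0);
    it is shifted in while everything else shifts along.
    """
    fb = ((state >> 3) & 1) ^ (state & 1)
    return ((state << 1) | fb) & 0xF

# Run from the seed 1 0 0 0 (decimal 8) until the state repeats.
seq, s = [], 8
while True:
    seq.append(s)
    s = lfsr4_next(s)
    if s == 8:
        break

print(seq)  # [8, 1, 3, 7, 15, 14, 13, 10, 5, 11, 6, 12, 9, 2, 4]
```

The loop visits all 15 non-zero states exactly once before returning to the seed, which is the
maximum-length behaviour discussed in the lecture.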

(Refer Slide Time: 21:02)

So, let us say we have an n-stage LFSR which generates all 2^n − 1 non-zero patterns, like in the
example I took earlier. We call the sequence that the LFSR generates a maximum length sequence —
in short, an m-sequence — and the characteristic polynomial of an LFSR
that generates an m-sequence is called a primitive polynomial.

So, from the point of view of test generation we are more interested in primitive polynomials
because they can generate all the patterns and we need as many patterns as we want.

So, here there is a characterization of a primitive polynomial. A polynomial which cannot be
factored — which does not have any factors — is called irreducible, and a primitive
polynomial is a special type of irreducible polynomial. I am not going into the detail of this,
because if you want such primitive polynomials you can refer to books, where you will find that
a big list is given.

(Refer Slide Time: 22:26)

Here, I am showing a list of primitive polynomials up to n = 64, but in the books you will
see lists going up to several thousand. The idea is that if we use a polynomial
like this — which means, ignoring the 1, I need an XOR gate with 4 inputs in the feedback —
then it is guaranteed that I will be generating that many unique
non-zero pseudo-random patterns. These are all primitive polynomials.

So, I do not have to calculate primitive polynomials; a list has already been prepared by someone,
and we can simply take one from that list.
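To see why the choice of polynomial matters, you can check the period of the generated sequence
directly. A sketch of my own (the tap sets below are illustrations, not from the slide): the
primitive polynomial x^4 + x + 1 gives the full period 15, while x^4 + x^3 + x^2 + x + 1, which
is irreducible but not primitive, gives a period of only 5.

```python
def lfsr_period(taps, n, seed=1):
    """Period of a type 1 (external-XOR) n-bit LFSR.

    `taps` lists the exponents of the feedback terms of the
    characteristic polynomial; stage x^k sits in bit k-1 of the state.
    """
    def step(s):
        fb = 0
        for k in taps:
            fb ^= (s >> (k - 1)) & 1
        return ((s << 1) | fb) & ((1 << n) - 1)

    s, count = step(seed), 1
    while s != seed:
        s, count = step(s), count + 1
    return count

print(lfsr_period({4, 1}, 4))        # 15: x^4 + x + 1 is primitive
print(lfsr_period({4, 3, 2, 1}, 4))  # 5: irreducible but not primitive
```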

So, you see, if I have a 64-input circuit, I can take a 64-bit LFSR. And through fault simulation
I can find out how many patterns I need to generate to reach a certain fault coverage, because
2^64 is a really huge number; we would never be required to apply all possible patterns.

(Refer Slide Time: 23:48)

So, there are a few interesting properties. I am not going into all of them, because all of them
may not be interesting in this context. The period of the m-sequence, as I said, is 2^n − 1,
because after 2^n − 1 clocks the pattern repeats itself.

So, starting from any non-zero state, as the truth table showed, the LFSR will go through all
the 2^n − 1 non-zero states before repeating. And in every column of the truth table, if you
check, the number of 1's will differ from the number of 0's by 1 — the number of 1's
will be more by 1, just because the all-0 pattern is not there, which is why the number of 0's
is 1 less. The next two points are not so important in the present context, so let us not
discuss them right now.

(Refer Slide Time: 24:50)

So, the randomness property is something which is important in our context. The maximum
length sequences that are generated are called pseudo-random sequences. You see, there
are some standard tests for randomness, and it is found that most of these
randomness tests are satisfied very well by the patterns generated by an LFSR. Like,
the autocorrelation of any output bit pattern that is generated is close to 0. But the
cross-correlation between the outputs is poor, because it is a shift register: whatever
pattern is generated by one bit, the same pattern will be generated by the next bit with a
one-bit time shift. This is the only drawback; other
than that, all the other properties are very well satisfied.

So, as I said, in a typical test environment we can generate as many patterns as required. And
there is another advantage. Think of scan-based testing, the scan path which we
discussed in the last lecture: for applying every test pattern we have to serially shift some
pattern into a shift register and then apply the pattern, which means we cannot operate the
circuit at the maximum possible rate at which the circuit is supposed to operate. But here we
can. The test patterns can be generated at the maximum rated frequency of the circuit — let us
say 1 gigahertz, 2 gigahertz, whatever it is. This is called at-speed testing: we are testing
the circuit at the maximum clock rate. This is another advantage, and many of the timing errors
also get detected in this process.

So, with this we come to the end of this lecture where we have discussed how test patterns
can be generated in a built in self test environment. In the next lecture, we shall be looking at
the other part, how these circuit responses can be compacted to take a decision whether the
circuit is good or bad.

Thank you.

Switching Circuits and Logic Design
Prof. Indranil Sengupta
Department of Computer Science and Engineering
Indian Institute of Technology, Kharagpur

Lecture - 60
Built-in Self-Test (Part II)

So, we continue with our discussion on Built-in Self-Test; in our last lecture we talked about
how we can generate the test patterns inside the chip. Now, in this second part of the lecture
on built-in self-test, we shall be talking about the so-called response compaction part.

(Refer Slide Time: 00:36)

So, if you look into this picture, we have already seen how, using an LFSR, you can generate the
test patterns — the so-called pseudo-random test patterns. Now, let us say we are applying
10^4, or 10,000, such patterns. So, at the output of the circuit we will be getting 10,000 circuit
output responses. What do we do with these responses? How do we take a decision whether
the circuit is good or bad? This is the question we are trying to answer now.

(Refer Slide Time: 01:19)

Let us see. The first thing to note is that we are required to do something called a compaction
operation. Why is it required? Let us take an example. Just as I was saying, the circuit
output can generate a huge volume of data. Suppose, as a realistic figure, we are applying 5 million
random patterns, and the circuit has 200 outputs, which is pretty common in modern day
circuits. So, how many bits of data are generated? The number of test patterns multiplied by the
number of outputs; so, it will be 5 million multiplied by 200, which translates to 1 billion bits.
So, what do we do? Do we store all this 1 billion bits in memory and, for every pattern, compare
the output generated? This means we would require a memory of around a gigabit just to store
the circuit responses; obviously, this is impractical.

So, we have to do something else. It is really uneconomical and impractical to store all
these responses on chip. So, we have to do some kind of compaction before we compare the
circuit responses. This motivates why we need compaction.
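The arithmetic behind this motivation is a one-liner; a quick sketch with the figures from
above:

```python
patterns = 5_000_000   # random test patterns applied
outputs = 200          # circuit outputs observed per pattern
bits = patterns * outputs
print(bits)            # 1_000_000_000 bits: about a gigabit of responses to store
```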

(Refer Slide Time: 03:00)

Now, the point to note is that we have huge volumes of data — we said the circuit generates a
huge volume of data — and then we have a compaction operation, let us say C. The compaction
operation takes this huge volume of data and generates a small compacted response, and this
small compacted response is referred to as a signature. Because this signature is relatively
much smaller in size, you can store the golden signature — that is, the signature in the
absence of any faults — in a ROM.

So, through fault simulation you can do this experiment beforehand: you calculate the good
signature, you store it in a ROM, and you can compare the computed signature with the signature
stored in the ROM to see whether they are equal or not. But, you see, here we have a many-to-one
mapping; a huge set of data is being compacted into a very small signature — typically, the
signature can be as small as 32 bits. So, there is always the possibility that two or more
input patterns in the set map to the same signature, because of which some of the faults
might go undetected. So, what do we do? What is the probability of that — that a fault is
occurring but it is going undetected? Let us try to make a simple calculation.

(Refer Slide Time: 04:52)

But before that, let us talk about a few terminologies. There is something called aliasing.
Well, aliasing is just the point I have made: there is a fault, but the faulty signature by
accident becomes the same as the good signature.

When the signature in the presence of the fault matches the golden signature, and under such
conditions the fault goes undetected, we say that aliasing has taken place. Well, to reduce the
size of data we broadly use two techniques: one is called compaction, and the other is called
compression. Now, in computers we sometimes compress files; there are commands
like tar, zip, rar — many commands are there. They are used to reduce the size of a file, but
this is a temporary process: whenever we need the file again we can unzip it, we can expand the
file again and get back the original. So, this compression is a reversible process; you can do
compression, and you can again uncompress it.

Compression reduces the number of bits, but there is no information loss; it is fully
invertible. But in compaction you cannot go back. You can drastically reduce the number of
bits — like, you can have a huge 1 gigabit of data and bring it down to as low as 32 bits, let
us say — but there is some loss of information. From these 32 bits, you cannot go back to the
original 10^9 bits; this you need to remember. Now, in circuit testing, in BIST, we go for
compaction; we do not go for compression.

(Refer Slide Time: 07:00)

Now, broadly speaking, this compaction can be done in two ways: one using something called a
signature analyzer, which can compact a single serial bit stream — that means,
assuming the circuit has a single output, a single bit stream is coming; and the second
alternative is something called a multiple input signature register, where the circuit can have
multiple outputs and we can compact all of them together. Let us see how.

(Refer Slide Time: 07:40)

First, let us talk about the signature analyser. How does it work? We can again use an LFSR — a
Linear Feedback Shift Register — here as the response compactor; the same technique is
used for error detection using the cyclic redundancy check (CRC). The basic mathematical
foundation is that we are dividing one polynomial by another polynomial and taking the
remainder; this remainder is what we call the signature. Mathematically this is the
basic concept, but let us see how it is implemented. You see, the circuit output is generating
some data: we are applying input patterns one by one, and the output bits are generated.

Now, these bits are treated as coefficients of a polynomial in decreasing order — we will take an
example — and we have a compaction circuit based on an LFSR. What the compaction circuit will
be doing is dividing this polynomial representing the output bit stream by the
characteristic polynomial of the LFSR. We have seen that an LFSR has a characteristic
polynomial. So, polynomial division will take place, and whatever remainder is left in the LFSR
will be the signature. And one thing to remember: before testing starts, we must
initialise the LFSR to the all-0 pattern. Let us see how it works.

(Refer Slide Time: 09:37)

Let us take an example. This is a 5-bit LFSR which implements a polynomial like this:
x^5 + x^3 + x + 1. So, the feedback connections come from the corresponding stages. Well, you
can actually name them differently, but in terms of the coefficients of the polynomial you can
call the stages x^5, x^4, x^3, x^2, x, and this last one x^0 or 1.

So, feedback is taken from x^5, x^3 and x — you see, these terms are there — and we have
used a type 2 LFSR, where the XOR gates are in between the D flip flops. And the
circuit is generating an input bit stream, say 0 1 0 1 0 0 0 1. So, for the input polynomial,
if you number the bits 0, 1, 2, ..., 7 from the output side, the 1's are in positions 7, 3
and 1; that is why it is x^7 + x^3 + x. So, any bit pattern can be represented by a polynomial.
The input polynomial is this, and the characteristic polynomial of the LFSR is that.

So, I am saying that there will be a division of the input polynomial by the characteristic
polynomial; this will happen automatically in the hardware. And here, if you just work it out
and take the remainder, the remainder comes to x^3 + x^2 + 1, which in terms of a bit pattern —
x^3, x^2, no x, and 1 — is 1 1 0 1. This should be the remainder.

(Refer Slide Time: 12:05)

Now, let us just work it out: initialise the LFSR to the all-0 pattern, simulating the process of
division, with the input bits coming in like this. I leave it as an exercise for you to verify
it step by step; this corresponds to the input polynomial, and as the
input bits come in one by one, the LFSR runs starting from the all-0 state. You will see that
whatever remains at the end is nothing but 1 1 0 1 — the polynomial I
mentioned, x^3 + x^2 + 1.

Whatever final value remains in the LFSR will be the remainder polynomial; this is
how the polynomial division takes place. So, you need not know much more
detail about this; just remember that it works.
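The division can also be checked outside the hardware. A small sketch of my own (not from the
lecture) that divides the input polynomial x^7 + x^3 + x by the characteristic polynomial
x^5 + x^3 + x + 1 over GF(2), with bit i of an integer holding the coefficient of x^i:

```python
def gf2_remainder(dividend, divisor):
    """Remainder of GF(2) polynomial division (XOR-based long division)."""
    while dividend.bit_length() >= divisor.bit_length():
        shift = dividend.bit_length() - divisor.bit_length()
        dividend ^= divisor << shift   # subtract (= XOR) the shifted divisor
    return dividend

p = 0b10001010   # x^7 + x^3 + x  (the output bit stream 0 1 0 1 0 0 0 1)
g = 0b101011     # x^5 + x^3 + x + 1  (the characteristic polynomial)
sig = gf2_remainder(p, g)
print(bin(sig))  # 0b1101, i.e. x^3 + x^2 + 1 — the signature
```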

(Refer Slide Time: 12:58)

Talking about the probability of aliasing — that is, what is the chance that some error will
occur but we will not be able to detect it — let us see. Suppose the number of bits in the input
stream is m, and the LFSR is an n-bit LFSR. So, at the input we have an m-bit stream, which
naturally can take 2^m combinations, and finally in the LFSR we are compressing it to an n-bit
signature, so there can be 2^n possible signatures. Out of the 2^m input combinations one is
good, and similarly, out of the 2^n possible signatures one is good. So, there are 2^m − 1
faulty input streams and 2^n − 1 faulty signatures.

Now, assuming that the input patterns — this much larger set — are uniformly distributed
among the signatures, how many input patterns will map to each signature? 2^m / 2^n, which
comes to 2^(m−n); so 2^(m−n) patterns map to a particular signature in the LFSR. Just
remember this.

(Refer Slide Time: 14:51)

So, the probability of aliasing, as I mentioned, is defined as the ratio of the number of faulty
bit streams that map to the golden signature to the total number of faulty bit streams.

Now, we have already seen that 2^(m−n) input patterns map to every signature, in particular to
the golden signature, and out of these 2^(m−n) patterns one of them is the good one. So,
2^(m−n) − 1 faulty bit streams map to the golden signature, and these will lead to aliasing.
The probability of aliasing will be this number of cases divided by the total number of faulty
bit streams, which is 2^m − 1 (out of the 2^m streams, one is good).

So, the probability is (2^(m−n) − 1) / (2^m − 1), and if m is very large this approximates to
1/2^n, independent of m. You see, even if you choose n = 32, 1/2^32 is so small that the
probability is of the order of 10^−10. So, well, yes, there can be aliasing, but the
probability of aliasing is very, very small. Because of this, people do use this technique, and
a signature of size 32 or 64 is considered to be good enough.
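The approximation in this argument is easy to check numerically. A minimal sketch (the values
of m and n below are my own choices for illustration):

```python
from fractions import Fraction

def aliasing_probability(m, n):
    """Exact P(aliasing) = (2^(m-n) - 1) / (2^m - 1) for an m-bit
    stream compacted into an n-bit LFSR signature."""
    return Fraction(2**(m - n) - 1, 2**m - 1)

p = aliasing_probability(m=1000, n=32)
print(float(p))                # about 2.33e-10, essentially 2^-32
print(p < Fraction(1, 2**32))  # True: exactly below the 1/2^n approximation
```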

(Refer Slide Time: 16:44)

Now, talking about the multiple input signature register, let us discuss it briefly without
going into the detail. A circuit in general can have multiple outputs. So, when you apply a
sequence of test patterns, bits will be generated from all of these outputs as parallel streams.

So, suppose you use a separate LFSR-based compactor for every output bit; this will require too
much hardware — if there are 100 outputs, we will be needing 100 LFSRs. The solution is
to use a multiple input signature register, called MISR in short, where you use a single
LFSR; how, I will show you. You need not remember all the details; because the LFSR is linear,
you can compact all the output bit streams in a single LFSR, and the final response will
effectively be the same as if you XORed these (suitably shifted) responses together and
compacted the result. Let us see how it works and how the circuit will look.
How the circuit will look like?

(Refer Slide Time: 18:01)

Well, a MISR looks like this. You see, this is a type 2 LFSR where I am not only feeding
data into the first XOR, but also adding an additional input to all the other XORs. So, even in
the places where there is no feedback or tapping connection, I am adding an XOR gate, so that
all the circuit outputs can be fed in parallel to these XOR gate inputs. Suppose I am applying
10^6 patterns — 1 million patterns. So, each of the circuit outputs will be generating a 1
million bit stream; they will all be compacted together, and finally a single signature will
remain stored in the register at the end. This is how this circuit works.
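As a rough sketch of this behaviour (the 5-bit register, tap positions and the sample streams
below are my own illustration, not from the slide), the following simulates a MISR in which
every stage's XOR also takes one circuit output, and shows that flipping a single output bit
changes the final signature:

```python
def misr_signature(streams, n, taps):
    """Compact parallel output streams in an n-bit MISR (type 2 style).

    `streams[i]` is the bit sequence seen at circuit output i; `taps`
    marks the stages whose XOR also receives the last-stage feedback.
    """
    state = [0] * n                      # start from the all-0 pattern
    for bits in zip(*streams):           # one clock per output vector
        fb = state[-1]
        new = []
        for i in range(n):
            b = state[i - 1] if i > 0 else 0
            if i in taps:
                b ^= fb                  # internal feedback XOR
            b ^= bits[i]                 # circuit output fed into stage i
            new.append(b)
        state = new
    return state

good = [[0, 1, 0, 1], [1, 1, 0, 0], [0, 0, 1, 1], [1, 0, 1, 0], [1, 1, 1, 1]]
bad = [row[:] for row in good]
bad[2][3] ^= 1                           # a single faulty output bit
sig_good = misr_signature(good, 5, {0, 1, 3})
sig_bad = misr_signature(bad, 5, {0, 1, 3})
print(sig_good != sig_bad)               # True: the fault changes the signature
```

Because the MISR is a linear machine with an invertible state transition, a single-bit
difference in the input streams always leaves a non-zero difference in the final state,
so the two signatures are guaranteed to differ here.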

(Refer Slide Time: 19:07)

So, to summarise: in built-in self-test, the preferred method, as I told you, is to do pattern
generation using LFSR-based random patterns, and on the output side to do compaction using a
multiple input signature register. But, you understand, BIST has a lot of overheads: you need
to have the pattern generator, the response compactor, the ROM to store the golden signature,
and the comparator — this additional hardware also has to go inside the chip. These are the
additional overheads, and they also lead to performance degradation, because now the primary
inputs that you feed to the circuit have to go through a multiplexer. The multiplexer will
have some delay, so the delay of your circuit increases; some degradation in performance also
occurs.

But the advantages of BIST are pretty significant: the test cost gets reduced drastically, you
are able to do field testing and diagnosis — which chip is good or bad — and you can test the
circuit at its full speed. These are some of the very big advantages. So, with this we come to
the end of this lecture and also this course. Over the last 12 weeks we have covered many
topics; most of them are conventional, in the sense that you will find them as part of the
standard curriculum in most courses on switching circuits or digital circuits. But I have also
tried to cover a few topics which are a little unconventional, a little off the beaten track,
which are normally not part of the syllabus, because I just wanted to give you a flavour of
some of the techniques or technologies which can possibly have a greater impact in the future.
So, with this, and wishing you all the best, let me say goodbye to you.

Thank you very much.
