COA Chapter 1


DATA REPRESENTATION

INTRODUCTION

We human beings commonly use the decimal number system, consisting of the digits 0, 1, 2, . . . , 8 and 9. But in computers, instructions as well as data are stored in the binary number system, which consists of the digits 0 and 1. This is so because the binary digits 0 and 1 can easily be represented by an electrical switch: if the switch is closed it can stand for 1, and if it is open it can stand for 0.
The computer is required to store the following types of data.

a) Numbers used in arithmetic computations


b) Alphabets used in symbolic instructions and some data
processing.
c) Special characters or symbols like #,$,%, & etc. used in some
special cases.

Computers internally store all data in binary form, that is, as strings of 0s and 1s. But since long binary numbers are tedious to read and work with, octal and hexadecimal numbers are widely used to compress long strings of binary data. In addition, there are many other binary codes which are used in digital electronics. We will examine all these number systems in this chapter.

NUMBER SYSTEMS

We shall now examine the various number systems. Since the decimal number system is familiar to everyone, it will not be discussed here. We shall also learn how numbers in one number system are converted into another number system.

Conversion from one number system to another is required because we human beings are so accustomed to the decimal number system that working directly in binary seems very difficult for us. The alternative, therefore, is to let users enter data in decimal. The computer is then expected to convert the decimal numbers into the number system it understands for further calculations.

BINARY NUMBER SYSTEM

The binary number system consists of 2 digits, namely 0 and 1. It has a base (or radix) of 2. For example, (1011.1101)2 is a binary number. The subscript 2 at the end of this number indicates the base of the number.

DECIMAL TO BINARY CONVERSION

Although there are multiple ways of converting decimal numbers to binary numbers, the most commonly used is the dibble-dabble method. In this method the decimal number to be converted is repeatedly divided by 2. The remainders obtained, 0s and 1s, are then read in reverse order to obtain the binary equivalent of the decimal number.

Example 1

Convert (1975)10 to binary.

Answer: Repeatedly dividing 1975 by 2 gives the remainders 1, 1, 1, 0, 1, 1, 0, 1, 1, 1, 1 (from the first division to the last). Reading them in reverse order,
(1975)10 = (11110110111)2
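The repeated-division procedure translates directly into a short program. A minimal Python sketch (the function name to_binary is illustrative, not from the text):

```python
def to_binary(n):
    """Convert a non-negative decimal integer to a binary string
    using repeated division by 2 (the dibble-dabble method)."""
    if n == 0:
        return "0"
    remainders = []
    while n > 0:
        remainders.append(str(n % 2))              # remainder of division by 2
        n //= 2
    return "".join(reversed(remainders))           # read remainders in reverse order

print(to_binary(1975))  # -> 11110110111
```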

OCTAL NUMBER SYSTEM

The octal number system is an important number system which is often used in microcomputers. It has a base or radix of 8. It consists of the digits 0, 1, 2, 3, 4, 5, 6 and 7.

OCTAL NUMBER TO BINARY NUMBER CONVERSION


The conversion of an octal number to binary can be accomplished easily by simply converting each digit of the octal number to its 3-bit binary equivalent and then placing the groups side by side.

Example: (4267)8 = 100 010 110 111

4 = 100, 2 = 010, 6 = 110, 7 = 111

Therefore, (4267)8 = (100010110111)2

The conversion from binary to octal can be done similarly by grouping the binary number into groups of 3 bits starting from the rightmost side, and then writing the octal equivalent of each group.

Example: Convert (110011111001)2 to octal.

110 011 111 001
 6   3   7   1

Therefore, (110011111001)2 = (6371)8

HEXADECIMAL NUMBER SYSTEM

The hexadecimal number system is another number system commonly used in microcomputers. It has a radix of 16. The digits are 0 to 9 and A, B, C, D, E and F. The symbols A through F represent the equivalent decimal numbers 10 through 15, respectively. For example, (F3D.C8)16 is a hexadecimal number.

BINARY TO HEXADECIMAL CONVERSION

The bits of the binary number are put into groups of 4 bits, moving outward from the binary point (to the left for the integer part and to the right for the fractional part). Each group is then replaced by the equivalent hexadecimal digit.

Example: Convert (11010101000.111101011100)2 to hexadecimal.

0110 1010 1000 . 1111 0101 1100
 6    10   8     15   5    12

Here 10 is A, 15 is F and 12 is C.
Therefore, (11010101000.111101011100)2 = (6A8.F5C)16

HEXADECIMAL TO BINARY CONVERSION

To convert a hexadecimal number to binary, convert each hexadecimal digit into its 4-bit binary equivalent. Zeros are added on the left or right, if necessary, to complete a group of 4 bits. For example, convert (3D.57)16 into binary.

 3    D   .  5    7
0011 1101   0101 0111

Thus, (3D.57)16 = (00111101.01010111)2
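Because each octal digit corresponds to exactly 3 bits and each hexadecimal digit to exactly 4 bits, both conversions are simple digit-by-digit lookups. A minimal sketch mirroring the grouping method above (function names are illustrative):

```python
def octal_to_binary(octal_digits):
    """Replace every octal digit by its 3-bit binary equivalent."""
    return " ".join(format(int(d, 8), "03b") for d in octal_digits)

def hex_to_binary(hex_digits):
    """Replace every hexadecimal digit by its 4-bit binary equivalent."""
    return " ".join(format(int(d, 16), "04b") for d in hex_digits)

print(octal_to_binary("4267"))  # -> 100 010 110 111
print(hex_to_binary("3D"))      # -> 0011 1101
```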

BINARY-CODED DECIMAL (BCD) NUMBER SYSTEM

In the BCD number system, each decimal digit is represented separately by its binary equivalent. The most common method is to represent each digit by its 4-bit binary equivalent. For example, the digit 5 is equivalent to the binary number 0101 and the digit 9 is equivalent to 1001. Thus the decimal number 59 is represented as 0101 1001 in BCD. Similarly, the decimal number 95 is represented as 1001 0101.
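A minimal sketch of this digit-by-digit encoding, assuming the common 8421 code described above (the function name is illustrative):

```python
def to_bcd(decimal_string):
    """Encode each decimal digit as its 4-bit binary equivalent (8421 BCD)."""
    return " ".join(format(int(d), "04b") for d in decimal_string)

print(to_bcd("59"))  # -> 0101 1001
print(to_bcd("95"))  # -> 1001 0101
```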

REPRESENTATION OF NEGATIVE NUMBERS

The systems that have been used in digital computers for representing negative numbers are the following:
• Sign magnitude
• 1's complement
• 2's complement

SIGN-MAGNITUDE REPRESENTATION
In sign magnitude representation, the first bit is reserved for the sign, 0
for positive and 1 for negative, and the remaining bits give the binary
representation of the magnitude of the number. For instance, 24 and -24
would be stored in 8-bits as follows:
24: 00011000 -24: 10011000

Sign bit

The general case can be expressed as follows:

A = +Σ(i=0 to n-2) 2^i a_i    if a_(n-1) = 0        (sign magnitude)
A = -Σ(i=0 to n-2) 2^i a_i    if a_(n-1) = 1

Consider an n-bit word. There are 2^n possible different values, ranging from 0 to 2^n - 1. If one bit is used for the sign, then the remaining n-1 bits represent the magnitude. Thus, the range of a sign-magnitude number in n bits is from -(2^(n-1) - 1) to +(2^(n-1) - 1). For instance, a 3-bit word can represent the unsigned numbers 0, 1, 2, . . . , 7, while in sign magnitude its range is from -(2^2 - 1) to +(2^2 - 1), that is -3, -2, -1, 0, 1, 2, 3.

There are several drawbacks to sign-magnitude representation. One is that addition and subtraction require a consideration of both the signs of the numbers and their relative magnitudes to carry out the required operation. Another drawback is that there are two representations of 0:
+0 = 00000
-0 = 10000   (sign magnitude)
This is inconvenient, because it is slightly more difficult to test for 0 (an operation performed frequently on computers) than if there were a single representation.

1’s complement
The one's complement of a binary number is obtained by changing each 0 to 1 and each 1 to 0, including the sign bit. For instance, the 1's complement of 0110 is 1001 and that of 1111 is 0000. The 1's complement is basically used to represent negative numbers. For instance, the representation of the decimal -4 in 1's complement in 4 bits can be constructed as follows, with the leftmost bit as the sign bit.

Binary representation of 4:  0100
-4 in 1's complement:        1011

Since +0 = -0, there are two one's complement representations for zero: 00 . . . 0 and 11 . . . 1.

2’s Complement

The 2’s complement of a binary number is obtained by adding 1 to the


1’s complement of the number. For instance, the 2’s complement of
0110 is 1010 and that of 1111 is 0001.
Like 1’s complement, 2’s complement is basically used to represent
negative numbers. For instance, the representation of the decimal -4 in
2’s complement in 4 bits can be constructed as follows:

Binary representation of 4: 0100


-4 in 1’s complement: 1011
+ 1
-4 in 2’s complement: 1100

Unlike both sign-magnitude and 1’s complement, the 2’s complement of


plus zero is also plus zero (+0).
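A small sketch comparing the three representations for a fixed word length; the helper names are illustrative and Python's format() is used only to display the bit patterns:

```python
def sign_magnitude(value, bits=8):
    """Sign bit followed by the binary magnitude."""
    sign = "1" if value < 0 else "0"
    return sign + format(abs(value), f"0{bits - 1}b")

def ones_complement(value, bits=8):
    """For negatives, invert every bit of the positive representation."""
    if value >= 0:
        return format(value, f"0{bits}b")
    return "".join("1" if b == "0" else "0" for b in format(-value, f"0{bits}b"))

def twos_complement(value, bits=8):
    """One's complement plus one; equivalently value mod 2**bits."""
    return format(value % (1 << bits), f"0{bits}b")

for v in (24, -24, -4):
    print(v, sign_magnitude(v), ones_complement(v), twos_complement(v))
```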

SUBTRACTION BY USE OF COMPLEMENTS

Complements are mainly used for representing negative numbers and for performing subtraction. With complement arithmetic only one procedure, addition, is needed, since one number can be subtracted from another by adding its complement. Assume n-bit numbers. To subtract any number, positive or negative, substitute the required complement for the number to be subtracted and then add. If the result is:
a) An (n+1)-bit number, and the arithmetic is in
   i) 1's complement: the (n+1)th bit, a carry, is added to the rightmost bit of the result (end-around carry).
   ii) 2's complement: discard the (n+1)th bit.
b) An n-bit number (the sign bit is 1, so the result is negative), and the arithmetic is in
   i) 1's complement: to read the binary value, calculate the 1's complement of the magnitude bits and place a minus sign in front of it.
   ii) 2's complement: to read the binary value, calculate the 2's complement of the magnitude bits and place a minus sign in front of it.
EXAMPLE: Perform the following computations in 1's and 2's complement in 5 bits.
a) 12 - 6    b) 6 - 12

Solution:
Binary representation of 12 = 01100 and of 6 = 00110
1's complement of -12 = 10011 and of -6 = 11001
Adding 1 gives the 2's complement of -12 = 10100 and of -6 = 11010

       Decimal    1's complement        2's complement
a)       12          01100                 01100
         -6        + 11001               + 11010
          6         100101                100110
                 (add the end carry:    (discard the carry:
                  00101 + 1 = 00110)     00110)

b)        6          00110                 00110
        -12        + 10011               + 10100
         -6          11001                 11010
                 (negative: 1's comp.   (negative: 2's comp.
                  of 11001 = 00110,      of 11010 = 00110,
                  i.e. -6)               i.e. -6)
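The procedure in this example can be carried out mechanically. The sketch below, with illustrative helper names, works in 5-bit words: it adds the complement of the subtrahend, applies the end-around carry for 1's complement or discards the carry for 2's complement, and interprets a result whose sign bit is 1 as negative:

```python
BITS = 5

def ones_comp(x):
    return x ^ ((1 << BITS) - 1)               # flip all n bits

def subtract_ones_complement(a, b):
    """a - b by adding the 1's complement of b."""
    s = a + ones_comp(b)
    if s >> BITS:                              # (n+1)th bit set: end-around carry
        return (s & ((1 << BITS) - 1)) + 1
    return -ones_comp(s)                       # negative: complement and negate

def subtract_twos_complement(a, b):
    """a - b by adding the 2's complement of b (discard any carry)."""
    s = (a + ((ones_comp(b) + 1) & ((1 << BITS) - 1))) & ((1 << BITS) - 1)
    return s if s < (1 << (BITS - 1)) else s - (1 << BITS)

print(subtract_ones_complement(12, 6), subtract_twos_complement(12, 6))  # 6 6
print(subtract_ones_complement(6, 12), subtract_twos_complement(6, 12))  # -6 -6
```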

5.6 Floating-point Numbers (Real Numbers)


The two basic methods of binary number representation in
computers are fixed-point and floating-point representations.
Signed integer numbers are referred to as fixed-point numbers.
The representation of integer numbers was detailed in section
5.5. This section deals with the representation of real numbers.

Before looking at floating-point numbers and their representation in more detail, it is worthwhile comparing fixed-point and floating-point numbers in terms of speed, range and accuracy. Due to the extra work involved in calculating with floating-point numbers, computers perform fixed-point arithmetic faster than floating-point arithmetic. Fixed-point representation limits the range of numbers that can be represented, whereas floating-point representation offers a much greater range, usually at the cost of accuracy.
5.6.1 Floating-Point Number Representation
A floating-point number, or real number, has two parts: an integer part and a fractional part. 234.56, -123.431, 0.0025, etc. are examples of real numbers. These numbers can also be represented using scientific notation. Using this notation, a number can be expressed as a combination of an exponent and a mantissa (fractional part). For instance, the number 234.56 can be written as 2.3456 x 10^2 or 0.23456 x 10^3. The number 0.0025 can be written as 0.25 x 10^-2.

A floating-point number, like a fixed-point number, is a sequence of contiguous bits in the memory of the computer. But it is interpreted by the computer as having two distinct parts, an exponent part and a fractional part (mantissa). In most computers, floating-point numbers are required to be normalized, i.e. kept in the scientific form, which means that the first digit to the right of the binary point must be non-zero unless the mantissa is identically zero.

For instance, the decimal -46.5 can be represented as -0.465 x 10^2 in normalized exponential notation. A similar representation can be used for binary numbers. For instance, the binary number -22.625, i.e. (-10110.101)2, can be written as -0.10110101 x 2^5 in binary normalized exponential notation. More examples are given in Table 5.8.

Binary        Normalized As       Mantissa     Exponent

1101.10       0.110110 x 2^4      0.110110      4
0.001101      0.1101 x 2^-2       0.1101       -2
-1101         -0.1101 x 2^4       -0.1101       4

Table 5.8 Examples of Floating-Point Representation.

Thus, each non-zero binary number can be normalized in this way, yielding a unique sign, an integer representing the exponent, and a unique mantissa, stored in fields as shown below.

Sign | Exponent | Mantissa

The exponent is stored as a biased exponent, which is expressed as: biased exponent = true exponent + excess 2^(n-1), where n is the number of bits representing the exponent. The exponent expressed in this form is also called the characteristic of the true exponent. Table 5.9 shows the relationship between the true exponent and its characteristic for n = 3. Observe that a 3-bit exponent can represent true exponents ranging from -3 to 3, allowing us to store floating-point numbers with exponents between 2^-3 and 2^3.


True Exponent    -3   -2   -1    0    1    2    3
Characteristic    1    2    3    4    5    6    7

Table 5.9 Relationship between true exponent and its


characteristic.
Let us now consider some examples.
Example: Convert the following to normalized binary floating-point form using a 7-bit exponent and a 16-bit mantissa.
a) A = 9.5
b) B = -18.375
Solution: a) 9 = (1001)2 and 0.5 = (0.1)2, so 9.5 = (1001.1)2 = 0.10011 x 2^4

Hence, the normalized exponential form of A is 0.10011 x 2^4

Characteristic = true exponent + excess 2^(7-1)
               = 4 + 2^6
               = 4 + 64 = 68 = (1000100)2

Thus, 9.5 is represented as follows:
0 | 1000100 | 1001100000000000
    7 bits     16 bits
b) 18 = (10010)2 and 0.375 = (0.011)2, so 18.375 = (10010.011)2 = 0.10010011 x 2^5
Hence, the normalized exponential form of B is -0.10010011 x 2^5
Characteristic = true exponent + excess 2^(7-1)
               = 5 + 64 = 69 = (1000101)2
Thus, the representation of -18.375 is:
1 | 1000101 | 1001001100000000
    7 bits     16 bits

NOTE: if the mantissa has fewer digits than the required bit positions,
zeros are added to the end of the mantissa. Likewise, zeros are added to
the beginning of the characteristic to fill out the required bit position if
the characteristic has fewer digits.
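The packing described in this example can be sketched in a few lines. The format assumed below is exactly the one used above — a sign bit, a 7-bit characteristic in excess-64, and a 16-bit mantissa with the binary point at its left — the mantissa is simply truncated, and the function name encode is illustrative:

```python
def encode(value, exp_bits=7, man_bits=16):
    """Normalize value as 0.1xxx * 2**e and pack sign, characteristic, mantissa."""
    sign = "1" if value < 0 else "0"
    mag, exp = abs(value), 0
    while mag >= 1:                    # shift right until the number is < 1
        mag /= 2
        exp += 1
    while 0 < mag < 0.5:               # shift left until the leading fraction bit is 1
        mag *= 2
        exp -= 1
    characteristic = exp + 2 ** (exp_bits - 1)     # excess-64 for a 7-bit exponent
    mantissa = int(mag * 2 ** man_bits)            # truncate to man_bits bits
    return sign, format(characteristic, f"0{exp_bits}b"), format(mantissa, f"0{man_bits}b")

print(encode(9.5))      # ('0', '1000100', '1001100000000000')
print(encode(-18.375))  # ('1', '1000101', '1001001100000000')
```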
5.6.2 Floating-Point Arithmetic
To perform a floating-point addition (or subtraction) the following steps must be carried out:
1. Line up the binary points by making the exponents equal.
2. Add or subtract the mantissas.
3. Normalize the result, if necessary.

Example: Using a 10-bit mantissa, compute a) 10.1 + 12.5 and b) 157.3 - 12.6.
Solution: a) 10 = (1010)2
0.1 = (0.0001100110 . . .)2
so 10.1 = 0.1010000110 x 2^4 when the mantissa is truncated to 10 bits.
Converting 12.5 to normalized form yields:
12.5 = 0.1100100000 x 2^4
Now, add:  10.1 = 0.1010000110 x 2^4
           12.5 = 0.1100100000 x 2^4
           22.6   1.0110100110 x 2^4

Normalizing the result yields 0.1011010011 x 2^5, which is approximately 22.59; the small error comes from truncating the mantissas to 10 bits.
b) 157.3 = 0.1001110101 x 2^8
   12.6 = 0.1100100110 x 2^4

Since the exponents are not equal, we align the number with the smaller exponent to the larger one. Thus, 12.6 becomes 0.0000110010 x 2^8.
157.3 =  0.1001110101 x 2^8
-12.6 = -0.0000110010 x 2^8
144.7 =  0.1001000011 x 2^8

Thus, the result is 0.1001000011 x 2^8 (about 144.75; again the small discrepancy from 144.7 is due to the 10-bit truncation of the mantissas).
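The three steps can be sketched by holding each number as a pair (mantissa, exponent), with the mantissa kept as a 10-bit integer standing for 0.xxxxxxxxxx as in the worked example. This is only a sketch of the addition case, with illustrative names:

```python
MAN_BITS = 10

def to_float(value):
    """Represent a positive value as (mantissa, exponent) with mantissa = 0.1xxx."""
    exp = 0
    while value >= 1:
        value, exp = value / 2, exp + 1
    while 0 < value < 0.5:
        value, exp = value * 2, exp - 1
    return int(value * 2 ** MAN_BITS), exp          # truncate to 10 bits

def add(a, b):
    (ma, ea), (mb, eb) = a, b
    if ea < eb:                                     # step 1: align exponents
        (ma, ea), (mb, eb) = (mb, eb), (ma, ea)
    mb >>= (ea - eb)                                # shift the smaller number right
    m, e = ma + mb, ea                              # step 2: add mantissas
    while m >= 2 ** MAN_BITS:                       # step 3: normalize
        m, e = m >> 1, e + 1
    return m, e

m, e = add(to_float(10.1), to_float(12.5))
print(format(m, "b"), e, m / 2 ** MAN_BITS * 2 ** e)   # 1011010011, 5, ~22.59
```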

1.3 DIGITAL COMPONENTS

Computer hardware consists of several electronic parts that are available as separate modules or units. These are assembled and connected together to form the machine we call a computer, which computes at electronic speed. These building blocks, or electronic components, are explained in the following sub-sections. The main building blocks are the following:
a) Logic Gates
b) Decoders and encoders
c) Multiplexers and demultiplexers
There are two new terms that we shall be using to understand the
different building blocks of a computer. These terms are Boolean
Variable and Truth table. These are explained as follows.

1.3.1 Boolean Variable


In algebra, we define a variable as one that can take different values at different instants of time. However, a Boolean variable is a variable that is capable of taking only two states or values. These states are represented by 1 or 0 (True or False).

1.3.2 Truth Table


A truth table is a tabular representation of a Boolean expression, showing the result for every possible combination of its inputs. Each input Boolean variable is in one of the two states 0 or 1 (FALSE or TRUE). Each combination of input states results in an output value of 0 or 1 for the Boolean expression.

The study of truth tables is very useful in the simplification of Boolean expressions and in the design of electronic circuits.

1.3.3 Logic Gates
Logic gates are blocks of hardware that produce a logic-1 or logic-0 output signal depending on the input signals fed to the logic gate.

Logical Operator OR

We can operate on two Boolean variables using the logical OR operator. Thus, A OR B conveys a logical meaning. The result of an OR operation is logic-1 if at least one of the inputs is in the logic-1 state. When both inputs are 0, the output is 0.

Table 1.3 Truth table for the OR logical operator

A   B   Result (Z) = A OR B
0   0   0
0   1   1
1   0   1
1   1   1

Table 1.4 is the truth table for three Boolean variables A, B, C. The last column gives the output state. Output Z is 0 (zero) only when A, B and C are all in state zero. Otherwise Z is in state 1.
Table 1.4 truth table for OR logical operator (with three variables)
A B C Z=A+B+C
0 0 0 0
0 0 1 1
0 1 0 1
0 1 1 1
1 0 0 1
1 0 1 1
1 1 0 1
1 1 1 1

OR Gate
We can use a symbol to represent an OR operator. Such a symbol is
named as OR gate. Symbolically, an OR gate in electronics is shown in
Figure 1.2. Input variables are connected to the left of the OR gate.
Output value is available at the right hand side. This OR gate will follow
the truth table given in Table 1.4 for A,B and C.

Logical Operator AND


The result of an AND operation is logic-1 only if all the inputs are in
logic-1 state. Otherwise, the output is logic-0.
Figure 1.2 The OR gate with three inputs A,B,C and its output as Z.

In Table 1.5, the outcome Z is 1 only when both A AND B are 1. Otherwise Z is zero. The AND operator is represented by a . (dot) in Boolean algebra. This should not be confused with the multiplication sign used in ordinary algebra.

Table 1.5 Truth table for AND operator for two variables.
A B Z
0 0 0
0 1 0
1 0 0
1 1 1

Truth Table 1.5 can be extended for any number of Boolean variables.
Table 1.6 is drawn for three Boolean variables A,B,C.

Table 1.6 AND operation on three Boolean variables A,B and C.

A   B   C   Z = A.B.C
0 0 0 0
0 0 1 0
0 1 0 0
0 1 1 0
1 0 0 0
1 0 1 0
1 1 0 0
1 1 1 1

AND Gate

The AND gate is represented symbolically in electronics as shown in Figure 1.3. Here the Boolean variables A, B, C are connected to the left of the AND gate. The output is available at Z. This AND gate follows the truth table given in Table 1.6.

Figure 1.3 The AND gate with three inputs A,B and C and on output Z.

Logical Operator NOT and NOT Gate
The NOT operator in Boolean algebra describes the opposite or complementary state of a variable. For example, if the Boolean variable A is in state 1, then NOT A will be in state 0.

NOT A is written as A' (or as A with a bar over it) and is pronounced "A bar" or "complement of A".

If we apply the NOT operator to A', the resultant value is NOT A', that is NOT NOT A, which is A itself.
The symbolic representation of the NOT operator, or NOT gate, is given in Figure 1.4. Table 1.7 gives the truth table for the NOT operator.

Table 1.7 Truth table for NOT operator.


A NOT A NOT NOT A
0 1 0
1 0 1

Figure 1.4 The NOT operator and NOT gate

NAND Gate

The NAND gate is the complement of the AND gate. While AND gives output 1 when all the inputs are in binary state 1, NAND gives output 0 when all the inputs are in binary state 1, and output 1 otherwise. Table 1.8 gives the truth table for two inputs. Figure 1.5 gives the symbol for a NAND gate with three inputs A, B and C.
Table 1.8 Truth Table for NAND gate

A   B   (output) Z = (A.B)'
0 0 1
0 1 1
1 0 1
1 1 0

Figure 1.5 The NAND gate

NOR Gate
The NOR gate is the complement of the OR gate. It gives output 1 only if all the inputs are in the 0 state. The truth table for the NOR gate is given in Table 1.9 for two input variables. The symbolic representation of the NOR gate is given in Figure 1.6.
Table 1.9 Truth table for NOR gate

A   B   (output) Z = (A+B)'
0 0 1
0 1 0
1 0 0
1 1 0

Figure 1.6 The NOR gate

Logical Operator XOR (Exclusive OR)

In this logical operation, the output is 1 (True) when the two inputs are different. The truth table for this operator is given in Table 1.10. It is obvious from Table 1.10 that the output is true when inputs A and B have different logical values. If A is true, then B should be false to get an output value of true. When the inputs A and B have the same value, whether true or false, the output is always false.
Table 1.10 Truth table for XOR operator
A   B   Z = A⊕B
0 0 0
0 1 1
1 0 1
1 1 0

A physical example of this type of operator is the lighting system used in


the staircase of a house. When the switches in the lower wall and the
upper wall have different positions then only the staircase will light,
otherwise the light remains off.

The symbolic representation of the exclusive OR gate, which follows the truth table of the XOR operator, is shown in Figure 1.7. Figure 1.8 shows the exclusive OR gate for 3 inputs.

Figure 1.7 The XOR gate

In Boolean algebra, we represent the exclusive OR operator by a + sign inside a circle, i.e. ⊕.

XNOR Gate
Just as the AND and OR gates have their complemented counterparts in the NAND and NOR gates respectively, the XOR gate has the XNOR gate. In an XNOR gate the output is 1 when the inputs have the same value. Table 1.11 is the truth table for XNOR. Figure 1.9 shows the symbolic representation of the XNOR gate for two inputs.
Figure 1.8 The XOR gate for three inputs

Table 1.11 Truth table for XNOR

A   B   Z = (A⊕B)'
0 0 1
0 1 0
1 0 0
1 1 1

Figure 1.9 the XNOR gate for two inputs.
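Each of the gates above is a small Boolean function, so their truth tables can be generated and compared with the tables in this section. A minimal Python sketch (function names are illustrative):

```python
from itertools import product

def NOT(a):      return 1 - a
def AND(*x):     return int(all(x))
def OR(*x):      return int(any(x))
def NAND(*x):    return NOT(AND(*x))
def NOR(*x):     return NOT(OR(*x))
def XOR(a, b):   return a ^ b
def XNOR(a, b):  return NOT(a ^ b)

def truth_table(gate):
    """Print the two-input truth table of any of the gates above."""
    print(gate.__name__)
    for a, b in product((0, 1), repeat=2):
        print(a, b, gate(a, b))

truth_table(XOR)    # matches Table 1.10
truth_table(XNOR)   # matches Table 1.11
```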

Boolean Expression

In the following two examples, we shall learn the symbolic


representations of AND, OR and NOT gates in terms of Boolean
expressions. Boolean expressions are similar to algebraic expressions
containing the relationship among several terms. These terms will have
Boolean variables and logical operators.

Example 1
In Figure 1.10, two NOT gates and one AND gate are connected as shown. What will be the output Z for the inputs A and B?

In Figure 1.11, the output of each NOT gate is shown as A' and B' respectively. Thus A' and B' are connected to the AND gate to give A' AND B'.

Figure 1.10 Logic diagram for the Boolean expression Z = A'.B'

Figure 1.11 Logic diagram for the Boolean expression Z = A'.B', with outputs shown.

Example 2
In Figure 1.12, two NOT gates, one OR gate and one further NOT gate are connected. What is the output Q for inputs A and B?

Figure 1.12 Representation of the circuit for Example 2.

Figure 1.13 is drawn to show the output of every gate of Figure 1.12. It is clear from this figure that the output Q at the last NOT gate is given as (A' + B')'. We can pronounce it as the complement of (A prime OR B prime).

Theorems in Boolean Algebra


We shall now study theorems that are applicable to Boolean variables.
These are useful to simplify Boolean expressions.

1.3.4 De Morgan’s Theorems

De Morgan was a great logician and mathematician, as well as a friend of George Boole. Among De Morgan's contributions to logic are the following two theorems.

Figure 1.13 The output is Q = (A' + B')'

Theorem 1.
The complement of the sum of two Boolean variables is equal to the product of the complements of these two variables:
(A + B)' = A'.B'      where A, B are Boolean variables.

Theorem 2.
The complement of the product of two Boolean variables is equal to the sum of the complements of these variables:
(A.B)' = A' + B'      where A, B are Boolean variables.
These two theorems can be easily proved.

To prove Theorem 1, i.e. (A + B)' = A'.B', we need to show that the left-hand side equals the right-hand side for all possible values of A and B. Here are all four cases:

Case 1. A=0 and B=0;  LHS (A+B)' = (0+0)' = 0' = 1
                      RHS A'.B' = 0'.0' = 1.1 = 1

Case 2. A=0 and B=1;  LHS (A+B)' = (0+1)' = 1' = 0
                      RHS A'.B' = 0'.1' = 1.0 = 0

Case 3. A=1 and B=0;  LHS (A+B)' = (1+0)' = 1' = 0
                      RHS A'.B' = 1'.0' = 0.1 = 0

Case 4. A=1 and B=1;  LHS (A+B)' = (1+1)' = 1' = 0
                      RHS A'.B' = 1'.1' = 0.0 = 0

Table 1.12 and Table 1.13 represent Case 1 to Case 4 in tabular form. It is clear from Table 1.12 and Table 1.13 that for any combination of the values of A and B, the output values for (A + B)' and A'.B' are the same. Hence we can say that (A + B)' = A'.B'.

Table 1.12              Table 1.13

A   B   (A+B)'          A   B   A'.B'
0   0   1               0   0   1
0   1   0               0   1   0
1   0   0               1   0   0
1   1   0               1   1   0

To prove the second De Morgan theorem, i.e. (A.B)' = A' + B', we draw Truth Tables 1.14 and 1.15 and compare the output values of (A.B)' with (A' + B') for the different input values of A and B.

Table 1.14              Table 1.15

A   B   (A.B)'          A   B   A'+B'
0   0   1               0   0   1
0   1   1               0   1   1
1   0   1               1   0   1
1   1   0               1   1   0

The output values for (A.B)' are the same as those for A' + B' for the same input combinations of A and B. Therefore, these two expressions are equivalent. We can say that the logic circuits represented by (A.B)' and A' + B' are interchangeable.

1.3.5 Application of De Morgan’s Theorems

De Morgan's theorems are useful in changing Boolean expressions from sum-of-products to product-of-sums equivalent forms. This helps in simplifying Boolean expressions.

To apply De Morgan's theorems, change OR operations (plus signs) to AND operations (multiplication signs), or vice versa, and take the complement of the individual terms rather than of the entire expression. For example, to change (A + B)' into its equivalent form, do the following:
a) Change the + sign to a multiplication sign (.) to get A.B
b) Take the complement of each term to get A'.B'

Thus, (A + B)' = A'.B'
A and B can themselves be very complicated Boolean expressions, but we can still apply De Morgan's theorems to them to get simplified equivalent results.

Some of the theorems that are used to simplify Boolean expressions


are listed in Table 1.16. These can be easily proved by using truth
tables.

Table 1.16 Theorems in Boolean algebra

Sl. No.   Theorem                           Remarks
1.        0 + A = A
2.        1 + A = 1
3.        A + A = A
4.        A + A' = 1
5.        0.A = 0
6.        1.A = A
7.        A.A = A
8.        A.A' = 0
9.        (A')' = A
10.       A + B = B + A                     Commutative law
11.       A.B = B.A                         Commutative law
12.       A + (B + C) = (A + B) + C         Associative law
13.       A.(B.C) = (A.B).C                 Associative law
14.       A.(B + C) = A.B + A.C             Distributive law
15.       A + A.B = A                       Absorption law
16.       A.(A + B) = A                     Absorption law
17.       (A + B).(A + C) = A + B.C
18.       A + A'.B = A + B

Example 3
With the help of De Morgan's theorems, convert the following logical expression into its sum-of-products form.

Y = [(C + DE) . (CE + DE)]'

The logical expression for Y is the complement of a product. Therefore, we can apply De Morgan's second theorem and rewrite this expression as:

Y = (C + DE)' + (CE + DE)'     [taking C + DE as Y1, and CE + DE as Y2]

We can now apply De Morgan's first theorem to Y1' and Y2'.

Y1' = (C + DE)'
    = C'.(DE)'
and Y2' = (CE + DE)'
        = (CE)'.(DE)'
Combining Y1' and Y2', we get
Y = C'.(DE)' + (CE)'.(DE)'
Or
Y = (DE)'.[(CE)' + C']          (taking (DE)' common)
  = (DE)'.(C' + E' + C')        (applying De Morgan's second theorem to (CE)')
  = (DE)'.(C' + E')             (as C' + C' = C')
  = (D' + E').(C' + E')         (as (DE)' = D' + E')
Y = E' + D'.C'                  (applying theorem 17 of Table 1.16)

From this example it is clear that a complicated logical expression can be simplified so that its logical meaning can be understood easily.
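Simplifications such as the one in Example 3 can always be double-checked by exhausting the truth table. The sketch below verifies that the complement of (C + DE)(CE + DE) equals E' + D'.C' for every combination of C, D and E:

```python
from itertools import product

for C, D, E in product((0, 1), repeat=3):
    original = 1 - ((C or (D and E)) and ((C and E) or (D and E)))   # [(C+DE)(CE+DE)]'
    simplified = (1 - E) or ((1 - D) and (1 - C))                    # E' + D'C'
    assert int(original) == int(simplified)

print("Y = [(C + DE)(CE + DE)]' equals E' + D'.C' for all inputs")
```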

1.3.6 Minimisation of Boolean Expressions


A Boolean sum-of-products expression is said to be minimal if:

a) No other sum-of-products expression for the same function has fewer product terms.
b) Among the other sum-of-products expressions for the same function with the same number of product terms, none has fewer factors.

Example 4
Simplify the following Boolean expression.
Q = x'y'z' + x'yz' + xy'z' + xyz'     [here we are using x' for the complement of x]
Q = x'y'z' + x'yz' + xy'z' + xyz'
  = x'(y'z' + yz') + x(y'z' + yz')
  = x'[z'(y' + y)] + x[z'(y' + y)]    [because y' + y = 1]
  = x'z' + xz'
  = z'(x + x')                        [because x + x' = 1]
Q = z'

Example 5
Simplify the following Boolean expression:
Q = (x'yz') + (x'yz) + (xyz')
  = x'yz' + x'yz + xyz'
  = x'(yz' + yz) + xyz'
  = x'y + xyz'                 [because z' + z = 1]
  = y(x' + xz')
  = y(x' + z')                 [since x' + xz' = x' + z']
  = x'y + yz'

Example 6
Simplify the following Boolean expression:
Q = (x + y + z)(x + y + z')(x' + y + z)(x' + y + z')(x' + y' + z')
Let x + y = A and x' + y = B

Then
Q = (A + z)(A + z')(B + z)(B + z')(x' + y' + z')
Using theorem 17 of Table 1.16,
i.e. [(x + y)(x + z) = x + yz],
we can write

Q = (A + zz')(B + zz')(x' + y' + z')
  = (A)(B)(x' + y' + z')                   (since zz' = 0)
  = (x + y)(x' + y)(x' + y' + z')          [substituting the values of A and B]
Expanding (x + y)(x' + y), we get
Q = (xx' + xy + x'y + yy)(x' + y' + z')    [because xx' = 0, x + x' = 1 and y.y = y]
  = (0 + y(x + x') + y)(x' + y' + z')
  = y(x' + y' + z')
  = y(x' + z')                             [because yy' = 0]

1.3.7 Translating Truth Tables into Logical Expressions

The logical operations to be performed by a computer are frequently put


into schematic form as truth tables or charts. They must be translated
from the tabular form into Boolean expressions so that appropriate
circuits for performing the indicated operations can be developed. After
obtaining complete logical expressions in algebraic form, they must be
reduced to their simplest form in order to minimise the required number
of electronic components. We shall study this with the help of the
following examples.
Example 7
Write the logical expression for the Truth Table 1.17 and reduce it to its
minimal form

Table 1.17

INPUT        OUTPUT    REMARKS
A    B       C
0    0       0
0    1       1         (condition 1)
1    0       1         (condition 2)
1    1       1         (condition 3)

The general method of solving a problem is as follows:


Any truth table describes the conditions of the independent (input)
variables for which the output (dependent variable) is true (1). These
conditions are logical alternatives, expressed by OR (+), which taken
together completely describe the output (dependent) function. In the
present example, three conditions exist for which the output C is true
(1).

Condition 1:
C is true if A is false (0) and B is true (1); that is, C is 1 if A is 0 and B is 1. If A is false, A' (NOT A) must be 1 (true). Hence condition 1 may finally be expressed as:
C is 1 if A' is 1 and B is 1, which becomes C = A'.B (the dot (.) represents the AND logical operator).

Condition 2:

C is 1, if A is 1 and B is 0 (i.e. B' is 1), which becomes

C = A.B'

Condition 3:

C is 1, if A is 1 and B is 1, which becomes C = A.B

The truth table shows that C is true for either condition 1 OR condition 2 OR condition 3. Hence we can form the logical sum of the three conditions and obtain:

C = A'.B + A.B' + A.B

Reducing to Simplest Form (Minimising)

The equation obtained directly from the truth table is in its complete (canonical) form, which is usually not its simplest form. For practical use in computers, it should be reduced to its simplest form, known as the minimal form. There are various ways of doing this, but no single prescribed method exists. Experience in recognizing familiar forms, the clever manipulation of De Morgan's rules and the basic laws (commutative, associative, distributive), and the application of the useful relations developed earlier will assist in minimizing the complete form of the equations.

Conventional algebraic techniques such as factorization may be


employed with caution. The operations like cancellation of terms and
removal by subtraction from both sides of the equation are not allowed.

Example 8
Simplify the expression C = A'.B + A.B' + A.B
The expression
C = A'.B + A.B' + A.B
can be factorized as follows:
C = A'.B + A.(B' + B)
But we know B' + B = 1.
Hence C = A'.B + A
        = A + A'.B

Using theorem No. 18 of Table 1.16, we get:
A + A'.B = A + B
Hence C = A + B
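The same exhaustive check used earlier works here too; a minimal sketch confirming that A'.B + A.B' + A.B equals A + B:

```python
for A in (0, 1):
    for B in (0, 1):
        original = ((1 - A) and B) or (A and (1 - B)) or (A and B)   # A'B + AB' + AB
        reduced = A or B                                             # A + B
        assert int(original) == int(reduced)

print("A'.B + A.B' + A.B simplifies to A + B")
```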

Example 9
Derive the Boolean expression for the Truth Table 1.18 and reduce it to
its minimal form:
Table 1.18
INPUT OUTPUT REMARKS
A B C D
0 0 0 0
0 0 1 1 (Condition 1)
0 1 0 1 (Condition 2)
0 1 1 1 (Condition 3)
1 0 0 0
1 0 1 0
1 1 0 1 (Condition 4)
1 1 1 0

As shown in Table 1.18, there are four conditions for which the output
value of D is true (1).

Condition 1: D is 1 if A is 0 and B is 0 and C is 1, or D = A'.B'.C

Condition 2: D is 1 if A is 0 and B is 1 and C is 0, or D = A'.B.C'

Condition 3: D is 1 if A is 0 and B is 1 and C is 1, or D = A'.B.C

Condition 4: D is 1 if A is 1 and B is 1 and C is 0, or D = A.B.C'

From the truth table we can say that D is true (1) for either condition 1 OR condition 2 OR condition 3 OR condition 4. Hence we can write:

D = A'.B'.C + A'.B.C' + A'.B.C + A.B.C'
  = A'.B'.C + A'.B.C + A'.B.C' + A.B.C'        (rearranging the terms)

Taking A'.C common from the first two terms, and B.C' common from the last two terms, we get
D = A'.C(B' + B) + B.C'(A' + A)
D = A'.C + B.C'                                (as B' + B = 1 and A' + A = 1)
Example 10
Derive the boolean expression from the truth table given in Table 1.19
and then simplify it.
Table 1.19
INPUT OUTPUT REMARKS
A B C D E
0 0 0 0 0
0 0 0 1 0
0 0 1 0 1 (Condition 1)
0 0 1 1 1 (Condition 2)
0 1 0 0 0
0 1 0 1 0
0 1 1 0 1 (Condition 3)
0 1 1 1 1 (Condition 4)
1 0 0 0 0
1 0 0 1 0
1 0 1 0 0
1 0 1 1 0
1 1 0 0 1 (Condition 5)
1 1 0 1 1 (Condition 6)
1 1 1 0 1 (Condition 7)
1 1 1 1 1 (Condition 8)

As shown in Table 1.19, there are eight conditions for which the output
function E is true (1).

Condition 1: E is 1 if A is 0, B is 0, C is 1 and D is 0, i.e. E = A'.B'.C.D'

Condition 2: E is 1 if A is 0, B is 0, C is 1 and D is 1, i.e. E = A'.B'.C.D

Condition 3: E is 1 if A is 0, B is 1, C is 1 and D is 0, i.e. E = A'.B.C.D'

Condition 4: E is 1 if A is 0, B is 1, C is 1 and D is 1, i.e. E = A'.B.C.D

Condition 5: E is 1 if A is 1, B is 1, C is 0 and D is 0, i.e. E = A.B.C'.D'

Condition 6: E is 1 if A is 1, B is 1, C is 0 and D is 1, i.e. E = A.B.C'.D

Condition 7: E is 1 if A is 1, B is 1, C is 1 and D is 0, i.e. E = A.B.C.D'

Condition 8: E is 1 if A is 1, B is 1, C is 1 and D is 1, i.e. E = A.B.C.D

The truth table shows that E is true (1) for condition 1 OR condition 2 OR condition 3 OR condition 4 OR condition 5 OR condition 6 OR condition 7 OR condition 8, i.e.:

E = A'.B'.C.D' + A'.B'.C.D + A'.B.C.D' + A'.B.C.D + A.B.C'.D' + A.B.C'.D + A.B.C.D' + A.B.C.D

  = A'.B'.C(D' + D) + A'.B.C(D' + D) + A.B.C'(D' + D) + A.B.C(D' + D)

  = A'.B'.C + A'.B.C + A.B.C' + A.B.C          (since D' + D = 1)

  = A'.C(B' + B) + A.B(C' + C)

  = A'.C + A.B                                  (since B' + B = 1 and C' + C = 1)

This is the minimized form of the Boolean expression.

Example 11

Draw the truth table for the following Boolean expression.

F = x.y.z.w

It is obvious from the given function that the value of F is 1 only when each of the Boolean variables x, y, z and w is equal to 1. Otherwise F is 0 for all other input combinations of x, y, z, w. The corresponding truth table is shown in Table 1.20.

Table 1.20

INPUT OUTPUT
x y z w F
0 0 0 0 0
0 0 0 1 0
0 0 1 0 0
0 0 1 1 0
0 1 0 0 0
0 1 0 1 0
0 1 1 0 0
0 1 1 1 0
1 0 0 0 0
1 0 0 1 0
1 0 1 0 0
1 0 1 1 0
1 1 0 0 0
1 1 0 1 0
1 1 1 0 0
1 1 1 1 1

Example 12

Prove the following, where x, y, z are Boolean variables:

x.y + x'.z + y.z = x.y + x'.z

We shall draw the truth table for the expression on the left-hand side of the equals sign and also for the expression on the right-hand side. If the truth tables for both sides are the same, then the expression on the left is equal to the expression on the right. Here the Boolean variables x, y, z can take the values 0 or 1.

It is found from Truth Table 1.21 that the values of the expression x.y + x'.z + y.z for the different combinations of x, y and z are the same as those of x.y + x'.z. Hence the identity x.y + x'.z + y.z = x.y + x'.z is true.

Table 1.21

x   y   z   x.y + x'.z + y.z   x.y + x'.z
0 0 0 0 0
0 0 1 1 1
0 1 0 0 0
0 1 1 1 1
1 0 0 0 0
1 0 1 0 0
1 1 0 1 1
1 1 1 1 1

We can also prove it by using Boolean theorems:

x.y + x'.z + y.z = x.y + x'.z + y.z.(x + x')      (because x + x' = 1 and multiplying by it
                                                   does not change the left-hand expression)
                 = x.y + x.y.z + x'.z + x'.y.z    (in x.y + x.y.z, the term x.y.z is
                                                   redundant and can be absorbed)
                 = x.y + x'.z                     (also in x'.z + x'.y.z, the term x'.y.z is
                                                   redundant and can be absorbed)

The Canonical Form
From Truth Table 1.22, we know that
Z = A ⊕ B
or
Z = A'.B + A.B'
The function Z can also be written as
Z = (A + B).(A' + B')
The latter form can be verified by examining Truth Table 1.22.

Table 1.22
A B Z
0 0 0
1 0 1
0 1 1
1 1 0

These two expressions are:

a) the sum-of-products form of the individual variables;
b) the product-of-sums form of the individual variables.

These alternative ways of writing the same expression are known as


Canonical forms of the Boolean expression.

The sum of products form is known as the Min term form and the
product of sums form is known as the Max term form.

Important Points to Note


a) Each term in a canonical form must contain each of the binary variables once and once only.
b) The words "sum" and "product" here refer to the OR and AND logical functions respectively.

Karnaugh’s Map

The complexity of the digital logic gates that implement a Boolean function is directly related to the complexity of the algebraic expression from which the function is implemented. Although the truth table representation of a function is unique, expressed algebraically it can appear in many different forms. Boolean functions may be simplified by algebraic means as discussed in section __. However, this procedure of minimization is awkward because it lacks specific rules for predicting which step to take next in the manipulative process. The map method provides a simple, straightforward procedure for minimizing Boolean functions.

So what is a Karnaugh map?

A Karnaugh map provides a pictorial method of grouping together expressions with


common factors and therefore eliminating unwanted variables. The Karnaugh map can
also be described as a special arrangement of a truth table.

The diagram below illustrates the correspondence between the Karnaugh map and the truth table for the general case of a two-variable problem. The values inside the squares are copied from the output column of the truth table, so there is one square in the map for every row in the truth table. Around the edge of the Karnaugh map are the values of the two input variables: A is along the top and B is down the left-hand side.

The next diagram shows the arrangement for three variables:

We show our previously developed Karnaugh map. We will use the form on the right

Note the sequence of numbers across the top of the map. It is not in binary sequence, which would be 00, 01, 10, 11. It is 00, 01, 11, 10, which is the Gray code sequence. Gray code changes only one binary bit as we go from one number to the next in the sequence, unlike binary. That means that adjacent cells will only vary by one bit, or one Boolean variable. This is what we need in order to organize the outputs of a logic function so that we may view commonality. Moreover, the column and row headings must be in Gray code order, or the map will not work as a Karnaugh map: cells sharing common Boolean variables would no longer be adjacent, nor show visual patterns. Adjacent cells vary by only one bit because a Gray code sequence varies by only one bit.
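For reference, the Gray code ordering used for the row and column headings can be generated with the standard n XOR (n >> 1) construction; a minimal sketch:

```python
def gray_code(bits):
    """Return the Gray code sequence for the given number of bits."""
    return [format(n ^ (n >> 1), f"0{bits}b") for n in range(2 ** bits)]

print(gray_code(2))  # ['00', '01', '11', '10'] -- the column order used on the maps
```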

The basic steps in the minimization procedure are:

1. Plot a 1 in the Karnaugh map for each minterm in the expression. Each AND-ed set of variables in the minterm expression is placed in the corresponding cell on the K-map. See below for more information on labelling the K-map.
2. Loop adjacent groups of 2, 4, 8, etc. 1s together.
3. Write one product term per group, eliminating variables where possible. When a variable and its complement are both contained inside a group, row-wise or column-wise, that variable can be eliminated (for that group only); logically AND the variables that are left.
4. Logically OR the remaining product terms together to give the simplified sum-of-products expression.

Let us move on to some examples of simplification with 3-variable Karnaugh maps. We


show how to map the product terms of the unsimplified logic to the K-map. We illustrate
how to identify groups of adjacent cells which leads to a Sum-of-Products simplification
of the digital logic.

Above, we place the 1s in the K-map for each of the product terms, identify a group of two, and then write a p-term (product term) for the sole group as our simplified result.

As we can see in the above diagram, as we move row-wise, i.e. from 00 to 01, variable C changes from 0 to 1 but variable B does not, so variable C is eliminated. Column-wise we are left with only A, which therefore gives the product term for the group.

Mapping the four product terms above yields a group of four covered by the Boolean term A'.

When we move row-wise in the group, i.e. from 01 to 11, variable B changes from 0 to 1, so B is eliminated and row-wise we are left with C. Column-wise, variable A changes from 0 to 1 and is also eliminated. In total we are left with C for the group.

The next example shows a four-variable function.

In the above K-map for the function, moving row-wise, i.e. from 01 to 11, variable Z changes from 0 to 1 but Y does not, so variable Z is eliminated. Moving column-wise, i.e. from 01 to 11, W changes from 0 to 1 but X does not (its value is unchanged), so variable W is eliminated. Finally we are left with Y row-wise and X column-wise; logically ANDing them gives the product term X.Y (since the value of each is 1 in the group, neither needs to be complemented).

Therefore, F = X.Y

After mapping the six p-terms above, identify the upper group of four, then pick up the lower two cells as a group of four by sharing two cells with the other group. Covering these two cells with a group of four gives a simpler result. Since there are two groups, there will be two p-terms in the sum-of-products result, A' + B.
Grouping on Karnaugh Maps

Specific rules apply to grouping on a Karnaugh map; they are summarized here:

1. Groups must contain 2^n cells set to 1.
2. A single cell (a group of 2^0) cannot be simplified.
3. A group of 2 (2^1) cells reduces 1 variable, and a group of 4 (2^2) cells reduces 2 variables. In general, a group of 2^n cells reduces n variables. Using the largest possible groups will give the simplest functions.
4. All cells in the K-map set to 1 must be included in at least one group when developing the minimal expression.
5. Groups may overlap, provided each group contains at least one cell not covered by any other group.
6. Any group that has all of its cells included in other groups is redundant.
7. Groups must be square or rectangular. Diagonal or L-shaped groups are invalid.
8. The edges of a K-map are considered to be adjacent. Therefore a group can leave at the top of a K-map and re-enter at the bottom, and similarly for the two sides.

The following K-map representations and reductions are some examples that clarify rules 5 and 8 above.

Mapping the four p-terms (product terms) above yields a group of four. Visualize the group of four by rolling up the ends of the map to form a cylinder; the cells are then adjacent. We normally mark the group of four as shown above left. Of the variables A, B, C, only one is common to all four cells: C is 0 over all of them, so the final result is C'.

The six cells above from the unsimplified equation can be organized into two groups of
four. These two groups should give us two p-terms in our simplified result of A' + C'.

The next example clarifies the advantage that simplifying a given function (using a K-map) has on the design of the function's logic circuit.

Figure___.

The Boolean equation for the output has four product terms. Map four 1s corresponding to the p-terms. Forming groups of cells, we have three groups of two. There will be three p-terms in the simplified result, one for each group. The gate diagram of the result is shown below. Compare the two diagrams (figure__ and figure__) and identify which logic diagram is more complex.

Figure___

1.3.8 Multiplexers
Multiplexer means many to one. A multiplexer is an electronic circuit with many inputs but only one output. By applying control signals, we can steer any input to the output. Figure 1.14 shows the block diagram of an n x 1 multiplexer. For a multiplexer with n inputs there must be at least m control signals, where 2^m = n.
Figure 1.14 Block diagram of an n x 1 multiplexer.

Why is a multiplexer called a data selector?

Figure 1.15 shows an 8-to-1 multiplexer. This is also called a data selector because the data bit that appears at the output depends on the selected input data bit. The input bits are labeled D0 to D7. Only one of these bits is transmitted to the output. Which bit is transmitted depends on the value of the control input signals ABC. For example, if ABC = 000, then the upper AND gate is enabled, while all other AND gates are disabled. Therefore, data bit D0 will be transmitted to the output Y.

If the control signals ABC are changed to 111, then all gates are disabled except the bottom-most AND gate. In this case, D7 is the only bit transmitted to the output and thus Y = D7.

Figure 1.15 Eight to one multiplexer

The control bits, i.e. the values of ABC, thus determine which of the input data bits D0 to D7 is transmitted to the output. Table 1.23 lists the value of the output Y for the different values of the control signals ABC.

Note that a circle (bubble) at the tip of the final OR gate would make the output the complement of the selected input, i.e. Y = D0' for A, B, C = 0, 0, 0 and Y = D7' for A, B, C = 1, 1, 1.

Table 1.23 Truth table for 8 x 1 multiplexer

A B C Y

0 0 0 D0

0 0 1 D1

0 1 0 D2

0 1 1 D3

1 0 0 D4

1 0 1 D5

1 1 0 D6

1 1 1 D7
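Behaviourally, the 8-to-1 multiplexer treats the control bits A, B, C as a 3-bit index that selects one of the data bits D0 to D7, as in Table 1.23. A minimal sketch (names illustrative, ignoring any inverting bubble on the output):

```python
def mux8(data, a, b, c):
    """8-to-1 multiplexer: control bits A, B, C select one of D0..D7."""
    select = a * 4 + b * 2 + c          # ABC read as a 3-bit binary number
    return data[select]

D = [0, 1, 1, 0, 1, 0, 0, 1]            # example values for D0..D7
print(mux8(D, 0, 0, 0))                 # outputs D0
print(mux8(D, 1, 1, 1))                 # outputs D7
```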

1.3.9 Decoders
A decoder is a combinational circuit that converts binary information from n input lines to a maximum of 2^n output lines. A decoder is said to be an n x m decoder if it accepts n inputs and gives m outputs, where m ≤ 2^n.

As an example, consider a 3-to-8 decoder, shown in Figure 1.16. The decoder has 8 outputs. Each output represents one of the 8 combinations of the three input variables. Only one of the outputs is equal to one at any time (see Table 1.24).

Figure 1.16 A 3 x 8 decoder

In Figure 1.16, the three inverters provide the complements of the inputs, and each of the eight AND gates generates one of the outputs. Table 1.24 can also be interpreted as a binary-to-decimal decoder. If x, y, z = 111, then D7 = 1, i.e. the output is available only at the last AND gate, indicating that the result is the decimal 7. If x, y, z = 110, then the output is available at D6, which indicates that it is the decimal 6.

Table 1.24 Truth table for 3 x 8 decoder.

Inputs Outputs
x y z D0 D1 D2 D3 D4 D5 D6 D7
0 0 0 1 0 0 0 0 0 0 0
0 0 1 0 1 0 0 0 0 0 0
0 1 0 0 0 1 0 0 0 0 0
0 1 1 0 0 0 1 0 0 0 0
1 0 0 0 0 0 0 1 0 0 0
1 0 1 0 0 0 0 0 1 0 0
1 1 0 0 0 0 0 0 0 1 0
1 1 1 0 0 0 0 0 0 0 1

1.3.10 Encoders (Decimal to Binary encoder)


An encoder does just the reverse of a decoder. It takes 2^n inputs and generates n outputs (see Figure 1.17). Thus, if you have eight inputs, i.e. 2^3, then the coded output will be in three bits, shown as x, y, z in Figure 1.17. Table 1.25 gives the truth table.
Figure 1.17 An 8 x 3 encoder

Note that Table 1.25 is another form of the conversion of decimal digits to binary. This is because, if D7 = 1, which means the decimal number is 7, its binary encoding is xyz, where x = 1, y = 1 and z = 1. Similarly, if we want to encode the decimal number 6 into binary, we apply a 1 to D6 and we get xyz = 110. The truth table for an encoder is just the opposite of that of the decoder listed in Table 1.24: the inputs of the decoder are the same as the outputs of the encoder, and the outputs of the decoder form the inputs of the encoder.

The encoder in Figure 1.17 assumes that only one input has a value 1 at
any given time. Note that D0 is not connected to any OR gate; the binary
output must be all 0s in this case.

Table 1.25 Truth table for 8 x 3 encoder

Inputs Outputs
D0 D1 D2 D3 D4 D5 D6 D7 x y z
1 0 0 0 0 0 0 0 0 0 0
0 1 0 0 0 0 0 0 0 0 1
0 0 1 0 0 0 0 0 0 1 0
0 0 0 1 0 0 0 0 0 1 1
0 0 0 0 1 0 0 0 1 0 0
0 0 0 0 0 1 0 0 1 0 1
0 0 0 0 0 0 1 0 1 1 0
0 0 0 0 0 0 0 1 1 1 1
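Behaviourally, the decoder maps a binary index to a one-hot output line and the encoder does the reverse. A minimal sketch with illustrative names:

```python
def decode3to8(x, y, z):
    """3-to-8 decoder: exactly one of the 8 outputs is 1."""
    index = x * 4 + y * 2 + z
    return [1 if i == index else 0 for i in range(8)]

def encode8to3(d):
    """8-to-3 encoder: assumes exactly one input line is 1."""
    index = d.index(1)
    return index // 4, (index // 2) % 2, index % 2

print(decode3to8(1, 1, 0))                    # D6 is 1: [0, 0, 0, 0, 0, 0, 1, 0]
print(encode8to3([0, 0, 0, 0, 0, 0, 1, 0]))   # -> (1, 1, 0)
```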

1.4 FLIP-FLOPS

The logical circuits which we have discussed so far were combinational circuits, i.e. their outputs were dependent only on their inputs at any given time. But when some storage elements such as flip-flops form part of an electronic circuit, it is called a sequential circuit.

• A flip-flop is a binary cell (i.e. it contains a 0 or a 1) capable of storing just one bit of information.

In other words, it can be said that flip-flops are the smallest temporary storage units of a computer. Flip-flops are of many types, depending on the number of inputs they have and on how these inputs affect the outputs. All flip-flops have 2 outputs, one giving the normal value and the other giving the value opposite to the value stored in the flip-flop.

• A flip-flop is a memory device since it will remain in either the state 0 (i.e. closed) or 1 (i.e. open). A flip-flop stores a binary state until directed by a clock pulse to switch state.

So, depending upon which state it is in, we can consider it to store a binary digit: if it is in state 0, it is storing the bit 0, and if it is in the other state, i.e. 1, it is storing the bit 1.

1.4.1 What is a Flip-Flop?

A flip-flop is a bistable electronic circuit, that is, it has two stable states: the output is either 0 or 1. A flip-flop can be regarded as a memory device. When a flip-flop has its output set at 0, it can be regarded as storing a logic 0, and when its output is set at 1, it is regarded as storing a logic 1. It is the smallest storage unit of a computer since what it stores is a single bit (0 or 1). There are various types of flip-flops; the R-S flip-flop is the commonest of them.

1.4.2 R-S Flip-Flop


We can construct an R-S flip-flop using NOR gates (see Figure 1.18).

This is the conventional method of drawing an R-S flip-flop with logic NOR gates. The outputs are defined in a more general way as Q and Q'. There are two inputs to the flip-flop, defined as the R and S inputs. The input-output possibilities for the R-S flip-flop are given in Table 1.26.

The first input condition in Table 1.26 is R = 0, S = 0. Since a zero at the input of a NOR gate has no effect on its output, the flip-flop simply remains in its present state, that is, Q remains unchanged.

Table 1.26 Truth table for R-S flip-flop

R S Q
0 0 Last State
0 1 1
1 0 0
1 1 ? ( Forbidden)

Figure 1.18 Connecting NOR gates for R-S flip-flop

The second input condition, R = 0 and S = 1, forces the output of NOR gate (B) low. Both inputs to NOR gate (A) are now low and its output must be high. Thus, a 1 at the S input is said to SET the flip-flop, and it switches to the stable state where Q = 1.

The third condition is R = 1 and S = 0 in Table 1.26. This condition forces the output of NOR gate (A) low; because both inputs to NOR gate (B) are now low, its output must be high. Thus, a 1 at the R input is said to RESET the flip-flop, and it switches to the stable state where Q = 0.

The last condition, R = 1 and S = 1, is forbidden, and one cannot predict what the values of Q and Q' will be.

Latch
An R-S flip-flop is also called a "latch" or a "bistable multivibrator". Any change in the input information at R or S is transmitted immediately to Q and Q'.
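The behaviour in Table 1.26 (apart from the forbidden input) can be reproduced by iterating the two cross-coupled NOR equations of Figure 1.18 until the outputs settle. A behavioural sketch, with illustrative names:

```python
def nor(a, b):
    return 1 - (a | b)

def rs_latch(r, s, q, q_bar):
    """Iterate the cross-coupled NOR equations until Q and Q' settle."""
    for _ in range(4):                        # a few passes are enough to stabilise
        q, q_bar = nor(r, q_bar), nor(s, q)
    return q, q_bar

q, q_bar = 0, 1                               # assume the latch starts reset
q, q_bar = rs_latch(0, 1, q, q_bar)           # S = 1 sets the latch
print(q, q_bar)                               # -> 1 0
q, q_bar = rs_latch(0, 0, q, q_bar)           # R = S = 0 holds the last state
print(q, q_bar)                               # -> 1 0
q, q_bar = rs_latch(1, 0, q, q_bar)           # R = 1 resets the latch
print(q, q_bar)                               # -> 0 1
```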

Clocked R-S Flip-Flop


In Figure 1.19, when the ENABLE input is high, information at R and S
would be transmitted directly to the outputs of the AND gates. The
output of the R-S flip-flop will change in response to input changes as
long as the ENABLE is high.

When the ENABLE input goes low, the output of the R-S flip-flop retains the information.
Thus, it is possible to strobe or clock the flip-flop in order to store information (set it or reset it) at any time, and then hold the information for any desired period of time. Such a system is called a clocked R-S flip-flop. This is shown in Figure 1.19.

Disadvantages of R-S flip-flop


a) R-S flip-flop needs two data inputs namely a high S to store a 1 bit.
Similarly to store a 0 bit, we need a high R. Generation of two
signals to drive a flip-flop is difficult in many application.

Figure 1.19 Clocked R-S flip-flop

b) The R-S flip-flop puts a forbidden condition that both R and S


cannot be high at the same time. In some cases, it may occur
inadvertently. Therefore, we need to design other types of flip-flop
circuits.

1.4.3 D Flip-Flop
Figure 1.20 gives a simple way to build a delay or D flip-flop. This type of flip-flop prevents the value of D from reaching the Q output until a clock pulse occurs.

When the clock pulse is low, both AND gates to which the clock is applied are disabled. Therefore, D can change value without affecting the value of Q. When the clock is high, both AND gates are enabled.

In this case, Q is forced to equal the value of D. When the clock again goes low, Q retains or stores the last value of D. Thus, the D flip-flop is a bistable circuit whose D input is transferred to the output Q after a clock pulse is received. It delays the D input by a clock pulse duration and is therefore called the D or delay flip-flop.

Figure 1.20 D flip-flop

1.4.4 J-K Flip-Flop


In an R-S flip-flop, the state of the output is not predictable when R = 1 and S = 1. The J-K flip-flop allows the inputs J = K = 1; in this situation, the state of the output is changed (complemented). Inputs J and K behave like inputs S and R to set and reset (clear) the flip-flop. In J-K, the letter J is for Set and the letter K is for Reset (clear).

When 1 is applied to both J and K simultaneously, the flip-flop switches to its complement state, i.e. if Q = 1, it switches to Q = 0 and vice versa.

A Clocked J-K Flip-Flop


A clocked J-K flip-flop is shown in the Figure 1.21 and characteristic
table shown in Table 1.27.
Figure 1.21 Connection of J-K flip-flop using NOR gates.

Table 1.27 Characteristic table of J-K flip-flop

Q J K Q(t + 1)
0 0 0 0
0 0 1 0
0 1 0 1
0 1 1 1
1 0 0 1
1 0 1 0
1 1 0 1
1 1 1 0

Working of J-K Flip-Flop


Output Q is ANDed with K and CP (clock pulse) inputs so that the flip-
flop is cleared during a clock pulse only if Q was previously 1.

Similarly, output Q’ is ANDed with J and CP (clock pulse) inputs so that


the flip-flop is set with a clock pulse only if Q’ was previously 1.

Comparing Table 1.27 with Table 1.26, the J-K flip-flop behaves like an
R-S flip-flop, except when both J and K are equal to 1.

When J = K = 1 and Q = 1, the output of the upper AND gate becomes 1 upon application of the clock pulse, and the flip-flop is cleared.

If Q' = 1 and J = K = 1, the output of the lower AND gate becomes 1 and the flip-flop is set. In either case, the output state of the flip-flop is complemented with respect to the previous state.

The inputs in the graphic symbol for the J-K flip-flop must be marked with J (on the Q side) and K (on the Q' side). The characteristic equation, derived using a K-map, is:
Q(t + 1) = J.Q' + K'.Q

1.4.5 T Flip-Flop
A T (toggle) flip-flop is constructed by supplying a single input T to an S-R flip-flop (see Figure 1.22). A toggle flip-flop changes its output when a high signal is applied to T. The truth table for a toggle flip-flop is given in Table 1.28.

Table 1.28 Truth table for T flip-flop


T    Q(t + 1)
0    Q
1    Q'

The T flip-flop can also be constructed using a J-K flip-flop. In this case,
both J and K inputs are always equal.

Figure 1.22 T flip-flop Circuit

1.4.6 Master-Slave Flip-Flop
A master-slave flip-flop is made out of two separate flip-flop circuits. The first part of the circuit serves as a master S-R flip-flop and the second part serves as a slave flip-flop.

It consists of a master flip-flop, a slave flip-flop and an inverter. When the clock pulse CP = 0, the output of the inverter is 1. This is applied to the slave S-R flip-flop. Since the clock input of the slave is now 1, that flip-flop is enabled and output Q is equal to Y, while Q' is equal to Y'. The master flip-flop remains disabled during this time, as its clock pulse is zero. The logic diagram of a master-slave flip-flop is shown in Figure 1.23.
Figure 1.23 Logic diagram of a master-slave flip-flop

When the clock pulse goes to 1, the information then at the external R and S inputs (the leftmost points) is transmitted to the master flip-flop, and thus Y and Y' take the values corresponding to the S and R inputs.

The slave flip-flop, however, is isolated as long as the clock pulse is at its 1 level, because the output of the inverter, through which the clock pulse must pass before being applied to the slave flip-flop, is 0 (zero). When the clock input of the slave is 1, the slave flip-flop is enabled and output Q is equal to Y while Q' is equal to Y'.

• The master flip-flop is disabled when CP = 0. When the pulse becomes 1, the information then at the external R and S inputs is transmitted to the master flip-flop. The slave flip-flop, however, is isolated as long as the pulse is at its 1 level, because the output of the inverter is 0 at this moment.

When the pulse returns to 0, the master flip-flop is isolated, which prevents the external inputs from affecting it. The slave flip-flop then goes to the same state as the master flip-flop. Figure 1.24 shows a clocked master-slave J-K flip-flop.
Figure 1.24 Schematic diagram of a master-slave flip-flop using S-R flip-flops.

As seen in Figure 1.24, master-slave J-K flip-flop contains two S-R flip-
flops. One is called master and the other slave.

When the clock pulse is high, the master is active and the slave is
inactive. The master sets or resets according to the state of the input
signals. Since the slave is inactive during this period, its output remains
steady at the previous state.

When the clock goes low, the master flip-flop is inactive and the slave is
active. The slave sets or resets according to its inputs. The final output Q
of a master-slave flip-flop is the result of the output of the slave flip-flop.
The output of the slave is available at the end of the clock pulse.

When both inputs, i.e. J and K, are high at the same time, the master-slave flip-flop toggles once. The master toggles and then the slave copies the action of the master. There is no race-around condition because the feedback is taken from the output of the slave, which is steady during the positive half cycle of the clock.

