UNIT 2 Computer Organization


UNIT 2

BASIC PROCESSING UNIT

SIGNED NUMBER REPRESENTATION


A signed integer is an integer with a positive ‘+’ or negative ‘-’ sign associated with it. Since the computer only understands binary, it is necessary to represent these signed integers in binary form.
In binary, a signed integer can be represented in three ways:
1. Signed bit.
2. 1’s Complement.
3. 2’s Complement.

SIGNED BIT REPRESENTATION

In the signed integer representation method the following rules are followed:

1. The MSB (Most Significant Bit) represents the sign of the integer.
2. The magnitude is represented by the remaining bits other than the MSB, i.e. (n-1) bits, where n is the number of bits.
3. If the number is positive, the MSB is 0; else it is 1.
4. The range of the signed integer representation of an n-bit number is -(2^{n-1} - 1) to +(2^{n-1} - 1).
Example:
Let n = 4
Range:
-(2^{4-1} - 1) to 2^{4-1} - 1
= -(2^3 - 1) to 2^3 - 1
= -7 to +7
For 4-bit representation, the minimum value = -7 and the maximum value = +7
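
As a quick check of these rules, here is a minimal Python sketch (the helper name to_sign_magnitude and the 4-bit default are illustrative, not from the text) that encodes a small integer in n-bit signed-bit (sign-magnitude) form:

def to_sign_magnitude(value, n=4):
    """Return the n-bit sign-magnitude string for value."""
    limit = 2 ** (n - 1) - 1          # range is -(2^(n-1) - 1) .. +(2^(n-1) - 1)
    if abs(value) > limit:
        raise ValueError(f"{value} does not fit in {n}-bit sign-magnitude")
    sign = '1' if value < 0 else '0'  # MSB is the sign bit
    magnitude = format(abs(value), f'0{n - 1}b')
    return sign + magnitude

print(to_sign_magnitude(+5))   # 0101
print(to_sign_magnitude(-5))   # 1101
print(to_sign_magnitude(-7))   # 1111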

Signed bit Representation:

Positive Numbers

Sign Magnitude Representation    Decimal

0 0 0 0 +0

0 0 0 1 +1

0 0 1 0 +2

0 0 1 1 +3

0 1 0 0 +4

0 1 0 1 +5

0 1 1 0 +6

0 1 1 1 +7

Negative Numbers

Sign Magnitude Representation    Decimal

1 0 0 0 -0

1 0 0 1 -1

1 0 1 0 -2

1 0 1 1 -3

1 1 0 0 -4

1 1 0 1 -5

1 1 1 0 -6

1 1 1 1 -7

DRAWBACKS:
1. For 0, there are two representations: -0 and +0, which should not be the case, as 0 is neither –ve nor +ve.
2. Out of the 2^n possible bit patterns, only 2^n - 1 distinct values are represented, since +0 and -0 denote the same number.
3. Numbers are not in cyclic order, i.e. after the largest number (in this example, +7) the next pattern does not wrap around to the smallest number.
4. For negative numbers, sign extension does not work.

Example:
Sign extension for +5

Sign extension for -5

5. As we can see above, for a +ve number, if 4 bits are extended to 5 bits we only need to append a 0 in the MSB.
6. But if the same is done for a –ve number we do not get the same value, i.e. 10101 ≠ 11101.
1’S COMPLEMENT REPRESENTATION OF A SIGNED INTEGER

In 1’s complement representation the following rules are used:


1. For +ve numbers the representation rules are the same as signed integer
representation.
2. For –ve numbers, we can follow either of two approaches:

 Write the +ve number in binary and take the 1’s complement of it
(1’s complement of 0 = 1 and 1’s complement of 1 = 0), or

 Write the unsigned representation of (2^n - 1) - X for -X in n-bit form.


Example:
(-5) in 1’s complement:
+5 = 0101
-5 = 1010
3. The range of the 1’s complement integer representation of an n-bit number is given as -(2^{n-1} - 1) to 2^{n-1} - 1.
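
The two rules above translate directly into a short Python sketch (the function name to_ones_complement is illustrative): positive numbers keep the plain binary form, negative numbers are the bitwise complement of +X.

def to_ones_complement(value, n=4):
    limit = 2 ** (n - 1) - 1                  # range -(2^(n-1) - 1) .. 2^(n-1) - 1
    if abs(value) > limit:
        raise ValueError("value out of range")
    bits = format(abs(value), f'0{n}b')       # n-bit pattern of +|value|
    if value >= 0:
        return bits
    return ''.join('1' if b == '0' else '0' for b in bits)   # flip every bit

print(to_ones_complement(+5))   # 0101
print(to_ones_complement(-5))   # 1010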

1’s Complement Representation:


Positive Numbers

Sign Magnitude Number

0 0 0 0 +0

0 0 0 1 +1

0 0 1 0 +2

0 0 1 1 +3

0 1 0 0 +4

0 1 0 1 +5

0 1 1 0 +6

0 1 1 1 +7

Negative Numbers

Sign Magnitude Number



1 0 0 0 -7

1 0 0 1 -6

1 0 1 0 -5

1 0 1 1 -4

1 1 0 0 -3

1 1 0 1 -2

1 1 1 0 -1

1 1 1 1 -0

Drawbacks:
1. For 0, there are two representations: -0 and +0, which should not be the case, as 0 is neither –ve nor +ve.
2. Out of the 2^n possible bit patterns, only 2^n - 1 distinct values are represented.

Merits over Signed bit representation:

1. Numbers are in cyclic order, i.e. after the largest number (in this example, +7) the next pattern wraps around to the smallest number (in this example, -7).

2. For negative numbers, sign extension works.

Example: Sign extension for +5

Sign extension for -5

3. As can be seen above, for +ve as well as -ve numbers, if 4 bits are extended to 5 bits we only need to append a 0 or a 1, respectively, in the MSB.

2’S COMPLEMENT REPRESENTATION

In 2’s Complement representation the following rules are used:


1. For +ve numbers, the representation rules are the same as signed integer representation.
2. For –ve numbers, there are two different ways we can represent the number.

Write the unsigned representation of 2^n - X for -X in n-bit representation.

Example:
(-5) in 4-bit representation
2^4 - 5 = 11 → 1011 (unsigned)

Write the representation of +X and take its 2’s complement.

To take the 2’s complement, simply take the 1’s complement and add 1 to it.
Example:
(-5) in 2’s complement
(+5) = 0101
1’s complement of (+5) = 1010
Add 1 to 1010: 1010 + 1 = 1011
Therefore (-5) = 1011
3. The range of the n-bit representation is -(2^{n-1}) to 2^{n-1} - 1.

2’s Complement representation (4 bits):-

Representation    Number

0 0 0 0    +0
0 0 0 1    +1
0 0 1 0    +2
0 0 1 1    +3
0 1 0 0    +4
0 1 0 1    +5
0 1 1 0    +6
0 1 1 1    +7
1 0 0 0    -8
1 0 0 1    -7
1 0 1 0    -6
1 0 1 1    -5
1 1 0 0    -4
1 1 0 1    -3
1 1 1 0    -2
1 1 1 1    -1

Merits:
1. No ambiguity in the representation of 0.
2. Numbers are in cyclic order, i.e. after +7 comes -8.
3. Sign extension works.
4. The range of numbers that can be represented is larger: -(2^{n-1}) to 2^{n-1} - 1, one more value than the other two representations.
Due to all of the above merits of the 2’s complement representation of a signed integer, binary numbers are represented using the 2’s complement method instead of the signed bit and 1’s complement methods.
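
Both routes for forming a negative number's 2's complement can be checked with a minimal Python sketch (function names are illustrative): route (a) stores the unsigned value 2^n - X, route (b) flips the bits of +X and adds 1.

def twos_complement_direct(value, n=4):
    """Route (a): represent value as the unsigned number value mod 2^n."""
    assert -(2 ** (n - 1)) <= value <= 2 ** (n - 1) - 1
    return format(value % (2 ** n), f'0{n}b')

def twos_complement_via_ones(value, n=4):
    """Route (b): 1's complement of +|value|, then add 1 (for negative inputs)."""
    bits = format(abs(value), f'0{n}b')
    flipped = int(''.join('1' if b == '0' else '0' for b in bits), 2)
    return format((flipped + 1) % (2 ** n), f'0{n}b')

print(twos_complement_direct(-5))    # 1011  (2^4 - 5 = 11)
print(twos_complement_via_ones(-5))  # 1011  (1010 + 1)
print(twos_complement_direct(-8))    # 1000  (the extra value the other forms lack)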
FIXED POINT ARITHMETIC

Real numbers have a fractional component. This section explains the fixed-point method of representing real numbers. In digital signal processing (DSP) and gaming applications, where performance is usually more important than precision, fixed-point data encoding is extensively used.
The Binary Point: Fractional values such as 26.5 are represented using the binary point
concept. The decimal point in a decimal numeral system and a binary point are comparable.
It serves as a divider between a number’s integer and fractional parts.
For instance, the weight of the coefficient 6 in the number 26.5 is 10^0, or 1. The weight of the coefficient 5 is 10^{-1} (5/10 = 0.5).

2 * 10^1 + 6 * 10^0 + 5 * 10^{-1} = 26.5

2 * 10 + 6 * 1 + 0.5 = 26.5
A “binary point” can be created using our binary representation and the same decimal point concept. A binary point, like the decimal point, sits next to the coefficient of 2^0 = 1. The weight of each digit (or bit) to the left of the binary point is 2^0, 2^1, 2^2, and so forth. The digits (or bits) to the right of the binary point have weights of 2^{-1}, 2^{-2}, 2^{-3}, and so on.
For illustration, the number 11010.1₂ represents the value:

11010.1₂
= 1 * 2^4 + 1 * 2^3 + 0 * 2^2 + 1 * 2^1 + 0 * 2^0 + 1 * 2^{-1}
= 16 + 8 + 2 + 0.5
= 26.5

SHIFTING PATTERN:

When an integer is shifted right by one bit in a binary system, the effect is the same as dividing it by two. Since an integer has no fractional portion, a bit shifted past the binary point is simply lost, so this shifting operation is an integer division.
 A number is divided by two when its bit pattern is shifted to the right by one bit.
 A number is multiplied by two when its bit pattern is shifted to the left by one bit.
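
A minimal Python sketch of this shifting pattern (plain integers, values chosen arbitrarily):

x = 12            # 1100 in binary
print(x >> 1)     # 6  : 0110, i.e. 12 // 2
print(x << 1)     # 24 : 11000, i.e. 12 * 2
print(13 >> 1)    # 6  : the fractional half is lost, as noted above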
HOW TO WRITE A FIXED-POINT NUMBER?

Understanding fixed-point number representation requires knowledge of the shifting process described above. Simply by implicitly fixing the binary point at a specific place in a numeral, we can define a fixed-point number type to represent a real number in computers (or any hardware, in general). We then use this implicit convention to express numbers.
Two parameters are all that are required to define a fixed-point type:
1. Width of the number representation.
2. Binary point position within the number.
We use the notation fixed<w, b>, where “w” stands for the overall number of bits used (the width of a number) and “b” stands for the location of the binary point counting from the least significant bit (counting from 0).

Unsigned representation:

For example, fixed<8,3> signifies an 8-bit fixed-point number, the rightmost 3 bits of which are fractional.
Representation of a real number:
00010.110₂
= 1 * 2^1 + 1 * 2^{-1} + 1 * 2^{-2}
= 2 + 0.5 + 0.25
= 2.75
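
Since a fixed<w,b> value is just an integer scaled by 2^b, the encoding and decoding can be sketched in a few lines of Python (function names are illustrative, and rounding to the nearest representable value is assumed):

def fixed_decode(raw, b=3):
    """Interpret the unsigned integer raw as a fixed<w,b> value."""
    return raw / (2 ** b)

def fixed_encode(value, b=3):
    """Quantize value to the nearest representable fixed<w,b> number."""
    return round(value * (2 ** b))

print(fixed_decode(0b00010110))           # 2.75  (i.e. 00010.110)
print(format(fixed_encode(2.75), '08b'))  # 00010110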

Signed representation:

Negative integers in binary number systems must be encoded using signed number
representations. In mathematics, negative numbers are denoted by a minus sign (“-“) before
them. In contrast, numbers are exclusively represented as bit sequences in computer
hardware, with no additional symbols.
Signed binary numbers (+ve or -ve) can be represented in one of three ways:
1. Sign-Magnitude form
2. 1’s complement form
3. 2’s complement form
 Sign-Magnitude form: In sign-magnitude form, the number’s sign is
represented by the MSB (Most Significant Bit also called as Leftmost Bit),
while its magnitude is shown by the remaining bits (In the case of 8-bit
representation Leftmost bit is the sign bit and remaining bits are magnitude
bit).
55₁₀ = 00110111₂
−55₁₀ = 10110111₂
 1’s complement form: By complementing each bit in a signed binary integer,
the 1’s complement of a number can be derived. A result is a negative number
when a positive number is complemented by 1. Similar to this, complementing a
negative number by 1 results in a positive number.

55₁₀ = 00110111₂
−55₁₀ = 11001000₂

 2’s complement form: By adding one to the signed binary number’s 1’s complement, a binary number can be converted to its 2’s complement. Therefore, taking the 2’s complement of a positive number gives the corresponding negative number, and taking the 2’s complement of a negative number gives back the positive number.

−55₁₀: 11001000 + 1 (1’s complement + 1 = 2’s complement)

−55₁₀ = 11001001₂

FIXED POINT REPRESENTATION OF NEGATIVE NUMBER:

Consider the number -2.5 with fixed<w,b>, width w = 4 bits and binary point b = 1 (i.e. the binary point is at position 1). First, represent 2.5 in binary, then find its 2’s complement, and you will get the binary fixed-point representation of -2.5.
2.5₁₀ = 0101₂ (i.e. 010.1)
-2.5₁₀ = 1010₂ + 1 (1’s complement + 1 = 2’s complement)
-2.5₁₀ = 1011₂
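
The same example can be reproduced by scaling by 2^b and storing the scaled integer in w-bit 2's complement, as in this minimal Python sketch (variable names are illustrative):

w, b = 4, 1
scaled = int(-2.5 * (2 ** b))      # -5
raw = scaled % (2 ** w)            # 2's complement wrap-around: 11
print(format(raw, f'0{w}b'))       # 1011
# decode: subtract 2^w if the sign bit is set, then divide by 2^b
print((raw - 2 ** w) / (2 ** b) if raw >= 2 ** (w - 1) else raw / (2 ** b))  # -2.5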
1’S COMPLEMENT REPRESENTATION RANGE:

One bit is essentially used as a sign bit for 1’s complement numbers, leaving only 7 bits to store the actual magnitude in an 8-bit number.
Therefore, the biggest number is just 127 (anything greater would require 8 magnitude bits, making it appear to be a negative number).
The smallest value is likewise -127, not -128, as checked below.
1’s complement:
127 = 01111111 : 1’s complement is 10000000
128 = 10000000 : 1’s complement is 01111111
We can see that storing -128 in 1’s complement is impossible (its pattern would be 01111111, in which the top bit is unset, so it looks like a positive number).
The 1’s complement range is -127 to 127.

2’S COMPLEMENT REPRESENTATION RANGE:

Additionally, one bit in 2’s complement numbers is effectively used as a sign bit, leaving only 7 bits to store the actual magnitude in an 8-bit integer.
2’s complement:
127 = 01111111 : 2’s complement is 10000001 (-127)
128 = 10000000 : 2’s complement is 10000000 (-128)
We can see that we can store -128 in 2’s complement.
The 2’s complement range is -128 to 127.

ADVANTAGES OF FIXED-POINT REPRESENTATION:

 Integer representation and fixed-point numbers are indeed close relatives.
 Because of this, fixed-point numbers can be calculated using all the arithmetic operations a computer can perform on integers.
 They are just as simple and effective as computer integer arithmetic.
 To conduct real-number arithmetic using the fixed-point format, we can reuse all the hardware designed for integer arithmetic.

DISADVANTAGES OF FIXED-POINT REPRESENTATION:

 Loss in range and precision when compared to floating-point representations.
ADDITION AND SUBTRACTION OF SIGNED NUMBERS

FLOATING POINT ADDITION:-

To understand floating point addition, first we look at the addition of real numbers in decimal, as the same logic applies in both cases.
For example,
we have to add 1.1 * 10^3 and 50.

We cannot add these numbers directly. First, we need to align the exponents and then we can add the significands.
After aligning the exponents, we get 50 = 0.05 * 10^3
Now adding the significands, 0.05 + 1.1 = 1.15
So, finally we get (1.1 * 10^3 + 50) = 1.15 * 10^3

Here, notice that we shifted 50 and made it 0.05 to add these numbers.

Now let us take an example of floating point number addition.

We follow these steps to add two numbers:


1. Align the significands
2. Add the significands
3. Normalize the result
Let the two numbers be
x = 9.75
y = 0.5625
Converting them into 32-bit floating point representation,
9.75’s representation in 32-bit format = 0 10000010 00111000000000000000000
0.5625’s representation in 32-bit format = 0 01111110 00100000000000000000000

Now we take the difference of the exponents to know how much shifting is required.
(10000010 − 01111110)₂ = (4)₁₀
Now, we shift the mantissa of the lesser number to the right by 4 units.
Mantissa of 0.5625 = 1.00100000000000000000000
(note that the 1 before the binary point is implicit in the 32-bit representation)
Shifting right by 4 units, we get 0.00010010000000000000000
Mantissa of 9.75 = 1. 00111000000000000000000
Adding mantissa of both
0. 00010010000000000000000
+ 1. 00111000000000000000000
————————————————-
1. 01001010000000000000000
In the final answer, we take the exponent of the bigger number.
So, the final answer consists of:
Sign bit = 0

Exponent of bigger number = 10000010


Mantissa = 01001010000000000000000

32-bit representation of answer = x + y = 0 10000010 01001010000000000000000.
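
The worked example can be cross-checked with a short Python sketch that packs each value into IEEE 754 single precision and prints the sign, exponent and mantissa fields (the helper name float32_fields is illustrative):

import struct

def float32_fields(x):
    (bits,) = struct.unpack('>I', struct.pack('>f', x))
    sign = bits >> 31
    exponent = (bits >> 23) & 0xFF
    mantissa = bits & 0x7FFFFF
    return f"{sign} {exponent:08b} {mantissa:023b}"

print(float32_fields(9.75))            # 0 10000010 00111000000000000000000
print(float32_fields(0.5625))          # 0 01111110 00100000000000000000000
print(float32_fields(9.75 + 0.5625))   # 0 10000010 01001010000000000000000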

FLOATING POINT SUBTRACTION

Subtraction is similar to addition, with some differences: we subtract the mantissas instead of adding them, and in the sign bit we put the sign of the greater number.
Let the two numbers be
x = 9.75
y = – 0.5625
Converting them into 32-bit floating point representation
9.75’s representation in 32-bit format = 0 10000010 00111000000000000000000
– 0.5625’s representation in 32-bit format = 1 01111110 00100000000000000000000

Now, we find the difference of the exponents to know how much shifting is required.
(10000010 − 01111110)₂ = (4)₁₀

Now, we shift the mantissa of the lesser number to the right by 4 units.

Mantissa of – 0.5625 = 1.00100000000000000000000

(note that the 1 before the binary point is implicit in the 32-bit representation)
Shifting right by 4 units, 0.00010010000000000000000
Mantissa of 9.75= 1. 00111000000000000000000

Subtracting the smaller mantissa from the bigger one

  1. 00111000000000000000000
− 0. 00010010000000000000000
————————————————
  1. 00100110000000000000000

Sign bit of bigger number = 0


So, finally the answer = x – y = 0 10000010 00100110000000000000000
MULTIPLICATION OF POSITIVE NUMBERS
SIGNED OPERAND MULTIPLICATION ALGORITHM
Multiplication of two fixed-point binary numbers in signed-magnitude representation is done with a process of successive shift and add operations.

In the multiplication process we consider successive bits of the multiplier, least significant bit first.
If the multiplier bit is 1, the multiplicand is copied down; else 0's are copied down.
The numbers copied down in successive lines are shifted one position to the left from the previous number.
Finally, the numbers are added and their sum forms the product.
The sign of the product is determined from the signs of the multiplicand and multiplier. If they are alike, the sign of the product is positive, else negative.

HARDWARE IMPLEMENTATION :
Following components are required for the Hardware Implementation of multiplication
algorithm :
1. Registers:
Two Registers B and Q are used to store multiplicand and multiplier respectively.
Register A is used to store partial product during multiplication.
Sequence Counter register (SC) is used to store number of bits in the multiplier.

2. Flip Flop:
To store sign bit of registers we require three flip flops (A sign, B sign and Q
sign). Flip flop E is used to store carry bit generated during partial product
addition.

3. Complement and Parallel adder:

This hardware unit is used in calculating the partial product, i.e., it performs the required addition.

FLOWCHART OF MULTIPLICATION:

1. Initially multiplicand is stored in B register and multiplier is stored in Q register.


2. The signs of registers B (Bs) and Q (Qs) are compared using XOR functionality (i.e., if both signs are alike, the output of the XOR operation is 0, otherwise 1) and the output is stored in As (sign of the A register).
Note: Initially 0 is assigned to register A and to flip-flop E. The sequence counter is initialized with the value n, where n is the number of bits in the multiplier.
3. Now the least significant bit of the multiplier is checked. If it is 1, the content of register A is added to the multiplicand (register B), the result is assigned to register A, and the carry bit goes into flip-flop E. The content of E A Q is then shifted right by one position, i.e., the content of E is shifted into the most significant bit (MSB) of A and the least significant bit of A is shifted into the most significant bit of Q.
4. If Qn = 0, only the shift-right operation on the content of E A Q is performed, in a similar fashion.
5. The content of the sequence counter is decremented by 1.
6. Check the content of the sequence counter (SC); if it is 0, end the process, and the final product is present in registers A and Q; else repeat the process.

Example:
Multiplicand = 10111
Multiplier = 10011
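
A minimal Python sketch of this shift-and-add scheme on the magnitudes from the example (the sign bit would be handled separately as the XOR of the two signs; the function name is illustrative):

def shift_add_multiply(multiplicand, multiplier):
    product = 0
    shift = 0
    while multiplier:
        if multiplier & 1:                 # multiplier bit is 1: add the shifted multiplicand
            product += multiplicand << shift
        multiplier >>= 1                   # examine the next multiplier bit
        shift += 1
    return product

b, q = 0b10111, 0b10011                    # magnitudes from the example above
print(format(shift_add_multiply(b, q), 'b'))   # 110110101 (23 * 19 = 437)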
BOOTH MULTIPLICATION ALGORITHM
Booth's algorithm gives a procedure for multiplying binary integers in signed 2’s complement representation in an efficient way, i.e., fewer additions/subtractions are required. It operates on the fact that strings of 0’s in the multiplier require no addition, just shifting, and a string of 1’s in the multiplier from bit weight 2^k down to weight 2^m can be treated as 2^(k+1) − 2^m. As in all multiplication schemes, Booth's algorithm requires examination of the multiplier bits and shifting of the partial product. Prior to the shifting, the multiplicand may be added to the partial product, subtracted from the partial product, or left unchanged according to the following rules:
1. The multiplicand is subtracted from the partial product upon encountering the first least significant 1 in a string of 1’s in the multiplier.
2. The multiplicand is added to the partial product upon encountering the first 0 (provided that there was a previous ‘1’) in a string of 0’s in the multiplier.
3. The partial product does not change when the multiplier bit is identical to the previous multiplier bit.

HARDWARE IMPLEMENTATION OF BOOTH'S ALGORITHM – The hardware implementation of Booth's algorithm requires the register configuration shown in the figure below.
BOOTH’S ALGORITHM FLOWCHART –

We name the registers A, B and Q as AC, BR and QR respectively. Qn designates the least significant bit of the multiplier in register QR. An extra flip-flop Qn+1 is appended to QR to facilitate a double inspection of the multiplier. The flowchart for Booth's algorithm is shown below.

AC and the appended bit Qn+1 are initially cleared to 0 and the sequence counter SC is set to a number n equal to the number of bits in the multiplier. The two bits of the multiplier in Qn and Qn+1 are inspected. If the two bits are equal to 10, it means that the first 1 in a string of 1's has been encountered. This requires subtraction of the multiplicand from the partial product in AC. If the two bits are equal to 01, it means that the first 0 in a string of 0’s has been encountered. This requires the addition of the multiplicand to the partial product in AC. When the two bits are equal, the partial product does not change. An overflow cannot occur because the addition and subtraction of the multiplicand follow each other. As a consequence, the two numbers that are added always have opposite signs, a condition that excludes an overflow. The next step is to shift right the partial product and the multiplier (including Qn+1). This is an arithmetic shift right (ashr) operation which shifts AC and QR to the right and leaves the sign bit in AC unchanged. The sequence counter is decremented and the computational loop is repeated n times. The product of negative numbers matters here: while multiplying negative numbers we need to find the 2’s complement of the number to change its sign, because it is easier to add than to perform binary subtraction. The product of two negative numbers is demonstrated below along with the 2’s complement.

Product is calculated as follows:


Product = AC QR
Product = 0010 0011 = 35
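
A minimal Python sketch of the algorithm just described, for n-bit 2's complement operands (the function name booth_multiply is illustrative); it reproduces the (-7) x (-5) = 35 result above:

def booth_multiply(m, q, n):
    """Multiply two n-bit 2's complement patterns m (BR) and q (QR)."""
    mask = (1 << n) - 1
    ac, qr, q_extra = 0, q & mask, 0           # AC, QR and the appended bit Q(n+1)
    for _ in range(n):
        pair = (qr & 1, q_extra)
        if pair == (1, 0):                     # first 1 of a string: AC <- AC - BR
            ac = (ac - m) & mask
        elif pair == (0, 1):                   # first 0 after a string of 1s: AC <- AC + BR
            ac = (ac + m) & mask
        # arithmetic shift right of AC, QR, Q(n+1), keeping AC's sign bit
        q_extra = qr & 1
        combined = (ac << n) | qr
        sign = ac >> (n - 1)
        combined = (combined >> 1) | (sign << (2 * n - 1))
        ac, qr = combined >> n, combined & mask
    product = (ac << n) | qr                   # 2n-bit result held in AC:QR
    return product - (1 << 2 * n) if product >> (2 * n - 1) else product

print(booth_multiply(-7 & 0xF, -5 & 0xF, 4))   # 35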
ADVANTAGES:

1. Faster than traditional multiplication: Booth’s algorithm is faster than traditional multiplication methods, requiring fewer steps to produce the same result.

2. Efficient for signed numbers: The algorithm is designed specifically for multiplying signed binary numbers, making it a more efficient method for multiplication of signed numbers than traditional methods.

3. Lower hardware requirement: The algorithm requires fewer hardware resources than traditional multiplication methods, making it more suitable for applications with limited hardware resources.

4. Widely used in hardware: Booth’s algorithm is widely used in hardware implementations of multiplication operations, including digital signal processors, microprocessors, and FPGAs.

DISADVANTAGES:

1. Complex to understand: The algorithm is more complex to understand and implement than traditional multiplication methods.

2. Limited applicability: The algorithm is only applicable for multiplication of signed binary numbers, and cannot be used for multiplication of unsigned numbers or numbers in other formats without additional modifications.

3. Higher latency: The algorithm requires multiple iterations to calculate the result of a single multiplication operation, which increases the latency or delay in the calculation of the result.

4. Higher power consumption: The algorithm consumes more power compared to traditional multiplication methods, especially for larger inputs.

APPLICATION OF BOOTH’S ALGORITHM:

1. Microprocessors and computer chips: Booth's algorithm is used in the hardware implementation of arithmetic logic units (ALUs) inside microprocessors and computer chips. These components are responsible for performing arithmetic and logical operations on binary data. Efficient multiplication is essential in many applications, including scientific computing, graphics processing, and cryptography. Booth's algorithm reduces the number of bit shifts and additions needed to perform a multiplication, resulting in faster execution and better overall performance.

2. Digital Signal Processing (DSP): DSP applications frequently involve complex mathematical operations such as filtering and convolution. Multiplying large binary numbers is a fundamental operation in these tasks. Booth's algorithm allows DSP systems to perform multiplications more efficiently, enabling real-time processing of audio, video, and other kinds of signals.

3. Hardware Accelerators: Many specialized hardware accelerators are designed to perform specific tasks more efficiently than general-purpose processors. Booth's algorithm can be incorporated into these accelerators to speed up multiplication operations in applications such as image processing, neural networks, and machine learning.

4. Cryptography: Cryptographic algorithms, such as those used in encryption and digital signatures, often involve modular exponentiation, which requires efficient multiplication of large numbers. Booth's algorithm can be used to speed up the modular multiplication step in these algorithms, improving the overall efficiency of cryptographic operations.

5. High-Performance Computing (HPC): In scientific simulations and numerical computations, large-scale multiplications are frequently encountered. Booth's algorithm can be implemented in hardware or software to optimize these multiplication operations and enhance the overall performance of HPC systems.

6. Embedded Systems: Embedded systems often have limited resources in terms of processing power and memory. By using Booth's algorithm, designers can optimize multiplication operations in these systems, allowing them to run more efficiently while consuming less energy.

7. Network Packet Processing: Network devices and routers often need to perform computations on packet headers and payloads. Multiplication operations are commonly used in these computations, and Booth's algorithm can help reduce processing time and power consumption in these devices.

8. Digital Filters and Equalizers: Digital filters and equalizers in applications such as audio processing and communication systems require efficient multiplication of coefficients with input samples. Booth's algorithm can be used to speed up these multiplications, leading to faster and more accurate filtering operations.
DIVISION ALGORITHM
The division of two fixed-point binary numbers in the signed-magnitude representation is done by a cycle of successive compare, shift, and subtract operations.
Binary division is easier than decimal division because the quotient digits are either 0 or 1. Also, there is no need to estimate how many times the divisor fits into the dividend or a partial remainder.

HARDWARE IMPLEMENTATION :
The hardware implementation for the division operation is identical to that required for multiplication and consists of the following components –
 Here, register B is used to store the divisor, and the double-length dividend is stored in registers A and Q.
 The information about the relative magnitudes is given in E.
 A sequence counter register (SC) is used to store the number of bits in the dividend.

Flowchart of Division :
 Initially, the dividend is in A & Q and the divisor is in B.
 The sign of the result is transferred into the sign flip-flop Qs, to become part of the quotient. Then a constant is set into the SC to specify the number of bits in the quotient.
 Since an operand must be saved with its sign, one bit of the word is occupied by the sign, and the magnitude consists of n-1 bits.
 The divide-overflow condition is checked by subtracting the divisor in B from the upper half of the dividend stored in A. If A ≥ B, DVF is set and the operation is terminated prematurely. If A < B, no overflow condition occurs and so the value of the dividend is restored by adding B back to A.
 The division of the magnitudes starts by shifting the dividend in AQ to the left, with the high-order bit shifted into E.
(Note – If the bit shifted into E is 1, we know that EA > B, since EA comprises a 1 followed by n-1 bits whereas B comprises only n-1 bits). In this case, B must be subtracted from EA, and 1 is inserted into Q as the quotient bit.
 If the shift-left operation (shl) inserts a 0 into E, the divisor is subtracted by adding its 2’s complement value and the carry is transferred into E. If E = 1, it means that A ≥ B; thus, the quotient bit Q0 is set to 1. If E = 0, it means that A < B, and the original number is restored by adding B to A.
 Now, this process is repeated with register A containing the partial remainder.
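
A minimal Python sketch of this restoring (compare, shift, subtract) scheme for the magnitudes only (the quotient sign would be the XOR of the operand signs; names and the sample operands are illustrative):

def restoring_divide(dividend, divisor, n):
    """Divide an up-to-2n-bit dividend by an n-bit divisor; return (quotient, remainder)."""
    if divisor == 0:
        raise ZeroDivisionError
    a, q = dividend >> n, dividend & ((1 << n) - 1)   # A holds the high half, Q the low half
    if a >= divisor:
        raise OverflowError("divide overflow (DVF): quotient will not fit in n bits")
    for _ in range(n):
        a = (a << 1) | (q >> (n - 1))                 # shift A,Q left by one bit
        q = (q << 1) & ((1 << n) - 1)
        if a >= divisor:                              # EA >= B: subtract divisor, quotient bit = 1
            a -= divisor
            q |= 1
        # else: nothing was subtracted, so A is effectively "restored" and the quotient bit stays 0
    return q, a

print(restoring_divide(0b00011010, 0b0101, 4))        # (5, 1): 26 / 5 = 5 remainder 1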
FLOATING POINT NUMBERS AND THEIR ARITHMETIC
OPERATIONS

When you have to represent very small or very large numbers, a fixed-point representation will not do; accuracy will be lost. Therefore, you have to look at floating-point representations, where the binary point is assumed to be floating. When you consider a decimal number 12.34 * 10^7, this can also be treated as 0.1234 * 10^9, where 0.1234 is the fixed-point mantissa. The other part represents the exponent value, and indicates that the actual position of the point is 9 positions to the right of the indicated point in the fraction. Since the binary point can be moved to any position and the exponent value adjusted appropriately, it is called a floating-point representation.
The IEEE (Institute of Electrical and Electronics Engineers) has produced a standard for floating point arithmetic. This standard specifies how single precision (32 bit) and double precision (64 bit) floating point numbers are to be represented, as well as how arithmetic should be carried out on them. The IEEE single precision floating point standard representation requires a 32 bit word, which may be represented as numbered from 0 to 31, left to right. The first bit is the sign bit, S, the next eight bits are the exponent bits, ‘E’, and the final 23 bits are the fraction ‘F’. Instead of the signed exponent E, the value stored is an unsigned integer E’ = E + 127, called the excess-127 format. Therefore, E’ is in the range 0 ≤ E’ ≤ 255.

S  E’E’E’E’E’E’E’E’  FFFFFFFFFFFFFFFFFFFFFFF

0  1               8  9                    31

The value V represented by the word may be determined as follows:

 If E’ = 255 and F is nonzero, then V = NaN (“Not a number”)

 If E’ = 255 and F is zero and S is 1, then V = -Infinity
 If E’ = 255 and F is zero and S is 0, then V = Infinity
 If 0 < E’ < 255 then V = (-1)**S * 2**(E’-127) * (1.F) where “1.F” is intended to represent the binary number created by prefixing F with an implicit leading 1 and a binary point.
 If E’ = 0 and F is nonzero, then V = (-1)**S * 2**(-126) * (0.F). These are “unnormalized” (denormalized) values.
 If E’ = 0 and F is zero and S is 1, then V = -0
 If E’ = 0 and F is zero and S is 0, then V = 0
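
These rules translate directly into a minimal Python decoder for a 32-bit word (the function name is illustrative; the sample literal is the 9.75 pattern from the addition example earlier in this unit):

def decode_ieee754_single(bits):
    s = bits >> 31
    e = (bits >> 23) & 0xFF
    f = bits & 0x7FFFFF
    if e == 255:
        return float('nan') if f else (float('-inf') if s else float('inf'))
    if e == 0:
        if f == 0:
            return -0.0 if s else 0.0
        return (-1) ** s * 2 ** -126 * (f / 2 ** 23)       # unnormalized: 0.F
    return (-1) ** s * 2 ** (e - 127) * (1 + f / 2 ** 23)  # normalized: 1.F

print(decode_ieee754_single(0b0_10000010_00111000000000000000000))   # 9.75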

ARITHMETIC UNIT

Arithmetic operations on floating point numbers consist of addition, subtraction, multiplication and division. The operations are done with algorithms similar to those used on sign-magnitude integers (because of the similarity of representation) — for example, only add numbers of the same sign. If the numbers are of opposite sign, we must do subtraction.
1. ADDITION

Example on decimal value given in scientific notation:

3.25 x 10 ** 3
+ 2.63 x 10 ** -1
—————–
first step: align decimal points

second step: add

3.25 x 10 ** 3
+ 0.000263 x 10 ** 3
——————–
3.250263 x 10 ** 3
(presumes use of infinite precision, without regard for accuracy)

third step: normalize the result (already normalized!)

Example on floating pt. value given in binary:

.25 = 0 01111101 00000000000000000000000

100 = 0 10000101 10010000000000000000000


To add these fl. pt. representations,

step 1: align radix points

shifting the mantissa left by 1 bit decreases the exponent by 1

shifting the mantissa right by 1 bit increases the exponent by 1

we want to shift the mantissa right, because the bits that fall off the end should come
from the least significant end of the mantissa

-> choose to shift the .25, since we want to increase its exponent.
-> shift by 10000101
   − 01111101
   ————————
     00001000 (8) places.

0 01111101 00000000000000000000000 (original value)


0 01111110 10000000000000000000000 (shifted 1 place)
(note that hidden bit is shifted into msb of mantissa)
0 01111111 01000000000000000000000 (shifted 2 places)
0 10000000 00100000000000000000000 (shifted 3 places)
0 10000001 00010000000000000000000 (shifted 4 places)
0 10000010 00001000000000000000000 (shifted 5 places)

0 10000011 00000100000000000000000 (shifted 6 places)


0 10000100 00000010000000000000000 (shifted 7 places)
0 10000101 00000001000000000000000 (shifted 8 places)

step 2: add (don’t forget the hidden bit for the 100)

0 10000101 1.10010000000000000000000 (100)


+ 0 10000101 0.00000001000000000000000 (.25)
—————————————
0 10000101 1.10010001000000000000000

step 3: normalize the result (get the “hidden bit” to be a 1)


It already is for this example.

result is 0 10000101 10010001000000000000000

2. SUBTRACTION

Same as addition as far as alignment of radix points.

Then the algorithm for subtraction of sign-magnitude numbers takes over.

Before subtracting,
compare magnitudes (don’t forget the hidden bit!)
change the sign bit if the order of operands is changed.

Don’t forget to normalize the number afterward.

3. MULTIPLICATION

Example on decimal values given in scientific notation:

3.0 x 10 ** 1
x 0.5 x 10 ** 2
—————–

Algorithm: multiply mantissas
add exponents

3.0 x 10 ** 1
x 0.5 x 10 ** 2
—————–
1.50 x 10 ** 3

Example in binary: Consider a mantissa that is only 4 bits.


0 10000100 0100
x 1 00111100 1100

4. DIVISION
It is similar to multiplication.
do unsigned division on the mantissas (don’t forget the hidden bit)
subtract TRUE exponents

The organization of a floating-point adder unit and the algorithm is given below.
The floating point multiplication algorithm is given below. A similar algorithm based on the
steps discussed before can be used for division.
FUNDAMENTAL CONCEPTS: EXECUTION OF A
COMPLETE INSTRUCTION

THE INSTRUCTION CYCLE –

Each phase of the Instruction Cycle can be decomposed into a sequence of elementary micro-operations: there is one sequence each for the Fetch, Indirect, Execute and Interrupt Cycles.

The Indirect Cycle is always followed by the Execute Cycle. The Interrupt Cycle is always
followed by the Fetch Cycle. For both fetch and execute cycles, the next cycle depends on
the state of the system.
We assume a new 2-bit register called the Instruction Cycle Code (ICC). The ICC designates the state of the processor in terms of which portion of the cycle it is in:-

00 : Fetch Cycle
01 : Indirect Cycle
10 : Execute Cycle
11 : Interrupt Cycle
At the end of each cycle, the ICC is set appropriately. The flowchart of the Instruction Cycle describes the complete sequence of micro-operations, depending only on the instruction sequence and the interrupt pattern (this is a simplified example). The operation of the processor is described as the performance of a sequence of micro-operations.

DIFFERENT INSTRUCTION CYCLES:

1. The Fetch Cycle –


At the beginning of the fetch cycle, the address of the next instruction to be executed
is in the Program Counter (PC).

 Step 1: The address in the program counter is moved to the memory address
register (MAR), as this is the only register which is connected to address lines of
the system bus.

 Step 2: The address in MAR is placed on the address bus, now the control unit
issues a READ command on the control bus, and the result appears on the data
bus and is then copied into the memory buffer register (MBR). Program counter
is incremented by one, to get ready for the next instruction. (These two actions
can be performed simultaneously to save time).
 Step 3: The content of the MBR is moved to the instruction register (IR).

Thus, a simple Fetch Cycle consists of three steps and four micro-operations.
Symbolically, we can write this sequence of events as follows:-

t1: MAR ← (PC)
t2: MBR ← Memory, PC ← (PC) + I
t3: IR ← (MBR)

Here ‘I’ is the instruction length. The notations (t1, t2, t3) represent successive time units. We assume that a clock is available for timing purposes and that it emits regularly spaced clock pulses. Each clock pulse defines a time unit; thus, all time units are of equal duration. Each micro-operation can be performed within the time of a single time unit.

First time unit: Move the contents of the PC to MAR.


Second time unit: Move contents of memory location specified by MAR to MBR.

Increment content of PC by I.

Third time unit: Move contents of MBR to IR.

Note: Second and third micro-operations both take place during the second time unit.
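
These register transfers can be mimicked in a few lines of Python on a toy machine (all names, addresses and the word-addressed memory with I = 1 are illustrative):

memory = {100: "ADD R1, X", 101: "STORE R1, Y"}
regs = {"PC": 100, "MAR": 0, "MBR": 0, "IR": 0}

# t1: MAR <- (PC)
regs["MAR"] = regs["PC"]
# t2: MBR <- Memory[MAR], PC <- (PC) + I   (both in the same time unit)
regs["MBR"] = memory[regs["MAR"]]
regs["PC"] += 1
# t3: IR <- (MBR)
regs["IR"] = regs["MBR"]

print(regs)   # IR now holds "ADD R1, X" and PC points at the next instruction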
2. The Indirect Cycle –
Once an instruction is fetched, the next step is to fetch the source operands. Here the source operand is fetched by indirect addressing (it could be fetched by any addressing mode; here it is done by indirect addressing). Register-based operands need not be fetched. Once the opcode is executed, a similar process may be needed to store the result in main memory. The following micro-operations take place: -

Step 1: The address field of the instruction is transferred to the MAR. This is used to fetch the address of the operand.

Step 2: The address field of the IR is updated from the MBR (so that it now contains a direct address rather than an indirect address).

Step 3: The IR is now in the same state as if indirect addressing had not occurred.

Note: Now the IR is ready for the execute cycle, but we skip that cycle for a moment to consider the Interrupt Cycle.

3. The Execute Cycle


The other three cycles (Fetch, Indirect and Interrupt) are simple and predictable. Each of them requires a simple, small and fixed sequence of micro-operations. In each case the same micro-operations are repeated each time around.
The Execute Cycle is different: for a machine with N different opcodes there are N different sequences of micro-operations that can occur.

Let's take a hypothetical example :-

Consider an add instruction:

We begin with the IR containing the ADD instruction.

Step 1: The address portion of the IR is loaded into the MAR.

Step 2: The referenced memory location is read into the MBR.

Step 3: Now, the contents of R and the MBR are added by the ALU.

Let's take a more complex example :-

Here, the content of location X is incremented by 1. If the result is 0, the next instruction will be skipped. The corresponding sequence of micro-operations will be :-

Here, the PC is incremented if (MBR) = 0. This test (is the MBR equal to zero or not) and action (the PC is incremented by 1) can be implemented as one micro-operation.
Note : This test-and-action micro-operation can be performed during the same time unit in which the updated value of the MBR is stored back to memory.

4. The Interrupt Cycle:


At the completion of the Execute Cycle, a test is made to determine whether any
enabled interrupt has occurred or not. If an enabled interrupt has occurred then
Interrupt Cycle occurs. The nature of this cycle varies greatly from one machine to
another.
Let's take a sequence of micro-operations:-

 Step 1: The contents of the PC are transferred to the MBR, so that they can be saved for return.
 Step 2: The MAR is loaded with the address at which the contents of the PC are to be saved, and the PC is loaded with the address of the start of the interrupt-processing routine.
 Step 3: The MBR, containing the old value of the PC, is stored in memory.

Note: In step 2, two actions are implemented as one micro-operation. However, since most processors provide multiple types of interrupts, it may take one or more additional micro-operations to obtain the save_address and the routine_address before they can be transferred to the MAR and PC, respectively.
USES OF DIFFERENT INSTRUCTION CYCLES :
Here are some uses of different instruction cycles:
1. Fetch cycle: This cycle retrieves the instruction from memory and loads it into
the processor’s instruction register. The fetch cycle is essential for the processor
to know what instruction it needs to execute.
2. Decode cycle: This cycle decodes the instruction to determine what operation it
represents and what operands it requires. The decode cycle is important for the
processor to understand what it needs to do with the instruction and what data it
needs to retrieve or manipulate.
3. Execute cycle: This cycle performs the actual operation specified by the
instruction, using the operands specified in the instruction or in other registers.
The execute cycle is where the processor performs the actual computation or
manipulation of data.
4. Store cycle: This cycle stores the result of the operation in memory or in a
register. The store cycle is essential for the processor to save the result of the
computation or manipulation for future use.

The advantages and disadvantages of the instruction cycle depend on various factors, such as
the specific CPU architecture and the instruction set used. However, here are some general
advantages and disadvantages of the instruction cycle:

ADVANTAGES:

1. Standardization: The instruction cycle provides a standard way for CPUs to execute instructions, which allows software developers to write programs that can run on multiple CPU architectures. This standardization also makes it easier for hardware designers to build CPUs that can execute a wide range of instructions.

2. Efficiency: By breaking down the instruction execution into multiple steps, the
CPU can execute instructions more efficiently. For example, while the CPU is
performing the execute cycle for one instruction, it can simultaneously fetch the
next instruction.

3. Pipelining: The instruction cycle can be pipelined, which means that multiple
instructions can be in different stages of execution at the same time. This improves
the overall performance of the CPU, as it can process multiple instructions
simultaneously.

DISADVANTAGES:

1. Overhead: The instruction cycle adds overhead to the execution of instructions, as each instruction must go through multiple stages before it can be executed. This overhead can reduce the overall performance of the CPU.
2. Complexity: The instruction cycle can be complex to implement, especially if the
CPU architecture and instruction set are complex. This complexity can make it
difficult to design, implement, and debug the CPU.

3. Limited parallelism: While pipelining can improve the performance of the CPU,
it also has limitations. For example, some instructions may depend on the results
of previous instructions, which limits the amount of parallelism that can be
achieved. This can reduce the effectiveness of pipelining and limit the overall
performance of the CPU.

ISSUES OF DIFFERENT INSTRUCTION CYCLES :


Here are some common issues associated with different instruction cycles:
1. Pipeline hazards: Pipelining is a technique used to overlap the execution of
multiple instructions by breaking them into smaller stages. However, pipeline
hazards occur when one instruction depends on the completion of a previous
instruction, leading to delays and reduced performance.
2. Branch prediction errors: Branch prediction is a technique used to anticipate
which direction a program will take when encountering a conditional branch
instruction. However, if the prediction is incorrect, it can result in wasted cycles
and decreased performance.
3. Instruction cache misses: Instruction cache is a fast memory used to store
frequently used instructions. Instruction cache misses occur when an instruction
is not found in the cache and needs to be retrieved from slower memory, resulting
in delays and decreased performance.
4. Instruction-level parallelism limitations: Instruction-level parallelism is the
ability of a processor to execute multiple instructions simultaneously. However,
this technique has limitations as not all instructions can be executed in parallel,
leading to reduced performance in some cases.
5. Resource contention: Resource contention occurs when multiple instructions
require the use of the same resource, such as a register or a memory location. This
can lead to delays and reduced performance if the processor is unable to resolve
the contention efficiently.
MULTIPLE BUS ORGANIZATION
Multiple bus organization in computer architecture is a design that allows multiple devices to work simultaneously. This reduces the time spent waiting and improves the computer's speed. The main advantage of multiple bus organization is the reduction in the number of cycles required for execution. In a multiple bus structure, one bus is used to fetch instructions and another is used to fetch data. In a single bus structure, by contrast, the same bus is shared by three units: memory, processor, and I/O units. In a multiple bus system, each processor-memory pair is linked by various redundant paths, which means that the failure of one or more paths can be tolerated, although it will degrade system performance. The main reason for having multiple buses in a computer design is to improve performance. Other advantages include:-

 Better connectivity.
 An increase in the size of the registers.

There are three types of bus lines: data bus, address bus, and control bus. Communication over each bus line is performed in cooperation with the others.

1. Single Bus Structure: In a single bus structure, one common bus is used to communicate between peripherals and the microprocessor. It has disadvantages due to the use of one common bus.

2. Double Bus Structure: In a double bus structure, one bus is used to fetch instructions while the other is used to fetch data required for execution. It overcomes the bottleneck of a single bus structure.
Differences between Single Bus and Double Bus Structure :

1. Single Bus Structure: The same bus is shared by three units (Memory, Processor, and I/O units).
   Double Bus Structure: Two independent buses link the various units together.

2. Single Bus Structure: One common bus is used for communication between peripherals and the processor.
   Double Bus Structure: Two buses are used, one for communication with peripherals and the other for the processor.

3. Single Bus Structure: The same memory address space is utilized by the I/O units.
   Double Bus Structure: Here, an I/O bus is used to connect the I/O units and the processor, and the other one, the memory bus, is used to connect the memory and the processor.

4. Single Bus Structure: Instructions and data are both transferred over the same bus.
   Double Bus Structure: Instructions and data are transferred over different buses.

5. Single Bus Structure: Its performance is low.
   Double Bus Structure: Its performance is high.

6. Single Bus Structure: The cost of a single bus structure is low.
   Double Bus Structure: The cost of a double bus structure is high.

7. Single Bus Structure: The number of cycles for execution is more.
   Double Bus Structure: The number of cycles for execution is less.

8. Single Bus Structure: Execution of the process is slow.
   Double Bus Structure: Execution of the process is fast.

9. Single Bus Structure: The number of registers associated is less.
   Double Bus Structure: The number of registers associated is more.

10. Single Bus Structure: At a time, a single operand can be read from the bus.
    Double Bus Structure: At a time, two operands can be read.

11. Single Bus Structure advantages: Less expensive; Simplicity.
    Double Bus Structure advantages: Better performance; Improved efficiency.

A bus is an important data-transfer mechanism used by most computers and smartphones to pass data back and forth across the entire system. The overall speed of the machine is directly affected by the types of buses it uses. Simple computer designs use a single bus structure for transferring data, while multiple bus organization uses multiple buses for enhanced performance.

In a multi-bus architecture, each pathway is suited for handling a particular type of information. In a single bus architecture, all the devices use a common bus for data transfer; thus, the system’s efficiency and performance are lower. In multiple bus organization, less time is wasted, and thus the speed and performance of the entire system improve. This is one of the key reasons for using multiple bus organization. Additionally, multiple bus organization also provides many choices for connecting devices to the computer, making it more compatible.

BENEFITS OF MULTIPLE BUS ORGANIZATION:-

Multiple bus organization is a vital mode for industrial systems. In it, several devices having different transfer rates can be connected while higher throughput is maintained. Some benefits of the architecture are as follows:

 Allows a larger number of devices to be connected to the computer.
 The system has faster execution, as many devices work together to ensure better performance.
 It is highly compatible with both older and newer devices.
 Multiple bus organization supports multi-core processors, transferring more data and information and minimizing wait time.

SOME MORE BENEFITS :-

 Multiple Bus Organization Improves Efficiency

In a single-bus architecture, all components including the central processing unit, memory and peripherals share a common bus. When many devices need the bus at the same time, this creates a state of conflict called bus contention; some wait for the bus while another has control of it. The waiting wastes time, slowing the computer down, as Engineering 360 explains. Multiple buses permit several devices to work simultaneously, reducing time spent waiting and improving the computer's speed. Performance improvements are the main reason for having multiple buses in a computer design.

 Additional Buses Allow Expansion

Having multiple buses available gives you more choices for connecting devices to your
computer, as hardware makers may offer the same component for more than one bus
type. As Digital Trends points out, most desktop PCs use the Serial Advanced
Technology Attachment interface for internal hard drives, but many external hard
drives and flash drives connect via USB. If your computer's SATA connections are all
used, the USB interface lets you connect additional storage devices.
 More Buses Means More Compatibility

As with all of a computer's components, bus designs evolve, with new types being
introduced every few years. For example, the PCI bus that supports video, network and
other expansion cards predates the newer PCIe interface, and USB has undergone
several major revisions. Having multiple buses that support equipment from different
eras lets you keep legacy equipment such as printers and older hard drives and add
newer devices as well.

 Multi-Core Requires Multiple Buses

A single central processing unit places heavy demands on the bus that carries memory data and peripheral traffic for hard drives, networks and printers; since the mid-2000s, however, most computers have adopted a multi-core model that requires additional buses. To keep each core busy and productive, the newer bus designs ferry increased amounts of information in and out of the microprocessor, keeping wait times to a minimum.

MULTI BUS ORGANIZATION OF DATA PATH

In single bus organization, only one data item can be transferred over the bus in a clock cycle.
To reduce the number of steps needed, most commercial processors provide multiple internal
paths that enable several transfers to take place in parallel.
Figure illustrates a three-bus structure used to connect the registers and the ALU of a processor.
All general-purpose registers are combined into a single block called the register file. The
register file in Figure is said to have three ports. There are two outputs, allowing the contents
of two different registers to be accessed simultaneously and have their contents placed on buses
A and B. The third port allows the data on bus C to be loaded into a third register during the
same clock cycle.
Buses A and B are used to transfer the source operands to the A and B inputs of the ALU, where an arithmetic or logic operation may be performed. The result is transferred to the destination over bus C. If needed, the ALU may simply pass one of its two input operands unmodified to bus C. We will call the ALU control signals for such an operation R=A or R=B. A second feature in the figure is the introduction of the Incrementer unit, which is used to increment the PC by 4. Using the Incrementer eliminates the need to add 4 to the PC using the main ALU, as was done in the single bus organization. The source for the constant 4 at the ALU input multiplexer is still useful: it can be used to increment other addresses, such as the memory addresses in Load Multiple and Store Multiple instructions.
Consider the three-operand instruction Add R4, R5, R6. The control sequence for executing this instruction is given below.

Action:
1. PCout, R=B, MARin, Read, IncPC
2. WMFC
3. MDRoutB, R=B, IRin
4. R4outA, R5outB, SelectA, Add, R6in, End

In step 1, the contents of the PC are passed through the ALU, using the R=B control
signal, and loaded into the MAR to start a memory read operation. At the same time the
PC is incremented by 4. Note that the value loaded into MAR is the original contents
of the PC. The incremented value is loaded into the PC at the end of the clock cycle and
will not affect the contents of MAR.

In step 2, the processor waits for MFC and loads the data received into MDR, then
transfers them to IR in step 3. Finally, the execution phase of the instruction requires
only one control step to complete, step 4. By providing more paths for data transfer a
significant reduction in the number of clock cycles needed to execute an instruction is
achieved.
HARDWIRED CONTROL

A hardwired control is a mechanism for producing control signals using Finite State Machines (FSMs). It is designed as a sequential logic circuit. The final circuit is constructed by physically connecting components such as gates, flip-flops, and decoders. Hence, it is named a hardwired controller.

The figure shows a 2-bit sequence counter, which is used to develop control signals. The
output obtained from these signals is decoded to generate the required signals in sequential
order.

The hardwired control consists of a combinational circuit that outputs the desired control signals for decoding and encoding functions. The instruction that is loaded into the IR is decoded by the instruction decoder. If the IR is an 8-bit register, then the instruction decoder generates 2^8 (256) lines.

Inputs to the encoder are given from the instruction step decoder, external inputs, and condition codes. All these inputs are used and the individual control signals are generated. The End signal is generated after the instruction finishes executing. Furthermore, it results in the resetting of the control step counter, making it ready to generate the control steps for the next instruction.
The major goal of implementing the hardwired control is to minimize the cost of the circuit
and to achieve greater efficiency in the operation speed. Some of the methods that have come
up for designing the hardwired control logic are as follows −

 Sequence Counter Method − This is the most convenient method employed to design
the controller of moderate complexity.
 Delay Element Method − This method is dependent on the use of clocked delay
elements for generating the sequence of control signals.
 State Table Method − This method involves the traditional algorithmic approach to designing the controller using the classical state table method.

In computer architecture, the control unit is responsible for directing the flow of data and
instructions within the CPU. There are two main approaches to implementing a control unit:
hardwired and micro-programmed.
A hardwired control unit is a control unit that uses a fixed set of logic gates and circuits to
execute instructions. The control signals for each instruction are hardwired into the control
unit, so the control unit has a dedicated circuit for each possible instruction. Hardwired
control units are simple and fast, but they can be inflexible and difficult to modify.
On the other hand, a micro-programmed control unit is a control unit that uses a microcode
to execute instructions. The microcode is a set of instructions that can be modified or updated,
allowing for greater flexibility and ease of modification. The control signals for each
instruction are generated by a microprogram that is stored in memory, rather than being
hardwired into the control unit.
Micro-programmed control units are slower than hardwired control units because they
require an extra step of decoding the microcode to generate control signals, but they are more
flexible and easier to modify. They are commonly used in modern CPUs because they allow
for easier implementation of complex instruction sets and better support for instruction set
extensions.
To execute an instruction, the control unit of the CPU must generate the required control
signal in the proper sequence. There are two approaches used for generating the control
signals in proper sequence as Hardwired Control unit and the Micro-programmed control
unit.
HARDWIRED CONTROL UNIT-
The control hardware can be viewed as a state machine that changes from one state to another
in every clock cycle, depending on the contents of the instruction register, the condition
codes, and the external inputs. The outputs of the state machine are the control signals. The
sequence of the operation carried out by this machine is determined by the wiring of the logic
elements and hence named “hardwired”.
 Fixed logic circuits that correspond directly to the Boolean expressions are used
to generate the control signals.
 Hardwired control is faster than micro-programmed control.
 A controller that uses this approach can operate at high speed.
 RISC architecture is based on the hardwired control unit.
MICRO-PROGRAMMED CONTROL UNIT

The control signals associated with operations are stored in special memory units inaccessible
by the programmer as Control Words.

Control signals are generated by a program that is similar to machine language programs.
The micro-programmed control unit is slower in speed because of the time it takes to fetch
microinstructions from the control memory.

SOME IMPORTANT TERMS


1. Control Word: A control word is a word whose individual bits represent
various control signals.
2. Micro-routine: A sequence of control words corresponding to the control
sequence of a machine instruction constitutes the micro-routine for that
instruction.
3. Micro-instruction: Individual control words in this micro-routine are referred
to as microinstructions.
4. Micro-program: A sequence of micro-instructions is called a micro-program,
which is stored in a ROM or RAM called a Control Memory (CM).
5. Control Store: the micro-routines for all instructions in the instruction set of a
computer are stored in a special memory called the Control Store.
TYPES OF MICRO-PROGRAMMED CONTROL UNIT – Based on the type of Control
Word stored in the Control Memory (CM), it is classified into two types :

1. HORIZONTAL MICRO-PROGRAMMED CONTROL UNIT :

The control signals are represented in decoded binary format, that is, 1 bit per control signal (CS).
Example: If 53 control signals are present in the processor, then 53 bits are required.
More than one control signal can be enabled at a time.

 It supports longer control words.
 It is used in parallel processing applications.
 It allows a higher degree of parallelism. If the degree is n, then n control signals can be enabled at a time.
 It requires no additional hardware (decoders). This means it is faster than the vertical micro-programmed control unit.
 It is more flexible than the vertical micro-programmed control unit.

2. VERTICAL MICRO-PROGRAMMED CONTROL UNIT :

The control signals are represented in encoded binary format. For N control signals, log2(N) bits are required.

 It supports shorter control words.
 It supports easy implementation of new control signals, therefore it is more flexible.
 It allows a low degree of parallelism, i.e., the degree of parallelism is either 0 or 1.
 It requires additional hardware (decoders) to generate the control signals, which implies it is slower than the horizontal micro-programmed control unit.
 It is less flexible than the horizontal micro-programmed control unit but more flexible than a hardwired control unit.
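
The difference in control-word width can be illustrated with a short Python sketch, using the 53-signal example above (the signal count is taken from that example; ceil(log2 N) is assumed for the vertical encoding):

import math

n_signals = 53
print(n_signals)                         # horizontal: 53 bits, any subset enabled at once
print(math.ceil(math.log2(n_signals)))   # vertical: 6 bits, one signal decoded at a time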
THE DIFFERENCES BETWEEN HARDWIRED AND MICRO-PROGRAMMED CONTROL UNITS:

Objective                 Hardwired Control Unit                   Micro-programmed Control Unit

Implementation            Fixed set of logic gates and circuits    Microcode stored in memory

Flexibility               Less flexible, difficult to modify       More flexible, easier to modify

Instruction Set           Supports limited instruction sets        Supports complex instruction sets

Complexity of Design      Simple design, easy to implement         Complex design, more difficult to implement

Speed                     Fast operation                           Slower operation due to microcode decoding

Debugging and Testing     Difficult to debug and test              Easier to debug and test

Size and Cost             Smaller size, lower cost                 Larger size, higher cost

Maintenance and
Upgradability             Difficult to upgrade and maintain        Easier to upgrade and maintain
