
DIGITAL LOGIC DESIGN AND COMPUTER ORGANIZATION

(DLDCO)

UNIT-1
Computer types
A computer can be defined as a fast electronic calculating machine
that accepts digitized input information (data), processes it according
to a list of internally stored instructions, and produces the resulting
output information.
The list of instructions is called a program, and the internal storage is
called computer memory.
The different types of computers are

1. Personal computers: - This is the most common type, found in
homes, schools, business offices etc. These are desktop computers
with processing and storage units along with various input and
output devices.
2. Notebook computers: - These are compact and portable versions
of the PC.
3. Workstations: - These have high-resolution input/output (I/O)
graphics capability, but with the same dimensions as a desktop
computer. These are used in engineering applications and interactive
design work.
4. Enterprise systems: - These are used for business data processing
in medium to large corporations that require much more computing
power and storage capacity than workstations. The Internet, together
with its associated servers, has become a dominant worldwide source
of all types of information.
5. Supercomputers: - These are used for the large-scale numerical
calculations required in applications such as weather forecasting.

Functional unit
A computer consists of five functionally independent main parts:
input, memory, arithmetic logic unit (ALU), output and control unit.

The input device accepts coded information as a source program, i.e.,
a high-level language program. This is either stored in the memory or
used immediately by the processor to perform the desired operations.
The program stored in the memory determines the processing steps.
Basically, the computer converts the source program to an object
program, i.e., into machine language.

[Figure: the input and output (I/O) units connect to the memory and the
processor; the processor contains the ALU and the control unit.]

Fig: Functional units of computer

Finally, the results are sent to the outside world through the output
device. All of these actions are coordinated by the control unit.

1. Input unit:
The source program (high-level language program, coded information,
or simply data) is fed to a computer through input devices; the
keyboard is the most common type. Whenever a key is pressed, the
corresponding character is translated into its equivalent binary code
and sent over a cable to either the memory or the processor.
Examples: joysticks, trackballs, mice and scanners are other input
devices.
2. Memory unit:
Its function is to store programs and data. It is basically of two types:
i) Primary memory and
ii) Secondary memory

i) Primary memory: This is the memory exclusively associated with the
processor; it operates at electronic speeds. Programs must be stored in
this memory while they are being executed. The memory contains a
large number of semiconductor storage cells, each capable of storing
one bit of information. These cells are processed in groups of fixed size
called words.
To provide easy access to a word in memory, a distinct address is
associated with each word location. Addresses are numbers that identify
memory locations.
The number of bits in each word is called the word length of the computer.
Programs must reside in the memory during execution. Instructions and
data can be written into the memory or read out under the control of
processor.
Memory in which any location can be reached in a short and fixed
amount of time after specifying its address is called random-access
memory (RAM).
The time required to access one word is called the memory access time.
Memory that is only readable by the user, and whose contents cannot be
altered, is called read-only memory (ROM); it typically holds fixed startup
programs and parts of the operating system.
Caches are small, fast RAM units coupled with the processor; they are
often contained on the same IC chip to achieve high performance.
Although primary storage is essential, it tends to be expensive.
ii) Secondary memory: This is used where large amounts of data and
programs have to be stored, particularly information that is accessed
infrequently.
Examples: magnetic disks and tapes, optical disks (i.e., CD-ROMs),
floppies etc.
3. Arithmetic logic unit (ALU):
Most of the computer's operations are executed in the ALU of the
processor: addition, subtraction, division, multiplication, etc. The
operands are brought into the ALU from memory and stored in high-
speed storage elements called registers. Then, according to the
instructions, the operation is performed in the required sequence.
The control unit and the ALU are many times faster than other devices
connected to a computer system. This enables a single processor to
control a number of external devices such as keyboards, displays,
magnetic and optical disks, sensors and other mechanical controllers.
4. Output unit:
The output unit is the counterpart of the input unit. Its basic function
is to send the processed results to the outside world.
Examples: printers, speakers, monitors etc.
5. Control unit:
It is effectively the nerve center that sends signals to other units
and senses their states. The actual timing signals that govern the
transfer of data between the input unit, processor, memory and output
unit are generated by the control unit.

Basic operational concepts


To perform a given task, an appropriate program consisting of a list
of instructions is stored in the memory. Individual instructions are
brought from the memory into the processor, which executes the
specified operations. Data to be processed are also stored in the memory.
Example: - Add LOCA, R0
This instruction adds the operand at memory location LOCA to the
operand in register R0 and places the sum into register R0. This
instruction requires the performance of several steps:
1. First, the instruction is fetched from the memory into the processor.
2. The operand at LOCA is fetched and added to the contents of R0.
3. Finally, the resulting sum is stored in register R0.

The preceding Add instruction combines a memory access operation
with an ALU operation. In some other types of computers, these two
operations are performed by separate instructions for performance
reasons:
Load LOCA, R1
Add R1, R0
Transfers between the memory and the processor are started by
sending the address of the memory location to be accessed to the memory
unit and issuing the appropriate control signals. The data are then
transferred to or from the memory.

The figure below shows how the memory and the processor can be
connected. In addition to the ALU and the control circuitry, the processor
contains a number of registers used for several different purposes.
The Instruction Register (IR): Holds the instruction that is currently
being executed. Its output is available to the control circuits, which
generate the timing signals that control the various processing elements
during the execution of the instruction.
The Program Counter (PC):
This is another specialized register that keeps track of execution of
a program. It contains the memory address of the next instruction to be
fetched and executed.
Besides the IR and PC, there are n general-purpose registers, R0 through Rn-1.
The other two registers which facilitate communication with memory are:
1. MAR – (Memory Address Register): - It holds the address of the
location to be accessed.
2. MDR – (Memory Data Register): - It contains the data to be written
into or read out of the addressed location.

[Figure: the processor contains the MAR, MDR, PC, IR, the control unit,
the ALU and n general-purpose registers R0 through Rn-1; the MAR and
MDR connect the processor to the memory.]

Fig: Connections between the processor and the memory

Operating steps are
1. Programs reside in the memory & usually get these through the I/P unit.

2. Execution of the program starts when the PC is set to point at the first
instruction of the program.

3. Contents of PC are transferred to MAR and a Read Control Signal is sent to the
memory.

4. After the time required to access the memory elapses, the addressed word is
read out of the memory and loaded into the MDR.

5. Now contents of MDR are transferred to the IR & now the instruction is ready
to be decoded and executed.

6. If the instruction involves an operation by the ALU, it is necessary to obtain the
required operands.

7. An operand in the memory is fetched by sending its address to the MAR and
initiating a read cycle.

8. When the operand has been read from the memory to the MDR, it is transferred
from MDR to the ALU.

9. After one or two such repeated cycles, the ALU can perform the desired
operation.

10. If the result of this operation is to be stored in the memory, the result is sent
to MDR.

11. Address of location where the result is stored is sent to MAR & a write cycle is
initiated.

12. The contents of PC are incremented so that PC points to the next instruction
that is to be executed.
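The operating steps above can be sketched as a toy fetch-execute loop. This is only an illustrative sketch, not a real machine: the memory layout, the ("ADD", address) instruction encoding and the HALT marker are all hypothetical.

```python
# Toy fetch-execute loop mirroring the numbered steps above.
# The memory maps addresses to either instructions or data words.
memory = {0: ("ADD", 100),   # instruction: add word at address 100 to R0
          1: ("HALT", None),
          100: 7}            # data operand at a hypothetical location 100

PC, IR, MAR, MDR = 0, None, None, None
R0 = 5

while True:
    MAR = PC                  # step 3: PC -> MAR, issue a Read
    MDR = memory[MAR]         # step 4: addressed word -> MDR
    IR = MDR                  # step 5: MDR -> IR, ready to decode
    PC += 1                   # step 12: PC points to the next instruction
    op, addr = IR
    if op == "HALT":
        break
    if op == "ADD":           # steps 6-9: fetch the operand, do the ALU op
        MAR = addr
        MDR = memory[MAR]
        R0 = R0 + MDR

print(R0)                     # 5 + 7 = 12
```

Each loop iteration mirrors steps 3-5 and 12; the ADD branch mirrors steps 6-9.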

Normal execution of a program may be preempted (temporarily interrupted) if some
device requires urgent servicing; to do this, the device raises an interrupt signal.
An interrupt is a request signal from an I/O device for service by the processor.
The processor provides the requested service by executing an appropriate interrupt
service routine.
Since the diversion may change the internal state of the processor, its state must be
saved in memory locations before servicing the interrupt. When the interrupt service
routine is completed, the state of the processor is restored so that the interrupted
program may continue.

Bus structure
A bus is the simplest and most common way of interconnecting the various
parts of a computer. To achieve a reasonable speed of operation, a computer
must be organized so that all its units can handle one full word of data
at a given time. A group of lines that serves as a connecting path for several
devices is called a bus.
In addition to the lines that carry the data, the bus must have lines for
address and control purposes. The simplest way to interconnect the units is
to use a single bus, as shown below.

INPUT MEMORY PROCESSOR OUTPUT

Fig: Single bus structure

Since the bus can be used for only one transfer at a time, only two
units can actively use the bus at any given time. Bus control lines are
used to arbitrate multiple requests for use of one bus.

The single bus structure is:
➢ Low cost
➢ Very flexible for attaching peripheral devices

A multiple bus structure certainly increases performance, but it also
increases the cost significantly.
The interconnected devices are not all of the same speed, which leads
to timing problems. This is solved by using buffer registers (sometimes
called cache registers). These buffers are electronic registers of small
capacity compared to the main memory, but of comparable speed.
The instructions from the processor are first loaded into these buffers,
and then the complete transfer of data takes place at a fast rate.

Software
A computer is an electronic device that can perform computations at a far
greater speed than an ordinary machine or the human mind. It is driven by
many entities, including the physical and tangible components that we can
touch or feel, called the Hardware, and the programs and commands that
drive the hardware, called the Software.
The Software refers to the set of instructions, fed in the form of programs,
that govern the computer system and operate the hardware components. For example:
• The antivirus that we use to protect our computer system is a type of
Software.
• The media players that we use to play multimedia files such as movies,
music etc. are Software.
• The Microsoft Office we use to edit the documents is a Software.
Depending on its use and area of implementation, Software can be divided into
3 major types:
1. System Software
2. Application Software
3. Utility Software

System Software

This is the software that allows the user to interact directly with the hardware
components of a computer system. As humans and machines follow different
languages, there has to be an interface that allows the users to interact with
the core system; this interface is provided by the system software. System
software can be called the main or alpha software of a computer system, as it
handles the major portion of running the hardware. System Software can be further
divided into four major types:
1. The Operating System – It is the main program that governs and
maintains the inter-cooperation of the components of a computer
system. For eg., Microsoft Windows, Linux, Mac OS etc.
2. The Language Processor – The hardware components present in the
computer system do not understand human language. There are three
types of languages involved in the world of human-machine interaction:

• Machine-Level Language: The machines only understand digital
signals or binary codes, i.e., the binary language consisting of
strings of 0s and 1s. It is a totally machine-dependent language.
• Assembly-Level Language: These are Low-Level Languages (LLL)
that form a correspondence between machine-level instructions
and general assembly-level statements. Assembly language uses a
mnemonic to represent each low-level machine instruction or
operation code, also called an op-code. For eg., the ADD
instruction is used to add two entities, the HALT instruction is
used to stop a process, etc. It is a machine-dependent language
and varies from processor to processor.
• High-Level Language: These are simple, English-like statements
that humans use to program, as they are easy to read and
understand. For eg., Java, C, C++, Python etc.

Machine-level language is very complex to understand and code;
therefore users prefer High-Level Languages (HLL) for coding.
These programs need to be converted into machine language so that
the computer can understand them and work accordingly. This
conversion is performed by the Language Processor, which comes in
three forms:

• Assembler: This language processor is used to convert assembly
language into machine-level language.
• Compiler: This language processor is used to convert High-Level
Language into machine-level language in one go, so execution
time is fast. Error detection is more difficult with a compiler.
Programming languages like C, C++ and Scala use a compiler.
• Interpreter: This language processor is also used to convert
High-Level Language into machine-level language, but line-by-line,
so execution time is slow. Error detection is easier with an
interpreter, as it reports a bug as soon as one is caught, and the
process is then restarted; this consumes extra memory.
Programming languages like Python, Ruby and Java use an
interpreter.

3. The Device Drivers – Device drivers are system software that act as
an interface between the various input-output devices and the user
or the operating system. For eg., printers and web cameras come with
a driver disk that needs to be installed into the system to make the
device run.
4. The BIOS – It stands for Basic Input Output System and is a small
firmware, that controls the peripheral or the input-output devices
attached to the system. This software is also responsible for starting the
OS or initiating the booting process.

Application Software

These are software used to accomplish a particular action or task. They
are dedicated software, dedicated to performing simple, single tasks.
For eg., a single piece of software cannot serve both a reservation system
and a banking system. These are divided into two types:
1. The General-Purpose Application Software: These are the types of
application software that come built-in and ready to use,
manufactured by a company. For eg.,
• Microsoft Excel – Used to prepare excel sheets.
• VLC Media Player – Used to play audio/video files.
• Adobe Photoshop – Used for designing and animation and many
more.
2. The Specific Purpose Application Software: These are the type of
software that is customizable and mostly used in real-time or business
environment. For eg.,
• Ticket Reservation System
• Healthcare Management System
• Hotel Management System
• Payroll Management System

Utility Software

These are the most basic type of software, which provide high utility to
the user and the system. They perform basic but everyday tasks. For eg.,
• Antivirus Software: These provide protection to the computer system
from unwanted malware and viruses. For e.g., Quick Heal, McAfee etc.
• Disk Defragmenter Tools: These help the users to analyse the bad sectors
of the disk and rearrange the files in a proper order.
• Text-editors: These help the users to take regular notes and create basic
text files. For eg., Notepad, Gedit etc.

Performance
The most important measure of the performance of a computer is how
quickly it can execute programs. The speed with which a computer
executes programs is affected by the design of its hardware. For best
performance, it is necessary to design the compilers, the machine
instruction set, and the hardware in a coordinated way.

The total time required to execute a program, called the elapsed time, is
a measure of the performance of the entire computer system. It is affected
by the speed of the processor, the disk and the printer. The time needed
by the processor to execute instructions is called the processor time.

Just as the elapsed time for the execution of a program depends on all
units in a computer system, the processor time depends on the hardware
involved in the execution of individual machine instructions. This
hardware comprises the processor and the memory which are usually
connected by the bus as shown in the figure.

[Figure: the processor and its cache memory, connected by a bus to the
main memory.]

Multiprocessor & Multicomputer
➢ Large computers that contain a number of processor units are
called multiprocessor systems.
➢ These systems either execute a number of different application
tasks in parallel or execute subtasks of a single large task in
parallel.
➢ All processors usually have access to a common memory, and hence
these are called shared-memory multiprocessor systems.
➢ The high performance of these systems comes with much increased
complexity and cost.
➢ In contrast to multiprocessor systems, it is also possible to use an
interconnected group of complete computers to achieve high total
computational power. These computers normally have access only to
their own memory units. When the tasks they are executing need to
communicate data, they do so by exchanging messages over a
communication network. This property distinguishes them from
shared-memory multiprocessors, leading to the name message-passing
multicomputers.

Computer Generations
The history of the computer goes back several decades, and there are
five definable generations of computers.
Each generation is defined by a significant technological development
that changes fundamentally how computers operate – leading to more
compact, less expensive, but more powerful, efficient and robust
machines.
1940 – 1956: First Generation – Vacuum Tubes
These early computers used vacuum tubes as circuitry and magnetic
drums for memory. As a result, they were enormous, literally taking up
entire rooms, and cost a fortune to run. Vacuum tubes were inefficient,
consumed huge amounts of electricity and generated a lot of heat, which
caused ongoing breakdowns.

These first-generation computers relied on ‘machine language’ (which is
the most basic programming language that can be understood by
computers). These computers were limited to solving one problem at a
time. Input was based on punched cards and paper tape. Output came
out on print-outs. The two notable machines of this era were the UNIVAC
and ENIAC machines; the UNIVAC was the first ever commercial computer,
purchased in 1951 by a business, the US Census Bureau.
1956 – 1963: Second Generation – Transistors
The replacement of vacuum tubes by transistors saw the advent of the
second generation of computing. Although first invented in 1947,
transistors weren’t used significantly in computers until the end of the
1950s. They were a big improvement over the vacuum tube, despite still
subjecting computers to damaging levels of heat. However, they were
hugely superior to the vacuum tubes, making computers smaller, faster,
cheaper and less heavy on electricity use. They still relied on punched
cards for input and printouts for output.
The language evolved from cryptic binary language to symbolic
(‘assembly’) languages. This meant programmers could create
instructions in words. About the same time high level programming
languages were being developed (early versions of COBOL and
FORTRAN). Transistor-driven machines were the first computers to store
instructions into their memories – moving from magnetic drum to
magnetic core ‘technology’. The early versions of these machines were
developed for the atomic energy industry.
1964 – 1971: Third Generation – Integrated Circuits
By this phase, transistors were now being miniaturised and put on
silicon chips (called semiconductors). This led to a massive increase in
speed and efficiency of these machines. These were the first computers
where users interacted using keyboards and monitors which interfaced
with an operating system, a significant leap up from the punch cards and
printouts. This enabled these machines to run several applications at
once using a central program which functioned to monitor memory.
As a result of these advances which again made machines cheaper and
smaller, a new mass market of users emerged during the ‘60s.
1972 – 2010: Fourth Generation – Microprocessors
This revolution can be summed up in one word: Intel. The chip-maker
developed the Intel 4004 chip in 1971, which placed all the computer
components (CPU, memory, input/output controls) onto a single chip.
What filled a room in the 1940s now fit in the palm of the hand. The Intel
chip housed thousands of integrated circuits. The year 1981 saw the first
ever computer (the IBM PC) specifically designed for home use, and 1984
saw the Macintosh introduced by Apple. Microprocessors even moved beyond the
realm of computers and into an increasing number of everyday products.
The increased power of these small computers meant they could be
linked, creating networks, which ultimately led to the development, birth
and rapid evolution of the Internet. Other major advances during this
period have been the Graphical user interface (GUI), the mouse and more
recently the astounding advances in lap-top capability and hand-held
devices.
2010: Fifth Generation – Artificial Intelligence
Computer devices with artificial intelligence are still in development, but
some of these technologies are beginning to emerge and be used such as
voice recognition.
AI is a reality made possible by using parallel processing and
superconductors. Looking to the future, computers will be radically
transformed again by quantum computation, molecular and nano
technology.
The essence of fifth generation will be using these technologies to
ultimately create machines which can process and respond to natural
language, and have capability to learn and organize themselves.

Data Representation
Binary Numbers Fixed Point Representation

We obviously need to represent both positive and negative
numbers. Three systems are used for representing such
numbers:
➢ Sign-and-magnitude
➢ 1’s-complement
➢ 2’s-complement
In all three systems, the leftmost bit is 0 for positive
numbers and 1 for negative numbers. Positive values have
identical representations in all systems, but negative values
have different representations. In the sign-and-magnitude
systems, negative values are represented by changing the
most significant bit from 0 to 1 in the corresponding
positive value. For example, +5 is represented by 0101, and
-5 is represented by 1101. In 1’s-complement
representation, negative values are obtained by
complementing each bit of the corresponding positive
number. Thus, the representation for -3 is obtained by
complementing each bit in the vector 0011 to yield 1100.
Clearly, the same operation, bit complementing, is done in
converting a negative number to the corresponding positive
value. Converting either way is referred to as forming the
1's-complement of a given number. Finally, in the 2's-
complement system, forming the 2's-complement of an n-bit
number is done by subtracting that number from 2^n.
Hence, the 2's complement of a number is obtained by
adding 1 to the 1's complement of that number.
Addition of Positive numbers:
Consider adding two 1-bit numbers. The results are shown
in figure. Note that the sum of 1 and 1 requires the 2-bit
vector 10 to represent the value 2. We say that the sum
is 0 and the carry-out is 1. In order to add multiple-bit
numbers, we use a method analogous to that used for
manual computation with decimal numbers. We add bit
pairs starting from the low-order (right) end of the bit
vectors, propagating carries toward the high-order (left)
end.

SIGN MAGNITUDE
--------------

A human-readable way of representing both positive and negative integers.

The hardware that does arithmetic on sign-magnitude integers is not fast,
and it is more complex than the hardware that does arithmetic on 1's comp.
and 2's comp. integers.

It uses 1 bit of the integer to represent the sign of the integer:

let a sign bit of 0 be positive,
                 1 be negative.

the rest of the integer is a magnitude, using the same encoding as
unsigned integers

example: 4 bits
0101 is 5

1101 is -5

to get the additive inverse of a number, just flip
(not, invert, complement, negate) the sign bit.

range: -(2**(n-1)) + 1 to 2**(n-1) - 1,
       where n is the number of bits

4 bits: n=4, -(2**3) + 1 to 2**3 - 1
             = -8 + 1 to 8 - 1
             = -7 to +7

because of the sign bit, there are 2 representations for 0.
This is a problem for hardware...

0000 is 0, 1000 is 0

the computer must do all calculations such that they
come out correctly and the same whichever representation is used.
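The encoding and its double zero can be seen in a small sketch; the helper names sm_encode and sm_decode are made up for illustration:

```python
def sm_encode(value, n=4):
    """Encode value as an n-bit sign-magnitude bit string."""
    sign = '1' if value < 0 else '0'
    return sign + format(abs(value), '0{}b'.format(n - 1))

def sm_decode(bits):
    """Decode a sign-magnitude bit string back to an integer."""
    mag = int(bits[1:], 2)
    return -mag if bits[0] == '1' else mag

print(sm_encode(5))                          # 0101
print(sm_encode(-5))                         # 1101
print(sm_decode('0000'), sm_decode('1000'))  # 0 0 -- two representations of zero
```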

ONE's COMPLEMENT
----------------

Historically important, and we use this representation to get
2's complement integers.

Now, nobody builds machines that are based on 1's comp. integers.
In the past, early computers built by Seymour Cray (while at CDC)
were based on 1's comp. integers.

positive integers use the same representation as unsigned.

00000 is 0
00111 is 7, etc.

negation (finding an additive inverse) is done by taking a bitwise
complement of the positive representation.

COMPLEMENT. INVERT. NOT. FLIP. NEGATE.
a logical operation done on a single bit:
the complement of 1 is 0.
the complement of 0 is 1.

-1 --> take +1,             00001
       complement each bit  11110

that is -1.

don't add or take away any bits.

EXAMPLE: 11100  this must be a negative number.
                to find out which, find the additive inverse!
         00011 is +3 by sight,
         so 11100 must be -3

things to notice:
1. any negative number will have a 1 in the MSB.
2. there are 2 representations for 0: 00000 and 11111.
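The bitwise negation described above can be sketched in one line; the helper name is illustrative:

```python
def ones_comp_negate(bits):
    """1's-complement negation: flip every bit of the bit string."""
    return ''.join('1' if b == '0' else '0' for b in bits)

print(ones_comp_negate('00001'))  # 11110, i.e. -1
print(ones_comp_negate('11100'))  # 00011, i.e. +3, so 11100 is -3
```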

TWO's COMPLEMENT
----------------

A variation on 1's complement that does NOT have 2 representations for
0. This makes the hardware that does arithmetic faster than for the
other representations.

a 3 bit example:

bit pattern:  100  101  110  111  000  001  010  011
1's comp.:     -3   -2   -1    0    0    1    2    3
2's comp.:     -4   -3   -2   -1    0    1    2    3

the negative values are all "slid" down by one, eliminating the
extra zero representation.

how to get an integer in 2's comp. representation:

positive values are the same as for sign/mag and 1's comp.
they will have a 0 in the MSB (but it is NOT a sign bit!)

positive: just write down the value as before
negative:
    take the positive value  00101 (+5)
    take the 1's comp.       11010 (-5 in 1's comp)
    add 1                   +    1
                            ------
                             11011 (-5 in 2's comp)

to get the additive inverse of a 2's comp integer:

1. take the 1's comp.
2. add 1

to add 1 without really knowing how to add:

start at the LSB; for each bit (working right to left),
while the bit is a 1, change it to a 0.
when a 0 is encountered, change it to a 1 and stop.
the rest of the bits remain the same as they were.

EXAMPLE:
what decimal value does the two's complement 110011 represent?

It must be a negative number, since the most significant bit
is a 1. Therefore, find the additive inverse:

  110011 (2's comp. ?)

  001100 (after taking the 1's complement)
+      1
--------
  001101 (2's comp. +13)

Therefore, its additive inverse (110011) must be -13.
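The whole procedure (take the 1's complement, then add 1 using the right-to-left flip rule above) can be sketched as follows; the helper name twos_comp is illustrative:

```python
def twos_comp(bits):
    """2's complement of a bit string: 1's complement, then add 1."""
    flipped = ['1' if b == '0' else '0' for b in bits]  # 1's complement
    for i in range(len(flipped) - 1, -1, -1):  # add 1, right to left
        if flipped[i] == '1':
            flipped[i] = '0'                   # 1 -> 0, keep moving left
        else:
            flipped[i] = '1'                   # first 0 -> 1, then stop
            break
    return ''.join(flipped)

print(twos_comp('00101'))   # 11011 (-5, as in the worked example)
print(twos_comp('110011'))  # 001101 (+13, so 110011 is -13)
```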

A LITTLE BIT ON ADDING
----------------------

we'll see how to really do this in the next chapter, but here's
a brief overview.

carry in   a   b  |  sum   carry out
    0      0   0  |   0       0
    0      0   1  |   1       0
    0      1   0  |   1       0
    0      1   1  |   0       1
    1      0   0  |   1       0
    1      0   1  |   0       1
    1      1   0  |   0       1
    1      1   1  |   1       1

a 0011
+b +0001
-- -----
sum 0100

it's really just like we do for decimal!

0 + 0 = 0
1 + 0 = 1
1 + 1 = 2, which is 10 in binary: the sum is 0, carry the 1.
1 + 1 + 1 = 3: the sum is 1, and carry a 1.
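The right-to-left bit-pair addition described here can be sketched as follows; the helper name add_bits is assumed, and the sum/carry rule follows the table above:

```python
def add_bits(a, b):
    """Add two equal-length bit strings, right to left, propagating carry.
    A final carry out of the leftmost position is dropped (overflow)."""
    carry, out = 0, []
    for x, y in zip(reversed(a), reversed(b)):
        total = int(x) + int(y) + carry   # 0, 1, 2 or 3
        out.append(str(total % 2))        # sum bit
        carry = total // 2                # carry-out into the next column
    return ''.join(reversed(out))

print(add_bits('0011', '0001'))  # 0100, matching the example above
```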

OVERFLOW
--------
sometimes a value cannot be represented in the limited number
of bits allowed. Examples:
unsigned, 3 bits: 8 would require at least 4 bits (1000)
sign mag., 4 bits: 8 would require at least 5 bits (01000)

when a value cannot be represented in the number of bits allowed,
we say that overflow has occurred. Overflow can occur when doing
arithmetic operations.

example: 3 bit unsigned representation

011 (3)
+ 110 (6)
---------
? (9) it would require 4 bits (1001) to represent
the value 9 in unsigned rep.
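A sketch of checking a result against an n-bit unsigned range; the helper name is illustrative:

```python
def overflows(value, n_bits):
    """True if value cannot be represented in n_bits unsigned bits."""
    return not (0 <= value <= 2**n_bits - 1)

print(overflows(3 + 6, 3))  # True: 9 needs 4 bits (1001)
print(overflows(3 + 4, 3))  # False: 7 fits in 3 bits (111)
```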

CHARACTER REPRESENTATION
------------------------

everything represented by a computer is represented by binary
sequences.

A common non-integer type that needs to be represented is characters.
We use standard encodings (binary sequences) to represent characters.

REMEMBER: bit patterns do NOT imply a representation

Many I/O devices work with 8-bit quantities.

A standard code, ASCII (American Standard Code for Information
Interchange), defines what character is represented by each sequence.

examples:
0100 0001 is 41 (hex) or 65 (decimal). It represents `A'
0100 0010 is 42 (hex) or 66 (decimal). It represents `B'

Different bit patterns are used for each different character
that needs to be represented.

The code has some nice properties. If the bit patterns are compared
(pretending they represent integers), then `A' < `B'.
This is good, because it helps with sorting things into alphabetical
order.

Notes: `a' (61 hex) is different than `A' (41 hex)
       `8' (38 hex) is different than the integer 8

the digits:
`0' is 48 (decimal) or 30 (hex)
`9' is 57 (decimal) or 39 (hex)

Because of this, code must deal with both characters and integers
correctly.
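These ASCII properties can be checked directly in Python with ord:

```python
# ord gives a character's code point; for ASCII it matches the table above.
print(hex(ord('A')), ord('A'))  # 0x41 65
print(ord('A') < ord('B'))      # True: comparing codes sorts alphabetically
print(ord('a') == ord('A'))     # False: 0x61 vs 0x41
print(ord('8') == 8)            # False: the character '8' is 0x38, not 8
print(int('8'))                 # 8: an explicit conversion is needed
```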

Floating Point Numbers
Real Numbers: pi = 3.14159265... e = 2.71828...

Scientific Notation: has a single digit to the left of the decimal point.

A number in Scientific Notation with no leading 0s is called
a Normalized Number: 1.0 × 10^-8

Not in normalized form: 0.1 × 10^-7 or 10.0 × 10^-9

Can also represent binary numbers in scientific notation: 1.0 × 2^-3

Computer arithmetic that supports such numbers is called Floating
Point.

The form is 1.xxxx... × 2^yy...

Using normalized scientific notation

1. Simplifies the exchange of data that includes floating-point
numbers
2. Simplifies the arithmetic algorithms to know that the numbers
will always be in this form
3. Increases the accuracy of the numbers that can be stored in a
word, since each unnecessary leading 0 is replaced by another
significant digit to the right of the decimal point

Representation of Floating-Point numbers

(-1)^S × M × 2^E

Bit No   Size     Field Name
31       1 bit    Sign (S)
23-30    8 bits   Exponent (E)
0-22     23 bits  Mantissa (M)

A Single-Precision floating-point number occupies 32-bits, so there
is a compromise between the size of the mantissa and the size of the
exponent.

These chosen sizes provide a range of approx: ±10^-38 ... 10^38

• Overflow: the exponent is too large to be represented in the
Exponent field.

• Underflow: the number is too small to be represented, i.e., its
(negative) exponent falls below the range of the Exponent field.

To reduce the chances of underflow/overflow, 64-bit Double-Precision
arithmetic can be used:

Bit No   Size     Field Name

63       1 bit    Sign (S)
52-62    11 bits  Exponent (E)
0-51     52 bits  Mantissa (M)

providing a range of approx.

± 10^-308 ... 10^308

These formats are called ...

IEEE 754 Floating-Point Standard

Since the mantissa is always 1.xxxxxxxxx in the normalized form, no
need to represent the leading 1. So, effectively:

• Single Precision: mantissa ===> 1 bit + 23 bits
• Double Precision: mantissa ===> 1 bit + 52 bits

Since zero (0.0) has no leading 1, to distinguish it from other
numbers it is given the reserved bit pattern of all 0s for the
exponent, so that hardware won't attach a leading 1 to it. Thus:

• Zero (0.0) = 0000...0000

• Other numbers = (-1)^S × (1 + Mantissa) × 2^E

If we number the mantissa bits from left to right m1, m2, m3, ...

mantissa = m1 × 2^-1 + m2 × 2^-2 + m3 × 2^-3 + ...

Negative exponents could pose a problem in comparisons.

For example (with two's complement):

             Sign  Exponent  Mantissa

1.0 × 2^-1   0     11111111  0000000 00000000 00000000
1.0 × 2^+1   0     00000001  0000000 00000000 00000000

With this representation, the first exponent shows a "larger" binary
number, making direct comparison more difficult.

To avoid this, Biased Notation is used for exponents.

If the real exponent of a number is X then it is represented as
(X + bias)

IEEE single-precision uses a bias of 127. Therefore, an exponent of

-1 is represented as -1 + 127 = 126 = (01111110)2
 0 is represented as  0 + 127 = 127 = (01111111)2
+1 is represented as +1 + 127 = 128 = (10000000)2
+5 is represented as +5 + 127 = 132 = (10000100)2

So the actual exponent is found by subtracting the bias from the
stored exponent. Therefore, given S, E, and M fields, an IEEE
floating-point number has the value:

(-1)^S × (1.0 + 0.M) × 2^(E - bias)

(Remember: it is (1.0 + 0.M) because, with normalized form, only
the fractional part of the mantissa needs to be stored)
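The S, E and M fields of a single-precision value can be pulled apart and recombined with this formula. A Python sketch using the standard struct module (it deliberately ignores special cases such as zero, denormals, infinities and NaNs):

```python
import struct

# Decode (-1)^S × (1.0 + 0.M) × 2^(E - 127) from the raw bits of a
# single-precision float.
def decode_single(x):
    bits = struct.unpack('>I', struct.pack('>f', x))[0]
    s = (bits >> 31) & 0x1          # bit 31: sign
    e = (bits >> 23) & 0xFF         # bits 23-30: biased exponent
    m = bits & 0x7FFFFF             # bits 0-22: mantissa fraction
    return (-1) ** s * (1.0 + m / 2 ** 23) * 2.0 ** (e - 127)

assert decode_single(0.15625) == 0.15625   # 1.01 × 2^-3, exactly representable
assert decode_single(-2.5) == -2.5          # sign bit set, exponent 1 + 127
```

Values chosen here are exact powers-of-two combinations, so round-tripping through single precision loses nothing.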

Floating Point Addition

Add the following two decimal numbers in scientific notation:

8.70 × 10^-1 with 9.95 × 10^1

1. Rewrite the smaller number such that its exponent matches
with the exponent of the larger number.
8.70 × 10^-1 = 0.087 × 10^1

2. Add the mantissas

9.95 + 0.087 = 10.037 and write the sum 10.037 × 10^1

3. Put the result in Normalised Form

10.037 × 10^1 = 1.0037 × 10^2 (shift mantissa, adjust exponent)

Check for overflow/underflow of the exponent after
normalisation.

4. Round the result

If the mantissa does not fit in the space reserved for it, it has to
be rounded off.

For Example: If only 4 digits are allowed for the mantissa

1.0037 × 10^2 ===> 1.004 × 10^2

(only have a hidden bit with binary floating point numbers)
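The decimal example above can be verified numerically; a quick Python check (values taken from the worked steps):

```python
# 8.70 × 10^-1 + 9.95 × 10^1, following the worked decimal example.
small, large = 8.70e-1, 9.95e1
total = small + large                  # mantissas added after alignment
assert abs(total - 100.37) < 1e-9      # unrounded sum: 1.0037 × 10^2
assert round(total, 1) == 100.4        # rounded to a 4-digit mantissa
```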

Example addition in binary

Perform 0.5 + (-0.4375)

0.5 = 0.1 × 2^0 = 1.000 × 2^-1 (normalised)

-0.4375 = -0.0111 × 2^0 = -1.110 × 2^-2 (normalised)

1. Rewrite the smaller number such that its exponent matches
with the exponent of the larger number.
-1.110 × 2^-2 = -0.1110 × 2^-1

2. Add the mantissas:

1.000 × 2^-1 + (-0.1110 × 2^-1) = 0.001 × 2^-1

3. Normalise the sum, checking for overflow/underflow:

0.001 × 2^-1 = 1.000 × 2^-4

-126 <= -4 <= 127 ===> No overflow or underflow

4. Round the sum:

The sum fits in 4 bits so rounding is not required

Check: 1.000 × 2^-4 = 0.0625, which is equal to 0.5 - 0.4375
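The four numbered steps can be sketched as code operating on (mantissa, exponent) pairs, where each value is mantissa × 2^exponent. This is an illustrative Python model of the algorithm (fp_add is a made-up name), not real floating-point hardware, and it leaves out rounding:

```python
def fp_add(m1, e1, m2, e2):
    # Step 1: rewrite the smaller-exponent number to match the larger one.
    if e1 < e2:
        m1, e1, m2, e2 = m2, e2, m1, e1
    m2 = m2 / 2 ** (e1 - e2)
    # Step 2: add the mantissas.
    m, e = m1 + m2, e1
    # Step 3: normalise so that 1 <= |m| < 2 (zero stays as is).
    while m != 0 and abs(m) >= 2:
        m, e = m / 2, e + 1
    while m != 0 and abs(m) < 1:
        m, e = m * 2, e - 1
    return m, e

# 0.5 = 1.000 × 2^-1, -0.4375 = -1.110 × 2^-2 (-1.110 binary = -1.75)
assert fp_add(1.0, -1, -1.75, -2) == (1.0, -4)   # 1.000 × 2^-4 = 0.0625
```

After step 3, a real implementation would also check the exponent against the representable range and round the mantissa to the stored width.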

Floating Point Multiplication

Multiply the following two numbers in scientific notation by hand:

1.110 × 10^10 × 9.200 × 10^-5

1. Add the exponents to find

New Exponent = 10 + (-5) = 5

If we add biased exponents, the bias will be added twice. Therefore
we need to subtract it once to compensate:

(10 + 127) + (-5 + 127) = 259

259 - 127 = 132 which is (5 + 127) = biased new exponent

2. Multiply the mantissas

1.110 × 9.200 = 10.212000

Can only keep three digits to the right of the decimal point, so
the result is
10.212 × 10^5

3. Normalise the result

1.0212 × 10^6

4. Round it
1.021 × 10^6

Example multiplication in binary:
1.000 × 2^-1 × -1.110 × 2^-2

1. Add the biased exponents

(-1 + 127) + (-2 + 127) - 127 = 124 ===> (-3 + 127)

2. Multiply the mantissas

1.000
× 1.110
-----------
0000
1000
1000
+ 1000
-----------
1110000 ===> 1.110000
The product is 1.110000 × 2^-3
Need to keep it to 4 bits: 1.110 × 2^-3

3. Normalise (already normalised)

At this step check for overflow/underflow by making sure that

-126 <= Exponent <= 127

1 <= Biased Exponent <= 254

4. Round the result (no change)


5. Adjust the sign.

Since the original signs are different, the result will be negative
-1.110 × 2^-3
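The biased-exponent bookkeeping in step 1 and the sign rule in step 5 can be sketched as follows (an illustrative Python model with a made-up fp_mul; mantissas are held as ordinary numbers and rounding is omitted):

```python
BIAS = 127

def fp_mul(s1, m1, be1, s2, m2, be2):
    be = be1 + be2 - BIAS          # step 1: bias would otherwise be counted twice
    m = m1 * m2                    # step 2: multiply mantissas
    if m >= 2:                     # step 3: normalise back into 1.xxx form
        m, be = m / 2, be + 1
    s = s1 ^ s2                    # step 5: differing signs give a negative result
    return s, m, be

# 1.000 × 2^-1 times -1.110 × 2^-2 (1.110 binary = 1.75)
s, m, be = fp_mul(0, 1.0, -1 + BIAS, 1, 1.75, -2 + BIAS)
assert (s, m, be) == (1, 1.75, -3 + BIAS)   # -1.110 × 2^-3
```

The biased exponent 124 produced here matches the hand calculation: (-1 + 127) + (-2 + 127) - 127 = 124.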

Memory locations and addresses
Number and character operands, as well as instructions,
are stored in the memory of a computer. The memory
consists of many millions of storage cells, each of which can
store a bit of information having the value 0 or 1. Because
a single bit represents a very small amount of information,
bits are seldom handled individually. The usual approach
is to deal with them in groups of fixed size. For this purpose,
the memory is organized so that a group of n bits can be
stored or retrieved in a single, basic operation. Each group
of n bits is referred to as a word of information, and n is
called the word length. The memory of a computer can be
schematically represented as a collection of words.
Modern computers have word lengths that typically range
from 16 to 64 bits. If the word length of a computer is 32
bits, a single word can store a 32-bit 2’s complement
number or four ASCII characters, each occupying 8 bits. A
unit of 8 bits is called a byte.
Accessing the memory to store or retrieve a single item of
information, either a word or a byte, requires distinct
names or addresses for each item location. It is customary
to use numbers from 0 through 2^k - 1, for some suitable
value of k, as the addresses of successive locations in the
memory. The 2^k addresses constitute the address space of
the computer, and the memory can have up to 2^k
addressable locations. A 24-bit address generates an address
space of 2^24 (16,777,216) locations. A 32-bit address
creates an address space of 2^32 or 4G (4 giga) locations.
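The address-space figures quoted above follow directly from 2^k; a quick Python check:

```python
# A k-bit address names 2^k locations, numbered 0 through 2^k - 1.
assert 2 ** 24 == 16_777_216       # 24-bit address space
assert 2 ** 32 == 4 * 2 ** 30      # 32-bit address space: 4G locations
```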

Number base conversions
A number N in base or radix b can be written as:

(N)b = d(n-1) d(n-2) ... d1 d0 . d(-1) d(-2) ... d(-m)

Here d(n-1) to d0 is the integer part, then follows the radix
point, and d(-1) to d(-m) is the fractional part.

d(n-1) = Most significant digit (MSD)
d(-m) = Least significant digit (LSD)

How to convert a number from one base to another?
Follow the example illustrations:

1. Decimal to Binary
(10.25)10

Integer part: (10)10 = (1010)2

Fractional part:
0.25 × 2 = 0.50
0.50 × 2 = 1.00
Note: Keep multiplying the fractional part with 2 until
decimal part 0.00 is obtained.
(0.25)10 = (0.01)2

Answer: (10.25)10 = (1010.01)2

2. Binary to Decimal
(1010.01)2

1×2^3 + 0×2^2 + 1×2^1 + 0×2^0 + 0×2^-1 + 1×2^-2
= 8 + 0 + 2 + 0 + 0 + 0.25
= 10.25

Answer: (1010.01)2 = (10.25)10

3. Decimal to Octal
(10.25)10
(10)10 = (12)8

Fractional part:
0.25 × 8 = 2.00
Note: Keep multiplying the fractional part with 8 until
decimal part .00 is obtained.
(.25)10 = (.2)8

Answer: (10.25)10 = (12.2)8

4. Octal to Decimal
(12.2)8
1 × 8^1 + 2 × 8^0 + 2 × 8^-1 = 8 + 2 + 0.25 = 10.25
(12.2)8 = (10.25)10

5. Hexadecimal and Binary

To convert from Hexadecimal to Binary, write the 4-bit
binary equivalent of each hexadecimal digit.

(3A)16 = (00111010)2
To convert from Binary to Hexadecimal, group the bits in
groups of 4 and write the hex digit for each 4-bit group. Add 0's
to the left to complete the groups.
1111011011
(0011 1101 1011)2 = (3DB)16
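The example conversions above can be checked with Python's built-in base handling (a quick sketch; fractional parts are evaluated as positional weights, since int() handles integers only):

```python
assert int('1010', 2) == 10            # binary -> decimal (integer part)
assert 0 * 2**-1 + 1 * 2**-2 == 0.25   # .01 binary -> 0.25
assert int('12', 8) == 10              # octal -> decimal
assert int('3A', 16) == 0b00111010     # each hex digit = 4 binary bits
assert format(0b001111011011, 'X') == '3DB'   # binary -> hex by 4-bit groups
```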

Complements
Complements are used in digital computers to simplify the
subtraction operation and for logical manipulation. For
each radix-r system (radix r represents the base of the
number system) there are two types of complements.
1. Radix Complement: referred to as the r's complement.
2. Diminished Radix Complement: referred to as the
(r-1)'s complement.
Binary system complements
As the binary system has base r = 2, the two types of
complements for the binary system are the 2's complement
and the 1's complement.

1's complement
The 1's complement of a number is found by changing all
1's to 0's and all 0's to 1's. This is called as taking
complement or 1's complement. Example of 1's
Complement is as follows.

2's complement
The 2's complement of binary number is obtained by adding
1 to the Least Significant Bit (LSB) of 1's complement of the
number.

2's complement = 1's complement + 1
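For an n-bit number both complements are easy to compute with bit operations; a Python sketch (function names are illustrative):

```python
def ones_complement(x, n):
    return x ^ ((1 << n) - 1)      # flip every one of the n bits

def twos_complement(x, n):
    # 1's complement + 1, with the result kept to n bits.
    return (ones_complement(x, n) + 1) & ((1 << n) - 1)

assert ones_complement(0b1010, 4) == 0b0101
assert twos_complement(0b1010, 4) == 0b0110
assert twos_complement(0b0000, 4) == 0b0000   # zero wraps back to zero
```

The final masking step mirrors the hardware: the carry out of the top bit is simply discarded.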

Signed binary numbers
Signed Binary Numbers use the MSB as a sign bit to display
a range of either positive numbers or negative numbers.
In mathematics, positive numbers (including zero) are
represented as unsigned numbers. That is, we do not put
the +ve sign in front of them to show that they are positive
numbers.
However, when dealing with negative numbers we do use a
-ve sign in front of the number to show that the number is
negative in value and different from a positive unsigned
value, and the same is true with signed binary numbers.
However, in digital circuits there is no provision made to
put a plus or minus sign in front of a number, since digital
systems operate with binary numbers that are represented
in terms of "0"s and "1"s. When used together in
microelectronics, these "1"s and "0"s, each called a bit (a
contraction of BInary digiT), are grouped into sizes
referred to by common names, such as a byte or a word.
We have also seen previously that an 8-bit binary number
(a byte) can have a value ranging from 0 (00000000)2 to 255
(11111111)2, that is 2^8 = 256 different combinations of bits
forming a single 8-bit byte. So, for example, an unsigned
binary number such as (01001101)2 = 64 + 8 + 4 + 1 =
(77)10 in decimal. But digital systems and computers must
also be able to use and to manipulate negative numbers as
well as positive numbers.
Mathematical numbers are generally made up of a sign and
a value (magnitude) in which the sign indicates whether the
number is positive, ( + ) or negative, ( - ) with the value
indicating the size of the number, for example 23, +156 or
-274. Presenting numbers in this fashion is called "sign-
magnitude" representation, since the left most digit can be
used to indicate the sign and the remaining digits the
magnitude or value of the number.
Sign-magnitude notation is the simplest and one of the
most common methods of representing positive and
negative numbers either side of zero, (0). Thus, negative
numbers are obtained simply by changing the sign of the
corresponding positive number as each positive or
unsigned number will have a signed opposite, for example,
+2 and -2, +10 and -10, etc.
But how do we represent signed binary numbers if all we
have is a bunch of ones and zeros? We know that binary
digits, or bits, have only two values, either a "1" or a "0",
and conveniently for us, a sign also has only two values,
being a "+" or a "-".
Then we can use a single bit to identify the sign of a signed
binary number as being positive or negative in value. So to
represent a positive binary number (+n) or a negative binary
number (-n), we prefix the magnitude with a sign bit.
For signed binary numbers the most significant bit (MSB)
is used as the sign bit. If the sign bit is “0”, this means the
number is positive in value. If the sign bit is “1”, then the
number is negative in value. The remaining bits in the
number are used to represent the magnitude of the binary
number in the usual unsigned binary number format way.
Then we can see that the Sign-and-Magnitude (SM)
notation stores positive and negative values by dividing the
"n" total bits into two parts: 1 bit for the sign and n-1 bits
for the value, which is a pure binary number. For example,
the decimal number 53 can be expressed as an 8-bit signed
binary number as 00110101 for +53 and 10110101 for -53.
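A sketch of this 8-bit sign-magnitude encoding in Python (the function name is illustrative): the MSB is the sign, the remaining 7 bits hold the magnitude.

```python
def to_sign_magnitude(n):
    sign = 1 if n < 0 else 0
    return (sign << 7) | abs(n)    # assumes |n| fits in 7 bits

assert format(to_sign_magnitude(+53), '08b') == '00110101'
assert format(to_sign_magnitude(-53), '08b') == '10110101'
```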

Binary Codes

In coding, when numbers, letters or words are represented by a specific
group of symbols, it is said that the number, letter or word is being encoded.
The group of symbols is called a code. Digital data is represented,
stored and transmitted as groups of binary bits. Such a group is also called
a binary code. A binary code can represent numbers as well as
alphanumeric letters.

Advantages of Binary Code

Following is the list of advantages that binary code offers.
• Binary codes are suitable for computer applications.
• Binary codes are suitable for digital communications.
• Binary codes simplify the analysis and design of digital circuits.
• Since only 0 & 1 are being used, implementation becomes easy.

Classification of binary codes

The codes are broadly categorized into the following four categories.

• Weighted Codes
• Non-Weighted Codes
• Binary Coded Decimal Code
• Alphanumeric Codes

Weighted Codes
Weighted binary codes are those binary codes which obey the positional
weight principle. Each position of the number represents a specific weight.
Several systems of the codes are used to express the decimal digits 0 through
9. In these codes each decimal digit is represented by a group of four bits.

Non-Weighted Codes
In this type of binary codes, the positional weights are not assigned. The
examples of non-weighted codes are Excess-3 code and Gray code.

Excess-3 code
The Excess-3 code is also called XS-3 code. It is a non-weighted code used
to express decimal numbers. The Excess-3 code words are derived from the
8421 BCD code words by adding (0011)2 or (3)10 to each code word. For
example, decimal 5 is 0101 in BCD; adding 0011 gives 1000, which is the
Excess-3 code for 5.

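The digit-at-a-time derivation can be sketched in Python (the function name is illustrative):

```python
# Excess-3 (XS-3): add 3 to each decimal digit, then write 4-bit binary.
def to_xs3(digit):
    return format(digit + 3, '04b')

assert to_xs3(0) == '0011'
assert to_xs3(5) == '1000'
assert to_xs3(9) == '1100'
```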
Gray Code
It is a non-weighted code and is not an arithmetic code; that is, there
are no specific weights assigned to the bit positions. It has a very special
feature that only one bit changes each time the decimal number is
incremented. As only one bit changes at a time, the Gray code is called a
unit distance code. The Gray code is a cyclic code. Gray code cannot be
used for arithmetic operations.

Application of Gray code

• Gray code is popularly used in shaft position encoders.
• A shaft position encoder produces a code word which represents the
angular position of the shaft.
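The standard conversion between binary and Gray code is a single XOR one way, and an accumulation of XORs the other way. A Python sketch that also verifies the unit-distance property:

```python
def binary_to_gray(n):
    return n ^ (n >> 1)            # adjacent codes differ in exactly one bit

def gray_to_binary(g):
    n = 0
    while g:                        # fold the shifted XORs back down
        n ^= g
        g >>= 1
    return n

# 7 -> 0100 and 8 -> 1100 are one bit apart, unlike binary 0111 / 1000.
assert binary_to_gray(7) == 0b0100
assert binary_to_gray(8) == 0b1100
assert all(gray_to_binary(binary_to_gray(i)) == i for i in range(16))
# Unit-distance property: consecutive Gray codes differ in one bit.
assert all(bin(binary_to_gray(i) ^ binary_to_gray(i + 1)).count('1') == 1
           for i in range(15))
```

This is why the shaft encoder application works: a small mechanical misalignment can only ever produce a reading one position off, never a wildly wrong value.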

Binary Coded Decimal (BCD) code

In this code each decimal digit is represented by a 4-bit binary number. BCD
is a way to express each of the decimal digits with a binary code. With
four bits we can represent sixteen combinations (0000 to 1111), but in BCD
code only the first ten of these are used (0000 to 1001). The remaining six
combinations, i.e. 1010 to 1111, are invalid in BCD.
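The digit-by-digit encoding can be sketched in Python (the function name is illustrative):

```python
# Each decimal digit becomes its own 4-bit group; the groups
# 1010 through 1111 never appear in valid BCD.
def to_bcd(n):
    return ' '.join(format(int(d), '04b') for d in str(n))

assert to_bcd(59) == '0101 1001'
assert to_bcd(2026) == '0010 0000 0010 0110'
```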

Advantages of BCD Codes

• It is very similar to the decimal system.
• We need to remember the binary equivalents of the decimal
digits 0 to 9 only.

Disadvantages of BCD Codes

• The addition and subtraction of BCD have different rules.
• BCD arithmetic is a little more complicated.
• BCD needs more bits than binary to represent the same decimal
number, so BCD is less efficient than binary.

Alphanumeric codes
A binary digit or bit can represent only two symbols as it has only two
states, '0' or '1'. But this is not enough for communication between two
computers because there we need many more symbols. These symbols are
required to represent the 26 letters of the alphabet in both capital and
small forms, the numbers 0 to 9, punctuation marks and other symbols.
The alphanumeric codes are the codes that represent numbers and alphabetic
characters. Mostly such codes also represent other characters such as symbols
and various instructions necessary for conveying information. An
alphanumeric code should at least represent 10 digits and 26 letters of
the alphabet, i.e. a total of 36 items. The following three alphanumeric
codes are very commonly used for data representation.

• American Standard Code for Information Interchange (ASCII).


• Extended Binary Coded Decimal Interchange Code (EBCDIC).
• Five bit Baudot Code.
ASCII code is a 7-bit code whereas EBCDIC is an 8-bit code. ASCII code is
more commonly used worldwide while EBCDIC is used primarily in large IBM
computers.
