

Unit 1 Notes

Computer Applications (Guru Gobind Singh Indraprastha University)




IINTM/BBA 107/INTRODUCTION TO IT/UNIT 1

Unit 1(Notes)

Introduction
A computer is an electronic device that can perform arithmetic calculations very fast; more generally, we may define a computer as a device that transforms data. Data can be anything: marks obtained by you in various subjects; the name, age, sex, weight, height, etc. of all the students in your class; or the income, savings, investments, etc. of a country. A computer can also be defined in terms of its functions. It can (i) accept data, (ii) store data, (iii) process data as desired, (iv) retrieve the stored data as and when required, and (v) print the result in the desired format.

Definition of Computer
A computer is an electronic device which can store data, manipulate it, and give output according to the instructions fed into it by the user. It is not only a calculating device; it also stores large amounts of data and takes logical decisions. It is also known as a Data Processor.

Computer system
Strictly speaking, the word “computer” refers to the computer system, which is generally the combination of one or more input devices, output devices, and a processing unit.

CHARACTERISTICS OF COMPUTER
The major characteristics of computer are as follows:
1. Speed
Computers process data at an extremely fast rate, of the order of millions of instructions per second. A processor's clock speed is measured in MHz (megahertz, millions of clock cycles per second) or GHz, while its instruction throughput is expressed in MIPS (millions of instructions per second).
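To make the MIPS figure concrete, here is a small sketch; the 100 MIPS rating and the instruction count are assumed example values, not figures from these notes.

```python
# Estimate how long a program takes on a machine rated in MIPS
# (millions of instructions per second). Both numbers used below
# are assumed example values for illustration.

def execution_time_seconds(num_instructions, speed_mips):
    # time = instructions / (MIPS * 1,000,000)
    return num_instructions / (speed_mips * 1_000_000)

# A 500-million-instruction program on an assumed 100 MIPS machine:
print(execution_time_seconds(500_000_000, 100))  # 5.0 seconds
```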

2. Accuracy
Besides being fast, computers are also very accurate. The level of accuracy depends on the instructions and the type of machine being used. A computer is capable of doing only what it is instructed to do; faulty instructions for processing the data automatically lead to faulty results. This is known as GIGO, i.e., Garbage In, Garbage Out.

3. Diligence

Unlike human beings, computers are highly consistent. They do not suffer from the human traits of boredom and tiredness that result in a lack of concentration. Computers, therefore, are better than human beings at performing voluminous and repetitive jobs.

4. Versatility
Prepared by: Ms. Kanika Dhingra Page 1


Versatility means the capacity to perform completely different types of work. You may use your computer to prepare payroll slips; the next moment you may use it for inventory management or to prepare electricity bills.

5. Power of Remembering
A computer can store any amount of information or data. Any information can be stored and recalled for as long as you require it, for any number of years. It depends entirely upon you how much data you want to store in a computer and when to erase or retrieve it.
6. No IQ
A computer is a dumb machine; it cannot do any work without instructions from the user. It performs those instructions at tremendous speed and with accuracy, but it is you who decides what you want to do and in what sequence. A computer cannot take its own decisions as you can.
7. No Feeling
A computer has no feelings, emotions, taste, knowledge, or experience. Thus it does not get tired even after long hours of work, and it does not distinguish between users.
8. Storage capability
The computer has an in-built memory where it can store a large amount of data. You can also store data on secondary storage devices such as floppy disks, which can be kept outside your computer and carried to other computers.

Applications of Computer

Some of the areas where computers are being used are listed below:

1) Science: Scientists use computers to develop theories and to analyse and test data. Computers can be used to generate detailed studies of how earthquakes affect buildings or how pollution affects weather patterns.

2) Education: Computers have also revolutionized the whole process of education. Classrooms, libraries and museums are using computers to make education much more interesting. Computer Aided Education (CAE) and Computer Based Training (CBT) packages are making learning much more interactive.

3) Medicine and Health Care: Doctors use computers for everything from diagnosing illness to monitoring a patient's status during complex surgery. Using automated imaging techniques such as CAT scans or MRI scans, doctors can look inside a person's body and study each organ in detail.

4) Engineering / Architecture / Manufacturing: Architects and engineers use computers extensively for designing and drawing. Computers can create objects that can be viewed in three dimensions. By using techniques like virtual reality, architects can explore


houses that have been designed but not yet built. Manufacturing factories use Computer Aided Manufacturing (CAM) in designing and producing products.

5) Entertainment: Computers are finding greater use in the entertainment industry. They are used to control images and sound, and the special effects which mesmerize audiences would not have been possible without computers.

6) Communication: E-mail, or electronic mail, is one of the communication media in which the computer is used. Through e-mail, messages and reports are passed from one person to one or more persons with the aid of a computer.

7) Business Applications: Initially computers were used for batch processing jobs, where an immediate response from the computer is not required. Currently computers are mainly used for real-time applications, such as at a sales counter, which require an immediate response. Computers are also used to pay bills, perform banking operations, etc.

8) Publishing: Computers have created a field known as DTP, or desktop publishing. In DTP, with the help of a computer and a laser printer, one can perform the entire publishing job.

9) Banking: In the field of banking, people can use ATM (Automated Teller Machine) services 24 hours a day to deposit and withdraw cash.

Advantages of Computer

 Computers offer unmatched speed, performance, and accuracy in data computations.
 Computers undertake boring and repetitive tasks, freeing people for more critical and creative activities.
 Computers perform tasks repeatedly without error, avoiding the fatigue that affects human beings.
 Computers provide information which may be used for decision making.
 Computers make possible advances in medicine and science that previously did not exist.
 Computers are an entertainment medium as well as an educational tool.
 Computers allow society to undertake new activities in various fields and to function more efficiently.
 Computers are impartial. They offer a means of data processing unaffected by racial, religious or cultural bias.
 Computers can handle large quantities of data uniformly, without prejudice.
 Computers have a high degree of integrity and reliability.

Limitations of computers

 Lack of intelligence: Computers do not have intelligence of their own and hence cannot think. For example, a computer can create music but cannot judge its quality. In other words, computers have no brain.
 Need for programs, systems and languages: To make a computer operational, it needs specified instructions in sequence (a program), an operating system, and software specially written in a high-level or machine-level language for the operation assigned to it.
 Data dependency: Computers are data processors; if the data is wrong, the results will be wrong. Hence reliability of data is a must.
 Need for a proper environment: Human beings can work under many unfavourable conditions; a computer, however, needs a suitable environment to work, and if this environment is not provided it may produce erroneous results. The environment needed is:
 A fixed temperature range.
 A dust-free atmosphere.
 Cost: Computers are costly to procure, and their maintenance is difficult, calling for expert/trained hands.
 Compatibility: This is a big problem. Programs written for one computer may not be usable on other computers.
 Standardization: The field calls for a more concerted effort towards standardization.

HISTORY OF COMPUTER
The history of the computer can be traced back to man's efforts to count large numbers. This process of counting large numbers generated various systems of numeration, such as the Babylonian, Greek, Roman and Indian systems. Of these, the Indian system of numeration has been accepted universally; it is the basis of the modern decimal system of numeration (0, 1, 2, 3, 4, 5, 6, 7, 8, 9). Later you will see how the computer solves calculations expressed in the decimal system. But you may be surprised to know that the computer does not understand the decimal system; it uses the binary system of numeration for processing.
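The binary representation mentioned above can be illustrated with a short sketch (Python is used here purely for demonstration): repeatedly dividing a decimal number by 2 produces its binary digits.

```python
# Convert a non-negative decimal integer to its binary representation,
# illustrating the binary system of numeration computers use internally.

def to_binary(n):
    if n == 0:
        return "0"
    bits = []
    while n > 0:
        bits.append(str(n % 2))  # the remainder is the next binary digit
        n //= 2
    return "".join(reversed(bits))

print(to_binary(9))   # 1001
print(to_binary(25))  # 11001
```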
We will briefly discuss some of the path-breaking inventions in the field of computing devices.
Calculating Machines


It took generations for early man to build mechanical devices for counting large numbers. The first calculating device, called the ABACUS, was developed by the Egyptian and Chinese people. The word ABACUS means calculating board. It consisted of sticks in horizontal positions on which sets of pebbles were inserted. A modern form of ABACUS is given in Fig. 1.2. It has a number of horizontal bars, each having ten beads; the bars represent units, tens, hundreds, etc.
Napier’s Bones
The Scottish mathematician John Napier built a mechanical device for the purpose of multiplication in 1617 AD. The device was known as Napier's bones.
Slide Rule
The English mathematician Edmund Gunter developed the logarithmic scale on which the slide rule is based. The slide rule could perform operations like multiplication and division. It was widely used in Europe from the 17th century.
Pascal's Adding and Subtracting Machine
You might have heard the name of Blaise Pascal. In 1642, at the age of 19, he developed a machine that could add and subtract. The machine consisted of wheels, gears and cylinders.
Leibniz’s Multiplication and Dividing Machine
The German philosopher and mathematician Gottfried Leibniz built around 1673 a mechanical
device that could both multiply and divide.
Babbage’s Analytical Engine
It was in the year 1823 that a famous English man Charles Babbage built a mechanical machine
to do complex mathematical calculations. It was called difference engine. Later he developed a
general-purpose calculating machine called analytical engine. You should know that Charles
Babbage is called the father of computer.
Mechanical and Electrical Calculator
At the beginning of the 19th century, mechanical calculators were developed to perform all sorts of mathematical calculations, and they were widely used up to the 1960s. Later the rotating parts of the mechanical calculator were replaced by an electric motor, and the machine was called the electrical calculator.
Modern Electronic Calculator
The electronic calculators used in the 1960s were run with electron tubes, which made them quite bulky. Later the tubes were replaced with transistors, and as a result the size of calculators became very small. The modern electronic calculator can perform all kinds of mathematical computations and functions. It can also store some data permanently, and some calculators have built-in programs to perform complicated calculations.

COMPUTER GENERATIONS
You know that the evolution of the computer started in the 16th century and resulted in the form that we see today. The present day computer, however, has undergone rapid change during the last fifty years. This period, during which the evolution of the computer took place, can be divided


into five distinct phases known as Generations of Computers. Each phase is distinguished from
others on the basis of the type of switching circuits used.
(A) First Generation Computers
First generation computers used thermionic valves (vacuum tubes). These computers were large in size and writing programs on them was difficult. Some of the computers of this generation were:
ENIAC: It was the first electronic computer, built in 1946 at the University of Pennsylvania, USA by J. Presper Eckert and John Mauchly. It was named the Electronic Numerical Integrator and Calculator (ENIAC). The ENIAC occupied an area of about 30 × 50 feet, weighed 30 tons, contained 18,000 vacuum tubes, 70,000 resistors and 10,000 capacitors, and required 150,000 watts of electricity. Today your favourite computer is many times more powerful than ENIAC, yet much smaller in size.
EDVAC: It stands for Electronic Discrete Variable Automatic Computer and was developed in 1950. The concept of storing data and instructions inside the computer was introduced here. This allowed much faster operation, since the computer had rapid access to both data and instructions. Another advantage of storing instructions was that the computer could take logical decisions internally.
Other Important Computers of First Generation
EDSAC: It stands for Electronic Delay Storage Automatic Computer and was developed by M.V. Wilkes at Cambridge University in 1949.
UNIVAC-1: Eckert and Mauchly produced it in 1951; UNIVAC stands for UNIVersal Automatic Computer.
Features of First Generation Computers
1. These computers were based on vacuum tube technology.
2. They were the fastest computing devices of their time (computation time was in milliseconds).
3. These computers were very large and required a lot of space for installation.
4. They were non-portable and, by today's standards, very slow equipment.
5. They lacked versatility and speed.
6. They were very expensive to operate and used a large amount of electricity.
7. These machines were unreliable and prone to frequent hardware failures, hence constant maintenance was required.
8. Since machine language was used, these computers were difficult to program and use.
9. Each individual component had to be assembled manually, hence the commercial appeal of these computers was poor.
Limitations of First Generation Computer
Following are the major drawbacks of first generation computers.
1. The operating speed was quite slow.
2. Power consumption was very high.
3. It required large space for installation.
4. The programming capability was quite low.
(B) Second Generation Computers
Around 1955 a device called the transistor replaced the bulky vacuum tubes of the first generation computers. Transistors are smaller than vacuum tubes and have a higher operating speed. They have


no filament and require no heating. Manufacturing cost was also very low. Thus the size of the
computer got reduced considerably.
It is in the second generation that the concept of Central Processing Unit (CPU), memory,
programming language and input and output units were developed. The programming languages
such as COBOL, FORTRAN were developed during this period. Some of the computers of the
Second Generation were
1. IBM 1620: Its size was smaller as compared to first generation computers and it was mostly used for scientific purposes.
2. IBM 1401: Its size was small to medium and it was used for business applications.
3. CDC 3600: Its size was large and it was used for scientific purposes.

Main Characteristics of a second generation computer are-

1. Second generation computer machines were based on transistor technology.


2. Second generation computers were smaller as compared to the first generation computers.
3. The computational time of second generation computers was reduced from milliseconds to microseconds.
4. Second generation computers were more reliable and less prone to hardware failure. Hence, such computers required less frequent maintenance.
5. Second generation computers were more portable and generated less heat.
6. Assembly language was used to program second generation computers. Hence, programming became more time-efficient and less cumbersome.
7. Second generation computers still required air conditioning.
8. Manual assembly of individual components into a functional unit was still required.

Limitations of Second Generation Computer


1. Air Conditioning was required.
2. Frequent maintenance posed additional cost.
3. Manual assembly of components.

(C) Third Generation Computers


Third generation computers were introduced in 1964. They used Integrated Circuits (ICs), popularly known as chips. A single IC has many transistors, resistors and capacitors built on a single thin slice of silicon, so it is quite obvious that the size of the computer got further reduced. Some of the computers developed during this period were the IBM-360, ICL-1900, IBM-370, and VAX-750. High-level languages such as BASIC (Beginners All-purpose Symbolic Instruction Code) were developed during this period.
Computers of this generation were small in size and low in cost, with large memory and very high processing speed.
Characteristics of Third Generation Computer
1. These computers were based on integrated circuit (IC) technology.
2. These were able to reduce computational time from microseconds to nanoseconds.
3. They were easily portable and more reliable than the second generation.


4. These devices consumed less power and generated less heat. In some cases, air conditioning was still required.
5. The size of these computers was smaller as compared to previous computers.
6. Extensive use of high level languages became possible.
7. Manual assembly of individual components into a functional unit was not required, which greatly reduced labour requirements and cost.
8. Commercial production became easier and cheaper.

Limitations of Third Generation Computer


1. Air conditioning was required in a few cases.
2. Highly sophisticated technology required for the manufacture of IC chips.

(D)Fourth Generation Computers


The present day computers that you see are fourth generation computers, which started around 1975. They use Large Scale Integrated (LSI) circuits built on a single silicon chip, called microprocessors. Due to the development of the microprocessor it became possible to place a computer's entire central processing unit (CPU) on a single chip; these computers are called microcomputers. Later, Very Large Scale Integrated (VLSI) circuits replaced LSI circuits. Examples of microprocessors: Intel 4004, Intel 8008, Zilog's Z80.
Characteristics of Fourth Generation Computer
1. These computers are microprocessor-based systems.
2. These computers are very small in size.
3. They are the cheapest among all the generations.
4. They are portable and very reliable.
5. These machines generate a negligible amount of heat, hence they do not require air conditioning.
6. Hardware failure is negligible, so minimum maintenance is required.

Limitations of Fourth Generation Computer


1. It used complex technology

(E) Fifth Generation Computer


The computers of the 1990s onwards are said to be fifth generation computers. Their speed is extremely high, and apart from this they can perform parallel processing. The concept of Artificial Intelligence has been introduced to allow the computer to take its own decisions; this generation is still in a developmental stage. Example: Pentium processors.


Different Parts of Computer System / Architecture of Digital Computer / Block Diagram of Computer

There are basically four parts of computer system:


 Input Unit/Device
 Processing Unit/Device
 Output Unit/Device
 Storage Unit/Device

INPUT DEVICE
The device which is used to give instructions or input data to the computer system is called an input device. The keyboard, mouse, scanner, camera, light pen, joystick, touch-sensitive monitor, etc. are some examples of input devices.

OUTPUT DEVICE
The device which is used to get the output or result from the computer system is called an output device. The monitor, printer, speaker, etc. are some examples of output devices.

PROCESSING DEVICE
The processing device is the device which processes the input data, prepares the result accordingly and produces the output. Hence, you can say that the processing device controls the whole computer system. The processing device is also known as the CPU, which stands for Central Processing Unit. All arithmetic operations, decision taking and control of all computer devices are done by the central processing unit. It consists of an arithmetic/logic section for computations and a primary storage section for holding the instructions and related data for processing. Since the CPU controls the overall processing, it is called the brain of the computer.
The processing unit is divided into three parts.
(i) Control Unit.
(ii) Memory (Primary Memory)
(iii) Arithmetical Logical Unit (ALU).

CONTROL UNIT
It interprets instructions and produces signals for the execution of instructions. It also sends the results of processed instructions to the output units as well as to memory. In other words, the Control Unit reads every instruction given by you through an input device (such as the keyboard), takes decisions on questions like “what to do?” and “how to do it?”, sends the instruction to the appropriate part for processing, gets the result back after processing, and sends the result to the output devices (such as the monitor or printer) as well as to memory.
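The cycle described above (read an instruction, decide what to do, route it for processing, send the result to output) can be sketched as a toy program. The three-instruction "machine" below is entirely invented for illustration; no real control unit is programmed this way.

```python
# A toy fetch-decode-execute loop modelling the control unit's role.
# The instruction set ("load", "add", "print") is invented for illustration.

program = [("load", 8), ("add", 4), ("print", None)]
accumulator = 0

for opcode, operand in program:   # the control unit reads each instruction
    if opcode == "load":          # ...decides what to do with it...
        accumulator = operand
    elif opcode == "add":         # ...routes arithmetic to the ALU...
        accumulator += operand
    elif opcode == "print":       # ...and sends the result to the output unit
        print(accumulator)        # 12
```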

ALU
ALU stands for Arithmetic Logic Unit (or Arithmetical Logical Unit). It does all the calculations and takes decisions on the given data/instructions. Every type of arithmetical and logical task is


performed by the ALU, i.e., every type of addition, subtraction, multiplication, division, etc. is done by the ALU. It also takes logical decisions like yes or no, true or false, on or off, etc.
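As a sketch of what the ALU does, the toy function below performs both arithmetic operations and a logical comparison. The operation names are our own labels for illustration, not real processor opcodes.

```python
# Toy ALU: a single unit performing arithmetic and logical operations,
# mirroring the description above. Operation names are illustrative only.

def alu(op, a, b):
    if op == "add":
        return a + b
    if op == "sub":
        return a - b
    if op == "mul":
        return a * b
    if op == "gt":           # a logical decision: true or false
        return a > b
    raise ValueError("unknown operation: " + op)

print(alu("add", 7, 5))  # 12
print(alu("gt", 7, 5))   # True
```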

STORAGE UNIT
Memory is a major factor in a computer system. Data and programs are stored in memory before any operation. Input devices send data and instructions to the CPU; these are first stored in memory and then processed under the control unit. There are two types of memory:
(i) Internal memory (ii) External memory.

INTERNAL MEMORY
The internal memory resides inside the computer, where data and programs are stored at the time of processing. There are two types of internal memory:
(i) Primary Memory. (ii) Secondary Memory.

PRIMARY MEMORY
The data and instructions supplied by the user are first stored in primary memory; after execution they can be stored in secondary memory for the sake of retrieving/reusing the results in future. Primary memory is part of the processing device, because without it the computer is not able to boot/work. Primary memory is also known as main memory. There are two types of primary memory:
(i) RAM (ii) ROM
(i) RAM: RAM stands for Random Access Memory. The data and instructions stored in RAM can be read over and over again without destroying them. However, its data are destroyed if the power fails, i.e., the contents of RAM are lost if the power supply is cut off (due to this property, RAM is known as volatile memory). RAM can be altered.


(ii) ROM: ROM stands for Read Only Memory. ROM's instructions are permanently built into the circuitry by the manufacturer. ROM can't be altered; it is a permanently built-in memory. It is unaffected by loss of electric current, i.e., when the power is lost the contents of ROM are not lost (due to this property, ROM is known as non-volatile memory).

SECONDARY MEMORY
The secondary memory is basically used to store data which you want to reuse in future. It stores data in the form of files. Secondary memory is random access storage; a hard disk, the most common example of secondary memory, rotates at very high speed to read/write data.

HARD DISK

It is the most essential storage medium of the computer. All the programs, software, etc. which you need are stored on a hard disk. It is a fixed disk in the computer system; you cannot easily remove it from the computer and carry it elsewhere. It is a random access device. It rotates at high speed, varying from about 1,000 rpm to 6,000 rpm or more. Its storage capacity varies from a few megabytes (MB) in early drives to terabytes (TB) today.
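The storage units quoted above relate by factors of 1024 under the conventional binary interpretation; a quick sketch:

```python
# Relationship between the storage units mentioned in the notes,
# using the conventional binary factor of 1024.

KB = 1024            # bytes in a kilobyte
MB = 1024 * KB
GB = 1024 * MB
TB = 1024 * GB

print(TB // GB)      # 1024: a terabyte holds 1024 gigabytes
print(MB // KB)      # 1024
```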

EXTERNAL MEMORY
The memory which is used externally to preserve/store data and programs is called external memory. Floppy disks, CD-ROMs, etc. are examples of external memory. It is like secondary memory, but used externally.
FLOPPY DISK
The floppy disk is an external storage device used to store data externally. You can carry it easily from one place to another, and it was a very popular device for moving data from one place or computer to another. It can store about 1.44 MB of data. It is made up of a ferrous oxide (iron oxide) coated disk, covered with a hard plastic jacket.
CD ROM
CD-ROM stands for Compact Disc Read Only Memory. It is a storage device whose capacity is much greater than that of a floppy disk. It is a kind of read only memory, i.e., you are not able to write on a CD-ROM. Its storage capacity is about 700–750 MB. It is an optical disc, read by a laser. Nowadays, writable and re-writable CDs are also available; you can write on them using a special device called a CD writer (CD-RW drive).
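A quick back-of-the-envelope comparison of the capacities mentioned above, taking the notes' ~750 MB figure for a CD and 1.44 MB for a floppy disk:

```python
# How many floppy disks' worth of data fits on one CD,
# using the capacities quoted in the notes.

cd_mb = 750       # CD capacity per the notes (commonly 650-700 MB)
floppy_mb = 1.44  # standard 3.5-inch floppy capacity

print(int(cd_mb / floppy_mb))  # 520: roughly 520 floppies per CD
```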


Classification of Computers

Computers can be classified in three ways:
 On the basis of technology: Analog Computer, Digital Computer, Hybrid Computer.
 On the basis of processing speed & storage capacity: Micro Computer, Mini Computer, Mainframe Computer, Super Computer. Micro computers are further divided into Laptop, Palmtop/Palm PC and Desktop PC.
 On the basis of purpose: General Purpose, Special Purpose.

A) Based on Processing Speed and Storage capacity


1) Micro Computer

A microcomputer is a computer whose central processing unit is a microprocessor. Designed for individual use, a microcomputer is smaller than a mainframe or a minicomputer. A microcomputer includes Random Access Memory (RAM), Read Only Memory (ROM), input/output ports, interconnecting wires and a motherboard. The term microcomputer is not as commonly used as it was during the 1970s–1980s; we now refer to microcomputers as simply computers or PCs. Microcomputers are relatively less powerful and have fewer capabilities as compared to workstations. Microcomputers can run any of the major operating systems such as DOS, UNIX, OS/2 and Microsoft Windows NT. Home users, schools, small businesses and educational organizations all use microcomputers.

Example: IBM PCs, PS/2 and Apple’s Macintosh.

Micro computers are further of three types:

i) Laptop: Portable and compact PC with the same capabilities as a desktop computer.
Laptop computers have an L-Shaped design and the screen can be lowered and closed


to allow for easy transportation of the machine. The primary feature that attracts users to laptops over desktops is their portability. Laptop computers can run using an internal battery or an external power adaptor. They have a keyboard, a flat-screen liquid crystal display and a Pentium or PowerPC processor; colour displays are available. They normally run the Windows operating system. Laptops come with a hard disk (around 40 GB), a CD-ROM drive and a floppy disk drive. The most common uses of laptop computers are word processing and spreadsheet computing. Laptops have dramatically decreased in size since their introduction in 1979. Laptops are usually more expensive than standard desktops and do not have the same life span as fixed personal computers.

ii) Palm PC: With miniaturization and high-density packing of transistors on a chip, computers with capabilities nearly those of a PC, yet small enough to be held in a palm, have emerged. Palm PCs are also known as Personal Digital Assistants (PDAs). A Palm accepts handwritten input using an electronic pen, which can be used to write on the Palm's screen (besides a tiny keyboard); it has small disk storage and can be connected to a wireless network. One has to train the system on the user's handwriting before it can be used. It can also serve as a mobile phone, fax and e-mail machine. A version of the Microsoft operating system called Windows CE is available for the Palm. The Palm PC can not only originate, store and process data on its own; it can also download data from a desktop or notebook computer or from the internet, process it, and then upload the new data back. The first handheld device compatible with desktop IBM personal computers of the time was the Atari Portfolio of 1989. Other early models were the Poqet PC of 1989 and the HP 95LX of 1991.
iii) Desktop Computer: A desktop computer is a small, relatively inexpensive computer designed for an individual user. All are based on microprocessor technology, which enables manufacturers to put an entire CPU on one chip. At home, the most popular use for desktop computers is playing games. Businesses use desktop computers for word processing, accounting, desktop publishing and for running spreadsheet and database management applications. One of the first popular desktop computers was the Apple II, introduced in 1977 by Apple Computer. A desktop computer needs a UPS to handle electrical disturbances like short interruptions, blackouts and spikes.

2) Mini Computers

A minicomputer is a type of computer that possesses most of the features and capabilities of a
large computer but is smaller in physical size.


A minicomputer fills the space between the mainframe and the microcomputer: it is smaller than the former but larger than the latter. Minicomputers are mainly used as small or midrange servers operating business and scientific applications. However, use of the term minicomputer has diminished, and such machines have merged with servers.

A minicomputer may also be called a mid-range computer. Minicomputers emerged in the mid-1960s, pioneered by companies such as Digital Equipment Corporation (DEC). They were primarily designed for business applications and services that needed more performance than small machines could offer but did not require a full mainframe. Minicomputers are generally used as mid-range servers, where they can operate mid-sized software applications and support numerous users simultaneously.

Minicomputers may contain one or more processors, support multiprocessing and multitasking, and are generally resilient to high workloads. Although they are smaller than mainframes or supercomputers, minicomputers are more powerful than personal computers and workstations.

3) Mainframes

A mainframe computer is a very large computer capable of handling and processing very large
amounts of data quickly. Mainframes are used by large institutions, such as government
agencies and large corporations. The term mainframe computer is used to distinguish very
large computers used by institutions to serve multiple users from personal computers used by
individuals. Mainframe computers can handle and process far more data than a typical
individual needs to work with on his or her own computer.

Mainframe computers are designed to handle very high volumes of input and output and are
optimized for computational speed. The speed of mainframes is expressed in million
instructions per second (MIPS). Mainframe systems can be used by a large number of users:
in a large organization, individual employees can sit at their desks using a personal
computer but send requests to the mainframe computer for processing large amounts of data.
A typical mainframe system can support hundreds of users at the same time.

As for the actual hardware components inside a mainframe computer, they are similar in type
to what personal computers use: motherboard, central processing unit and memory. Mainframes
are used primarily by large organizations for critical applications and bulk data processing
such as census, industry and consumer statistics, enterprise resource planning and
transaction processing.

4) Super computers

Supercomputers are the fastest type of computer. They are very expensive and are employed
for specialized applications that require immense amounts of mathematical calculations. For
example, weather forecasting requires a supercomputer. Other uses of supercomputers include
animated graphics, fluid dynamic calculations, nuclear energy research, and petroleum

exploration. The chief difference between a supercomputer and a mainframe is that a
supercomputer channels all its power into executing a few programs as fast as possible,
whereas a mainframe uses its power to execute many programs concurrently. A supercomputer
is a computer that performs at or near the currently highest operational rate for computers.
A supercomputer is typically used for scientific and engineering applications that must
handle very large databases or do a great amount of computation (or both). Most
supercomputers are really multiple computers that perform parallel processing.

B) Based on Purpose:

1) General Purpose: A computer designed to perform, in a reasonably efficient manner, the
functions required by both scientific and business applications. Most computers in use today
are general-purpose computers — those built for a great variety of processing jobs. Simply
by using a general-purpose computer with different software, various tasks can be
accomplished, including writing and editing (word processing), manipulating facts in a
database, tracking manufacturing inventory, making scientific calculations, or even
controlling an organization's security system, electricity consumption, and building
temperature. A general-purpose computer is able to perform a wide variety of operations
because it can store and execute different programs in its internal storage. Unfortunately,
this flexibility is often achieved at the expense of speed and efficiency. Personal
computers such as tablets, smartphones, notebooks and desktops are examples of
general-purpose computers.

2) Special Purpose: Special-purpose computers are designed to be task specific; most of the
time their job is to solve one particular problem. They are also known as dedicated
computers, because they are dedicated to performing a single task over and over again. Such
a computer system would be useful in playing graphics-intensive video games, a traffic-light
control system, the navigational system of an aircraft, weather forecasting, satellite
launch/tracking, oil exploration, the automotive industry, keeping time in a digital watch,
or a robot helicopter. They perform only one function and therefore cut down on the amount
of memory needed and also the amount of information which can be input into them. Because
these computers have to perform only one task, they are fast in processing. A drawback of
this specialization, however, is the computer's lack of versatility: it cannot be used to
perform other operations.

C) Based on Technology
1) Analog Computer- Analog computers are used to process analog data. Analog data is
continuous in nature, not discrete or separate. Such data includes temperature, pressure,
speed, weight, voltage, depth, etc. These quantities are continuous, having an infinite
variety of values. An analog computer measures continuous changes in some physical quantity:
e.g. the speedometer of a car measures speed, a change of temperature is measured by a
thermometer, and weight is measured by a weighing machine. These computers are ideal in
situations where data can be accepted directly from a measuring instrument without having to
convert it into numbers or codes. Analog computers were the first computers to be developed
and provided the basis for the development of modern digital computers. Analog computers are
widely used for certain specialized engineering and scientific applications, for the
calculation and measurement of analog quantities. They are frequently used to control
processes such as those found in an oil refinery, where flow and temperature measurements
are important, and are used, for example, in paper making and in the chemical industry.
Analog computers do not require any storage capability because they measure and compare
quantities in a single operation. Output from an analog computer is generally in the form of
readings on a series of dials (like the speedometer of a car) or a graph on a strip chart.

2) Digital Computer- A digital computer, as its name implies, works with digits to represent
numerals, letters or other special symbols. Digital computers operate on inputs which are of
ON-OFF type, and their output is also in the form of ON-OFF signals. Normally, an ON is
represented by a 1 and an OFF by a 0. So we can say that digital computers process
information based on the presence or absence of an electrical charge, or, as we prefer to
say, a binary 1 or 0.

A digital computer can be used to process numeric as well as non-numeric data. It can
perform arithmetic operations like addition, subtraction, multiplication and division, and
also logical operations.

Most of the computers available today are digital computers. The most common examples of
digital computers are accounting machines and calculators.

The results of digital computers are more accurate than the results of analog computers,
while analog computers are faster than digital ones. Analog computers lack memory, whereas
digital computers store information. We can say that digital computers count and analog
computers measure. The computers we use at our homes and offices are digital computers.

3) Hybrid Computer- A hybrid is a combination of digital and analog computers. It combines
the best features of both types, i.e. it has the speed of an analog computer and the memory
and accuracy of a digital computer. Hybrid computers are used mainly in specialized
applications where both kinds of data need to be processed. Therefore, they help the user to
process both continuous and discrete data. For example, a petrol pump contains a processor
that converts fuel flow measurements into quantity and price values. In a hospital Intensive
Care Unit (ICU), an analog device measures a patient's blood pressure and temperature,
which are then converted and displayed in the form of digits. Hybrid computers are used,
for example, for scientific calculations and in defence and radar systems.

Differences between Analog & Digital Computers

 Although analog technology came first, digital technology is more advanced and generally
produces more precise and accurate output than analog.
 A digital computer shows its results on a display screen or monitor, or via a CD or other
peripheral device, while an analog computer shows output in the form of voltage signals and
readings on connected meters.
 In terms of electronic circuits, a digital computer uses two switch states, on and off,
while an analog computer uses signal generators, op-amps (operational amplifiers) and many
resistors for the flow of continuous signals.
 A digital computer performs work discretely, whereas an analog computer produces
continuous voltage signals and operates continuously.
 With the invention of the latest technology, many digital computers now perform the
functions of analog computers, while analog computers are not able to do the same for
digital computers.
 Digital computers are widely accepted and used around the world; they are used far more
than analog computers.
 There is no concept of a memory disc, drive or chip in an analog computer, while a
digital computer obviously requires memory in the shape of a hard disk, flash storage or a
memory chip.
 Noise or disturbance in the environment can affect the accuracy of an analog computer,
while noise has no such effect on digital computers.

BASIC COMPUTER OPERATIONS


A computer performs basically five major operations or functions, irrespective of its size
and make. These are: 1) it accepts data or instructions by way of input, 2) it stores data,
3) it processes data as required by the user, 4) it gives results in the form of output, and
5) it controls all operations inside the computer. Each of these operations is discussed
below.

1. Input: This is the process of entering data and programs into the computer system. A
   computer is an electronic machine, like any other machine, which takes in raw data,
   performs some processing on it and gives out the processed data. The input unit
   therefore carries data from us to the computer in an organized manner for processing.
2. Storage: The process of saving data and instructions is known as storage. Data has to
   be fed into the system before the actual processing starts, because the processing speed
   of the Central Processing Unit (CPU) is so fast that data must be supplied to it at a
   matching speed. The data is therefore first stored in the storage unit for faster access
   and processing. This storage unit, or the primary storage of the computer system, is
   designed for exactly this purpose: it provides space for storing data and instructions.

The storage unit performs the following major functions:

 All data and instructions are stored here before and after processing.
 Intermediate results of processing are also stored here.

3. Processing: The task of performing operations such as arithmetic and logical operations
is called processing. The Central Processing Unit (CPU) takes data and instructions from the
storage unit and performs all sorts of calculations based on the instructions given and the
type of data provided. The result is then sent back to the storage unit.

4. Output: This is the process of producing results from the data to obtain useful
information. The output produced by the computer after processing must be kept somewhere
inside the computer before being given to you in human-readable form; the output is also
stored inside the computer for further processing.

5. Control: This is the manner in which instructions are executed and the above operations
are performed. Controlling all operations — input, processing and output — is done by the
control unit, which takes care of the step-by-step processing of all operations inside the
computer.
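The five operations above can be sketched as a tiny Python program (the variable names and the marks data are invented for illustration; "control" corresponds to the order in which the statements run):

```python
# Sketch of the five basic operations: input, storage, processing, output.
# The control function is reflected in the step-by-step order of execution.

storage = {}                             # storage unit: holds data before/after processing

marks = [56, 72, 64]                     # 1. input: raw data enters the system
storage["marks"] = marks                 # 2. storage: data is kept until the CPU needs it

total = sum(storage["marks"])            # 3. processing: arithmetic on the stored data
storage["total"] = total                 #    the intermediate result is stored again

print("Total marks:", storage["total"])  # 4. output: result in human-readable form
```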

Hardware, Software
A computer system consists of hardware, the electronic devices that enable computing and
manipulating information, and software, the sets of instructions that carry out predefined
tasks to complete a given job. It is the combination of physical equipment (hardware) and
logical instructions (software) that gives modern computing systems their power and
versatility.

Relationship between hardware and software

Software refers to the computer programs that are loaded into a computer system, and
hardware refers to all the visible devices which are assembled together to build a computer
system. To get a particular job done by a computer, the relevant software is first loaded
into the hardware. Different software can be used on the same hardware to perform different
jobs, just as different games can be played on the same games console by using different
cartridges.

Types of software
A wide variety of computer software is available today. Although the range of software available
is vast and varied, most software can be divided into two major categories:
1. System software,
2. Application software

System Software
System software is a set of one or more programs designed to control the operation and
extend the processing capability of a computer system. It acts as an intermediary between
the computer hardware and application programs, and also provides an interface between the
user and the computer. In general, a computer's system software performs one or more of the
following functions:
1. Supports the development of other application software.
2. Supports the execution of other application software.
3. Monitors the effective use of various hardware resources such as CPU, memory,
peripherals, etc.
4. Communicates with and controls the operation of peripheral devices such as printer,
disk, tape, etc.
5. Helps the hardware components work together and provides support for the development
and execution of application software (programs).

The programs included in a system software package are called system programs, and the
programmers who prepare system software are referred to as system programmers.

Advantages

1. Good system software allows application packages to be run on the computer with less
time and effort.
2. Without system software, application packages could not be run on the computer system.
System software is an indispensable part of a total computer system.
3. A computer without some kind of system software would be very ineffective and most
likely impossible to operate.

Examples

Some of the most commonly known types of system software are :


 Operating systems
 Programming language translators
 System Utilities
 Device Drivers
 Communication Software

Operating Systems

Every computer has operating system software that takes care of the effective and efficient
utilization of all the hardware and software components of the computer system.

Programming Language Translators

Programming language translators are system software that transforms the instructions
prepared by programmers using convenient programming languages into a form that can
be interpreted and executed by a computer system.

Utilities

These are a set of programs that help users in system maintenance tasks and in performing
tasks of routine nature. Some of the tasks commonly performed by utility programs include the
following:
1. Formatting hard disks or floppy disks.
2. Reorganizing files on a hard disk to conserve storage space.
3. Taking backup of files stored on hard disk on to a tape or floppy disk.
4. Searching a particular file from a directory of hundreds of files.
5. Checking the amount of available memory.
6. Checking the amount of available storage space on hard disk.
7. Reducing the file size for more efficient transmission over a data communication
link.
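Tasks 5 and 6 in the list above (checking available memory and storage space) are exactly the kind of job small utility programs do. As an illustrative sketch, Python's standard shutil module can report disk usage:

```python
import shutil

# Utility-style check of available storage space on the disk that holds
# the current directory (task 6 in the list above).
usage = shutil.disk_usage(".")
gb = 1024 ** 3
print(f"Total: {usage.total / gb:.1f} GB")
print(f"Free : {usage.free / gb:.1f} GB")
```

A real disk utility adds reporting and cleanup on top, but the underlying query is this simple.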

Device Drivers:

Device drivers are system programs which are responsible for the proper functioning of
devices. Every device or piece of hardware, whether it is a printer, monitor or mouse, has
a driver program for support. A driver acts as a translator between the device and the
programs that use the device.

Communication Software

Communication software enables the transfer of data and programs from one computer system
to another.

Application Software
1. Application software is a set of one or more programs designed to solve a specific
problem or do a specific task.
2. Application software, also known as an application or an "app", is computer software
designed to help the user to perform specific tasks.
3. The programs included in an application software package are called application
programs and the programmers who prepare application software are referred to as
application programmers

Firmware

Firmware is a software program or set of instructions programmed on a hardware device. It
provides the necessary instructions for how the device communicates with other computer
hardware. Firmware is a combination of software and hardware. It is typically stored in the
flash ROM of a hardware device; while ROM is "read-only memory", flash ROM can be erased
and rewritten because it is actually a type of flash memory. Firmware can be thought of as
"semi-permanent", since it remains the same unless it is updated by a firmware updater. You
may need to update the firmware of certain devices, such as hard drives and video cards, in
order for them to work with a new operating system. ROMs, PROMs and EPROMs that have data
or programs recorded on them are firmware.

Humanware
Humanware is hardware and software that emphasizes user capability and empowerment and the
design of the user interface. The process of building humanware generally consists of these
steps:

1. Define users (age, mindset, environmental context, previous product experience and
expectations, and so forth) and what they really want to do
2. Identify tasks they will need to do or capabilities they will want
3. Specify usability objectives (if possible, these should be measurable, such as how long to do
something or how many mouse clicks to get to a specified task point) for each task or
capability
4. Build a prototype of the user interface (it can be a paper or simulated prototype if time is
short)
5. Test and verify or correct the prototype
6. Provide the prototype and usability objectives to the program designers and coders
7. Test the code against the prototype and objectives and, if necessary, redesign or recode the
software
8. Test the product with users or valid test subjects and revise as necessary
9. Get feedback from users and continually improve the product

Example: A BrailleNote is a computer made by HumanWare for persons with visual impairments.
It has either a braille keyboard or a QWERTY keyboard, a speech synthesizer, and a 32- or
18-column refreshable braille display, depending on the model. The "VoiceNote" is the same
device without a braille display.
BrailleNote can use only the software provided by the manufacturer, although this can be
upgraded.

Introduction to Programming Languages

All over the world, language is the source of communication among human beings. Different
countries and regions have different languages. Similarly, in order to communicate with the
computer, the user also needs a language that can be understood by the computer. For this
purpose, different languages have been developed for performing different types of work on
the computer; such languages are known as programming languages. Basically, languages are
divided into two categories according to their interpretation.

1. Low Level Languages
2. High Level Languages

Low Level Languages

Low-level computer languages are machine codes or close to it. A computer cannot understand
instructions given in high-level languages or in English. It can only understand and execute
instructions given in the form of machine language, i.e. the language of 0s and 1s. There
are two types of low-level languages:
 Machine Language
 Assembly Language

Machine Language or First Generation Language:

It is the lowest and most elementary level of programming language and was the first type
of programming language to be developed. Machine language is basically the only language
which a computer can understand. In fact, a manufacturer designs a computer to obey just
one language, its machine code, which is represented inside the computer by a string of
binary digits (bits) 0 and 1. The symbol 0 stands for the absence of an electric pulse and
1 for the presence of an electric pulse. Since a computer is capable of recognizing
electric signals, it understands machine language.
Advantages of Machine Language
i) It makes fast and efficient use of the computer.
ii) It requires no translator to translate the code, i.e. it is directly understood by the
computer.
Disadvantages of Machine Language:
i) All operation codes have to be remembered.
ii) All memory addresses have to be remembered.
iii) It is hard to find errors in a program written in machine language.
iv) These languages are machine dependent, i.e. a particular machine language can be used
on only one type of computer.

Assembly Language or Second Generation Language


It was developed to overcome some of the many inconveniences of machine language. This is
another low-level but very important language, in which operation codes and operands are
given in the form of alphanumeric symbols instead of 0s and 1s. These alphanumeric symbols
are known as mnemonic codes and can have up to a five-letter combination, e.g. ADD for
addition, SUB for subtraction, START, LABEL, etc. Because of this feature it is also known
as a 'Symbolic Programming Language'. This language is also very difficult and needs a lot
of practice to master.
Advantages:
1. Easier to understand and use: Due to the use of mnemonics instead of numeric op-codes and
symbolic names for data locations instead of numeric addresses, assembly language programs
are much easier to understand and use as compared to machine language programs.
2. Easier to locate and correct errors: Because of the use of mnemonic op-codes and symbolic
names for data locations, and also because programmers need not keep track of the storage
locations of the data and instructions, fewer errors are made while writing programs in assembly
language, and those that are made are easier to find and correct.
3. Easier to modify: Because they are easier to understand, it is easier to locate, correct,
and modify instructions of an assembly language program than those of a machine language
program.
4. No worry about addresses: One of the greatest advantages of assembly language is that
programmers need not keep track of the storage locations of data and instructions while
writing an assembly language program.
Limitations of Assembly Language

The following limitations of machine language are not solved by using assembly language:
1. Machine dependent: Because each instruction of an assembly language program is translated
into exactly one machine language instruction, assembly language programs are machine
dependent.
2. Knowledge of hardware required: Since assembly languages are machine dependent, an
assembly language programmer must have a good knowledge of the characteristics and the
logical structure of his/her computer in order to write good assembly language programs.
3. Machine level coding: In the case of an assembly language, instructions are still written
at the machine-code level. That is, one assembly language instruction is substituted for one
machine language instruction.
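The one-for-one translation from mnemonics to machine codes described above can be sketched as a toy assembler in Python (the mnemonics are illustrative and the opcode bit patterns and four-bit operand format are invented for a hypothetical machine):

```python
# Toy assembler for a hypothetical machine: each mnemonic is translated
# one-for-one into a numeric operation code, as described above.
# The opcode values and instruction format are invented for illustration.

OPCODES = {"LOAD": "0001", "ADD": "0010", "SUB": "0011", "STORE": "0100"}

def assemble(line):
    """Translate one instruction, e.g. 'ADD 5', into a machine-code bit string."""
    mnemonic, operand = line.split()
    return OPCODES[mnemonic] + format(int(operand), "04b")

program = ["LOAD 2", "ADD 5", "STORE 9"]
for line in program:
    print(line, "->", assemble(line))
```

Note how each source line produces exactly one machine instruction, which is why assembly language remains machine dependent.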

Lecture No. 12

High Level language (its advantages and disadvantages) , Language Translators: Assemblers,
Compilers, Linkers

High Level Languages or Third Generation Language


High-level computer languages have formats close to the English language, and the purpose
of developing high-level languages is to enable people to write programs easily and in
their own native language environment (English). High-level languages are basically
symbolic languages that use English words and/or mathematical symbols rather than mnemonic
codes. Each instruction in a high-level language is translated into many machine language
instructions, showing a one-to-many translation.
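This one-to-many translation can be observed with Python's dis module. Python translates to bytecode rather than machine language, but the pattern is the same: one high-level statement becomes several lower-level instructions (the statement itself is an invented example):

```python
import dis

# One high-level statement expands into several lower-level (bytecode)
# instructions -- a one-to-many translation.
code = compile("total = price * quantity + tax", "<example>", "exec")
instructions = list(dis.get_instructions(code))
print(len(instructions), "low-level instructions for one statement")
dis.dis(code)
```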

Advantages of High Level Language


Following are the advantages of a high-level language:
 User-friendly.
 Similar to English, with a vocabulary of words and symbols; therefore it is easier to
learn.
 Programs require less time to write.
 They are easier to maintain.
 Problem oriented rather than 'machine' based.
 A program written in a high-level language can be translated into many machine languages,
and therefore can run on any computer for which there exists an appropriate translator.
 It is independent of the machine on which it is used, i.e. programs developed in a
high-level language can be run on any computer.

Disadvantages of High Level Language


 A high-level language has to be translated into machine language by a translator, and
thus a price in computer time is paid.
 The object code generated by a translator might be inefficient compared to an equivalent
assembly language program.

Fourth Generation Language

Often abbreviated 4GL, fourth-generation languages are programming languages closer to
human languages than typical high-level programming languages. Most 4GLs are used to access
databases. For example, a typical 4GL command is
FIND ALL RECORDS WHERE NAME IS "SMITH"
4GLs are designed to reduce the overall time, effort and cost of software development. The
main domains and families of 4GLs are: database queries, report generators, data
manipulation, analysis and reporting, screen painters and generators, GUI creators,
mathematical optimization, web development and general-purpose languages.

Also known as a 4th generation language, a domain specific language, or a high productivity
language.
4GLs are more programmer-friendly and enhance programming efficiency with usage of
English-like words and phrases, and when appropriate, the use of icons, graphical interfaces and
symbolical representations.

Fifth Generation Language


A fifth generation programming language (abbreviated as 5GL) is a programming language
based on solving problems using constraints given to the program, rather than using an algorithm
written by a programmer. Most constraint-based and logic programming languages and some
declarative languages are fifth-generation languages.
While fourth-generation programming languages are designed to build specific programs, fifth-
generation languages are designed to make the computer solve a given problem without the
programmer. This way, the programmer only needs to worry about what problems need to be
solved and what conditions need to be met, without worrying about how to implement a routine
or algorithm to solve them. Fifth-generation languages are used mainly in artificial intelligence
research. Prolog, OPS5, and Mercury are examples of fifth-generation languages.

Language Translators
Compiler

Definition: A compiler is a translator program that translates a high-level language program into
its equivalent machine language program. A compiler is so called because it compiles a set of
machine language instructions for every program instruction of a high-level language.

High-Level Language (Source Code) --input--> Compiler --output--> Machine Language (Object Code)

The input to the compiler is the high-level language program (often referred to as a source
program) and its output is the machine language program (often referred to as an object
program). The compiler translates each high level language instruction into a set of machine
language instructions rather than a single machine language instruction.

During the process of translation of a source program into its equivalent object program by the
compiler , the source program is not being executed. It is only being converted into a form
that can be executed by the computer's processor. A compiler can translate only those source
programs, which have been written in the language for which the compiler is meant. For
example, a C compiler is only capable of translating source programs, which have been written
in C. Therefore, each computer requires a separate compiler for each high-level language that it
supports.

How Does a Compiler Work?

Compilers are large programs, which reside permanently on secondary storage.

STEP 1: When a source program is to be translated, the compiler and the source program are
copied from secondary storage into the main memory of the computer. The compiler, being a
program, is then executed with the source program as its input data.
STEP 2: It generates the equivalent object program as its output, which is normally saved
in a file on secondary storage.
STEP 3: Whenever there is a need to execute the program, the object program is copied from
secondary storage into the main memory of the computer and executed.
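The compile-once, execute-many workflow of these steps can be mimicked with Python's built-in compile() function (a sketch only; a real compiler produces machine code, whereas compile() produces Python bytecode, and the source string is an invented example):

```python
# Mimic the compiler workflow: translate the source once (STEPS 1-2),
# then execute the resulting object code as often as needed (STEP 3).
source = "result = 6 * 7"                 # the 'source program'
obj = compile(source, "<src>", "exec")    # translation happens exactly once

for _ in range(3):                        # repeated execution, no re-translation
    namespace = {}
    exec(obj, namespace)
    print(namespace["result"])
```

The translated object is reused on every run, which is why compilation need not be repeated for repeated execution.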

Characteristics of Compiler

1. There is no need to repeat the compilation process every time you wish to execute the
program: This is because the object program stored on secondary storage is already in machine
language. You simply have to load the object program from the secondary storage into the main
memory of the computer, and execute it directly.
2. Compilation is necessary whenever we need to modify the program: That is, to incorporate
changes in the program, you must load the original source program from secondary storage
into the main memory of the computer, carry out the necessary changes in the source
program, recompile the modified source program, and create and store an updated object
program for execution.
3. Compilers also automatically detect and indicate certain types of errors in source programs. These errors are referred to as syntax errors and are typically of the following types:
1. Illegal characters
2. Illegal combinations of characters
3. Improper sequencing of instructions in a program
4. Use of undefined variable names

Prepared by: Ms. Kanika Dhingra Page 27


Limitations

1. A compiler, however, cannot detect logic errors. It can only detect grammatical (syntax) errors in the source program.
2. Programs containing logical errors will be successfully compiled and the object code will
be obtained without any error message.
3. It is necessary that each computer must have its own "personal" compiler for a particular
language.
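The distinction between syntax and logic errors can be illustrated with Python's built-in compile() function, used here only as a stand-in for a full compiler; the two statements are invented for illustration.

```python
# Sketch: a translator rejects syntax errors but happily accepts logic errors.
bad_syntax = "total = = 1"             # illegal combination of characters
bad_logic = "average = 10 + 20 / 2"    # syntactically valid, but the programmer
                                       # presumably meant (10 + 20) / 2

try:
    compile(bad_syntax, "<src>", "exec")
    rejected = False
except SyntaxError:
    rejected = True                    # the syntax error is detected here

compile(bad_logic, "<src>", "exec")    # compiles without any complaint
print(rejected)                        # True
```

The second statement compiles cleanly even though its logic is wrong, which is exactly limitation 2 above.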

Interpreter

An interpreter is another type of translator used for translating programs written in high-level
languages. It takes one statement of a high-level language program, translates it into machine
language instructions, and then immediately executes the resulting machine language
instructions. That is, in case of an interpreter, the translation and execution processes alternate
for each statement encountered in the high-level language program.

High-Level Language Program (Source Program) --input--> Interpreter --output--> Result of program execution

How Interpreter differs from compiler?

 This differs from a compiler, which merely translates the entire source program into an object program and is not involved in its execution. The input to an interpreter is the source program, but, unlike a compiler, its output is the result of program execution instead of an object program.
 After compilation of the source program, the resulting object program is permanently
saved for future use and is used every time the program is to be executed. So repeated
compilation (translation of the source code) is not necessary for repeated execution of a
program. However in case of an interpreter, since no object program is saved for future
use, repeated interpretation (translation plus execution) of a program is necessary for its
repeated execution.
 As compared to compilers, interpreters are easier to write because they are less complex
programs than compilers. They also require less memory space for execution than
compilers.
 The main advantage of interpreters over compilers is that a syntax error in a
program statement is detected and brought to the attention of the programmer as
soon as the program statement is interpreted. This allows the programmer to make
corrections during interactive program development. Therefore, interpreters make it
easier and faster to correct programs.


Disadvantage of interpreters over compilers:

 Interpreters are slower than compilers when running a finished program. This is because
each statement is translated every time it is executed from the source program.
 Because the interpreter does not produce an object program, it must perform the
translation process each time a program is executed.
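The translate-then-immediately-execute cycle can be sketched as a toy interpreter for a hypothetical two-statement mini-language (the ADD and PRINT statements are invented for illustration):

```python
def interpret(source: str) -> list:
    """Translate and execute one statement at a time, as an interpreter does."""
    results = []
    for line in source.splitlines():
        parts = line.split()
        if not parts:
            continue
        op, args = parts[0], [int(a) for a in parts[1:]]
        if op == "ADD":                  # executed immediately after translating
            results.append(args[0] + args[1])
        elif op == "PRINT":
            results.append(args[0])
        else:                            # the error is reported as soon as the
            raise SyntaxError(line)      # offending statement is reached
    return results

print(interpret("ADD 2 3\nPRINT 7"))     # [5, 7]
```

Note that nothing is saved between runs: calling interpret() again repeats the whole translate-plus-execute process, which is why interpreted programs run slower.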

Assembler

An assembler is a computer program that translates an assembly language program into its machine language equivalent. The assembler of a computer system is system software, supplied by the computer manufacturer, that translates an assembly language program into an equivalent machine language program for that computer.

Assembly Language Program (Source Code) --input--> Assembler --output--> Machine Language Program

Why is it called so?


It is so called because, in addition to translating an assembly language program into its equivalent machine language program, it also "assembles" the machine language program in the main memory of the computer and makes it ready for execution.

The input to the assembler is the assembly language program (often referred to as a source
program) and its output is the machine language program (often referred to as an object
program). Note that during the process of translation of a source program into its equivalent
object program by the assembler, the source program is not being executed. It is only being
converted into a form that can be executed by the computer's processor.

Assembly Language Program (Source Program) --input--> Assembler --(one-to-one correspondence)--> Machine Language Program (Object Program)

Note: Assemblers, compilers, and interpreters are also referred to as language processors, since they are used for processing the instructions of a particular language.

Difference Between Compiler and Interpreter

Basis           | Compiler                                      | Interpreter
----------------|-----------------------------------------------|-----------------------------------------------
Object Code     | A compiler produces a separate object         | An interpreter does not generate a permanently
                | program.                                      | saved object code file.
Translation     | A compiler converts the entire program into   | An interpreter translates the source code line
                | machine code in one go.                       | by line; it executes the current statement
                |                                               | before translating the next one.
Debugging Ease  | Removal of errors (debugging) is slow.        | Debugging is easier because errors are pointed
                |                                               | out immediately.
Implementation  | Compilers are, by nature, complex programs    | Interpreters are easier to write because they
                | and hence require hard-core coding. They also | are less complex programs. They also require
                | require more memory to execute a program.     | less memory for program execution.
Execution Time  | Compilers are faster because each statement   | Interpreters are slower because each statement
                | is translated only once and saved in the      | is translated from the source program every
                | object file, which can then be executed any   | time it is executed.
                | time without retranslation.                   |

Linker

An application usually consists of hundreds, thousands, or even millions of lines of code. The code is divided into logical groups and stored in different modules so that debugging and maintenance of the code become easier. These independent source programs have to be linked together to create a complete application. This job is done by a tool called a linker. A linker is a program that links together several object modules and libraries to form a single, coherent program (executable). Object modules are the machine code output of an assembler or compiler; they contain executable machine code and data, together with information that allows the linker to combine the modules into a program.


Source File --> Compiler --> Object File
Source File --> Compiler --> Object File   --> Linker --> Executable File
Source File --> Compiler --> Object File

Loader

Loaders are the part of the operating system that brings an executable file residing on disk into memory and starts it running. A loader is responsible for loading, linking, and relocation. In some systems, a loader program performs the functions of a linker program and then immediately schedules the executable for execution, without necessarily creating an executable file as output.
A loader performs four basic tasks:
1) Allocation - it allocates memory space for the program.
2) Linking - it combines two or more separate object programs and supplies the information needed to allow references between them.
3) Relocation - it prepares a program to execute properly from its storage area.
4) Loading - it places data and machine instructions into memory.

Source Code (High-Level Language Program)
            |
         Compiler
            |
        Object Code ---> Linker <--- Object Library Routines
            |
Executable Machine Language Program
            |
          Loader
            |
          Memory

Program Translation Hierarchy

There are two types of loader:-

1) Absolute Loader - it loads the file into memory at the location specified by the beginning portion (header) of the file and then passes control to the program. If the memory space specified by the header is currently in use, execution cannot proceed, and the user must wait until the requested memory becomes free. Moreover, this type of loader performs only the loading function; it does not perform linking and program relocation.
2) Relocating Loader - this loader loads the program into memory, altering the various addresses as required to ensure correct referencing. The decision as to where in memory the program is placed is made by the operating system, not the file header. A relocating loader can only relocate code that has been produced by a linker capable of producing relative code.


NUMBER SYSTEM

Computer Number System: Definition, base/ radix of Decimal, Binary, Octal, Hexa-
decimal. Conversion : Decimal to Binary , Octal and Hexadecimal and Vice- Versa

Number Systems

Inside a computer system, data is stored in a format that cannot be easily read by human beings.
This is the reason why input and output (I/O) interfaces are required. Every computer stores
numbers, letters, and other special characters in a coded form. Before going into the details of
these codes, it is essential to have a basic understanding of the number system. So the goal of this
chapter is to familiarize you with the basic fundamentals of number system. It also introduces
some of the commonly used number systems by computer professionals and the relationship
among them.

NON-POSITIONAL NUMBER SYSTEMS

Number systems are basically of two types: non-positional and positional. In early days, human
beings counted on fingers. When ten fingers were not adequate, stones, pebbles, or sticks were
used to indicate values. This method of counting uses an additive approach or the non-positional
number system. In this system, we have symbols such as I for 1, II for 2, III for 3, IIII for 4, IIIII
for 5, etc. Each symbol represents the same value regardless of its position in the number and the
symbols are simply added to find out the value of a particular number. Since it is very difficult to perform arithmetic with such a number system, positional number systems were developed as the centuries passed.

POSITIONAL NUMBER SYSTEMS

In a positional number system, there are only a few symbols called digits, and these symbols
represent different values depending on the position they occupy in the number. The value of
each digit in such a number is determined by three considerations:

1. The digit itself,

2. The position of the digit in the number, and


3. The base of the number system (where base is defined as the total number of digits available
in the number system).

The number system that we use in our day-to-day life is called the decimal number system. In
this system, the base is equal to 10 because there are altogether ten symbols or digits (0, 1, 2, 3,
4, 5, 6, 7, 8, 9). You know that in the decimal system, the successive positions to the left of the
decimal point represent units, tens, hundreds, thousands, etc. But you may not have given much
attention to the fact that each position represents a specific power of the base (10). For example,
the decimal number 2586 (written as 2586₁₀) consists of the digit 6 in the units position, 8 in the tens position, 5 in the hundreds position, and 2 in the thousands position, and its value can be written as:

(2 × 1000) + (5 × 100) + (8 × 10) + (6 × 1), or

2000 + 500 + 80 + 6, or

2586

It may also be observed that the same digit signifies different values depending on the position it occupies in the number. For example:

In 2586₁₀ the digit 6 signifies 6 × 10⁰ = 6
In 2568₁₀ the digit 6 signifies 6 × 10¹ = 60
In 2658₁₀ the digit 6 signifies 6 × 10² = 600
In 6258₁₀ the digit 6 signifies 6 × 10³ = 6000

Thus any number can be represented by using the available digits and arranging them in various positions.

The principles that apply to the decimal number system also apply to any other positional
number system. It is important only to keep track of the base of the number system in which we
are working.

The following two characteristics are suggested by the value of the base in all positional number
systems:

1. The value of the base determines the total number of different symbols or digits available in
the number system. The first of these choices is always zero.

2. The maximum value of a single digit is always equal to one less than the value of the base.

Some of the positional number systems commonly used in computer design and by
computer professionals are discussed below.

1. Binary Number System

The binary number system is exactly like the decimal number system except that the base is 2
instead of 10. We have only two symbols or digits (0 and 1) that can be used in this number
system. Note that the largest single digit is 1 (one less than the base). Again, each position in a
binary number represents a power of the base (2). As such, in this system, the rightmost position
is the units (2⁰) position, the second position from the right is the 2's (2¹) position, and proceeding


in this way we have the 4's (2²) position, 8's (2³) position, 16's (2⁴) position, and so on. Thus, the decimal equivalent of the binary number 10101 (written as 10101₂) is:

(1 × 2⁴) + (0 × 2³) + (1 × 2²) + (0 × 2¹) + (1 × 2⁰), or

16 + 0 + 4 + 0 + 1, or

21

In order to be specific about which system we are referring to, it is common practice to indicate the base as a subscript. Thus we write:

10101₂ = 21₁₀

"Binary digit" is often referred to by the common abbreviation bit. Thus, a "bit" in computer
terminology means either a 0 or a 1. A binary number consisting of 'n' bits is called an n-bit
number. Figure 3 lists all the 3-bit numbers along with their decimal equivalent. Remember that
we have only two digits, 0 and 1, in the binary system, and hence the binary equivalent of the
decimal number 2 has to be stated as 10 (read as one, zero). Another important point to note is
that with 3 bits (positions), only 8 (23) different patterns of 0s and Is are possible and from
Figure 3.1 it may be seen that a 3-bit number can have one of the 8 values in the range 0 to 7. In
fact, it can be shown that any decimal number in the range 0 to 2""1 can be represented in the
binary form as an n-bit number.

Binary Decimal Equivalent

000 ---0

001--- 1

010--- 2

011--- 3

100--- 4

101--- 5

110--- 6

111--- 7

Figure. 3-bit numbers with their decimal values.


Every computer stores numbers, letters, and other special characters in binary form. There are
several occasions when computer professionals have to know the raw data contained in a
computer's memory. A common way of looking at the contents of a computer's memory is to
print out the memory contents on a line printer. This printout is called a memory dump. If
memory dumps were to be printed using binary numbers, the computer professionals would be
confronted with many pages of 0s and 1s. Working with these numbers would be very difficult and error-prone. Because of the quantity of printout that would be required in a memory dump of binary digits and the lack of digit variety (0s and 1s only), two number systems, octal and hexadecimal, are used as shortcut notations for binary. These number systems and their relationship with the binary number system are explained below.

2. Octal Number System

In the octal number system the base is 8. So in this system there are only eight symbols or digits: 0, 1, 2, 3, 4, 5, 6, and 7 (8 and 9 do not exist in this system). Here also the largest single digit is 7 (one less than the base). Again, each position in an octal number represents a power of the base (8). Thus the decimal equivalent of the octal number 2057 (written as 2057₈) is:

(2 × 8³) + (0 × 8²) + (5 × 8¹) + (7 × 8⁰), or

1024 + 0 + 40 + 7, or

1071

Hence, 2057₈ = 1071₁₀

Observe that since there are only 8 digits in the octal number system, 3 bits (2³ = 8) are sufficient to represent any octal number in binary.

3. Hexadecimal Number System

The hexadecimal number system is one with a base of 16. The base of 16 suggests choices of 16
single-character digits or symbols. The first 10 digits are the digits of the decimal number system
-0, 1, 2, 3, 4, 5, 6, 7, 8, 9. The remaining six digits are denoted by the symbols A, B, C, D, E, and
F representing the decimal values 10, 11, 12, 13, 14, and 15 respectively. In the hexadecimal
number system, therefore, the letters A through F are number digits. The number A has a
decimal equivalent value of 10 and the number F has a decimal equivalent value of 15. Thus, the
largest single digit is F or 15 (one less than the base). Again, each position in the hexadecimal
system represents a power of the base (16). Thus the decimal equivalent of the hexadecimal
number 1AF (written as 1AF₁₆) is:

(1 × 16²) + (A × 16¹) + (F × 16⁰), or

(1 × 256) + (10 × 16) + (15 × 1), or

256 + 160 + 15, or

431

Hence, 1AF₁₆ = 431₁₀

Observe that since there are only 16 digits in the hexadecimal number system, 4 bits (2⁴ = 16) are sufficient to represent any hexadecimal number in binary.

4. Decimal Number System

This number system is used by human beings. The base or radix value of this system is 10. The
range of digits is from 0 to 9.

CONVERTING FROM ONE NUMBER SYSTEM TO ANOTHER

Numbers expressed in decimal number system are much more meaningful to us than are values
expressed in any other number system. This is mainly because of the fact that we have been
using decimal numbers in our day-today life right from childhood. However, any number value
in one number system can be represented in any other number system.

Because the input and the final output values are to be in decimal, computer professionals are
often required to convert numbers in other number systems to decimal and vice-versa. There are
many methods or techniques that can be used to convert numbers from one base to another. We
will see one technique used in converting to base 10 from any other base and a second technique
to be used in converting from base 10 to any other base.

Converting to Decimal from Another Base (Multiplication method)

The following three steps are used to convert to a base 10 value from any other number system:

Step 1: Determine the column (positional) value of each digit (this depends on the position of the
digit and the base of the number system).

Step 2: Multiply the obtained column values (in Step 1) by the digits in the corresponding
columns.

Step 3: Sum the products calculated in Step 2. The total is the equivalent value in decimal.

Example.

Conversion from hexadecimal to decimal

(AC1)₁₆ = ?₁₀

AC1₁₆ = A × 16² + C × 16¹ + 1 × 16⁰

= 10 × 256 + 12 × 16 + 1 × 1

= 2560 + 192 + 1

= 2753

Hence, (AC1)₁₆ = 2753₁₀
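The three steps of the multiplication method can be sketched in Python; the digit table and function name are illustrative, covering bases 2 through 16.

```python
# Multiplication method: convert a number string in any base (2-16) to decimal.
DIGITS = "0123456789ABCDEF"

def to_decimal(number: str, base: int) -> int:
    value = 0
    for digit in number:
        # Each pass multiplies the running total by the base (the column value)
        # and adds the digit's value, then the loop sums everything up.
        value = value * base + DIGITS.index(digit)
    return value

print(to_decimal("AC1", 16))   # 2753, matching the worked example
print(to_decimal("2057", 8))   # 1071
print(to_decimal("10101", 2))  # 21
```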

Converting from Decimal to Another Base (Division-Remainder Technique)

The following four steps are used to convert a number from decimal to another base:

Step 1: Divide the decimal number to be converted by the value of the new base.

Step 2: Record the remainder from Step 1 as the rightmost digit (least significant digit) of the
new base number.

Step 3: Divide the quotient of the previous divide by the new base.

Step 4: Record the remainder from Step 3 as the next digit (to the left) of the new base number.

Repeat Steps 3 and 4, recording remainders from right to left, until the quotient becomes zero in
Step 3. Note that the last remainder thus obtained will be the most significant digit (MSD) of the
new base number.

Example

25₁₀ = ?₂

Solution:

Steps 1 and 2: 25/2 = 12 and remainder 1

Steps 3 and 4: 12/2= 6 and remainder 0

Steps 3 and 4: 6/2 = 3 and remainder 0

Steps 3 and 4: 3/2 = 1 and remainder 1

Steps 3 and 4: 1/2= 0 and remainder 1

2 | 25 |
2 | 12 | 1
2 |  6 | 0
2 |  3 | 0
2 |  1 | 1
  |  0 | 1

(11001)₂

As mentioned in Steps 2 and 4, the remainders have to be arranged in the reverse order so that
the first remainder becomes the least significant digit (LSD) and the last remainder becomes the
most significant digit (MSD).

Hence, 25₁₀ = 11001₂
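The division-remainder technique can be sketched in Python; prepending each remainder produces the required reverse (MSD-first) ordering automatically. The function name is illustrative.

```python
DIGITS = "0123456789ABCDEF"

def from_decimal(value: int, base: int) -> str:
    if value == 0:
        return "0"
    result = ""
    while value > 0:                           # Steps 1 and 3: divide by the base
        result = DIGITS[value % base] + result # Steps 2 and 4: record the
        value //= base                         # remainder, right to left
    return result

print(from_decimal(25, 2))    # 11001, matching the worked example
print(from_decimal(431, 16))  # 1AF
print(from_decimal(1071, 8))  # 2057
```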

Conversion from one number system to another number system other than decimal (two-step conversion)

The following two steps are used to convert a number from a base other than 10 to another base other than 10:

Step 1: Convert the original number to a decimal number (base 10).

Step 2: Convert the decimal number so obtained to the new base number.

Example

(1A)16  ( )2

Solution:

Step 1: Convert from base 16 to base 10

1x 61 + Ax 16°

=6 + 10 x 1

= 16
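The two manual steps can be mirrored in Python, with int() playing the role of Step 1 and format() the role of Step 2:

```python
decimal_value = int("1A", 16)               # Step 1: base 16 -> base 10
binary_value = format(decimal_value, "b")   # Step 2: base 10 -> base 2
print(decimal_value)  # 26
print(binary_value)   # 11010
```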

The same two-step method can be used to convert a number from binary to octal or from binary to hexadecimal. However, these are lengthy procedures, and shortcut methods can be used when we desire such conversions. These shortcut methods are described below.


Conversion: Binary to Octal and hexa decimal and Vice-Versa


Conversion: Octal to Hexadecimal and Vice-versa

Binary to Octal Conversion

The following steps are used in this method:

Step 1: Divide the binary digits into groups of three (starting from the right).

Step 2: Convert each group of three binary digits to one octal digit. Since there are only 8 digits (0 to 7) in the octal number system (refer to the 3-bit figure above), 3 bits (2³ = 8) are sufficient to represent any octal number in binary. Since decimal digits 0 to 7 are equal to octal digits 0 to 7, binary to decimal conversion can be used in this step.

Example

101110₂ = ?₈

Solution:

Step 1: Divide the binary digits into groups of 3 starting from right (LSD). 101 110

Step 2: Convert each group into one digit of octal (use binary-to-decimal conversion).

101₂ = 1 × 2² + 0 × 2¹ + 1 × 2⁰

= 4 + 0 + 1

= 5₈

110₂ = 1 × 2² + 1 × 2¹ + 0 × 2⁰

= 4 + 2 + 0

= 6₈

Hence, 101110₂ = 56₈

Example

1101010₂ = ?₈

Solution:


1101010₂ = 001 101 010 (groups of 3 digits from the right)

= 152₈ (convert each group to an octal digit)

Hence, 1101010₂ = 152₈

Octal to Binary Conversion

The following steps are used in this method:

Step 1: Convert each octal digit to a 3 digit binary number (the octal digits may be treated as
decimal for this conversion).

Step 2: Combine all the resulting binary groups (of 3 digits each) into a single binary number.

Example

562₈ = ?₂

Solution:

Step 1: Convert each octal digit to 3 binary digits.

5 → 101

6 → 110

2 → 010

Step 2: Combine the binary groups.

Hence, 562₈ = 101110010₂
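Both grouping shortcuts can be sketched in Python; the left-padding mirrors the leading zeros added in the second example above. The function names are illustrative.

```python
def binary_to_octal(bits: str) -> str:
    bits = bits.zfill(-(-len(bits) // 3) * 3)        # pad left to a multiple of 3
    return "".join(str(int(bits[i:i + 3], 2))        # each 3-bit group becomes
                   for i in range(0, len(bits), 3))  # one octal digit

def octal_to_binary(octal: str) -> str:
    # Each octal digit expands to exactly 3 binary digits ("03b" keeps zeros).
    return "".join(format(int(d, 8), "03b") for d in octal)

print(binary_to_octal("101110"))   # 56
print(binary_to_octal("1101010"))  # 152
print(octal_to_binary("562"))      # 101110010
```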

Binary to Hexadecimal Conversion

The following steps are used in this method:

Step 1: Divide the binary digits into groups of four (starting from the right).

Step 2: Convert each group of four binary digits to one hexadecimal digit. Remember that hexadecimal digits 0 to 9 are equal to decimal digits 0 to 9, and hexadecimal digits A to F are equal to decimal values 10 to 15. Hence for this step, the binary to decimal conversion procedure can be used, but the decimal values 10 to 15 must be represented as hexadecimal A to F.

Example

11010011₂ = ?₁₆


Solution:

Step 1 : Divide the binary digits into groups of 4.

1101 0011

Step 2: Convert each group of 4 binary digits to 1 hexadecimal digit.

1101₂ = 1 × 2³ + 1 × 2² + 0 × 2¹ + 1 × 2⁰

= 8 + 4 + 0 + 1

= 13

= D₁₆

0011₂ = 0 × 2³ + 0 × 2² + 1 × 2¹ + 1 × 2⁰

= 0 + 0 + 2 + 1

= 3₁₆

Hence, 11010011₂ = D3₁₆

Hexadecimal to Binary Conversion

The following steps are used in this method:

Step 1: Convert the decimal equivalent of each hexadecimal digit to 4 binary digits.

Step 2: Combine all the resulting binary groups (of 4 digits each) into a single binary number.

Example

2AB₁₆ = ?₂

Solution:

Step 1: Convert the decimal equivalent of each hexadecimal digit to 4 binary digits.

2₁₆ = 0010₂

A₁₆ = 1010₂

B₁₆ = 1011₂

Step 2: Combine the binary groups.


2AB₁₆ = 0010 (2)  1010 (A)  1011 (B)

Hence, 2AB₁₆ = 001010101011₂
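The 4-bit grouping shortcuts can be sketched the same way as the octal ones; function names are illustrative.

```python
HEX = "0123456789ABCDEF"

def binary_to_hex(bits: str) -> str:
    bits = bits.zfill(-(-len(bits) // 4) * 4)        # pad left to a multiple of 4
    return "".join(HEX[int(bits[i:i + 4], 2)]        # each 4-bit group becomes
                   for i in range(0, len(bits), 4))  # one hexadecimal digit

def hex_to_binary(hexa: str) -> str:
    # Each hex digit expands to exactly 4 binary digits ("04b" keeps zeros).
    return "".join(format(int(d, 16), "04b") for d in hexa)

print(binary_to_hex("11010011"))   # D3
print(hex_to_binary("2AB"))        # 001010101011
```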

Binary Arithmetic: Binary addition, Binary Subtraction, Binary Multiplication , Binary division

Binary addition

Rules

A B Sum Carry

0 0 0 0

0 1 1 0

1 0 1 0

1 1 0 1

1+1+1 → sum 1, carry 1

For Ex1-

111011+1101

111011

+ 1101

1001000

For Ex2-

11100+100

11100

+ 100

100000
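The worked additions can be checked with Python integers, where int(s, 2) parses a binary string and bin() renders the result back in binary:

```python
a, b = int("111011", 2), int("1101", 2)
print(bin(a + b))                       # 0b1001000, as in Ex1

c, d = int("11100", 2), int("100", 2)
print(bin(c + d))                       # 0b100000, as in Ex2
```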


Binary Subtraction

Rules

A B difference Borrow

0 0 0 0

0 1 1 1

1 0 1 0

1 1 0 0

For Ex1-

11101-1101

11101

-1101

10000

For Ex2-

1111101

- 1011

1110010

For Ex3-

110001

-1111

100010
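The same check works for the three subtraction examples:

```python
print(bin(int("11101", 2) - int("1101", 2)))     # 0b10000, as in Ex1
print(bin(int("1111101", 2) - int("1011", 2)))   # 0b1110010, as in Ex2
print(bin(int("110001", 2) - int("1111", 2)))    # 0b100010, as in Ex3
```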


Binary multiplication

Rules

A B Product

0 0 0

0 1 0

1 0 0

1 1 1

For Ex1- 1010*1001

1010 Multiplicand
x1001 Multiplier

1010 Partial Product


0000 Partial Product
0000 Partial Product
1010 Partial Product

1011010 Final Product

For Ex2- 1111*111

1111
X111
1111
1111
1111

1101001
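The shift-and-add view of the partial products above can be sketched as follows (the function name is illustrative): each 1-bit of the multiplier contributes a left-shifted copy of the multiplicand.

```python
def binary_multiply(multiplicand: str, multiplier: str) -> str:
    product = 0
    for i, bit in enumerate(reversed(multiplier)):
        if bit == "1":                              # a 1-bit contributes a
            product += int(multiplicand, 2) << i    # shifted partial product
    return format(product, "b")                     # 0-bits contribute nothing

print(binary_multiply("1010", "1001"))  # 1011010, as in Ex1
print(binary_multiply("1111", "111"))   # 1101001, as in Ex2
```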

Binary Division


Rule

0÷1=0
1÷1=1

The division process is performed in a manner similar to decimal division. The rules for binary division are:

1. Start from the left of the dividend.


2. Perform a series of subtractions in which the divisor is subtracted from the
dividend.
3. If subtraction is possible, put a 1 in the quotient and subtract the divisor from the
corresponding digits of dividend.
4. If subtraction is not possible (divisor greater than remainder), record a 0 in the
quotient.
5. Bring down the next digit to add to the remainder digits. Proceed as before in a
manner similar to long division

Ex- 11101011/ 110

Answer

Q = 100111

R=001
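The example can be checked with Python's divmod(), which returns the quotient and remainder in one call:

```python
quotient, remainder = divmod(int("11101011", 2), int("110", 2))
print(format(quotient, "b"))     # 100111
print(format(remainder, "03b"))  # 001
```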


Signed Binary Numbers ( Complement to represent negative numbers): Sign-Magnitude


Representation, 1’s complement representation, 2’s Complement representation

One’s Complement

Rule: replace each 1 with 0 and each 0 with 1 (1 → 0, 0 → 1).

Ex- 11001110

Answer 00110001

Two's Complement

Rule

(i) Determine the one's complement of the binary number.

(ii)Add one(1) to the one's complement, the result is the two's complement of the given binary
number.

Ex1- 11001110

11001110

00110001 (1's complement of 11001110)

+1 (add one (1) to the 1's complement)

00110010 (2's complement)

Ex2- 00111001

00111001

11000110 (1's complement of 00111001)

+1 (add one (1) to the 1's complement)

11000111 (2's complement)
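Both complements can be sketched for fixed-width bit strings; the function names are illustrative, and the slice keeps the result at the original word width.

```python
def ones_complement(bits: str) -> str:
    return "".join("1" if b == "0" else "0" for b in bits)   # flip every bit

def twos_complement(bits: str) -> str:
    value = int(ones_complement(bits), 2) + 1        # rule (i), then rule (ii)
    return format(value, f"0{len(bits)}b")[-len(bits):]  # keep the word width

print(ones_complement("11001110"))  # 00110001, as in the 1's complement example
print(twos_complement("11001110"))  # 00110010, as in Ex1
print(twos_complement("00111001"))  # 11000111, as in Ex2
```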


Sign Magnitude representation

The Sign-Magnitude Method is quite easy to understand. In fact, it is simply an


ordinary binary number with one extra digit placed in front to represent the sign. If
this extra digit is a '1', it means that the rest of the digits represent a negative
number. However if the same set of digits are used but the extra digit is a '0', it
means that the number is a positive one.

Suppose we have an 8-bit register. This means that we have 7 bits which represent a number and the other bit to represent the sign of the number (the sign bit).

This is how +37 is represented: 0 0100101. The leading 0 means that the number is positive; the remaining seven digits, 0100101, represent 37. And this is how -37 is represented: 1 0100101, where the leading 1 marks the number as negative.
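The sign-magnitude encoding for the assumed 8-bit register (1 sign bit plus 7 magnitude bits) can be sketched as:

```python
def sign_magnitude(n: int) -> str:
    sign = "1" if n < 0 else "0"           # the extra leading sign bit
    return sign + format(abs(n), "07b")    # 7 magnitude bits, zero-padded

print(sign_magnitude(37))    # 00100101  (+37)
print(sign_magnitude(-37))   # 10100101  (-37)
```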


2’s Complement Arithmetic: Subtraction using 2’s complement, 2’s Complement Arithmetic:
Addition /subtraction in 2’s complement Representation

Subtraction using one’s complement method

Case1- To subtract a smaller number from a larger number

(i)Determine the 1’s complement of the smaller no.

(ii)Add this to the larger no.

(iii)Remove the carry and add it to the result

Subtract (1010)₂ from (1111)₂ using 1's complement:

  1111
+ 0101   (1's complement of 1010)
------
 10100

Remove the carry and add it to the result:

 0100
+   1   (end-around carry)
------
 0101

Case2- To subtract larger number from a smaller number

(i) Determine the 1’s complement of the larger no.


(ii) Add this to the smaller no.
(iii) The answer is the 1’s complement of the result and is opposite in sign.

Subtract (1010)₂ from (1000)₂ using 1's complement:

  1000
+ 0101   (1's complement of 1010)
------
  1101


The answer is the 1's complement of this result, with a negative sign:

i.e. -(0010)₂
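Both cases of the 1's complement method can be sketched for equal-width operands (the function name is illustrative; a leading "-" marks a Case 2 negative result):

```python
def subtract_ones_complement(a: str, b: str) -> str:
    width = len(a)
    mask = 2**width - 1
    total = int(a, 2) + (int(b, 2) ^ mask)   # add the 1's complement of b
    if total > mask:                         # Case 1: a carry was produced;
        return format((total - 2**width) + 1, f"0{width}b")  # end-around carry
    # Case 2: no carry; answer is the 1's complement of the result, negated
    return "-" + format(total ^ mask, f"0{width}b")

print(subtract_ones_complement("1111", "1010"))  # 0101, as in Case 1
print(subtract_ones_complement("1000", "1010"))  # -0010, as in Case 2
```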

Subtraction using two’s complement method

Case1- Subtraction of a smaller number from a larger number

(i) Determine the 2’s complement of smaller number


(ii) Add this to the larger number
(iii) Omit the carry

Subtract (1010)₂ from (1111)₂ using 2's complement:

2's complement of 1010: 0101 (1's complement) + 1 = 0110

  1111
+ 0110   (2's complement of 1010)
------
 10101

The carry is discarded; the answer is (0101)₂.

Case 2- Subtraction of a larger number from a smaller number

(i) Determine the 2's complement of the larger number
(ii) Add the 2's complement to the smaller number
(iii) There is no carry. The result is in 2's complement form and is negative (-)
(iv) Take the 2's complement of the result and change the sign

Subtract (1010)₂ from (1000)₂:

2's complement of 1010: 0101 (1's complement) + 1 = 0110

  1000
+ 0110   (2's complement of 1010)
------
  1110

There is no carry; the 2's complement of 1110 is 0010.

The answer is -(0010)₂
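The 2's complement method follows the same shape, but with no end-around carry: the carry, if present, is simply discarded. Function name illustrative, equal-width operands assumed.

```python
def subtract_twos_complement(a: str, b: str) -> str:
    width = len(a)
    mask = 2**width - 1
    comp_b = ((int(b, 2) ^ mask) + 1) & mask   # 2's complement of b
    total = int(a, 2) + comp_b
    if total > mask:                           # Case 1: discard the carry
        return format(total & mask, f"0{width}b")
    # Case 2: no carry; take the 2's complement of the result and negate
    return "-" + format(((total ^ mask) + 1) & mask, f"0{width}b")

print(subtract_twos_complement("1111", "1010"))  # 0101, as in Case 1
print(subtract_twos_complement("1000", "1010"))  # -0010, as in Case 2
```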
