
Information Technology

Contents

Unit I. 1.1 Introduction to Computers


1.1.1 Why do we need computers?
1.1.2 Hardware and Software
1.1.3 Types of computer
1.1.4 Advantages of computer
1.1.5 Disadvantages of computer
1.1.6 History of Computer
1.1.7 Uses of computers
1.1.8 Computer Virus
1.1.9 Block Diagram of Computer
1.1.10 Area of Application
1.2 Classification of Computers
1.2.1 Classification by Technology
1.2.2 Classification by Capacity
1.2.3 Classification by their Basic Operating Principle
1.3 Generation of Computer
1.4 Memory Units
1.4.1 Basic Units of Measurement
1.4.2 RAM, ROM, PROM, EPROM
1.5 Auxiliary Storage Devices-Magnetic Tape, Floppy Disk, Hard Disk.
1.6 Input and Output devices
1.6.1 Input Devices
1.6.2 Output Devices
Unit 2. Introduction to Computer Software
2.1 Introduction:
2.1.1 Software Fundamentals
2.2 Operating System (O.S)
2.3 Functions
2.3.1 Operating System as User Interface
2.3.2 OS is designed to serve two basic purposes
2.3.3 I/O System Management
2.3.4 History of Operating System
2.3.5 Operating System Services
2.3.6 Essential Properties of the Operating System
2.3.7 Why do we need operating systems
2.4 Classification
2.4.1 Parallel Systems
2.4.2 Distributed Systems
2.4.3 Clustered Systems
2.4.4 Real-Time Systems
2.4.5 Embedded Systems
2.5 Programming Languages
2.5.1 Machine Language
2.5.2 Assembly Language
2.5.3 Procedural Languages
2.5.4 Natural Programming Languages
2.5.5 Hypertext Mark-up Language
2.5.6 Extensible Mark-up Language (XML)
2.5.7 Object-Oriented Programming Languages
2.6 General Software features and Trends
2.7 Importance of Computers in Business
Unit 3. Computerization
3.1. Computer
3.1.1 Advantages and Disadvantages
3.1.2 Characteristics
3.1.3 Computer Languages
3.1.4 Computer Software
3.1.5 Operating System
3.1.6 Computer Security
3.2 Computerization:
3.2.1 Benefits of computerization for an individual and organization
3.2.2 Conceptualizing Computerization
3.2.3 A Social Informatics Perspective
3.3 Problems and Prospects:
3.3.1 Problems of insurance business
3.3.2 Problems with Online Shopping
3.3.3 Problems and prospects of higher education in India
3.4 Information Technology for achieving competitive edge in Business and
Industry
3.4.1 The Role of IT in Strategic Management
3.4.2 Uses of competitive intelligence
3.4.3 Strategies for Competitive Advantage
3.5 Infrastructure requirement
3.6 Selection of Hardware and Software
3.6.1 Selection of Hardware
3.6.2 Selection of Software
3.7 General Software features and Trends
3.7.1 Importance of Computers in Business
3.7.2 Effects of Technology on Business
3.7.3 The Business Perspective in CLOUD COMPUTING
3.7.4 Information Technology and Business-Level Strategy
3.7.5 IT and Corporate Strategy: The Status of Current Research
3.7.6 IT and Corporate Strategy: The CEO’s Perspective
3.7.7 Computer Technology in Business Environment
Unit I

Computer System

Objectives:

Upon successful completion of this chapter, students should be able to:

 Identify the need for computer hardware and software
 Describe the different generations of computers
 Outline the history of computers
 Identify supported applications
 Describe the digital computer system

1.1 Introduction to Computer:

The term computer is derived from the Latin word computare, which means to
calculate. A computer is an electronic machine devised for performing calculations and
controlling operations that can be expressed in either logical or numerical terms. In simple
terms, a computer is an electronic device that performs diverse operations with the help of
instructions to process information in order to achieve the desired results. Computer
applications extend to cover a huge area, including education, industry, government, medicine,
scientific research, etc.

Definition:

The computer is an electronic device that stores, retrieves, and processes data, and can
be programmed with instructions. A computer is composed of hardware and software, and
can exist in a variety of sizes and configurations.

Fig1. PC Computer

1.1.1 Why do we need Computers?

There are several different types of computers, each of which is geared towards a
specific audience. For example, there are desktops and laptops for home use along with
smaller, more easily portable computers for work or travel. In an educational setting, a
teacher or instructor may use computers to help convey information or to create an efficient
way for students to interact and learn valuable information. Another advantage of the
computer is enhanced connectivity, especially in mobile devices, which promotes sharing of
data and information.

Functionalities:

Any digital computer carries out five functions in broad terms:

 Takes data as input.
 Stores the data/instructions in its memory and uses them when required.
 Processes the data and converts it into useful information.
 Generates the output.
 Controls all the above four steps.
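Taken together, these five steps can be sketched as a tiny program. This is an illustrative sketch, not part of the original text; the function and variable names (`run_computer`, `memory`) are invented for illustration.

```python
# Illustrative sketch of the five functions of a digital computer.
# All names here are invented for illustration.

def run_computer(data):
    memory = {}                             # the storage unit
    memory["data"] = data                   # 1. takes data as input, 2. stores it
    memory["result"] = sum(memory["data"])  # 3. processes data into information
    print(memory["result"])                 # 4. generates the output
    return memory["result"]
    # 5. control: the program's own statement order sequences the steps above,
    #    just as the control unit sequences the real machine's steps.

run_computer([2, 3, 5])  # prints 10
```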

Fig2. Functions of computer

1.1.2 Hardware and Software:


The term hardware refers to the physical components of your computer such as the
system unit, mouse, keyboard, monitor etc.

Fig3. Hardware

Software is the set of instructions that makes the computer work. Software is held
either on your computer's hard disk, CD-ROM, DVD or on a diskette (floppy disk), and is loaded
(i.e. copied) from the disk into the computer's RAM (Random Access Memory) as and when
required.

Fig4. Software

Software is a set of programs, which is designed to perform a well-defined function.


A program is a sequence of instructions written to solve a particular problem.

There are two types of software:

 System Software
 Application Software

System Software:

System software is a collection of programs designed to operate, control, and
extend the processing capabilities of the computer itself. System software is generally
prepared by computer manufacturers. These software products comprise programs written
in low-level languages, which interact with the hardware at a very basic level. System
software serves as the interface between the hardware and the end users.

Ex: Operating System, Compilers, Interpreter, and Assemblers etc.

Features of system software are as follows:

 Close to system
 Fast in speed
 Difficult to design
 Difficult to understand
 Less interactive

 Smaller in size
 Difficult to manipulate
 Generally written in low-level language

Application Software:

Application software products are designed to satisfy a particular need of a particular


environment. All software applications prepared in the computer lab can come under the
category of Application software.

Application software may consist of a single program, such as Microsoft Notepad for
writing and editing simple text. It may also consist of a collection of programs, often called a
software package, which work together to accomplish a task, such as a spreadsheet package.

Ex:

 Payroll Software
 Student Record Software
 Inventory Management Software
 Income Tax Software
 Railways Reservation Software
 Microsoft Office Suite Software
 Microsoft Word
 Microsoft Excel
 Microsoft PowerPoint

Features of application software are as follows:

 Close to user
 Easy to design
 More interactive
 Slow in speed
 Generally written in high-level language
 Easy to understand
 Easy to manipulate and use
 Bigger in size and requires large storage space

Relationship between Hardware and Software:
 Hardware and software are mutually dependent on each other. Both of them
must work together to make a computer produce useful output.
 Software cannot be utilized without supporting hardware.
 Hardware without a set of programs to operate upon cannot be utilized and is
useless.
 To get a particular job done on the computer, the relevant software should be
loaded into the hardware.
 Hardware is a one-time expense.
 Software development is very expensive and is a continuing expense.
 Different software applications can be loaded on hardware to run different
jobs.
 Software acts as an interface between the user and the hardware.
 If hardware is the 'heart' of a computer system, then software is its 'soul'.
Both are complementary to each other.

1.1.3 Types of Computers:

Mini and Mainframe Computers:

 Very powerful, used by large organisations such as banks to control the entire business operation.
 Very expensive!

Fig5. Mainframe Computers

Personal Computers:

 Cheap and easy to use.
 Often used as stand-alone computers or in a network.
 May be connected to large mainframe computers within big companies.

Fig6. Personal Computer

1.1.4 Advantages of Computer:

The computer has made a vital impact on society. It has changed the way of life. The
use of computer technology has affected every field of life. People are using computers to
perform different tasks quickly and easily. The use of computers makes different tasks easier.
It also saves time and effort and reduces the overall cost of completing a particular task.

Many organizations are using computers for keeping records of their customers.
Banks are using computers for maintaining accounts and managing financial transactions,
and they also provide the facility of online banking. Customers can check their account
balance using the internet, and can also make financial transactions online. The transactions
are handled easily and quickly with computerized systems.

The following list demonstrates the advantages of computers in today's arena.

High Speed:
 A computer is a very fast device.
 It is capable of performing calculations on very large amounts of data.
 Computer speed is measured in microseconds, nanoseconds, and even
picoseconds.
 It can perform millions of calculations in a few seconds, a task that would take a
person many months to do by hand.

Accuracy:
 In addition to being very fast, computers are very accurate.
 The calculations are 100% error free.
 Computers perform all jobs with 100% accuracy, provided that correct input has
been given.

Storage Capability:
 Memory is a very important characteristic of a computer.
 A computer has much more storage capacity than human beings.
 It can store large amounts of data.
 It can store any type of data, such as images, videos, text, audio and many others.

Diligence:

 Unlike human beings, a computer is free from monotony, tiredness and lack of
concentration.
 It can work continuously without error or boredom.
 It can do repeated work with the same speed and accuracy.

Versatility:
 A computer is a very versatile machine.
 A computer is very flexible in performing the jobs to be done.
 This machine can be used to solve the problems related to various fields.
 At one instance, it may be solving a complex scientific problem and the very next
moment it may be playing a card game.

Reliability:
 A computer is a reliable machine.
 Modern electronic components have long lives.
 Computers are designed to make maintenance easy.

Automation:
 A computer is an automatic machine.
 Automation means the ability to perform a given task automatically.
 Once a program is given to the computer, i.e., stored in computer memory, the program
and its instructions can control execution without human interaction.

Reduction in Paper Work:


 The use of computers for data processing in an organization leads to a reduction in
paper work and speeds up processes.
 As data in electronic files can be retrieved as and when required, the problem of
maintaining large numbers of paper files is reduced.

Reduction in Cost:
 Though the initial investment for installing a computer is high, it substantially
reduces the cost of each of its transactions.

1.1.5 Disadvantages of computer:

The use of computers has also created some problems in society, which are as follows.

Unemployment:

 Different tasks are performed automatically by using computers. This reduces the need
for people and increases unemployment in society.

Wastage of time and energy:

 Many people use computers without a positive purpose.
 They play games and chat for long periods of time.
 This causes wastage of time and energy.
 The young generation is now spending more time on social media websites like
Facebook and Twitter, or texting their friends all night through smartphones, which is
bad for both their studies and their health.
 It also has adverse effects on social life.

Data Security:

 The data stored on a computer can be accessed by unauthorized persons through
networks. This has created serious problems for data security.

Computer Crimes:

 People use the computer for negative activities. They hack the credit card numbers of
people and misuse them, or they steal important data from big organizations.

Privacy violation:

 Computers are used to store personal data of people. The privacy of a person
can be violated if personal and confidential records are not protected properly.

Health risks:

 The improper and prolonged use of a computer can result in injuries or disorders of
the hands, wrists, elbows, eyes, neck and back.
 Users can avoid health risks by using the computer in a proper position.
 They must also take regular breaks when using the computer for long periods.
 It is recommended to take a couple of minutes' break after every 30 minutes of
computer usage.

Impact on Environment:

 Computer manufacturing processes and computer waste are polluting the
environment.
 Wasted computer parts can release dangerous toxic materials.
 Green computing is a method to reduce the electricity consumed and environmental
waste generated when using a computer.
 It includes recycling and regulating manufacturing processes.
 Used computers must be donated or disposed of properly.

The following list demonstrates further limitations of computers in today's arena.

No Intelligence:
 A computer is a machine that has no intelligence of its own to perform any task.
 Each instruction has to be given to the computer.
 A computer cannot take any decision on its own.

Dependency:
 A computer functions as per the user's instructions, so it is fully dependent on human beings.

Environment:
 The operating environment of the computer should be dust free and suitable.

No Feeling:
 Computers have no feelings or emotions.
 They cannot make judgements based on feeling, taste, experience, or knowledge, unlike
a human being.
1.1.6 History of Computer:

From the earliest times the need to carry out calculations has been growing. The
first steps involved the development of counting and calculation aids such as the counting
board and the abacus.

Pascal (1623-62) was the son of a tax collector and a mathematical genius. He
designed the first mechanical calculator (the Pascaline), based on gears. It performed addition and
subtraction.

Leibniz (1646-1716), a German mathematician, built the first calculator to do
multiplication and division. It was not reliable owing to the limited accuracy of contemporary parts.

Babbage (1791-1871) was a British inventor who designed an 'analytical engine'
incorporating the ideas of a memory and card input/output for data and instructions. Again,
the technology of the day did not permit the complete construction of the machine.

Babbage is largely remembered because of the work of Augusta Ada (Countess of
Lovelace), who was probably the first computer programmer.

Burroughs (1855-98) introduced the first commercially successful mechanical adding
machine, of which a million were sold by 1926.

Hollerith developed an electromechanical punched-card tabulator to tabulate the data
for the 1890 U.S. census. Data was entered on punched cards and could be sorted according to
the census requirements. The machine was powered by electricity. He formed the Tabulating
Machine Company, which later became part of International Business Machines (IBM). IBM is still one of
the largest computer companies in the world.

Aiken (1900-73), a Harvard professor, built the Harvard Mark I computer (51 ft long)
in 1944 with the backing of IBM. It was based on relays (which operate in milliseconds) as opposed to
gears. It required 3 seconds for a multiplication.

Eckert and Mauchly designed and built the ENIAC in 1946 for military computations. It used
vacuum tubes (valves), which were completely electronic (operating in microseconds), as
opposed to the relay, which was electromechanical.

It weighed 30 tons, used 18,000 valves, and required 140 kilowatts of power. It was 1000
times faster than the Mark I, multiplying in 3 milliseconds. ENIAC was a decimal machine
and could not be programmed without altering its setup manually.

Atanasoff had built a specialised computer in 1941 and was visited by Mauchly
before the construction of the ENIAC. He sued Mauchly in a case which was decided in his
favour in 1973.

Von Neumann was a scientific genius and was a consultant on the ENIAC project. He
formulated plans with Mauchly and Eckert for a new computer (EDVAC) which was to store
programs as well as data.

This is called the stored program concept and Von Neumann is credited with it.
Almost all modern computers are based on this idea and are referred to as von Neumann
machines.

He also concluded that the binary system was more suitable for computers since
switches have only two values. He went on to design his own computer at Princeton which
was a general purpose machine.

Alan Turing was a British mathematician who also made significant contributions to
the early development of computing, especially to the theory of computation. He developed
an abstract theoretical model of a computer, called a Turing machine, which is used to capture
the notion of computability, i.e. what problems can and what problems cannot be computed. Not
all problems can be solved on a computer.
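The Turing machine idea can be made concrete with a small simulator. This is an illustrative sketch, not from the text: the rule-table format, state names, and tape encoding are all assumptions chosen for simplicity.

```python
# Minimal Turing machine simulator (illustrative; names are invented).
# A machine is a table mapping (state, symbol) -> (new symbol, move, new state).

def run_turing_machine(rules, tape, state="start", blank="_", max_steps=10_000):
    cells = dict(enumerate(tape))   # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, blank)
        new_symbol, move, state = rules[(state, symbol)]
        cells[head] = new_symbol
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells)).strip(blank)

# Example machine: invert every bit of a binary string, then halt on the blank.
invert = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}

print(run_turing_machine(invert, "10110"))  # prints 01001
```

Even this toy machine shows Turing's point: a fixed table of state transitions, read one tape cell at a time, is enough to express any computation a real computer can perform.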

In the early days of mankind, man used to count heads of cattle by carving lines on
trees. Slowly these lines evolved into numbers. To perform calculations on numbers he started
inventing machines.

1. Abacus:

The abacus was probably the earliest of counting devices. It consists of a rectangular
wooden frame with two compartments, in which cross wires (or strings) are fixed; beads
inserted on the wires slide along them for counting, and there are a number of rows of beads.
Multiplication and division are done using repeated additions and subtractions. The abacus is
still in use in some parts of the world even today.

2. Napier’s logs and bones:

John Napier, a Scottish mathematician, invented logarithms. The use of logarithms
enabled him to transform multiplication and division problems into problems of addition and
subtraction. In the beginning he called logarithms 'artificial numbers', but later he named them
logarithms. Napier also invented a computing device consisting of sticks with numbers
carved on them. These sticks are called bones, as they were made of bone. These bones
helped a lot in multiplication involving large numbers.

3. Slide rule:

As the name indicates, the slide rule has one scale sliding within the other. Suppose
you want to add the numbers 3 and 5: set the '0' of the sliding scale against '3' on the fixed
scale, and read the fixed scale opposite '5' on the sliding scale; the reading there, 8, is the sum
of 3 and 5. (A practical slide rule carries logarithmic scales, so the same sliding operation
multiplies numbers.) The process of reading can be quick if you are trained in the use of the
slide rule.

4. Calculating machines and Pascal’s calculator:

A French mathematician, Blaise Pascal, invented a machine based on gear wheels. He
was the son of a tax collector who had to do a lot of calculations as part of his job, and Pascal
wanted to make the job easier by inventing a calculator. You might be familiar with the gear
wheels on your bicycle, which mesh with a driving chain. He used similar gear wheels
with ten teeth for each digit position. He fixed them together so that one wheel drove the
next: when the wheel corresponding to the units position rotated through ten teeth, it drove the
wheel corresponding to the next higher position by one tooth. Thus one could make calculations.
Pascal provided dials which indicated the numbers stored on each wheel, and used a suitable
'dialling system' to operate the gear wheels. Other people also made a number of such
calculators. Computer scientists honoured Pascal by naming a programming language, Pascal,
after him.

5. Babbage difference and analytical engines:

Babbage, a British national and the son of a wealthy banker, wanted to correct the
errors in the logarithm tables being used during his time. In 1822, he made a machine which
calculated the successive differences of expressions (x² + ax + b is an example of an
expression) and prepared tables which helped him in his calculations. The Royal Astronomical
Society awarded him a gold medal for his invention and granted a large sum of money to
carry out further work. He then wanted to make a more accurate calculating machine, called
"Babbage's Analytical Engine". The Analytical Engine was supposed to be very accurate, so
it needed many parts made with precision. He conceived that his machine would use input
devices, would have a processing part called the "mill" where calculations would be performed,
and would also incorporate a memory and output devices. Since he was about 100 years ahead
of his time, he could not get the parts needed for his machine, because there were no tools to
make such precision parts. He did a lot of work related to making precision parts and spent
all the grants (and much of his own money too) but failed in his attempt to build the machine.
He ultimately died a frustrated man.

6. Herman Hollerith's Machine:

Governments all over the world collect details about the number of people living in
their countries. This information helps the government in planning for the future. Sometimes
you find enumerators (people collecting such details) coming to your house with forms to
collect the details. This operation is called a "census" and is normally done once every 10
years. In the United States a census was carried out in 1880, and the U.S. Government was
still processing that census data even as the next census, due in 1890, approached. To process
the census of 1890 quickly, the Government announced a competition. Dr. Herman Hollerith
produced cards out of special paper pulp, designed punching machines to punch holes in the
cards to record census figures, and invented sorting machines to read such punched cards and
collect the data. He could complete the job within three years, achieving a speedup of about
three times.

7. ABC Computer:

In 1937, Dr. John Atanasoff, with the help of his assistant Clifford Berry, designed the
Atanasoff-Berry Computer (ABC). The machine laid the foundation for the development of the
electronic digital computer.

8. ENIAC - Electronic Numerical Integrator and Calculator:

In 1946, John Mauchly and Eckert completed the first large-scale electronic digital
computer, ENIAC. In this computer, each time the program was changed, the wiring had to be
completely rearranged. It weighed 30 tons, contained 18,000 vacuum tubes and occupied a
space of 30 × 50 feet.

9. EDSAC - Electronic Delay Storage Automatic Calculator:

Maurice V. Wilkes of Cambridge University completed EDSAC in 1949. EDSAC
was the first computer to operate on the stored program concept.

10. UNIVAC-I – Universal Automatic Computer:

After ENIAC became operational, Mauchly and Eckert formed their own company,
the Eckert-Mauchly Computer Corporation, in 1947. Immediately after this they started
the design of UNIVAC-I, which was purchased by the US Bureau of the Census. UNIVAC was the
first computer dedicated to business applications.

1.1.7 Uses of Computers:

Computers in Education:

 CBT (Computer Based Training):

* Computer Based Training (CBT) offers a low-cost solution to training needs where
you need to train a large number of people on a single subject.
* These programs are normally supplied on CD-ROM and combine text, graphics and
sound.
* Packages range from general encyclopaedias right through to learning a foreign
language.

Office Applications:

 Automated Production Systems:

* Many car factories are almost completely automated and the cars are assembled by
computer-controlled robots.
* This automation is becoming increasingly common throughout industry.

 Design Systems:

* Many products are designed using CAD (Computer Aided Design) programs to
produce exact specifications and detailed drawings on the computer before producing
models of new products.

 Stock Control:

* Stock control is ideal for automation, and in many companies it is now completely
computerized.
* The stock control system keeps track of the number of items in stock and can
automatically order replacements.

 Accounts / Payroll:

* In most large organizations the accounts are maintained by a computerized system.
* Due to the repetitive nature of accounts a computer system is ideally suited to this
task and accuracy is guaranteed.

Computers in Daily Life:

 Accounts
 Games
 Educational
 On-line banking
 Smart ID cards
 Supermarkets
 Working from home (Tele-working)
 Internet

1.1.8 Computer Virus:

What are computer viruses?

 Viruses are small programs that hide themselves on your disks (both diskettes and
your hard disk).
 Unless you use virus detection software, the first time you know you have a virus is
when it activates.
 Different viruses are activated in different ways.

How do viruses infect PCs?

 Viruses hide on a disk, and when you access the disk (either a diskette or another hard
disk over a network) the virus program will start and infect your computer.
 The worst thing about a computer virus is that it can spread from one computer to
another, either via the use of an infected floppy disk or over a computer network, including
the Internet.

How to prevent virus damage:

 There are a number of third party antivirus products available.


 Most of these are better than the rather rudimentary products available within DOS
and Windows, but of course you do have to pay for them!

 The main thing about your virus checker is that it should be kept up to date.
 Many companies supply updated disks on a regular basis or allow you to receive
updates through an electronic, on-line bulletin board.

1.1.9 Block Diagram of Computer:

Fig7. Computer Block Diagram

A computer can process data, pictures, sound and graphics, and can solve highly
complicated problems quickly and accurately.

Input Unit:

Computers need to receive data and instructions in order to solve any problem.
Therefore we need to input the data and instructions into the computer. The input unit
consists of one or more input devices. The keyboard is one of the most commonly used
input devices. Other commonly used input devices are the mouse, floppy disk drive, magnetic
tape, etc. All input devices perform the following functions.

 Accept the data and instructions from the outside world.


 Convert it to a form that the computer can understand.
 Supply the converted data to the computer system for further processing.

Storage Unit:

The storage unit of the computer holds data and instructions that are entered through
the input unit, before they are processed. It preserves the intermediate and final results before
these are sent to the output devices. It also saves data for later use. The various storage
devices of a computer system are divided into two categories.

1. Primary Storage: Stores and provides data very fast. This memory is generally used to
hold the program currently being executed in the computer, the data being received from the
input unit, and the intermediate and final results of the program. Primary memory is
temporary in nature: the data is lost when the computer is switched off. In order to store
data permanently, it has to be transferred to secondary memory. A very small portion
of primary storage is permanent in nature, e.g. ROM, which holds its data even when the
power is off. The cost of primary storage is higher compared to secondary storage, so most
computers have limited primary storage capacity.

2. Secondary Storage: Secondary storage is used like an archive. It stores programs,
documents, databases, etc. The programs that you run on the computer are first transferred to
primary memory before they are actually run. Whenever results are saved, they are again
stored in secondary memory. Secondary memory is slower and cheaper than primary
memory. Some of the commonly used secondary memory devices are the hard disk, CD,
etc.

Memory Size:

All digital computers use the binary system, i.e. 0’s and 1’s. Each character or a
number is represented by an 8 bit code.

 The set of 8 bits is called a byte.


 A character occupies 1 byte space.
 A numeric occupies 2 byte space.
 Byte is the space occupied in the memory.

The size of primary storage is specified in KB (kilobytes) or MB (megabytes).
One KB is equal to 1024 bytes and one MB is equal to 1024 KB. The size of the primary
storage in a typical PC usually starts at 16 MB. PCs having 32 MB, 48 MB, 128 MB or 256 MB
of memory are quite common.
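The unit arithmetic above is easy to verify in code. This is an illustrative sketch using the binary convention adopted in the text (1 KB = 1024 bytes, 1 MB = 1024 KB); the names are invented for illustration.

```python
# Memory-unit arithmetic under the 1 KB = 1024 bytes convention.

KB = 1024            # bytes per kilobyte
MB = 1024 * KB       # bytes per megabyte

def bytes_for_text(text):
    """Each character occupies 1 byte under the 8-bit coding described."""
    return len(text)

print(MB)                          # prints 1048576
print(16 * MB // KB)               # a 16 MB memory holds 16384 KB
print(bytes_for_text("computer"))  # prints 8
```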

Output Unit:

The output unit of a computer provides the information and results of a computation
to the outside world. Printers and the Visual Display Unit (VDU) are the most commonly used
output devices. Other commonly used output devices are the floppy disk drive, hard disk
drive, and magnetic tape drive.

Arithmetic Logical Unit:

All calculations are performed in the Arithmetic Logic Unit (ALU) of the computer. It
also does comparisons and takes decisions. The ALU can perform basic operations such as
addition, subtraction, multiplication, division, etc., and does logic operations such as >, <, =, etc.
Whenever calculations are required, the control unit transfers the data from the storage unit to
the ALU. Once the computations are done, the results are transferred back to the storage unit by
the control unit and then sent to the output unit for displaying.
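The two kinds of work the ALU does, arithmetic operations and logic (comparison) operations, can be mimicked with a small dispatch table. This is an illustrative sketch, not a real ALU; the `alu` function and its operator tables are invented for illustration.

```python
# Dispatching arithmetic and logic operations, as an ALU analogy.
import operator

ARITHMETIC = {"+": operator.add, "-": operator.sub,
              "*": operator.mul, "/": operator.truediv}
LOGIC = {">": operator.gt, "<": operator.lt, "=": operator.eq}

def alu(a, op, b):
    """Apply an arithmetic or comparison operation, like work handed to the ALU."""
    table = ARITHMETIC if op in ARITHMETIC else LOGIC
    return table[op](a, b)

print(alu(7, "+", 5))   # prints 12
print(alu(7, ">", 5))   # prints True
```

Just as in the block diagram, the caller plays the role of the control unit: it chooses the operands and the operation, while the `alu` function only computes.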

Control Unit:

It controls all other units in the computer. The control unit instructs the input unit
where to store the data after receiving it from the user. It controls the flow of data and
instructions from the storage unit to the ALU, and also controls the flow of results from the ALU to
the storage unit. The control unit is generally referred to as the central nervous system of the
computer, which controls and synchronizes its working.

Central Processing Unit:

The control unit and the ALU of the computer are together known as the Central
Processing Unit (CPU). The CPU is like the brain of the computer and performs the following functions:

 It performs all calculations.


 It takes all decisions.
 It controls all units of the computer.

A PC may have CPU-IC such as Intel 8088, 80286, 80386, 80486, Celeron, Pentium,
Pentium Pro, Pentium II, Pentium III, Pentium IV, Dual Core, and AMD etc.

1.1.10 Area of Application:

Digital Media:

Digital media, including both graphics and sound, have become central both to our
culture and our science. There are (at least) three general areas that might serve as a focus for
certificate students interested in these computer applications.

Graphics:

Courses for a graphics media track might include COS 426 (Computer Graphics) or
COS 429 (Computer Vision), plus COS 436 (Human Computer Interface Technology) or
COS 479 (Pervasive Information Systems). The choices are wide and will vary with the
student. Those interested in a graphics track for the applications certificate should see Prof.
Adam Finkelstein.

Music:

The collaboration between Music and Computer Science at Princeton has a long and
rich history. Specific cross-listed COS/MUS courses include MUS/COS 314 (Introduction to
Computer Music) and COS 325/MUS 315 (Transforming Reality by Computer). A music
track for the certificate might include one of these two, plus COS 436 (Human Computer
Interface Technology) or COS 479 (Pervasive Information Systems). Again, a wide range of
choices is possible.

Policy and Intellectual Property:

The legal and political aspects of digital media are becoming increasingly important
in our society. A track for the certificate that focused in this area might typically include COS
491 (Information Technology and The Law), plus any one of many other possible courses,
depending on the student’s particular interest.

1.2 Classification of Digital Computer System:

Computing machines can be classified in many ways and these classifications depend
on their functions and definitions. They can be classified by the technology from which they
were constructed, the uses to which they are put, their capacity or size, the era in which they
were used, their basic operating principle and by the kinds of data they process. Some of
these classification techniques are discussed as follows:

1.2.1 Classification by Technology:

This classification is a historical one and it is based on what performs the computer
operation, or the technology behind the computing skill.

Flesh: Before the advent of any kind of computing device at all, human beings performed
computation by themselves. This involved the use of fingers, toes and any other part of the
body.

Wood: Wood became a computing device when it was first used to design the abacus.
Schickard in 1623 and Poleni in 1709 were both instrumental in this development.

Metals: Metals were used in the early machines of Pascal, Thomas, and the production
versions from firms such as Brundsviga, Monroe, etc.

Electromechanical Devices: These included the differential analysers and the early
machines of Zuse, Aiken, Stibitz and many others.

Electronic Elements: These were used in the Colossus, ABC, ENIAC, and the stored
program computers. This classification does not really apply to developments of the last sixty
years, because several kinds of new electro-technological devices have been used since.

1.2.2 Classification by Capacity:

Computers can be classified according to their capacity. The term ‘capacity’ refers to the
volume of work or the data processing capability a computer can handle. Their performance
is determined by the amount of data that can be stored in memory, the speed of the computer's
internal operation, the number and type of peripheral devices, and the amount and type of
software available for use with the computer. The capacity of early generation computers was
determined by their physical size - the larger the size, the greater the volume. Recent
computer technology however is tending to create smaller machines, making it possible to
package equivalent speed and capacity in a smaller format. Computer capacity is currently
measured by the number of applications that it can run rather than by the volume of data it
can process. This classification is therefore done as follows:

1. Microcomputer - is the smallest of the digital computers. A MICROCOMPUTER or
PERSONAL COMPUTER, PC for short, is the most widely used, especially at home,
because of its affordable price and manageability.

 It consists of: CPU, Keyboard, Monitor, Printer, and the disk drives.
 Can only be used by one person at a time

Ex: Personal Computers, Workstations, Portable Computers

2. Minicomputer - smallest computer designed specifically for the multi-user
environment.

 Can allow several people to use the machine at the same time.
 Serves as a stand-alone computer.
 Supports 40 to 100 employees or remote terminals.
 Performs multi-tasking and allows many terminals to be connected
to its services.

The ability to connect minicomputers to each other and mainframes has popularized them
among larger businesses. This use is being challenged by the developments in the
microcomputers under a network. Minicomputers are still recognized as being able to
process large amounts of data.

3. Mainframe Computer - is another system that can be used in a multi-user
environment.

 Can serve more than 100 remote terminals. Mainframe computers are
large, general-purpose computers. They generally
require special attention and are kept in a controlled atmosphere. They
are multi-tasking and generally used in areas where large databases are
maintained, e.g. government departments and the airline industry.

Other types:

Supercomputers – are the fastest calculating devices ever invented. They operate at
speeds measured in nanoseconds and even in picoseconds.

Network Computers - are computers with minimal memory, disk storage and
processor power designed to connect to a network, especially the Internet. A network is
the coordinated system of linked computer terminals or minicomputers and
mainframes that may operate independently but also share data and other resources.

1.2.3 Classification by their Basic Operating Principle:

Using this classification technique, computers can be divided into Analog, Digital and
Hybrid systems. They are explained as follows:

(A) Analog Computers: Analog computers were well known in the 1940s although they are
now uncommon. In such machines, numbers to be used in some calculation were represented
by physical quantities - such as electrical voltages. According to the Penguin Dictionary of
Computers (1970), “an analog computer must be able to accept inputs which vary with
respect to time and directly apply these inputs to various devices within the computer which
performs the computing operations of additions, subtraction, multiplication, division,
integration and function generation….” The computing units of analog computers respond
immediately to the changes which they detect in the input variables. Analog computers excel
in solving differential equations and are faster than digital computers.

(B) Digital Computers: Most computers today are digital. They represent information
discretely and use a binary (two-step) system that represents each piece of information as a
series of zeroes and ones. The Pocket Webster School & Office Dictionary (1990) simply
defines Digital computers as “a computer using numbers in calculating.” Digital computers
manipulate most data more easily than analog computers. They are designed to process data
in numerical form and their circuits perform directly the mathematical operations of addition,
subtraction, multiplication, and division. Because digital information is discrete, it can be
copied exactly but it is difficult to make exact copies of analog information.

(C) Hybrid Computers: These are machines that can work as both analog and digital
computers.

1.3 Generation of Computer:

A. The Mechanical Era (1623-1945):

Trying to use machines to solve mathematical problems can be traced to the early
17th century. Wilhelm Schickhard, Blaise Pascal, and Gottfried Leibnitz were among
mathematicians who designed and implemented calculators that were capable of addition,
subtraction, multiplication, and division. The first multipurpose or programmable
computing device was probably Charles Babbage’s Difference Engine, which was begun in
1823 but never completed. In 1842, Babbage designed a more ambitious machine, called the
Analytical Engine but unfortunately it also was only partially completed. Babbage, together
with Ada Lovelace recognized several important programming techniques, including
conditional branches, iterative loops and index variables.

Babbage designed the machine which is arguably the first to be used in computational
science. In 1837, George Scheutz and his son, Edvard, began work on a smaller version of the
difference engine and by 1853 they had constructed a machine that could process 15-digit
numbers and calculate fourth-order differences. The US Census Bureau was one of the first
organizations to use the mechanical computers which used punch-card equipment designed
by Herman Hollerith to tabulate data for the 1890 census. In 1911 Hollerith’s company
merged with a competitor to found the corporation which in 1924 became International
Business Machines (IBM).

B. First Generation Electronic Computers (1937-1953):

These devices used electronic switches, in the form of vacuum tubes, instead of
electromechanical relays. The earliest attempt to build an electronic computer was by J. V.
Atanasoff, a professor of physics and mathematics at Iowa State in 1937. Atanasoff set out to
build a machine that would help his graduate students solve systems of partial differential
equations. By 1941 he and graduate student Clifford Berry had succeeded in building a
machine that could solve 29 simultaneous equations with 29 unknowns. However, the
machine was not programmable, and was more of an electronic calculator.

Fig8. First Generation Computer

A second early electronic machine was Colossus, designed by Tommy Flowers for the
British military in 1943. The first general-purpose programmable electronic computer was
the Electronic Numerical Integrator and Computer (ENIAC), built by J. Presper Eckert and
John V. Mauchly at the University of Pennsylvania. Research work began in 1943, funded by

the Army Ordinance Department, which needed a way to compute ballistics during World
War II. The machine was completed in 1945 and it was used extensively for calculations
during the design of the hydrogen bomb. Eckert, Mauchly, and John von Neumann, a
consultant to the ENIAC project, began work on a new machine before ENIAC was finished.
The main contribution of EDVAC, their new project, was the notion of a stored program.
ENIAC was controlled by a set of external switches and dials; to change the program
required physically altering the settings on these controls. EDVAC was able to run orders of
magnitude faster than ENIAC and by storing instructions in the same medium as data,
designers could concentrate on improving the internal structure of the machine without
worrying about matching it to the speed of an external control. Eckert and Mauchly later
designed what was arguably the first commercially successful computer, the UNIVAC, in
1952. Software technology during this period was very primitive.

C. Second Generation (1954-1962):

The second generation witnessed several important developments at all levels of
computer system design, ranging from the technology used to build the basic circuits to the
programming languages used to write scientific applications. Electronic switches in this era
were based on discrete diode and transistor technology with a switching time of
approximately 0.3 microseconds. The first machines to be built with this technology include
TRADIC at Bell Laboratories in 1954 and TX-0 at MIT’s Lincoln Laboratory. Index registers
were designed for controlling loops and floating point units for calculations based on real
numbers.

Fig9. Second Generation Computer

A number of high level programming languages were introduced and these include
FORTRAN (1956), ALGOL (1958), and COBOL (1959). Important commercial machines of
this era include the IBM 704 and its successors, the 709 and 7094. In the 1950s the first two
supercomputers were designed specifically for numeric processing in scientific applications.

D. Third Generation (1963-1972):

Technology changes in this generation include the use of integrated circuits (ICs),
semiconductor devices with several transistors built into one physical component;
semiconductor memories; microprogramming as a technique for efficiently designing
complex processors; and the introduction of operating systems and timesharing.

Fig10. Third Generation Computer

The first ICs were based on small-scale integration (SSI) circuits, which had around
10 devices per circuit (‘chip’), and evolved to the use of medium-scale integrated (MSI)
circuits, which had up to 100 devices per chip. Multi-layered printed circuits were developed
and core memory was replaced by faster, solid-state memories.
In 1964, Seymour Cray developed the CDC 6600, which was the first architecture to
use functional parallelism. By using 10 separate functional units that could operate
simultaneously and 32 independent memory banks, the CDC 6600 was able to attain a
computation rate of one million floating point operations per second (Mflops). Five years
later CDC released the 7600, also developed by Seymour Cray. The CDC 7600, with its
pipelined functional units, is considered to be the first vector processor and was capable of

executing at ten Mflops. The IBM 360/91, released during the same period, was roughly
twice as fast as the CDC 6600.
Early in this third generation, Cambridge University and the University of London
cooperated in the development of CPL (Combined Programming Language, 1963). CPL was,
according to its authors, an attempt to capture only the important features of the complicated
and sophisticated ALGOL. However, like ALGOL, CPL was large with many features that
were hard to learn. In an attempt at further simplification, Martin Richards of Cambridge
developed a subset of CPL called BCPL (Basic Combined Programming Language, 1967). In
1970 Ken Thompson of Bell Labs developed yet another simplification of CPL called simply
B, in connection with an early implementation of the UNIX operating system.

E. Fourth Generation (1972-1984):

Large scale integration (LSI - 1000 devices per chip) and very large scale integration
(VLSI - 100,000 devices per chip) were used in the construction of the fourth generation
computers. Whole processors could now fit onto a single chip, and for simple systems the
entire computer (processor, main memory, and I/O controllers) could fit on one chip. Gate
delays dropped to about 1ns per gate. Core memories were replaced by semiconductor
memories.

Fig11. Fourth Generation Computer

Machines with large main memories, like the CRAY 2, began to replace the older high-speed
vector processors such as the CRAY 1, CRAY X-MP and CYBER. In 1972, Dennis Ritchie
developed the C language from the design of CPL and Thompson’s B. Thompson and
Ritchie then used C to write a version of UNIX for the DEC PDP-11. Other developments in
software include very high level languages such as FP (functional programming) and Prolog
(programming in logic).

IBM worked with Microsoft during the 1980s to start what we can really call PC
(Personal Computer) life today. The IBM PC was introduced in October 1981 and it worked with
the operating system (software) called ‘Microsoft Disk Operating System’ (MS DOS) 1.0.
Development of MS DOS began in October 1980 when IBM began searching the market for
an operating system for the then proposed IBM PC and major contributors were Bill Gates,
Paul Allen and Tim Paterson. In 1983, Microsoft Windows was announced, and it has
witnessed several improvements and revisions over the last twenty years.

F. Fifth Generation (1984-1990):

Fig12. Fifth Generation Computer

This generation brought about the introduction of machines with hundreds of
processors that could all be working on different parts of a single program. The scale of
integration in semiconductors continued at a great pace and by 1990 it was possible to build
chips with a million components - and semiconductor memories became standard on all
computers. Computer networks and single-user workstations also became popular.

Parallel processing started in this generation. The Sequent Balance 8000 connected up
to 20 processors to a single shared memory module though each processor had its own local
cache. The machine was designed to compete with the DEC VAX-780 as a general purpose
Unix system, with each processor working on a different user’s job. However Sequent
provided a library of subroutines that would allow programmers to write programs that would
use more than one processor, and the machine was widely used to explore parallel algorithms
and programming techniques. The Intel iPSC-1, also known as ‘the hypercube’, connected
each processor to its own memory and used a network interface to connect processors. This
distributed memory architecture meant memory was no longer a problem and large systems

27
with more processors (as many as 128) could be built. Also introduced was a machine,
known as a data-parallel or SIMD where there were several thousand very simple processors
which work under the direction of a single control unit. Both wide area network (WAN) and
local area network (LAN) technology developed rapidly.

Most of the developments in computer systems since 1990 have not been fundamental
changes but have been gradual improvements over established systems. This generation
brought about gains in parallel computing in both the hardware and in improved
understanding of how to develop algorithms to exploit parallel architectures. Workstation
technology continued to improve, with processor designs now using a combination of RISC,
pipelining, and parallel processing.

Wide area networks, network bandwidth and speed of operation and networking
capabilities have kept developing tremendously. Personal computers (PCs) now operate with
processors running at gigahertz speeds, multi-gigabyte disks, hundreds of megabytes of RAM, colour
printers, high-resolution graphic monitors, stereo sound cards and graphical user interfaces.
Thousands of software packages (operating systems and application software) exist today and
Microsoft Inc. has been a major contributor.

Finally, this generation has brought about microcontroller technology. Microcontrollers
are ‘embedded’ inside other devices (often consumer products) so that they
can control the features or actions of the product. They work as small computers inside
devices and now serve as essential components in most machines.

1.4 Memory Units:

The computer system essentially comprises three important parts – input device, central
processing unit (CPU) and the output device. The CPU itself is made of three components
namely, the arithmetic logic unit (ALU), memory unit, and the control unit.
All storage devices are characterized with the following features:

 Speed
 Volatility
 Access method
 Portability
 Cost and capacity

1.4.1 Basic Units of Measurement:

All information in the computer is handled using electrical components like the
integrated circuits and semiconductors, all of which can recognize only two states – the presence
or absence of an electrical signal. The two symbols used to represent these states are 0 and 1,
and they are known as BITS (an abbreviation for BInary digiTS). 0 represents the absence of a
signal, 1 represents the presence of a signal. A BIT is, therefore, the smallest unit of data in a
computer and can store either a 0 or a 1.
Since a single bit can store only one of two values, larger values need combinations of bits;
two bits together, for example, give four unique combinations:
00 01 10 11

Bits are, therefore, combined into larger units in order to hold a greater range of
values.
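The relationship between the number of bits and the number of representable values can be checked with a short sketch (illustrative only, not part of the original text):

```python
from itertools import product

# With n bits there are 2**n unique combinations:
# one bit gives 2 values, two bits give 4, a byte (8 bits) gives 256.
for n in (1, 2, 8):
    print(n, "bit(s):", 2 ** n, "combinations")

# The four two-bit combinations listed in the text:
two_bit = ["".join(c) for c in product("01", repeat=2)]
print(two_bit)  # → ['00', '01', '10', '11']
```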

BYTES are typically a sequence of eight bits put together to create a single computer
alphabetical or numerical character. More often referred to in larger multiples, bytes may
appear as Kilobytes (1,024 bytes), Megabytes (1,048,576 bytes), Gigabytes (1,073,741,824 bytes),
TeraBytes (approx. 1,099,511,000,000 bytes), or PetaBytes (approx.1,125,899,900,000,000
bytes).
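The multiples quoted above are successive powers of 1,024, which a few lines of code can verify (a sketch added here for illustration):

```python
# Each named unit is 1,024 times the previous one (powers of 2).
units = ["Byte", "Kilobyte", "Megabyte", "Gigabyte", "Terabyte", "Petabyte"]
for power, name in enumerate(units):
    print(f"1 {name} = {1024 ** power:,} bytes")
```

Running this reproduces the figures in the paragraph above, e.g. 1 Megabyte = 1,048,576 bytes and 1 Gigabyte = 1,073,741,824 bytes.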

Bytes are used to quantify the amount of data digitally stored (on disks, tapes) or
transmitted (over the internet), and are also used to measure the memory and document size.

1.4.2 RAM, ROM, PROM, EPROM:

The term computer memory is defined as one or more sets of chips that store
data/program instructions, either temporarily or permanently. It is a critical processing
component in any computer. PCs use several different types. They are:

•Main Memory / Primary Memory units
–The two most important are
•RAM (Random Access Memory)
•ROM (Read-Only Memory)
–They work in different ways and perform distinct functions
–CPU Registers
–Cache Memory

•Secondary Memory/Auxiliary Memory

Also termed ‘auxiliary’ or ‘backup’ storage, it is typically used as a supplement to
main storage. It is much cheaper than main storage and stores large amounts of data and
instructions permanently. Hardware devices like magnetic tapes and disks fall under this
category.

Computer’s memory can be classified into two types – RAM and ROM.

RAM or Random Access Memory is the central storage unit in a computer system. It
is the place in a computer where the operating system, application programs and the data in
current use are kept temporarily so that they can be accessed by the computer’s processor.
The more RAM a computer has, the more data a computer can manipulate.

Random access memory, also called the Read/Write memory, is the temporary
memory of a computer. It is said to be ‘volatile’ since its contents are accessible only as long
as the computer is on. The contents of RAM are cleared once the computer is turned off.

ROM or Read Only Memory is a special type of memory which can only be read and
contents of which are not lost even when the computer is switched off. It typically contains
manufacturer’s instructions. Among other things, ROM also stores an initial program called
the ‘bootstrap loader’ whose function is to start the computer software operating, once the
power is turned on.

Read-only memories can be manufacturer-programmed or user-programmed. While
manufacturer-programmed ROMs have data burnt into the circuitry, user-programmed ROMs
allow the user to load and then store read-only programs. PROM, or Programmable ROM, is
the name given to such ROMs.

Information once stored on the ROM or PROM chip cannot be altered. However,
another type of memory called EPROM (Erasable PROM) allows a user to erase the
information stored on the chip and reprogram it with new information. EEPROM
(Electrically EPROM) and UVEPROM (Ultra Violet EPROM) are two types of EPROMs.

Storage Vs. Memory:

RAM is volatile memory with a limited storage capacity.

Secondary/auxiliary storage is storage other than RAM. It includes devices that are
peripheral to the computer, connected to and controlled by it, to enable permanent storage of
programs and data.

Magnetic media were found to be fairly inexpensive and long-lasting and, therefore,
became the preferred choice for auxiliary storage. Floppy disks and hard disks fall under this
category. Newer forms of storage devices include optical and flash-based devices like CDs, DVDs,
pen drives, Zip drives etc.

1.5 Auxiliary Storage Devices-Magnetic Tape, Floppy Disk, Hard Disk.


Magnetic storage exploits the duality of magnetism and electricity. It converts
electrical signals into magnetic charges, captures the magnetic charge on a storage medium and
then later regenerates an electrical current from the stored magnetic charge. The polarity of the
magnetic charge represents the bit values zero and one.

Magnetic Disk:

A magnetic disk is a flat, circular platter with a metallic coating that is rotated beneath
read/write heads. It is a random access device; the read/write head can be moved to any location
on the platter.

Floppy Disk:

These are small removable disks that are plastic coated with magnetic recording
material. Floppy disks are typically 3.5′′ in size (diameter) and can hold 1.44 MB of data.
This portable storage device is a rewritable media and can be reused a number of times.

Floppy disks are commonly used to move files between different computers. The
main disadvantage of floppy disks is that they can be damaged easily and, therefore, are not
very reliable. The following figure shows an example of the floppy disk.

Floppy Disk

Hard Disk:

Another form of auxiliary storage is a hard disk. A hard disk consists of one or more
rigid metal plates coated with a metal oxide material that allows data to be magnetically
recorded on the surface of the platters. The hard disk platters spin at a high rate of speed,
typically 5400 to 7200 revolutions per minute (RPM). Storage capacities of hard disks for
personal computers range from 10 GB to 120 GB (one billion bytes are called a gigabyte).

Hard Disk
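The RPM figure above sets a basic performance bound: on average the disk must wait half a revolution for the requested sector to rotate under the head. A small illustrative calculation (added here as a sketch, not from the original text):

```python
# Average rotational latency = time for half a revolution.
# latency (ms) = 0.5 revolution / (rpm / 60 seconds) * 1000 = 30000 / rpm
def avg_rotational_latency_ms(rpm):
    """Average time (milliseconds) for a sector to reach the read/write head."""
    return 30_000 / rpm

for rpm in (5400, 7200):
    print(f"{rpm} RPM -> {avg_rotational_latency_ms(rpm):.2f} ms")
```

This shows why the faster 7200 RPM drives mentioned above respond sooner on average (about 4.17 ms) than 5400 RPM drives (about 5.56 ms).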

1.6 Input and Output devices:

1. Input Devices:

Any device that allows information from outside the computer to be communicated to
the computer is considered an input device. All computer input devices and circuitry must
eventually communicate with the computer in discrete binary form because CPU (Central
Processing Unit) of a computer can understand only discrete binary information. A few
common computer input devices are -

1. Punched Cards
2. Card Readers
3. Key Punching Machines
4. Keyboard
5. Mouse
6. Joystick
7. Trackball

8. Magnetic Tablet
9. Optical Recognition
10. Scanners

These can be mainly divided into two basic categories:

(a) Analog Device: An analog device is a continuous mechanism that represents information
with continuous physical variations.

(b) Digital Device: A digital device is a discrete mechanism which represents all values with
a specific number system.

Punched Cards:

In the early years of computer evolution the punched cards were the most widely used
input medium for most computer systems. These days they are not used in the computer industry,
as a number of faster input devices are available. There are two types of punched cards - one has
eighty columns and the other has ninety-six columns. Punched cards are rarely used today.
However, you may occasionally encounter them in large companies such as public utilities,
where they are still used for billing.

Advantages of Punched Cards:

 Cards are standardized and can be used with any hardware system without much
difficulty.
 Cards are easily read by humans and are easy to handle.
 Cards are multi-purpose media; they can be used for input, output or storage.

Limitations of Punched Cards:

 Due to low data density, card files are bulky. Such bulky card decks make processing
slow since a lot of paper has to be moved.
 A card can’t be erased and reused to enter new data, and cards must be stored and processed
in a designated order.
 Cards are quite costly and card readers operate at very slow speeds.

Card Reader:

A card reader is an input device. It transfers data from the punched card to the
computer system. The card reader will read each punch card by passing light on it. Each card
will be passed between a light source and a set of light detectors. The presence of a hole causes
the light to produce a pulse in the detector; these pulses are transformed into binary digits by
the card reader and sent to the computer for storage. Card readers can read up to 2000 cards
per minute. There are two types of card readers depending upon the mechanism used for
sensing the punched holes in the cards. The two types of card readers are Photoelectric Card
Reader and Wire Brush Card Reader.
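The sensing step described above can be sketched in a few lines. This is a hypothetical simplification (the real Hollerith column encoding is more involved): each column is modelled as a set of row positions containing holes, and light passing through a hole produces a pulse (1) at the detector.

```python
NUM_ROWS = 12  # a standard 80-column punched card has 12 punch rows

def sense_column(holes):
    """Return the detector pulses for one card column.

    `holes` is a set of row indices (0-11) where holes are punched;
    a hole lets light through, producing a 1; no hole produces a 0.
    """
    return [1 if row in holes else 0 for row in range(NUM_ROWS)]

# Example: holes punched in rows 0 and 9 of a column.
print(sense_column({0, 9}))  # → [1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0]
```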

Key - Punching Machines:

There is another device to punch data on a punch card - the key punch. It contains a
keyboard which looks like a typewriter keyboard. When characters are typed on the
keyboard, corresponding holes are punched in the blank card. These cards are then sent
to the card reader to feed the information to the computer. The commonly used key punch
machines have the following components.

 Keyboard
 Card hopper
 Punching station
 Backspace key
 Card stacker
 Column indicator
 Program control unit
 Program drum
 Reading section
 Switches
 Printing mechanism etc.

Keyboard:

Keyboard is the most popular and widely used device for entering data and
instructions in a computer system. A keyboard is similar to the keyboard of a typewriter. A
keyboard has alphabets, digits, special characters and some control keys. A keyboard also has
cursor control keys and function keys. Function keys allow the user to enter frequently used

operations in a single keystroke, and cursor - control keys can be used to select displayed
objects. Some of the special keys on a keyboard are given below:

 Arrow Keys - To move the cursor in the up, down, left and right directions in a
document.
 Backspace key - To delete the character on the left of the cursor.
 Caps Lock - To capitalise letters.
 Del - To delete the character from the current position of the cursor.
 End - To move the cursor to the end of the line.
 Enter - To start a new paragraph in a document.
 Esc - To cancel a command.
 Home - To move the cursor to the beginning of the line.
 Shift - To type the special characters above the numeric keys or to change the case of
the alphabet.
 Space bar - To enter a space.
 Tab - To enter multiple spaces between two words in a document.

Figure13. Keyboard

Mouse:

A mouse is a pointing device. It is used to position the cursor on the screen and to click
on displayed items. The mouse can be used to drag and drop objects on the screen. A mouse
has a wheel or roller on its underside, which is used to detect the amount and
the direction of movement. The distance moved is determined by the number of pulses
emitted by the mouse. A mouse is the usual pointing device for interacting with graphical user
interface (GUI) applications.

Figure14. Mouse

Joystick:

A joystick is a device used in video games. It is also a pointing device, which is used
to move the cursor position on the screen. A joystick consists of a small, vertical lever fitted
on a base. This lever is used to move the cursor on the screen. The screen-cursor movement
in any particular direction is measured by the distance that the stick is shifted or moved from
its centre position. The amount of movement is measured by the potentiometers that are
fitted at the base of the joystick. When the stick is released, a spring brings it back to its
centre position. A joystick has click, double-click and drag switch inputs. No matter how hard
the stick is pushed, the cursor does not go flying: it moves right or left, forward or backward,
in the chosen direction at a set speed, controlled by a switch underneath.

Figure15. Joystick

Trackball:

A trackball is also a pointing device. It consists of a ball which is fitted in a box. The
ball can be rotated with the fingers or palm of the hand to move the cursor on the screen. The
amount and direction of rotation can be detected by the potentiometers which are attached to
the ball. Trackballs are generally fitted on keyboards. Before the advent of the touch pad,
small trackballs were common on portable computers, where there may be no desk space on
which to run a mouse. The trackball was invented by Tom Cranston and Fred Longstaff as
part of the Royal Canadian Navy’s system in 1952, eleven years before the mouse was
invented.

Figure16. Trackball

Advantages of Trackball:

1. It can be placed on any type of surface, including our palm or lap.
2. It is stationary, so it does not need much space for its use.
3. Due to their compact size, trackballs are most suitable for portable computers.

Magnetic Tablet (DIGITIZER):

A magnetic tablet is also known as a digitizer, digitizing tablet, graphics tablet,
graphics pad or drawing tablet. These tablets may also be used to capture data or a
handwritten signature. A tablet can also be used to trace an image from a piece of paper
which is taped or otherwise secured to its surface. Capturing data in this way, either by
tracing or entering the corners of linear polylines or shapes, is called digitizing. A graphics
tablet consists of a flat surface upon which the user may “draw” or trace an image using an
attached stylus, a pen-like drawing apparatus. The image generally does not appear on the
tablet itself but, rather, is displayed on the computer monitor. Some tablets come as a
functioning secondary computer screen so that you can interact with images directly by
using the stylus; other tablets are

intended as a general replacement for a mouse as the primary pointing and navigation device
for desktop computers.

Advantages of Digitizer:

1. Interactive graphics is possible using a digitizer.
2. The sketches displayed on the monitor are neater and more precise than those on
paper.

Disadvantages:

1. They are quite costly.
2. They are suited only to applications which require high-resolution graphics.

Optical Recognition:

Optical recognition occurs when a device scans a printed surface and translates the
image the scanner sees into a machine-readable format that is understandable by the
computer. The three types of optical recognition devices are as follows-

1. Optical Character Recognition (OCR)


2. Optical Mark Recognition (OMR)
3. Optical Bar Recognition (OBR)

1. Optical Character Recognition (OCR): OCR is the mechanical or electronic
translation of scanned images of handwritten, typewritten or printed text into machine-encoded
text. It is widely used to convert books and documents into electronic files, to computerize a
record-keeping system in an office, or to publish text on a website. An OCR system
requires calibration to read a specific font. Some systems are capable of reproducing formatted
output that closely approximates the original scanned page, including images and other non-
textual components.

2. Optical Mark Recognition (OMR): Many traditional OMR devices work with a
dedicated scanner device that shines a beam of light onto the form paper. The contrasting
reflectivity at predetermined positions on a page is then utilized to detect the marked areas
because they reflect less light than the blank areas of the paper.

Figure17. OMR

Some OMR devices use forms which are pre-printed onto "transoptic" paper and
measure the amount of light which passes through the paper; thus a mark on either side of the
paper will reduce the amount of light passing through it. OMR is generally
distinguished from OCR by the fact that a complicated pattern-recognition engine is not
required. One of the most familiar applications of OMR is the use of HB pencil bubble
optical answer sheets in multiple-choice question examinations.

3. Optical Bar Recognition (OBR): This is a slightly more sophisticated type of optical
recognition. Bar codes are the vertical zebra-striped marks you see on most manufactured
retail products, everything from candy to cosmetics to comic books. The usual bar-code system in use is
called the Universal Product Code (UPC). Bar codes represent data such as the name of the
manufacturer and the type of product; the code is interpreted on the basis of the width of the
lines rather than the location of the bar code. The bar code does not carry the price of the
product. Bar-code readers are photoelectric (optical) scanners that translate the symbols in the
bar code into digital code. In these systems, the price of a particular item is set within the
store's computer. Once the bar code has been scanned, the corresponding price appears on the
sales clerk's point-of-sale (POS) terminal and on your receipt.
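A UPC-A code carries a check digit computed from its first eleven digits, which lets the reader catch misreads. As a small sketch, the standard weighted-sum calculation can be written in Python (the sample digits below are illustrative):

```python
def upc_check_digit(digits11: str) -> int:
    """Compute the UPC-A check digit for the first 11 digits of a bar code."""
    odd = sum(int(d) for d in digits11[0::2])   # positions 1, 3, 5, ... (1-based)
    even = sum(int(d) for d in digits11[1::2])  # positions 2, 4, 6, ...
    return (10 - (odd * 3 + even) % 10) % 10

# Example: the first 11 digits of an illustrative 12-digit UPC-A code
print(upc_check_digit("03600029145"))  # -> 2
```

A scanner that reads a full 12-digit code recomputes this digit and rejects the scan if it does not match the twelfth digit.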

Figure18. Bars Figure19. Barcode-reader

Scanner:

A scanner, also referred to as an image scanner, is an input device that optically
scans images, printed text, handwritten text or an object, and converts it to a digital image.
Common examples found in offices are variations of the desktop (or flatbed) scanner, where
the document is placed on a glass window for scanning. Hand-held scanners, where the
device is moved by hand, have evolved from text scanning and are used for industrial design,
reverse engineering, test and measurement, gaming and other applications.

Modern scanners typically use a Charge-Coupled Device (CCD) or a Contact Image
Sensor (CIS) as the image sensor, whereas older drum scanners use a photomultiplier tube as
the image sensor. Another category of scanner is digital camera scanners, which are based on
the concept of reprographic cameras. Due to increasing resolution and new features such as
anti-shake, digital cameras have become an attractive alternative to regular scanners. New
scanning technologies are combining 3D scanners with digital cameras to create full-colour,
photo-realistic 3D models of objects.

Figure20. Scanner

1.6.2 Output Devices:

An output device is a device which accepts results from the computer and displays
them to the user. The output device also converts the binary code obtained from the computer
into human-readable form. Output devices generate two types of output: hard copy and soft copy.

1. Hard Copy Output - this is computer output which is permanent in nature; it can be kept
in paper files or looked at later, when the person is not using the computer.
For example, output produced by printers or plotters on paper.

2. Soft Copy Output - this is computer output which is temporary in nature and vanishes
after use. For example, output shown on a terminal screen, or spoken by a voice
response system. The commonly used output devices are CRT/TFT screens, printers
and plotters.

Hard Copy Output Devices:

Hard copy is printed output for example, printouts, whether text or graphics, from
printers. The hard copy output devices are printers.

Printer:

A printer is an output device that prints characters, symbols and graphics on paper or
another hard copy medium. A variety of printers are available for various types of
applications; they differ in printing speed and approach. Printers can be classified as:

(a)Character Printers
(b)Line Printers
(c) Page Printers (Laser Printer)

There is yet another classification depending upon the technology used for printing.
According to this classification, printers are of two types:

(i) Impact Printers - print by hammering a ribbon against the paper, so the printing
mechanism makes contact with the paper.

(ii) Non-impact Printers - do not make contact with the paper; output is generated
without striking a ribbon, for example by spraying ink or using a laser.

(a) Character Printers:

Character printers print only one character at a time. They are low-speed printers and
are generally used for low volume printing work. Characters to be printed are sent serially to
the printer. Three of the most commonly used character printers are described as follows:

(i) Letter Quality Printers (Daisy Wheel): Letter-quality printers are used where good
printing quality is needed. These printers use a print wheel font known as a daisy wheel.
There is a character embossed on each petal of the daisy wheel. The wheel is rotated at a
rapid rate with the help of a motor. In order to print a character, the wheel is rotated; when the
desired character spins to the correct position, a print hammer strikes it to produce the output.

Thus, daisy wheel printers are impact printers. They have a fixed font type and cannot print
graphics.

Figure21. Daisy Wheel Printer

(ii) Dot-Matrix Printer (DMP): A dot-matrix printer prints each character as a pattern of dots.
The print head contains a vertical array of 7, 9, 14, 18 or even 24 pins. A character is printed
in a number of steps, one dot column of the matrix at a time. The selected
dots of a column are printed by the print head as it moves across a line. The shape of
each character is obtained from information held electronically in the printer.

Dot-matrix printers are faster than daisy wheel printers; their speed lies in the
range of 30-600 cps. Dot-matrix printers do not have a fixed character font, so they can print any shape
of character. This allows for many special characters and the ability to print graphics such as
graphs and charts.
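The dot-pattern idea can be sketched in Python. The 7x5 matrix below is a hypothetical rendering of the letter "A", not a real printer font; each 1 marks a pin that fires as the print head sweeps the character cell:

```python
# A hypothetical 7-row x 5-column dot pattern for the letter "A";
# each "1" marks a pin that fires as the print head crosses the cell.
LETTER_A = [
    "01110",
    "10001",
    "10001",
    "11111",
    "10001",
    "10001",
    "10001",
]

def render(pattern):
    """Turn the 0/1 rows into printable dots, as a dot-matrix head would."""
    return "\n".join(row.replace("1", "#").replace("0", ".") for row in pattern)

print(render(LETTER_A))  # prints a 7x5 grid of "#" and "." shaped like an A
```

Because the character is just a bit pattern rather than an embossed metal font, any shape, including graphics, can be produced the same way.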

Figure22. Dot-matrix Printer

(iii) Inkjet Printers: Inkjet printers are non-impact printers. They employ a different
technology to print characters on the paper: they print by spraying small drops of
ink onto the paper. A special type of ink with high iron content is used. Each droplet is charged
as it passes through a valve; it then passes through a region having horizontal and
vertical deflection plates. These plates deflect the ink drops to the desired spots on the paper
to form the desired character.

The concept of inkjet printing originated in the 19th century, and the technology was
first extensively developed in the early 1950s. Inkjet printers produce high-quality printed
output. The speed of inkjet printers lies in the range of 40-300 cps. They allow all types of
fonts and styles. Colour printing is also possible by using different coloured inks.

The advantage of an inkjet printer is that it is quieter in operation than an impact dot-matrix
printer. It can print finer, smoother details through higher print-head resolution.

Figure23. Inkjet Printer

(b) Line Printer:

The line printer is a form of high-speed impact printer in which one line of text is
printed at a time. They are mostly associated with the early days of computing, but the
technology is still in use. They are fast printers, with speeds in the range of 600 to 1200
lines per minute (approximately 10 to 20 pages per minute). The drum printer
and chain printer are the most commonly used line printers.

(i) Drum Printer: A drum printer consists of a solid, cylindrical drum which contains a
complete raised character set in each band around the cylinder. The number of bands is equal
to the number of printing positions, and each band contains all the possible characters. The drum
rotates at a rapid speed. There is a magnetically driven hammer for each possible print
position. The hammer hits the paper and the ribbon against the desired character on the drum
when it comes into printing position, causing the desired character to be recorded on the
continuous paper. The speed of a drum printer is in the range of 200 to 2000 lines per minute.

Figure24. Drum Printer

(ii) Chain Printer: Chain printers (also known as train printers) use a rapidly rotating chain,
called the print chain, which contains the characters. Each link of the chain carries a
character font. There is a magnetically driven hammer behind the paper for each print
position. The processor sends all the characters to be printed in one line to the printer; when
the desired character comes into the print position, the hammer strikes the ribbon and paper
against the character. A chain may contain more than one character set, for example 4 sets.
As compared to the drum printer, the chain printer has the advantage that the type chain can
usually be changed by the operator. The speed lies in the range of 400-2400 lines per minute.

(c) Page Printer (Laser Printer)

Page printers or laser printers are non-impact printers. The laser printer is a common type of
computer printer that rapidly produces high-quality text and graphics on plain paper. Page
printers are very costly and are economical only when the printed volume is very high. Page
printers are based on a number of technologies, such as electronics, xerography and lasers; these
techniques are called electro-photographic techniques. In these printers, an image is
produced on a photosensitive surface using a laser beam or other light source. The laser beam
is turned off and on under the control of a computer. The areas that are exposed to the laser
attract toner, which is generally an ink powder. Thereafter the drum transfers the toner to the
paper, and the toner is permanently fused onto the paper with heat or pressure in a fusing
station. After this the drum is discharged and cleaned so that it is ready for the next page.

The cost of this technology depends on a combination of factors, including the cost of
paper, toner and infrequent drum replacement, as well as the replacement of other consumables
such as the fuser assembly and transfer assembly.

The laser printer speed can vary widely, and depends on many factors, including the
graphic intensity of the job being processed. The fastest models can print over 200
monochrome pages per minute (12000 pages per hour).

Advantages:

 Very high speed.


 Low noise level.
 Low maintenance requirements.
 Very high image quality
 Excellent graphics capability
 A variety of type sizes and styles.

Figure25. Laser Printer

Plotters:

A plotter is a computer printing device for printing vector graphics. In the past,
plotters were widely used in applications such as CAD (computer-aided design), though they
have generally been replaced by wide-format conventional printers. It is now commonplace
to refer to such wide-format printers as "plotters".

A plotter is an output device used to produce hard copies of graphs and designs. Plotters
use an ink pen or inkjet to draw graphics and drawings; the pens may be monochrome or multi-
coloured. Pen plotters print by moving a pen or other instrument across the surface of a piece
of paper. Pen plotters can draw complex line art, including text, but do so slowly because of
the mechanical movement of the pens. They are often incapable of efficiently creating a solid
region of colour, but can fill an area by drawing a number of close, regular lines. For a long
time, plotters offered the fastest way to produce very large drawings.

Pen plotters have essentially become obsolete and have been replaced by large-
format inkjet printers and LED toner-based printers. Such devices may still understand the vector
languages originally designed for plotter use, because in many applications they offer a more
efficient alternative to raster data.

Plotters are primarily used in technical drawing and CAD applications, where they
have the advantage of working on very large paper sizes while maintaining high resolution.
Another use has been found by replacing the pen with a cutter; in this form plotters can
be found in many garment and sign shops. A niche application of plotters is in creating tactile
images for visually handicapped people on special thermal cell paper.

Figure26. Plotter

Soft Copy Output Devices:

Softcopy is data shown on a display screen or presented in audio or voice form. This
kind of output is not tangible; it cannot be touched. The softcopy devices are the CRT display
screen and the flat-panel display screen (for example, a liquid-crystal display).

Monitor is the most commonly used term for a softcopy output device. Monitors, also
known as display screens or simply screens, are output devices that show
programming instructions and data as they are being input, and information after it is
processed. The size of a computer screen is measured diagonally from corner to corner in
inches. For desktop microcomputers, the most common sizes are 13, 15, 17, 19 and 21 inches;
for laptop computers, 12.1, 13.3 and 14.1 inches. There are two types of monitors:

1. CRT (Cathode-Ray Tube)


2. Flat - panel or LCD (Liquid Crystal Display)
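Because screen size is quoted diagonally, it can be combined with the pixel resolution to find the pixel density of a display. A minimal sketch (the resolution and size figures below are illustrative, not from the text):

```python
import math

def pixels_per_inch(width_px, height_px, diagonal_in):
    """Pixel density of a screen whose size is quoted as a diagonal in inches."""
    diagonal_px = math.hypot(width_px, height_px)  # pixels along the diagonal
    return diagonal_px / diagonal_in

# An illustrative 15-inch monitor running at 1024 x 768
print(round(pixels_per_inch(1024, 768, 15), 1))  # -> 85.3
```

The same diagonal rule explains why two monitors of the same quoted size can look very different: a higher resolution packs more pixels into the same diagonal.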

CRT (Cathode-Ray Tube):

A CRT is a vacuum tube used as a display screen in a computer or video display
terminal. The same kind of technology is found not only in the screens of desktop computers
but also in television sets and flight information monitors in airports.

A CRT display comes in two varieties: monochrome (only one colour) and colour
(multicolour). Monochrome displays come in a single colour such as green, blue, yellow, orange,
pink, red or white, depending upon the type of phosphor material used.

Coloured displays are produced by using a combination of phosphors that emit
different coloured light. To produce a colour display, three phosphors (red, blue and green) are
used. A CRT is considerably cheaper (5-10 times) than a flat-panel display.

Figure27. CRT Monitor

Flat-panel/LCD (Liquid Crystal Displays):

As compared to CRTs, flat-panel displays are much thinner, lighter and consume
less power. Thus they are most useful for portable computers, although they are available for
desktop computers as well. Flat-panel displays are made up of two plates of glass separated
by a layer of a substance in which light is manipulated. The liquid crystal display
(LCD) technology is used, in which molecules of liquid crystals line up in a way that alters their
optical properties, creating images on the screen by transmitting or blocking out light.

Figure28. LCD

Flat-panel screens are either active-matrix or passive-matrix displays, depending on the
location of their transistors.

In an active-matrix display, also known as a TFT (thin film transistor) display, each
pixel on the screen is controlled by its own transistor. Active-matrix displays are much
brighter and sharper than passive-matrix displays, but they are more expensive.

In a passive-matrix display, a transistor controls a whole row or column of pixels. It
provides a sharp image for one-colour (monochrome) screens. The advantage is that it is less
expensive and uses less power than an active-matrix display.

Review Questions:

1. What are the types of computers?
2. Explain the history of the computer.
3. What is a computer virus? How does it affect the system?
4. Discuss the computer system with a block diagram of the computer.
5. What are the classifications of the digital computer system?
6. Explain the different generations of computers.
7. Write a short note on any two input devices and two output devices.
8. List out the characteristics of a computer.
9. Why do we need a computer in our day-to-day life?
10. Discuss hardware and software.

Unit II

Computer Software

Objectives:

The primary objective of this unit is to present a wide range of informative ideas about
fundamental computer software concepts:

 Computer software Fundamentals


 Operating System
 Functions
 Input/output Management
 Services
 Programming Languages used in computers
 Classification of Computer Systems

2.1 Introduction:

Computer hardware is only as effective as the instructions we give it, and those
instructions are contained in software. Software not only directs the computer to manage its
internal resources, but also enables the user to tailor a computer system to provide specific
business value. It is surprising to many people that at the corporate level, software
expenditures (development and purchase) typically are a much larger cost than is hardware.
In this chapter we learn that computer software, in its various forms and languages, can be
quite complex. But these complexities must be understood in order to truly be able to exploit
the power of modern information technologies. This chapter explains to the reader the
concepts of what software is, how it works, and how it is created. Along the way we provide
examples of software’s critical role in maintaining organizational competitiveness.

2.1.1 Software Fundamentals:

Software consists of computer programs, which are sequences of instructions for the
computer. The process of writing (or coding) programs is called programming, and
individuals who perform this task are called programmers.

Unlike the hardwired computers of the 1950s, modern software uses the stored
program concept, in which stored software programs are accessed and their instructions are

executed (followed) in the computer’s CPU. Once the program has finished executing, a new
program is loaded into main memory and the computer hardware addresses another task.

Computer programs include documentation, which is a written description of the


functions of the program. Documentation helps the user operate the computer system and
helps other programmers understand what the program does and how it accomplishes its
purpose. Documentation is vital to the business organization. Without it, if a key programmer
or user leaves, the knowledge of how to use the program or how it is designed may be lost.

The computer is able to do nothing until it is instructed by software. Although


computer hardware is, by design, general purpose, software enables the user to instruct a
computer system to perform specific functions that provide business value. There are two
major types of software: systems software and application software.

Systems software is a set of instructions that serves primarily as an intermediary


between computer hardware and application programs, and may also be directly manipulated
by knowledgeable users. Systems software provides important self-regulatory functions for
computer systems, such as loading itself when the computer is first turned on, managing
hardware resources such as secondary storage for all applications, and providing commonly
used sets of instructions for all applications to use. Systems programming is either the
creation or maintenance of systems software.

Application software is a set of computer instructions that provide more specific


functionality to a user. That functionality may be broad, such as general word processing, or
narrow, such as an organization’s payroll program. An application program applies a
computer to a certain need. Application programming is either the creation or the
modification and improvement of application software. There are many different software
applications in organizations today, as this chapter will discuss. For a marketing application,
for example, see the Market Intelligence box at the Web site.

Systems software is the class of programs that control and support the computer
system and its information-processing activities. Systems software also facilitates the
programming, testing, and debugging of computer programs. It is more general than
application software and is usually independent of any specific type of application. Systems
software programs support application software by directing the basic functions of the

computer. For example, when the computer is turned on, the initialization program (a systems
program) prepares and readies all devices for processing.

2.2 Operating System (O.S):

An operating system acts as an intermediary between the computer user and the computer
hardware. The purpose of an operating system is to provide an environment in which a user
can execute programs in a convenient and efficient manner.

An operating system is software that manages the computer hardware. The hardware
must provide appropriate mechanisms to ensure the correct operation of the computer system
and to prevent user programs from interfering with the proper operation of the system.

Definition:

An Operating system is a program that controls the execution of application programs


and acts as an interface between the user of a computer and the computer hardware.

A more common definition is that the operating system is the one program running at
all times on the computer (usually called the kernel), with all else being applications
programs.

An Operating system is concerned with the allocation of resources and services, such as
memory, processors, devices and information. The Operating System correspondingly
includes programs to manage these resources, such as a traffic controller, a scheduler,
memory management module, I/O programs, and a file system.

Features:

 An operating system is a program that acts as an interface between the software and the
computer hardware.
 It is an integrated set of specialised programs that are used to manage overall resources
and operations of the computer.
 It is specialised software that controls and monitors the execution of all other programs
that reside in the computer, including application programs and other system software.

Objectives:

 To make a computer system convenient to use in an efficient manner

 To hide the details of the hardware resources from the users
 To provide users a convenient interface to use the computer system
 To act as an intermediary between the hardware and its users and making it easier for the
users to access and use other resources
 To manage the resources of a computer system
 To keep track of who is using which resource, grant resource requests, account for
resource usage, and mediate conflicting requests from different programs and users
 To provide efficient and fair sharing of resources among users and programs

Fig37. OS Structural Diagram

Characteristics:

Memory Management -- keeps tracks of primary memory i.e. what part of it is in use by
whom, what part is not in use etc. and allocates the memory when a process or program
requests it.

Processor Management -- allocates the processor (CPU) to a process and deallocates


processor when it is no longer required.

Device Management -- keeps track of all devices. This is also called I/O controller that
decides which process gets the device, when, and for how much time.

File Management -- allocates and de-allocates the resources and decides who gets the
resources.

Security -- prevents unauthorized access to programs and data by means of passwords and
similar other techniques.

Job accounting -- keeps track of time and resources used by various jobs and/or users.

Control over system performance -- records delays between a request for a service and the
response from the system.

Interaction with the operators -- The interaction may take place via the console of the
computer in the form of instructions. The operating system acknowledges them, performs the
corresponding action, and informs the operator via a display screen.

Error-detecting aids -- Production of dumps, traces, error messages and other debugging
and error-detecting methods.

Coordination between software and users -- Coordination and assignment of compilers,


interpreters, assemblers and other software’s for the various users of the computer systems.

The operating system is the main software of the computer, as everything else runs on it in one
form or another.

 There are primarily three choices: Windows, Linux, and Apple OS X.

 Linux is free, but people generally do not use it for home purposes.

 Apple OS X works only on Apple Desktops.

 Windows 7 is very popular among desktop users.

 Most of the computers come pre-equipped with Windows 7 Starter edition.

 Windows 8 is recently introduced and is available in market.

 Windows 7 and Windows 8 come in multiple versions from starter, home basic,
home premium, professional, ultimate and enterprise editions.

 As edition version increases, their features list and price increases.

 Recommended - Windows 7 Home Premium.

2.3 Functions:

Operating system performs three functions:

1. Convenience: An OS makes a computer more convenient to use.

2. Efficiency: An OS allows the computer system resources to be used in an efficient manner.

3. Ability to Evolve: An OS should be constructed in such a way as to permit the effective


development, testing and introduction of new system functions without at the same time
interfering with service.

2.3.1 Operating System as User Interface:

Every general purpose computer consists of the hardware, operating system, system
programs, and application programs. The hardware consists of the memory, CPU, ALU, I/O
devices, peripheral devices and storage devices. System programs include compilers, loaders,
editors, the OS itself, etc. Application programs include business programs and database programs.

Fig38. Conceptual view of a computer system

Every computer must have an operating system to run other programs. The operating
system controls and coordinates the use of the hardware among the various system programs and
application programs for the various users. It simply provides an environment within which
other programs can do useful work.

The operating system is a set of special programs that run on a computer system that
allow it to work properly. It performs basic tasks such as recognizing input from the

keyboard, keeping track of files and directories on the disk, sending output to the display
screen and controlling peripheral devices.

2.3.2 OS is designed to serve two basic purposes:

1. It controls the allocation and use of the computing system's resources among the various
users and tasks.

2. It provides an interface between the computer hardware and the programmer that simplifies
and makes feasible the coding, creation and debugging of application programs.

The operating system must support the following tasks. The tasks are:

1. Provides the facilities to create and modify program and data files using an editor.

2. Access to the compiler for translating the user program from high level language to
machine language.

3. Provide a loader program to move the compiled program code to the computer's memory
for execution.

4. Provide routines that handle the details of I/O programming.

2.3.3 I/O System Management:

The module that keeps track of the status of devices is called the I/O traffic controller.
Each I/O device has a device handler that resides in a separate process associated with that
device.

The I/O subsystem consists of

1. A memory management component that includes buffering, caching and spooling.

2. A general device driver interface.

Assembler:

Input to an assembler is an assembly language program; output is an object program
plus information that enables the loader to prepare the object program for execution. At one
time, the computer programmer had at his disposal a basic machine that interpreted, through
hardware, certain fundamental instructions. He would program this computer by writing a
series of ones and zeros (machine language) and place them into the memory of the machine.

Compiler:

The high-level languages (examples are FORTRAN, COBOL, ALGOL and PL/I) are
processed by compilers and interpreters. A compiler is a program that accepts a source
program in a "high-level language" and produces a corresponding object program. An
interpreter is a program that appears to execute a source program as if it were machine
language. The same name (FORTRAN, COBOL, etc.) is often used to designate both a
compiler and its associated language.

A loader is a routine that loads an object program and prepares it for execution. There
are various loading schemes: absolute, relocating and direct-linking. In general, the loader
must load, relocate and link the object program. A loader is a program that places programs
into memory and prepares them for execution. In a simple loading scheme, the assembler
outputs the machine language translation of a program on a secondary device and a loader is
placed in core. The loader places into memory the machine language version of the user's
program and transfers control to it. Since the loader program is much smaller than the
assembler, this makes more core available to the user's program.
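As a loose analogy (not the mainframe toolchain described above), Python's built-in compile() and exec() can illustrate the translate, load and execute steps in miniature:

```python
# Python's built-in compile()/exec() loosely mirror the translate-load-execute
# cycle: source text is translated to a code object (the "object program"),
# which is then placed in a namespace ("memory") and run.
source = "result = 6 * 7"

code_object = compile(source, "<demo>", "exec")  # "compiler": source -> code object
namespace = {}                                   # "memory" for the loaded program
exec(code_object, namespace)                     # "loader" transfers control to it

print(namespace["result"])  # -> 42
```

The analogy is imperfect (Python code objects are interpreted bytecode, not relocated machine language), but the separation of translation from execution is the same idea.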

2.3.4 History of Operating System:

Operating systems have been evolving through the years. Following table shows the
history of OS.

Generation   Year         Electronic devices used     Types of OS and devices
First        1945-55      Vacuum tubes                Plug boards
Second       1955-1965    Transistors                 Batch system
Third        1965-1980    Integrated Circuits (IC)    Multiprogramming
Fourth       Since 1980   Large scale integration     PC

2.3.5 Operating System Services:

An operating system provides services to programs and to the users of those
programs. It provides an environment for the execution of programs. The services
provided differ from one operating system to another, but in all cases the operating system
makes the programming task easier.

The common service provided by the operating system is listed below.

1. Program execution

2. I/O operation

3. File system manipulation

4. Communications

5. Error detection

1. Program execution: Operating system loads a program into memory and executes the
program. The program must be able to end its execution, either normally or abnormally.

2. I/O Operation: I/O means any file or any specific I/O device. Program may require any
I/O device while running. So operating system must provide the required I/O.

3. File system manipulation: Program needs to read a file or write a file. The operating
system gives the permission to the program for operation on file.

4. Communication: Data transfer between two processes is sometimes required. The two
processes may be on the same computer, or on different computers connected through a computer
network. Communication may be implemented by two methods:

 Shared memory
 Message passing
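A minimal sketch of the message-passing model, illustrated here with Python threads and a thread-safe queue rather than with separate processes (the message contents are illustrative):

```python
import queue
import threading

# Message passing: the producer never touches the consumer's variables;
# data moves only through explicit messages placed in a shared mailbox.
mailbox = queue.Queue()

def producer():
    for item in ("hello", "world", None):  # None signals end of transmission
        mailbox.put(item)

received = []

def consumer():
    while True:
        msg = mailbox.get()
        if msg is None:
            break
        received.append(msg)

t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer)
t1.start(); t2.start()
t1.join(); t2.join()

print(received)  # -> ['hello', 'world']
```

In the shared-memory method, by contrast, both parties would read and write a common region of memory directly and synchronise their access themselves.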

5. Error detection: An error may occur in the CPU, in I/O devices or in the memory hardware. The
operating system constantly needs to be aware of possible errors, and should take the
appropriate action to ensure correct and consistent computing.
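Several of these services can be observed from an ordinary program. The Python sketch below touches program execution, file-system manipulation and error detection; the file name and messages are illustrative, not from the text:

```python
import os
import subprocess
import sys
import tempfile

# 1. Program execution: ask the OS to load and run another program.
out = subprocess.run([sys.executable, "-c", "print('child ran')"],
                     capture_output=True, text=True)
print(out.stdout.strip())

# 2 & 3. I/O operation and file-system manipulation: create, write,
# read and delete a file through operating-system services.
path = os.path.join(tempfile.gettempdir(), "os_services_demo.txt")
with open(path, "w") as f:
    f.write("stored via the file system")
with open(path) as f:
    print(f.read())
os.remove(path)

# 5. Error detection: the OS reports the failure of a request and the
# program can take appropriate action.
try:
    open(path)  # the file was removed above
except FileNotFoundError:
    print("error detected: file no longer exists")
```

Each call above is really a request to the operating system; the program itself never drives the disk or the processor directly.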

Operating system with multiple users provides following services.

 Resource Allocation
 Accounting
 Protection

A) Resource Allocation:

If there is more than one user or more than one job running at the same time, then resources must be
allocated to each of them. The operating system manages different types of resources; some,
such as main memory, CPU cycles and file storage, require special allocation code.

There are some resources which require only general request and release Operating
System Components Modern operating systems share the goal of supporting the system
components. The system components are:

 Process Management
 Main Memory Management
 File Management
 Secondary Storage Management
 I/O System Management
 Networking
 Protection System
 Command Interpreter System
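The resource-allocation service described above can be sketched with a counting semaphore; a minimal Python illustration in which the printer count and job names are invented:

```python
# Resource allocation sketch: a counting semaphore limits how many
# jobs may hold one of N identical resources (e.g. printers) at once.
import threading

N_PRINTERS = 2
printers = threading.Semaphore(N_PRINTERS)  # allocation bookkeeping
served = []

def job(name: str) -> None:
    printers.acquire()        # request: block until a printer is free
    try:
        served.append(name)   # use the allocated resource
    finally:
        printers.release()    # release so another job can be served

threads = [threading.Thread(target=job, args=(f"job{i}",)) for i in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sorted(served))         # all five jobs eventually got a printer
```

Real allocators for memory or CPU cycles are far more elaborate, but the request/use/release pattern is the same.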

2.3.6 Essential Properties of the Operating System:

1. Batch: Jobs with similar needs are batched together and run through the computer as a
group by an operator or automatic job sequencer. Performance is increased by attempting to
keep the CPU and I/O devices busy at all times through buffering, off-line operation, spooling
and multiprogramming. A batch system is good for executing large jobs that need little
interaction; jobs can be submitted and picked up later.

2. Time sharing: Uses CPU scheduling and multiprogramming to provide economical
interactive use of a system. The CPU switches rapidly from one user to another, i.e. the CPU
is shared among a number of interactive users. Instead of having a job defined by spooled
card images, each program reads its next control instructions from the terminal, and output is
normally displayed immediately on the screen.

3. Interactive: The user is online with the computer system and interacts with it via an
interface. Interactive use is typically composed of many short transactions, where the result
of one transaction may be unpredictable. Response time needs to be short, since the user
submits a request and waits for the result.

2.3.7 Why do we need operating systems?

Convenience:

 Provide a high-level abstraction of physical resources.


 Make hardware usable by getting rid of warts & specifics.
 Enable the construction of more complex software systems
 Enable portable code.
 MS-DOS version 1 boots on the latest 3+ GHz Pentium.
 Would games that ran on MS-DOSv1 work well today?

Efficiency:

 Share limited or expensive physical resources.


 Provide protection.

A computer operating system brings many basic functions to a computer:

»Process management

»Memory management

»File input/output

»Device input/output

2.4 Classification:

Different types of Operating Systems:

 Desktop PCs
 Parallel Systems
 Distributed Systems
 Clustered Systems
 Real-time Systems

 Embedded Systems

Desktop Systems:

 Personal Computers – computer system dedicated to a single user.


 I/O devices – keyboards, mice, display screens, small printers.
 User convenience and responsiveness.
 Can adopt technology developed for larger operating systems.
 Often individuals have sole use of the computer and do not need advanced CPU
utilization or protection features.
 May run several different types of operating systems (Windows, MacOS, UNIX,
Linux)

2.4.1 Parallel Systems:

 Multiprocessor systems with more than one CPU in close communication.


 Tightly coupled system – processors share memory and the internal clock;
communication usually takes place through the shared memory.

Advantages of parallel system:

 Increased throughput
 Economical
 Increased reliability

Reliability features of parallel systems:

 Graceful degradation – the ability to continue providing service proportional to the
level of surviving hardware.
 Fail-soft systems – systems designed for graceful degradation.

Symmetric multiprocessing (SMP):

 Each processor runs an identical copy of the operating system.


 The OS code is usually shared.
 Many processes can run at once without performance deterioration.
 Most modern operating systems have SMP support.
 OS has to cater for protection of data.

Asymmetric multiprocessing:

 Each processor is assigned a specific task; master processor schedules and farms work
to slave processors.
 More common in extremely large systems like mainframes with hundreds of
processors.

2.4.2 Distributed Systems:

 Distribute the computation among several physical processors.


 Loosely coupled system – each processor has its own local memory; processors
communicate with one another through various communications lines, such as high-
speed buses or network communication.

Advantages of distributed systems:

 Resources Sharing
 Computation speed up
 Load balancing
 Scalability
 Reliability
 Fail-Safe
 Communications

Other characteristics of distributed systems:

 May make use of commodity platforms.
 The OS has to cater for resource sharing.
 May be either client-server or peer-to-peer systems.

2.4.3 Clustered Systems:

 Clustering allows two or more systems to share storage.


 Provides high reliability.
 Asymmetric clustering: one server runs the application while the other servers stand by.
 Symmetric clustering: all N hosts are running the application.
 Used mainly for database applications where a file server exists.

2.4.4 Real-Time Systems:

 Often used as a control device in a dedicated application such as controlling scientific
experiments, medical imaging systems, industrial control systems, and some display
systems.
 Well-defined fixed-time constraints.
 Real-Time systems may have either hard or soft real-time.

Hard real-time:

 Secondary storage limited or absent; data stored in short-term memory or read-only
memory (ROM).
 Conflicts with time-sharing systems, usually not supported by general-purpose
operating systems.

Soft real-time:

 Limited utility in industrial control or robotics.


 Quality of Service
 Useful in applications (multimedia, virtual reality) requiring advanced operating-
system features.

2.4.5 Embedded Systems:

 Personal Digital Assistants (PDAs)


 Cellular telephones
 Issues:

– Limited memory
– Slow processors
– Small display screens

 Most features of a typical OS are usually omitted; any needed functionality falls to
the application developer.
 Emphasis is on I/O operations.
 Memory Management and Protection features are usually absent.

2.5 Programming Languages:

Programming languages provide the basic building blocks for all systems and
application software. Programming languages allow people to tell computers what to do and

are the means by which software systems are developed. This section will describe the five
generations of programming languages.

2.5.1 Machine Language:

Machine language is the lowest-level computer language, consisting of the internal


representation of instructions and data. This machine code—the actual instructions
understood and directly executable by the central processing unit—is composed of binary
digits. Machine language is the only programming language that the machine actually
understands. Therefore, machine language is considered the first-generation language. All
other languages must be translated into machine language before the computer can run the
instructions. Because a computer’s central processing unit is capable of executing only
machine language programs, such programs are machine dependent (non-portable). That is,
the machine language for one type of central processor may not run on other types.

Machine language is extremely difficult for programmers to understand and use. As a
result, increasingly user-friendly languages have been developed, ranging from first-
generation machine language to more humanlike natural languages. These user-oriented
languages make it much easier for people to program, but they cannot be executed by the
computer until the program is first translated into machine language. The set of
instructions written in a user-oriented language is called a source program. The set of
instructions produced after translation into machine language is called the object program.

Programming in a higher-level language (i.e., a user-oriented language) is easier and


less time consuming, but additional processor time is required to translate the program before
it can be executed. Therefore, one trade-off in the use of higher-level languages is a decrease
in programmer time and effort for an increase in processor time needed for translation.
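The source-to-object translation step can be glimpsed with Python's standard dis module, which disassembles the lower-level instructions Python produces from a source program (a sketch; Python compiles to bytecode for a virtual machine rather than to native machine code, but the principle is the same):

```python
# A glimpse of translation: a user-oriented source line is compiled
# into lower-level instructions before the machine can execute it.
import dis

def add(a, b):
    return a + b

# Show the lower-level instructions that `return a + b` became.
dis.dis(add)

# The same instructions are available programmatically:
ops = [ins.opname for ins in dis.get_instructions(add)]
print(ops)  # e.g. 'BINARY_OP' on Python 3.11+, 'BINARY_ADD' earlier
```

Each short source line expands into several instructions, which is the translation overhead the paragraph above describes.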

2.5.2 Assembly Language:

An assembly language is the next level up from machine language. It is still


considered a lower-level language but is more user-friendly because it represents machine
language instructions and data locations in primary storage by using mnemonics, or memory
aids, which people can more easily use. Assembly languages are considered second-
generation languages. Compared to machine language, assembly language eases the job of
the programmer considerably. However, each statement in an assembly language must still be
translated into a single statement in machine language, and assembly languages are still

hardware dependent. Translating an assembly language program into machine language is
accomplished by a systems software program called an assembler.

2.5.3 Procedural Languages:

Procedural languages are the next step in the evolution of user-oriented programming
languages. They are also called third-generation languages, or 3GLs. Procedural languages
are much closer to so-called natural language (the way we talk) and therefore are easier to
write, read, and alter. Moreover, one statement in a procedural language is translated into a
number of machine language instructions, thereby making programming more productive. In
general, procedural languages are more like natural language than assembly languages are,
and they use common words rather than abbreviated mnemonics. Because of this, procedural
languages are considered the first level of higher-level languages.

Procedural languages require the programmer to specify, step by step, exactly how the
computer must accomplish a task. A procedural language is oriented toward how a result is to
be produced. Because computers understand only machine language (i.e., 0s and 1s),
higher-level languages must be translated into machine language prior to execution. This
translation is accomplished by systems software called language translators. A language
translator converts the high-level program, called source code, into machine language code,
called object code. There are two types of language translators: interpreters and compilers.
The translation of a high-level language program to object code is accomplished by a
software program called a compiler, which translates the entire program at once. In contrast,
an interpreter translates and executes one source program statement at a time. Because this
translation is done one statement at a time, interpreters tend to be simpler than compilers.
This simplicity allows more extensive debugging and diagnostic aids to be available on
interpreters. For examples of FORTRAN, COBOL, and C, see Examples of Procedural
Languages on the Web site.
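The compiler-versus-interpreter distinction can be modeled with Python's built-in compile() and exec() functions (a rough sketch; the three-line source program is invented for illustration):

```python
# Modeling the compiler/interpreter distinction with Python built-ins.
source = "x = 2\ny = 3\nz = x * y"

# "Compiler" style: translate the entire program once, then execute it.
program = compile(source, "<demo>", "exec")   # whole-program translation
ns_compiled = {}
exec(program, ns_compiled)

# "Interpreter" style: translate and execute one statement at a time.
ns_interpreted = {}
for statement in source.splitlines():
    exec(compile(statement, "<demo>", "exec"), ns_interpreted)

print(ns_compiled["z"], ns_interpreted["z"])  # -> 6 6
```

Both strategies reach the same result; the difference lies in when translation happens and how much of the program is translated at once.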

Nonprocedural Languages:

Another type of high-level language, called nonprocedural languages, allows the user
to specify the desired result without having to specify the detailed procedures needed for
achieving the result. These languages are fourth-generation languages (4GLs). An advantage
of nonprocedural languages is that they can be
used by nontechnical users to carry out specific functional tasks. These languages greatly
simplify and accelerate the programming process, as well as reduce the number of coding
errors. The 4GLs are common in database applications as query languages, report generators,

and data manipulation languages. They allow users and programmers to interrogate and
access computer databases using statements that resemble natural language.

2.5.4 Natural Programming Languages:

Natural programming languages are the next evolutionary step. They are sometimes
known as fifth-generation languages, or intelligent languages. Translator programs to
translate natural languages into a structured, machine-readable form are extremely complex
and require a large amount of computer resources. Therefore, most of these languages are still
experimental and have yet to be widely adopted by industry.

We have now encountered the five generations of programming languages that


communicate instructions to the computer’s central processing unit. Table 4.2 summarizes
the features of these five generations. But we are not finished yet; there are a handful of
newer programming languages to look at before we finish this section.

Visual Programming Languages:

Programming languages that are used within a graphical environment are often
referred to as visual programming languages. These languages use a mouse, icons, symbols
on the screen, or pull-down menus to make programming easier and more intuitive. Visual
Basic and Visual C++ are examples of visual programming languages. Their ease of use
makes them popular with nontechnical users, but the languages often lack the specificity and
power of their nonvisual counterparts. Although programming in visual languages is popular
in some organizations, the more complex and mission-critical applications are usually not
written in visual languages.

2.5.5 Hypertext Mark-up Language:

Hypertext is an approach to data management in which data are stored in a network of


nodes connected by links (called hyperlinks). Users access data through an interactive
browsing system. The combination of nodes, links, and supporting indexes for any particular
topic is a hypertext document. A hypertext document may contain text, images, and other
types of information such as data files, audio, video, and executable computer programs. The
standard language the World Wide Web uses for creating and recognizing hypertext
documents is the Hypertext Mark-up Language (HTML). HTML gives users the option of
controlling visual elements such as fonts, font size, and paragraph spacing without changing
the original information. HTML is very easy to use, and some modern word processing
applications will automatically convert and store a conventional document in HTML.

Dynamic HTML is the next step beyond HTML. Dynamic HTML presents richly formatted
pages and lets the user interact with the content of those pages without having to download
additional content from the server. This functionality means that Web pages using Dynamic
HTML provide more exciting and useful information. Enhancements and variations of
HTML make possible new layout and design features on Web pages. For example, cascading
style sheets (CSSs) are an enhancement to HTML that act as a template defining the
appearance or style (such as size, color, and font) of an element of a Web page, such as a
box.

2.5.6 Extensible Mark-up Language (XML):

Extensible Mark-up Language (XML) is designed to improve the functionality of


Web documents by providing more flexible and adaptable information identification. XML
describes what the data in documents actually mean. XML documents can be moved to any
format on any platform without the elements losing their meaning. That means the same
information can be published to a Web browser, a PDA, or a smart phone, and each device
would use the information appropriately. Notice that HTML only describes where an item
appears on a page, whereas XML describes what the item is. For example, HTML shows only
that “Introduction to MIS” appears on line 1, where XML shows that “Introduction to MIS”
is the Course Title. IT’s About Business Box 4.2 shows the benefits that Fidelity has gained
by standardizing on XML.
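This contrast can be seen with Python's standard xml.etree.ElementTree module; the course title echoes the example above, while the course-number element is invented for the sketch:

```python
# XML tags describe what the data *is*, not where it appears on a page.
import xml.etree.ElementTree as ET

doc = """<course>
    <title>Introduction to MIS</title>
    <number>MIS 101</number>
</course>"""

root = ET.fromstring(doc)
# Any program, on any platform, can recover the meaning of each field.
print(root.find("title").text)   # -> Introduction to MIS
print(root.find("number").text)  # -> MIS 101
```

A browser, a PDA, or a phone could each render these fields differently while agreeing on what they mean.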

2.5.7 Object-Oriented Programming Languages:

Object-oriented programming (OOP) languages are based on the idea of taking a


small amount of data and the instructions about what to do with that data (these instructions
are called methods in object-oriented programming) and putting both of them together into
what is called an object. This process is called encapsulation. When the object is selected or
activated, the computer has the desired data and takes the desired action. This is what
happens when you select an icon on your GUI-equipped computer screen and click on it. That
is, in object-oriented systems, programs tell objects to perform actions on themselves. For
example, windows on your GUI screens do not need to be drawn through a series of
instructions. Instead, a window object could be sent a message to open at a certain place on
your screen, and the window will appear at that place. The window object contains the
program code for opening and placing itself.
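These ideas (objects, methods, encapsulation, and inheritance) can be sketched in Python; the Window class and its methods are illustrative, not from any real GUI toolkit:

```python
# Encapsulation: data and the methods that act on it live together
# in a class; each object then performs actions on itself.
class Window:
    def __init__(self, x: int, y: int) -> None:
        self.x = x          # encapsulated data
        self.y = y
        self.is_open = False

    def open_at(self, x: int, y: int) -> None:
        """Tell the window object to open itself at a position."""
        self.x, self.y = x, y
        self.is_open = True

class DialogWindow(Window):
    """Inheritance: a specialised class reuses Window's code."""
    def open_at(self, x: int, y: int) -> None:
        super().open_at(x, y)   # reuse the inherited behaviour
        self.modal = True       # then add its own

w = DialogWindow(0, 0)
w.open_at(100, 50)              # send the object a message; it acts on itself
print(w.x, w.y, w.is_open)      # -> 100 50 True
```

The window "contains the program code for opening and placing itself", exactly as the paragraph above describes.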

There are several basic concepts to object-oriented programming, which include
classes, objects (discussed above), encapsulation (discussed above), and inheritance. For a
more detailed discussion and examples of basic OOP concepts, see the Web site. The
reusability feature of object-oriented languages means that classes created for one purpose
can be used in a different object-oriented program if desired. For example, if a class has
methods that solve a very difficult computation problem, that problem does not have to be
solved again by another programmer. Rather, the class is just used in the new program. This
feature of reusability can represent a tremendous reduction in programming time within an
organization. A disadvantage of object-oriented programming, however, is that defining the
initial library of classes is very time-consuming, so that writing a single program with OOP
takes longer than conventional programming. Another disadvantage is that OOP languages,
like visual programming languages, are somewhat less specific and powerful, and require
more time and memory to execute than procedural languages. Popular object-oriented
programming languages include Smalltalk, C++, and Java. Because Java is a powerful and
popular language, we will look at it next in more detail.

Java is an object-oriented programming language developed by Sun Microsystems.


The language gives programmers the ability to develop applications that work across the
Internet. Java can handle text, data, graphics, sound, and video, all within one program. Java
is used to develop small applications, called applets, which can be included in an HTML page
on the Internet. When the user uses a Java-compatible browser to view a page that contains a
Java applet, the applet’s code is transferred to the user’s system and executed by the browser.
Java becomes even more interesting when one considers that many organizations are
converting their internal networks to use the Internet’s TCP/IP protocol (more about this in
Chapter 7). This means that with a computer network that runs the Internet protocol,
applications written in Java can be stored on the network, downloaded as needed, and then
erased from the local computer when the processing is completed. Users simply download the
Java applets as needed, and no longer need to store copies of the application on their PC’s
hard drive.

Java can benefit organizations in many ways. Companies will not need to purchase
numerous copies of commercial software to run on individual computers. Instead, they will
purchase one network copy of the software package, made of Java applets. Rather than pay
for multiple copies of software, companies may be billed for usage of their single network
copy, similar to photocopying. Companies also will find it easier to set information

technology standards for hardware, software, and communications; with Java, all applications
processing will be independent of the type of computer platform. Companies will have better
control over data and applications because they can be controlled centrally from the network
servers. Finally, software management (e.g., distribution and upgrades) will be much easier
and faster.

Unified Modeling Language (UML):

Developing a model for complex software systems is as essential as having a
blueprint for a large building. The UML is a language for
specifying, visualizing, constructing, and documenting the artefacts (such as classes, objects,
etc.) in object-oriented software systems. The UML makes the reuse of these artefacts easier
because the language provides a common set of notations that can be used for all types of
software projects.

2.6 General Software features and Trends:

The benefits of analysing software faults and failures have been widely recognized.
However, detailed studies based on empirical data are rare. One such study analysed the fault
and failure data from two large, real-world case studies, exploring:

 The localization of faults that lead to individual software failures, and
 The distribution of different types of software faults.

The results show that individual failures are often caused by multiple faults spread
throughout the system. This observation is important since it does not support several
heuristics and assumptions used in the past. In addition, it clearly indicates that finding and
fixing the faults that lead to such software failures in large, complex systems are often
difficult and challenging tasks despite the advances in software development. The results also
show that requirement faults, coding faults, and data problems are the three most common
types of software faults. Furthermore, contrary to popular belief, a significant percentage of
failures are linked to late life-cycle activities. The study also conducted intra- and inter-
project comparisons, as well as comparisons with the findings from related studies. The
consistency of several main trends across software systems suggests that these trends are
likely to be intrinsic characteristics of software faults and failures rather than project-specific.

2.7 Importance of Computers in Business:

Computers play an important role in the business environment, as every organisation
adopts them in some form or other to perform tasks effectively. In the past few years, rapid
developments in IT, particularly in communications, electronic service networks, and
multimedia, have opened up new opportunities for corporates. All these are contributing
towards new and effective ways of processing business transactions, integrating business
processes, transferring payments and delivering services electronically. Computers have
affected business in the following ways:

1. Office Automation:

Computers have helped automation of many industrial and business systems. They are
used extensively in manufacturing and processing industries, power distribution systems,
airline reservation systems, transportation systems, banking systems, and so on. Computer-
aided design (CAD) and computer-aided manufacture (CAM) are becoming popular among
large industrial establishments.

2. Stores large amounts of data and information:

Business and commercial organizations need to store and maintain voluminous


records and use them for various purposes such as inventory control, sales analysis, payroll
accounting, resources scheduling and generation of management reports. Computers can store
and maintain files and can sort, merge or update as and when necessary.

3. Improves Productivity:

With the introduction of word processing software, Computers have recently been
applied to the automation of office tasks and procedures. This is aimed at improving the
productivity of both clerical & managerial staff.

4. Sharing of data and information:

Due to networking of computers, where a number of computers are connected


together to share the data and information, use of e-mail and internet has changed the ways of
business operations.

5. Competitiveness:

Computers offer a reliable and cost-effective means of doing business electronically.


Routine tasks can be automated. Customers can be provided support round the clock,
24 hours a day. With advancements in the IT sector, corporates are spreading business
around the world, thus increasing their presence and entering new markets.

6. Security:

To provide security for data and important computer programs, almost every
organisation runs security programs to prevent illegal access to the company’s information
by unauthorized persons. The three fundamental attributes of a security program are
confidentiality, integrity and availability, which allow access only to authorized persons in
an organization.
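Of these three attributes, integrity is commonly checked in practice by comparing cryptographic digests; a minimal sketch using Python's standard hashlib module (the sample data is invented):

```python
# Integrity check: data whose hash matches the recorded value has not
# been altered; any change produces a completely different digest.
import hashlib

def digest(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

original = b"quarterly sales report"
recorded = digest(original)            # stored when the data was created

tampered = b"quarterly sales REPORT!"
print(digest(original) == recorded)    # -> True  (integrity intact)
print(digest(tampered) == recorded)    # -> False (data was modified)
```

Confidentiality and availability are enforced by other means (encryption, access control, redundancy); the digest comparison covers only integrity.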

7. Cost Benefits:

The extensive availability of internet-based information means that companies have a
wider choice of suppliers, which leads to more competitive pricing. Because of the internet,
the role of the middleman becomes less important, as companies can sell their products or
services directly to the customer.

8. Marketing:

Corporates engaged in e-business can take help of their respective websites to create
brand awareness of their products, thus, creating new avenues of promotion of their products.
In addition, companies’ websites can also provide better services such as after sales service to
the customer.

Review Questions:

1. Discuss software fundamentals.
2. What is an operating system? Explain its functions.
3. How is the I/O system managed?
4. Write about the history of operating systems.
5. Write about parallel and distributed systems.
6. What programming languages are available?
7. Explain software features and trends.
8. How are computers used in business?
9. What is an embedded system?
10. Write a short note on real-time systems.

Unit III

Objectives:

This unit provides knowledge of the contribution of computers to business, the challenges
involved, and the technology behind them.

 Computer Fundamentals
 Problems and Prospects
 Business Challenges
 Computer security
 Effects of Technology

3.1. Computer:

A computer is an electronic device that manipulates information, or data. It has the


ability to store, retrieve, and process data. You probably already know that you can use a
computer to type documents, send email, play games, and browse the Web. You can also use
it to edit or create spreadsheets, presentations, and even videos.

Definition:

A computer is a device that accepts information (in the form of digitalized data) and
manipulates it for some result based on a program or sequence of instructions on how the data
is to be processed. Complex computers also include the means for storing data (including the
program, which is also a form of data) for some necessary duration. A program may be
invariable and built into the computer (and called logic circuitry as it is on microprocessors)
or different programs may be provided to the computer (loaded into its storage and then
started by an administrator or user). Today's computers have both kinds of programming.

3.1.1 Advantages and Disadvantages:

Computers have made an impact in virtually all areas of our lives. They have changed
the way things are done by increasing accuracy and speed. We no longer need to rely on
manpower to execute repetitive and tedious work that can be automated by computers. They
have also drastically brought down the cost of doing business. Today, computers are a staple
in most disciplines including medicine, accounting, education, engineering and others.

Despite all the merits of computers, they also have their downsides. Read on to learn more
about the advantages and disadvantages of a computer.

Advantages:

1. Speed Up Work Efficiency:

This is by far the biggest advantage of using computers. They have replaced the use of
manpower in carrying out tedious and repetitive work. Work that can take days to complete
manually can be done in a few minutes using a computer. This is made possible by the fact
that data, instructions and information move very fast through the electronic circuits of
computers, which can execute billions of instructions per second.

2. Large and Reliable Storage Capacity:

Computers can store huge volumes of data. To put this into perspective, physical files that
can fill a whole room can be stored in one computer once they are digitized. Better yet,
access to the stored information is super-fast. It takes micro-seconds for data to be transferred
from storage to memory in a computer. The same cannot be said for the retrieval of physical
files. With a computer, you can store videos, games, applications, documents etc. that you
can access whenever required. Better yet, storage can be backed up fast and efficiently.

Fig1. Storage Device

3. Connection with Internet:

The Internet is probably the most outstanding invention in history. Computers allow you
to connect to the Internet and access this global repository of knowledge. With the Internet,
you can communicate faster with people across the globe. You can send email, hold voice
and video calls or use IM services. The Internet also allows for instant sharing of files. You
can also connect with friends and family on social networks and even learn a new language

online. The Internet is a great educational resource where you can find information on
virtually anything.

One of the biggest breakthroughs on the Internet is probably ecommerce. You can
actually shop in the convenience of your home and have the items delivered to your doorstep.

Fig2. Internet

4. Consistency:

You always get the same result for the same process when using a computer. For example,
if you create a document on one computer, you can open it on another without making any
special adjustments. This consistency makes it possible to save and edit a document from
different computers in different parts of the world. Collaboration is therefore easier.

Whatever job you need done, you can always rest assured that the computer will get it
just right. There will be no variations in results achieved from the same process. This makes
computers ideal for doing tedious and repetitive work.

Fig3. Consistency of Systems

Disadvantages:

1. Health Risk:
Improper and prolonged use of a computer might lead to disorders or injuries of the
elbows, wrist, neck, back, and eyes. As a computer user you can avoid these injuries by
working in a workplace that is well designed, using a good sitting position and taking proper

work breaks. Technology overload and computer addiction are the major behavioural health
risks. Addiction occurs when you become obsessed with a computer; technology overload
occurs when you are overwhelmed by computers and mobile phones. Both are avoidable if
the habits are noticed and followed up.

Fig4. Health Risk

2. Violation of Privacy:

When using the Internet on your computer, you run the risk of leaking your private
information. This is especially so if you happen to download malicious software into your
computer. Trojans and Malware can infiltrate your system and give identity thieves access to
your personal information. Of particular interest to identity thieves are your bank and credit
card details. Make sure to install reliable antivirus software to keep malware and Trojans at
bay. You should also avoid clicking on suspicious looking links when using the Internet.

Fig5. Privacy

3. Impact on Environment:

The manufacturing of computers and the disposal of computer waste are harmful to the
environment. When computer junk is discarded in open grounds, it releases harmful
chemicals such as lead and mercury, which are toxic to people and the environment.
Discarded computers can also cause fires.

Fig6. Environment

4. Data Security:

This is one of the most controversial aspects of computers today. The safety and integrity
of data are key for any business. However, data stored in a computer can be lost or
compromised in a number of ways. There are instances where the computer could crash
wiping out all data that had been stored therein. Hackers could also gain access into your
computer and compromise the integrity of your data. This is why you should always have a
backup. Moreover, you should put measures in place to keep your data safe from hackers.

Fig7. Security

3.1.2 Characteristics:

 Quick performance
 Quick data input and information retrieval
 Ability to store data
 Accurate results, which depend on the accuracy of data input.
 Reducing the human role, particularly in mechanised factories.
 Quick processing of arithmetic and logic operations.
 Continuous and persistent workability.
 Availability of a wide range of software and application programs that make
computers accessible without the need to study computer science.
 Prompt decision-making support to find proper and optimal solutions to specific
questions.

76
 Communicability through computer networks with other computers to exchange data
and information.

3.1.3 Computer Languages:

Machine Level Language: This is a low-level programming language. A computer or any
electronic device understands only this language, i.e. binary numbers: 0 and 1.

Assembly Level Language: This is a low-level programming language which is converted
into executable machine code by a utility program referred to as an assembler.

High Level Language: A high-level language is a programming language which is easily
understandable and readable by humans.

Interpreter: This is a converter which translates a high-level language program into a
low-level language program line by line.

Compiler: This is also a converter, which translates a whole high-level language program
into a low-level language program in one pass.
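As a small illustration (using Python, an interpreted high-level language; the text itself names no particular language), the statements below are far closer to human language than the machine code they are ultimately translated into, and an interpreter executes them one line at a time:

```python
# A high-level program: readable by humans, translated for the machine.
numbers = [10, 20, 30]    # store data in a list
total = sum(numbers)      # a built-in function hides the low-level loop
print("total =", total)   # the interpreter runs each line in turn
```

Running the same logic in assembly language would require explicitly loading each value into registers and adding them step by step, which is exactly the gap that high-level languages and their interpreters or compilers bridge.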

3.1.4 Computer Software:

Software is a logical program written to handle or solve a complex problem.

System Software: This is a special type of software which is responsible for managing the
whole computer system.

Application Software: This is a special type of software which is used to solve a particular
problem.

Embedded Software: This type of software is embedded in hardware to do a specific type of
job.

Proprietary Software: In general, this type of software must be purchased before use, for a
limited period or for a single user, as per the conditions set by the vendor of that particular
software.

Open Source Software: This type of software is freely available; we can modify it and use it
under the terms of the same license.

3.1.5 Operating System:

Windows: This is a proprietary operating system whose vendor is Microsoft, e.g. Windows 7,
Windows Vista, and Windows Server 2008.

Linux: This is an open source operating system, with distributions such as Ubuntu, Fedora,
Debian, Mandriva, and CentOS.

Linux (Ubuntu) Desktop Elements

File Management in Linux (Ubuntu)
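As a brief sketch of file management from the Ubuntu command line (the directory and file names here are hypothetical examples), the standard commands create, copy, rename, and list files:

```shell
# Create a directory and a file, then copy, rename, and list the contents
mkdir -p demo_docs                                 # make a working directory
echo "hello" > demo_docs/notes.txt                 # create a small text file
cp demo_docs/notes.txt demo_docs/notes_copy.txt    # copy the file
mv demo_docs/notes_copy.txt demo_docs/archive.txt  # rename (move) the copy
ls demo_docs                                       # list the directory contents
```

The same operations can also be performed graphically in Ubuntu's file manager; the commands above are simply the terminal equivalents.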

3.1.6 Computer Security:

Viruses and worms: These are computer programs that cause the computer system to
malfunction. A virus requires a carrier to spread, while a worm spreads by itself and does not
require any carrier.

Spoofing: This deceives computer users by impersonating a legitimate user or system, making
a fool of them.

Intrusion or Hacking: If a computer is used and controlled by unauthorized users, it is called
hacking, and a person who does this is called a hacker. The main purpose of hacking is to
steal private data or alter the actual data.

Denial of Service: The main aim of this attack is to bring down the targeted network and
make it deny service to legitimate users.

Sniffing: Data can be seen and watched as it travels from one computer to another.

3.2 Computerization:

To computerize (verb, used with an object) means:

1. To control, perform, process, or store (a system, operation, or information) by means of an
electronic computer or computers.

2. To equip with, or automate by, computers.

3.2.1 Benefits of computerization for an individual and organization:

For individuals, computers are the best source of information related to their studies and
jobs. They are also an effective way to perform different tasks using MS Office and other
supporting software that assists in developing projects in different fields. Not only that,
computers are also a source of entertainment of different kinds.

For organizations, computers help reduce the costs of data protection and data security.
They also keep an organization updated about what is happening in its different branches,
and hence about their performance. Computers are also a means of keeping a check on rivals
and of finding potential employees through online hiring, thus enabling efficient working.

3.2.2 Conceptualizing Computerization:

Conceptualizing the connections among doing work and using computers requires a
more detailed depiction of computing than is often developed in direct effects models (Kling
and Lamb 2000). Direct effects models of computing (and even models of mediating effects)
characterize computers as exogenous to their setting of use (Mackenzie 1992). That is, the
computer is seen as an external force, perhaps as an agent of change, and the features of this
tool are easily understood (both in terms of use and outcomes). This tool view suggests that a
computer-based system need not be connected to the scenes of its use (Orlikowski and Iacono
2001). This further suggests a form of stable meaning for a computer: tools do not change
over time, unless these changes are designed into new forms of the tool (such as additional
features in new releases of software). The implied stability of meaning inherent in a tool view
tends to minimize the changing nature of tool use due to the context of its use.

3.2.3 A Social Informatics Perspective:

Social informatics is the large, diverse, and growing body of research and studies that
examines social and organizational aspects of computerization, such as the roles of computers
in social and organizational change and the ways that the uses of computers are influenced by
social and organizational structures and actions (Kling 1999; Sawyer and Eckenfelder 2002;
Kling, Sawyer, and Rosenbaum 2003). Social informatics is not a theory or a method. Social
informatics provides a set of empirically grounded orienting principles that makes explicit
particular elements of the socio-technical perspective regarding developing, deploying, and
using computers.

Three broad findings that arise from the empirical basis of social informatics research
help focus our attention on work and computerization. The first finding is that computers are
embedded in, and also help to shape, the context of their use. For example, Ward, Wamsley,
Schroeder, and Robins (2000) investigate the assumption that computerization promotes
autonomy, organizational change, and greater responsiveness. Their analysis of computing
implementations and uses at the U.S. Federal Emergency Management Agency (FEMA)
helps to show how existing control structures within the agency shape budgeting for
computers. The resulting systems reinforce existing power structures in the organization.
They explain that the Reagan-Bush administrations' emphasis on national security led to the
allocation of most FEMA computing resources to civil defense-related projects and to the
location of the agency's information resources management office in the civil defense
division. This allocation process left other parts of the agency, including the disaster planning
division, suffering for lack of computing resources and unable to deal effectively with major
U.S. natural disasters such as Hurricane Hugo, Hurricane Andrew, or the Loma Prieta
earthquake. They conclude that government investment in computing does not automatically
produce greater responsiveness, as the larger social context shapes the programs in which the
computer systems are procured, implemented, and used.

IMPLEMENTING IT-BASED TRANSFORMATION OF THE ORGANIZATION


The discussion above focused on the changes caused by an individual system or a
reengineering effort within the organization. How do you implement the massive changes
required to use information technology to transform the organization and create new
structures and relationships within the firm and with external organizations? One answer is
to create a new kind of structure: the T-Form organization. The advantages of this structure
include:
1. A lean organization with the minimal number of employees necessary for the
business to function
2. A responsive organization that reacts quickly to threats from competitors and
changes in the environment
3. A minimum overhead organization
4. A structure with low fixed costs due to more virtual components, partnerships, and
subcontracting
5. An organization that is responsive to customers and suppliers
6. An organization that is more competitive than firms with traditional structures

7. An organization that allows its employees to develop their capabilities and
maximize their contribution to the firm
One of the major advantages of this kind of organization is its lack of a large number of
hierarchical levels. The firm is flat with few levels of management. This organization is
responsive because decisions are made quickly; large numbers of levels of managers do not
slow decision making. All these features add up to lower overhead than the traditional
bureaucratic organization. The end result should be a firm that is more competitive than a
traditional hierarchical organization because of its responsiveness and lower operating costs.

There are costs that go along with these benefits, including:
1. The organization has to invest in information technology.
2. The firm has to be able to manage IT.
3. Employees have to learn new technologies and constantly update their knowledge.
4. Managers have a wide span of control.
5. Managers have to supervise remote workers.
6. Firms have to manage close relationships with partners and companies in various
alliances.
Another cost of using technology to transform the organization is learning new technology.
Products and systems are constantly changing. New releases of PC software, like spreadsheet
packages, seem to average more than one per year. If you do not upgrade and learn new
systems, it eventually becomes difficult to share work with others. And, of course, you forgo the
improvements in the new version of the packages. The T-Form organization features a wide
span of control for most managers.
The idea is to substitute electronic for face-to-face communications. Implicit in a wide span
of control is a degree of trust in subordinates. Electronics will not substitute for the close
control one can exercise over subordinates when a manager has only five or six direct reports.
A recent news story on Japanese management showed a large number of workers arranged in
two rows of desks, each row facing the other. At the end of the row sat the workers'
supervisor with his desk perpendicular to the workers so he had them under constant view.
Evidently a common feature of Japanese organizations, this physical structure is probably the
ultimate in close supervision. The T-Form organization is at the opposite end of the spectrum.
It requires managers to place more responsibility with subordinates to do their jobs. Closely
related to the need to adopt a management philosophy stressing subordinate responsibility is
the problem of managing remote work. Companies are likely to eliminate physical offices for

employees who spend a great deal of time traveling or who work from a satellite office or
home. Work-at-home experiments have shown that some managers feel uncomfortable trying
to supervise subordinates they rarely see. Remote work also requires the manager to trust
subordinates, and of course, requires subordinates to act responsibly. Some subordinates have
reacted negatively to losing their offices and to using part of their homes as offices. They feel
the company is forcing its overhead costs onto them. Virtual offices will undoubtedly call for
new managerial skills and relationships between managers and the people reporting to them.
The final management cost of a technology-based organization is handling relationships with
external firms. These firms might be suppliers or customers, partners in a strategic alliance, or
governmental agencies. These partners are a vital part of your business, but they do not report
to you. Managers have to manage a cooperative arrangement without having the usual "tools"
given a manager such as reporting relationships and control over subordinates' salaries.

3.3 Problems and Prospects:

3.3.1 Problems of insurance business:

The insurance business in Bangladesh faces numerous problems every now and then.
To describe these problems, we use the service quality gap model, which makes it easier to
understand the problems of insurance in Bangladesh.

Service Quality Gap Model:

Managers in the service sector are under increasing pressure to demonstrate that their
services are customer-focused and that continuous performance improvement is being
delivered. Given the financial and resource constraints under which service organizations
must manage, it is essential that customer expectations are properly understood and measured
and that, from the customers' perspective, any gaps in service quality are identified. This
information then assists a manager in identifying cost-effective ways of closing service
quality gaps and of prioritizing which gaps to focus on - a critical decision given scarce
resources (SERVQUAL and Model of Service Quality Gaps: A Framework for Determining
and Prioritizing Critical Factors in Delivering Quality Services by Dr. Arash Shahin,
Department of Management, University of Isfahan, Iran). What makes managing customer
service different, as a marketing problem, from managing the standard elements of the
marketing mix (product, price, promotions, and place) is that customer service is typically

delivered by front-line employees. Personnel policies, thus, have immediate marketing
implications. Many retailers take this into consideration by treating employees as "internal
customers." According to this philosophy, management must "sell" their internal customers
on the company and its policies in order to induce front-line employees to deliver the desired
levels of customer service. Standard personnel policies that can facilitate customer service
and sell the "internal customers" include (a) employee screening and selection, (b) training,
(c) setting suitable reporting relationships, (d) goals and reward systems, (e) internal
communications, and (f) generally creating a "service" culture. The Gap Analysis Model goes
a step beyond simply reexamining each of the standard personnel policies in light of the
desired customer service. The model provides specific criteria concerning personnel and
management policies that complete the linkage between customer expectations and perceived
service delivery. In addition, the model provides a checklist of where breaks in the chain can
occur; using this checklist can provide a useful audit of service quality (see: A Service
Quality Audit).

1. Not Knowing What Customers Expect:

Based on interviews, the authors found that executives' perceptions of superior quality
service are largely congruent with customers' expectations. The gap between customers'
expectations and management perceptions is the result of the lack of a marketing research
orientation, inadequate upward communication, and too many layers of management.

2. The Wrong Service-Quality Standards:

Gap 2 arises when there is a discrepancy between what managers perceive that
customers expect and the actual standards that they (the managers) set for service delivery.
This gap may occur when management is aware of customers' expectations but may not be
willing or able to put systems in place that meet or exceed those expectations.

3. The Service-Performance Gap:

Organizational policies and standards for service levels may be in place, but are front-line
staff following them? A very common gap in the service industry, Gap 3 is the difference
between organizational service specifications and actual levels of service delivery. Service
specifications versus service delivery is the result of role ambiguity and conflict, poor

employee-job fit and poor technology-job fit, inappropriate supervisory control systems, lack
of perceived control and lack of teamwork.

4. When Promises Do Not Match Delivery:

Customers perceive that organizations are delivering low-quality service when a gap
appears between promised levels of service and the service that is actually delivered. This
gap is created when advertising, personal selling or public relations over-promise or
misrepresent service levels. Service delivery versus external communication may occur as a
result of inadequate horizontal communications and propensity to over-promise.

5. The discrepancy between customer expectations and their perceptions of the service
delivered:

This gap arises as a result of the influences exerted from the customer side and the shortfalls
(gaps) on the part of the service provider. In this case, customer expectations are influenced
by the extent of personal needs, word-of-mouth recommendation, and past service experiences.

6. The discrepancy between customer expectations and employees' perceptions:

This gap is the result of differences in the understanding of customer expectations by
front-line service providers.

7. The discrepancy between employee's perceptions and management perceptions:

This gap is the result of differences in the understanding of customer expectations between
managers and service providers.

Other Problems:

The service quality gap model does not cover all the problems of the insurance business.
There are some other problems too. These problems are given below:

Lack of trustworthiness: Lack of trustworthiness is one of the major problems of the
insurance business in Bangladesh. The lengthy process of getting payment after an incident
is the main reason for this lack of trust. Time-killing behavior in making payments after an
incident is reducing customers' trust in the insurance companies.

Low income of the people: Low income and purchasing power do not permit the people of
Bangladesh to go for an insurance policy. In practice, we can easily relate the factors
mentioned above: on the one hand, the lower income of the people creates a barrier to buying
an insurance policy; on the other hand, the lack of trustworthiness makes this
insurance-avoiding behavior more acute.

Unattractive offerings: Insurance companies are not providing attractive offerings to their
customers. All the offerings are similar; there is very little variation among the offerings of
different insurance companies.

Lack of information about the insurance companies: The insurance companies are not
delivering their information (regarding the company and its insurance policies) properly or
evenly, which is another problem of the insurance companies.

Inefficiency in problem solving: Inefficiency in problem solving is another problem of the
insurance companies. If any customer comes to them to solve some problems, they do not
solve those problems efficiently.

High service/processing cost: Insurance companies charge high service/processing costs to
their customers.

Less convincing salespeople: Some insurance companies appoint salespeople at very low
cost. These salespeople are not very convincing and cannot effectively persuade people to
purchase an insurance policy. This is another problem of insurance companies.

Lengthy process to get payment after incidents: Insurance companies follow a lengthy
process for paying out after incidents. Sometimes they take one or two years to pay their
customers. This is one of the major problems of insurance companies.

Steps to overcome the problems of insurance business:

The demographic trends suggest that as private insurance companies (both local and
multinational) have proliferated in Dhaka city, better educated and more affluent people have
gravitated to these insurance companies for insurance services. These people/clients are likely
to have better information about the quality of services provided by both public and private
insurance companies and their inclination to select private insurance companies suggests,

implicitly, that the quality of service is better at these private firms even though their (private
insurance companies) service cost is somewhat higher. Moreover, the many branch
operations of private insurance companies help people to evaluate them and to make an
insurance decision in favor of those which are trustworthy. But between private local and
foreign insurance companies, clients mostly consider foreign private insurance companies,
owing to their trustworthiness, experience in operation, and wide area coverage. The smaller
number of branches of the public insurance companies may be another prime reason why
they are not preferred by local clients. By definition, it might be more authentic if clients
were inclined towards the public insurance companies from a trustworthiness point of view,
but as the statistics favor foreign private insurance firms, we probably have to be satisfied by
saying that the choice is in many respects guided by clients' psychology of seeking better and
prompter services.

The incentive structure must also play a role in ensuring quality services delivered
by the public insurance companies. One solution is to tie part of the compensation of
insurance personnel in public companies to services rendered and feedback received from
clients. This, of course, is a complex issue and has implications for pay scale administration,
since public-sector staff, as government servants, are paid according to certain pay structures.
While beyond the scope of this paper, the authors feel that compensation flexibility is
necessary to reward those who are dedicated to providing quality insurance services. If
compensation adjustments cannot be incorporated, benefits - including promotion, transfers
to more valued branches, study leave, performance bonuses and the like - could be tied to a
performance evaluation mechanism. There must be a formal procedure for clients to evaluate
employees through a questionnaire-type performance appraisal form. A suggestion,
objection, or recommendation book can be introduced in the branch, where clients can even
complain about or express appreciation of a specific employee.

Public awareness and the transparency of high officials may have a positive impact
on this issue. A rating scale could also be established to rate the quality of services based on
an insurance company's facilities, past performance records, and clients' evaluations. The
rating factors and mechanisms would have to be developed on the basis of inputs from clients
and the profession. It would also be important to determine, specify, and strongly enforce the
legal consequences for tampering with client records and their evaluations. This process will
lead to qualifying and ranking each and every insurance company (private and public). We

think that insurance policy collection and profit margin should not be the only benchmarks
for positioning a specific insurance company. As the number of insurance companies
continues to grow, it is important to develop a national capability to periodically evaluate and
publicly disseminate (as the University Grants Commission did for the private universities)
the ratings or rankings of all insurance companies, so that each service provider's reputation
is widely known. Armed with this information, clients can make more informed choices.

Prospects of insurance business:

The insurance industry, as said earlier, is at the final stage of its transition. The
government has taken several steps to revitalize the sector and make it more vibrant and
operationally sound. However, amendments and initiatives cannot make an overnight change
in the sector. Although the Department of Insurance is going to be replaced by an
independent regulatory authority, this will not be fruitful until the authority is equipped with
technocrats at the policy level and adequate human resources at the operational level to take
control of the sector. The new regulatory body should devise some mechanism to eradicate
underhand commissions and so reduce the high procurement cost in general insurance
business. Professionalism at every level of management is very crucial for overall
development of the sector.

For efficient and prompt decision making, management should be given sufficient
delegation of power. The board should involve itself only in the strategic and policy aspects
of the company, without looking into day-to-day operations. All insurance companies should
have a sound HR policy that will attract qualified people to choose the profession as a
'career', not a mere 'job'. HR development programs should be part and parcel of regular
business operations for the enhancement of skills and the development of professionalism. A
good number of companies are still struggling for their survival, so the huge cost of IT
infrastructure is an additional burden for them. However, awareness should be built of the
effective use of IT infrastructure in MIS, which will ultimately bring positive results in the
future.

Last but not least, it is not the responsibility of the regulatory body alone to make
revolutionary change; rather, the respective boards, the management teams, and above all the
insured should come forward to bring the sector up to the global standard. The sooner it
happens, the better for the stakeholders in particular and the country in general.

3.3.2 Problems with Online Shopping:

Purchasing goods from the comfort of your own living room certainly is more
convenient than actually driving to a store, while offering a virtually unlimited array of
choices and the ability to compare prices. While online payment and security technology
have come a long way, you still may experience problems with online shopping from time to
time. This article covers some of the more common issues, such as getting the wrong item or
falling prey to online scams, and ways to minimize these potential pitfalls.

Are There Any Scams I Should Watch Out For When Buying Online? In addition to
general problems with online shopping pertaining to legitimate retailers, you also need to be
aware of the various kinds of scams targeting online consumers. Some suggestions for
avoiding scams are listed below:

Beware of "Gray Market" Items: So-called gray market goods may be illegal in the U.S.,
sold in a way that sidesteps regulations, or unintended for the U.S. market. You may get
something that doesn't work properly or that has instructions in another language. Also, gray
market merchandise typically lacks a U.S. warranty.

Be Skeptical of Service Contracts: Extended-service packages from retailers or third parties
are usually overpriced and generally not a good value.

Make Sure You Understand Shipping Charges: A retailer may try to squeeze a profit from
heavily discounted items by tacking on an extremely high shipping rate, most of it not
actually used for shipping.

Know How to Spot the "U.S. Warranty" Scam: Sometimes gray market goods are sold with
a warranty provided by a third party, but described only as a "U.S. warranty." This is not the
same as a manufacturer's warranty and typically provides an inferior level of protection.

The main problems faced by hotel businesses can be formulated as follows:

 Low room occupancy, especially in the regions.
 Few visits by foreign tourists (incoming tourism).
 Hotels with bank debts were experiencing difficulties, because they had taken
loans before the crisis.
 High accommodation rates.
 Shortage of highly skilled personnel.
 Low quality of services.

Analysis of the hotel business of the Republic has revealed the following trends of its
development: step-by-step recovery and gradual return to the pre-crisis levels of occupancy.

The hotel business of Kazakhstan is expanding annually; international operators entering the
market will contribute to increasing activity and the qualitative development of the hotel
services market, while local operators are developing the middle-class segment of the market.
In our opinion, in the future, “express-class” hotel rooms, guest houses with extended
services, and home-hotels (where foreigners buy out the rooms) are going to have the best
investment prospects.

3.3.3 Problems and prospects of higher education in India:

Critical issues in Indian higher education:

As India strives to compete in a globalised economy in areas that require highly
trained professionals, the quality of higher education becomes increasingly important. So far,
India’s large, educated population base and its reservoir of at least moderately well trained
university graduates have aided the country in moving ahead, but the competition is fierce;
from China in particular. Other countries are also upgrading higher education with the aim of
building world class universities. Even the small top tier of higher education faces serious
problems. Many IIT graduates, well trained in technology, have chosen not to contribute their
skills to the burgeoning technology sector in India; perhaps half leave the country
immediately upon graduation to pursue advanced studies abroad, and most do not return. A
stunning 86 per cent of Indian students in the fields of science and technology who obtain
degrees in the United States do not return home immediately following their graduation. A
body of dedicated and able teachers work at the IITs and IIMs, but the lure of jobs abroad and
in the private sector makes it increasingly difficult to lure the best and brightest to the
academic profession.

The present system of higher education does not serve the purpose for which it was
started. In general, education itself has become such a profitable business that quality is lost
in the increasing quantity of professional institutions, with the quota system and politicization
adding fuel to the fire of the spoils system, thereby increasing graduate unemployment
without quick relief to mitigate their suffering in the country's job market. So, the drawbacks
of the higher education system underscore the need for reforms to make it worthwhile and
beneficial to all concerned.

Challenges of present higher educational system in India:

Since independence, we have faced challenges in establishing a great and strong
education system. Various governments have come and gone. Of course, they tried to
establish new education policies, but it is very sad to say that these were not sufficient for our
country. We still face a lot of problems and challenges in our education system. India
recognises that the new global scenario poses unprecedented challenges for the
higher education system. The University Grants Commission has appropriately stated that a
whole range of skills will be demanded from the graduates of humanities, social sciences,
natural sciences and commerce, as well as from the various professional disciplines such as
agriculture, law, management, medicine or engineering. India can no longer continue the
model of general education that it has persisted in for the large bulk of the student
population. Rather, it requires a major investment to make human resources productive by
coupling the older general disciplines of humanities, social sciences, natural sciences and
commerce to their applications in the new economy and having adequate field based
experience to enhance knowledge with skills and develop appropriate attitudes.

Responding to these emerging needs, the UGC stated: "The University has a crucial
role to play in promoting social change. It must make an impact on the community if it is to
retain its legitimacy and gain public support". It seeks to do so by a new emphasis on
community based programmes and work on social issues. Concepts of access, equity,
relevance and quality can be operationalized only if the system is both effective and efficient.
Hence, the management of higher education and the total networking of the system has
become an important issue for effective management. The shift can occur only through a
systemic approach to change as also the development of its human resource, and networking
the system through information and communication technology.

Suggestions for improving quality of higher education:

There are some suggestions and expectations from government, industry, educational
institutions, parents and students for improving the quality of higher education:

1. Towards a Learning Society- As we move towards a learning society, every human activity will require contributions from experts, and this will place the entire sector of higher
education in sharp focus. Although the priorities, which are being assigned today to the task
of Education for All, will continue to be preponderant, the country will have to prepare itself
to invest more and more on higher education and, simultaneously, measures will have to be
taken to refine, diversify and upgrade higher education and research programmes.

2. Industry and Academia Connection- A connection between industry and academia is necessary to ensure that curricula and skills are in line with industry requirements. Skill building is crucial to ensuring the employability of graduates (keeping in view: knowledge + skills + global professional skills = good jobs).

3. Incentives to Teachers and Researchers- Industry and students are expecting specialized
courses to be offered so that they get the latest and best in education and they are also
industry ready and employable. Vocational and Diploma courses need to be made more
attractive to facilitate specialized programs being offered to students. Incentives should be
provided to teachers and researchers to make these professions more attractive for the
younger generation.

4. Innovative Practices- The new technologies offer vast opportunities for progress in all walks of life. They offer opportunities for economic growth, improved health, better service delivery, improved learning and socio-cultural advances. Though efforts are required to improve the country’s innovative capacity, these efforts should build on existing strengths in light of a new understanding of the research-innovation-growth linkage.

5. To mobilize resources- The decline in public funding in the last two plan periods has
resulted in serious effects on standards due to increasing costs on non-salary items and
emoluments of staff, on the one hand, and declining resources, on the other. Effective
measures will have to be adopted to mobilize resources for higher education. There is also a
need to relate the fee structure to the student’s capacity to pay, so that students at lower economic levels can be given highly or fully subsidised education.

6. Coming of Information Age- The world is entering into an Information Age and
developments in communication, information and technology will open up new and cost-
effective approaches for providing the reach of higher education to the youth as well as to
those who need continuing education for meeting the demands of explosion of information,
fast-changing nature of occupations, and lifelong education. Knowledge, which is at the heart
of higher education, is a crucial resource in the development of political democracy, the
struggle for social justice and progress towards individual enlightenment.

7. Student-Centred Education and Dynamic Methods- Methods of higher education also have to be appropriate to the needs of learning to learn, learning to do, learning to be and learning to become. Student-centred education and the employment of dynamic methods of education will require new attitudes and new skills from teachers. Methods of teaching through lectures will have to be subordinated to methods that lay stress on self-study, personal consultation between teachers and pupils, and dynamic sessions of seminars and workshops. Methods of distance education will have to be employed on a vast scale.

8. Public Private Partnership- PPP is most essential to bring in quality in the higher
education system. Governments can ensure PPP through an appropriate policy. University
Grants Commission and Ministry of HRD should play a major role in developing a
purposeful interface between the Universities, Industries and National Research Laboratories
(NRLs) as a step towards PPP. Funding to NRLs by the government should ensure the
involvement of institutions of higher education engaged in research activities to facilitate
availability of latest sophisticated equipment. There has been some effort both by the
government and the private education institutions to develop the teaching staff at various
levels. However, this needs to be intensified with appropriate attention to all related aspects in order to prepare a sufficient number of quality educational staff. Such efforts
need a very serious structuring for the research base institutions. We have to be optimistic
that private-public partnership and the Industry interface will take place in the field of
education at all levels, and particularly in the backward regions, which is the need of the
hour. To achieve excellence, we thus need to create a real partnership between government,
educators and industry– Partnerships that can provide our high-tech industries with skilled
workers who meet the standards of their industry.

9. To Provide Need Based Job-Oriented Courses- All-round development of personality is the purpose of education. But present-day education neither imparts true knowledge of life nor develops the talent of a student, by which one could achieve laurels in one's field of interest. Combinations of arts subjects with computer science, and of science with humanities or literature, should therefore be introduced, so that such courses are useful for students seeking jobs after graduation, which would reduce the unnecessary rush to higher education. The programme must be focused on graduate studies and research, and on developing strategies and mechanisms for the rapid and efficient transfer of knowledge and for its application to specific national and local conditions and needs. Meritorious doctoral students should be recognized through teaching assistantships with stipends over and above the research fellowships. Finally, only on the basis of knowledge can a vision of future life and work be formed; only on that vision can a broad ambition be fixed; and only with that ambition can one lead an interesting life, doing a satisfying job and making remarkable achievements in some field.

10. International Cooperation- Universities in India have been a primary conduit for the
advancement and transmission of knowledge through traditional functions such as research,
innovation, teaching, human resource development, and continuing education. International
cooperation is gaining importance as yet another function. With the increased development of
transport and communication, the global village is witnessing a growing emphasis on
international cooperation and action to find satisfactory solutions to problems that have
global dimensions and higher education is one of them.

11. Towards a New vision- India realizes, like other nations of the world, that humanity
stands today at the head of a new age of a large synthesis of knowledge, and that the East and
the West have to collaborate in bringing about concerted action for universal upliftment, and
lasting peace and unity. In this new age, great cultural achievements of the past have to be
recovered and enriched in the context of the contemporary advancement so that humanity can
successfully meet the evolutionary and revolutionary challenges and bring about a new type
of humanity and society marked by integrated powers of physical, emotional, dynamic,
intellectual, ethical, aesthetic and spiritual potentialities.

12. Cross Culture Programmes- After education, touring places in India and the world, as far as possible with the cooperation of the government, is necessary so that one can understand the people, cultures, arts, literature, religions, technological developments and progress of human society in the world.

13. Action Plan for Improving Quality- Academic and administrative audit should be
conducted once in three years in colleges by external experts for ensuring quality in all
aspects of academic activities. The self-finance colleges should come forward for
accreditation and fulfil the requirements of accreditation. Universities and colleges should
realise the need for quality education and come forward with action plan for improving
quality in higher educational institutions.

14. Individuality- Without individuality, one's life will be boring, monotonous and frustrating. This is mainly due to parental interference in the education of children. Parental guidance is necessary, but it should not interfere with the creativity or individuality of students. Also, in spite of the obsolete type of education system, some are achieving wonderful things in Sports, Music, Dance, Painting, Science and Technology in the world.
This is only due to the encouragement of the parents and some dedicated teachers in the
educational institutions. Higher education is necessary for one to achieve excellence in the line one is best at. But one should be selected for higher education on the basis of merit only.
Further, fees for education in general should not be high; especially, the fees for higher
studies should be within the reach of every class of people in the nation.

15. Privatization of Higher Education- In any nation education is the basic necessity for the
socio-economic development of the individuals and the society. In reality only 20% of the
population is educated in India. So, an improved standard of education should, as a first priority, be offered to the majority by government authorities with sincere political will. Also, privatization of higher education is absolutely necessary in a vast country like India, as the government alone cannot meet the demand.

16. Quality development- Quality depends on its all functions and activities: teaching and
academic programs, research and scholarship, staffing, students, building, facilities,
equipment, services to the community and the academic environment. It also requires that
higher education should be characterized by its international dimensions: exchange of
knowledge, interactive networking, mobility of teachers and students and international
research projects, while taking into account the national cultural values and circumstances.
The level of education and knowledge being imparted by many colleges...is not up to the
mark. Instead of concentrating on quantity, these institutions should concentrate on quality.
The approach of doctoral research in social sciences needs to be more analytical and
comparative and be related to society, policy and economy. A study conducted on Social
Science Research Capacity in South Asia (2002) showed that the share of the Indian
universities in the special articles published in the Economic and Political Weekly was only
about 25 percent. This too was dominated by only three universities, namely Jawaharlal
Nehru University, University of Mumbai & University of Delhi.

17. World Class Education- The Indian government is not giving priority to the development of standards in education. India should aspire to international standards in education. Many national universities in the USA, UK, Australia, etc. admit foreign students to higher education in their countries, as well as through correspondence courses. In the same way, Indian universities of world-class standing can also offer courses of study to foreign students, taking advantage of the globalization process. To achieve that goal, they should adopt a uniform international syllabus in their educational institutions.

18. Personality Development- Finally, education should be for the flowering of personality
but not for the suppression of creativity or natural skill. In the globalized world opportunities
for the educated people are naturally ample in scope. As a result business process outsourcing
(BPO) activities have increased competition in the world trade leading towards the
production of quality goods and their easy availability everywhere in the world market. That
is the way the world can be developed for peace, prosperity and progress by able and skilful
men.

19. Status of Academic Research Studies- If we look at the number of researchers engaged in Research and Development activities as compared to other countries, we find that India has merely 119 researchers per million population, whereas Japan has 5287 and the US has 4484. Even in absolute terms, the number of researchers in India is much smaller than in the US, China, Japan, Russia, and Germany. The number of doctoral degrees awarded in all subjects is 16,602, of which 6774 are in Arts, 5408 in Science and the rest in other (professional) subjects. India has a little over 6000 doctorates in Science and Engineering, compared to 9000 in China and 25000 in the US. In China, this number increased rapidly from a little over 1000 in 1990 to over 9000 in recent years. In comparison, there has been a modest increase
in India. National Science Foundation (NSF) - Science and Engineering Indicators (2002)
shows that in the US, about 4% of the science and engineering graduates finish their
doctorates. This figure is about 7% for Europe. In India this is not even 0.4%. Data on
doctorates particularly in science, engineering and medicine suggests that only a few
institutions have real research focus. In engineering there were merely 650 doctorates
awarded in 2001-02. Of these 80 percent were from just 20-top universities. In science, 65
percent of the doctorates awarded were from the top-30 universities.

20. Stipends to Research Fellows- The number of Ph.Ds. from Indian Universities should
increase with proper standards. This should be seen in the context of extremely low fraction
of Ph.Ds. in India in relation to M.Sc. / B.Tech., as compared to what it is in USA, UK,
Germany, Japan etc. Meritorious doctoral students should be recognized through teaching
assistantships with stipends over and above the research fellowships. Identifying talented,
meritorious students and encouraging them through recognition is very important to attract
students into research and teaching.

21. Fair Quality Assurance System- Colleges and Private institutes should set up Internal
Quality Assurance Cell and must follow a minimum standard to give degrees. The quality
assurance system must be independent of political and institutional interaction and it must
have a basis in the legislation. There should be operational, financial and academic autonomy
coupled with accountability. There is a need for an independent accreditation agency comprising a conglomerate of government, industry, academia, society, etc. (that is, all stakeholders of education) to ensure that the stakeholders, particularly the students, are not taken for a ride.
They should be able to know whether a particular institution delivers value or not, then things
can be under control to some extent. It is also important that all institutes of higher learning
must make public the acceptability of their courses and degrees. (i.e. the status, recognition
and acceptability of their courses by other institutions).

22. To increase Quantity of Universities- We need more universities because our population is large and the present number of universities is too small. On 13th June, 2005 the Government of
India constituted a high level advisory body known as National Knowledge Commission
(NKC) to advise the PM about the state of education in India and measures needed to reform
this sector. It was headed by Sam Pitroda and submitted its report in November 2007. NKC
has recommended setting up of 1500 universities by 2015 so that gross enrollment ratio
increases to 15 percent. It has also called for establishing an Independent Regulatory
Authority for Higher Education (IRAHE) to monitor the quality of overall higher education
in India.

23. Examination Reforms- Examination reforms, gradually shifting from the terminal, annual and semester examinations to regular and continuous assessment of students’ performance in learning, should be implemented.

24. High-tech Libraries- Our university libraries have very good collections of books, but they are all in a mess. A library must be online and conducive to serious study. Indian
universities should concentrate more on providing quality education which is comparable to
that of international standards.

3.4 Information Technology for achieving competitive edge in Business and Industry:

3.4.1 The Role of IT in Strategic Management:

A strategic IS (SIS) has been defined as "the information system to support or change enterprise's strategy". Strategic management is the technique by which an organization plans the strategy of its future operations; in other words, a SIS is a system to manage information and assist in strategic decision making. The term strategic points to the long-term nature of this mapping exercise and to the large magnitude of advantage the exercise is expected to give an organization (Turban, 2006). Four critical factors in developing a strategic IS are initiation, data collection, strategy formulation and short-term development. These factors are used to prioritize proposed ISs, so that those giving competitive advantage to the organization can be highlighted for immediate development (Karababas et al., 1994). IT contributes to strategic management in many ways (for additional information see Kemerer, 1997, and Callon, 1996). Turban et al. (2006) introduce these eight factors:

1. Innovative applications - IT creates innovative applications that provide direct strategic advantage to organizations. For example, Federal Express was the first company in its
industry to use IT for tracking the location of every package in its system. Next, FedEx was
the first company to make this database accessible to its customers over the Internet. FedEx
has gone on to provide e-fulfilment solutions based on IT and is even writing software for
this purpose (Bhise et al., 2000).

2. Competitive weapons. ISs themselves have long been recognized as a competitive weapon
(Ives and Learmonth, 1984, and Callon, 1996). Michael Dell, founder of Dell Computer, puts
it bluntly: “The Internet is like a weapon sitting on the table, ready to be picked up by either
you or your competitors”.

3. Changes in processes. IT supports changes in business processes that translate to strategic advantage (Davenport, 1993). For example, Berri is Australia’s largest manufacturer and
distributor of fruit juice products. The principal goal of its enterprise resource planning
system implementation was “to turn its branch-based business into a national organization
with a single set of unified business processes” in order to achieve millions of dollars in cost-
savings (J.D. Edwards, 2002a). Other ways in which IT can change business processes
include better control over remote stores or offices by providing speedy communication tools,
streamlined product design time with computer-aided engineering tools, and better decision-
making processes by providing managers with timely information reports.

4. Links with business partners. IT links a company with its business partners effectively
and efficiently. For example, Rosenbluth’s Global Distribution Network allows it to connect
agents, customers, and travel service providers around the globe, an innovation that allowed it
to broaden its marketing range (Clemons and Hann, 1999).

5. Cost reductions. IT enables companies to reduce costs. For example, a Booz- Allen &
Hamilton study found that: a traditional bank transaction costs $1.07, whereas the same
transaction over the Web costs about 1 cent; a traditional airline ticket costs $8 to process, an
e-ticket costs $1 (ibm.com/partnerworld/pwhome.nsf/vAssetsLookup/ad2.pdf/$file/ad2.pdf).
In the customer service area, a customer call handled by a live agent costs $33, but an
intelligent agent can handle the same request for less than $2 (Schwartz, 2000).
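The relative savings implied by these figures can be made explicit with a little arithmetic. The sketch below simply restates the costs quoted above (using the text's "less than $2" as an upper bound for the intelligent agent); the variable names are our own:

```python
# Relative savings implied by the cost figures quoted in the text:
# (traditional cost, electronic cost) per transaction, in US dollars.
pairs = {
    "bank transaction": (1.07, 0.01),
    "airline ticket": (8.00, 1.00),
    "customer-service call": (33.00, 2.00),  # "less than $2" upper bound
}

for name, (traditional, electronic) in pairs.items():
    saving = 100 * (traditional - electronic) / traditional
    print(f"{name}: about {saving:.1f}% cheaper")
```

Even taking the $2 upper bound for the intelligent agent, the electronic channel is one to two orders of magnitude cheaper in each case.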

6. Relationships with suppliers and customers. IT can be used to lock in suppliers and
customers, or to build in switching costs (making it more difficult for suppliers or customers
to switch to competitors).

7. New products. A firm can leverage its investment in IT to create new products that are in
demand in the marketplace. According to Vandenbosch and Dawar (2002, p. 38), “The
redefinition of ICI’s role not only generated much higher margins for the business, it also
gave ICI a much more defensible competitive position”.

8. Competitive intelligence. IT provides competitive (business) intelligence by collecting and analysing information about products, markets, competitors, and environmental changes
(Guimaraes and Armstrong, 1997).

3.4.2 Uses of competitive intelligence:

 A sporting goods company found an activist group planning a demonstration and boycott months in advance, enabling the company to implement a counter strategy.
 Within days of launch, a software firm found dissatisfaction with specific product
features, enabling the technicians to write a “patch” that fixed the problem within
days instead of the months normally required to obtain customer feedback and
implement software fixes.
 A packaging company was able to determine the location, size, and production
capacity for a new plant being built by a competitor. The otherwise well protected
information was found by an automated monitoring service in building permit
documents within the Web site of the town where the new plant was being built.
 A telecommunications company uncovered a competitor’s legislative strategy,
enabling the company to gain an upper hand in a state-by-state lobbying battle.
 The creative team embarking on development of a new video game used the Internet to identify cutting-edge product attributes that game-players prefer. The intensive research uncovered three key “got to haves” that were not identified in focus groups and had not been included in the original design specification.

3.4.3 Strategies for Competitive Advantage:

Howard et al. (1999) believed that if IS design and strategy development are addressed simultaneously, strategic competitive advantage can be gained. Porter’s model identifies the forces that influence competitive advantage in the marketplace. Of greater interest to most managers is the development of a strategy aimed at establishing a profitable and sustainable position against these five forces (Turban et al., 2006). To establish such a position, a company needs to develop a strategy of performing activities differently from a competitor. Porter (1985) proposed cost leadership, differentiation, and niche strategies. Additional strategies have been proposed by other strategic-management authors (e.g., Neumann, 1994; Wiseman, 1988; Frenzel, 1996). Turban et al. (2006) cited twelve strategies for competitive advantage, which we present here.

1. Cost leadership strategy: Produce products and/or services at the lowest cost in the
industry. A firm achieves cost leadership in its industry by thrifty buying practices, efficient
business processes, forcing up the prices paid by competitors, and helping customers or
suppliers reduce their costs.

2. Differentiation strategy: Offer different products, services, or product features. By offering different, “better” products, companies can charge higher prices, sell more products, or both.

3. Niche strategy: Select a narrow-scope segment (niche market) and be the best in quality,
speed, or cost in that market.

4. Growth strategy: Increase market share, acquire more customers, or sell more products.
Such a strategy strengthens a company and increases profitability in the long run. Web-based
selling can facilitate growth by creating new marketing channels, such as electronic auctions.

5. Alliance strategy: Work with business partners in partnerships, alliances, joint ventures,
or virtual companies. This strategy creates synergy, allows companies to concentrate on their
core business, and provides opportunities for growth.

6. Innovation strategy: Introduce new products and services, put new features in existing
products and services, or develop new ways to produce them. Innovation is similar to
differentiation except that the impact is much more dramatic. Differentiation “tweaks”
existing products and services to offer the customer something special and different.
Innovation implies something so new and different that it changes the nature of the industry.

7. Operational effectiveness strategy: Improve the manner in which internal business processes are executed so that a firm performs similar activities better than rivals (Porter,
1996). Such improvements increase employee and customer satisfaction, quality, and
productivity while decreasing time to market. Improved decision making and management
activities also contribute to improved efficiency.

8. Customer-orientation strategy: Concentrate on making customers happy. Strong competition and the realization that the customer is king (queen) are the basis of this strategy.
Web-based systems that support customer relationship management are especially effective
in this area because they can provide a personalized, one-to-one relationship with each
customer.

9. Time strategy: Treat time as a resource, then manage it and use it to the firm’s advantage.
“Time is money,” “Internet time” (i.e., three months on the Internet is like a year in real
time), first-mover advantage, just-in-time delivery or manufacturing, competing in time
(Keen, 1988), and other time-based competitive concepts emphasize the importance of time
as an asset and a source of competitive advantage. One of the driving forces behind time as a
competitive strategy is the need for firms to be immediately responsive to customers,
markets, and changing market conditions. A second factor is the time-to-market race. By
introducing innovative products or using IT to provide exceptional service, companies can
create barriers to entry from new entrants.

10. Lock in customers or suppliers strategy: Encourage customers or suppliers to stay with
you rather than going to competitors. Locking in customers has the effect of reducing their
bargaining power.

11. Increase switching costs strategy: Discourage customers or suppliers from going to
competitors for economic reasons.

3.5 Infrastructure requirement:

In light of these challenges, government and other officials in both developed and
developing countries can use a variety of data sources and analytics tools to tackle
infrastructure requirements in a number of different ways.

For instance, decision-makers for emerging economies can analyse government data
regarding transportation bottlenecks as well as the current capacities and restrictions of water,
power, and other utilities that may be limiting opportunities for business investment and
growth.

Providing these types of insights to organizers in the public and private sectors can
help decision-makers better understand and prioritize the types of investments that will
generate the greatest economic returns (e.g., upgrading electricity infrastructure in cities to
attract business expansion).

Governments are increasingly relying on public/private partnerships and public concession models to help finance and build infrastructure projects, according to a report by
the Urban Land Institute and Ernst & Young.

Large sovereign wealth funds and institutional investors are “tentatively warming” to
the potential for reliable returns on infrastructure investments, according to the report. Still,
investors continue to worry about the reliability of government partners, deal structures, and
the viability of certain investments.

3.6 Selection of Hardware and Software

3.6.1 Selection of Hardware:

Compute (server) hardware selection:

Evaluate compute (server) hardware along four opposing dimensions:

Server density:

 A measure of how many servers can fit into a given measure of physical
space, such as a rack unit [U].

Resource capacity:

 The number of CPU cores, how much RAM, or how much storage a given
server delivers.

Expandability:

 The number of additional resources you can add to a server before it reaches
capacity.

Cost:

 The relative cost of the hardware weighted against the level of design effort needed to build the system.

You must weigh the dimensions against each other to determine the best design for
the desired purpose. For example, increasing server density can mean sacrificing resource
capacity or expandability. Increasing resource capacity and expandability can increase cost
but decrease server density. Decreasing cost often means decreasing supportability, server
density, resource capacity, and expandability.
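One simple way to weigh the four dimensions against each other is a weighted scoring matrix. The sketch below is purely illustrative: the weights, candidate form factors, and 0–10 scores are invented for the example, not vendor data (for "cost", a higher score means a cheaper option):

```python
# Hypothetical weighted-scoring matrix for server hardware selection.
# Weights and 0-10 scores are invented for illustration; for "cost",
# a higher score means a cheaper option.
WEIGHTS = {"density": 0.2, "capacity": 0.3, "expandability": 0.2, "cost": 0.3}

candidates = {
    "blade":   {"density": 9, "capacity": 5, "expandability": 3, "cost": 4},
    "1U rack": {"density": 7, "capacity": 5, "expandability": 4, "cost": 8},
    "4U rack": {"density": 2, "capacity": 9, "expandability": 9, "cost": 3},
}

def score(option):
    """Weighted sum across the four opposing dimensions."""
    return sum(WEIGHTS[dim] * option[dim] for dim in WEIGHTS)

best = max(candidates, key=lambda name: score(candidates[name]))
for name, dims in candidates.items():
    print(f"{name}: {score(dims):.2f}")
print("best fit:", best)
```

Changing the weights changes the winner, which is the point: the "best" design depends entirely on which dimension the desired purpose prioritizes.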

Compute capacity (CPU cores and RAM capacity) is a primary consideration for selecting server hardware. As a result, the required server hardware must supply adequate
CPU sockets, additional CPU cores, and more RAM; network connectivity and storage
capacity are not as critical. The hardware needs to provide enough network connectivity and
storage capacity to meet the user requirements; however they are not the primary
consideration.

Some server hardware form factors are better suited to storage-focused designs than
others. The following is a list of these form factors:

 Most blade servers typically support dual-socket multi-core CPUs. To avoid this CPU limit, choose full-width or full-height blades, bearing in mind that this decreases server density. High-density blade servers using half-height or half-width blades support up to 16 servers in only 10 rack units.

 1U rack-mounted servers have the ability to offer greater server density than a blade
server solution, but are often limited to dual-socket, multi-core CPU configurations.

 To obtain greater than dual-socket support in a 1U rack-mount form factor, customers
need to buy their systems from Original Design Manufacturers (ODMs) or second-tier
manufacturers.

 2U rack-mounted servers provide quad-socket, multi-core CPU support but with a corresponding decrease in server density (half the density offered by 1U rack-mounted servers).
 Larger rack-mounted servers, such as 4U servers, often provide even greater CPU capacity, commonly supporting four or even eight CPU sockets. These servers have greater expandability, but much lower server density and usually greater hardware cost.
 Rack-mounted servers that support multiple independent servers in a single 2U or 3U
enclosure, "sled servers", deliver increased density as compared to typical 1U-2U
rack-mounted servers.

For example, many sled servers offer four independent dual-socket nodes in 2U for a total
of 8 CPU sockets in 2U. However, the dual-socket limitation on individual nodes may not be
sufficient to offset their additional cost and configuration complexity.
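The density comparisons above reduce to simple arithmetic. The following sketch restates the text's own examples as CPU sockets per rack unit (the helper function name is ours):

```python
# Restating the form-factor examples above as CPU sockets per rack unit.
def sockets_per_u(servers, sockets_per_server, rack_units):
    return servers * sockets_per_server / rack_units

# Sled server: four independent dual-socket nodes in a 2U enclosure.
sled = sockets_per_u(servers=4, sockets_per_server=2, rack_units=2)

# 1U rack-mounted server: one dual-socket server per rack unit.
one_u = sockets_per_u(servers=1, sockets_per_server=2, rack_units=1)

# 2U rack-mounted server: quad-socket, half the server density of 1U.
two_u = sockets_per_u(servers=1, sockets_per_server=4, rack_units=2)

print(sled, one_u, two_u)
```

By socket density alone the sled server wins (twice the sockets per rack unit of the 1U and 2U options); the text's caveat is that its dual-socket nodes and extra cost and complexity may offset this.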

Other factors strongly influence server hardware selection for a storage-focused OpenStack design architecture. The following is a list of these factors:

Instance density:

 In this architecture, instance density and CPU-RAM oversubscription are lower. You require more hosts to support the anticipated scale, especially if the design uses dual-socket hardware designs.

Host density:

 Another option to address the higher host count is to use a quad socket
platform. Taking this approach decreases host density which also increases
rack count. This configuration affects the number of power connections and
also impacts network and cooling requirements.

Power and cooling density:

 The power and cooling density requirements might be lower than with blade,
sled, or 1U server designs due to lower host density (by using 2U, 3U or even
4U server designs). For data centres with older infrastructure, this might be a
desirable feature.

Storage-focused OpenStack design architecture server hardware selection should focus on a "scale up" versus "scale out" solution. The determination of which is the best solution, a smaller number of larger hosts or a larger number of smaller hosts, depends on a combination of factors including cost, power, cooling, physical rack and floor space, support-warranty, and manageability.

3.6.2 Selection of Software:

To develop a large application, a lot of effort, money and time is required for
designing the system and writing the program code. The overall goal of computerising an
application is to make it more efficient than the manual system, with optimum utilisation of
the time, money and effort spent on its development.

In order to save the valuable time spent by systems designers and programmers in
designing the complete system and writing code, certain programs are required, which are
called software tools.

The selection of software tools has become an important aspect of software
development. Software tools assist the programmer/analyst in the design, coding, editing,
compiling, linking and debugging of programs. They allow them to focus on the challenging
aspects of a system.

Application Generators:

Application generators are software tools that help the programmer to quickly
generate a complete or partial program according to the specifications given. The programmer
does not write the code but, using an application generator, defines the menus, screens, report
formats, data elements and processing logic. The program code is generated quickly by the
application generator. The programmer can then easily edit and execute the program.
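The idea can be sketched in a few lines: a declarative "specification" (here a small dict) is expanded through a template into runnable program code. This is a toy illustration with invented names (SPEC, TEMPLATE, generate), not a model of any real product such as Genifer or Pacbase.

```python
# A toy application generator: the programmer supplies a specification and
# the generator emits program code, which can then be edited and executed.

SPEC = {
    "program": "customer_report",
    "fields": ["name", "city", "balance"],
}

TEMPLATE = """def {program}(records):
    for rec in records:
        print({field_expr})
"""

def generate(spec: dict) -> str:
    """Expand the template into source code from the specification."""
    field_expr = ", ".join(f"rec[{f!r}]" for f in spec["fields"])
    return TEMPLATE.format(program=spec["program"], field_expr=field_expr)

code = generate(SPEC)
print(code)   # the generated program text

# The generated code is ordinary source: execute it, then call the function.
namespace = {}
exec(code, namespace)
namespace["customer_report"]([{"name": "Asha", "city": "Pune", "balance": 1200}])
```

Real generators work from far richer specifications (screens, menus, report layouts), but the principle is the same: the programmer edits the spec, not the generated code.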

Many application generators are available for different third and fourth generation
languages like COBOL, dBASE, FoxPro, etc. For example, Pacbase is an application
generator for programs written in the COBOL language. dBASE and FoxPro have built-in code
generation capabilities for designing screen, menu and report formats. Genifer is a full-scale
code generator that provides a pattern called a template, from which the code is generated.
After defining screens, menus and reports, Genifer creates the data files, index files and
programs.

Advantages of Application Generators:

The major advantages of using application generators are:

 Saving a lot of development time
 Useful as a learning tool for writing programs
 Programs are easy to modify and maintain

Disadvantages of Application Generators

Application generators also have certain disadvantages:

 They cannot handle systems having complex processing logic.
 They add complexity if the template language differs from the native language.

Software Engineering and CASE Tools:

Development of application software is very complex to plan, design, develop and
manage. Software engineering is the systematic approach to the design, development, operation
and maintenance of such software. Its basic aim is to produce high-quality software at low
cost. A Computer Aided Software Engineering (CASE) tool is a group of different software
tools that are integrated and used in software engineering. For example, Designer/2000 is
Oracle’s suite of CASE tools that addresses the different stages of application development.

System Software Utilities:

Utility software is system software designed to help analyse, configure, optimize or
maintain a computer. Utility software usually focuses on how the computer infrastructure
(including the computer hardware, operating system, application software and data storage) operates.

Types of Utilities:

 File Management Utilities
 Data Compression Utilities
 Diagnostic Utilities
 Virus Detection and Removal Utilities
 Text Editing Utilities
 Performance Monitoring Utilities
 Spooling Utilities
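A data compression utility, one of the categories listed above, can be sketched with Python's standard-library zlib module. The function names here are invented for illustration:

```python
# A minimal data-compression utility using the standard-library zlib module.
import zlib

def compress_text(text: str) -> bytes:
    """Compress a text string into a smaller byte blob."""
    return zlib.compress(text.encode("utf-8"))

def decompress_text(blob: bytes) -> str:
    """Recover the original text exactly (lossless compression)."""
    return zlib.decompress(blob).decode("utf-8")

report = "quarterly sales report " * 100   # repetitive data compresses well
packed = compress_text(report)

print(f"original:   {len(report)} bytes")
print(f"compressed: {len(packed)} bytes")
assert decompress_text(packed) == report   # round trip is lossless
```

Commercial utilities add file handling, progress reporting and archive formats on top, but the core operation is exactly this pair of transformations.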

General Purpose Application Software:

 Word Processor Packages
 Database Management Packages
 Spreadsheet Packages
 Office Automation Packages

Special Purpose Application Software:

 Desktop Publishing Software
 Graphics, Multimedia and Animation Software
 Business Application Software

Software Evaluation Criteria:

 Efficiency—how well are the programs written?
 Ease of use—how easy is the software to use?
 Documentation—what is the quality and quantity of the documentation?
 Hardware requirements—what hardware is needed to run the software?
 Vendor—what is the reputation of the developer in terms of support, maintenance,
and industry position?
 Cost—how much does it cost?
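One common way to apply the criteria above is a weighted scoring matrix: each criterion gets a weight reflecting its importance, each candidate gets a score per criterion, and the weighted sums are compared. The weights, scores and package names below are purely illustrative:

```python
# Weighted scoring of candidate software packages against the evaluation
# criteria above. Weights sum to 1.0; scores are on a 0-10 scale.

WEIGHTS = {"efficiency": 0.20, "ease_of_use": 0.20, "documentation": 0.15,
           "hardware": 0.10, "vendor": 0.15, "cost": 0.20}

candidates = {
    "Package A": {"efficiency": 8, "ease_of_use": 6, "documentation": 7,
                  "hardware": 9, "vendor": 8, "cost": 5},
    "Package B": {"efficiency": 7, "ease_of_use": 9, "documentation": 8,
                  "hardware": 7, "vendor": 6, "cost": 8},
}

def weighted_score(scores: dict) -> float:
    """Sum each criterion score multiplied by its weight."""
    return sum(WEIGHTS[c] * s for c, s in scores.items())

# Rank candidates from best to worst total score.
for name, scores in sorted(candidates.items(),
                           key=lambda kv: weighted_score(kv[1]), reverse=True):
    print(f"{name}: {weighted_score(scores):.2f}")
```

The value of the exercise is less the final number than the discipline of agreeing on weights before looking at vendors.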

3.7 General Software features and Trends:

The benefits of the analysis of software faults and failures have been widely recognized.
However, detailed studies based on empirical data are rare. One large empirical study analysed
the fault and failure data from two real-world case studies, exploring:

 The localization of faults that lead to individual software failures, and
 The distribution of different types of software faults.

The study's results show that individual failures are often caused by multiple faults spread
throughout the system.

This observation is important since it does not support several heuristics and
assumptions used in the past. In addition, it clearly indicates that finding and fixing faults that
lead to such software failures in large, complex systems are often difficult and challenging
tasks despite the advances in software development. The results also show that requirement
faults, coding faults, and data problems are the three most common types of software faults.
Furthermore, contrary to popular belief, a significant percentage of failures are linked to late
life-cycle activities. The study also conducts intra- and inter-project comparisons, as well as
comparisons with the findings from related studies. The consistency of several main trends
across software systems and several related research efforts suggests that these trends are
likely to be intrinsic characteristics of software faults and failures rather than project specific.

3.7.1 Importance of Computers in Business:

The computer plays an important role in the business environment, as every organisation
adopts it in some form or the other to perform its tasks effectively. In the past few years,
rapid developments in IT, particularly in communications, electronic service networks
and multimedia, have opened up new opportunities for corporates. All these are contributing
towards new and more effective ways of processing business transactions, integrating business
processes, transferring payments and delivering services electronically. IT has affected
business in the following ways:

1. Office Automation:

Computers have helped automation of many industrial and business systems. They are
used extensively in manufacturing and processing industries, power distribution systems,
airline reservation systems, transportation systems, banking systems, and so on. Computer-aided
design (CAD) and computer-aided manufacture (CAM) are becoming popular among
large industrial establishments.

2. Stores large amounts of data and information:

Business and commercial organizations need to store and maintain voluminous
records and use them for various purposes such as inventory control, sales analysis, payroll
accounting, resource scheduling and generation of management reports. Computers can store
and maintain files and can sort, merge or update them as and when necessary.
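The sort, merge and update operations mentioned above map directly onto basic data handling. A small sketch with invented inventory records:

```python
# Sorting, merging and updating business records, as described above.
# The records and field names are invented for illustration.

inventory = [
    {"item": "paper", "qty": 40},
    {"item": "ink",   "qty": 12},
]
new_stock = [
    {"item": "pens",  "qty": 100},
]

# Merge the two files, then sort by item name for reporting.
merged = sorted(inventory + new_stock, key=lambda r: r["item"])

# Update a record as and when necessary (new ink stock arrives).
for rec in merged:
    if rec["item"] == "ink":
        rec["qty"] += 5

for rec in merged:
    print(rec["item"], rec["qty"])
```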

3. Improves Productivity:

With the introduction of word processing software, computers have recently been
applied to the automation of office tasks and procedures. This is aimed at improving the
productivity of both clerical and managerial staff.

4. Sharing of data and information:

Due to the networking of computers, where a number of computers are connected
together to share data and information, the use of e-mail and the internet has changed the
way business operates.

5. Competitiveness:

Computers offer a reliable and cost-effective means of doing business electronically.
Routine tasks can be automated. Customers can be provided support round the clock,
24 hours a day. With advancements in the IT sector, corporates are spreading business
around the world, thus increasing their presence and entering new markets.

6. Security:

To provide security for data and important computer programs, almost every
organisation has some security programs to avoid illegal access to the company’s
information by unauthorized persons. The three fundamental attributes of a security program
are confidentiality, integrity and availability, which allow access only to authorized persons in
an organization.

7. Cost Benefits:

The extensive availability of internet-based information means that companies have a
wider choice of suppliers, which leads to more competitive pricing. Due to the presence of
the internet, the role of the middleman becomes less important, as companies can sell their
products or services directly to the customer.

8. Marketing:

Corporates engaged in e-business can use their respective websites to create
brand awareness of their products, thus creating new avenues of promotion. In addition,
company websites can also provide better services, such as after-sales service, to the
customer.

3.7.2 Effects of Technology on Business:

Businesses have been at the forefront of technology for ages. Whatever speeds up
production will draw in more business. As computers emerged in the 20th century, they
promised a new age of information technology. But in order to reap the benefits, businesses
needed to adapt and change their infrastructure [source: McKinney]. For example, American
Airlines started using a computerized flight booking system, and Bank of America took on an
automated check-processing system.

Obviously, now, most business is conducted over personal computers or
communication devices. Computers offer companies a way to organize dense databases,
personal schedules and various other forms of essential information.

As information travels faster and faster and more reliably, barriers of distance
disappear, and businesses are realizing how easy it is to outsource jobs overseas. Outsourcing
refers to the practice of hiring employees who work outside the company or remotely -- and
even halfway across the world. Companies can outsource duties such as computer
programming and telephone customer service. They can even outsource fast-food restaurant
service -- don't be surprised if you're putting in your hamburger order with a fast-food
employee working in a different country entirely. Outsourcing is a controversial practice, and
many believe that U.S. companies who take part are hurting the job market in their own
country. Nonetheless, from a business perspective, it seems like the wisest route, saving
companies between 30 and 70 percent.

Another technology that's starting to revolutionize business is actually not very new --
it's just cheaper these days. Radio frequency identification (RFID) technology is infiltrating
and changing business significantly in a few ways. Microchips that store information (such as
a number equivalent of a barcode and even an up-to-date history of the chip's travels) can be
attached to product, and this helps companies keep track of their inventory.

Some businesses have even begun to use RFID chip implants in humans to tighten
security. An access control reader detects the chip's signal and permits the employee access to
the door. But many people are concerned about privacy issues if this were to become
widespread practice.

3.7.3 The Business Perspective in CLOUD COMPUTING:

The outsourcing value chain has evolved towards a cloud computing value network.
According to Porter (1980), a value chain describes the primary and support activities
within and around an organization that together design, produce, deliver and support a
product or service. The primary process directly adds value to and transforms inputs into
goods or services, while the support processes are those activities necessary to support or
enable the primary operations. A value chain describes the interactions between different
business partners to jointly develop and manufacture a product or service. Here, the
manufacturing process is decomposed into its strategically relevant activities, thus
determining how competitive advantages can be achieved. Competitive advantages are
achieved by fulfilling the strategically important activities cheaper or better than the
competition (Porter 1985). A value chain does not only contain different companies but also
different business units inside one organization that jointly create a product. Porter (1985)
argues that a firm’s value chain links to the value chains of suppliers and buyers of products
and services, resulting in a large stream of activities called the value system. Thereby, the
value of the product is enhanced with each step along the linear production process.
However, the manufacturing process is seldom strictly linear.

Also, it has been found that the value chain analysis is more applicable for the
analysis of manufacturing and production firms rather than services (see Stabell and
Fjeldstad, 1998). With regard to services, there is rather a (value) network of relationships
that generates economic value and other advantages through complex dynamical exchanges
between companies (Allee 2002). Especially with regard to new internet services, value

networks are often understood as a network of suppliers, distributors, suppliers of commercial
services and customers that are linked via the internet and other electronic media to create
values for their end customers (Tapscott et al. 2000). In traditional IT service outsourcing the
value chain is usually divided into the areas infrastructure, applications and business
processes, which can be complemented by strategy and consulting activities. In each of these
four value chain steps the whole cycle of IT-services, often referred to as “plan, build, run”,
must be supported and implemented. Thus, single aspects of individual value chain steps may
be outsourced, such as the development of applications.

Purchasing and operating IT hardware as well as hosting can be further divided into
services that are performed by the customer himself and those that use the resources of a
hosting provider. Here, the myriad possibilities of combination may lead to complex
outsourcing relationships. In cloud computing the traditional value chain that can be applied
for outsourcing becomes even more complex and breaks up into a myriad of different
combinations of actors and their interactions, depicting a network rather than a sequential
chain. Part of this new complexity can be found in a general trend from products to services
(Jacob and Ulaga 2008). The trend does not only lead to more outsourcing, but also from the
classical hardware-based outsourcing of data centres to computing as a service. A similar
trend can be found in the software business, which leads away from delivering software
products off the shelf towards offering software as a service.

Cloud computing links these two areas of a stronger service-oriented hardware
outsourcing to the “as-a-service” concept for software. Here, cloud computing shows two big
facets. First, infrastructure-based services are now offered dynamically according to the needs
of customers, often referred to as utility computing, where the customer is charged according
to actual usage. Secondly, new cloud computing platforms have emerged to integrate both
hardware and software as-a-service offerings. These platforms allow creating new, single as
well as composed applications and services that support complex processes and interlink
multiple data sources. From a technical point of view these platforms provide programming
and runtime environments to deploy cloud computing applications. Looking at these platforms
from a value chain perspective, they can be perceived as a kind of marketplace, where
various cloud computing resources from different levels (infrastructure, platform services and
applications) are integrated and offered to the customer. By composing different services,
complex business processes can be supported and accessed via a unified user interface.
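Utility computing, as described above, means charging for actual metered usage rather than a flat fee. A toy billing sketch makes the model concrete; the meter names and per-unit rates are invented for illustration:

```python
# Pay-per-use billing sketch for utility computing. Rates are illustrative.

RATES = {"vm_hours": 0.05, "gb_stored": 0.02, "gb_transferred": 0.01}

def monthly_bill(usage: dict) -> float:
    """Sum each metered quantity multiplied by its per-unit rate."""
    return round(sum(RATES[meter] * qty for meter, qty in usage.items()), 2)

bill = monthly_bill({"vm_hours": 720, "gb_stored": 50, "gb_transferred": 120})
print(f"amount due: ${bill:.2f}")   # 720*0.05 + 50*0.02 + 120*0.01 = 38.20
```

The customer's cost tracks consumption directly, which is exactly what distinguishes the utility model from traditional fixed-capacity outsourcing contracts.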

3.7.4 Information Technology and Business-Level Strategy:

Over several decades, numerous studies in both the MIS and Strategy fields have tried
to theorize the character of, and examine business-level value attributable to, IT investments.
Work on this topic in the MIS area is quite prevalent, with several hundred studies on the
“business value of IT” documented in review articles (Kohli and Devaraj 2003; Melville et al.
2004; Piccoli and Ives 2005). These repeated examinations document the contributions of IT
not solely as assets (a resource value reflected on the balance sheet, as well as potentially
contributing to income and cash flow) but also as enablers of capabilities (whose effects are
reflected in the income and cash-flow statements, but not valued in the balance sheet) and
focus on comparing the value of the investments to their costs.
Conversely, research on the IT-performance link in the strategic management
literature has been fairly limited. Such work has viewed IT investments as a means of
increasing the firm’s competitive advantage (Miller 2003; Powell and Dent- Micallef 1997;
Zott 2003) or as a necessity to avoid a disadvantageous position (Mata et al. 1995). Parallel
theoretical perspectives on the “advantage or necessity” question in the MIS literature
occurred earlier and at least as extensively (e.g., Barua and Lee 1991; Clemons and
Kimbrough 1986). Empirical studies comparing the performance of IT-intensive firms to
their peers have, however, struggled with inconsistent outcomes. Among the many studies
that find clearly positive returns, some find clearly mixed results for the IT– performance
relationship (Barua et al. 1995; Barua and Lee 1991; Francalanci and Galal 1998) and some
find a clearly negative relationship (Lee and Barua 1999; Loveman 1994). Yet other studies
show that the realization of IT gains, although demonstrably positive in potential, is
largely subject to organizational implementation issues unrelated to the technology itself
(Brynjolfsson and Hitt 1998; Mooney et al. 1996; Nolan and Croson 1995) that inhibit
capture of IT-attributable benefits by the firm’s owners (e.g., Brynjolfsson 1993).

Further, retrospective methodological analysis (Bharadwaj 2000) suggests that many
prior studies may be misleading because of measurement issues in quantifying the IT artefact
as well as level-of-analysis problems that confound any direct IT/performance relationship.
Several studies also attempt to discern the reasons behind these conflicting rate-of-return
observations across this body of research (e.g., Kohli and Devaraj 2003; Melville et al. 2004;
Piccoli and Ives 2005). While these meta-studies clearly identify many of the potential
weaknesses and limitations in the collective literature base on the context of the IT-
performance relationship, none to date has offered a clear integration of IT investments with

the major theory perspectives in business-level strategy and their underlying causal profit
mechanisms.

3.7.5 IT and Corporate Strategy: The Status of Current Research:

Since the early 1980s the terms “Information Technology” and “Corporate Strategy”
have been coupled with recurring regularity in the information systems literature. IT, we are
told, can provide new forms of customer service, new distribution channels, new information-
based products, or can even rearrange industry boundaries (Cash et al. 1988, Cash and
Konsynski 1985, Porter and Miller 1985). Examples of such strategic IT applications are
frequently cited (Wiseman 1985, Keen 1981) and frameworks are proposed to help
understand them (e.g., Benjamin et al. 1984).

3.7.6 IT and Corporate Strategy: The CEO’s Perspective:

In an earlier time, the IT expenditures were considered to be back office investments. The
Chief Executive Officer’s view of IT, though interesting, was not usually seen as vital to
success. Back office investments in IT tended to be based on expense reductions; an IT
steering committee backed up by an accountant with a return on investment calculator
assured senior management that investments in IT were well founded. The chief executive’s
only role was to ensure that expense controls were in place and to referee resource disputes
among the business units or functional areas. But as the IT portfolio moved out of the back
office and into new products, product delivery systems, and customer service applications,
the benefits have tended to move from the expense to the revenue category. Justifications
here are often driven by intuition and gut feel, with a push from the top often required to
lubricate or circumvent approval processes best suited for justifying applications solely on the
basis of hard savings (Runge 1985, Ives and Vitale 1988).

3.7.7 Computer Technology in Business Environment:


A computer has high speed of calculation, diligence, accuracy, reliability and
versatility, which have made it an integral part of all business organisations.

Computer is used in business organisations for:

 Payroll calculations
 Budgeting
 Sales analysis

 Financial forecasting
 Managing employees database
 Maintenance of stocks etc.

Fig. 8: Business
Banking:
Today banking is almost totally dependent on computers.

Banks provide following facilities:

 Banks provide online accounting facility, which includes current balances, deposits,
overdrafts, interest charges, shares, and trustee records.
 ATM machines are making it even easier for customers to deal with banks.

Fig. 9: Banking
Insurance:
Insurance companies are keeping all records up-to-date with the help of computers.
Insurance companies, finance houses and stock-broking firms are widely using
computers for their operations.

Insurance companies are maintaining a database of all clients with information showing

 Procedure to continue with policies
 Starting date of the policies
 Next due instalment of a policy
 Maturity date
 Interests due
 Survival benefits
 Bonus

Fig. 10: Insurance

Education:
The computer has provided a lot of facilities in the education system.

 The computer provides a tool in the education system known as CBE (Computer
Based Education).
 CBE involves control, delivery, and evaluation of learning.
 Computer education is rapidly increasing the number of computer students.
 There are a number of methods by which educational institutions can use computers to
educate students.
 Computers are used to prepare a database of student performance, on which analysis
is carried out.

Fig. 11: Education

Marketing:
In marketing, the uses of computers are as follows:

Advertising - With computers, advertising professionals create art and graphics, write and
revise copy, and print and disseminate ads with the goal of selling more products.

At-Home Shopping - Home shopping has been made possible through the use of computerised
catalogues that provide access to product information and permit direct entry of orders by
customers.

Fig. 12: Marketing

Health Care:
Computers have become an important part of hospitals, labs and dispensaries. They
are being used in hospitals to keep records of patients and medicines. They are also
used in scanning and diagnosing different diseases. ECG, EEG, ultrasound and CT scans,
etc., are also done by computerised machines.

Some major fields of health care in which computers are used are:

Diagnostic System - Computers are used to collect data and identify cause of illness.

Lab-diagnostic System - All tests can be done and reports are prepared by computer.

Patient Monitoring System - These are used to check patient's signs for abnormality such
as in Cardiac Arrest, ECG etc.

Pharma Information System - Computers check drug labels, expiry dates, harmful side
effects of drugs, etc.

Surgery - Nowadays, computers are also used in performing surgery.


Fig. 13: Health Care

Engineering Design:
Computers are widely used for engineering purposes.

One of the major areas is CAD (Computer Aided Design), which provides for the creation and
modification of images. Some fields are:

Structural Engineering - Requires stress and strain analysis for the design of ships, buildings,
bridges, airplanes, etc.

Industrial Engineering - Computers deal with design, implementation and improvement of
integrated systems of people, materials and equipment.

Architectural Engineering - Computers help in planning towns, designing buildings, and
determining a range of buildings on a site using both 2D and 3D drawings.

Fig. 14: Engineering

Military:
Computers are largely used in defence. Modern tanks, missiles, weapons, etc., employ
computerised control systems. Some military areas where computers are used are:

 Missile Control
 Military Communication
 Military Operation and Planning
 Smart Weapons

Fig. 15: Military

Communication:
Communication means to convey a message, an idea, a picture or speech that is
received and understood clearly and correctly by the person for whom it is meant. Some
main areas in this category are:

 E-mail
 Chatting
 Usenet
 FTP
 Telnet
 Video-conferencing

Fig. 16: Communication

Government:
Computers play an important role in government. Some major fields in this category are:

Fig. 17: Government

 Budgets
 Sales tax department
 Income tax department
 Male/Female ratio
 Computerization of voters lists
 Computerization of driving licensing system
 Computerization of PAN card
 Weather forecasting

Review Questions:

1. What is computer security?
2. Explain the theory of conceptualizing computerization.
3. Discuss the problems and prospects of computerization.
4. Explain the operating system.
5. Explain software features and trends.
6. Categorise IT and business-level strategy.
7. Discuss computer languages.
8. What are the characteristics of a computer system?
9. Write the advantages and disadvantages of computers.
10. What is social informatics?
