C2005 Study Guide Ch1
1.0 Introduction
The word computer has become a common term in almost every activity of your daily routine. Whether you are working in an office or participating in recreational activities, you use computers every day. Daily activities – typing a report, driving a car, paying for goods and services with a credit card, or using an ATM – can involve the use of computers. Computers have become the tool people use to access and manage information.
What is a Computer?
A computer is an electronic machine, operating under the control of instructions stored in its own memory, that can accept data as input, manipulate the data according to specified rules, and produce information as output.
Data means unorganized facts and figures. Data can include text, numbers, images and sounds. Simply put, when data is processed we get information: information is the organized facts and figures derived from data.
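To make the data-to-information idea concrete, here is a minimal sketch in C++ (the choice of language, the variable names and the averaging rule are illustrative assumptions, not part of this guide): it accepts raw marks as input data, processes them according to a rule, and outputs the average as information.

    #include <iostream>
    #include <vector>

    int main() {
        // Input: raw, unorganized data (individual marks)
        std::vector<double> marks;
        double mark;
        std::cout << "Enter marks (end input with Ctrl+D or Ctrl+Z): ";
        while (std::cin >> mark) {
            marks.push_back(mark);
        }

        // Process: apply a rule (compute the average) to the data
        double total = 0.0;
        for (double m : marks) {
            total += m;
        }

        // Output: organized information derived from the data
        if (!marks.empty()) {
            std::cout << "Average mark: " << total / marks.size() << "\n";
        }
        return 0;
    }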
Computers and their operating systems have evolved through several generations. Each generation increases in reliability, speed, efficiency and ease of use, and decreases in cost and size.

First generation:
- Very large computers made up of vacuum tubes, often programmed using wiring plugboards
- No Operating Systems
- At first punch cards were used to provide input, then tapes were used (for batch processing)
Second generation:
- Simple batch processing was used, with input files, programs and output on tape
- Smaller computers (e.g. the IBM 1401) were used to read programs and data from punch cards and to transfer them to tape for processing by the larger machine
- FMS (the Fortran Monitor System) and IBM's IBSYS served as operating systems for handling jobs (e.g. to read a job and to run it)
Third generation:
- Fixed disks were used, and new jobs on cards could be read onto the disk while other jobs were executing (spooling)
- Though the first models used multiprogrammed batch processing, timesharing was introduced to cater for better response times
- Minicomputers also appeared on the market; they were used by small departments and became the forerunners of personal computers
Fourth generation:
- Mainframes, minicomputers, workstations and personal computers (desktop and portable) based on VLSI components
- Network operating systems that facilitate file sharing, remote login etc., and client-server computing
- Distributed operating systems that make use of multiple machines and processors to run applications
Computers can also be classified by size and power. Mainframe computers serve many users through terminals and provide time-sharing facilities. Supercomputers are high-performance computers.
Micro Computer:
They are also called desktop computers, and high-end models are used for intensive calculations; examples include laptop and notebook computers.
Embedded systems (used in consumer goods) are computers dedicated to a single purpose; in industry they are used for controlling robots. In these computers the programs are stored on a chip or microchip and cannot be changed.
In network configurations, mainframe computers are used with time-sharing facilities, for example with personal computers connected as terminals.
Micro Computers:
Micro computers are used by one person at a time; they are also called workstations. High-end desktop computers are used for intensive computation. Other examples are laptop, notebook and handheld (palmtop) computers. Examples of palmtop computers are personal digital assistants and personal communicators.
These are called embedded computers; for example, they are used in consumer areas (such as supermarkets) for calculating amounts or for inventory control. They are also used in industry for controlling robots and computer numerical control (CNC) machines. In these computers a program is stored on a microchip and cannot be changed; such a program is called firmware, which means software fixed permanently in hardware.
To write a program we need a language. Programming languages have evolved tremendously since the early 1950s, and this evolution has resulted in hundreds of different languages being invented and used in industry. This evolution was needed because we can now instruct computers more easily and faster than ever before, thanks to technological advancement in hardware, with fast processors like the 1.2 GHz Pentium 4 developed by Intel®.
We start out with the first and second generation languages of the period 1950-60, which many experienced programmers would say are machine and assembly languages. Programming language history really began with the work of Charles Babbage in the early nineteenth century, who developed automated calculation for mathematical functions. Further developments in the early 1950s brought us machine language, without interpreters or compilers to translate languages. Micro-code, residing in the CPU and written for operations such as multiplication or division, is an example of a first generation language. Computers were then programmed in binary notation, which was very prone to errors, and a simple algorithm resulted in lengthy code.
Symbolic assembly codes came next in the mid 1950s: second generation programming languages like AUTOCODER, SAP and SPS. Symbolic addresses allowed programmers to represent memory locations, variables and instructions with names, so they no longer had to change addresses whenever variables were moved to new locations. This kind of programming is still considered fast, but programming in machine or assembly language required detailed knowledge of the CPU and the machine's instruction set. It also meant high hardware dependency and lack of portability: assembly or machine code could not run on different machines. For example, code written for the Intel® processor family would look very different from code written for the Motorola 68X00 series, and converting it would mean rewriting the program for the new machine.
The period from the early 1960s until 1980 saw the emergence of the third generation programming languages. Languages like ALGOL 58, 60 and 68, COBOL, FORTRAN IV, ADA and C are examples, and were considered high level languages. Most of these languages had compilers, and the advantage of this was speed. Independence was another factor: these languages were machine independent and could run on different machines. The advantages of high level languages include support for ideas of abstraction,
so that programmers can concentrate on finding the solution to the problem rapidly, rather than on low-level details of data representation. The comparative ease of use and learning, improved portability, and simplified debugging, modification and maintenance led to reliability and lower software costs.
Fourth generation languages are quite clearly expected to be user friendly, portable and independent of operating systems, usable by non-programmers, and to have intelligent default options about what the user wants. They allow the user to obtain results quickly, using minimal code generated, free of bugs, from high-level expressions (employing database and dictionary management, which makes applications easy and quick to change); this was not possible using COBOL or PL/I. Standardisation in the early stages of evolution, however, can inhibit creativity in developing powerful languages for the future. Examples of this generation of languages are IBM's ADRS2, APL, CSP and AS.
The 1990s saw the development of fifth generation languages like PROLOG, referring to systems used in the fields of artificial intelligence, fuzzy logic and neural networks. This means computers may in the future have the ability to think for themselves and draw their own inferences using programmed information in large databases. Complex processes like understanding speech would appear trivial using these fast inferences and would make the software seem highly intelligent; in fact, such databases programmed in a specialised area of study could show expertise greater than that of humans. Also, improvements in the fourth generation languages now carried features where users did not need any programming knowledge: little or no coding, together with computer aided design and graphics, provides an easy to use product.
The procedure oriented languages are the third generation languages. In this type of language, programs are written as a sequence of instructions. To execute a procedure oriented program we need a translator. A translator is a program that converts the source program into an object program. There are two kinds of translators: interpreters and compilers.
An interpreter is a translator that converts the source code into object code instruction by instruction; it executes each instruction immediately once it is free from syntax errors. An interpreter does not require much memory to store the program. A compiler, on the other hand, translates the entire program into equivalent machine code. After compilation it lists the errors in the program, and it will execute the program only when all the errors are corrected. A compiler requires more memory; however, it converts the source code into object code only once, and once the object code is produced it can be executed again without retranslation.
Two mathematicians, Corrado Böhm and Giuseppe Jacopini, proved that any computer program can be written with three basic structures: sequences, selections and iterations. This discovery led to the method of structured programming.
A computer program is said to be structured if it has a modular design and uses only the three basic control structures:
Sequence: Statements are executed one after another, in the order in which they are written.
Selection: One of two blocks of program code is executed based on a test for some condition.
Iteration: One or more statements are executed repeatedly as long as a specified condition is true.
[Figure: flowcharts of the Sequence and Selection structures]
[Figure: flowchart of the Iteration structure]
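To show what the three structures look like in an actual program, here is a minimal sketch (C++ is assumed purely for illustration; the guide does not prescribe a language) containing one example of each: a sequence of statements, a selection with if/else, and an iteration with a while loop.

    #include <iostream>

    int main() {
        // Sequence: statements executed one after another
        int count = 0;
        int limit = 5;

        // Selection: one of two blocks runs depending on a condition
        if (limit > 0) {
            std::cout << "Counting up to " << limit << "\n";
        } else {
            std::cout << "Nothing to count\n";
        }

        // Iteration: the statements repeat while the condition stays true
        while (count < limit) {
            std::cout << "count = " << count << "\n";
            ++count;
        }
        return 0;
    }

Böhm and Jacopini's result says that combinations of just these three shapes are enough to express the logic of any program.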
The goal of structured programming is to create correct programs that are easy to write, easy to debug, easy to understand and easy to change.
1. Easy to write:
Work on a program can be divided into tasks, with each task handled by a different module (procedure).
Studies show structured programs take less time to write than standard programs.
Procedures written for one program can be reused in other programs requiring the same task. A procedure that can be used in many programs is said to be reusable (see the sketch after this list).
2. Easy to debug
Since each procedure is specialized to perform just one task, a procedure can be checked individually. This is harder in programs whose instructions are not grouped for specific tasks; the logic of such programs is more difficult to follow.
3. Easy to Understand
The relationship between the procedures shows the modular design of the
program.
Meaningful variable names help the programmer identify the purpose of each
variable.
4. Easy to Change
Because the program is modular, a change to one procedure can usually be made without affecting the rest of the program.
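As noted under "Easy to write", a procedure that performs one task can be reused wherever that task is needed. The following minimal sketch (a hypothetical C++ example; the function name average and the mark data are assumptions for illustration) defines one reusable procedure and calls it from two places.

    #include <iostream>

    // A reusable procedure: any program needing an average can call it
    double average(const double values[], int count) {
        double total = 0.0;
        for (int i = 0; i < count; ++i) {
            total += values[i];
        }
        return count > 0 ? total / count : 0.0;
    }

    int main() {
        double examMarks[] = {65.0, 72.5, 80.0};
        double labMarks[]  = {90.0, 85.5};

        // The same procedure serves two different tasks here,
        // and could be reused unchanged in other programs
        std::cout << "Exam average: " << average(examMarks, 3) << "\n";
        std::cout << "Lab average:  " << average(labMarks, 2) << "\n";
        return 0;
    }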
The structured program design does not provide a way to keep the data and the program (the procedure)
together. Each program therefore has to define how it will use the data for that particular program. This can
result in redundant programming code that must change every time the structure of the data is changed. A
newer approach to developing software called the object-oriented approach eliminates this problem.
It is a nonprocedural approach, which means the programmer needs to specify what to do without specifying how to do it. Consequently, coding programs in this approach requires much less time and effort on the part of the programmer.
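To make the contrast with the structured approach concrete, the following minimal sketch (a hypothetical C++ example; the class name Student and its members are assumptions, not taken from this guide) keeps the data and the procedures that operate on it together in one class, so a change to the data affects only that class.

    #include <iostream>
    #include <string>

    // The class keeps the data (its members) and the procedures
    // that use that data (its member functions) together
    class Student {
    public:
        Student(const std::string& name, double mark)
            : name_(name), mark_(mark) {}

        bool hasPassed() const { return mark_ >= 50.0; }

        void print() const {
            std::cout << name_ << ": " << mark_
                      << (hasPassed() ? " (pass)" : " (fail)") << "\n";
        }

    private:
        // If the representation of the data changes, only this class
        // needs to change; code that uses Student is unaffected
        std::string name_;
        double mark_;
    };

    int main() {
        Student s("Aisha", 68.0);
        s.print();
        return 0;
    }

Because hasPassed and print are the only code that reads the mark, changing how the mark is stored would require edits in only one place rather than in every program that uses the data.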
The chapters that follow will cover Object Oriented Programming in detail.