
CSC1201 – Introduction to Computer Science (Part 1)
Dr. Ado Haruna,
Department of Mechatronics Engineering,
Bayero University, Kano.
aharuna.mct@buk.edu.ng

1.0 Introduction
Computers are electronic devices that store, retrieve and process data at high
speeds according to programmed instructions. Computer science is concerned
with the application of scientific principles to the design, construction and
maintenance of systems based upon the use of computers.

2.0 History of Computers


2.1 Early History

The earliest computing device undoubtedly consisted of the five fingers of each
hand. Since there are ten discrete fingers (digits) available for counting, both
digital computation and the decimal system have enjoyed huge popularity
throughout history. However, improvements were made to replace the digits
of the hand with a more reliable counting device, which led to the birth of the
abacus in China about 2000 years ago. It consisted of a wooden rack holding
horizontal wires with beads strung on them. The beads could be moved along
the wires to carry out arithmetic.

The transformation of multiplication into addition by the use of logarithms was
a very significant milestone in the history of computers, as it simplified
mechanisation. John Napier, a Scottish mathematician, discovered the use of
logarithms in 1614. In 1617, he invented ‘Napier’s bones’, a manually
operated calculating device which reduced multiplication to addition and
division to subtraction.

Calculating machines invented by Blaise Pascal in 1642 and Gottfried Leibniz in
1671 marked the genesis of the application of machines in industry. The
Pascal machine added and subtracted numbers with dials. Leibniz later
expanded this design with special gears, allowing multiplication to be
performed on the Pascal machine.
In the 19th century, during the industrial revolution in Great Britain, Joseph
Jacquard invented a programmable weaving loom for the textile industry. This
made possible the automatic production of unlimited varieties of pattern weaving.
The Difference Engine, designed by Charles Babbage in the 1820s, was another
revolutionary mechanical computer. The Difference Engine was intended as a
machine with a 20-decimal-digit capacity that could solve mathematical
problems mechanically. Babbage also made plans for the Analytical Engine,
considered the mechanical precursor to the modern computer. The Analytical
Engine was designed to perform arithmetic operations efficiently, but Babbage
was unable to obtain the necessary funds to build it.

Augusta Ada Byron, the Countess of Lovelace, was one of the few female
mathematicians of her time and a personal friend of Babbage. She prepared
extensive notes concerning Babbage’s ideas for the Analytical Engine.
Lovelace’s conceptual work led to the naming of a programming language
(Ada) in her honour. Though the Analytical Engine was never built, its key
concepts, like the ability to store instructions, the use of punched cards as
memory and the ability to print, can be found in modern computers.

2.2 Early Electronic Computers

Herman Hollerith (widely regarded as the father of modern automatic
computing) used an idea similar to Jacquard’s loom when he chose the
punched card as the basis for storing and processing information. He built the
first punched-card tabulating and sorting machines. His designs won the
competition for the 1890 US census, chosen for their ability to count combined
facts. These machines reduced a ten-year job to three months and saved the
American taxpayers five million dollars. In 1911 Hollerith's company merged
with several others to form the Computing-Tabulating-Recording Company
(CTR), which changed its name to International Business Machines Corporation
(IBM) in 1924.

In 1937, British mathematician Alan Turing invented a theoretical computing
machine to serve as an idealized model for mathematical calculation. This
machine (known as the Turing machine) was a hypothetical machine having an
infinitely long tape upon which it writes, reads and alters symbols. Despite its
simplicity, a Turing machine can be adapted to simulate the logic of any
computer algorithm, and is particularly useful in explaining the functions of a
CPU inside a computer. The Turing machine was the theoretical predecessor to
the modern digital computer.

During the Second World War, American physicist Howard Aiken
developed the Mark I calculating machine, which was built by IBM and
installed at Harvard in 1944. The machine used relays and electromagnetic
components to replace mechanical parts. Aiken followed the Mark I with the
Mark II, Mark III and Mark IV, the last of which was completely electronic and
used solid-state devices.

2.3 ENIAC, EDVAC, EDSAC, and UNIVAC

The Electronic Numerical Integrator And Computer (ENIAC) was the first
operational electronic digital computer. It was developed for the U.S. Army by
J. Presper Eckert and John Mauchly at the University of Pennsylvania in
Philadelphia. Started in 1943, it took 200,000 man-hours and nearly half a
million dollars to complete two years later. Programmed by plugging in cords
and setting thousands of switches, the decimal-based machine used 18,000
vacuum tubes, weighed 30 tons and took up 1,800 square feet. It cost a
fortune in electricity to run; however, at 5,000 additions per second, it was
faster than anything else.

The Electronic Discrete Variable Automatic Computer (EDVAC) was to be a vast
improvement upon ENIAC. Mauchly and Eckert (joined by John von Neumann)
started working on it two years before ENIAC even went into operation. Their
idea was to have the program for the computer stored inside the computer.
This would be possible because EDVAC was going to have more internal
memory than any other computing device to date.

The EDSAC (Electronic Delay Storage Automatic Calculator) was the first full-
size stored-program computer, built at the University of Cambridge by
Maurice Wilkes and others to provide a formal computing service for users.
EDSAC was built according to the principles enunciated by the Hungarian
American scientist John von Neumann and became operational in 1949. Wilkes
built the machine chiefly to study computer programming issues, which he
realized would become as important as the hardware details.

The UNIVAC, or Universal Automatic Computer, was, effectively, an updated
version of the ENIAC. Data could be input using magnetic computer tape and
later by punched cards. It was processed using vacuum tubes and state-of-the-art
circuits, then either printed out or stored on more magnetic tape. Mauchly and
Eckert began building UNIVAC I in 1948 and delivered the completed machine
to the Census Bureau in March 1951. The computer was used to tabulate part
of the 1950 population census and the entire 1954 economic census. The
computer excelled at working with the repetitive but intricate mathematics
involved in weighting and sampling for these surveys. UNIVAC I, as the first
successful civilian computer, was a key part of the dawn of the computer age.

2.4 Transistors and Integrated Circuits

The invention of the transistor in 1947 by William B. Shockley and his team at
the American Telephone and Telegraph Company’s Bell Laboratories had a
huge impact on the evolution of computers. The devices could act as electric
switches and were smaller, cheaper, faster and more reliable than vacuum
tubes, which they soon replaced in computers. The devices also consumed far
less energy.

Engineers soon learnt to miniaturise other electrical components like resistors
and capacitors. In 1958, Jack Kilby of Texas Instruments Inc. and Robert Noyce
of Fairchild Semiconductor Corporation independently thought of a way to
reduce circuit size further and integrate all component parts on a single piece
of solid material. The integrated circuit (IC) was thus created. An IC can contain
hundreds of thousands of individual transistors on a single piece of material,
and the technology led to a further reduction in the size of computers.

Improvements in IC technology in the 1970s led to the birth of the
microprocessor. The first microprocessor was the Intel 4004, a 4-bit device the
size of a fingernail containing over two thousand transistors, with a maximum
clock speed of 740 kHz. Modern microprocessors contain over 100 million
transistors, can handle 64-bit data and have clock speeds well exceeding 4
GHz. Manufacturers used IC technology to produce smaller, cheaper and faster
computers. The first Personal Computer (PC) was the Altair 8800, made in 1975
by Micro Instrumentation Telemetry Systems (MITS). It used an Intel 8080
processor, had 256 bytes of RAM and displayed its output using rows of light-
emitting diodes (LEDs).

3.0 COMPUTER ORGANISATION

Regardless of the difference in physical appearance, virtually every computer
can be divided into six logical units:

3.1 Input Unit

Any information or data entered or sent to the computer to be processed is
considered input. The input unit receives data from the input devices like
keyboards, mice, scanners, etc. Other input devices include microphones,
digital cameras and various sensors.

3.2 Output Unit

This unit sends out processed data to devices so that it can be interpreted by
the user. An output device is any piece of computer hardware used to
communicate the results of data processing, converting the electronically
generated information into human-readable form. Computers can output
information as video, audio or print. The output may also be a signal used to
control other devices.

3.3 Memory Unit

This is the part of the computer that facilitates temporary storage of data. The
memory unit, often referred to as primary memory or random access memory
(RAM), is quickly accessible. The memory unit stores data that has been
entered through the input devices, enabling the data to be immediately
available for processing. The memory unit also retains processed data until the
data can be transmitted to the output devices. Primary memory is usually
volatile, meaning that it is erased when the machine is powered off.

3.4 Arithmetic Logic Unit

The arithmetic-logic unit (ALU) performs simple addition, subtraction,
multiplication, division, and logic operations, such as OR and AND. It contains
mechanisms that can be used for decision making, such as comparisons of two
numbers stored in memory.
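
The kinds of operations an ALU carries out can be illustrated with a few lines
of Python (a minimal sketch for demonstration only; a real ALU performs these
operations in hardware):

a, b = 12, 10                # 0b1100 and 0b1010 in binary
print(a + b, a - b)          # arithmetic: addition and subtraction
print(a & b)                 # logic AND of the bit patterns: 0b1000, i.e. 8
print(a | b)                 # logic OR of the bit patterns: 0b1110, i.e. 14
print(a > b)                 # comparison used for decision making: True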

3.5 Central Processing Unit (CPU)

The central processing unit (CPU) is the principal part of any digital computer
system. It constitutes the physical heart of the entire computer system and is
responsible for supervising the operation of the other sections. The CPU alerts
the input unit when instructions should be read, instructs the ALU when to use
the information and instructs the output unit when to transmit the information
to the output devices. The CPU and ALU are commonly regarded as a single unit.

3.6 Secondary Storage Unit

Secondary storage differs from primary storage in that it is not directly
accessible by the CPU. The computer usually uses its input/output channels to
access secondary storage, transferring the desired data via an intermediate
area in primary storage. Secondary storage is cheaper than primary storage
and does not lose its data when the device is powered off, i.e. it is non-
volatile. Examples of secondary storage are hard disk drives, CD-ROMs, flash
disks, etc.

Figure 1: Computer System Organisation

4.0 CLASSIFICATION OF COMPUTERS

Computers may be classified according to devices, size and generation.

4.1 Classification by Devices

By devices, computers are classified as Analog, Digital or Hybrid.

(a) Analog Computers: These are similar to measuring devices such as
thermometers and voltmeters. Analog computers process data in the form
of electric voltages, and their outputs are usually in the form of graphs from
which information can be read.
(b) Digital Computers: The information in digital computers is represented in
digital (discrete) form. A digital computer stores and performs a series of
mathematical and logical operations on data expressed as discrete signals,
usually in binary notation.
(c) Hybrid Computers: These are a combination of analog and digital
computers, consisting of an analog and a digital computer incorporated
together in a single system.

4.2 Classification by Size

There are four types of computers that can be identified by their size:

(a) Supercomputers: These are the largest, fastest and most expensive
computers. They have extremely fast processors capable of performing
many billions of operations per second. Supercomputers are used in
meteorology, engineering, nuclear physics and astronomy.
(b) Mainframe Computers: These are large general-purpose computers with
extensive processing, storage and input/output capabilities. They are used
in centralised computing environments in which data input is achieved via
terminals wired to the mainframe computer. Mainframes are usually
owned by large corporate organisations such as universities, research
institutes, giant banks, etc. The market for mainframes is dominated by
IBM.
(c) Minicomputers: These can be considered a scaled-down version of
mainframes, i.e. they are smaller, cheaper and have less memory. The term
minicomputer is no longer used specifically (it predates the term
microcomputer) and the boundary between these two classes of devices is
unclear.
(d) Microcomputers: This is the latest category to be developed.
Microcomputers are also called personal computers (PCs) as they can be
used in the home or office. Microcomputers utilise a microprocessor as
their central processing and arithmetic unit and are fast, cheap, durable
and power efficient. The desktop computer was the first type of
microcomputer, so called because it is placed on top of a desk. Due to
improvements in technology, the PC is now more compact, in the form of
laptops, notebooks and palmtops.

4.3 Classification by Generation

Classification of computers by generation is an informal way of classifying
computers based on the technologies used. There is, however, no exact
boundary distinguishing where one generation starts and ends. The following
are the five generations of computers:

(a) First Generation


 Manufactured between 1942 and 1955.
 Used vacuum tubes.
 Very large in size and occupied a lot of space.
 Consumed a large amount of energy and produced a lot of heat.
 Were unreliable and required constant maintenance.

(b) Second Generation

 These are machines designed after 1955 (approximately).
 Used transistors to replace vacuum tubes.
 Much smaller in size than the previous generation.
 Were faster and more reliable than the first generation.
 Used assembly and high-level languages instead of machine language.
 This generation of computers was still expensive and was only
used for specific purposes.

(c) Third Generation


 These are machines designed after 1960.
 The third generation used IC technology. These ICs had small-scale
integration (fewer than 100 transistors on a single chip) or medium-
scale integration (100–1000 transistors on a single chip).
 Smaller in size as compared to previous generations.
 More reliable and faster than earlier generations.
 Truly general purpose.

(d) Fourth Generation


 Manufactured in the 1970s.
 Used ICs with LSI (large-scale integration, at least 10,000 transistors
on a single silicon chip) and later VLSI (very-large-scale integration,
over 100,000 transistors on a single chip). This led to the production
of the microprocessor used by this generation of computers.
 Very fast processing with less power consumption.
 Commercially produced.
 Used all types of high-level languages.
(e) Fifth Generation
These are computers developed based on the techniques of Artificial
Intelligence (AI). These computers can understand spoken words and
imitate human reasoning. They also have situational awareness (SA), can
respond to their surroundings using different types of sensors and have
‘learning’ capability. In 2017, the AI company DeepMind released an
algorithm called AlphaZero which had the ability to ‘learn’ complex
board games like Chess solely via ‘self-play’. After 9 hours of playing
against itself to train its neural networks, the algorithm defeated the
strongest Chess program at the time (Stockfish 8) in a 100-game contest,
winning 28, drawing 72 and losing none.

4.4 Single Board Computers

Single-board computers (SBCs) are computers complete with a
microprocessor, memory, input/output and other features built on a single
circuit board. Unlike the desktop PC, SBCs often do not rely on expansion slots
for peripheral functions or expansion. SBCs can mainly be grouped into two
categories, i.e. open source and proprietary. Open-source SBCs give users
access to both the hardware design and the source code used on the
board. On the other hand, proprietary SBCs are generally industrialised
designs for use in specific end applications.

In 2006, a group from the University of Cambridge decided to address the
need for a low-cost computer that would allow students to learn how to
program. They came up with a $35 single-board computer known as the
Raspberry Pi. This quickly gained popularity with students, hobbyists and
professional engineers alike and helped to spark interest in SBCs. There are
various SBCs in production, which use a variety of processors, from the
traditional x86 type (AMD and Intel) to the ARM processors more common in
mobile devices.

Figure 2: The Raspberry Pi single board computer

4.5 Microcontrollers

Microcontrollers are computers on a single integrated circuit consisting of a
processor, memory and input/output terminals. They are mostly used in
embedded systems such as automobiles, telephones, appliances, and
peripherals for computer systems as they are cheap and energy efficient. The
first commercially manufactured microcontroller was the TMS1000, produced
by Texas Instruments in 1974. The PIC (Peripheral Interface Controller or
Programmable Intelligent Computer) and AVR microcontrollers, manufactured
by Microchip Technology, have gained popularity over the years. Professional
engineers as well as hobbyists use them for automation purposes. For
example, the popular Arduino Uno microcontroller board, which uses both
open-source software and hardware, employs the ATmega328P AVR
microcontroller. Microcontrollers are generally based on the RISC (Reduced
Instruction Set Computer) architecture, which gives the computer a small
set of simple and general instructions.

5.0 HARDWARE AND SOFTWARE

5.1 Hardware

Computer hardware consists of the component parts of the computer that can
be physically handled. These components can be divided into input,
processing, storage and output hardware.

5.1.1 Input Hardware

These are devices used for entering data into the computer system. Examples
are keyboards, mice, scanners, punched cards, game controllers, etc.

5.1.2 Processing Hardware

The microprocessor (CPU) is at the centre of all processing carried out by a
computer, making it the most important piece of computer hardware. It
provides computational ability and control for the computer system. However,
to operate, the CPU requires several other components, such as RAM, data
buses and other supporting ICs, to help it manage data. The processing power
of the CPU is determined by its clock speed (frequency) and the width of the
data it processes.

5.1.3 Storage Hardware

Storage hardware refers to secondary memory used to store processed or
unprocessed information. Storage hardware includes hard disk drives, floppy
disk drives, optical disc drives, etc.

5.1.4 Output Hardware

Output hardware receives processed data and presents it in human-readable
form. The processed information can be read, viewed, heard or reproduced as
a hard copy. Examples of output hardware include monitors, printers,
loudspeakers, etc.

(a) Monitors: These are also known as Visual Display Units (VDUs).
Processed information can be read or viewed on the monitor. The
quality of video output is determined by screen size and resolution.
(b) Printers: Information can be viewed as a hard copy using printers.
Printers may be dot matrix, inkjet or laser.
5.1.5 Peripherals

A computer peripheral is a device connected to the computer to provide
additional functionality. Peripherals can be input, output, storage or
communication devices. Many peripherals are critical elements of a fully
functioning computer. Examples are keyboards, mice, printers, modems,
external hard disks, etc.

5.2 Software

Software is a set of instructions that tell a computer what to do. Software
comprises the entire set of programs, procedures, and routines associated with
the operation of a computer system. Unlike hardware, software is the part of
the computer that cannot be physically handled. A set of instructions that
directs a computer’s hardware to perform a task is called a program, or
software program.

Software is typically stored on an external long-term memory device, such as a
hard drive or magnetic diskette. When a program is in use, the computer
reads it from the storage device and temporarily places the instructions in
random access memory (RAM). The process of storing and then performing the
instructions is called “running,” or “executing,” a program. By contrast,
software programs and procedures that are permanently stored in a
computer’s memory using read-only memory (ROM) technology are called
firmware, or “hard software.” The ROM BIOS (Basic Input Output System)
stores software responsible for controlling the interaction between the input
and output hardware. The two main types of software are system software
and application software.

5.2.1 System software

The system software (operating system) is the primary software that controls
the operations of a computer and provides common services for computer
programs. It has three major functions:

a) It coordinates and manipulates computer hardware
b) It organises and manages data files on storage media
c) It manages hardware errors and loss of data.

Examples of commonly used PC operating systems include MS-DOS/Windows,
Apple Mac OS and Linux.

(a) MS-DOS: Short for Microsoft Disk Operating System, MS-DOS is a non-
graphical command-line operating system for the x86 family of
personal computers. Introduced by Microsoft in 1981, it was the main
operating system for IBM PC compatible personal computers from the
1980s to the mid-1990s. MS-DOS was gradually superseded by operating
systems offering a graphical user interface (GUI), in particular by various
generations of the Microsoft Windows operating system.

(b) MAC-OS: Mac OS is the computer operating system for Apple
Computer's Macintosh line of personal computers and workstations.
In 1984, Apple Computer Inc. (now Apple Inc.) introduced the Macintosh
personal computer. The operating system of the early Macintosh was
named "System Software", or simply "System", and its successors were
later renamed Mac OS. The Macintosh platform is generally credited with
having popularized the early concept of the graphical user interface, the
main recognizable aspect of Mac OS.

(c) Linux: Linux is an open source operating system first developed by Linus
Torvalds. Thousands of programmers contributed to enhance Linux, and
the operating system grew rapidly. Because it is free and runs on PC
platforms, it gained a sizeable audience among hard-core developers
very quickly. In general, Linux is harder to manage than something like
Windows, but offers more flexibility and configuration options and is
popular with people who want to experiment with operating system
principles.

5.2.2 Application software

Application software, by contrast, directs the computer to execute commands
given by the user and may be said to include any program that processes data
for a user. Application software thus includes word processors, spreadsheets,
database management, inventory and payroll programs, modelling &
simulation and many other “applications.”

6.0 PROBLEM SOLVING

6.1 Using a Computer to Solve Problems

A computer is worthless without a program that controls it. A computer
functions by executing a sequence of instructions, referred to as a program.
Hence, in order to solve a problem with a computer, the computer has to be
programmed to solve it. As computers follow instructions to the letter, they
are governed by the principle of “garbage in, garbage out”: wrong input or
instructions produce wrong results. To use a computer to solve a problem,
one must have the following:

 a clear understanding of what the problem is
 knowledge of the input data to be processed
 knowledge of the output information to be produced
 knowledge of the strategy to use to transform input data into output
 knowledge of what data (if any) are to be generated for further processing

Problem-solving strategy, i.e. evolving a method for solving a problem, is a
human task and not that of a computer. A step-by-step procedure required by
a computer to solve a problem is called an algorithm.

6.2 Algorithms

Algorithms describe the solution to a problem in terms of the data needed to
represent the problem instance and the set of steps necessary to produce the
intended result. The name derives from the Latin translation, Algoritmi de
numero Indorum, of the 9th-century mathematician al-Khwarizmi’s arithmetic
treatise “Al-Khwarizmi Concerning the Hindu Art of Reckoning.”

6.2.1 Properties of Algorithms

An algorithm must have the following properties:

I) Input(s): An algorithm must have one or more pre-specified inputs.
II) Output(s): An algorithm must have one or more outputs.
III) Definiteness: Each step of an algorithm must be clearly defined, i.e.
without any confusion, contradiction or ambiguity.
IV) Finiteness: An algorithm must terminate after a finite (tolerable)
number of steps.
V) Correctness: Given correct inputs, an algorithm should always
produce correct results.
VI) Efficiency: For any particular problem, there are many alternative
algorithms that can provide a solution. Algorithms should be simple,
fast, and minimise the use of available computational resources.
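
As an illustration, the following short Python function (a minimal sketch
written for this handout) finds the largest number in a list and exhibits all six
properties: it has one pre-specified input and one output, every step is
unambiguous, the loop terminates after a finite number of steps, it produces
the correct result for any non-empty list, and it examines each value only once.

def find_maximum(numbers):
    # Input: a non-empty list of numbers; Output: the largest value
    largest = numbers[0]
    for value in numbers[1:]:    # each step is clearly defined
        if value > largest:      # comparison of two numbers
            largest = value
    return largest               # terminates after a finite number of steps

print(find_maximum([3, 41, 12, 9, 74, 15]))   # prints 74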

6.3 Flowcharts

A flowchart is a graphical representation of a process. Individual operations
can be represented by closed boxes on the flowchart, with arrows between
boxes indicating the order in which the steps are taken. In computer operation,
a flowchart represents the steps taken by an algorithm in solving a particular
problem. The following are the four most commonly used blocks in a
flowchart:

I) Terminals: The ovals indicate the beginning (START) and end (STOP)
of an algorithm.

II) Data: Parallelograms indicate input/output data

III) Process: Rectangular boxes indicate the manipulation of data in the
memory of a computer.

IV) Decision: A diamond-shaped symbol indicates logical
decisions/comparisons.
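
For example, a simple algorithm that reads a number, decides whether it is
negative, and prints a message uses all four blocks: two terminals, two data
blocks, a decision and a process. A Python rendering of the same flowchart
(an illustrative sketch) is:

# START (terminal)
number = float(input("Enter a number: "))   # Data: input parallelogram
if number < 0:                              # Decision: diamond
    message = "negative"                    # Process: rectangle
else:
    message = "zero or positive"            # Process: rectangle
print(message)                              # Data: output parallelogram
# STOP (terminal)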

7.0 PROGRAMMING LANGUAGES

Programming languages are various languages for expressing a set of detailed
instructions for a digital computer, i.e. they are used to write or code computer
programs. Computer programming languages are classified as either low-level
or high-level languages.

7.1 Low-Level Languages

These are programming languages that provide little or no abstraction from a
computer’s instruction set architecture. Low-level languages can be either
machine language or assembly language.

7.1.1 Machine Language

A machine language consists of the numeric codes for the operations that a
particular computer can execute directly. It can be considered the natural
language of a specific computer. The codes are strings of 0s and 1s, i.e. binary
bits. Machine language instructions typically consist of two parts – the
operation code (opcode) and the operand. The opcode represents the
instruction to be carried out while the operand contains the memory address
of the data to be used. Machine language is difficult to read and write, since it
does not resemble conventional mathematical notation or human language,
and its codes vary from computer to computer.
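
As an illustration, consider a hypothetical machine (invented for this example;
it is not any real instruction set) in which every instruction is 8 bits long: a
4-bit opcode followed by a 4-bit operand address. The following Python sketch
decodes one such instruction:

OPCODES = {0b0001: "LOAD", 0b0010: "ADD", 0b0011: "STORE"}   # toy opcode table

def decode(instruction):
    opcode = (instruction >> 4) & 0b1111   # upper 4 bits: the operation
    operand = instruction & 0b1111         # lower 4 bits: a memory address
    return OPCODES.get(opcode, "UNKNOWN"), operand

print(decode(0b00100101))   # prints ('ADD', 5)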

7.1.2 Assembly Language

Assembly language is one level above machine language. It uses short
mnemonic codes for instructions and allows the programmer to introduce
names for blocks of memory that hold data. Assembly language is designed to
be easily translated into machine language code by a suitable assembler
program. Although blocks of data may be referred to by name instead of by
their machine addresses, assembly language does not provide more
sophisticated means of organizing complex information. Like machine
language, assembly language requires detailed knowledge of the internal
computer architecture.
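
Continuing the toy instruction format from the previous example (again purely
illustrative; real assemblers also handle labels, symbols and directives), the
core job of an assembler, translating mnemonics into machine code, can be
sketched in a few lines of Python:

MNEMONICS = {"LOAD": 0b0001, "ADD": 0b0010, "STORE": 0b0011}

def assemble(line):
    mnemonic, address = line.split()
    # pack the 4-bit opcode and 4-bit operand address into one 8-bit word
    return (MNEMONICS[mnemonic] << 4) | int(address)

program = ["LOAD 4", "ADD 5", "STORE 6"]
print([format(assemble(line), "08b") for line in program])
# prints ['00010100', '00100101', '00110110']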

7.2 High-Level Languages

High-level languages were developed to speed up and simplify the
programming process. High-level languages use syntax similar to everyday
English, and a single instruction can accomplish a significant task relative to a
low-level language. For instance, the statement

a=b+c

adds the value of memory location b to that of c and stores the result in a.
Translator programs referred to as compilers convert high-level programs into
machine language. This process can take a significant amount of time, so
interpreter programs such as Q-BASIC were developed to run high-level
programs directly.
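
This difference in level can be seen directly in Python, whose built-in dis
module shows the several low-level bytecode instructions the interpreter
generates for the single high-level statement above (the exact instruction
names vary between Python versions):

import dis

dis.dis("a = b + c")
# Typical output: LOAD_NAME b, LOAD_NAME c, a binary-add instruction,
# STORE_NAME a: several low-level steps for one high-level statement.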
