Unit Ia: What Computer Science Is Not
Computer science is the scientific and practical approach to computation and its
applications. It is the systematic study of the feasibility, structure, expression, and
mechanization of the methodical procedures (or algorithms) that underlie the
acquisition, representation, processing, storage, communication of, and access to
information, whether such information is encoded as bits in a computer memory or
transcribed in genes and protein structures in a biological cell [1].
Computer Science is not just about building computers or writing computer programs!
Computer Science is no more about building computers and developing software than
astronomy is about building telescopes, biology is about building microscopes, and
music is about building musical instruments! Computer science is not about the tools we
use to carry out computation. It is about how we use such tools, and what we find out
when we do. The solution of many computer science problems may not even require
the use of computers, just pencil and paper. As a matter of fact, problems in computer science were being tackled decades before computers were even built. That said, the
design and implementation of computing system hardware and software is replete with
formidable challenges and fundamental problems that keep computer scientists busy.
Computer Science is about building computers and writing computer programs, and much, much more! [1].
Computer science also cultivates an ability to recognize variants of the same problem in different settings, and to retarget known efficient solutions to problems in new settings [1].
History
The earliest foundations of what would become computer science predate the invention
of the modern digital computer. Machines for calculating fixed numerical tasks such as
the abacus have existed since antiquity, aiding in computations such as multiplication
and division. Further, algorithms for performing computations have existed since
antiquity, even before sophisticated computing equipment was created. The ancient Sanskrit treatise Shulba Sutras, or "Rules of the Chord", is a book of algorithms written around 800 BCE for constructing geometric objects like altars using a peg and chord, an
early precursor of the modern field of computational geometry.
Blaise Pascal designed and constructed the first working mechanical calculator, Pascal's
calculator, in 1642. In 1673 Gottfried Leibniz demonstrated a digital mechanical
calculator, called the 'Stepped Reckoner'. He may be considered the first computer
scientist and information theorist, for, among other reasons, documenting the binary
number system. In 1820, Thomas de Colmar launched the mechanical calculator
industry when he released his simplified arithmometer, which was the first calculating
machine strong enough and reliable enough to be used daily in an office environment.
Charles Babbage started the design of the first automatic mechanical calculator, his
difference engine, in 1822, which eventually gave him the idea of the first
programmable mechanical calculator, his Analytical Engine. He started developing this
machine in 1834, and "in less than two years he had sketched out many of the salient features of the modern computer. A crucial step was the adoption of a punched card system derived from the Jacquard loom", making it infinitely programmable. In 1843,
during the translation of a French article on the analytical engine, Ada Lovelace wrote,
in one of the many notes she included, an algorithm to compute the Bernoulli numbers,
which is considered to be the first computer program. Around 1885, Herman Hollerith
invented the tabulator which used punched cards to process statistical information;
eventually his company became part of IBM. In 1937, one hundred years after
Babbage's impossible dream, Howard Aiken convinced IBM, which was making all kinds
of punched card equipment and was also in the calculator business, to develop his giant
programmable calculator, the ASCC/Harvard Mark I, based on Babbage's analytical
engine, which itself used cards and a central computing unit. When the machine was
finished, some hailed it as "Babbage's dream come true" [2].
The term "computer science" appears in a 1959 article in Communications of the ACM,
in which Louis Fein argues for the creation of a Graduate School in Computer Sciences
analogous to the creation of Harvard Business School in 1921. His efforts, and those of
others such as numerical analyst George Forsythe, were rewarded: universities went on
to create such programs, starting in the United States with Purdue in 1962. The world's first computer science degree program, the Cambridge Diploma in Computer Science, had already begun at the University of Cambridge Computer Laboratory in 1953.
It is evident that the birth of computer science is closely related to humans' desire to solve problems efficiently and correctly. Yes, it is all about problem solving!
1. Define the problem: State the problem you are trying to solve in clear and
concise terms.
2. List the inputs (information needed to solve the problem) and the outputs (the
intended outcome of the solution).
3. Describe the steps needed to convert or manipulate the inputs to produce the
outputs.
4. Start at a high level first, and keep refining the steps until they are effectively
computable operations.
5. Test the sequence of steps: choose data sets and verify that the solution works!
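As a sketch only, the five steps above might play out as follows for a small problem of our own choosing (the problem, the function name, and the test data are ours, not the source's): finding the largest value in a list of numbers.

```python
# Step 1: Define the problem -- find the largest number in a non-empty list.
# Step 2: Input: a non-empty list of numbers. Output: the largest number.
# Steps 3-4: Refine the solution into effectively computable operations.

def largest(numbers):
    """Return the largest value in a non-empty list of numbers."""
    biggest = numbers[0]      # assume the first value is the largest so far
    for n in numbers[1:]:     # examine every remaining value
        if n > biggest:       # found a bigger one?
            biggest = n       # remember it
    return biggest

# Step 5: Test the sequence of steps with chosen data sets.
print(largest([3, 7, 2]))     # 7
print(largest([-5, -1, -9]))  # -1
```

Note that every operation in the loop is something a computing agent can carry out mechanically, with no understanding of what "largest" means.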
The formation of the sequence of steps is one of the important concepts in computer
science. It is represented by the word “algorithm”. The word algorithm is derived from
the last name of Muhammad Al-Khowarizmi (full name: Abu Ja'far Muhammad ibn-Musa
Al-Khowarizmi), a famous Persian mathematician and author of the eighth and ninth centuries, whose work built upon that of the 7th-century Indian mathematician Brahmagupta. Al-Khowarizmi's most famous book is Kitab al jabr w'al muqabala, and he developed the procedures we commonly use when adding, subtracting, multiplying and dividing two decimal numbers with pencil and paper.
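The paper-and-pencil addition procedure mentioned above can be sketched as code. This digit-by-digit version with a carry is our own illustration of the familiar column method, not Al-Khowarizmi's original formulation:

```python
def add_decimal(a_digits, b_digits):
    """Add two numbers given as lists of decimal digits (most significant
    first), using the right-to-left, carry-based paper-and-pencil method."""
    a = a_digits[::-1]   # work from the rightmost (least significant) digit,
    b = b_digits[::-1]   # just as on paper
    result = []
    carry = 0
    for i in range(max(len(a), len(b))):
        da = a[i] if i < len(a) else 0   # missing columns count as 0
        db = b[i] if i < len(b) else 0
        total = da + db + carry
        result.append(total % 10)        # the digit written below the column
        carry = total // 10              # the carry taken into the next column
    if carry:
        result.append(carry)
    return result[::-1]

print(add_decimal([4, 5, 6], [7, 8]))  # 456 + 78 -> [5, 3, 4]
```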
Synonyms of the word Algorithm can be words such as “rule”, “procedure”, “method” or
“technique”. There are many formal definitions for algorithm in computer science. An
algorithm is a well-ordered collection of unambiguous and effectively
computable operations that when executed produces a result and halts in a
finite amount of time [3].
A computing agent is the thing that is supposed to carry out the algorithm.
Effectively computable means the computing agent can actually carry out the operation.
If we can specify an algorithm to solve a problem then we can automate its solution.
That which carries out the steps of an algorithm (a computing agent) just needs to be able to follow directions; it does not need to understand the concepts or ideas behind the algorithm.
For the purpose of automation, the algorithm is written as a “program” - an organized
list of instructions that, when executed, causes the computer to behave in a
predetermined manner. It is a sequence of instructions, written to perform a specified
task with a computer.
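As a sketch of this algorithm-to-program step, here is one classic algorithm, Euclid's method for the greatest common divisor, written as a short program (the example choice is ours; the source does not mention it). Each operation is unambiguous and effectively computable, and the loop is guaranteed to halt in a finite amount of time:

```python
def gcd(a, b):
    """Euclid's algorithm: repeatedly replace (a, b) with (b, a mod b)
    until b is 0; the value then left in a is the greatest common divisor."""
    while b != 0:          # the remainder shrinks every pass, so this halts
        a, b = b, a % b
    return a

print(gcd(48, 18))  # 6
```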
Dynamic Programming
Dynamic programming algorithms are used for optimization (for example, finding the shortest path
between two points, or the fastest way to multiply many matrices). A dynamic programming algorithm
will examine the previously solved subproblems and will combine their solutions to give the best solution
for the given problem.
For example, let's say that you have to get from point A to point B as fast as possible, in a
given city, during rush hour. A dynamic programming algorithm will look at finding the shortest
paths to points close to A, and use those solutions to eventually find the shortest path to B. On
the other hand, a greedy algorithm will start you driving immediately and will pick the road that
looks the fastest at every intersection. As you can imagine, this strategy might not lead to the
fastest arrival time, since you might take some "easy" streets and then find yourself hopelessly
stuck in a traffic jam.
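The contrast above can be sketched in code. As an assumed toy setting (ours, not the source's), consider a grid of travel times where you may only move right or down: dynamic programming combines the best solutions of smaller subproblems, while a greedy strategy just takes whichever next cell looks cheapest at the moment.

```python
def dp_min_cost(grid):
    """Dynamic programming: the best cost from each cell is its own cost
    plus the best of the already-solved subproblems below and to the right."""
    rows, cols = len(grid), len(grid[0])
    cost = [[0] * cols for _ in range(rows)]
    for r in range(rows - 1, -1, -1):
        for c in range(cols - 1, -1, -1):
            best = []
            if r + 1 < rows:
                best.append(cost[r + 1][c])   # subproblem: start one row down
            if c + 1 < cols:
                best.append(cost[r][c + 1])   # subproblem: start one column right
            cost[r][c] = grid[r][c] + (min(best) if best else 0)
    return cost[0][0]

def greedy_cost(grid):
    """Greedy: at each step take whichever neighboring cell looks cheaper now."""
    rows, cols = len(grid), len(grid[0])
    r = c = 0
    total = grid[0][0]
    while (r, c) != (rows - 1, cols - 1):
        down = grid[r + 1][c] if r + 1 < rows else float("inf")
        right = grid[r][c + 1] if c + 1 < cols else float("inf")
        if down <= right:
            r, total = r + 1, total + down
        else:
            c, total = c + 1, total + right
    return total

grid = [[1, 1, 9],
        [2, 9, 9],
        [1, 1, 1]]
print(dp_min_cost(grid))  # 5 cells down the left and along the bottom: 6
print(greedy_cost(grid))  # lured right by the cheap 1, then stuck: 13
```

On this grid the greedy driver takes the "easy" first street and pays dearly at the next intersection, exactly the traffic-jam scenario described above.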
Flow charts:-
A flowchart is a graphical tool that diagrammatically depicts the steps and structure of an algorithm, workflow or process: the steps are shown as boxes of various kinds, and their order is shown by connecting them with arrows. The first formal flowchart is attributed to John von Neumann in 1945.
4. Checks the program flow – accuracy in logic flow can be easily maintained.
5. Helps in coding.
3. Crossing flowlines shall not imply any logical connection between those lines.
4. Any flowchart must be identified with a title, date, name of the author, inputs
and outputs.
5. Annotation and cross references may be provided when the meaning is not
apparent from the symbol used.
6. When drawing a chart on more than one sheet of paper, the connectors joining
different pages (or parts of a chart within one page) must be adequately
referenced.
Flowcharts are of two kinds: (i) system charts and (ii) program charts. System charts
are used by system analysts. Program charts are used to represent an algorithm in a
graphical way.
Program Charts
In 1966, computer scientists Corrado Böhm and Giuseppe Jacopini demonstrated that
all programs could be written using three control structures: Sequence, Selection, and
Repetition [5].
The sequence structure is the construct where one statement is executed after another.
The selection structure is the construct where statements can be executed or skipped depending on whether a condition evaluates to TRUE or FALSE. The repetition structure is the construct where statements can be executed repeatedly as long as a condition evaluates to TRUE (or, equivalently, until it evaluates to FALSE). All of these constructs can be represented using
flowcharts.
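As a brief sketch of the three constructs in code (our own example, in Python; the source does not prescribe a language):

```python
# Sequence: statements execute one after another.
x = 4
y = x * 2

# Selection: a statement runs or is skipped based on a condition.
if y > 5:
    label = "big"
else:
    label = "small"

# Repetition: statements repeat as long as a condition evaluates to TRUE.
total = 0
n = y
while n > 0:        # stops once the condition becomes FALSE
    total += n
    n -= 1

print(label, total)  # big 36
```

Böhm and Jacopini's result says these three shapes are enough: any program flow can be expressed by nesting and chaining them.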
Symbols
[Flowchart symbols chart]
Example:-
[Flowchart illustration]
Example:-
[Flowchart illustration]
Example:-
[Flowchart illustration]
References
[1] http://www.cs.bu.edu/AboutCS/WhatIsCS.pdf
[2] http://en.wikipedia.org/wiki/Computer_science#cite_ref-1
[3] G. Michael Schneider and Judith Gersting, Invitation to Computer Science.
[4] http://www.cs.xu.edu/csci170/08f/sect01/Overheads/WhatIsAnAlgorithm.html
[5] Böhm, Corrado and Giuseppe Jacopini (May 1966). "Flow Diagrams, Turing Machines and Languages with Only Two Formation Rules". Communications of the ACM 9 (5): 366–371.
Additional References:-
http://users.evtek.fi/~jaanah/IntroC/DBeech/3gl_flow.htm
https://chortle.ccsu.edu/QBasic/chapter16/bc16_3.html