
INTRODUCTION

Computers have in recent times become a relevant tool, particularly in the areas of
storage and dissemination of information, owing to the ease with which they function:
their speed, accuracy and readiness.

With the usefulness of the computer, it has become fashionable for organizations to be
computerized; that is, a computer department is created to serve the whole organization, and
experts or professionals are employed to manage the department. It is today becoming
increasingly difficult for computer illiterates to get good employment, as computer literacy
is now a prerequisite for most jobs.

The world is becoming a global village through the use of the computer; thus there is the
need for everyone to be computer literate.

The computer age is characterized by generations of computers, which signify that the
computer has passed through stages of evolution or development. Before arriving at the
present-day computer, it underwent stages of development known as the generations of
computers.

What is a Computer?
A computer is an electronic device used to store, retrieve and manipulate data.
A computer can also be defined as a programmable electromechanical device that accepts
instructions (programs) to direct its operations. Three words can be deduced
from the above definition for further illustration.
Examples

i. Store: To put data somewhere for safekeeping.


ii. Retrieve: To get the data and bring it back.
iii. Process: To calculate, compare and arrange.
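For illustration only, the three operations can be sketched in a few lines of Python; the names store, retrieve and process below are illustrative, not from any particular system:

# A minimal illustration of the three operations a computer performs on data.

records = {}                      # a simple in-memory store

def store(key, value):
    """Put data somewhere for safekeeping."""
    records[key] = value

def retrieve(key):
    """Get the data and bring it back."""
    return records[key]

def process(values):
    """Calculate, compare and arrange: here, sort and total the values."""
    return sorted(values), sum(values)

store("scores", [70, 45, 88])
scores = retrieve("scores")
ordered, total = process(scores)
print(ordered, total)             # [45, 70, 88] 203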

What is Computer Science?

Computer science (sometimes called computation science or computing science, but
not to be confused with computational science or software engineering) is the study of
processes that interact with data and that can be represented as data in the form of programs.
It enables the use of algorithms to manipulate, store, and communicate digital information. A
computer scientist studies the theory of computation and the practice of designing software
systems.

Its fields can be divided into theoretical and practical disciplines. Computational
complexity theory is highly abstract, while computer graphics emphasizes real-world
applications. Programming language theory considers approaches to the description of
computational processes, while computer programming itself involves the use of
programming languages and complex systems. Human–computer interaction considers the
challenges in making computers useful, usable, and accessible.

HISTORICAL BACKGROUND OF COMPUTER:

The history of the computer dates back to the period of the scientific revolution (i.e. 1543 –
1678). The calculating machines invented by Blaise Pascal in 1642 and by Gottfried
Leibniz marked the genesis of the application of machines in industry.

This progressed up to the period 1760 – 1830, the period of the industrial
revolution in Great Britain, when the use of machines for production altered British society
and the Western world. During this period Joseph Jacquard invented the weaving loom (a
machine used in the textile industry).
The computer was born not for entertainment or email but out of a need to solve a serious
number-crunching crisis. By 1880, the United States (U.S.) population had grown so large that
it took more than seven years to tabulate the U.S. Census results. The government sought a
faster way to get the job done, giving rise to punch-card based computers that took up entire
rooms. Today, we carry more computing power on our smartphones than was available in
these early models. The following brief history of computing is a timeline of how computers
evolved from their humble beginnings to the machines of today that surf the Internet, play
games and stream multimedia in addition to crunching numbers. The following are historical
events in the development of the computer.

1623: Wilhelm Schickard designed and constructed the first working mechanical calculator.

1673: Gottfried Leibniz demonstrated a digital mechanical calculator, called the Stepped
Reckoner. He may be considered the first computer scientist and information theorist, for,
among other reasons, documenting the binary number system.

1801: In France, Joseph Marie Jacquard invents a loom that uses punched wooden cards to
automatically weave fabric designs. Early computers would use similar punch cards.

1820: Thomas de Colmar launched the mechanical calculator industry when he released his
simplified arithmometer, which was the first calculating machine strong enough and reliable
enough to be used daily in an office environment.

1822: English mathematician Charles Babbage (the "Father of the Computer") conceives of a
steam-driven calculating machine that would be able to compute tables of numbers. The project,
funded by the English government, is a failure. More than a century later, however, the
world's first computer was actually built.

1843: During the translation of a French article on the Analytical Engine, Ada Lovelace
wrote, in one of the many notes she included, an algorithm to compute the Bernoulli
numbers, which is considered to be the first published algorithm ever specifically tailored for
implementation on a computer.

1885: Herman Hollerith invented the tabulator, which used punched cards to process
statistical information; eventually his company became part of IBM.

1890: Herman Hollerith designs a punch card system to help tabulate the 1890 census,
accomplishing the task in just three years and saving the government $5 million. He
establishes a company that would ultimately become IBM.

1936: Alan Turing presents the notion of a universal machine, later called the Turing
machine, capable of computing anything that is computable. The central concept of the
modern computer was based on his ideas.

1937: J.V. Atanasoff, a professor of physics and mathematics at Iowa State University,
attempts to build the first computer without gears, cams, belts or shafts.

1937: One hundred years after Babbage's impossible dream, Howard Aiken convinced IBM,
which was making all kinds of punched card equipment and was also in the calculator
business, to develop his giant programmable calculator, the ASCC/Harvard Mark I, based on
Babbage's Analytical Engine, which itself used cards and a central computing unit. When the
machine was finished, some hailed it as "Babbage's dream come true".

1939: Hewlett-Packard is founded by David Packard and Bill Hewlett in a Palo Alto,
California, garage, according to the Computer History Museum.

1941: Atanasoff and his graduate student, Clifford Berry, design a computer that can solve 29
equations simultaneously. This marks the first time a computer is able to store information on
its main memory.
1943-1944: Two University of Pennsylvania professors, John Mauchly and J. Presper
Eckert, build the Electronic Numerical Integrator and Calculator (ENIAC). Considered the
grandfather of digital computers, it fills a 20-foot by 40-foot room and has 18,000 vacuum
tubes.

1946: Mauchly and Eckert leave the University of Pennsylvania and receive funding from
the Census Bureau to build the UNIVAC, the first commercial computer for business and
government applications.

1947: William Shockley, John Bardeen and Walter Brattain of Bell Laboratories invent the
transistor. They discovered how to make an electric switch with solid materials and no need
for a vacuum.

1953: Grace Hopper develops the first computer language, which eventually becomes known
as COBOL. Thomas Johnson Watson Jr., son of IBM CEO Thomas Johnson Watson Sr.,
conceives the IBM 701 EDPM to help the United Nations keep tabs on Korea during the war.

1954: The FORTRAN programming language, an acronym for FORmula TRANslation, is
developed by a team of programmers at IBM led by John Backus, according to the University
of Michigan.

1958: Jack Kilby and Robert Noyce unveil the integrated circuit, known as the computer chip.
Kilby was awarded the Nobel Prize in Physics in 2000 for his work.

1964: Douglas Engelbart shows a prototype of the modern computer, with a mouse and a
graphical user interface (GUI). This marks the evolution of the computer from a specialized
machine for scientists and mathematicians to technology that is more accessible to the
general public.

1969: A group of developers at Bell Labs produce UNIX, an operating system that addressed
compatibility issues. Written in the C programming language, UNIX was portable across
multiple platforms and became the operating system of choice among mainframes at large
companies and government entities. Due to the slow nature of the system, it never quite
gained traction among home PC users.

1970: The newly formed Intel unveils the Intel 1103, the first Dynamic Random Access
Memory (DRAM) chip.

1971: Alan Shugart leads a team of IBM engineers who invent the "floppy disk," allowing
data to be shared among computers.

1973: Robert Metcalfe, a member of the research staff for Xerox, develops Ethernet for
connecting multiple computers and other hardware.

1974 -1977: A number of personal computers hit the market, including Scelbi & Mark-8
Altair, IBM 5100, Radio Shack's TRS-80 — affectionately known as the "Trash 80" — and
the Commodore PET.

1975: The January issue of Popular Electronics magazine features the Altair 8080, described
as the "world's first minicomputer kit to rival commercial models." Two "computer geeks,"
Paul Allen and Bill Gates, offer to write software for the Altair, using the new Beginners All
Purpose Symbolic Instruction Code (BASIC) language. On April 4, after the success of this
first endeavor, the two childhood friends form their own software company, Microsoft.

1976: Steve Jobs and Steve Wozniak start Apple Computers on April Fool's Day and roll out
the Apple I, the first computer with a single-circuit board, according to Stanford University.

1977: Radio Shack's initial production run of the TRS-80 was just 3,000. It sold like crazy.
For the first time, non-geeks could write programs and make a computer do what they
wished.
1977: Jobs and Wozniak incorporate Apple and show the Apple II at the first West Coast
Computer Faire. It offers color graphics and incorporates an audio cassette drive for storage.

1978: Accountants rejoice at the introduction of VisiCalc, the first computerized spreadsheet
program.

1979: Word processing becomes a reality as MicroPro International releases WordStar. "The
defining change was to add margins and word wrap," said creator Rob Barnaby in an email to
Mike Petrie in 2000. "Additional changes included getting rid of command mode and adding
a print function. I was the technical brains — I figured out how to do it, and did it, and
documented it."

1981: The first IBM personal computer, code-named "Acorn," is introduced on Aug. 12. It
uses Microsoft's MS-DOS operating system. It has an Intel chip, two floppy disks and an
optional color monitor. Sears & Roebuck and Computerland sell the machines, marking the
first time a computer is available through outside distributors. It also popularizes the term PC.

1983: Apple's Lisa is the first personal computer with a graphical user interface (GUI). It also
features drop-down menus and icons. It flops but eventually evolves into the Macintosh. The
Gavilan SC is the first portable computer with the familiar flip form factor and the first to be
marketed as a "laptop." The TRS-80, introduced in 1977, was one of the first machines whose
documentation was intended for non-geeks.

1985: Microsoft announces Windows, according to Encyclopedia Britannica. This was the
company's response to Apple's graphical user interface (GUI). Commodore unveils the
Amiga 1000, which features advanced audio and video capabilities.
1985: The first dot-com domain name is registered on March 15, years before the World
Wide Web would mark the formal beginning of Internet history. The Symbolics Computer
Company, a small Massachusetts computer manufacturer, registers Symbolics.com. More
than two years later, only 100 dot-coms had been registered.

1986: Compaq brings the Deskpro 386 to market. Its 32-bit architecture provides speed
comparable to that of mainframes.

1990: Tim Berners-Lee, a researcher at CERN, the high-energy physics laboratory in Geneva,
develops HyperText Markup Language (HTML), giving rise to the World Wide Web.

1993: The Pentium microprocessor advances the use of graphics and music on PCs.

1994: PCs become gaming machines as "Command & Conquer," "Alone in the Dark 2,"
"Theme Park," "Magic Carpet," "Descent" and "Little Big Adventure" are among the games
to hit the market.

1996: Sergey Brin and Larry Page develop the Google search engine at Stanford University.

1997: Microsoft invests $150 million in Apple, which was struggling at the time, ending
Apple's court case against Microsoft in which it alleged that Microsoft copied the "look and
feel" of its operating system.

1999: The term Wi-Fi becomes part of the computing language and users begin connecting to
the Internet without wires.

2001: Apple unveils the Mac OS X operating system, which provides protected memory
architecture and pre-emptive multi-tasking, among other benefits. Not to be outdone,
Microsoft rolls out Windows XP, which has a significantly redesigned graphical user
interface (GUI).

2003: The first 64-bit processor, AMD's Athlon 64, becomes available to the consumer
market.

2004: Mozilla's Firefox 1.0 challenges Microsoft's Internet Explorer, the dominant Web
browser. Facebook, a social networking site, launches.

2005: YouTube, a video sharing service, is founded. Google acquires Android, a Linux-based
mobile phone operating system.

2006: Apple introduces the MacBook Pro, its first Intel-based, dual-core mobile computer, as
well as an Intel-based iMac. Nintendo's Wii game console hits the market.

2007: The iPhone brings many computer functions to the smart phone.

2009: Microsoft launches Windows 7, which offers the ability to pin applications to the
taskbar and advances in touch and handwriting recognition, among other features.

2010: Apple unveils the iPad, changing the way consumers view media and jumpstarting the
dormant tablet computer segment.

2011: Google releases the Chromebook, a laptop that runs the Google Chrome OS.

2012: Facebook reaches 1 billion users on October 4.

2015: Apple releases the Apple Watch. Microsoft releases Windows 10.

2016: The first reprogrammable quantum computer was created. "Until now, there hasn't
been any quantum-computing platform that had the capability to program new algorithms
into their system. They're usually each tailored to attack a particular algorithm," said study
lead author Shantanu Debnath, a quantum physicist and optical engineer at the University of
Maryland, College Park.

2017: The Defense Advanced Research Projects Agency (DARPA) is developing a new
"Molecular Informatics" program that uses molecules as computers. "Chemistry offers a rich
set of properties that we may be able to harness for rapid, scalable information storage and
processing," Anne Fischer, program manager in DARPA's Defense Sciences Office, said in a
statement. "Millions of molecules exist, and each molecule has a unique three-dimensional
atomic structure as well as variables such as shape, size, or even color. This richness provides
a vast design space for exploring novel and multi-value ways to encode and process data
beyond the 0s and 1s of current logic-based, digital architectures."

The history of the computer is often considered in terms of the generations of computers,
from the first generation to the fifth.

In the 19th century, the English mathematics professor Charles Babbage, referred to as the
"Father of the Computer", designed the Analytical Engine, and it is on this design that the basic
framework of today's computers is based. Generally speaking, computers can be
classified into five generations. Each generation lasted for a certain period of time, and each
gave us either a new and improved computer or an improvement to the existing computer.

The generations of computer are as follows:

First Generation of Computer (1937 – 1946):

In 1937 the first electronic digital computer was built by Dr. John V. Atanasoff and
Clifford Berry. It was called the Atanasoff-Berry Computer (ABC). In 1943 an electronic
computer named the Colossus was built for the military. Other developments continued until,
in 1946, the first general-purpose digital computer, the Electronic Numerical Integrator and
Calculator (ENIAC), was built. It is said that this computer weighed 30 tons and had 18,000
vacuum tubes, which were used for processing. When this computer was turned on for the first
time, lights dimmed in sections of Philadelphia. Computers of this generation could only
perform a single task, and they had no operating system.

Characteristics:

i. They were as large as the size of a room.


ii. They used vacuum tubes to perform calculations.
iii. They used internally stored instructions called programs.
iv. They used capacitors to store binary data and information.
v. They used punched cards for the input and output of data and information.
vi. They generated a lot of heat.
vii. They had about one thousand (1,000) circuits per cubic foot.

Examples:

i. Mark I, developed by Aiken in 1944.


ii. Electronic Numerical Integrator and Calculator (ENIAC), built at the Moore School
of Engineering of the University of Pennsylvania in 1946 by J. Presper Eckert and
John Mauchly.
iii. Electronic Discrete Variable Automatic Computer (EDVAC), also developed by
Eckert and Mauchly in 1947.

Second Generation of Computer (1947 – 1962):

Second-generation computers used transistors instead of vacuum tubes, which were
more reliable. In 1951 the first computer for commercial use was introduced to the public: the
Universal Automatic Computer (UNIVAC 1). In 1953 the International Business Machines
(IBM) 650 and 700 series computers made their mark in the computer world. During this
generation over 100 computer programming languages were developed, and
computers had memory and operating systems. Storage media such as tape and disk were in
use, as were printers for output.

Characteristics:

i. The computers were still large, but smaller than the first generation of computers.
ii. They used transistors in place of vacuum tubes to perform calculations.
iii. They were produced at a reduced cost compared to the first generation of
computers.
iv. They possessed magnetic tape for data storage.
v. They still used punched cards for the input and output of data and information. The
use of the keyboard as an input device was also introduced.
vi. These computers still generated a lot of heat, so air conditioning was needed
to maintain a cool temperature.
vii. They had about one thousand circuits per cubic foot.

Example:

i. Leprechaun, built by Bell Laboratories in 1947.


ii. Transistorized computers produced by Philco, GE and RCA.
iii. UNIVAC 1107, UNIVAC III.
iv. RCA 501.
v. IBM 7030 Stretch.

Third Generation of Computer (1963 – 1975):

The invention of the integrated circuit brought us the third generation of computers. With this
invention computers became smaller, more powerful and more reliable, and they were able to
run many different programs at the same time.

Characteristics:

i. They used large-scale integrated circuits, which were used for both data
processing and storage.
ii. Computers were miniaturized, that is, reduced in size compared to the
previous generation.
iii. The keyboard and mouse were used for input, while the monitor was used as the
output device.
iv. Programming languages like COBOL and FORTRAN were developed and used.
v. They had about one hundred thousand circuits per cubic foot.

Examples:

i. Burroughs 6700, minicomputers


ii. Honeywell 200
iii. IBM System 360
iv. UNIVAC 9000 series.

Fourth Generation of Computer (PC 1975 – Current)

At this point in technological development, the size of the computer was reduced to
what we call the Personal Computer (PC). This was the time the first microprocessor was
created by Intel. The microprocessor was a very large-scale integration (VLSI) circuit,
which contained thousands of transistors.
The transistors on one chip were capable of performing all the functions of a computer's
central processing unit.

Characteristics:

i. They possess a microprocessor, which performs all the tasks of the computer
systems in use today.
ii. The size and cost of computers were reduced.
iii. The speed of computers increased.
iv. Very large-scale integration (VLSI) circuits were used.
v. They have millions of circuits per cubic foot.

Examples:

i. IBM System 3090, IBM RISC/6000, IBM RT.


ii. ILLIAC IV.
iii. Cray-2, Cray X-MP.
iv. HP 9000.
v. Apple computers.

Fifth Generation of Computers (Present and Beyond)

Fifth-generation computing devices, based on artificial intelligence (AI), are still in
development, although there are some applications, such as voice recognition, face
detection and thumbprint recognition, that are used today.

Characteristics:

i. They consist of extremely large-scale integration.


ii. Parallel processing.
iii. They possess high-speed logic and memory chips.
iv. High performance and micro-miniaturization.
v. The ability of computers to mimic human intelligence, e.g. voice
recognition, face detection, thumbprint recognition.
vi. Satellite links, virtual reality.
vii. They have billions of circuits per cubic foot.

Examples:

i. Supercomputers
ii. Robots
iii. Face detectors
iv. Thumbprint readers.

Conclusion:
The earliest foundations of what would become computer science predate the invention of
the modern digital computer. Machines for calculating fixed numerical tasks, such as the
abacus, have existed since antiquity, aiding in computations such as multiplication and
division. Algorithms for performing computations have also existed since antiquity, even
before the development of sophisticated computing equipment. Charles Babbage is
sometimes referred to as the "father of computing", and Ada Lovelace is often credited with
publishing the first algorithm intended for processing on a computer.

In 1980 the Microsoft Disk Operating System (MS-DOS) was born, and in 1981 IBM
introduced the personal computer (PC) for home and office use. Three years later Apple gave
us the Macintosh computer, with its icon-driven interface, and the 90s gave us the Windows
operating system. As a result of the various improvements in the development of the
computer, we have seen it being used in all areas of life. It is a very useful tool
that will continue to see new developments as time passes.

What is Computer Science?

Computer science is the third most popular major amongst international students coming to the
United States. There are many reasons that computer science is so popular, including
exceptional job security, uncommonly high starting salaries, and diverse job opportunities across
industries. However, an international student contemplating studying computer science needs to
ask themself, "What is computer science?"

So, what is computer science? Generally speaking, computer science is the study of computer
technology, both hardware and software. However, computer science is a diverse field; the
required skills are both applicable and in-demand across practically every industry in today's
technology-dependent world. As such, the field of computer science is divided amongst a range
of sub-disciplines, most of which are full-fledged specialized disciplines in and of themselves.
The field of computer science spans several core areas: computer theory, hardware systems,
software systems, and scientific computing. Students will choose credits from amongst these
sub-disciplines with varying levels of specialization depending on the desired application of the
computer science degree. Though most strict specialization occurs at the graduate level, knowing
exactly what computer science is (and where a student's interests fall within this vast field) is of
paramount importance to knowing how to study computer science.

Computer Science Disciplines


The disciplines encompassed by a computer science degree are incredibly vast, and an
international student must know how to study computer science or, in other words, how to
effectively navigate amongst this sea of sub-disciplines and specializations. Here are a few
possible areas of specialization available to students pursuing computer science degrees:

 Applied Mathematics
 Digital Image/Sound
 Artificial Intelligence
 Microprogramming
 Bioinformatics
 Networks And Administration
 Computer Architecture Networks
 Cryptography
 Computer Engineering
 Operating Systems
 Computer Game Development
 Robotics
 Computer Graphics
 Simulation And Modeling
 Computer Programming
 Software Development
 Software Systems
 Data Management
 Web Development
 Design Databases
 Parallel Programming
 iOS Development
 Mobile Development
 Memory Systems
 Computational Physics

With so many available options, having a specific focus in mind while studying computer
science in the United States is the best plan of action for any international student hoping to
seriously prepare for their future on the job market. Knowing how to study computer science and
effectively planning which type of degree to receive will depend on how well the student
understands the discipline of computer science, and deciding which degree is right for a student
is a move that will determine what sorts of computer science careers the student is eligible for
upon graduating. Therefore, it is of the utmost importance to plan a specific computer science
degree that will enable you to pursue the career you want.

Despite the seemingly endless variety of applications and sub-disciplines an international student
studying computer science in the United States will have to navigate, asking important questions
like, "What is computer science?" is a great way to begin a successful education and, ultimately,
career. Moreover, there are plenty of free resources available for studying computer science. For
instance, a great resource for international students trying to study computer science in the
United States can be the websites of specific institutions. These websites will not only convey
what sorts of computer science degrees are available at their institution (as well as any
specialties), they will also often have pages specifically to assist interested international students.
Program course credit breakdowns, scholarship and internship opportunities, ongoing research,
all these vital facts about an institution can be found on their computer science program's
website.
Another great resource for international students is the Study Computer Science guide. The guide
is a wealth of information on topics ranging from questions about where to study computer
science, to providing internship and career advice.

 Portfolio versus Resume


o A resume says nothing of a programmer's ability. Every computer science major
should build a portfolio. A portfolio could be as simple as a personal blog, with a post for
each project or accomplishment. A better portfolio would include per-project pages, and
publicly browsable code (hosted perhaps on GitHub or Google Code). Contributions to open
source should be linked and documented. A code portfolio allows employers to directly judge
ability. GPAs and resumes do not.

 Programming languages
o Programming languages rise and fall with the solar cycle. A programmer's career
should not. While it is important to teach languages relevant to employers, it is equally
important that students learn how to teach themselves new languages. The best way to learn
how to learn programming languages is to learn multiple programming languages and
programming paradigms. The difficulty of learning the nth language is half the difficulty of
the (n-1)th. Yet, to truly understand programming languages, one must implement one.
Ideally, every computer science major would take a compilers class. At a minimum, every
computer science major should implement an interpreter.
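As a taste of what implementing an interpreter involves, here is a minimal sketch in Python of an evaluator for arithmetic expressions written as nested tuples; a real course project would add parsing, variables and functions:

# Minimal expression interpreter: programs are nested tuples such as
# ("+", 1, ("*", 2, 3)), and eval_expr reduces them to numbers.
import operator

OPS = {"+": operator.add, "-": operator.sub,
       "*": operator.mul, "/": operator.truediv}

def eval_expr(expr):
    if isinstance(expr, (int, float)):   # a literal evaluates to itself
        return expr
    op, left, right = expr               # otherwise: (operator, lhs, rhs)
    return OPS[op](eval_expr(left), eval_expr(right))

print(eval_expr(("+", 1, ("*", 2, 3))))  # 7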
 Discrete mathematics
o Students must have a solid grasp of formal logic and of proof. Proof by algebraic
manipulation and by natural deduction engages the reasoning common to routine
programming tasks. Proof by induction engages the reasoning used in the construction of
recursive functions. Students must be fluent in formal mathematical notation, and in
reasoning rigorously about the basic discrete structures: sets, tuples, sequences, functions and
power sets.
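The parallel between proof by induction and recursive functions can be made concrete with a small sketch (illustrative, not from the text): the structure of the function mirrors the base case and inductive step of the proof of its correctness.

def sum_to(n: int) -> int:
    """Sum of 0 + 1 + ... + n.

    Correctness mirrors induction on n:
    base case: sum_to(0) == 0;
    inductive step: if sum_to(n - 1) == (n - 1) * n // 2,
    then sum_to(n) == sum_to(n - 1) + n == n * (n + 1) // 2.
    """
    if n == 0:                 # base case
        return 0
    return sum_to(n - 1) + n   # inductive step

assert sum_to(10) == 55 == 10 * 11 // 2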

 Data structures and algorithms
o Students should certainly see the common (or rare yet unreasonably effective) data structures
and algorithms. But, more important than knowing a specific algorithm or data structure
(which is usually easy enough to look up), students must understand how to design
algorithms (e.g., greedy, dynamic strategies) and how to span the gap between an algorithm
in the ideal and the nitty-gritty of its implementation.
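To illustrate the gap between a strategy in the ideal and its implementation, here is a sketch of the classic dynamic-programming approach to making change (the coin values are illustrative):

# Dynamic programming: minimum number of coins to make each amount.
# The answer for `amount` is built from answers to smaller subproblems,
# which is exactly the "dynamic strategy" named above.

def min_coins(coins: list[int], amount: int) -> int:
    INF = float("inf")
    best = [0] + [INF] * amount          # best[a] = fewest coins for amount a
    for a in range(1, amount + 1):
        for c in coins:
            if c <= a and best[a - c] + 1 < best[a]:
                best[a] = best[a - c] + 1
    return best[amount]

print(min_coins([1, 5, 10, 25], 63))     # 6 coins: 25 + 25 + 10 + 1 + 1 + 1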
 Theory
o A grasp of theory is a prerequisite to research in graduate school. Theory is invaluable when
it provides hard boundaries on a problem (or when it provides a means of circumventing
what initially appear to be hard boundaries). Computational complexity can legitimately
claim to be one of the few truly predictive theories in all of computer "science." A computer
student must know where the boundaries of tractability and computability lie. To ignore these
limits invites frustration in the best case, and failure in the worst.
 Architecture
o There is no substitute for a solid understanding of computer architecture.
Everyone should understand a computer from the transistors up. The understanding of
architecture should encompass the standard levels of abstraction: transistors, gates, adders,
muxes, flip flops, ALUs, control units, caches and RAM. An understanding of the GPU
model of high-performance computing will be important for the foreseeable future.
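A one-bit full adder built from Boolean gates, simulated in a few lines of Python, shows the "from the transistors up" layering described above (a sketch for illustration only):

# One-bit full adder expressed with gates; chaining 8 of them
# ripple-carry style adds two 8-bit numbers, just as hardware adders do.

def full_adder(a: int, b: int, carry_in: int):
    s = a ^ b ^ carry_in                        # XOR gives the sum bit
    carry_out = (a & b) | (carry_in & (a ^ b))  # AND/OR give the carry
    return s, carry_out

def add8(x: int, y: int) -> int:
    carry, result = 0, 0
    for i in range(8):                          # ripple the carry bit by bit
        s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= s << i
    return result

assert add8(100, 55) == 155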
 Operating systems
o Any sufficiently large program eventually becomes an operating system. As such, a person
should be aware of how kernels handle system calls, paging, scheduling, context-switching,
filesystems and internal resource management. A good understanding of operating systems is
secondary only to an understanding of compilers and architecture for achieving performance.
Understanding operating systems (which I would interpret liberally to include runtime
systems) becomes especially important when programming an embedded system without
one.
 Networking
o Given the ubiquity of networks, a person should have a firm understanding of the network
stack and routing protocols within a network. The mechanics of building an efficient, reliable
transmission protocol (like TCP) on top of an unreliable transmission protocol (like IP)
should not be magic to a computer guy. It should be core knowledge. People must understand
the trade-offs involved in protocol design--for example, when to choose TCP and when to
choose UDP. (Programmers need to understand the larger social implications for congestion
should they use UDP at large scales as well.)
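The trade-off can be made concrete with a sketch: UDP delivers independent datagrams with no guarantee of delivery or ordering, all of which TCP must add on top. This minimal Python echo pair keeps both sockets in one process so it is self-contained; the address and port are illustrative:

# Minimal UDP exchange: no connection, no retransmission, no ordering.
import socket

server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 9999))               # illustrative address/port

client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.sendto(b"ping", ("127.0.0.1", 9999))    # fire and forget

data, addr = server.recvfrom(1024)             # could be lost on a real network
server.sendto(data, addr)                      # echo it back
reply, _ = client.recvfrom(1024)
print(reply)                                   # b'ping'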
 Security
o The sad truth of security is that the majority of security vulnerabilities come from sloppy
programming. The sadder truth is that many schools do a poor job of training programmers to
secure their code. Developers must be aware of the means by which a program can be
compromised. They need to develop a sense of defensive programming--a mind for thinking
about how their own code might be attacked. Security is the kind of training that is best
distributed throughout the entire curriculum: each discipline should warn students of its
native vulnerabilities.
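A classic instance of the sloppy programming warned about above is splicing user input into SQL. The sketch below, using Python's built-in sqlite3 module with a made-up table, contrasts the vulnerable pattern with a parameterized query:

# Defensive programming: never splice untrusted input into SQL.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

user_input = "alice' OR '1'='1"                # a typical injection attempt

# Vulnerable: the attacker's quote characters become part of the query.
rows = conn.execute(
    "SELECT secret FROM users WHERE name = '" + user_input + "'").fetchall()
print(len(rows))   # 1 -- the injected OR clause matched every row

# Defensive: the parameter is passed as data, never parsed as SQL.
rows = conn.execute(
    "SELECT secret FROM users WHERE name = ?", (user_input,)).fetchall()
print(len(rows))   # 0 -- no user is literally named "alice' OR '1'='1"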
 User experience design (UX)
o Programmers too often write software for other programmers, or worse, for
themselves. User interface design (or more broadly, user experience design) might be the
most underappreciated aspect of computer science. There's a misconception, even among
professors, that user experience is a "soft" skill that can't be taught. In reality, modern user
experience design is anchored in empirically-wrought principles from human factors
engineering and industrial design. If nothing else, engineers should know that interfaces need
to make the ease of executing any task proportional to the frequency of the task multiplied by
its importance. As a practicality, every programmer should be comfortable with designing
usable web interfaces in HTML, CSS and JavaScript.
 Software engineering
o The principles in software engineering change about as fast as the programming languages
do. A good, hands-on course in the practice of team software construction provides a
working knowledge of the pitfalls inherent in the endeavor. It's been recommended by
several readers that students break up into teams of three, with the role of leader rotating
through three different projects. Learning how to attack and maneuver through a large
existing codebase is a skill most programmers will have to master, and it's one best learned in
school instead of on the job.
 Artificial intelligence
o If for no other reason than its outsized impact on the early history of computing, students
should study artificial intelligence. While the original dream of intelligent machines seems
far off, artificial intelligence spurred a number of practical fields, such as machine learning (I
really like machine learning), data mining and natural language processing.
 Databases
o Databases are too common and too useful to ignore. It's useful to understand the fundamental
data structures and algorithms that power a database engine, since programmers often enough
reimplement a database system within a larger software system. Relational algebra and
relational calculus stand out as exceptional success stories in sub-Turing models of
computation. Unlike UML modeling, ER modeling seems to be a reasonable mechanism for
visually encoding the design of, and constraints upon, a software artifact.
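Relational-algebra operators are simple enough to sketch directly over Python lists of rows; the select and project functions below are illustrative toy versions of the sigma and pi operators, with made-up data:

# Toy relational algebra over lists of dictionaries (rows).

employees = [
    {"name": "Ada",   "dept": "R&D",   "salary": 120},
    {"name": "Grace", "dept": "R&D",   "salary": 130},
    {"name": "Edgar", "dept": "Sales", "salary": 90},
]

def select(rows, predicate):          # sigma: keep rows matching a condition
    return [r for r in rows if predicate(r)]

def project(rows, columns):           # pi: keep only the named columns
    return [{c: r[c] for c in columns} for r in rows]

print(project(select(employees, lambda r: r["dept"] == "R&D"), ["name"]))
# [{'name': 'Ada'}, {'name': 'Grace'}]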
The introduction of electronic computers marked an enormous discontinuity with respect to the
times when they were not available. I wrote my thesis for my University degree in a laboratory of
Centro Microonde (subsequently IROE-CNR and now IFAC-CNR), a research centre of the
National Research Council (CNR). When I started working on my first degree thesis in 1958,
computers were not yet available in the University or the CNR, although in Italy the famous
Olivetti Elea 9003 was soon in use.
The name Elea came from ELaboratore Elettronico Aritmetico (Arithmetic Electronic
Elaborator). The Elea 9003 was the first computer to use transistors only. Elea was also the name
of an old city of Magna Graecia (precisely in Lucania, southern Italy), seat of a Greek school of
philosophy. One user of the Elea, Luigi Logrippo, who worked in the Olivetti Computer
Marketing Division in 1961-1964, remembers: "The Olivetti Elea 9003 was the first industrial
electronic computer made in Italy. [...] The first operational specimens were installed in
1960, and the last in 1964. About 40 were produced [...] Just as other
computers of the time, this one was a … dinosaur. It occupied large, air-conditioned rooms
[...] A cabinet included main memory and CPU (additional cabinets were required for
memory extensions). Another cabinet was the control unit for the tape units. A third cabinet
would contain the control unit for other peripherals, such as the printer [...] Happily, the
machine looked good because it was designed by a famous architect, Ettore Sottsass."
2.1 Our first use of computers: the University computer
The first computer in the University of Florence was an IBM 1620, with 20K memory
positions. It was located in the main building of the University, where the Dean and the
administrative offices lay and still lie. It was a large piece of equipment, made up of several
different units, and was housed in a large, locked room.
The different scientific institutes of the University were each given access to it for two hours
at a time during the day, according to a precise schedule established every month by the
Director of the Institute of Astronomy. I started working there as the representative of the
Institute of Electromagnetic Waves (Istituto di Onde Elettromagnetiche) of the University,
and was able to use the entire two hours every time, because no one else in the Institute was
interested in it. Sometimes it was possible to exchange hours with the previous and/or the
following user and have up to four or even six hours of computer availability. This was very
useful, because using the computer was a long and cumbersome process.
The IBM 1620 worked with punched cards: input was given on cards and the output was on
cards. There was a card punch and a large printer that printed the output from the cards on
paper. Programming was in Fortran, one of the first versions. There were subroutines for
simple functions: sin(x), cos(x), atan(x), exp(x), log(x), sqrt(x). Integrals, on the other hand,
had to be programmed by the users on the basis of various approximate formulas. Generally,
I wrote subroutines based on the integration method of Gauss. Once the program was written
and the cards were punched, the card bundle (the Source Program) was inserted in the
computer, together with a Processor program card bundle, which the computer used to
translate the Source Program into machine language. This gave rise to another set of cards
(the Object Program), which was then used for the computation.
Every time there was an error in the program, one had to find the error, replace the relevant
cards in the Source Program and repeat the whole procedure.
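The author's Fortran subroutines are not reproduced here, but the idea of Gauss's integration method can be sketched in modern Python as a stand-in: approximate the integral by a weighted sum of the integrand at well-chosen nodes (the function name and the number of panels below are illustrative):

# Two-point Gauss-Legendre quadrature, applied panel by panel.
# Nodes at +/- 1/sqrt(3) with unit weights are exact for cubics on [-1, 1].
import math

def gauss2(f, a, b, n=10):
    """Integrate f over [a, b] with n two-point Gauss panels."""
    node = 1.0 / math.sqrt(3.0)
    h = (b - a) / n
    total = 0.0
    for i in range(n):
        mid = a + (i + 0.5) * h            # centre of the i-th panel
        half = h / 2.0
        total += half * (f(mid - half * node) + f(mid + half * node))
    return total

print(gauss2(math.sin, 0.0, math.pi))      # about 2.0 (the exact value is 2)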

We also started teaching programming, in Base language and in Fortran, to higher-level
students.
2.2 Advantages of the introduction of computers: a big step for computations
In spite of the complex and cumbersome procedure of preparing the programs, the introduction
of computers was a big step forward for research, both for theoretical applications and for the
elaboration of experimental data, as we shall see in the sequel.
Programming these small computers required direct control of the different steps. One had to
take the small memory and low speed into account and find suitable compromises between
the different needs. In my opinion, this allowed a clear understanding of what was going on step
by step, with the possibility of making changes and corrections. This was also an important point
for the students who learned programming, because they were faced with solving new problems
and finding optimum solutions.

As an example of our first use of the University computer, I was able to develop an iterative
procedure on the IBM 1620 that allowed the evaluation of the modes and losses of a laser
resonator introduced by G. Toraldo di Francia, the so-called flat-roof resonator. Toraldo had
developed an approximate theory, based on a "diamond cavity", giving the modes (amplitude
and phase). Starting from his theoretical field distribution, we tested the theory, and found
losses and phase shifts that were not given by the theory. The results were published in a joint
issue of Applied Optics and Proceedings of the IEEE in 1966.
As an example, Fig. 2 shows the scheme of the resonator (left) and the evolution of the
amplitude of the field at an arbitrary point, x = 0.554, over the normalized aperture (right).
INTRODUCTION OF LARGE COMPUTERS IN ITALY
After the “small” computer of our University, two big computing centres became available
in Italy.
In 1965 the CNUCE (Centro Universitario Nazionale Calcolo Elettronico) in Pisa was set up by
the University of Pisa in collaboration with IBM, which donated an IBM 7090 computer.
Subsequently CNUCE became a CNR institute. In 1969 CINECA, a centre founded by a
consortium of the Universities of Bologna, Florence, Padua and Venice, started its activity with
a CDC 6600 computer in Casalecchio di Reno, near Bologna. This centre, initially devoted to
computations, grew greatly in subsequent and recent years and is now the largest Italian
computing centre and one of the most important worldwide. It is equipped with some of the
largest supercomputers. Cineca is now a non-profit consortium of 68 Italian universities and 3
institutions, including CNR and MIUR, the Ministry for Education and Research.
Cineca offers support to the research activities of the scientific community through
supercomputing and its applications.
3.1 Our advances with the use of large computers
In the 1970s, the use of the large computers of CNUCE and CINECA allowed us to:
- improve and enlarge the iterative procedure and find solutions of the flat-roof resonator in
cases outside the approximation limits of Toraldo's theory;
- extend the iterative procedure to other kinds of resonators, in particular resonators with very
low losses, such as rimmed ones, and obtain more complete results, including unstable regions;
- investigate the accuracy of different diffraction formulas. We showed that the proper angular
dependence needs to be taken into account to obtain results with physical meaning in the
investigation of laser cavities (A. Consortini, F. Pasqualetti: Comparison of Various
Diffraction Formulas in a Study of Open Resonators, Applied Optics 17, 2519, 15 Aug. 1978);
- develop and use methods to remove low-order modes. For example, in the case of a Fabry-
Perot resonator, we evaluated up to 26 modes (A. Consortini, F. Pasqualetti: An Analysis of
the Modes of Fabry-Perot Open Resonators, Optica Acta 20, n. 10, 793-803, 1973).
Fig. 3 presents the amplitude and phase patterns of 26 modes of a Fabry-Perot resonator with
mirror aperture 2a = 70λ, mirror distance d = 100λ and Fresnel number N = 12.25 (from
Optica Acta 1973): even modes 0-24 on the left-hand side, odd modes 1-25 on the right.
Dashed lines are from Wainstein's theory. These results were exceptional even for the time
when they were obtained. There were many other theoretical problems in which we profited
greatly from computers; here we limit ourselves to the few examples above and move on to
the next step.
Of course our experience is a small drop with respect to the general use of computers, and I
just want to mention here the use and progress of computers for many theoretical problems,
including applications to simulation.
COMPUTER AND MEASUREMENT INSTRUMENTS
A big step in the use of computers for research came when instruments that could be linked to
computers became available. Now in any laboratory this is an obvious facility, but it was not so
when we made our experiments in the 1970s. Two examples from our research activity are
described below.
4.1 Measurements of laser intensity fluctuations
Initially, the output signal from an instrument was an analogue signal that had to be digitized
before it could be fed into the computer. One was therefore faced with several different steps.
To describe the procedure, let us refer as an example to experiments we made in 1974. We made
measurements of the intensity fluctuations of laser radiation after an atmospheric path of about
4 km. We focussed the radiation with a lens and collected it with a photomultiplier. We needed
the following operations:
- the signal from the photomultiplier was continuously recorded on the magnetic tape of a Sabre
Sangamo III recorder (a very good instrument at that time); then
- it was digitized by an IBM System 7, at a rate of 2000 data/s, and the digitized signal was
recorded on a magnetic tape;
- the digitized signal was then used with a different computer to evaluate its average and its
moments up to the fourth one, and to build intensity histograms.
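As an aside for the modern reader, the statistical elaboration just described (average, central moments up to the fourth, and intensity histograms) amounts to a few lines of present-day Python; the sample data below are simulated, not the original measurements:

# Average, central moments up to the fourth, and an intensity histogram,
# the same quantities computed from the digitized photomultiplier signal.
import random

samples = [random.gauss(1.0, 0.2) for _ in range(2000)]   # stand-in for 2000 data/s

mean = sum(samples) / len(samples)
moments = {k: sum((x - mean) ** k for x in samples) / len(samples)
           for k in (2, 3, 4)}

bins = [0] * 20                                  # 20-bin histogram of intensity
lo, hi = min(samples), max(samples)
for x in samples:
    i = min(int((x - lo) / (hi - lo) * 20), 19)
    bins[i] += 1

print(mean, moments, bins)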

Years later, it became possible to connect the photomultiplier directly to a small Apple
computer and have the data digitized and stored on it. There was the disadvantage that the
memory was not as large as it is now, and initially the computer memory limited the amount of
data that could be collected. We had to find suitable compromises, but we were able to make the
measurements and the data elaboration with the same computer.
4.2 Position measurements
An important case of the use of instruments directly connected to the computer was the
measurement of positions. We needed position measurements for research on:
1 - angle-of-arrival fluctuations (often denoted as differential angle-of-arrival fluctuations) at
points of a laser wavefront after a path through atmospheric turbulence, and
2 - wandering of thin beams after a short path in the atmosphere.
As an example, let us here refer to the angle of arrival. Laser radiation, after a path through
turbulence, impinged on a holed mask (a Hartmann test), giving rise to a set of thin beams. Due
to turbulence, each beam fluctuated around an unperturbed position, and measurement of the
instantaneous positions at different times was required to obtain the angle of arrival at
the output of each hole.
When position sensors were not available, we let the beams impinge on a diffusing screen and
took pictures of it using a good photographic camera, a Hasselblad 500 EL, which, with suitable
tricks such as pre-exposure, allowed us to take pictures at a rate of one per second with an
exposure time of 1/500 s. An experiment lasted from 5 up to 15 minutes. The photos were then
developed.
In Fig. 4 the scheme of the measurement in the laboratory is presented, as well as a photogram of
the developed film. Each measurement gave rise to from several hundred up to about 500
photos. The position of each spot was then manually determined using a projector. The
procedure of "reading" the photos was very long and cumbersome, and different people, typically
2 or 3, separately read the coordinates of the centres of the spots in order to reduce the
operator's error. The data were printed and then used for a statistical elaboration with a small
computer. The procedure required many months and much patience from the scientists involved.
When position sensors connectable to a computer became available, collection of the data
became a completely different job. Initially, there was still the limitation on the amount of data,
but the problem was overcome with the development of personal computers with large memories
and of suitable laboratory software.
To give an idea of the big improvement, an example is reported here of the final solution, which
we reached in the 1990s, of the use of position sensors for the measurement of beam wandering.
The problem was analogous to the previous one.
With a suitable program developed by us, the four signals from any sensor were introduced
directly into the computer and the position was automatically detected and recorded. Four
sensors were connected to a simple PC. A "calibration" program allowed the intensity and
alignment of the sensors to be regulated, with options for some on-line averages; another one,
called "acquisition", performed the measurement, with a number of options; and a set of
"elaboration" programs allowed off-line statistical elaboration. All the programs allowed the use
of from one up to all four sensors, depending on the experiment to be made.
In Figure 5, already shown in the Proceedings of ETOP 2007, one can see the calibration and
alignment of one sensor for the "instantaneous" measurement of the position of a beam impinging
on it. In real time, the computer gives the average position (x and y coordinates) of a number of
data points and the corresponding variances. The number of data points for the averages is
limited and can be chosen in advance.
Fig. 5. Detail of the internal surface of one sensor; only the central part (1 mm x 1 mm) is
selected; scales of the axes: -0.5 mm to 0.5 mm. The four columns are the four signals directly
measured by the sensor at different subsequent times. Averages and corresponding variances are
presented. From A. Consortini: Using a research laboratory for training students, Proc. ETOP
2007.
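The on-line averages and variances the programs reported correspond to the following simple computation; this is a sketch with illustrative numbers, not the original laboratory software:

# Average position and variance over a chosen number of samples,
# as the calibration program reported for each sensor in real time.

def position_stats(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n            # average x and y coordinates
    vx = sum((x - mx) ** 2 for x in xs) / n      # corresponding variances
    vy = sum((y - my) ** 2 for y in ys) / n
    return (mx, my), (vx, vy)

xs = [0.012, 0.015, 0.011, 0.014]                # illustrative positions in mm
ys = [-0.003, -0.001, -0.002, -0.004]
print(position_stats(xs, ys))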
The advantage with respect to the previous method was incommensurable, almost tending to
infinity.
Although there are many advantages to using a computer, there are also disadvantages (like
most things in life). Below is one example of the disadvantages of using a computer and the
type of problems you may personally encounter.
Carpal tunnel and eye strain
Using a computer requires a lot of repetitive movement, which often leads to carpal tunnel
syndrome. For example, moving your hand from your keyboard to a mouse, and typing, are
repetitive actions that can cause injuries. Taking breaks, keeping proper posture, and
understanding computer ergonomics can all help prevent or delay these injuries.
What is Computer Science?

Computer science is the third most popular major amongst international students coming to the
United States. Therfe are many reasons that computer science is so popular, including
exceptional job security, uncommonly high starting salaries, and diverse job opportunities across

24
industries. However, an international student contemplating studying computer science needs to
ask themself, "What is computer science?"

So, what is computer science? Generally speaking, computer science is the study of computer
technology, both hardware and software. However, computer science is a diverse field; the
required skills are both applicable and in-demand across practically every industry in today's
technology-dependent world. As such, the field of computer science is divided amongst a range
of sub-disciplines, most of which are full-fledged specialized disciplines in and of themselves.
The field of computer science spans several core areas: computer theory, hardware systems,
software systems, and scientific computing. Students will choose credits from amongst these
sub-disciplines with varying levels of specialization depending on the desired application of the
computer science degree. Though most strict specialization occurs at the graduate level, knowing
exactly what computer science is (and where a student's interests fall within this vast field) is of
paramount importance to knowing how to study computer science.

Computer Science Disciplines


The disciplines encompassed by a computer science degree are incredibly vast, and an
international student must know how to study computer science or, in other words, how to
effectively navigate amongst this sea of sub-disciplines and specializations. Here are a few
possible areas of specialization available to students pursuing computer science degrees:

 Applied Mathematics
 Digital Image/ Sound
 Artificial Intelligence
 Microprogramming
 Bioinformatics
 Networks And Administration
 Computer Architecture Networks
 Cryptography
 Computer Engineering
 Operating Systems
 Computer Game Development
 Robotics

25
 Computer Graphics
 Simulation And Modeling
 Computer Programming
 Software Development
 Software Systems
 Data Management
 Web Development
 Design Databases
 Parallel Programming
 iOS Development
 Mobile Development
 Memory Systems
 Computational Physics

With so many available options, having a specific focus in mind while studying computer
science in the United States is the best plan of action for any international student hoping to
seriously prepare for their future on the job market. Knowing how to study computer science and
effectively planning which type of degree to receive will depend on how well the student
understands the discipline of computer science, and deciding which degree is right for a student
is a move that will determine what sorts of computer science careers the student is eligible for
upon graduating. Therefore, it is of the utmost importance to plan a specific computer science
degree that will enable you to pursue the career you want.
Despite the seemingly endless variety of applications and sub-disciplines an international student
studying computer science in the United States will have to navigate, asking important questions
like, "What is computer science?" is a great way to begin a successful education and, ultimately,
career. Moreover, there are plenty of free resources available for studying computer science. For
instance, a great resource for international students trying to study computer science in the
United States can be the websites of specific institutions. These websites will not only convey
what sorts of computer science degrees are available at their institution (as well as any
specialties), they will also often have pages specifically to assist interested international students.
Program course credit breakdowns, scholarship and internship opportunities, ongoing research,
all these vital facts about an institution can be found on their computer science program's
website.
26
Another great resource for international students is the Study Computer Science guide. The guide
is a wealth of information on topics ranging from questions about where to study computer
science, to providing internship and career advice.

 Portfolio versus Resume


o A resume says nothing of a programmer's ability. Every computer science major should build
a portfolio. A portfolio could be as simple as a personal blog, with a post for each project or
accomplishment. A better portfolio would include per-project pages, and publicly browsable
code (hosted perhaps on github or Google code). Contributions to open source should be
linked and documented. A code portfolio allows employers to directly judge ability. GPAs
and resumes do not.
 Programming languages
o Programming languages rise and fall with the solar cycle. A programmer's career should not.
While it is important to teach languages relevant to employers, it is equally important that
students learn how to teach themselves new languages. The best way to learn how to learn
programming languages is to learn multiple programming languages and programming
paradigms. The difficulty of learning the nth language is half the difficulty of the (n-1)th.
Yet, to truly understand programming languages, one must implement one. Ideally, every
computer science major would take a compilers class. At a minimum, every computer
science major should implement an interpreter.
 Discrete mathematics
o Students must have a solid grasp of formal logic and of proof. Proof by algebraic
manipulation and by natural deduction engages the reasoning common to routine
programming tasks. Proof by induction engages the reasoning used in the construction of
recursive functions. Students must be fluent in formal mathematical notation, and in
reasoning rigorously about the basic discrete structures: sets, tuples, sequences, functions and
power sets.
 Data structures and algorithms
o Students should certainly see the common (or rare yet unreasonably effective)
data structures and algorithms. But, more important than knowing a specific algorithm or
data structure (which is usually easy enough to look up), students must understand how to
design algorithms (e.g., greedy, dynamic strategies) and how to span the gap between an
algorithm in the ideal and the nitty-gritty of its implementation.

27
 Theory
o A grasp of theory is a prerequisite to research in graduate school. Theory is
invaluable when it provides hard boundaries on a problem (or when it provides a means of
circumventing what initially appear to be hard boundaries). Computational complexity can
legitimately claim to be one of the few truly predictive theories in all of computer "science."
A computer student must know where the boundaries of tractability and computability lie. To
ignore these limits invites frustration in the best case, and failure in the worst.
 Architecture
o There is no substitute for a solid understanding of computer architecture.
Everyone should understand a computer from the transistors up. The understanding of
architecture should encompass the standard levels of abstraction: transistors, gates, adders,
muxes, flip flops, ALUs, control units, caches and RAM. An understanding of the GPU
model of high-performance computing will be important for the foreseeable future.
 Operating systems
o Any sufficiently large program eventually becomes an operating system. As such,
a person should be aware of how kernels handle system calls, paging, scheduling, context-
switching, filesystems and internal resource management. A good understanding of operating
systems is secondary only to an understanding of compilers and architecture for achieving
performance. Understanding operating systems (which I would interpret liberally to include
runtime systems) becomes especially important when programming an embedded system
without one.
 Networking
o Given the ubiquity of networks, a person should have a firm understanding of the
network stack and routing protocols within a network. The mechanics of building an
efficient, reliable transmission protocol (like TCP) on top of an unreliable transmission
protocol (like IP) should not be magic to a computer scientist. It should be core knowledge.
People must understand the trade-offs involved in protocol design--for example, when to
choose TCP and when to choose UDP. (Programmers need to understand the larger social
implications for congestion should they use UDP at large scales as well.)
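A minimal sketch of the trade-off in Python, using the standard socket API: TCP sets up a reliable, ordered connection, while UDP sends a bare datagram with no delivery guarantee. The addresses are placeholders.

import socket

# TCP: connection-oriented; the stack retransmits and reorders for you.
tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# tcp.connect(("example.com", 80))   # the three-way handshake would happen here

# UDP: connectionless; each sendto() is one packet that may simply be lost.
udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp.sendto(b"ping", ("127.0.0.1", 9999))  # fire and forget: no handshake, no retry

tcp.close()
udp.close()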
 Security
o The sad truth of security is that the majority of security vulnerabilities come from
sloppy programming. The sadder truth is that many schools do a poor job of training
programmers to secure their code. Developers must be aware of the means by which a
program can be compromised. They need to develop a sense of defensive programming--a
mind for thinking about how their own code might be attacked. Security is the kind of
training that is best distributed throughout the entire curriculum: each discipline should warn
students of its native vulnerabilities.
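As one classic illustration (my example, not the text's), compare a query built by string concatenation with a parameterized one, using Python's built-in sqlite3 module; the table and data are invented.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

name = "alice' OR '1'='1"   # hostile input

# Vulnerable: the attacker's quote characters become part of the SQL.
rows = conn.execute(
    "SELECT secret FROM users WHERE name = '" + name + "'").fetchall()
print(rows)   # leaks every secret in the table

# Defensive: the driver passes `name` as pure data, never as SQL.
rows = conn.execute(
    "SELECT secret FROM users WHERE name = ?", (name,)).fetchall()
print(rows)   # [] -- no such user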
 User experience design (UX)
o Programmers too often write software for other programmers, or worse, for
themselves. User interface design (or more broadly, user experience design) might be the
most underappreciated aspect of computer science. There's a misconception, even among
professors, that user experience is a "soft" skill that can't be taught. In reality, modern user
experience design is anchored in empirically-wrought principles from human factors
engineering and industrial design. If nothing else, engineers should know that interfaces need
to make the ease of executing any task proportional to the frequency of the task multiplied by
its importance. As a practicality, every programmer should be comfortable with designing
usable web interfaces in HTML, CSS and JavaScript.
 Software engineering
o The principles in software engineering change about as fast as the programming
languages do. A good, hands-on course in the practice of team software construction provides
a working knowledge of the pitfalls inherent in the endeavor. It's been recommended by
several readers that students break up into teams of three, with the role of leader rotating
through three different projects. Learning how to attack and maneuver through a large
existing codebase is a skill most programmers will have to master, and it's one best learned in
school instead of on the job.
 Artificial intelligence
o If for no other reason than its outsized impact on the early history of computing,
students should study artificial intelligence. While the original dream of intelligent machines
seems far off, artificial intelligence spurred a number of practical fields, such as machine
learning, data mining and natural language processing.
 Databases
o Databases are too common and too useful to ignore. It's useful to understand the
fundamental data structures and algorithms that power a database engine, since programmers
often enough reimplement a database system within a larger software system. Relational
algebra and relational calculus stand out as exceptional success stories in sub-Turing models
of computation. Unlike UML modeling, ER modeling seems to be a reasonable mechanism for
visually encoding the design of, and the constraints upon, a software artifact.
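To illustrate the "sub-Turing" flavour of relational computation, here is a hypothetical Python sketch of two relational-algebra operators over in-memory rows; real engines add indexes, query planners and much more.

# Relational operators are ordinary functions over sets of rows.
def select(rows, predicate):
    """Relational selection: keep the rows satisfying the predicate."""
    return [r for r in rows if predicate(r)]

def natural_join(r1, r2):
    """Join rows that agree on all shared attribute names."""
    out = []
    for a in r1:
        for b in r2:
            shared = set(a) & set(b)
            if all(a[k] == b[k] for k in shared):
                out.append({**a, **b})
    return out

students = [{"sid": 1, "name": "Ada"}, {"sid": 2, "name": "Alan"}]
grades = [{"sid": 1, "grade": "A"}]
print(natural_join(students, grades))
# [{'sid': 1, 'name': 'Ada', 'grade': 'A'}]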
INTRODUCTION OF ELECTRONIC COMPUTERS
The introduction of electronic computers marked an enormous discontinuity with respect to the
times when they were not available. I prepared my first degree thesis in a laboratory of the
Centro Microonde (subsequently IROE-CNR and now IFAC-CNR), a research centre of the
Italian National Research Council (CNR). When I started working on that thesis in 1958,
computers were not yet available in the University or at the CNR, although in Italy the famous
Olivetti Elea 9003 soon came into use.
The name Elea came from ELaboratore Elettronico Aritmetico (Arithmetic Electronic Computer).
The Elea 9003 was among the first computers built entirely with transistors. Elea was also the
name of an ancient city of Magna Graecia (in Lucania, southern Italy), seat of a Greek school of
philosophy. One user of
Elea, Luigi Logrippo, who worked in the Olivetti Computer Marketing Division, 1961-1964,
remembers: “The Olivetti Elea 9003 was the first industrial electronic computer made in Italy.
[...] The first operational specimens were installed in 1960, and the last in 1964. About 40 were
produced [...] Just as other
computers of the time, this one was a … dinosaur. It occupied large, air-conditioned rooms [...]
A cabinet included main memory and CPU (additional cabinets were required for memory
extensions). Another cabinet was the control unit for the tape units. A third cabinet would
contain the control unit for other peripherals, such as the printer [...] Happily, the machine
looked good because it was designed by a famous architect, Ettore Sottsass.”
2.1 Our first use of a computer: the University computer
The first computer at the University of Florence was an IBM 1620, with 20 K memory positions.
It was located in the main building of the University, where the Dean's and the administrative
offices were, and still are, housed. It was a large piece of equipment, made up of several separate
units, and was kept in a large, locked room.
The various scientific Institutes of the University were each given access to it for two hours, in
turn, during the day, according to a precise schedule established every month by the Director of
the Institute of Astronomy. I started working there as the representative of the Institute of
Electromagnetic Waves (Istituto di Onde Elettromagnetiche) of the University, and I was able to
use the entire two hours every time, because nobody else in the Institute was interested in the
machine. Sometimes it was possible to exchange hours with the previous and/or the following
user and thus have up to four or even six hours of computer availability. This was very useful,
because using the computer was a long and cumbersome process.

The IBM 1620 worked with punched cards: input was given on cards and the output came out on
cards. There was a card punch and a large printer that printed the output from the cards on paper.
Programming was in Fortran, one of the first versions. There were subroutines for the simple
functions sin(x), cos(x), atan(x), exp(x), log(x) and sqrt(x). Integrals, on the other hand, had to
be programmed by the users themselves, on the basis of various approximate formulas.
Generally, I wrote subroutines based on the Gauss integration method.
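The original Fortran subroutines are not reproduced here, but the idea of Gauss integration is easy to sketch in modern Python (using numpy for the standard nodes and weights); this is an illustration, not the author's code.

# Gauss-Legendre quadrature: approximate an integral over [a, b]
# by a weighted sum of the integrand evaluated at fixed nodes.
import numpy as np

def gauss_integrate(f, a, b, n=5):
    """Integrate f over [a, b] with n-point Gauss-Legendre quadrature."""
    x, w = np.polynomial.legendre.leggauss(n)   # nodes and weights on [-1, 1]
    t = 0.5 * (b - a) * x + 0.5 * (b + a)       # map the nodes to [a, b]
    return 0.5 * (b - a) * np.sum(w * f(t))

print(gauss_integrate(np.sin, 0.0, np.pi))  # ~2.0, the exact value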
Once the program was written and the cards were punched, the card bundle (the source program)
was inserted into the computer together with a processor card bundle, which the computer used
to translate the source program into machine language. This produced another set of cards (the
object program), which was then used for the computation. Every time there was an error in the
program, one had to find the error, replace the affected cards in the source program, and repeat
the whole procedure.
For those who have never seen these things, bundles of cards are shown on the left side of Fig. 1.
We also started teaching programming, in Base language and in Fortran, to higher-level
students.
2.2 Advantages of the introduction of computers: a big step for computations
In spite of the complex and cumbersome procedure for preparing the programs, the introduction
of computers was a big step forward for research, both in theoretical applications and in the
processing of experimental data, as we will see in what follows.
Programming these small computers required direct control of the different steps. One had to
take the small memory and low speed into account and find suitable compromises between the
different needs. In my opinion, this allowed a clear understanding of what was going on step by
step, with the possibility of making changes and corrections. This was also an important point
for the students who learned programming, because they had to solve new problems and find
optimal solutions.
As an example of our first use of the University computer, I was able to develop an iterative
procedure on the IBM 1620 that allowed the evaluation of the modes and losses of a laser
resonator introduced by G. Toraldo di Francia, the so-called flat-roof resonator. Toraldo had
developed an approximate theory, based on a “diamond cavity”, giving the amplitude and phase
of the modes. Starting from his theoretical field distribution, we tested the theory and found
losses and phase shifts, which were not given by the theory. The results were published in joint
issues of Applied Optics and Proc. IEEE in 1966.
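The text does not give the algorithm, but iterative resonator-mode solvers of that era (such as the Fox-Li method, assumed here purely for illustration) repeatedly apply the cavity's round-trip operator to a trial field until only the dominant mode survives; the magnitude of the resulting eigenvalue gives the loss per transit. A schematic Python sketch under that assumption:

# Power iteration on a round-trip operator K: the trial field converges
# to the lowest-loss mode, and |eigenvalue| gives the loss per transit.
import numpy as np

def dominant_mode(K, n_iter=200):
    """K: complex n x n matrix representing one round trip of the cavity."""
    field = np.ones(K.shape[0], dtype=complex)       # arbitrary trial field
    gamma = 1.0
    for _ in range(n_iter):
        new = K @ field
        gamma = np.linalg.norm(new) / np.linalg.norm(field)  # |eigenvalue|
        field = new / np.linalg.norm(new)
    return field, 1.0 - gamma**2                     # (mode, loss per transit)

# Toy operator for illustration only: a real K would come from discretizing
# the cavity's diffraction integral. Its modes here are the basis vectors.
K = np.diag(np.linspace(0.95, 0.10, 8)).astype(complex)
mode, loss = dominant_mode(K)
print(loss)   # ~0.0975, i.e. 1 - 0.95**2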

As an example, Fig. 2 shows, on the left side, the scheme of the resonator and, on the right side,
the evolution of the amplitude of the field at an arbitrary point, x = 0.554, on the normalized
aperture, as the iteration proceeded.
INTRODUCTION OF LARGE COMPUTERS IN ITALY
After the “small” computer of our University, two big computing centres became available in
Italy.
In 1965 the CNUCE (Centro Universitario Nazionale Calcolo Elettronico) in Pisa was set up by
the University of Pisa in collaboration with IBM, which donated an IBM 7090 computer.
Subsequently CNUCE became a CNR Institute. In 1969 CINECA, a centre founded by a
consortium of the Universities of Bologna, Florence, Padua and Venice, started its activity with
a CDC 6600 computer in Casalecchio di Reno, near Bologna. This centre, initially devoted to
computations, grew considerably over the following years and is now the largest Italian
computing centre and one of the most important worldwide. It is equipped with some of the
largest supercomputers. Cineca is today a non-profit consortium of 68 Italian universities and 3
institutions, including the CNR and MIUR, the Ministry for Education, University and Research.
Cineca offers support to the research activities of the scientific community through
supercomputing and its applications.
3.1 Our advances with the use of large computers
In the 1970s, the use of the large computers of CNUCE and CINECA allowed us to:
- improve and enlarge the iterative procedure and find solutions of the flat-roof resonator in cases
outside the approximation limits of Toraldo’s theory;
- extend the iterative procedure to other kinds of resonators, in particular resonators with very low
losses, such as rimmed ones, and obtain more complete results, including unstable regions;
- investigate the accuracy of different diffraction formulas. We showed that the proper angular
dependence needs to be taken into account to obtain physically meaningful results in the
investigation of laser cavities (A. Consortini, F. Pasqualetti: Comparison of Various Diffraction
Formulas in a Study of Open Resonators, Applied Optics 17, 2519, 15 Aug. 1978);
- develop and use methods to remove low-order modes. For example, in the case of a Fabry-Perot
resonator, we evaluated up to 26 modes (A. Consortini, F. Pasqualetti: An Analysis of the Modes
of Fabry-Perot Open Resonators, Optica Acta 20, n. 10, 793-803, 1973).
Fig. 3: Amplitude and phase patterns of 26 modes of a Fabry-Perot resonator. Left: even modes
0-24; right: odd modes 1-25.
In Fig. 3 the amplitudes and phases of 26 modes of a Fabry-Perot resonator are presented, with
mirror aperture 2a = 70λ, mirror distance d = 100λ and Fresnel number N = 12.25 (from Optica
Acta 1973). The even modes are on the left-hand side and the odd modes on the right. Dashed
lines are from Wainstein’s theory. These results were exceptional even for the time when they
were obtained. There were many other theoretical problems in which we profited greatly from
computers; here we limit ourselves to the previous few examples and move on to the next step.
Of course, our experience is a small drop compared with the general use of computers; I just
want to mention here the use and progress of computers in many theoretical problems, including
applications to simulation.
COMPUTERS AND MEASUREMENT INSTRUMENTS
A big step in the use of computers for research came when instruments that could be linked to a
computer became available. Today this is an obvious facility in any laboratory, but it was not so
when we made our experiments in the 1970s. Two examples from our research activity are
described below.
4.1 Measurements of laser intensity fluctuations
Initially, the output signal from an instrument was an analogue signal, which had to be digitized
before it could be fed into the computer. One was therefore faced with several different steps.
To describe the procedure, let us refer, as an example, to experiments we made in 1974. We
measured the intensity fluctuations of laser radiation after an atmospheric path of about 4 km.
We focussed the radiation with a lens and collected it with a photomultiplier. The following
operations were needed:
- the signal from the photomultiplier was continuously recorded on the magnetic tape of a
Sangamo Sabre III recorder (a very good instrument at that time); then
- it was digitized by an IBM System 7, at a rate of 2000 data/s, and the digitized signal was
recorded on a magnetic tape;
- the digitized signal was then used with a different computer to evaluate its average and moments
up to the fourth one, and to build intensity histograms.
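In today's terms, that final step is a few lines of Python; the signal below is random stand-in data, since the original recordings are not available.

# Average, raw moments up to fourth order, and an intensity histogram,
# as would be computed from the digitized photomultiplier signal.
import numpy as np

rng = np.random.default_rng(1)
intensity = rng.lognormal(mean=0.0, sigma=0.5, size=2000)  # stand-in: 1 s at 2000 data/s

mean = intensity.mean()
moments = [np.mean(intensity**k) for k in (1, 2, 3, 4)]    # raw moments
hist, edges = np.histogram(intensity, bins=50)             # intensity histogram
print(mean, moments[3])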
Years later, it became possible to connect the photomultiplier directly to a small computer, an
Apple, and have the data digitized and stored on it. The disadvantage was that the memory was
not as large as it is now, and initially the computer memory limited the amount of data that could
be collected. We had to find suitable compromises, but we were able to make the measurements
and process the data with the same computer.
4.2 Position measurements
An important case of the use of instruments directly connected to the computer was the
measurement of positions. We needed position measurements for research on:
1 - angle-of-arrival fluctuations (often denoted as differential angle-of-arrival fluctuations) at
points of a laser wavefront after a path through atmospheric turbulence, and
2 - the wandering of thin beams after a short path in the atmosphere.
As an example, let us here refer to the angle of arrival. Laser radiation, after a path through
turbulence, impinged on a mask with holes (a Hartmann test), giving rise to a set of thin beams.
Due to turbulence, each beam fluctuated around an unperturbed position, and measurement of
the instantaneous positions at different times was required to obtain the angle of arrival at the
output of each hole.
When position sensors were not available, we let the beams impinge on a diffusing screen and
took pictures of it using a good photographic camera, a Hasselblad 500 EL, which, with suitable
expedients such as pre-exposure, allowed us to take pictures at a rate of 1/s with an exposure
time of 1/500 s. An experiment lasted from 5 up to 15 minutes. The photos were then developed.
In Fig. 4 the scheme of the measurement in the laboratory is presented, as well as a frame of the
developed film. Each measurement gave rise to from several hundred up to about 500 photos.
The position of each spot was then determined manually by using a projector. The procedure of
"reading" the photos was very long and cumbersome, and different people, typically 2 or 3,
separately read the coordinates of the centres of the spots in order to reduce the operator's error.
The data were printed and then used for statistical processing with a small computer. The
procedure required many months and much patience from the scientists involved.
When position sensors connectable to a computer became available, collecting the data became
a completely different job. Initially, there was still a limitation on the amount of data, but the
problem was overcome with the development of personal computers with large memories and of
suitable laboratory software.
To give an idea of the big improvement, an example is reported here of the final solution, which
we reached in the 1990s, of the use of position sensors for the measurement of beam wandering.
The problem was analogous to the previous one.
With a suitable program developed by us, the four signals from each sensor were fed directly
into the computer, and the position was automatically detected and recorded. Four sensors were
connected to a simple PC. A program for “calibration” allowed one to regulate the intensity and
alignment of the sensors, with options for some on-line averages; another one, called
“acquisition”, allowed the measurement, with a number of options; and a set of “elaboration”
programs allowed off-line statistical processing. All programs allowed the use of from one up to
all four sensors, depending on the experiment to be made.
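The paper does not spell out the position formula, but for a four-signal sensor of the quadrant-detector type (an assumption made here for illustration), the spot coordinates follow from normalized differences of the four signals; a small Python sketch:

# Spot position from the four signals of a quadrant-style position sensor:
# normalized differences of opposing halves give the x and y coordinates.
def spot_position(q1, q2, q3, q4, half_size_mm=0.5):
    """q1..q4: signals from the four quadrants (UL, UR, LL, LR)."""
    total = q1 + q2 + q3 + q4
    x = ((q2 + q4) - (q1 + q3)) / total * half_size_mm  # right minus left
    y = ((q1 + q2) - (q3 + q4)) / total * half_size_mm  # top minus bottom
    return x, y

# A beam slightly up and to the right of centre:
print(spot_position(1.0, 1.4, 0.8, 1.2))  # (~0.091, ~0.045) in mm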
In Figure 5, already shown in the Proceedings of ETOP 2007, one can see the calibration and
alignment of one sensor for the “instantaneous” measurement of the position of a beam
impinging on it. In real time, the computer gives the average position (x and y coordinates) of a
number of data and the corresponding variances. The number of data used for the averages is
limited and can be chosen in advance.
Fig. 5: Detail of the internal surface of one sensor; only the central part (1 mm x 1 mm) is
selected; scales of the axes: -0.5 mm to 0.5 mm. The four columns are the four signals directly
measured by the sensor at different subsequent times. Averages and corresponding variances are
also presented. From A. Consortini: Using a research laboratory for training students, Proc.
ETOP 2007.
The advantage with respect to the previous method was immeasurable, almost tending to
infinity.
DISADVANTAGES OF USING A COMPUTER
Although there are many advantages to using a computer, there are also many disadvantages
(like most things in life). Below is a list of some of the disadvantages of using a computer and
the types of problems you may personally encounter.
Carpal tunnel and eye strain
Using a computer involves a lot of repetitive movement, which often leads to carpal tunnel
syndrome. For example, moving your hand from the keyboard to the mouse, and typing itself,
are repetitive actions that can cause injuries. Staring at a screen for long periods can likewise
strain the eyes. Taking breaks, keeping proper posture, and understanding computer ergonomics
can all help prevent or delay these injuries.
