Computer: Binary Form


Question 1:

Computer:
A computer is an electronic machine that takes data as input, processes it, and finally gives us output. A computer can perform many kinds of operations on data, such as logical or mathematical operations. Computers are also able to store data.
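
As a minimal sketch of this input-process-output cycle (the prompt text and numbers are arbitrary), in Python:

    # io_cycle.py - minimal sketch of the input -> process -> output cycle
    raw = input("Enter two numbers separated by a space: ")  # input
    a, b = (int(x) for x in raw.split())                     # process: parse the data
    print("Sum:", a + b)                                     # output: a mathematical operation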

A computer has input devices such as the keyboard and mouse.

The CPU is responsible for processing the data.

Output devices, such as the monitor, present the results.

The physical structure of a computer is called hardware, while software is a program that enables the computer to perform a specific task.

Binary form:
A computer stores and processes data in binary form, that is, as sequences of the digits 0 and 1.
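
As a small illustration (a minimal sketch; the number 42 and the letter "A" are arbitrary examples), the Python snippet below shows the bit patterns a computer uses for a number and for a character:

    # binary_demo.py - minimal sketch of binary representation
    n = 42
    print(bin(n))                  # 0b101010: the integer 42 as bits
    c = "A"
    print(format(ord(c), "08b"))   # 01000001: the character "A" as an 8-bit code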

Types of computer on the basis of technology:

Computers differ in their data-processing abilities and are classified according to purpose, data handling, and functionality. The following are the types of computer on the basis of technology:

1. Analog computer
2. Digital computer
3. Hybrid computer

Analog computer:
An analog (or analogue) computer works on continuous signals. It uses the continuously changeable aspects of physical phenomena, such as electrical, mechanical, or hydraulic quantities, to model the problem being solved.

Use of analog computer:

An analog computer is used to process analog data. Analog computers store data in the continuous form of physical quantities and perform calculations by measuring those quantities. This is quite different from a digital computer, which uses symbolic numbers to represent results.

Examples of analog computer:

1. Photocopiers
2. Old landline telephones
3. Old televisions
4. VCRs

Digital computers:
A digital computer performs calculations and logical operations with quantities represented as digits, usually in the binary number system. Digital computers are programmable machines that use electronic technology to generate, store, and process data; personal computers are the most familiar examples. A digital computer is any of a class of devices capable of solving problems by processing information in discrete form. It operates on data, including magnitudes, letters, and symbols, expressed in binary code, i.e., using only the two digits 0 and 1.

Use of digital computer:

The digital computer is the most commonly used type of computer; it processes information with quantities represented as digits, usually in the binary number system. An example of a digital computer is a MacBook.

Examples of digital computer:


1. Smartphones
2. Laptops
3. CPUs

Hybrid computer:
A hybrid computer combines features of analog and digital computers and is capable of input and output in both digital and analog signals. A hybrid computer system offers a cost-effective way of performing complex simulations. The digital component normally serves as the controller and provides logical and numerical operations, while the analog component often serves as a solver of differential equations and other mathematically complex equations.

Use of hybrid computer:
It is designed to include a working analog unit that is powerful for calculations, yet has a readily available digital memory. In large industries and businesses, a hybrid computer can be used to incorporate logical operations as well as provide efficient processing of differential equations.

Examples of hybrid computer:

A hybrid computer is used in hospitals, for example to measure a patient's heartbeat.

Types of computer on the basis of size:

1. Supercomputer
2. Mainframe computer
3. Minicomputer
4. Micro or personal computer

Supercomputers:
The supercomputer is the fastest and most powerful type of computer. Supercomputers are very expensive and are employed for specialized applications that require immense amounts of mathematical calculation. The term is commonly applied to the fastest high-performance systems available at any given time. Such computers have been used primarily for scientific and engineering work requiring exceedingly high-speed computations.

What is a supercomputer used for?

Supercomputers play an important role in the field of computational science and are used for a wide range of computationally intensive tasks in various fields, including:

1. Quantum mechanics
2. Weather forecasting
3. Climate research
4. Oil and gas exploration
5. Molecular modeling

Mainframe computer:
A very large and expensive computer capable of supporting hundreds, or even thousands, of
users simultaneously. In the hierarchy that starts with a simple microprocessor (in watches, for
example) at the bottom and moves to supercomputers at the top, mainframes are just below
supercomputers. In some ways, mainframes are more powerful than supercomputers because
they support more simultaneous programs. But supercomputers can execute a single program
faster than a mainframe.

Mainframe computers, or mainframes, also known as "big iron", are computers used primarily by large organizations for critical applications: bulk data processing, such as census, industry, and consumer statistics; enterprise resource planning; and transaction processing.

Use of mainframe computers:

A mainframe computer is used to process huge amounts of data, on the order of petabytes, and can serve thousands of users. The name "mainframe" refers to a frame for holding a number of processors and main memory. Typical uses include:

1. Census
2. Industry and consumer statistics
3. Enterprise resource planning
4. Financial transaction processing

Minicomputer:
A mid-sized computer. In size and power, minicomputers lie between workstations and mainframes. In the past decade, the distinction between large minicomputers and small mainframes has blurred, as has the distinction between small minicomputers and workstations. In general, however, a minicomputer is a multiprocessing system capable of supporting from 4 to about 200 users simultaneously.

What is the use of a minicomputer?

Minicomputers are used for:

1. Scientific and engineering computations
2. Business-transaction processing
3. File handling
4. Database management

They are often now referred to as small or midsize servers.

Question 2:
Computer Generation:
In computer terminology, a generation is a change in technology. Initially, the term was used to distinguish between varying hardware technologies. Nowadays, a generation includes both hardware and software, which together make up an entire computer system.

Generations of computer:
1. First generation: vacuum tubes (1940-1956)
2. Second generation: transistors (1956-1963)
3. Third generation: integrated circuits (1964-1971)
4. Fourth generation: microprocessors (1971-present)

First generation: vacuum tubes (1940-1956):

The first computer systems used vacuum tubes for circuitry and magnetic drums for memory, and were often enormous, taking up entire rooms. These computers were very expensive to operate; in addition to using a great deal of electricity, they generated a lot of heat, which was often the cause of malfunctions.
First generation computers relied on machine language, the lowest-level programming language understood by computers, to perform operations, and they could only solve one problem at a time. It would take operators days or even weeks to set up a new problem. Input was based on punched cards and paper tape, and output was displayed on printouts.

Examples of first generation computers:

The UNIVAC and ENIAC computers are examples of first-generation computing devices. The UNIVAC was the first commercial computer; it was delivered to a client, the U.S. Census Bureau, in 1951.

Second generation: transistor (1956-1963):


The world would see transistors replace vacuum tubes in the second generation of computers.
The transistor was invented at Bell Labs in 1947 but did not see widespread use in computers
until the late 1950s. 
The transistor was far superior to the vacuum tube, allowing computers to become smaller,
faster, cheaper, more energy-efficient and more reliable than their first-generation
predecessors. Though the transistor still generated a great deal of heat that subjected the
computer to damage, it was a vast improvement over the vacuum tube. Second-generation
computers still relied on punched cards for input and printouts for output.

From Binary to Assembly:


Second-generation computers moved from cryptic binary machine language to symbolic,
or assembly, languages, which allowed programmers to specify instructions in words. High-level
programming languages were also being developed at this time, such as early versions
of COBOL and FORTRAN. These were also the first computers that stored their instructions in
their memory, which moved from a magnetic drum to magnetic core technology.
The first computers of this generation were developed for the atomic energy industry.
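
As a minimal sketch of this idea, the toy assembler below (with an invented instruction set; the mnemonics and opcodes are hypothetical, not those of any real machine) translates symbolic words into the binary machine code a processor would execute:

    # toy_assembler.py - minimal sketch of symbolic-to-binary translation
    # Hypothetical one-byte opcodes, invented for illustration only.
    OPCODES = {"LOAD": 0x01, "ADD": 0x02, "STORE": 0x03, "HALT": 0xFF}

    def assemble(lines):
        """Translate mnemonic lines into machine-code bytes."""
        code = []
        for line in lines:
            parts = line.split()
            code.append(OPCODES[parts[0]])   # opcode byte
            if len(parts) > 1:
                code.append(int(parts[1]))   # operand byte
        return bytes(code)

    program = ["LOAD 10", "ADD 20", "STORE 30", "HALT"]
    print(assemble(program).hex(" "))        # 01 0a 02 14 03 1e ff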

Examples of second generation computers:


1. IBM 1620,
2. IBM 7094,
3. CDC 1604,
4. CDC 3600,
5. UNIVAC 1108

Third generation: integrated circuits (1964-1971):


The development of the integrated circuit was the hallmark of the third generation of
computers. Transistors were miniaturized and placed on silicon chips, called semiconductors,
which drastically increased the speed and efficiency of computers.
Instead of punched cards and printouts, users interacted with third generation computers
through keyboards and monitors and interfaced with an operating system, which allowed the
device to run many different applications at one time with a central program that monitored
the memory. Computers for the first time became accessible to a mass audience because they
were smaller and cheaper than their predecessors.

Examples of third generation computers:


1. IBM-360 series,
2. Honeywell-6000 series,
3. PDP (Programmed Data Processor),
4. IBM-370/168.

Fourth generation: microprocessors (1971-present):


The microprocessor brought the fourth generation of computers, as thousands of integrated
circuits were built onto a single silicon chip. What in the first generation filled an entire room
could now fit in the palm of the hand. The Intel 4004 chip, developed in 1971, located all the
components of the computer—from the central processing unit and memory to input/output
controls—on a single chip.
As these small computers became more powerful, they could be linked together to form
networks, which eventually led to the development of the Internet. Fourth generation
computers also saw the development of GUIs, the mouse and handheld devices.

Examples of fourth generation computers:

1. IBM 4341,
2. DEC 10,
3. STAR 1000,
4. PDP 11

Question 3:
System software:
System software is a type of computer program designed to run a computer's hardware and application programs; it provides a platform for other software. If we think of the computer system as a layered model, the system software is the interface between the hardware and user applications. The operating system (OS) is the best-known example of system software; the OS manages all the other programs in a computer.

System software carries out middleman tasks to ensure communication between other software and hardware, allowing harmonious coexistence with the user.

Examples of system software:


1. Operating systems like macOS,
2. GNU/Linux,
3. Android, and
4. Microsoft Windows

Subtypes of system software:

The five types of system software are all designed to control and coordinate the procedures and functions of computer hardware. They enable functional interaction between hardware, software, and the user.

System software can be categorized as follows:

1. Operating system:
The operating system harnesses communication between hardware, system programs, and other applications. It is the system software that sits between the computer hardware and the end user, and it is installed first on a computer so that devices and applications can be identified and made functional.
System software is the first layer of software to be loaded into memory every time a computer is powered up.
Suppose a user wants to write and print a report to an attached printer. A word processing
application is required to accomplish this task. Data input is done using a keyboard or other
input devices and then displayed on the monitor. The prepared data is then sent to the printer.
In order for the word processor, keyboard, and printer to accomplish this task, they must work
with the OS, which controls input and output functions, memory management, and printer
spooling.

Examples of operating systems:

1. Windows 10
2. Mac OS X
3. Ubuntu

2. Device driver:
Enables device communication with the OS and other programs. Driver software is a type of system software which brings computer devices and peripherals to life. Drivers make it possible for all connected components and external add-ons to perform their intended tasks as directed by the OS. Without drivers, the OS could not assign them any duties.
Examples of devices which require drivers:

1. Mouse
2. Keyboard
3. Sound card
4. Display card
5. Network card
6. Printer
3. Firmware:
Enables device control and identification. Firmware is the operational software embedded within a flash, ROM, or EPROM memory chip for the OS to identify it. It directly manages and controls all activities of a single piece of hardware.
Traditionally, firmware used to mean fixed software as denoted by the word firm. It was
installed on non-volatile chips and could be upgraded only by swapping them with new,
preprogrammed chips.
This was done to differentiate them from high-level software, which could be updated without
having to swap components.
Today, firmware is stored in flash chips, which can be upgraded without swapping
semiconductor chips.

4. Translator:
Translates high-level languages into low-level machine code. Translators are intermediate programs relied on by software programmers to translate high-level language source code into machine language code. The former is a collection of programming languages that are easy for humans to comprehend and code in (e.g., Java, C++, Python, PHP, BASIC); the latter is a complex code understood only by the processor.
Popular kinds of translators are compilers, assemblers, and interpreters. They are usually designed by computer manufacturers. Translator programs may translate the whole program at once or one instruction at a time (see the sketch after the list below).
Besides simplifying the work of software developers, translators help in various design tasks. They:

1. Identify syntax errors during translation, thus allowing changes to be made to the code.
2. Provide diagnostic reports whenever the code rules are not followed.
3. Allocate data storage for the program.
4. List both the source code and program details.
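
As a small illustration of translation, the sketch below uses Python's built-in dis module to show the low-level bytecode (instructions for Python's virtual machine, not native machine code) that the interpreter's compiler produces from high-level source:

    # translator_demo.py - minimal sketch: high-level source to low-level instructions
    import dis

    def add(a, b):
        return a + b

    dis.dis(add)   # prints the bytecode generated for add(), roughly:
                   #   LOAD_FAST    a
                   #   LOAD_FAST    b
                   #   BINARY_ADD   (BINARY_OP on newer Python versions)
                   #   RETURN_VALUE
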
5. Utility:
Ensures optimum functionality of devices and applications.
Utility software sits between system and application software. Utilities are programs intended for diagnostic and maintenance tasks that help ensure the computer functions optimally. Their tasks vary from crucial data security to disk-drive defragmentation (a small compression sketch follows the examples below).

Examples and features of utility software include:

1. Antivirus and security software for the security of files and applications, e.g., Malwarebytes, Microsoft Security Essentials, and AVG.
2. File compression to optimize disk space, such as WinRAR, WinZip, and 7-Zip.
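
As a minimal sketch of what a compression utility does (using Python's built-in zlib module; the sample text is arbitrary):

    # compress_demo.py - minimal sketch of a compression utility's core idea
    import zlib

    data = b"to be or not to be, " * 100    # repetitive data compresses well
    packed = zlib.compress(data)

    print(len(data), "bytes before,", len(packed), "bytes after")
    assert zlib.decompress(packed) == data  # lossless: the original is fully recoverable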

Question 4:
A Visionary at ARPA:

Shortly after the founding of ARPA, Dr. J.C.R. Licklider came to head the effort to research the best way to utilize the investments in computer technology. One of the changes to come under Licklider was a shift in how military contracts were given: the contracts went from private independent businesses to academic research institutes with the best computer centers. Licklider had come to believe that the best direction for computers was networking, with a focus on sharing information; it was for this reason he called the group of researchers he organized the “Intergalactic Network”. Later the researchers decided on an interface to connect the subnetworks between the universities, consisting of telephone lines and switching nodes; this interface was the Interface Message Processor, or IMP. The first node was set up at UCLA in Professor Kleinrock's office, as his research into network analysis would establish his node as the one for analyzing the ARPANET network. From here the first message was transmitted over the network on October 29, 1969; it consisted of “LO” before the system crashed [4]. The message was supposed to be “LOG”, and once entered it was supposed to have allowed a login to the remote computer.

The Birth of the Modern Internet Protocol

Even though the first message was transmitted in 1969, it wasn't until later research in the 1970s that a standardized protocol was developed so that each computer in the network could reliably communicate. The protocol eventually adopted was TCP/IP; in 1982 the DoD mandated that all military computers adopt the standard. On the “flag day” of January 1, 1983, the ARPANET forcibly made TCP/IP the only compatible protocol for the computers communicating over the network. In 1985 the Internet Architecture Board held a three-day workshop to spread the adoption of TCP/IP to commercial uses. Also in 1985, Dan Lynch, an internet activist, started a TCP/IP conference to help increase the adoption of TCP/IP in the commercial sphere.
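
As a minimal sketch of two programs communicating over TCP/IP (using Python's standard socket module; the port number and the messages are arbitrary):

    # tcp_demo.py - minimal sketch: one reliable TCP/IP exchange on the local machine
    import socket
    import threading

    HOST, PORT = "127.0.0.1", 50007          # arbitrary local address and port

    # Set up the listening socket first so the client cannot connect too early.
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((HOST, PORT))
    srv.listen(1)

    def serve_one():
        conn, _ = srv.accept()               # wait for one client connection
        with conn:
            data = conn.recv(1024)           # read the request bytes
            conn.sendall(b"ACK: " + data)    # send a reply
        srv.close()

    threading.Thread(target=serve_one).start()

    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as c:
        c.connect((HOST, PORT))
        c.sendall(b"LOGIN")                  # TCP delivers these bytes reliably, in order
        print(c.recv(1024))                  # b'ACK: LOGIN'
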
Internet Expansion:

In 1981, the National Science Foundation (NSF) expanded access to the ARPANET with the Computer Science Network (CSNET), and again in 1986 when the NSFNET provided access to supercomputer sites in the United States. Commercial ISPs began appearing in the late 1980s and 1990s, and ARPANET was decommissioned in 1990. When the NSFNET was decommissioned in 1995, the internet became public and began carrying commercial traffic.
Hypertext Transfer Protocol: “The Web”

The Hypertext Transfer Protocol (HTTP), the foundation of the “World Wide Web”, was developed in the late 1980s at the European Lab for Particle Physics (CERN) in Switzerland. It is the standard by which webpages are navigated through hyperlinks [7]. More specifically, the HTML specification came from Tim Berners-Lee while he was working at CERN [8]. A key feature of the web is how webpages are formatted: tags are used to encode and format text, sounds, and graphics. Shortly after the World Wide Web appeared came web browsers; the first web browser was also devised by Tim Berners-Lee while he was working at CERN. The first commercial web browsers were Netscape, followed by Microsoft's Internet Explorer. Because of how the World Wide Web employs tags, search engines followed; they use “spiders” to search through the tags and meta tags of web pages and index them in a database. This made it possible to navigate the internet better and substantially improved its usefulness.
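
As a minimal sketch of one HTTP request and response (using Python's standard http.client module; example.com is a public test domain):

    # http_demo.py - minimal sketch: fetch a webpage over HTTP
    import http.client

    conn = http.client.HTTPConnection("example.com")   # public test host
    conn.request("GET", "/")            # ask for the page at path "/"
    resp = conn.getresponse()
    print(resp.status, resp.reason)     # e.g. 200 OK
    print(resp.read()[:60])             # the start of the HTML, with its tags
    conn.close()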

Fast-forward to the Present Day

Technology is continually removing the limitations on the internet, its use, and its availability. Information is more readily accessible thanks to mobile internet devices that connect via Wi-Fi or cellular networks. Currently, the most apparent limitation to the internet is finding a device with which to interface with it at any given time, a barrier that is continually shrinking as networks expand and devices become more prevalent and affordable. Speculation for the future of accessibility includes possibilities like integrating internet-capable devices within architectural designs using surfaces that are non-invasive in our lives. An example of such a technology is Corning's glass solutions, with glass capable of changing transparency depending on the operation.

References:

- “A Brief History of the Internet.” Online Library Learning Center. Board of Regents of the University System of Georgia, n.d. Web. 3 Oct 2012. <http://www.usg.edu/galileo/skills/unit07/internet07_02.phtml>.
- Kleinrock, Leonard. “Leonard Kleinrock's Home Page.” The University of California. N.p., 23 Apr 2011. Web. 3 Oct 2012. <http://www.lk.cs.ucla.edu/index.html>.
- Wikipedia contributors. “Sputnik crisis.” Wikipedia, The Free Encyclopedia. 13 Sep. 2012. Web. 4 Oct. 2012.
- Hauben, Michael. “Part I: The history of ARPA leading up to the ARPANET.” N.p., n.d. Web. 3 Oct 2012. <http://www.dei.isep.ipp.pt/~acc/docs/arpa–1.html>.
- Wikipedia contributors. “Internet protocol suite.” Wikipedia, The Free Encyclopedia. 2 Oct. 2012. Web. 4 Oct. 2012.
- Wikipedia contributors. “History of the Internet.” Wikipedia, The Free Encyclopedia. 2 Oct. 2012. Web. 4 Oct. 2012.
