
NAME OF THE INSTITUTION: TOM MBOYA UNIVERSITY COLLEGE

COURSE UNIT: Social and Professional Issues in Information Technology

COURSE CODE: UCI 401

NAME: LAWRENE CHEPKOECH

ADMISSION NUMBER: SED/AR/00389/017

WEEK SIX.
1. History of computing

Before 1935, a computer was a person who performed arithmetic calculations.

Between 1935 and 1945 the definition referred to a machine rather than a person. The

modern definition, based on von Neumann's concepts, describes a device

that accepts input, processes data, stores data, and produces output.

We have gone from the vacuum tube to the transistor, to the microchip. Then the

microchip started talking to the modem. Now we exchange text, sound, photos and

movies in a digital environment.

14th C. - Abacus - an instrument for performing calculations by sliding counters along

rods or in grooves.

17th C. - Slide rule - a manual device used for calculation that consists in its simple

form of a ruler and a movable middle piece which are graduated with similar

logarithmic scales.
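The slide rule's trick, that adding lengths proportional to logarithms multiplies the underlying numbers, can be sketched in a few lines of Python (the function name and example values here are only illustrative):

```python
import math

def slide_rule_multiply(a: float, b: float) -> float:
    """Multiply two numbers the way a slide rule does:
    add their logarithms, then read off the antilogarithm."""
    return 10 ** (math.log10(a) + math.log10(b))

# Sliding the scales adds log10(2) + log10(8) = log10(16),
# so the cursor lands on 16.
print(round(slide_rule_multiply(2, 8), 6))
```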

1642 - Pascaline - a mechanical calculator built by Blaise Pascal, a 17th century

mathematician, for whom the Pascal computer programming language was named.

1804 - Jacquard loom - a loom programmed with punched cards invented by Joseph

Marie Jacquard

1850 - Difference Engine, Analytical Engine - Charles Babbage and Ada Byron

Babbage's 1837 description of the Analytical Engine, a hand-cranked mechanical

digital computer, anticipated virtually every aspect of present-day computers. It wasn't

until over 100 years later that another all-purpose computer was conceived. A sketch

of the Engine survives, with notes by Ada Byron King, Countess of Lovelace.


1939 -1942 - Atanasoff Berry Computer - built at Iowa State by Prof. John V.

Atanasoff and graduate student Clifford Berry. Represented several "firsts" in

computing, including a binary system of arithmetic, parallel processing, regenerative

memory, separation of memory and computing functions, and more. Weighed 750 lbs.

and had a memory storage of 3,000 bits (0.4K). Recorded numbers by scorching

marks into cards as it worked through a problem.

1940s - Colossus - a vacuum tube computing machine that broke German codes

during WW II, helping to turn the tide of the war. In the summer of 1939, a small

group of scholars, Turing among them, became codebreakers working at Bletchley

Park in England. This group of pioneering codebreakers helped shorten the war and

changed the course of history.

1946 - ENIAC - World's first electronic, large scale, general-purpose computer, built

by Mauchly and Eckert, and activated at the University of Pennsylvania in 1946.

ENIAC recreated on a modern computer chip. It contained 19,000 vacuum tubes,

6,000 switches, and could add 5,000 numbers in a second, a remarkable

accomplishment at the time. A reprogrammable machine, the ENIAC performed

initial calculations for the H-bomb. It was also used to prepare artillery shell

trajectory tables and perform other military and scientific calculations. Since there

was no software to reprogram the computer, people had to rewire it to get it to

perform different functions. The human programmers had to read wiring diagrams

and know what each switch did. J. Presper Eckert, Jr. and John W. Mauchly drew on

Atanasoff's work to create the ENIAC, the Electronic Numerical Integrator and
Computer.

1951-1959 - vacuum tube based technology. Vacuum Tubes are electronic devices,

consisting of a glass or steel vacuum envelope and two or more electrodes between

which electrons can move freely. First commercial computers used vacuum tubes:

Univac, IBM 701.

1950s -1960s - UNIVAC - "punch card technology" The first commercially successful

computer, introduced in 1951 by Remington Rand. Over 40 systems were sold. Its

memory was made of mercury-filled acoustic delay lines that held 1,000 12-digit

numbers. It used magnetic tapes that stored 1MB of data at a density of 128 cpi.

1960-1968 - transistor-based technology. The transistor was invented in 1948 by Dr. John

Bardeen, Dr. Walter Brattain, and Dr. William Shockley. It almost completely replaced

the vacuum tube because of its reduced cost, weight, and power consumption and its

higher reliability (Maskus, 2000).

1969 - The Internet, originally the Arpanet (Advanced Research Projects Agency

network), began as a military computer network.

1969-1977 - integrated circuits (IC) based technology. The first integrated circuit was

demonstrated by Texas Instruments inventor, Jack Kilby, in 1958. It was 7/16" wide

and contained two transistors. Examples of early integrated circuit technology: the Intel

4004, the DEC PDP-8, and the Cray-1 (1976) - a 75MHz, 64-bit machine with a peak speed of

160 megaflops.

1976 - Cray-1 - A 75MHz, 64-bit supercomputer with a peak speed of 160 megaflops,

the world's fastest processor at that time.

1976 - Apples/Macs - The Apple was designed by Steve Wozniak and Steve Jobs.

Apple was among the first to offer a "windows"-type graphical interface and the computer

mouse. Like modern computers, early Apples had a peripheral keyboard and, later, a

mouse and a floppy drive. The Macintosh eventually replaced the Apple line.

1978 to 1986 - large scale integration (LSI); the Alto, an early workstation with a mouse;

and the Apple, designed by Steve Wozniak and Steve Jobs.

1986 to today - the age of networked computing, the Internet, and the WWW.

1990 - Tim Berners-Lee invented the networked hypertext system called the World

Wide Web.

1992 - Bill Gates' Microsoft Corp. released Windows 3.1, an operating system that

made IBM and IBM-compatible PCs more user-friendly by integrating a graphical

user interface into the software. In replacing the old Windows command-line system,

however, Microsoft created a program similar to the Macintosh operating system.

Apple sued for copyright infringement, but Microsoft prevailed.

1995 - large commercial Internet service providers (ISPs), such as MCI, Sprint, AOL

and UUNET, began offering service to large numbers of customers.

1996 - Personal Digital Assistants such as the Palm Pilot became available to

consumers. They can do numeric calculations, play games and music, and download

information from the Internet (Helpman, 1992).

2.(a)

Safety-critical systems (SCS) are systems designed with the intent of curbing the
effects of an accident arising from a hazardous event. They are found in the

aviation industry, the medical profession, nuclear engineering, and even the financial sector,

since deaths can stem from financial loss too. A safety-critical application is one where

human safety depends on the correct operation of the software. The software and

the hardware must not contribute to the cause of an accident or escalate one.

Safety-critical systems are heavily dependent on computers, so it is up to these

computers to ensure that no failure occurs in the operation of these systems; a failure in

such a system could, for example, trigger abnormal directional movements in an aircraft.

The most valued property of such a system is dependability, and dependability reflects

the user's trust in that system (Maskus, 2000).

(b)

A safety-related system comprises everything (hardware, software, and

human aspects) needed to perform one or more safety functions, in which failure

would cause a significant increase in the safety risk for the people or environment

involved. Safety-related systems are those that do not have full responsibility for

controlling hazards such as loss of life, severe injury or severe environmental damage:

the malfunction of a safety-related system would only be that hazardous in

conjunction with the failure of other systems or human error. Some safety

organizations provide guidance on safety-related systems, for example the Health and

Safety Executive in the United Kingdom. Risks of this sort are usually managed with the methods

and tools of safety engineering. A safety-critical system is designed to lose less than
one life per billion hours of operation. Typical design methods include probabilistic

risk assessment, a method that combines failure mode and effects analysis with fault

tree analysis. Safety-critical systems are increasingly computer-based (Helpman, 1992).
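As a rough sketch of the fault-tree side of probabilistic risk assessment, an AND gate multiplies independent failure probabilities while an OR gate combines them; the component names and per-hour probabilities below are made up purely for illustration:

```python
def and_gate(probs):
    """Top event occurs only if ALL inputs fail (assumed independent)."""
    p = 1.0
    for q in probs:
        p *= q
    return p

def or_gate(probs):
    """Top event occurs if ANY input fails: 1 - P(no input fails)."""
    p_none = 1.0
    for q in probs:
        p_none *= (1.0 - q)
    return 1.0 - p_none

# Hypothetical per-hour failure probabilities: two redundant pumps
# (both must fail), OR a single valve whose failure alone is hazardous.
pumps_fail = and_gate([1e-4, 1e-4])      # both pumps fail: 1e-8
top_event = or_gate([pumps_fail, 1e-9])  # pumps-both-fail OR valve fails
print(top_event)
```

The tiny top-event probability (about 1.1e-8 per hour here) is the kind of figure compared against the one-life-per-billion-hours target mentioned above.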

Failure behaviours

Fail-secure systems maintain maximum security when they cannot operate. For

example, while fail-safe electronic doors unlock during power failures, fail-secure

ones will lock, keeping an area secure.

Fail-passive systems continue to operate in the event of a system failure. An

example is an aircraft autopilot: in the event of a failure, the aircraft remains in a

controllable state, allowing the pilot to take over, complete the journey and perform

a safe landing.

Fault-tolerant systems avoid service failure when faults are introduced to the

system. An example may include control systems for ordinary nuclear reactors. The

normal method to tolerate faults is to have several computers continually test the parts

of a system, and switch on hot spares for failing subsystems. As long as faulty

subsystems are replaced or repaired at normal maintenance intervals, these systems

are considered safe. The computers, power supplies and control terminals used by

human beings must all be duplicated in these systems in some fashion (Maskus, 2000). 
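The hot-spare arrangement described above can be sketched as follows; the unit names and the health-check logic are invented for illustration, and a real system would run continuous hardware diagnostics rather than check a flag:

```python
class Unit:
    """One redundant subsystem (e.g. a computer, power supply, or terminal)."""
    def __init__(self, name: str, healthy: bool = True):
        self.name = name
        self.healthy = healthy

    def self_test(self) -> bool:
        # Stand-in for the continuous testing described in the text.
        return self.healthy

def active_unit(primary: Unit, spares: list) -> Unit:
    """Use the primary while it passes its self-test;
    otherwise switch over to the first healthy hot spare."""
    for unit in [primary] + spares:
        if unit.self_test():
            return unit
    raise RuntimeError("no healthy unit available: system failure")

primary = Unit("computer-A", healthy=False)   # simulated fault
spares = [Unit("computer-B"), Unit("computer-C")]
print(active_unit(primary, spares).name)
```

The system stays safe only while faulty units are repaired or replaced before the spares are themselves exhausted, which is why the text ties safety to normal maintenance intervals.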

Fail-soft systems are able to continue operating on an interim basis with reduced

efficiency in case of failure. Most spare tires are an example of this: they usually

come with certain restrictions (for example a speed limit) and lead to lower fuel

economy. Another example is the Safe Mode found in most Windows operating
systems.

Fail-operational systems continue to operate when their control systems fail.

Examples of these include elevators, the gas thermostats in most home furnaces, and

passively safe nuclear reactors. Fail-operational mode is sometimes unsafe.

3.(a)

IP refers to any valuable asset that is proprietary and intangible. This includes

creative ideas, knowledge, and expressions of the human mind which have some form

of commercial value and as such, are protectable under trademark, service mark, trade

secret, patent or copyright laws from dilution, infringement, and imitation. Intellectual

property also includes the ownership of things such as writings, artwork, symbols,

designs, names, and other creations including video and audio clips that are

downloaded online (May & Sell, 2006). 

(b)

Inventors, designers, developers and authors can protect the ideas they have

developed to prevent others from wrongly profiting from their creations or inventions.

It also gives them an opportunity to earn back the money they invested in developing

a product. Intellectual property (IP) covers any original ideas, designs, discoveries,

inventions and creative work produced by an individual or group. Protecting IP was

not a major concern in the past. However, with information more accessible and easier to

distribute today due to technology, safeguarding your creations and works from

infringers, copycats, and thieves has become vital to any business (Helpman, 1992).

Registering copyrights, trademarks, and patents

Copyright, trademark, and patent are three of the most common types of IP
protection. These grant you exclusive rights to your creations, especially when it

comes to the commercial gains from their use. Copyright applies to the protection of

original creative works. Trademarks cover the symbols, designs, logos, and

catchphrases businesses use as part of their marketing strategy and identity. Patents carry legal

protection that excludes others from making and distributing your invention unless

you have granted them a license (Floridi & Sanders, 2002).

Registering trade names and domain names

A firm that is planning to start a business around an IP can further protect its interest

and identity by registering the trade names, product names or domain names associated with

it. The business name, product names, and domain names are all part of the brand.

Registering design rights

Industrial design rights protect the appearance of two- or three-dimensional

products. These include wallpaper patterns, textiles and the design of household items

such as alarm clocks, toys and chairs. To obtain this form of protection, a design must

first be registered. It must also be new (Helpman, 1992).

Registering utility models

A utility model is a patent-like intellectual property right to protect inventions.

Although a utility model is similar to a patent, it is generally cheaper to obtain and

maintain, has a shorter term, a shorter grant lag, and less stringent patentability

requirements.

Avoid joint ownership

Intellectual property may be developed and created by more than one person, as

in the case of a company with its own research and development team. Joint IP
ownership, however, grants control of the copyright, trademark or patent

to more than one party. That means every owning party may copy, recreate,

distribute, or otherwise exploit the IP without consulting the other

owners. Thus, businesses risk having their IP rights exploited under joint ownership.

4 a). Non-Foundational Theory vs. Foundational Theory

A foundational theory is one that does not depend on any other beliefs for its

justification. According to foundationalism, any justified belief must either be

foundational or depend for its justification, ultimately, on foundational beliefs.

Foundational theories are the framework, or perceived set of rules, that individuals

use to describe and explain their experiences of life and their environment. Because these

are based on personal experiences, many of them may actually be false or fanciful

explanations. Example: "My parents get drunk because I'm a bad child."

Foundationalism claims that our empirical beliefs are rationally constrained by our

non-verbal experience, which is caused by events in the world.

Foundationalism is a view about the structure of justification or knowledge. Its

thesis, in short, is that all knowledge or justified belief rests

ultimately on a foundation of noninferential knowledge or justified belief.

Non-foundational theory (also known as anti-foundationalism) is any philosophy

which rejects a foundationalist approach. An anti-foundationalist is one who does

not believe that there is some fundamental belief or principle which is the basic

ground or foundation of inquiry and knowledge. Anti-foundationalism is a doctrine in

the philosophy of knowledge asserting that none of our knowledge is absolutely
certain. In some versions, it asserts more specifically, and more controversially, that we

cannot provide knowledge with secure foundations in either pure experience or pure

reason. Anti-foundationalism appears to be compatible with a wide range of approaches

in political science, from rational choice to ethnography, and an equally wide range of

ideologies, from conservatism to socialism. Nonetheless, in practice it has come to

have a close relationship with critical approaches to the study of politics (Poston, 2007).

b) Foundational Teleological Theory vs. Foundational Deontological Theory

Foundational teleological theories differ on the nature of the end that actions

ought to promote; one candidate is the cultivation of virtue or excellence in the agent as the end of all

action. These could be the classical virtues of courage, temperance, justice, and wisdom

that promoted the Greek ideal of man as the rational animal, or the theological virtues

of faith, hope, and love that distinguished the Christian ideal of man as a being

created in the image of God (Floridi & Sanders, 2002).

Foundational deontological theory is based on the idea that human beings

should be treated with dignity and respect because they have rights. In deontological

ethics, it is argued, people have a duty to respect other people's rights

and treat them accordingly. The core concept is that there are objective

obligations, or duties, that are required of all people. When faced with an ethical

situation, then, the process is simply one of identifying one's duty and making the

appropriate decision. In deontological ethics an action is considered morally good

because of some characteristic of the action itself, not because the product of the

action is good (Poston, 2007).


c) Act-Foundational Teleological Theory vs. Rule-Foundational Teleological Theory

Act-foundational teleological theory holds that teleological theories are

absolutist: they claim that there are certain kinds of actions that are absolutely

obligatory or forbidden, such that all actions of that type are obligatory or forbidden.

Any theory with some restriction on the admissible action-types is

absolutist in this sense, since every theory holds that all actions of a permissible type

are permissible.

Rule-foundational teleological theory asserts that teleological theories are rule-based:

they assess the permissibility of actions in terms of whether those actions conform to some

specified set of rules. The idea behind this characterization is

that all theories other than those that take only particular considerations into account are rule-based.

A teleological theory is rule-based since it assesses the permissibility of actions in terms

of whether they conform to the rule of maximizing the goodness of consequences

(O'Leary, 2016).

5 a)

As a computer scientist, I would not take the offer to design an optimization

algorithm for atomic bombs to make them deadlier. Designing such harmful

software is against computer ethics, since it would cause massive death and

destruction, trigger large-scale displacement and cause long-term harm to human

health and well-being, as well as long-term damage to the environment, infrastructure,

socioeconomic development and social order. The bombs could be used to harm

people, which is against the ethics of a computer scientist (O'Leary, 2016).


b)

The commandments of computer ethics I would be likely to compromise if I took the

offer are commandments 1, 9, and 10.

Commandment 1 provides that "Thou shalt not use a computer to harm other people." I

should not program a computer to do dangerous things to people, for example by

designing an optimization algorithm for atomic bombs to make them deadlier. What this

means is that computers are not an excuse to do bad things to people: the

programmer is responsible for the actions of his programs. Commandment 9 provides

that "Thou shalt think about the social consequences of the program you are writing or

the system you are designing." Looking at the social consequences a program can

have describes a broader perspective on technology. Designing an

optimization algorithm for atomic bombs to make them deadlier contravenes this

rule, as it negatively affects society. Commandment 10 provides that "Thou shalt

always use a computer in ways that ensure consideration and respect for your fellow

humans." Designing an optimization algorithm for atomic bombs to make them deadlier

does not respect the lives of other people (Floridi & Sanders, 2002).
References

Floridi, L., & Sanders, J. W. (2002). Computer ethics: mapping the foundationalist

debate. Ethics and Information Technology, 4(1), 1-9.

Helpman, E. (1992). Innovation, imitation, and intellectual property rights.

Maskus, K. E. (2000). Intellectual property rights in the global economy. Peterson Institute.

May, C., & Sell, S. K. (2006). Intellectual property rights: A critical history. Boulder: Lynne

Rienner Publishers.

O'Leary, D. E. (2016). Ethics for big data and analytics. IEEE Intelligent Systems, 31(4), 81-

84.

O'Regan, C. G. (2021). A brief history of computing. Springer Nature.

Poston, T. (2007). Foundational evidentialism and the problem of scatter. Abstracta, 3(2),

89-106.
