
PARIS GRADUATE SCHOOL OF
MANAGEMENT
INTERNATIONAL EXECUTIVE PROFESSIONAL MBA

DECEMBER 2010
22ND BATCH TAKE HOME EXAMINATION

Course: MANAGEMENT
INFORMATION SYSTEM

NAME: DANIEL AFEDZI

STUDENT NUMBER: WA 10209


TABLE OF CONTENTS

1. SOLUTION TO QUESTION 1

2. SOLUTION TO QUESTION 2

3. SOLUTION TO QUESTION 3

4. SOLUTION TO QUESTION 4

5. SOLUTION TO QUESTION 5

6. SOLUTION TO QUESTION 6

7. SOLUTION TO QUESTION 7

8. SOLUTION TO QUESTION 8

9. SOLUTION TO QUESTION 9

10. SOLUTION TO QUESTION 10


1. In a brief summary (no more than two paragraphs) explain the difference between data and
information as it applies to the data collected by Freeway Ford when it sells a car. What data
should be processed into information at Freeway Ford?

ANSWER

The term data refers to qualitative or quantitative attributes of a variable or set of variables. Data
(plural of "datum") are typically the results of measurements and can be the basis of graphs, images, or
observations of a set of variables. Data are often viewed as the lowest level of abstraction from which
information and then knowledge are derived. Raw data refers to a collection of numbers, characters,
images or other outputs from devices that collect information to convert physical quantities into
symbols that are unprocessed. (http://en.wikipedia.org/wiki/Data, visited on Monday 27th December
2010). Data consists of raw facts, such as an employee number, total hours worked in a week,
inventory part numbers, or sales orders. Several types of data can represent facts. When facts are
arranged in a meaningful manner, they become information.

Information is a collection of facts organized so that they have additional value beyond the value of the
individual facts. For example, sales managers might find that knowing the total monthly sales suits
their purpose more (more valuable) than knowing the number of sales for each sales representative.
Providing information to customers can also help companies increase revenues and profits.
‘Information about the package is as important as the package itself' (Frederick Smith, Chairman and
president of FedEx) (Ralph M. Stair, George W. Reynolds 2010). The terms information and
knowledge are frequently used for overlapping concepts. The main difference is in the level of
abstraction being considered. Data is the lowest level of abstraction, information is the next level, and
finally, knowledge is the highest level among all three. Data on its own carries no meaning. In order
for data to become information, it must be interpreted and take on a meaning. For example, the height
of Mt. Everest is generally considered as "data", a book on Mt. Everest geological characteristics may
be considered as "information", and a report containing practical information on the best way to reach
Mt. Everest's peak may be considered as "knowledge".
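
To make the distinction concrete for a car dealership such as Freeway Ford, the short Python sketch below aggregates raw sale records (data) into a monthly sales summary (information) that a sales manager could act on. All field names and figures are hypothetical and purely illustrative, not an actual dealership system.

# A minimal sketch: turning raw car-sale records (data) into a monthly
# summary (information). Field names and figures are hypothetical.
from collections import defaultdict

raw_sales = [  # data: individual facts captured at the point of sale
    {"vin": "1FTEX1EP5JK000001", "model": "F-150",  "price": 38500, "salesperson": "A. Mensah", "month": "2010-11"},
    {"vin": "1FAHP3F29CL000002", "model": "Focus",  "price": 17900, "salesperson": "K. Owusu",  "month": "2010-11"},
    {"vin": "1FMCU0GX4EU000003", "model": "Escape", "price": 24300, "salesperson": "A. Mensah", "month": "2010-12"},
]

# information: the same facts organized so they have value beyond the individual records
monthly_totals = defaultdict(float)
for sale in raw_sales:
    monthly_totals[sale["month"]] += sale["price"]

for month, total in sorted(monthly_totals.items()):
    print(f"{month}: total sales ${total:,.0f}")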

Freeway Ford

The Bishop Ford Freeway, formerly known as the Calumet Expressway, is a portion of Interstate 94
in northeastern Illinois, south of downtown Chicago. It runs from Interstate 57 south to the intersection
with Interstate 80, Interstate 294 (Tri-State Tollway) and Illinois Route 394. The Bishop Ford
constitutes 10 of the 77 miles (16 of 124 km) that Interstate 94 runs in Illinois.


This South Side highway is named for Chicago religious activist Bishop Louis Henry Ford, the former
presiding bishop of the 8.5 million member Church of God in Christ. He spent 40 years preaching in
the city of Chicago before dying at the age of 81 in 1995.

The Bishop Ford is the only freeway-grade, toll-free road in the Chicago area that is referred to as
"Freeway." All of the others (Dan Ryan, Kennedy, Edens, Stevenson, Eisenhower, Elgin-O'Hare,
Kingery, and Borman) are called "Expressway," even though there is little or no difference in the
quality of the road between the Bishop Ford and the others—calling them "Expressway" is merely a
Chicago colloquialism to which the Bishop Ford is the lone exception (likely due to the alliterative
value of using "Freeway" instead of "Expressway" following a word beginning with the letter "F").

The Bishop Ford turns into the Dan Ryan Expressway to the north, and the Kingery Expressway to the
east of the Tri-State Tollway.

REFERENCES
 http://en.wikipedia.org/wiki/Data, visited on Monday 27th December 2010
 Stair, Ralph M. and Reynolds, George W. (2010)

2. Name and discuss FOUR dimensions of information that the manager should consider


3. How has the increased use of Web sites by firms caused image management to increase in
importance?

4. Explain ELECTRONIC DATA INTERCHANGE.

ANSWERS

Electronic data interchange (EDI) is the structured transmission of data between organizations by
electronic means. It is used to transfer electronic documents or business data from one computer
system to another computer system, i.e. from one trading partner to another trading partner without
human intervention.

It is more than mere e-mail; for instance, organizations might replace bills of lading and even cheques
with appropriate EDI messages. It also refers specifically to a family of standards, e.g. UN/EDIFACT,
ANSI X12.

The National Institute of Standards and Technology in a 1996 publication [1] defines electronic data
interchange as "the computer-to-computer interchange of strictly formatted messages that represent
documents other than monetary instruments. EDI implies a sequence of messages between two parties,
either of whom may serve as originator or recipient. The formatted data representing the documents
may be transmitted from originator to recipient via telecommunications or physically transported on
electronic storage media." It goes on further to say that "In EDI, the usual processing of received
messages is by computer only. Human intervention in the processing of a received message is typically
intended only for error conditions, for quality review, and for special situations. For example, the
transmission of binary or textual data is not EDI as defined here unless the data are treated as
one or more data elements of an EDI message and are not normally intended for human
interpretation as part of online data processing."[1]

 EDI can be formally defined as 'The transfer of structured data, by agreed message standards,
from one computer system to another without human intervention'. Most other definitions used
are variations on this theme. Even in this era of technologies such as XML, web services, the
Internet and the World Wide Web, EDI may be the data format used by the vast majority of
electronic commerce transactions in the world.
(http://en.wikipedia.org/wiki/Electronic_Data_Interchange.htm, visited on Monday 27th
December 2010)

Electronic Data Interchange is commonly defined as the direct computer-to-computer exchange of
standard business forms, and it clearly requires a business process. Because the key idea involved is the
exchange of documents that allow a business application to take place without human intervention,
data processing is clearly necessary for application processing. Data communication is then necessary
for the exchange to take place. It is the marrying of these three disciplines that allows the "paperless
trading" that comprises EDI technologies.

Besides the three core disciplines that are internal to the organization, three other issues are
important for EDI trading to take place: standardization of formats, security, and value-added networks
(VANs).

Standards
EDI is considered to be a technical representation of a business conversation between two entities,
either internal or external. Note that there is a perception that "EDI" constitutes the entire electronic
data interchange paradigm, including the transmission, message flow, document format, and software
used to interpret the documents. EDI is considered to describe the rigorously standardized format of
electronic documents. EDI is very useful in supply chain management.

The EDI standards were designed to be independent of communication and software technologies. EDI
can be transmitted using any methodology agreed to by the sender and recipient. This includes a
variety of technologies, including modem (asynchronous, and bisynchronous), FTP, E-mail, HTTP,
AS1, AS2, etc. It is important to differentiate between the EDI documents and the methods for
transmitting them. When comparing the bisynchronous 2400 bit/s modems, CLEO devices, and
value-added networks traditionally used to transmit EDI documents with transmission via the Internet,
some people equated the non-Internet technologies with EDI and predicted, erroneously, that EDI itself
would be replaced along with them. These non-Internet transmission methods
are being replaced by Internet Protocols such as FTP, telnet, and E-mail, but the EDI documents
themselves still remain.

As more trading partners use the Internet for transmission, standards have emerged. In 2002, the IETF
published RFC 3335, offering a standardized, secure method of transferring EDI data via e-mail. On
July 12, 2005, an IETF working group ratified RFC 4130 for MIME-based HTTP EDIINT (a.k.a. AS2)
transfers, and is preparing a similar RFC for FTP transfers (aka. AS3). While some EDI transmission
has moved to these newer protocols, the providers of the value-added networks remain active.

EDI documents generally contain the same information that would normally be found in a paper
document used for the same organizational function. For example an EDI 940 ship-from-warehouse
order is used by a manufacturer to tell a warehouse to ship product to a retailer. It typically has a ship
to address, bill to address, a list of product numbers (usually a UPC code) and quantities. Another
example is the set of messages between sellers and buyers, such as request for quotation (RFQ), bid in
response to RFQ, purchase order, purchase order acknowledgment, shipping notice, receiving advice,
invoice, and payment advice. However, EDI is not confined to just business data related to trade but
encompasses all fields such as medicine (e.g., patient records and laboratory results), transport (e.g.,
container and modal information), engineering and construction, etc. In some cases, EDI will be used
to create a new business information flow (that was not a paper flow before). This is the case in the
Advanced Shipment Notification (856), which was designed to inform the receiver of a shipment, the
goods to be received, and how the goods are packaged.

There are four major sets of EDI standards:

 The UN-recommended UN/EDIFACT is the only international standard and is predominant
outside of North America.
 The US standard ANSI ASC X12 (X12) is predominant in North America.
 The TRADACOMS standard developed by the ANA (Article Numbering Association) is
predominant in the UK retail industry.
 The ODETTE standard is used within the European automotive industry.

All of these standards first appeared in the early to mid 1980s. The standards prescribe the formats,
character sets, and data elements used in the exchange of business documents and forms. The complete
X12 Document List includes all major business documents, including purchase orders (called
"ORDERS" in UN/EDIFACT and an "850" in X12) and invoices (called "INVOIC" in UN/EDIFACT
and an "810" in X12).

The EDI standard says which pieces of information are mandatory for a particular document, which
pieces are optional, and gives the rules for the structure of the document. The standards are like building
codes. Just as two kitchens can be built "to code" but look completely different, two EDI documents
can follow the same standard and contain different sets of information. For example, a food company
may indicate a product's expiration date while a clothing manufacturer would choose to send color and
size information.
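
The Python sketch below illustrates this point with simplified, X12-flavoured purchase-order lines: the mandatory elements appear in a fixed order, while the optional elements differ between two trading partners. The segment name, element order, and qualifiers are invented for illustration; this is not a conformant X12 or EDIFACT message.

# Illustrative only: simplified, X12-flavoured purchase-order lines showing how
# two documents can share the same segment structure (mandatory elements) while
# carrying different optional elements. NOT a conformant X12/EDIFACT message.
def po_line(line_no, qty, unit, price, item_id, **optional):
    # Mandatory elements first, in a fixed order, separated by "*"
    elements = ["PO1", str(line_no), str(qty), unit, f"{price:.2f}", "", "VN", item_id]
    # Optional elements appended as qualifier/value pairs agreed by the trading partners
    for qualifier, value in optional.items():
        elements += [qualifier.upper(), str(value)]
    return "*".join(elements) + "~"

# A food company might send an expiration date...
print(po_line(1, 48, "CA", 21.50, "SKU-1001", exp="20110630"))
# ...while a clothing manufacturer sends colour and size instead.
print(po_line(1, 12, "EA", 9.99, "SKU-2002", col="NAVY", siz="XL"))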

Traditional electronic data interchange (EDI) has been evolving for approximately 25 years and has
truly become the paperless environment that is so often talked about. EDI is a complicated mixture of
three disciplines: business, data processing, and data communications. This paper examines the
concepts from the perspectives of each discipline.

Internet standards are excluded from the discussion of communications protocols, since the audience is
probably already familiar with SMTP, MIME, and other Internet messaging protocols.


An example of EDI is a set of interchanges between a buyer and a seller. Messages from buyer to
seller could include, for example, request for quotation (RFQ), purchase order, receiving advice and
payment advice; messages from seller to buyer could include, similarly, bid in response to RFQ,
purchase order acknowledgment, shipping notice and invoice. These messages may simply provide
information, e.g., receiving advice or shipping notice, or they may include data that may be interpreted
as a legally binding obligation, e.g., bid in response to RFQ or purchase order.

EDI is being used also for an increasingly diverse set of concerns, for example, for interchanges
between healthcare providers and insurers, for travel and hotel bookings, for education administration,
and for government regulatory, statistical and tax reporting.

Standards Required for EDI.

From the point of view of the standards needed, EDI may be defined as an interchange between
computers of a sequence of standardized messages taken from a predetermined set of message types.
Each message is composed, according to a standardized syntax, of a sequence of standardized data
elements. It is the standardization of message formats using a standard syntax, and the standardization
of data elements within the messages, that makes possible the assembling, disassembling, and
processing of the messages by computer.

Implementation of EDI requires the use of a family of interrelated standards. Standards are required
for, at minimum: (a) the syntax used to compose the messages and separate the various parts of a
message, (b) types and definitions of application data elements, most of variable length, (c) the
message types, defined by the identification and sequence of data elements forming each message, and
(d) the definitions and sequence of control data elements in message headers and trailers.

Additional standards may define: (e) a set of short sequences of data elements called data segments, (f)
the manner in which more than one message may be included in a single transmission, and (g) the
manner of adding protective measures for integrity, confidentiality, and authentication into transmitted
messages.


Looking closer at EDI

EDI is commonly defined as the direct computer-to-computer exchange of standard business forms.
The key idea involved is the exchange of documents that allow a business application to take place
without human intervention. The ability to send business documents between machines simplifies and
expedites the business process itself. Many businesses choose EDI as a fast, inexpensive, and safe
method of sending purchase orders, requests for quotations, quotations, invoices, payments, and other
frequently used business documents.

Often today one will see the term EC/electronic data interchange (EC/EDI). This term has evolved
from placing EDI under the electronic commerce (EC) umbrella, EC being the broad view of electronic
trading. EDI is defined as the interprocess (computer application to computer application) communication
of business information in a standardized electronic form. EC includes EDI, but recognizes the need for
interpersonal (human to human) communications, the transfer of moneys, and the sharing of common
databases as additional activities that aid in the efficient conduct of business. By incorporating a wide
range of technologies, EC is much broader than EDI. However, the focus of this document is on EDI,
not EC.

For thousands of companies, the EDI-based supply chain is the most important messaging
infrastructure they have. But with an ever-changing business environment and technological advances,
staying up-to-date and optimized can be difficult and problematic.

There are many large EDI hubs with mission-critical applications and likewise, there are small
suppliers who are not set up to do Electronic Data Interchange at all. Some companies may set up
EDI services in the hope of trading with their trading partners, but then realize not everyone has the
same capabilities. It’s crucial for companies to be able to bridge the gap between disparate systems,
data formats and communication protocols in order to trade with anyone of any size in their
community and keep their businesses thriving.

Comparing EDI and fax

Similarities exist between EDI and fax in that both use telephone lines and both can travel from
computer to computer (Sawabini, 1995). There are distinct differences however. Fax is primarily paper
based and requires a human interface. Fax receipts are not generally acceptable to applications. Fax
machines accept nonstandard data formats, and anything that can be scanned can be faxed, whereas
EDI requires standard message formats between trading partners.


Comparing EDI and e-mail

Similarities also exist between e-mail and EDI. Both travel from computer to computer and both use
an electronic mailbox. However, three of the four differences listed for EDI vs. fax also apply to EDI
vs. e-mail: e-mail message format is not standard, e-mail requires human interface, and e-mail is not
acceptable to applications.

Data processing and EDI

One of the technological fields required to implement EDI is data processing. Data processing allows
the EDI operation to take information that is resident in a user application and transform that data into
a format that is recognizable to all other user applications that have an interest in using the data. In the
EDI environment, data processing will handle both outgoing and incoming data, as depicted in figure
1.

Figure 1: Data Processing and EDI

The user-defined files in figure 1 are the flat files that are produced by a business application. These
files may or may not be formatted by the user. These are the business files that need to be translated
into the X12 format.

The translation software in figure 1 is the software that maps the elements of a user-defined file into
the ANSI X12 or EDIFACT standard format. This software is available through commercial retailers
on various platforms from PCs to mainframes.

The mapping of the user-defined data elements into the translation software requires some skill in
mapping. The mapping itself requires knowledge of both the translation software and the EDI
standards being used so new mapping and processing rules can be set up for the translator. If a new
trading partner places no new requirements on the translator, the new trading partner is simply set up
under existing mapping rules. However, when the trading partner requires that additional or different
data fields be sent, a new mapping scheme needs to be identified and associated with that trading
partner (Sokol, 1995).
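
The sketch below gives a minimal picture of that translation step: a user-defined flat record is mapped into a set of standardized element names, with a partner-specific mapping scheme used only when a trading partner requires additional fields. The field names, partner identifiers, and rules are hypothetical, and real translation software would of course emit full standard segments rather than a Python dictionary.

# A minimal sketch of the translation step: mapping a user-defined flat record
# into standardized elements, with per-trading-partner mapping rules.
# Field names, partner IDs, and rules are hypothetical.
DEFAULT_MAP = {"order_no": "PO_NUMBER", "ship_to": "SHIP_TO_NAME", "item": "ITEM_ID", "qty": "QTY"}

PARTNER_MAPS = {
    # This partner requires an extra field, so it gets its own mapping scheme.
    "PARTNER-B": {**DEFAULT_MAP, "promised_date": "DELIVERY_DATE"},
}

def translate(record: dict, partner: str) -> dict:
    mapping = PARTNER_MAPS.get(partner, DEFAULT_MAP)
    return {std_name: record[user_field]
            for user_field, std_name in mapping.items()
            if user_field in record}

order = {"order_no": "4711", "ship_to": "Accra Depot", "item": "SKU-1001",
         "qty": 48, "promised_date": "2011-01-15"}
print(translate(order, "PARTNER-A"))  # falls back to the existing mapping rules
print(translate(order, "PARTNER-B"))  # uses the partner-specific scheme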


Data communications and EDI

The other technological field that is heavily involved in EDI implementation is data communications.
Once the standards have been employed and the required software is in place, the EDI participant still
needs to have the ability to communicate with remote trading partners to take advantage of EDI.

Transport mechanisms move the data

Data must be transported across telecommunications lines in order for the trading partners to trade
information. Following are some basic concepts that describe mechanisms and methods used in this
transport of data:

Direct connect is the term used to indicate that two EDI trading partners trade information directly to
each other without a third-party connection service. Direct connects are normally used by large
corporations for intracompany EDI transactions and for intercompany transactions with trading
partners that have established high-volume rates of exchange of EDI data.

Modems are heavily used by EDI practitioners today. Modem-to-modem connections provide a level
of security and reliability that long-time practitioners are reluctant to give up. The standard in the
industry, as this paper is written, is transmission by binary synchronous modem or "bisync." This
method allows for high-speed continuous transmission in which the sending and receiving modems are
controlled by clock pulses. The clock pulses regulate the rate and timing of the data flow.

Routers, although not the primary transport mechanism for EDI transactions today, have the potential
to become the de facto standard of transmission for high-volume traffic. Currently, routers are used
mainly over leased lines, requiring expensive setups and ongoing data communications transport costs.
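
As a small illustration of the transport step, the sketch below pushes a translated EDI file to a trading partner over FTP, one of the protocols mentioned earlier. The host, credentials, and file names are placeholders; a production setup would add retries, acknowledgement handling, and a secured channel such as FTPS or AS2.

# A minimal sketch of transmitting a translated EDI file over FTP.
# Host, credentials, and file names are placeholders.
from ftplib import FTP

def send_edi_file(host: str, user: str, password: str, local_path: str, remote_name: str) -> None:
    with FTP(host) as ftp:
        ftp.login(user=user, passwd=password)
        with open(local_path, "rb") as fh:
            ftp.storbinary(f"STOR {remote_name}", fh)

# Example (placeholder values, not run here):
# send_edi_file("edi.partner.example", "trading-user", "secret",
#               "outbound/po_4711.edi", "po_4711.edi")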

The business process and EDI

Any business application that can be improved through paperless trading in a fast, efficient
environment is a good candidate for EDI. EDI is currently widely used by the airline industry, banking
industry, credit card industry, and auto industry. The current push in the EDI world comes from
companies who wish to trade with each other electronically--buyers and their suppliers--hence the term
"trading partners."

Applications of EDI

The business process examined here, to which EDI concepts are applied, is the procurement process. This
business process was chosen for two reasons. First, within industry itself, new EDI technology is
developing fastest in this area. Second, the President has issued an initiative to streamline government
procurement through the use of EC. Since the initiative was announced in October 1993, the thrust
within the government has been to implement the initiative using EDI technologies. These factors
make the procurement process the most relevant business process to examine at this time.


A typical small purchasing application

The business application depicted in figure 2 is a simple purchasing application.

Figure 2: Business Application and EDI

As shown in figure 2, the procurement process normally begins with the buyer being made aware of a
need within the organization to make a purchase. As soon as a need is established and precisely
described, the buyer begins the process of selecting the supplier that will be used. Routine items may
be purchased using suppliers that have already been contracted with. New items or high-value items
may require investigation by the buyer in selecting an appropriate supplier.

The buyer will select a preliminary group of suppliers and then employ the methods of competitive
bidding, negotiation, or a combination of the two to secure the final supplier. When competitive
bidding is used, the buyer issues an RFQ to the suppliers that the buyer might be willing to do business
with. Typically, the RFQ will contain the same basic information that will be included on the purchase
order.

When a supplier receives an RFQ that the supplier has an interest in bidding on, the supplier issues a
quotation to the buyer. The quotation will contain pricing information so the buyer can do a price
comparison between the suppliers. For instance, an RFQ might be issued for 200 gallons of white,
latex-based paint. The supplier who is issuing a quotation may quote a price of $xxx.xx.

Once a supplier has been selected, the purchasing department issues a serially numbered purchase
order. The purchase order itself becomes a legally binding contract. For this reason the buyer will
carefully prepare the purchase order and ensure that the wording is precise and specific. Any drawings,
diagrams, or related documentation that is necessary to precisely describe the item being purchased
will be incorporated or referenced in the purchase order. Additionally any conditions or sampling plans
will be stated precisely.

Normally a list of terms and conditions designed to give legal protection to the buyer on various
matters prescribed by law is incorporated in, or attached to, all purchase orders as boilerplate to those
orders. These boilerplate terms and conditions cover a wide range of concerns including contract
acceptance, delivery performance and contract termination, shipment rejections, assignment and
contracting of the order, patent rights and infringements, warranties, compliance with regulations, and
invoicing and payment procedures.

Change orders are required when a company makes a change in the contract after a purchase order has
been issued. The buyer will issue the change order and, when accepted by the supplier, the change
order either supplements or replaces the original purchase order.

The original copy of the purchase order constitutes a legal offer to buy. The purchase contract then
comes into existence when the contract is performed or when formal acknowledgment of acceptance of
the offer is made.

Normal business methods suggest that the supplier may not bother to acknowledge the offer if the
items are immediately shipped to the buyer. When the items are not immediately shipped, then the
supplier should send the acknowledgment back to the buyer.

The supplier may acknowledge the buyer's order accepting the buyer's terms and conditions, or may
acknowledge and incorporate the supplier's own terms and conditions in the acknowledgment. If the
seller's terms are different from the buyer's, the law allows them to be incorporated into the contract as
long as they do not alter the buyer's intent or unless the buyer files a written objection to the inclusion
of new terms and conditions. In general, terms and conditions that are in conflict between buyer and
seller are excluded from the contract, leaving the settlement to negotiation or suit. For this reason it is
imperative that the buyer be aware of the terms and conditions in the order acceptance.

Marriage of the three disciplines

EDI involves three very different and distinct disciplines. First, there has to be a business process. If
the business process would be improved by being accomplished more quickly and with increased
efficiency, then the business process is a candidate for EDI. The business process is the domain of the
business functional area. Second, once the business process has been identified, data processing
technologies have to be applied to the business process so that the process can be handled using
computers. Some type of standard must come into play in the automation process so that paper
documents that are the output of the business process can be put into a format that is interchangeable
between computers. The automation of the business process is the domain of the data processing
discipline. Third, the standardized business form must be transmitted from and received by computers,
using data communications technologies. The data communications aspect of EDI is the domain of the
data communications discipline.

The marriage of these disciplines allows for the "paperless trading" that comprises EDI technologies.
As EDI technologies evolve, the terminology changes.

Paper document flow

The traditional document flow for purchasing transactions starts with data entry by the purchaser to
create a paper document to send by mail to trading partners. Once the trading partners receive the data,
they keystroke the information received into a local application and then perform more data entry by
entering a response into a local application. The resultant paper document is then mailed to the
purchaser.

The procedure is both time consuming and labor intensive because data from both trading partners has
to be entered twice, once at the point of creation and once at the point of entry to the foreign system. In
addition, the originator must await a paper response sent by mail.

EDI flow

EDI data is keyed in only one time, at the original point of entry. The data is then translated into a
standard format electronically and sent to the trading partner electronically. At the receiving end, the
data fields are mapped into local applications, and the only data entry required is for new data that may
be needed to respond to the data received.
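
The receiving-end counterpart of the earlier translation sketch is shown below: standardized elements arriving from a trading partner are mapped back into the fields of a hypothetical local order-entry application, so no re-keying is required. The element names and local field names are assumptions for illustration only.

# Receiving side: standardized elements mapped into local application fields,
# so the data does not have to be re-keyed. Names are hypothetical.
INBOUND_MAP = {"PO_NUMBER": "order_no", "ITEM_ID": "item", "QTY": "qty", "DELIVERY_DATE": "promised_date"}

def to_local_record(standard_elements: dict) -> dict:
    return {local: standard_elements[std]
            for std, local in INBOUND_MAP.items()
            if std in standard_elements}

inbound = {"PO_NUMBER": "4711", "ITEM_ID": "SKU-1001", "QTY": 48}
print(to_local_record(inbound))  # only new/response data would still need manual entry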

Time for transmission is also very fast in comparison to postal mail. Even on a slow modem
connection, the time is considerably shorter than through the postal service.

Security

One of the major roles that is provided by the data communications technology is the ability to apply
security to EDI transactions so that the transactions will not be tampered with or observed, depending
on the level of security needed. The security modules that are discussed in this section are depicted in
figure 3.


Figure 3: Data Communications Security

Confidentiality

Confidentiality requires that all communications between parties are restricted to the parties involved
in the transaction. This confidentiality is an essential component in user privacy, as well as in
protection of proprietary information and as a deterrent to theft of information services. Confidentiality
is concerned with the unauthorized viewing of confidential or proprietary data that one or both of the
trading partners do not want known by others. Confidentiality is provided by encryption.

Encryption is the scrambling of data so that it is indecipherable to anyone except the intended recipient.
Encryption prevents snoopers, hackers, and other prying eyes from viewing data that is transmitted
over telecommunications channels. There are two basic encryption schemes, private-key and public-
key encryption. Encryption, in general, is cumbersome and expensive.


Private-key encryption requires that both sending and receiving parties have the same private
encryption key. The sender encrypts the data using his key. The receiver then decrypts the message
using his identical key. There are several disadvantages to private-key encryption. In order to remain
secure, the keys must be changed periodically and the users must be in sync as to the actual keys
being used.
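
A minimal illustration of the shared-key idea, using the third-party Python "cryptography" package (Fernet), is sketched below: both trading partners must hold the same secret key, which is exactly the key-distribution burden noted above. The message content is a made-up example.

# Private-key (symmetric) encryption sketch using the third-party
# "cryptography" package: both parties must hold the same secret key.
from cryptography.fernet import Fernet

shared_key = Fernet.generate_key()      # must be delivered securely to both parties
cipher = Fernet(shared_key)

token = cipher.encrypt(b"PO*4711*SKU-1001*48~")   # sender encrypts with the shared key
print(cipher.decrypt(token))                       # receiver decrypts with the identical key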

Public-key encryption is gaining widespread acceptance as the preferred encryption technology. With
public-key encryption, a message recipient generates a matched set of keys, one public key and one
private key. The recipient broadcasts the public key to all senders or to a public location where the key
can be easily retrieved. Any sender who needs to send the receiver an encrypted message uses the
recipient's public key to encrypt the message. The private key, which is held in private by the recipient,
is the only key that can decipher messages encrypted with the matched public key. This scheme
requires that the private key cannot be generated from the public key.

Public key technology is the direction encryption technology is currently headed. With the advent of
X.500, databases will be built to store public keys and enhance the technology significantly.
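
The sketch below illustrates the public-key scheme described above, again using a recent version of the third-party "cryptography" package: the recipient publishes the public key, keeps the private key, and only the private key can decrypt what the public key encrypted. The message text is a made-up example.

# Public-key (asymmetric) encryption sketch with RSA-OAEP from the
# third-party "cryptography" package.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

recipient_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
recipient_public = recipient_private.public_key()   # broadcast or published in a directory

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

ciphertext = recipient_public.encrypt(b"invoice total: 12,400", oaep)  # any sender can encrypt
print(recipient_private.decrypt(ciphertext, oaep))                     # only the recipient can decrypt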

Effects and level of automation

The benefits associated with EDI often cause overblown expectations. EDI, in and of itself, is just
another way to format and transfer data. The real use of EDI and the amount of value to be gained
from its implementation depend upon whether or not EDI is integrated into the overall data processing
effort of the organization.

The effects of EDI depend greatly on the level of automation within an organization. If the
organization is only using EDI to send data in a format required by a trading partner, the effect is much
more limited than if EDI is integrated into the back-end processes of the organization. EDI
applications that are fed by back-end processes and the databases that support these processes and
then, in turn, feed the EDI data received back into the databases and back-end processes have a huge
impact on the total level of automation within the organization.

The well-known list of EDI-related benefits--lower costs, higher productivity, and reduced order-cycle
times--is attainable. But if the automation level of the organization is not high and is not integrated, the
effects of EDI will be lessened considerably.

BENEFITS OF EDI

"EDI saves money and time because transactions can be transmitted from one information system to
another through a telecommunications network, eliminating the printing and handling of paper at one
end and the inputting of data at the other," Kenneth C. Laudon and Jane Price Laudon wrote in their
book Management Information Systems: A Contemporary Perspective. "EDI may also provide
strategic benefits by helping a firm 'lock in' customers, making it easier for customers or distributors to
order from them rather than from competitors." EDI was developed to solve the problems inherent in
paper-based transaction processing and in other forms of electronic communication. In solving these
problems, EDI is a tool that enables organizations to reengineer information flows and business
processes. It directly addresses several problems long associated with paper-based transaction systems:

 Time delays—Paper documents may take days to transport from one location to another, while
manual processing methodologies necessitate steps like keying and filing that are rendered
unnecessary through EDI.
 Labor costs—In non-EDI systems, manual processing is required for data keying, document
storage and retrieval, sorting, matching, reconciling, envelope stuffing, stamping, signing, etc.
While automated equipment can help with some of these processes, most managers will agree
that labor costs for document processing represent a significant proportion of their overhead. In
general, labor-based processes are much more expensive in the long term than EDI alternatives.
 Accuracy—EDI systems are more accurate than their manual processing counterparts because
there are fewer points at which errors can be introduced into the system.
 Information Access—EDI systems permit myriad users access to a vast amount of detailed
transaction data in a timely fashion. In a non-EDI environment, in which information is held in
offices and file cabinets, such dissemination of information is possible only with great effort,
and it cannot hope to match an EDI system's timeliness. Because EDI data is already in
computer-retrievable form, it is subject to automated processing and analysis. It also requires
far less storage space.

Conclusions and future of EDI

EDI is well established as an effective technology for reducing costs and increasing efficiency. EDI
technologies are approximately the same age as Internet technologies. In the past, the technologies
have been mutually exclusive, but this is rapidly changing. As the two technological communities
begin to merge and as the business community sees the advantages of this merger, EDI and the Internet
will eventually become ubiquitous.

EDI users are already seeing dramatic cost savings by moving their traffic from the traditional VAN
services to the Internet. As EDI working groups within the Internet Engineering Task Force create
interoperability standards for the use of EDI over the Internet and as security issues are addressed, EDI
over the Internet will be part of normal business. The EDI working group already has a charter for an
interoperability standard for process-to-process EDI. Once that standard is in place, real-time EDI over
the Internet will replace normal time-delayed, batch-style interactions.

REFERENCES


 Bort, R., and Bielfeldt, G. R. Handbook of EDI. Boston, Massachusetts: Warren, Gorham and
Lamont.
 Canis, R. J., Value-added networks: What to look for now and in the future. Conference
Proceedings EDI 2000: EDI, Electronic Commerce, and You; (pp. 141-157).
 Kimberley, P. (1991). EDI. New York: McGraw-Hill.
 Sawabini, S. (1995). Introduction to EDI. Conference Proceedings EDI 2000: EDI, EC, and
You, (pp. 1-36).
 Sokol, P. K. (1995). From EDI to EC: A Business Initiative. New York: McGraw-Hill.

 http://en.wikipedia.org/wiki/Electronic_Data_Interchange.htm, visited on Monday 27th December 2010
 Kantor, Michael; James H. Burrows (1996-04-29). "ELECTRONIC DATA INTERCHANGE (EDI)".
National Institute of Standards and Technology. http://www.itl.nist.gov/fipspubs/fip161-2.htm

5. What are the benefits of an interorganizational system?

ANSWER

Interorganizational systems (IOS) adoption requires cooperation and collaboration between trading partners
and, therefore, is reliant on the nature of their relationships. There has been some research that examines
the match between the different types of relationships and different types of IOS adoption and how IOS
adoption moves from a simple system to a more sophisticated system. However, these studies do not
precisely define the important constructs needed to understand this adoption progression, which makes
them difficult to use for empirical research. This research introduces a new model, which is called
“IOS Adoption Maturity” model, to explicitly illustrate how organizations progress from one level of IOS
adoption to the next level. Based on the previous studies, we define three important constructs (IO
relationship intimacy, IOS sophistication and IOS adoption maturity) in the model. With this model, the
dynamics of IOS adoption progression can be better examined empirically.

INTRODUCTION

Interorganizational systems (IOS) are automated information systems shared by two or more companies
(Cash and Konsynski, 1985) such as Electronic Data Interchange (EDI) and Collaborative Planning,
Forecasting and Replenishment (CPFR). IOS offers trading organizations substantial benefits such as
reduced inventory costs, elimination of redundant handling of data entries, improved scheduling, processing
and distribution of goods, and improved information accuracy, to name a few (Mentzer, 2004; Premkumar
and Ramamurthy, 1995). IOS have become a strategic weapon for some organizations to attain competitive
advantage and have shifted the competition from single firms competing anonymously to supply chains
competing against other supply chains (Birou, Fawcett and Magnan, 1998; Lambert and Cooper, 2000).


Despite these benefits, many companies face difficulties in adopting these systems because such
implementations are highly reliant on trading partners’ existing relationships which often are not favorable
(Kurnia and Johnston, 2003). IOS adoption requires credible commitment of participating firms to work
collaboratively to achieve common objectives and goals. Because IOS adoption is contingent upon the
cooperation of two or more companies in agreeing to implement these systems, adoption cannot be
understood without considering the nature of Inter Organizational (IO) relationships required by trading
partners (Choudhury, 1997) IOS studies that investigate IO relationships can be grouped into two. The first
group investigates the IO relationship variables that affect IOS adoption (Ibrahim and Ribbers, 2006;
Karahannas and Jones, 1999; Nagy, 2006).These studies look at one or two aspects affecting a particular
IOS adoption and generalise their findings to other IOSs. In the second group, studies contend that because
there are different types of IOSs, they require different types of IO relationships (Choudhury, 1997; Shah,
Goldstein and Ward, 2002). These studies classify IO relationship types based on relationship intimacy or
commitment and IOS types based on integration and then match levels of relationship intimacy with the
levels of IOS integration (Shah et al, 2002; Ham, Reimer and Johnston, 2003). A more recent study (Ham
and Johnston, 2007) not only examines the interaction between IO relationship types and IOS types but
also investigates how organizations can move from lower levels to higher levels of intimacy of IO
relationship types and integration of IOS types.

While the second group of IOS relationship research has shed some light on how IOS progresses from
lower levels to higher levels, it is difficult to base an empirical investigation on this work. This is because
the existing literature does not provide satisfactory measures of IO relationship types and IOS types. In
addition, there are no IOS studies that have categorised relationships distinctively. This paper addresses the
need to redefine IO relationship types and IOS types more specifically, so that better evaluations of how
organizations move from lower levels to higher levels of IOS adoption can be done.

In particular, the aims of this study are to (1) define levels of IO relationship intimacy (IO relationship
types) precisely, (2) define levels of IOS sophistication (IOS types) precisely, and (3) to develop a model,
which we call the ‘IOS Adoption Maturity’ model, that shows the progression of IOS adoption by
incorporating the concepts of IO relationship intimacy and IOS sophistication. Based on the model, a
number of propositions are developed for further empirical studies.

Interorganizational Systems and competitive advantages - lessons from history

Global business constantly faces radical transformations stemming from advances in information
technology (IT). The concept of gaining competitive advantages by linking information systems across
organizations (e.g., supply chain integration) has taken on an overtone of dogma in many business
circles. Such electronic linkages are known as Interorganizational Systems (IOS). Lately, the growing
importance and easy accessibility of the Internet have propelled IOS to a new height. Undoubtedly,
IOS can have a great impact on organizational performance and industry structure. However, IT such
as the Internet is readily available to all companies, and most IOS concepts can be easily replicated.
Followers often enjoy newer and better technology that enables them to offer comparable services in a
short time and possibly at a lower cost. Late adopters can also learn from the experience of innovators
and thus avoid problems and hiccups along the way. How, then, can organizations achieve competitive
advantages from IOS?

This paper examines a number of successful IOS such as the SABRE reservations system from
American Airlines, the Apollo reservations system from United Airlines, the ASAP Express from
Baxter Healthcare Corporation, and the Wal-Mart Supply Chain system. These are some of the rare
few that have managed to sustain competitive advantages (albeit some for a short period of time) as
other companies installed similar electronic capabilities. The factors that contribute to the success of
these systems are discussed. The paper also looks at the impact of the Internet on IOS and the
strategies for IOS in the Internet era.

Global business constantly faces radical transformations, and one of the catalysts of change is
information technology. The growth of the Internet and Internet-based business, for example, has been
among the most astonishing technological and social phenomena of the last decade. The Internet is
only an example of the advancement of information technology and its impact on businesses. As early
as the 1980s, researchers and practitioners (6, 13, 14, 20, 21, 22, 34) argued that the information
revolution was transforming the nature of products, processes, companies, industries, and even
competition itself. One of the most intriguing of these transformations is the electronic linkage,
facilitated by Interorganizational Systems (IOS) (16, 44). IOS empowers companies to compete in a
totally new dimension (23). These information systems involve networks that transcend company
boundaries (35). IOS affects competition in three vital ways:

(a) It changes the structure of the industry and alters the rules of competition.

(b) It creates competitive advantages by giving companies new ways to cooperate and compete with
their rivals.

(c) It spawns whole new businesses, often from within a company's existing operations.

The concept of gaining competitive advantages by linking organizations with information technology
has taken on an overtone of dogma in many business circles and research arenas (2, 36). The main
reason for this is the highly publicized success stories of using information technology to gain
competitive advantages. Some popular success stories of the 1980s through the early 2000s include:


(a) SABRE, the American Airlines reservations system, and Apollo, the other leading computerized
reservations system from United Airlines, transformed marketing and distribution in the airline
industry.

(b) American Hospital Supply's ASAP order-entry and inventory-control system generated huge sales
increases for the company's medical products and turned the company into an industry leader (3).

(c) McKesson Drug Company's Economost system resulted in revolutionary changes in the industry as
a whole, affecting its profitability and concentration. At the same time, it changed the competitive
position of the company.

(d) Wal-Mart's continuous inventory replenishment system has become something of a legend in the
discount industry. The system, under nearly continuous development since the 1960s, has allowed
Wal-Mart to reach an unprecedented level of operational efficiency, with overhead of roughly 16.6%
as compared to industry averages of over 20%.

However, information technology - equipment, software, services, and personnel - is readily available.
For example, enterprise software systems (e.g., SAP, PeopleSoft) that centralize an organization's
operations and integrate an organization's internal system with systems of suppliers and customers are
available to any company that is willing to pay (25, 39). Late adopters often benefit from the
innovator's experience. They can implement the system with newer technology and produce
comparable services at lower costs. Technologies such as the Internet are almost freely accessible to all
parties; thus, creating and sustaining competitive advantages from integrating organizations'
information systems is much more difficult than it might appear (8, 9, 46).

COMPETITIVE ADVANTAGE VERSUS COMPETITIVE NECESSITY

McFarlan (26) asked managers to consider how information systems might benefit their companies.
Could information technology build barriers to competitive entry? Could it increase switching costs for
customers? Could it change the balance of power in supplier relationships? He went on to argue that
for many companies the answer was yes. By being the first to develop proprietary systems, pioneers
could revolutionize their industries. Increasingly, however, many researchers have begun to argue that
the answer is no.

Although the desire for competitive advantage remains the most commonly espoused reason for
developing IOS, many researchers argue that, in reality, most IOS were motivated by competitive
necessity. Many companies developed IOS as a defensive measure to stay even with the competition,
providing little, if any, competitive advantage for the organization. These systems are a cost of doing
business. This scenario is also true for the Internet. During the early part of 1997, "Reality Hits the
Internet," trumpeted the headline in The Wall Street Journal, and variations on the "You can't make
money on the Net" theme have been popping up in newspapers. As far as revenues are concerned, the
millions of dollars being thrown into commercial storefronts and publishing venues in Cyberspace are
yielding the electronic equivalent of sawdust (30). However, with billions in online sales predicted by
consulting firms, most companies cannot ignore the potential of electronic commerce. In fact, most
companies are implementing electronic commerce capabilities to stay in the game rather than to
achieve a sustainable competitive advantage against rivals.

Hopper (19), senior vice president for information systems at American Airlines, wrote:

SABRE's real importance to American Airlines was that it prevented an erosion of market share.
American began marketing SABRE to travel agents only after United pulled out of an industry
consortium established to explore developing a shared reservations system to be financed and used by
carriers and travel retailers. The way American was positioned as an airline - we had no hubs, our
routes were regulated, and we were essentially a long-haul carrier - meant that we would have lost
market share in a biased reservations system controlled by a competitor. SABRE was less important to
us as a biased distribution channel than as a vehicle to force neutral and comprehensive displays into
the travel agency market. (Siau, Keng, The Journal of Computer Information Systems, 2003)

Nevertheless, the American Airlines reservation system has evolved into one of the best-known
success stories of using IOS to gain competitive advantage. SABRE has helped to make American
Airlines a major force in the air travel distribution chain. Operating profits from SABRE in 1985 were
reportedly $143 million on revenues of $336 million (11).

Wal-Mart's supply chain system has emerged as the exemplar for the genre and, for a variety of
reasons, it has proven difficult to replicate in its entirety. This has propelled Wal-Mart into the position
of industry leader, a spot from which it has yet to do anything except look back in the distance at its
nearest competitors. Wal-Mart's data-fed inventory system is always up to date, a feature that has had
its competitors scrambling to replicate (24).

To be sure, many companies enjoy competitive advantages from IOS, however short-lived (4). IOS
gives many companies a way of at least temporarily differentiating themselves from the competition,
such as handling sales transactions more efficiently. But only a rare few manage to sustain this
advantage as other companies start to install similar electronic capabilities. This paper analyzes
some of these successful IOS that manage to gain and sustain competitive advantages over the years.
The factors that contribute to the success of these "winning" systems are also discussed. It should be
noted that the list is by no means exhaustive.

INTERORGANIZATIONAL SYSTEMS LESSONS FROM HISTORY

Two popular examples of the use of information technology for competitive advantage are the
American Airlines and United Airlines reservations systems. Both SABRE and Apollo have helped
make these carriers major forces in the air travel distribution chain. The usefulness of these airlines
examples extends beyond the anecdotal citation of reservations systems as evidence of the existence of
strategic information systems. There is much we can learn from the characteristics of these systems.

Proprietary Systems

Both American Airlines and United Airlines have built much of their business around massive,
centralized, proprietary computer systems. The advantage of controlling proprietary systems is
obvious. By being the first to develop proprietary systems, pioneers could revolutionize the industries
and put competitors at their mercy. For example, United dominated the computerized reservations
system market at the Denver hub. For over two years, United refused to allow Frontier, which was
United's chief competitor at Denver, to become an Apollo co-host for "competitive" reasons (15).

Moreover, since the host companies control the systems, they are usually better positioned to develop
software that maximizes the profits of the host companies. And by modifying the systems, they can
discipline competitors. For example, even after Frontier achieved co-host status under Apollo and
SABRE, bias problems continued to exist, giving the host carriers, United and American, a
competitive advantage. A famous example is the screen bias employed by the host companies: host
carriers maintained an edge by displaying their own schedules more prominently than those of other
co-host carriers. In addition to the screen bias enjoyed by United as a host
carrier, some co-hosts are more equal than other co-hosts in the tradition of Orwell's Animal Farm.
This creates new echelons of bias among co-hosts for which differentiated (and higher) rates can be
charged. Host carriers also have a competitive advantage through their computerized access to
confidential information of the competitors. In testimony before the Civil Aeronautics Board (CAB),
Frontier Airlines alleged that United Airlines was enjoying an unfair competitive advantage by
monitoring the load factors of competitors and then using the system to either lower prices or
broadcast special messages to travel agents. By controlling the systems, United and American have
exclusive and immediate access on their respective Apollo and SABRE systems to highly sensitive
sales data, which they can use to take prompt measures (e.g., bonus incentives, lower fares, more
schedules) to rectify a developing situation. Competitive success flowed to American Airlines and
United Airlines partly because they managed to build proprietary systems and diffuse them over a
broad, fast-moving competitive space.
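
The screen-bias tactic described above can be illustrated with a small, purely hypothetical sketch (it is not the actual Apollo or SABRE display logic): candidate flights are ranked by how close they depart to the requested time, but the host carrier's flights receive an artificial boost so that they float to the top of the travel agent's screen.

# Illustrative sketch of "screen bias" in a reservations display (hypothetical,
# not the actual Apollo/SABRE ranking logic). Flights are sorted by how far
# their departure is from the requested time; the host carrier's flights get
# an artificial bonus so they appear higher on the travel agent's screen.

HOST_CARRIER = "UA"          # assumed host of the system in this example
HOST_BIAS_MINUTES = 90       # hypothetical advantage granted to host flights

def display_order(flights, requested_departure_min):
    """Return flights in the order they would be shown on screen."""
    def sort_key(flight):
        gap = abs(flight["departure_min"] - requested_departure_min)
        if flight["carrier"] == HOST_CARRIER:
            gap -= HOST_BIAS_MINUTES   # biased: host looks "closer" than it is
        return gap
    return sorted(flights, key=sort_key)

flights = [
    {"carrier": "FR", "flight": "FR123", "departure_min": 9 * 60},       # 09:00
    {"carrier": "UA", "flight": "UA456", "departure_min": 10 * 60},      # 10:00
    {"carrier": "FR", "flight": "FR789", "departure_min": 9 * 60 + 30},  # 09:30
]

for f in display_order(flights, requested_departure_min=9 * 60):
    print(f["carrier"], f["flight"])
# With the bias, UA456 is listed first even though both FR flights
# depart closer to the requested 09:00.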

Wal-Mart obviously considers its inventory replenishment system to be proprietary. When Amazon
managed to hire two of the IT professionals intimately involved with and familiar with Wal-Mart's IT
system, Wal-Mart immediately responded by filing a lawsuit over what it saw not only as corporate
head-hunting but also as a major threat to its inventory/supply chain system. Wal-Mart has spent
considerable effort ensuring that its supply chain operates in near-real time. In some ways, the
supply chain actually resembles a demand chain rather than a supply chain, a system that conveys a
competitive advantage. Wal-Mart apparently considers its IT system worthy of protection against
replication by competitors, especially the most sensitive and proprietary parts of the system.
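
Wal-Mart's actual replenishment logic is proprietary, so the sketch below only illustrates the generic idea behind a data-fed continuous-replenishment loop, using a simple reorder-point, order-up-to policy; the item numbers and thresholds are invented for the example.

# Generic reorder-point / order-up-to replenishment check (illustrative only;
# Wal-Mart's real system is proprietary and far more sophisticated).
# For each (store, item): if projected stock falls to the reorder point,
# order enough to bring it back up to the target level.

def replenishment_orders(positions):
    """positions: list of dicts with on_hand, on_order, reorder_point, order_up_to."""
    orders = []
    for p in positions:
        projected = p["on_hand"] + p["on_order"]
        if projected <= p["reorder_point"]:
            orders.append({
                "store": p["store"],
                "item": p["item"],
                "quantity": p["order_up_to"] - projected,
            })
    return orders

positions = [
    {"store": 101, "item": "SKU-001", "on_hand": 12, "on_order": 0,
     "reorder_point": 20, "order_up_to": 60},
    {"store": 101, "item": "SKU-002", "on_hand": 45, "on_order": 10,
     "reorder_point": 30, "order_up_to": 80},
]

for order in replenishment_orders(positions):
    print(order)   # only SKU-001 triggers an order (for 48 units)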

Though not related to IOS, Microsoft's control of the operating systems for personal computers is
another good illustration of the strategic value of proprietary systems. Microsoft's huge fortune is
built on its ownership of the DOS and Windows operating systems. By controlling the operating
systems of roughly 95% of personal computers, Microsoft was able to design other software packages
(e.g., Microsoft Word, Microsoft Access, and Microsoft Excel) that were tightly coupled with its operating
systems. Using the Windows operating system as a bargaining chip, Microsoft was able to capture
market share for its other software packages by punishing those manufacturers that refused to comply.
One incident involved the use of Windows operating systems to force personal computer
manufacturers to license and distribute Microsoft's Internet browser, Internet Explorer. The action was
designed to undermine the dominant market position of Microsoft's major competitor for Internet
browsers, Netscape (10). In the words of former U.S. Attorney General Janet Reno, "Microsoft
is...taking advantage of its Windows monopoly to protect and extend that monopoly."

Broad Diffusion

As can be seen from the airline reservations systems, some successful systems are proprietary, but
open. In this paper, we define an open proprietary system as a system that is controlled by the host
company but is available to competitors and other organizations in the buyer-supplier chain. One
advantage of open systems is that they can be broadly diffused. Closed systems do not win broad
franchises, and large profits come only from broad franchises. Of course, diffusion decisions are not
without risk. Choosing the right degree of openness is one of the most subtle and difficult decisions in
these contests. In certain cases, an IOS might derive its competitive advantage from being used by
only a limited number of organizations. One example is OTISLINE, which offered
customers improved service; there may be no advantage to OTIS in opening its system to
competitors.

Another good example to illustrate the risk of openness is from the desktop computing arena. Although
not directly related to IOS, it again provides us an invaluable lesson. Ironically, IBM badly fumbled an
equivalent opportunity in desktop computing, opening and handing over the two most critical PC
architectural control points - the operating system and the microprocessor - to Microsoft and Intel.
Since any clone maker could acquire the operating system software from Microsoft and the
microprocessor from Intel, making PCs became a brutal commodity business. As a high-cost
manufacturer, IBM now holds a small fraction of the market it created. On the other hand, Microsoft
and Intel are enjoying hefty after-tax margins and handsome stock prices. Another related story is
Apple. Apple Computer flip-flopped on the openness question for a long time. Initially, it was
against licensing other manufacturers to produce clones. Most observers believe that its refusal to
license its operating system in the 1980s - as IBM did - is the root of its current problems (30). That
decision was changed by former CEO Gil Amelio. Apple has also tried to limit clone-makers as
they were eating into Apple's profits. This, again, goes to show the difficulty involved in making the
openness decision.

The trend toward buying more hardware and software from third-party vendors also means that
competitors can easily replicate the system. Software tools such as relational databases, expert
systems, and computer-aided software engineering are helping to create powerful applications that
meet specialized needs faster and at reasonable cost. It is, therefore, increasingly difficult to design
information systems that locked-out competitors, or coalitions of locked-out competitors, cannot
eventually imitate or surpass. Microsoft's prompt response to and rapid capture of the Internet browser
business clearly illustrates this point. The Internet browser business, once thought to be eluding the
grasp of Microsoft, is now a non-contest. If Netscape had licensed the browser to Microsoft as
requested, it would be a totally different playing field today. Thus, where possible, the
best solution is to open the system to competitors, lessening the probability of the competition
developing similar systems and slowing the development of standards in the industry.
For example, American Hospital Supply has opened ASAP to products from rival companies.
Similarly, organizations (such as the AMR Corporation) that have developed centralized systems
eagerly share access to, and sometimes control, their systems. American Airlines' willingness to share
its reservation system with its competitors and sell information services through its IT subsidiary,
AMRIS, reduces the incentives of rivals to make an investment in building a similar competence (17).

Another factor pushing for openness is economies of scale. It is increasingly difficult for one company
to marshal the financial resources to build and maintain new information systems on the necessary
scale. Truly useful IOS are becoming too big and too expensive for any one company to build and
own; joint ventures will become the rule rather than the exception. Furthermore, for companies to
remain low-cost providers of information, they must tap the enormous capacities of their systems.
Tapping that capacity requires opening the system to as many information suppliers as possible and
offering it to as many information consumers as possible.

The market also demands open systems (44). Different supplier systems are vendor-specific, requiring
multiple hardware or software, private networks, and different transaction formats. Consumers do not
want, nor can they afford, to have five or six terminals from different suppliers on their desks. They
want to perform all transactions with all vendors using one system with one interface, which is
considerably more convenient than separate systems with individual formats, passwords, and reports.

In short, cost savings for the hosts will continue to be minimal as long as the penetration of IOS
remains small and there is a necessity to maintain parallel non-electronic systems. And the penetration
will be small unless the system is open to other information suppliers.

Continuous Evolution

Sustaining competitive advantage through IOS involves a continuous process of change (29). The dominant
positions of American and United in the airlines industry, notwithstanding events such as 9-11 and the
war on terror in Afghanistan and Iraq, can be understood as the outcome of an evolutionary process
that parallels developments in information technology and the air transport industry. Since the late
1940s, the goals for reservations systems have changed and expanded. At first, the primary incentive
was to reduce clerical costs. However, it soon became apparent that an accurate count of the number
and names of passengers for each flight was fundamental to controlling airline operations. Information
captured through the reservations process was used to manage passenger service levels and aircraft
capacity and to plan for ancillary requirements such as baggage handling, food, and fuel. Eventually,
the marketing potential inherent in the systems came to dominate the airline industry's retail
distribution channels. Starting from an electromechanical base, the airline reservations systems
evolved along with the computer and air transport industries. Their expanding scale and scope were
influenced by developments in both software and hardware and affected by the regulation and
deregulation of the domestic airline market. In the process, these systems went from being useful to
being essential assets, equal in importance to an airline's fleet.

The lesson from the airline industry is simple: in this era of rapidly evolving information technology,
there is no end to change. American Airlines introduced a service that allows companies to automate
their travel and entertainment (T&E) information. Building on the SABRE system, the application
automatically transmits booking information to the customer's corporate headquarters with the goal of
creating an electronic T&E report for each traveler. The system also has a computerized-reservations
and yield-management system for the hotel and car rental industries. These developments have
revolutionized the travel industry and redefined the market, affecting pricing strategies and marketing
techniques in the hotel and car rental industries in much the same way that Apollo and SABRE
transformed the airline business.
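
The yield-management capability mentioned above rests on an idea that can be sketched with Littlewood's classic two-fare rule: keep protecting seats for the high fare as long as the probability of selling a protected seat at the high fare, multiplied by that fare, still exceeds the low fare. The demand figures below are invented, and real revenue-management systems use far richer forecasts.

# Sketch of Littlewood's two-fare rule for yield management (illustrative
# numbers; real systems forecast demand per flight, fare class, and date).
# Protect seats for the high fare while P(high-fare demand > protected seats)
# is at least fare_low / fare_high.
import math

def normal_survival(x, mean, std):
    """P(D > x) for a normally distributed demand D."""
    return 0.5 * math.erfc((x - mean) / (std * math.sqrt(2.0)))

def protection_level(fare_high, fare_low, demand_mean, demand_std, capacity):
    ratio = fare_low / fare_high
    for seats in range(capacity + 1):
        if normal_survival(seats, demand_mean, demand_std) < ratio:
            return seats
    return capacity

protect = protection_level(fare_high=400.0, fare_low=150.0,
                           demand_mean=40.0, demand_std=12.0, capacity=150)
print("Seats protected for the high fare:", protect)
print("Seats available to discount bookings:", 150 - protect)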

The failure to consider all the strategic advantages to be gained from the creative use of information is
dangerous. One competitor in the elevator industry copied the other's move to centralize service
records. But the copycat company went a step further: it identified elevators that chronically failed and
then approached affected clients with proposals to rebuild those units. The innovator had, at least
temporarily, a whole new market to itself (5).

Standards Development

The state of standards development is a critical variable in the evolution of IOS within individual
companies and across industries. The existence of standards is not only a prerequisite for the high-
volume penetration of IOS, which can lead to significant cost savings, but standards also change the
competitive dynamics of IOS because they change the rules of success. The evolution of widely-
accepted IOS within an industry makes proprietary systems, such as those developed by early IOS
"winners" (e.g., American Hospital Supply, American Airlines, Wal-Mart), impractical or at least
much more difficult to "sell" to trading partners.

Progress in standards development varies greatly from industry to industry. The evolution of standards
seems to depend a great deal on industry structure, the strategies of early IOS adopters within an
industry, the spirit of cooperation that exists between companies, and the strength of industry
associations. Industries with strong trade associations (e.g., transportation and groceries) or where
regulatory standards are imposed (e.g., banking and airlines) are most likely to be early developers of
standards. Two examples are the Universal Product Code (UPC) in the grocery industry and magnetic
ink character recognition (MICR) in banking, along with the magnetic stripes on credit cards and
automatic teller machine (ATM) cards.
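
The UPC standard is a concrete example of how a shared code underpins interorganizational data exchange: every trading partner validates a scanned 12-digit UPC-A number in exactly the same way, using the published check-digit rule sketched below.

# UPC-A check-digit calculation: digits in the odd positions (1st, 3rd, ...,
# 11th) are summed and multiplied by 3, the even-position digits are added,
# and the check digit brings the grand total up to a multiple of 10.

def upc_check_digit(first_11_digits):
    digits = [int(d) for d in first_11_digits]
    odd_sum = sum(digits[0::2])    # positions 1, 3, 5, 7, 9, 11
    even_sum = sum(digits[1::2])   # positions 2, 4, 6, 8, 10
    return (10 - (odd_sum * 3 + even_sum) % 10) % 10

def is_valid_upc(code12):
    return len(code12) == 12 and upc_check_digit(code12[:11]) == int(code12[11])

print(upc_check_digit("03600029145"))   # prints 2, so 036000291452 is valid
print(is_valid_upc("036000291452"))     # True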

Of course, not all companies are anxious to establish IOS standards. For example, when suppliers see
the opportunity for "significant" competitive advantage, it is in their interest to preempt evolving
standards and create their own proprietary protocols. This is precisely the reason why United Airlines
pulled out of an industry consortium established to explore the development of a shared reservations
system to be financed and used by carriers and travel retailers. In retrospect, United made the correct
choice in building its own proprietary Apollo system.

One way to delay the development of standards is to open the system to competitors. Another way is
for the company to market its own existing proprietary protocols as the industry standard. Baxter
launched ASAP Express as an IOS that acts as an industry platform, intermediating between hospitals
and hospital suppliers. Baxter even declared that ASAP Express offered a "level playing field" that
gave no advantage to any one vendor.

Summary

In summary, the only way companies will be able to continually differentiate themselves from
competitors implementing IOS is by continuously adding new capabilities to the systems. An evolving
system is a moving target for competitors. Lock-in is sustainable only when the company aggressively
and continuously upgrades the system and continually and compatibly extends the system itself. The
airline industry also illustrates that sustainable advantage is not only the result of extraordinary
vision but also the result of a consistent exploration of opportunities revealed during the evolution of
adaptable systems.

THE NEXT STAGE OF COMPETITION

For the past few decades, much of the money and energy were spent on the first stage of the process -
building hardware, software, and networks powerful enough to generate useful data (1, 27). That
challenge is close to being solved; most companies have their arms around the data-gathering
conundrum. It is time to turn to another page in the chronicle of strategic information systems.

While it is more dangerous than ever to ignore the power of IOS, it is even more dangerous to believe
that on its own, an IOS can provide an enduring business advantage. In the not-so-distant future,
powerful mobile computers (e.g., tablet PCs and other mobile devices) will be a part of the business
environment as familiar as telephones are today (38, 41, 42, 43). They will also be as simple to use as
telephones, or at least nearly so. The Internet will be the global electronic library enabling users to
have any and every piece of information at their fingertips instantly. With the infrastructure provided
by the Internet, companies will be able to build an IOS much faster. For example, Extranets, Internet-
based information access control and delivery systems that enable companies to selectively and
securely open and share their corporate Intranet resources, will enable companies to link using their
existing Internet infrastructure. As a result, companies will find it more difficult to differentiate
themselves simply by automating or building IOS faster than the competition. It will be easier for
every organization to automate and to capture the efficiency benefits of IOS.
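
The idea of selectively and securely sharing intranet resources over an extranet can be sketched very simply: each partner is issued a credential together with an explicit list of the resources it may read, and every request is checked against both. The partner names, keys, and resources below are hypothetical, and a production extranet would add encryption, a proper identity provider, and audit logging.

# Minimal sketch of selective extranet access control (hypothetical partners,
# keys, and resources). A real deployment would sit behind TLS and a proper
# identity provider; this only shows the "selective sharing" idea.
import hmac

PARTNER_ACCESS = {
    "acme-supplies": {
        "api_key": "k-9f2c71",                       # issued secret (hypothetical)
        "resources": {"purchase_orders", "inventory_levels"},
    },
    "metro-logistics": {
        "api_key": "k-41be08",
        "resources": {"shipping_schedules"},
    },
}

def authorize(partner_id, presented_key, resource):
    """Return True only if the partner is known, the key matches, and the
    requested intranet resource has been explicitly shared with it."""
    partner = PARTNER_ACCESS.get(partner_id)
    if partner is None:
        return False
    if not hmac.compare_digest(partner["api_key"], presented_key):
        return False
    return resource in partner["resources"]

print(authorize("acme-supplies", "k-9f2c71", "inventory_levels"))       # True
print(authorize("acme-supplies", "k-9f2c71", "shipping_schedules"))     # False
print(authorize("metro-logistics", "wrong-key", "shipping_schedules"))  # False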

This, however, does not mean that IOS will lose its pivotal role in the future or that technology
leadership will be less relevant to competitive success. Changes in information technology come so
rapidly and irresistibly, and the consequences of falling behind are becoming so irreversible that
companies will either master and re-master the technology or die. Think of it as a technology
treadmill: companies will have to run harder and harder just to stay in place. But organizations that
stay on the treadmill will be competing against others that have done the same thing. In this sense, the
information utility will have a leveling effect. Developing new innovative computer systems will offer
less decisive business advantages than previously, and these advantages will be more fleeting and more
expensive to maintain. There is still plenty of room for differentiation in the next stage of competition,
but differentiation will be of a new and more difficult sort.

Information Competition

One scenario for the next arena of competitive differentiation revolves around the intensification of
analysis and mining of data. In such an environment, information technology will be at once more
pervasive but less potent. As astute managers maneuver against rivals, they will focus less on being the
first to build proprietary electronic tools than on becoming the best at using and improving generally
available tools to enhance what their organizations already do well. Managers will shift their attention
from systems to information to knowledge (28, 37). One good example is American Airlines. For
years, American Airlines guarded its software jealously. Since 1986, however, American Airlines has
sold SABRE's revenue-management expertise to any company that wanted to buy it. In the words of
Max Hopper (19), Senior Vice President for Information Systems at American Airlines:

Because we believe our analysts are better at using the software than anyone else in the world.
Whatever "market power" we might enjoy by keeping our software and expertise to ourselves is not as
great as the revenue we can generate by selling it.

This is the competitive philosophy in the new era: compete on the use of electronic tools, not on their
exclusive ownership. Similarly, Mrs. Fields marketed to other retail chains the sophisticated
networking and automation system with which it ran its cookie operations. Price Waterhouse was
helping companies like Fox Photo evaluate and install the Retail Operations Intelligence system, the
backbone of Mrs. Fields' nationwide expansion. McKesson Drug offered its order-processing IOS
(Economost) to other drugstores.

As in airlines and so many other industries, competition has shifted from building tools that collect
data to using generally available tools to turn data into information and information into knowledge.
An example of such activities would be data mining, which is gaining importance in many
organizations (40). This is especially true in the Internet era. The exponential growth of the Internet
and the World Wide Web puts global information at our fingertips. In addition, the Internet is
revolutionizing the way information is accessed, processed, and distributed (e.g., web information
mining). The shortage of data is no longer the problem; the ability to mine the data to produce
information is. Only companies that excel at turning data into information and then analyzing the
information quickly and intelligently enough to generate superior knowledge will sustain the
competitive advantage in the next round of competition.
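
A minimal sketch of the shift described in this paragraph, from collecting data to producing information, is given below: raw transaction records (data) are aggregated into monthly totals per region (information) that a manager can act on; all figures are invented.

# Turning raw data into information: individual sales transactions (data)
# are aggregated into monthly totals per region (information) that support
# a management decision. All figures are invented for illustration.
from collections import defaultdict

transactions = [
    {"date": "2010-11-03", "region": "North", "amount": 1200.0},
    {"date": "2010-11-17", "region": "South", "amount": 800.0},
    {"date": "2010-12-02", "region": "North", "amount": 950.0},
    {"date": "2010-12-21", "region": "North", "amount": 400.0},
    {"date": "2010-12-22", "region": "South", "amount": 1500.0},
]

monthly_sales = defaultdict(float)
for t in transactions:
    month = t["date"][:7]                      # e.g., "2010-12"
    monthly_sales[(month, t["region"])] += t["amount"]

for (month, region), total in sorted(monthly_sales.items()):
    print(month, region, total)
# The monthly totals, not the individual transactions, are what a
# sales manager acts on.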

Information Partnerships

One of the most intriguing developments facilitated by IOS is information partnerships that allow
organizations to join forces without merging. The same companies will compete in one field and
cooperate in another (17). The partnership between Apple and Microsoft, archrivals in the personal
computer business, is a good example. Information partnerships empower companies to compete,
ironically, by allowing them new ways to cooperate. For example, American Airlines has allied with
Citibank so that air mileage credit in its frequent flyer program is awarded to credit card users - one
mile for every dollar spent on the card. American has thus increased the loyalty of its customers, and
the credit card company has gained access to a new and highly credit-worthy customer base for cross
marketing. This partnership has been expanded to include MCI, a major long-distance phone company,
which offers multiple airline frequent flyer miles for each dollar of long-distance billing. Citibank, one
of the largest issuers of VISAs and MasterCards, initiated a partnership to steer its millions of VISA
holders to MCI, a response to AT&T's entry into credit cards (the Universal Card). The Star Alliance, a
network of more than 15 airlines (e.g., United, Air Canada, Singapore Airlines, and Lufthansa), is
another good example of information partnerships.

Sometimes, partnerships are formed because no one firm possesses all the requisite resources to bring
the new product or service to fruition (17). Other times, through an information partnership, diverse
companies can offer novel incentives and services or participate in joint marketing programs. They can
take advantage of new distribution channels or introduce operational efficiencies and revenue
enhancements. Partnerships create opportunities for scale and cross-selling. They can make small
companies look, feel, and act big, reaching for customers once beyond their grasp. On the other hand,
partnerships can make big companies look small and close to the customer, targeting and servicing custom markets.
Partnerships, in short, provide a new basis for competitive differentiation. Companies that hold back
from forging a partnership may soon find themselves frozen out by early movers. Airline mergers are a good
example of the importance of not being left out in the cold.

With the increasing accessibility of the Internet, new technologies are constantly being developed to
further facilitate information sharing between partners and the forming of partnerships. Extranets can
be used to connect upstream and downstream value chains and form information partnerships, and
Internet-enabled partnerships are now a common sight.

Reengineering the Business Processes

Another critical factor in determining which companies derive the greatest benefits from IOS is the
ability to manage major changes in work design and organizational structure (18). Gearing toward
efficiency and control alone cannot address the fundamental performance deficiencies in business
processes. Companies must derive cost savings from the systems to cover the investment. But these
savings can only come through the painful exercise of redesigning basic organizational structures and
work processes. Simply laying IOS on top of existing work processes trivializes the potential of these
systems. To assure the effective use of the technology, organizational redesign must become
synonymous with IOS development. Those who do gain significant competitive advantage from IOS
are those who manage to integrate the technology effectively into their organizations in such a way
that they can continually add valuable new capabilities to the system while deriving cost savings from
increased productivity and decreased overhead made possible by IOS.

The usual methods for boosting performance - process rationalization and automation - haven't yielded
the dramatic improvements companies need. In particular, heavy investments in information
technology have delivered disappointing results largely because companies tend to use technology to
mechanize old ways of doing business. They leave the existing processes intact and use computers
simply to speed them up. In 1987, Business Week (33) reported that almost 40% of all U.S. capital
spending went to information systems, some $97 billion a year, but IT has been used in most cases to
hasten office work rather than to transform it. With few exceptions, IT's role in the redesign of non-
manufacturing work has been disappointing; few companies have achieved major productivity gains.
Aggregate productivity figures for the United States have shown no increase since 1973.

Portland General Electric (PGE), a major customer of power generation equipment, called upon
Westinghouse's Productivity and Quality Center, a national leader in process improvement, to help
them implement EDI. The Westinghouse team asked if it could analyze the entire process by which
PGE procured equipment from Westinghouse and other suppliers. They found that, while
implementing EDI could yield efficiencies on the order of 10%, changing the overall procurement
process, including using EDI and bypassing the purchasing department altogether for most routine
purchase orders, could lead to much greater savings. The time to execute a standard
purchase order, for example, could be reduced from 15 days to half a day, and the cost could be reduced
from almost $90 to $10.
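
These per-order figures translate directly into the business case for redesigning the process rather than merely automating it. The short calculation below works through that arithmetic; the annual order volume is an assumed figure for illustration, not one reported for PGE.

# Working through the purchase-order savings cited above. The $90 -> $10 cost
# and 15-day -> 0.5-day cycle time come from the PGE/Westinghouse example;
# the 20,000 routine orders per year is an assumed volume for illustration.

cost_before, cost_after = 90.0, 10.0
days_before, days_after = 15.0, 0.5
annual_orders = 20_000        # hypothetical volume

annual_savings = annual_orders * (cost_before - cost_after)
cycle_time_reduction = (days_before - days_after) / days_before

print(f"Annual processing savings: ${annual_savings:,.0f}")    # $1,600,000
print(f"Cycle time reduced by {cycle_time_reduction:.0%}")      # 97%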

Thus, the new watchword is business process reengineering (44). A 1994 survey found that 69% of
North American and 75% of European companies had at least one reengineering project underway
(12). Nolan (32) wrote the strategic reengineering equation as Radical Change = New Organization +
IT. With business process reengineering, this means that information technology should be viewed as
more than an automating or mechanizing force; it could fundamentally reshape the way business is
done. Similarly, business activities should be viewed as more than a collection of individual or even
functional tasks; they should be broken down into processes that can be designed for maximum
effectiveness, in both the manufacturing and service environments. In short, rather than maximizing
the performance of particular individuals or business functions, companies must maximize
interdependent activities within and across the entire organization.

DuPont's concept of "effectiveness in use" as the major criterion of customer satisfaction is one leading
approach to measuring the effectiveness of interorganizational processes. DuPont was driven not
simply by the motive of selling a product but by the desire to link its internal processes (for creating
value in its products) to its customer's processes (for using those products). This concept led DuPont to
furnish EDI-provided Material Safety Data Sheets along with the chemicals it sells to its customers to
ensure their safe use.

Another company that succeeded in using IOS to reengineer its business process is McKesson Drug
Company (8). McKesson's Economost system allowed the company to cut the number of warehouses
in half. Inventory turns over seven times a year, so fast that stock is generally held less than two weeks after
payment is made to the manufacturer. Despite a six-fold increase in sales, telephone order clerks were
down from 700 to 15, sales staffs were cut in half, and national purchasing staffs were down from 140
to 12.

At the moment, organizational units are focusing increasingly on Internet-enabled redesign of business
processes. Few would question the fact that information technology is a powerful tool for reshaping
business processes. The individuals and companies that can master redesigning processes around IT
and the Internet will be well equipped to succeed in the future.

CONCLUSIONS

providing little, if any, competitive advantage for the organization. These systems are a cost of doing
business. This scenario is also true for the Internet. During the early part of 1997, "Reality Hits the
Internet," trumpeted the headline in The Wall Street Journal, and variations on the "You can't make
money on the Net" theme have been popping up in newspapers. As far as revenues are concerned, the
millions of dollars being thrown into commercial storefronts and publishing venues in Cyberspace are
yielding the electronic equivalent of sawdust (30). However, with billions in online sales predicted by
33
DANIEL AFEDZI MANAGEMENT INFORMATION SYSTEMSTUDENT NO: WA 10209

consulting firms, most companies cannot ignore the potential of electronic commerce. In fact, most
companies are implementing electronic commerce capabilities to stay in the game rather than to
achieve a sustainable competitive advantage against rivals.

Hopper (19), senior vice president for information systems at American Airlines, wrote:

SABRE's real importance to American Airlines was that it prevented an erosion of market share.
American began marketing SABRE to travel agents only after United pulled out of an industry
consortium established to explore developing a shared reservations system to be financed and used by
carriers and travel retailers. The way American was positioned as an airline - we had no hubs, our
routes were regulated, and we were essentially a long-haul carrier - meant that we would have lost
market share in a biased reservations system controlled by a competitor. SABRE was less important to
us as a biased distribution channel than as a vehicle to force neutral and comprehensive displays into
the travel agency market.

Nevertheless, the American Airlines reservation system has evolved into one of the best-known
success stories of using IOS to gain competitive advantage. SABRE has helped to make American
Airlines a major force in the air travel distribution chain. Operating profits from SABRE in 1985 were
reportedly $143 million on revenues of $336 million (11).

Wal-Mart's supply chain system has emerged as the exemplar for the genre and, for a variety of
reasons, it has proven difficult to replicate in its entirety. This has propelled Wal-Mart into the position
of industry leader, a spot from which it has yet to do anything except look back in the distance at its
nearest competitors. Wal-Mart's data fed inventory system is always up to date, a feature that has had
its competitors scrambling to replicate (24).

To be sure, many companies enjoy competitive advantages from IOS, however short-lived (4). IOS
gives many companies a way of at least temporarily differentiating themselves from the competition,
such as handling sales transactions more efficiency. But only a rare few manage to sustain this
advantage as other companies started to install similar electronic capabilities. This paper analyzes
some of these successful IOS that manage to gain and sustain competitive advantages over the years.
The factors that contribute to the success of these "winning" systems are also discussed. It should be
noted that the list is by no means exhaustive.

INTERORGANIZATIONAL SYSTEMS LESSONS FROM HISTORY

Two popular examples on the use of information technology for competitive advantage are the
American Airlines and United Airlines reservations systems. Both SABRE and Apollo have helped
make these carriers major forces in the air travel distribution chain. The usefulness of these airlines
examples extends beyond the anecdotal citation of reservations systems as evidence of the existence of
strategic information systems. There is much we can learn from the characteristics of these systems.

Proprietary Systems

Both the American Airlines and United Airlines have built much of their business around massive,
centralized, proprietary computer systems. The advantage of controlling proprietary systems is
34
DANIEL AFEDZI MANAGEMENT INFORMATION SYSTEMSTUDENT NO: WA 10209

obvious. By being the first to develop proprietary systems, pioneers could revolutionize the industries
and put competitors at their mercy. For example, United dominated the computerized reservations
system market at the Denver hub. For over two years, United refused to allow Frontier, which was
United's chief competitor at Denver, to become an Apollo co-host for "competitive" reasons (15).

Moreover, since the host companies control the systems, they are usually better positioned to develop
software that maximizes the profits of the host companies. And by modifying' the systems, they can
discipline competitors. For example, even after Frontier achieved co-host status under Apollo and
SABRE, bias problems continue to exist which give the host carriers, United and American,
competitive advantage. The famous example is the screen bias employed by the host companies. Host
carriers continue to maintain a competitive advantage by displaying their schedules in a superior
manner to that of other co-host carriers. In addition to the screen bias enjoyed by United as a host
carrier, some co-hosts are more equal than other co-hosts in the tradition of Orwell's Animal Farm.
This creates new echelons of bias among co-hosts for which differentiated (and higher) rates can be
charged. Host carriers also have a competitive advantage through their computerized access to
confidential information of the competitors. In testimony before the Civil Aeronautics Board (CAB),
Frontier Airlines alleged that United Airlines was enjoying unfair competitive advantage by
monitoring the loading factors of competitors and then using the system to either lower prices or
broadcast special messages to travel agents. By controlling the systems, United and American have
exclusive and immediate access on their respective Apollo and SABRE systems to highly sensitive
sales data which they can use to promptly take measures (e.g., bonus incentives, lower fares, more
schedules) to rectify the developing situation. Competitive success flow to American Airlines and
United Airlines partly because they managed to build proprietary systems and diffuse them over a
broad, fast-moving competitive space.

Wal-Mart obviously considers its inventory replenishment system to be proprietary. When Amazon
managed to hire two of the IT professionals intimately involved and familiar with Wal-Mart's IT
system, Wal-Mart immediately responded by filing a lawsuit over what it saw as not only corporate
head-hunting, but also as a major threat to its inventory/supply chain system. Wal-Mart has spent
considerable effort at insuring that its supply chain operates at near-real-time. In some ways, the
supply chain actually resembles a demand chain rather than a supply chain, a system that conveys a
competitive advantage. Wal-Mart apparently considers its IT system worthy of protection from the
replication by its competitors of the particularly sensitive and proprietary positions of the system.

Though not related to IOS, Microsoft's control of the operating systems for personal computers is
another good illustration of the strategic values of proprietary systems. Microsoft's huge fortune is
built on its ownership of the DOS and Windows operating systems. By controlling the operating
systems of roughly 95% of personal computers, Microsoft was able to design other software packages
(e.g., Microsoft Word, Microsoft Access, Microsoft Excel, etc.) that tightly coupled with its operating
systems. Using the Windows operating system as a bargaining chip, Microsoft was able to capture
market share for its other software packages by punishing those manufacturers that refused to comply.
One incident involved the use of Windows operating systems to force personal computer
manufacturers to license and distribute Microsoft's Internet browser, Internet Explorer. The action was
designed to undermine the dominant market position of Microsoft's major competitor for Internet
browsers, Netscape (10). In the words of former U.S. Attorney General Janet Reno, "Microsoft
is...taking advantage of its Windows monopoly to protect and extend that monopoly."
35
DANIEL AFEDZI MANAGEMENT INFORMATION SYSTEMSTUDENT NO: WA 10209

Broad Diffusion

As can be seen from the airline reservations systems, some successful systems are proprietary, but
open. In this paper, we define an open proprietary system as a system that is controlled by the host
company but is available to competitors and other organizations in the buyer-supplier chain. One
advantage of open systems is that they can be broadly diffused. Closed systems do not win broad
franchises, and large profits come only from broad franchises. Of course, diffusion decisions are not
without risk. Choosing the right degree of openness is one of the most subtle and difficult decisions in
the contests. In certain cases, some IOS might have competitive advantages that are based on use of the
systems with only a limited number of organizations. One example would be OTISLINE that offered
customers improved services. There may be no advantage to OTIS in opening its system to
competitors.

Another good example to illustrate the risk of openness is from the desktop computing arena. Although
not directly related to IOS, it again provides us an invaluable lesson. Ironically, IBM badly fumbled an
equivalent opportunity in desktop computing, opening and handing over the two most critical PC
architectural control points - the operating system and the microprocessor - to Microsoft and Intel.
Since any clone maker could acquire the operating system software from Microsoft and the
microprocessor from Intel, making PCs became a brutal commodity business. As a high-cost
manufacturer, IBM now holds a small fraction of the market it created. On the other hand, Microsoft
and Intel are enjoying hefty after-tax margins and handsome stock prices. Another related story is
Apple. Apple computer has been flip-flopping the openness question for a long time. Initially, they
were against licensing other manufacturers to produce clones. Most observers believe that its refusal to
license its operating system in the 1980s - as IBM did - is the root of its current problem (30). That
decision was changed by former CEO Gil Amelio. Apple has also been trying to limit clone-makers as
they are eating into Apple's profit. This, again, goes to show the difficulty involved in making the
openness decision.

The trend toward buying more hardware and software from third-party vendors also means that
competitors can easily replicate the system. Software tools such as relational databases, expert
systems, and computer-aided software engineering are helping to create powerful applications that
meet specialized needs faster and at reasonable costs. It is, therefore, increasingly difficult to design
information systems that locked-out competitors or coalitions of locked-out competitors cannot
eventually imitate or surpass. Microsoft's prompt response to and rapid capture of the Internet browser
business clearly illustrates this point. The Internet browser business, once thought to be eluding the
grasp of Microsoft, is now a non-contest. If Netscape had licensed the browser to Microsoft as
requested, it would be a totally different playing field at this point in time. Thus, where possible, the
best solution is to open the system to competitors to lessen the probability of the competition
developing similar systems and reducing or slowing down the developing of standards in the industry.
For example, American Hospital Supply has opened ASAP to products from rival companies.
Similarly, organizations (such as the AMR Corporation) that have developed centralized systems
eagerly share access to, and sometimes control, their systems. American Airlines' willingness to share
its reservation system with its competitors and sell information services through its IT subsidiary,
AMRIS, reduces the incentives of rivals to make an investment in building a similar competence (17).

36
DANIEL AFEDZI MANAGEMENT INFORMATION SYSTEMSTUDENT NO: WA 10209

Another factor pushing for openness is economies of scale. It is increasingly difficult for one company
to marshal the financial resources to build and maintain new information systems on the necessary
scale. Truly useful IOS are becoming too big and too expensive for any one company to build and
own; joint ventures will become the rule rather than the exception. Furthermore, for companies to
remain low-cost providers of information, they must tap the enormous capacities of their systems.
Tapping that capacity requires opening the system to as many information suppliers as possible and
offering it to as many information consumers as possible.

The market also demands open systems (44). Different supplier systems are vendor-specific, requiring
multiple hardware or software, private networks, and different transaction formats. Consumers do not
want, nor can they afford, to have five or six terminals from different supplies on their desks. They
want to perform all transactions with all vendors using one system with one interface, which is
considerably more convenient than separate systems with individual formats, passwords, and reports.

In short, cost savings for the hosts will continue to be minimal as long as the penetration of IOS
remains small and there is a necessity to maintain parallel non-electronic systems. And the penetration
will be small unless the system is open to other information suppliers.

Continuous Evolution

To sustain competitive advantages, IOS involves a continuous process of change (29). The dominant
positions of American and United in the airlines industry, notwithstanding events such as 9-11 and the
war on terror in Afghanistan and Iraq, can be understood as the outcome of an evolutionary process
that parallels developments in information technology and the air transport industry. Since the late
1940s, the goals for reservations systems have changed and expanded. At first, the primary incentive
was to reduce clerical costs. However, it soon became apparent that an accurate count of the number
and names of passengers for each flight was fundamental to controlling airline operations. Information
captured through the reservations process was used to manage passenger service levels and aircraft
capacity and to plan for ancillary requirements such as baggage handling, food, and fuel. Eventually,
the marketing potential inherent in the systems came to dominate the airline industry's retail
distribution channels. Starting from an electromechanical base, the airline reservations systems
evolved along with the computer and air transport industries. Their expanding scale and scope were
influenced by developments in both software and hardware and affected by the regulation and
deregulation of the domestic airline market. In the process, these systems went from being useful to
being essential assets, equal in importance to an airline's fleet.

The lesson from the airline industry is simple: in this era of rapidly evolving information technology,
there is no end to change. American Airlines introduced a service that allows companies to automate
their travel and entertainment (T&E) information. Building on the SABRE system, the application
automatically transmits booking information to the customer's corporate headquarters with the goal of
creating an electronic T&E report for each traveler. The system also has a computerized-reservations
and yield-management system for the hotel and car rental industries. These developments have
revolutionized the travel industry and redefined the market. It affects pricing strategies and marketing
techniques in the hotel and car rental industries in much the same way that Apollo and SABRE
transformed the airline business.

37
DANIEL AFEDZI MANAGEMENT INFORMATION SYSTEMSTUDENT NO: WA 10209

The failure to consider all the strategic advantages to be gained from the creative use of information is
dangerous. One competitor in the elevator industry copies the other's move to centralize service
records. But the copycat company went a step further: it identified elevators that chronically failed and
then approached affected clients with proposals to rebuild those units. The innovator had, at least
temporarily, a whole new market to itself (5).

Standards Development

The state of standards development is a critical variable in the evolution of IOS within individual
companies and across industries. The existence of standards is not only a prerequisite for the high
volume penetration of IOS which can lead to significant cost savings but standards also change the
competitive dynamics of IOS because they change the rules of success. The evolution of widely-
accepted IOS within an industry makes proprietary systems, such as those developed by early IOS
"winners" (e.g., American Hospital Supply, American Airlines, Wal-Mart), impractical or at least
much more difficult to "sell" to trading partners.

Progress in standards development varies greatly from industry to industry. The evolution of standards
seems to depend a great deal on industry structure, the strategies of early IOS adopters within an
industry, the spirit of cooperation that exists between companies, and the strength of industry
associations. Industries with strong trade associations (e.g., transportation and groceries) or where
regulatory standards are imposed (e.g., banking and airlines) are most likely to be early developers or
standards. Two examples are the Universal Product Code (UPC) in the grocery industry and magnetic
ink character (MICR) sets in magnetic strips on credit cards and cards for automatic teller machines
(ATM).

Of course, not all companies are anxious to establish IOS standards. For example, when suppliers see
the opportunity for "significant" competitive advantage, it is in their interest to preempt evolving
standards and create their own proprietary protocols. This is precisely the reason why United Airlines
pulled out of an industry consortium established to explore the development of a shared reservations
system to be financed and used by carriers and travel retailers. In retrospect, United made the correct
choice in creating the proprietary SABRE system.

One way to delay the development of standards is to open the system to competitors. Another way is
for the company to market its own existing proprietary protocols as the industry standard. Baxter
launched ASAP Express as an IOS that acts as an industry platform, intermediating between hospitals
and hospital suppliers. Baxter even declared that ASAP Express offered a "level playing field" that
gave no advantage to any one vendor.

Summary

In summary, the only way companies will be able to continually differentiate themselves from
competitors implementing IOS is by continuously adding new capabilities to the systems. An evolving
system is a moving target for competitors. Lock-in is sustainable only when the company aggressively
and continuously upgrades the system and continually and compatibly extends the system itself. The
airline industry also illustrates that sustainable advantage need not only be the result of extraordinary

38
DANIEL AFEDZI MANAGEMENT INFORMATION SYSTEMSTUDENT NO: WA 10209

vision but also the result of a consistent exploration of opportunities revealed during the evolution of
adaptable systems.

THE NEXT STAGE OF COMPETITION

For the past few decades, much of the money and energy were spent on the first stage of the process -
building hardware, software, and networks powerful enough to generate useful data (1, 27). That
challenge is close to being solved; most companies have their arms around the data-gathering
conundrum. It is time to turn to another page in the chronicle of the strategic information systems.

While it is more dangerous than ever to ignore the power of IOS, it is even more dangerous to believe
that on its own, an IOS can provide an enduring business advantage. In the not-so-distant future,
powerful mobile computers (e.g., table PCs and other mobile devices) will be a part of the business
environment as familiar as telephones are today (38, 41, 42, 43). They will also be as simple to use as
telephones, or at least nearly so. The Internet will be the global electronic library enabling users to
have any and every piece of information at their fingertips instantly. With the infrastructure provided
by the Internet, companies will be able to build an IOS much faster. For example, Extranets, Internet-
based information access control and delivery systems that enable companies to selectively and
securely open and share their corporate Intranet resources, will enable companies to link using their
existing Internet infrastructure. As a result, companies will find it more difficult to differentiate
themselves simply by automating or building IOS faster than the competition. It will be easier for
every organization to automate and to capture the efficiency benefits of IOS.

This, however, does not mean that IOS will lose its pivotal role in the future or that technology
leadership will be less relevant to competitive success. Changes in information technology come so
rapidly and irresistibly, and the consequences of falling behind are becoming so irreversible that
companies will either master and re-master the technology or die. Think of it as a technology
treadmill: companies will have to run harder and harder just to stay in place. But organizations that
stay on the treadmill will be competing against others that have done the same thing. In this sense, the
information utility will have a leveling effect. Developing new innovative computer systems will offer
less decisive business advantages than previously, and these advantages will be more fleeting and more
expensive to maintain. There is still plenty of room for differentiation in the next stage of competition,
but differentiation will be of a new and more difficult sort.

Information Competition

One scenario for the next arena of competitive differentiation revolves around the intensification of
analysis and mining of data. In such an environment, information technology will be at once more
pervasive but less potent. As astute managers maneuver against rivals, they will focus less on being the
first to build proprietary electronic tools than on becoming the best at using and improving generally
available tools to enhance what their organizations already do well. Managers will shift their attention
from systems to information to knowledge (28, 37). One good example is American Airlines. For
years, American Airlines guarded its software jealously. Since 1986, however, American Airlines has
sold SABRE's revenue-management expertise to any company that wanted to buy it. In the words of
Max Hopper (19), Senior Vice President for Information Systems at American Airlines:

Because we believe our analysts are better at using the software than anyone else in the world.
Whatever "market power" we might enjoy by keeping our software and expertise to ourselves is not as
great as the revenue we can generate by selling it.

This is the competitive philosophy in the new era: compete on the use of electronic tools, not on their
exclusive ownership. Similarly, Mrs. Fields marketed to other retail chains the sophisticated
networking and automation system with which it ran its cookie operations. Price Waterhouse was
helping companies like Fox Photo evaluate and install the Retail Operations Intelligence system, the
backbone of Mrs. Fields' nationwide expansion. McKesson Drug offered its order-processing IOS
(Economost) to other drugstores.

As in airlines and so many other industries, competition has shifted from building tools that collect
data to using generally available tools to turn data into information and information into knowledge.
An example of such activities would be data mining, which is gaining importance in many
organizations (40). This is especially true in the Internet era. The exponential growth of the Internet
and the World Wide Web sites put global information at our fingertips. In addition, the Internet is
revolutionizing the way information is accessed, processed, and distributed (e.g., web information
mining). The shortage of data is no longer the problem; the ability to mine the data to produce
information is. Only companies that excel at turning data into information and then analyzing the
information quickly and intelligently enough to generate superior knowledge will sustain the
competitive advantage in the next round of competition.
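
To make this concrete, the short sketch below (plain Python, with invented sales figures) turns raw transaction data into summary information a manager could act on; it stands in for the far richer data mining tools discussed above.

```python
from collections import defaultdict

# Hypothetical raw transaction records: (sales representative, region, amount in dollars).
# In practice these rows would come from the IOS or a corporate data warehouse.
transactions = [
    ("Mensah", "Accra", 1200.0),
    ("Mensah", "Accra", 450.0),
    ("Owusu", "Kumasi", 980.0),
    ("Owusu", "Accra", 310.0),
    ("Boateng", "Takoradi", 2200.0),
]

# Turn raw data into information: total sales per region, ranked.
sales_by_region = defaultdict(float)
for rep, region, amount in transactions:
    sales_by_region[region] += amount

for region, total in sorted(sales_by_region.items(), key=lambda item: -item[1]):
    print(f"{region}: {total:,.2f}")
```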

Information Partnerships

One of the most intriguing developments facilitated by IOS is information partnerships that allow
organizations to join forces without merging. The same companies will compete in one field and
cooperate in another (17). The partnership between Apple and Microsoft, archrivals in the personal
computer business, is a good example. Information partnerships empower companies to compete,
ironically, by allowing them new ways to cooperate. For example, American Airlines has allied with
Citibank so that air mileage credit in its frequent flyer program is awarded to credit card users - one
mile for every dollar spent on the card. American has thus increased the loyalty of its customers, and
the credit card company has gained access to a new and highly credit-worthy customer base for cross
marketing. This partnership has been expanded to include MCI, a major long-distance phone company,
which offers multiple airline frequent flyer miles for each dollar of long-distance billing. Citibank, one
of the largest issuers of Visa and MasterCard cards, initiated a partnership to steer its millions of Visa
holders to MCI, a response to AT&T's entry into credit cards (the Universal Card). The Star Alliance, a
network of more than 15 airlines (e.g., United, Air Canada, Singapore Airlines, and Lufthansa), is
another good example of information partnerships.
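
The cross-award mechanics can be pictured with a small, purely illustrative calculation: the one-mile-per-dollar card rate comes from the description above, while the long-distance multiplier and the spending figures are assumptions made for this sketch.

```python
def frequent_flyer_miles(card_spend: float, long_distance_bill: float,
                         miles_per_phone_dollar: int = 5) -> float:
    """Illustrative only: one mile per dollar on the co-branded card (as in the text),
    plus miles from a partner phone carrier at an assumed multiplier."""
    card_miles = card_spend * 1.0
    phone_miles = long_distance_bill * miles_per_phone_dollar
    return card_miles + phone_miles

# Example: $2,500 of card spending and a $120 long-distance bill.
print(frequent_flyer_miles(card_spend=2500, long_distance_bill=120))
```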

Sometimes, partnerships are formed because no one firm possesses all the requisite resources to bring
the new product or service to fruition (17). Other times, through an information partnership, diverse
companies can offer novel incentives and services or participate in joint marketing programs. They can
take advantage of new distribution channels or introduce operational efficiencies and revenue
enhancements. Partnerships create opportunities for scale and cross selling. They can make small
companies look, feel, and act big, reaching for customers once beyond their grasp. On the other hand,
partnerships can make big companies look small and closed, targeting and servicing custom markets.

Partnerships, in short, provide a new basis for competitive differentiation. Companies that hold back
forging a partnership may soon find themselves frozen out by early movers. Airline mergers are a good
example of the importance of not being left out in the cold.

With the increasing accessibility of the Internet, new technologies are constantly being developed to
further facilitate information sharing between partners and the forming of partnerships. Extranets can
be used to connect upstream and downstream value chains and form information partnerships, and
Internet-enabled partnerships are now a common sight.

Reengineering the Business Processes

Another critical factor in determining which companies derive the greatest benefits from IOS is the
ability to manage major changes in work design and organizational structure (18). Gearing toward
efficiency and control alone cannot address the fundamental performance deficiencies in business
processes. Companies must derive cost savings from the systems to cover the investment. But these
savings can only come through the painful exercise of redesigning basic organizational structures and
work processes. Simply laying IOS on top of existing work processes trivializes the potential of these
systems. To assure the effective use of the technology, organizational redesign must become
synonymous with IOS development. Those who do gain significant competitive advantage from IOS
are those who managed to integrate the technology effectively into their organizations in such a way
that they can continually add valuable new capabilities to the system while deriving cost savings from
increased productivity and decreased overhead made possible by IOS.

The usual methods for boosting performance - process rationalization and automation - haven't yielded
the dramatic improvements companies need. In particular, heavy investments in information
technology have delivered disappointing results largely because companies tend to use technology to
mechanize old ways of doing business. They leave the existing processes intact and use computers
simply to speed them up. In 1987, Business Week (33) reported that almost 40% of all U.S. capital
spending went to information systems, some $97 billion a year, but IT has been used in most cases to
hasten office work rather than to transform it. With few exceptions, IT's role in the redesign of non-
manufacturing work has been disappointing; few companies have achieved major productivity gains.
Aggregate productivity figures for the United States have shown no increase since 1973.

Portland General Electric (PGE), a major customer of power generation equipment, called upon
Westinghouse's Productivity and Quality Center, a national leader in process improvement, to help
them implement EDI. The Westinghouse team asked if it could analyze the entire process by which
PGE procured equipment from Westinghouse and other suppliers. They found that, while
implementing EDI could yield efficiencies on the order of 10%, changing the overall procurement
process, including using EDI and bypassing the purchasing department altogether for most routine
purchase orders, could lead to much greater savings. In one case, the time to execute a standard
purchase order, for example, could be reduced from 15 days to half a day; the cost could be reduced
from almost $90 to $10.
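
The scale of such savings can be checked with a back-of-the-envelope calculation. The per-order costs and cycle times below come from the case just described; the annual order volume is an assumed figure for illustration only.

```python
# Per-purchase-order figures taken from the case described above.
old_cost, new_cost = 90.0, 10.0   # dollars per order
old_days, new_days = 15.0, 0.5    # elapsed days per order

# Assumed annual volume of routine purchase orders (illustrative only).
orders_per_year = 5000

print(f"Annual cost saving:   ${(old_cost - new_cost) * orders_per_year:,.0f}")
print(f"Cost reduction:       {1 - new_cost / old_cost:.0%}")
print(f"Cycle-time reduction: {1 - new_days / old_days:.0%}")
```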

Thus, the new watchword is business process reengineering (44). A 1994 survey found that 69% of
North American and 75% of European companies had at least one reengineering project underway
(12). Nolan (32) wrote the strategic reengineering equation as Radical Change = New Organization +
IT. With business process reengineering, this means that information technology should be viewed as
more than an automating or mechanizing force; it could fundamentally reshape the way business is
done. Similarly, business activities should be viewed as more than a collection of individual or even
functional tasks; they should be broken down into processes that can be designed for maximum
effectiveness, in both the manufacturing and service environments. In short, rather than maximizing
the performance of particular individuals or business functions, companies must maximize
interdependent activities within and across the entire organization.

DuPont's concept of "effectiveness in use" as the major criterion of customer satisfaction is one leading
approach to measuring the effectiveness of interorganizational processes. DuPont was driven not
simply by the motive of selling a product but by the desire to link its internal processes (for creating
value in its products) to its customer's processes (for using those products). This concept led DuPont to
furnish EDI-provided Material Safety Data Sheets along with the chemicals it sells to its customers to
insure their safe use.

Another company that succeeded in using IOS to reengineer its business process is McKesson Drug
Company (8). McKesson's Economost system allowed the company to cut the number of warehouses
in half. Inventory turns over seven times a year, so fast that it is generally held for less than two weeks after
payment is made to the manufacturer. Despite a six-fold increase in sales, telephone order clerks were
down from 700 to 15, sales staffs were cut in half, and national purchasing staffs were down from 140
to 12.

At the moment, organizational units are focusing increasingly on Internet-enabled redesign of business
processes. Few would question the fact that information technology is a powerful tool for reshaping
business processes. The individuals and companies that can master redesigning processes around IT
and the Internet will be well equipped to succeed in the future.

CONCLUSIONS

Interorganizational systems (IOS) are vital strategic business tools. They change the "rules of the
game" as well as the "table stakes," and present new dimensions and possibilities. Although many
researchers have argued that IOS is increasingly becoming a necessary way of doing business, innovative
and creative companies can still use IOS as a competitive weapon. Although it is difficult to sustain
any competitive advantage gained from IOS, companies can try to exploit profit opportunities for as
long as they last and may attempt to control the transition to a more competitive environment and bias
the final outcome in their favor. And when companies do sustain the competitive advantage gained
from IOS, it not only comes from the technology but also from instilling a mindset that focuses on
customer values and then supports a process that continually innovates and adds features valuable to
customers. The most significant and, heretofore, most overlooked factor in determining the
sustainability of competitive advantage is the organization's ability to manage the changes in structure
and work processes to take full advantage of the technology. Successful companies use the power of
modern information technology, such as the Internet to radically redesign their business processes in
order to achieve dramatic improvements in their performance.

REFERENCES


1. Alberthal, L., J. Manzi, G. Curtis, W.H. Davidow, P. Saffo, J.W. Timko, D. Nadler, and L.L. Davis.
"Will Architecture Win the Technology Wars?" Harvard Business Review, May-June 1993, pp. 162-
170.

2. Bakos, J.Y. "A Strategic Analysis of Electronic Marketplaces," MIS Quarterly, 15:3, September
1991, pp. 295-310.

3. "Baxter Healthcare Corporation: ASAP Express," 9-188-080, HBS, Case Services. Boston: Harvard
Business School, 1988.

4. Benjamin, R.I., D.W. Long, and M.S. Morton. "Electronic Data Interchange: How Much
Competitive Advantage?" Long Range Planning, 23:1, 1990, pp. 29-40.

5. Bruns, W.J. and F.W. McFarlan. "Information Technology Puts Power in Control Systems,"
Harvard Business Review, September-October 1987, pp. 89-94.

6. Cash, J.I. and B.R. Konsynski. "IS Redraws Competitive Boundaries," Harvard Business Review,
March-April 1985, pp. 134-142.

7. Clemons, E.K. and S.O. Kimbrough. "Information Systems, Telecommunications, and Their Effects
on Industrial Organization," Proceedings of the Seventh International Conference on Information
Systems, San Diego, CA, December 1986, pp. 99-108.

8. Clemons, E.K. and M.C. Row. "McKesson Drug Company: a Case Study of Economost - A
Strategic Information System," Journal of Management Information Systems, 5:1, Summer 1988, pp.
36-50.

9. Clemons, E.K. and M.C. Row. "Sustaining IT Advantage: The Role of Structural Differences," MIS
Quarterly, 14:3, September 1991, pp. 275-292.

10. CNN. "Justice Seeks $1 Million a Day Contempt Fine Against Microsoft," October 20, 1997.

11. Copeland, D.G. and J.L. McKenney. "Airline Reservations Systems: Lesson from History," MIS
Quarterly, 12:3, 1988, pp. 353-371.

12. CSC Index. "State of Reengineering Report Executive Summary." Cambridge, MA, 1994.

13. Davenport, T.H., M. Hammer, and T.J. Metsisto. "How Executives Can Shape Their Company's
Information Systems," Harvard Business Review, March-April 1989.

6. What is the difference between INTRANET, INTERNET and the WORLD WIDE WEB?

ANSWER

INTRANET

An intranet is a private computer network that uses Internet Protocol technologies to securely share
any part of an organization's information or network operating system within that organization. The
term is used in contrast to internet, a network between organizations, and instead refers to a network
within an organization. Sometimes the term refers only to the organization's internal website, but it may
be a more extensive part of the organization's information technology infrastructure. It may host
multiple private websites and constitute an important component and focal point of internal
communication and collaboration.

Characteristics

An intranet is built from the same concepts and technologies used for the Internet, such as client–
server computing and the Internet Protocol Suite (TCP/IP). Any of the well-known Internet protocols
may be found in an intranet, such as HTTP (web services), SMTP (e-mail), and FTP (file transfer).
Internet technologies are often deployed to provide modern interfaces to legacy information systems
hosting corporate data.
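
As a minimal sketch of this idea, the snippet below uses only Python's standard library to serve pages over HTTP, the same protocol used on the public Web; binding the server to a loopback or private address (here 127.0.0.1, a placeholder for an internal-only interface) is what keeps the site internal to the organization.

```python
from http.server import HTTPServer, SimpleHTTPRequestHandler

# Placeholder address: 127.0.0.1 stands in for an internal-only (private) interface,
# so the pages are reachable from inside the organization but not from the public Internet.
INTRANET_ADDRESS = ("127.0.0.1", 8080)

if __name__ == "__main__":
    # Serves files from the current directory over plain HTTP.
    server = HTTPServer(INTRANET_ADDRESS, SimpleHTTPRequestHandler)
    print(f"Serving internal pages on http://{INTRANET_ADDRESS[0]}:{INTRANET_ADDRESS[1]}/")
    server.serve_forever()
```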

An intranet can be understood as a private analog of the Internet, or as a private extension of the
Internet confined to an organization. The first intranet websites and home pages began to appear in
organizations in 1990-1991. Although not officially noted, the term intranet first became commonplace
among early adopters, such as universities and technology corporations, in 1992.

Intranets are also contrasted with extranets. While intranets are generally restricted to employees of the
organization, extranets may also be accessed by customers, suppliers, or other approved parties.[1]
Extranets extend a private network onto the Internet with special provisions for access, authorization,
and authentication (AAA protocol).

Intranets may provide a gateway to the Internet by means of a network gateway with a firewall,
shielding the intranet from unauthorized external access. The gateway often also implements user
authentication, encryption of messages, and often virtual private network (VPN) connectivity for off-
site employees to access company information, computing resources, and internal communication.
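
The access, authorization, and accounting steps can be sketched in a few lines of Python; the partner names, credentials, and resource lists below are hypothetical placeholders rather than an implementation of any real AAA protocol.

```python
import hashlib
import logging

logging.basicConfig(level=logging.INFO)

# Hypothetical partner directory: who may authenticate, and which resources they may see.
PARTNERS = {
    "supplier-a": {
        "password_sha256": hashlib.sha256(b"s3cret").hexdigest(),
        "allowed": {"/orders", "/invoices"},
    },
}

def extranet_request(user: str, password: str, resource: str) -> bool:
    """Authenticate, authorize, and account for a single extranet request."""
    record = PARTNERS.get(user)
    supplied = hashlib.sha256(password.encode()).hexdigest()
    if record is None or supplied != record["password_sha256"]:
        logging.info("DENY  %s -> %s (authentication failed)", user, resource)
        return False
    if resource not in record["allowed"]:
        logging.info("DENY  %s -> %s (not authorized)", user, resource)
        return False
    logging.info("ALLOW %s -> %s", user, resource)  # the log line doubles as the audit trail
    return True

print(extranet_request("supplier-a", "s3cret", "/orders"))    # allowed
print(extranet_request("supplier-a", "s3cret", "/payroll"))   # denied
```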

Uses

Increasingly, intranets are being used to deliver tools and applications, e.g., collaboration (to facilitate
working in groups and teleconferencing) or sophisticated corporate directories, sales and customer
relationship management tools, project management etc., to advance productivity.

Intranets are also being used as corporate culture-change platforms. For example, large numbers of
employees discussing key issues in an intranet forum application could lead to new ideas in
management, productivity, quality, and other corporate issues.

In large intranets, website traffic is often similar to public website traffic and can be better understood
by using web metrics software to track overall activity. User surveys also improve intranet website
effectiveness. Larger businesses allow users within their intranet to access the public Internet through
firewall servers, which have the ability to screen messages coming and going, keeping security intact.

When part of an intranet is made accessible to customers and others outside the business, that part
becomes part of an extranet. Businesses can send private messages through the public network, using
special encryption/decryption and other security safeguards to connect one part of their intranet to
another.

Intranet user-experience, editorial, and technology teams work together to produce in-house sites.
Most commonly, intranets are managed by the communications, HR or CIO departments of large
organizations, or some combination of these.

Because of the scope and variety of content and the number of system interfaces, intranets of many
organizations are much more complex than their respective public websites. Intranets and their use are
growing rapidly. According to the Intranet design annual 2007 from Nielsen Norman Group, the
number of pages on participants' intranets averaged 200,000 over the years 2001 to 2003 and has
grown to an average of 6 million pages over 2005–2007.

Benefits

 Workforce productivity: Intranets can help users to locate and view information faster and
use applications relevant to their roles and responsibilities. With the help of a web browser
interface, users can access data held in any database the organization wants to make available,
anytime and - subject to security provisions - from anywhere within the company workstations,
increasing employees' ability to perform their jobs faster, more accurately, and with confidence
that they have the right information. It also helps to improve the services provided to the users.

 Time: Intranets allow organizations to distribute information to employees on an as-needed
basis; employees may link to relevant information at their convenience, rather than being
distracted indiscriminately by electronic mail.
 Communication: Intranets can serve as powerful tools for communication within an
organization, vertically and horizontally. From a communications standpoint, intranets are
useful to communicate strategic initiatives that have a global reach throughout the organization.
The type of information that can easily be conveyed is the purpose of the initiative and what
the initiative is aiming to achieve, who is driving the initiative, results achieved to date, and
who to speak to for more information. By providing this information on the intranet, staff have
the opportunity to keep up-to-date with the strategic focus of the organization. Some examples
of communication would be chat, email, and/or blogs. A good real-world example of an intranet
helping a company communicate is Nestle, which had a number of food processing plants in
Scandinavia. Its central support system had to deal with a large number of queries every day.[3]
When Nestle decided to invest in an intranet, it quickly realized the savings. McGovern says the
savings from the reduction in query calls were substantially greater than the investment in the intranet.
 Web publishing allows cumbersome corporate knowledge to be maintained and easily
accessed throughout the company using hypermedia and Web technologies. Examples include:
employee manuals, benefits documents, company policies, business standards, newsfeeds, and
even training, can be accessed using common Internet standards (Acrobat files, Flash files, CGI
applications). Because each business unit can update the online copy of a document, the most
recent version is usually available to employees using the intranet.
 Business operations and management: Intranets are also being used as a platform for
developing and deploying applications to support business operations and decisions across the
internetworked enterprise.
 Cost-effective: Users can view information and data via web-browser rather than maintaining
physical documents such as procedure manuals, internal phone lists, and requisition forms. This
can potentially save the business money on printing and duplicating documents, reduce document
maintenance overhead, and benefit the environment. For example, PeopleSoft "derived
significant cost savings by shifting HR processes to the intranet".[3] McGovern goes on to say
the manual cost of enrolling in benefits was found to be USD109.48 per enrollment. "Shifting
this process to the intranet reduced the cost per enrollment to $21.79; a saving of 80 percent".
Another company that saved money on expense reports was Cisco. "In 1996, Cisco processed
54,000 reports and the amount of dollars processed was USD19 million".[3]
 Enhance collaboration: Information is easily accessible by all authorised users, which enables
teamwork.
 Cross-platform capability: Standards-compliant web browsers are available for Windows,
Mac, and UNIX.
 Built for one audience: Many companies dictate computer specifications which, in turn, may
allow Intranet developers to write applications that only have to work on one browser (no
cross-browser compatibility issues). Being able to specifically address your "viewer" is a great
advantage. Since Intranets are user-specific (requiring database/network authentication prior to
access), you know exactly who you are interfacing with and can personalize your Intranet
based on role (job title, department) or individual ("Congratulations Jane, on your 3rd year with
our company!").

 Promote common corporate culture: Every user has the ability to view the same information
within the Intranet.
 Immediate updates: When dealing with the public in any capacity, laws, specifications, and
parameters can change. Intranets make it possible to provide your audience with "live" changes
so they are kept up-to-date, which can limit a company's liability.
 Supports a distributed computing architecture: The intranet can also be linked to a
company’s management information system, for example a time keeping system.

Planning and creation

Most organizations devote considerable resources into the planning and implementation of their
intranet as it is of strategic importance to the organization's success. Some of the planning would
include topics such as:

 The purpose and goals of the intranet
 Persons or departments responsible for implementation and management
 Functional plans, information architecture, page layouts, design[4]
 Implementation schedules and phase-out of existing systems
 Defining and implementing security of the intranet
 How to ensure it is within legal boundaries and other constraints
 Level of interactivity (e.g., wikis, on-line forms) desired
 Whether the input of new data and updating of existing data is to be centrally controlled or devolved

These are in addition to the hardware and software decisions (like content management systems),
participation issues (like good taste, harassment, confidentiality), and features to be supported.[5]

Intranets are often static sites. Essentially they are a shared drive, serving up centrally stored
documents alongside internal articles or communications (often one-way communication). However,
organisations are now starting to think of how their intranets can become a 'communication hub' for
their team by using companies specialising in 'socialising' intranets.[6]

The actual implementation would include steps such as:

 Securing senior management support and funding.[7]
 Business requirements analysis.
 User involvement to identify users' information needs.
 Installation of web server and user access network.
 Installing required user applications on computers.
 Creation of document framework for the content to be hosted.[8]
 User involvement in testing and promoting use of intranet.
 Ongoing measurement and evaluation, including through benchmarking against other intranets.
[9]

Another useful component in an intranet structure might be key personnel committed to maintaining
the Intranet and keeping content current. For feedback on the intranet, social networking can be done
through a forum for users to indicate what they want and what they do not like.

INTERNET

The Internet is a global system of interconnected computer networks that use the standard Internet
Protocol Suite (TCP/IP) to serve billions of users worldwide. It is a network of networks that consists
of millions of private, public, academic, business, and government networks, of local to global scope,
that are linked by a broad array of electronic and optical networking technologies. The Internet carries
a vast range of information resources and services, such as the inter-linked hypertext documents of the
World Wide Web (WWW) and the infrastructure to support electronic mail.

Most traditional communications media, including telephone, music, film, and television, are being
reshaped or redefined by the Internet. Newspaper, book, and other print publishing are having to adapt
to Web sites and blogging. The Internet has enabled or accelerated new forms of human interactions
through instant messaging, Internet forums, and social networking. Online shopping has boomed both
for major retail outlets and small artisans and traders. Business-to-business and financial services on
the Internet affect supply chains across entire industries.

The origins of the Internet reach back to the 1960s with both private and United States military
research into robust, fault-tolerant, and distributed computer networks. The funding of a new U.S.
backbone by the National Science Foundation, as well as private funding for other commercial
backbones, led to worldwide participation in the development of new networking technologies, and the
merger of many networks. The commercialization of what was by then an international network in the
mid 1990s resulted in its popularization and incorporation into virtually every aspect of modern human
life. As of 2009, an estimated quarter of Earth's population used the services of the Internet.

The Internet has no centralized governance in either technological implementation or policies for
access and usage; each constituent network sets its own standards. Only the overreaching definitions of
the two principal name spaces in the Internet, the Internet Protocol address space and the Domain
Name System, are directed by a maintainer organization, the Internet Corporation for Assigned Names
and Numbers (ICANN). The technical underpinning and standardization of the core protocols (IPv4
and IPv6) is an activity of the Internet Engineering Task Force (IETF), a non-profit organization of
loosely affiliated international participants that anyone may associate with by contributing technical
expertise.
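
The two name spaces mentioned above can be seen from any networked machine. The sketch below asks the Domain Name System to translate a human-readable name into an IP address using the Python standard library; the host name is only an example and the call needs a working network connection.

```python
import socket

# Translate a name from the DNS name space into an address from the IP address space.
hostname = "www.example.com"
ip_address = socket.gethostbyname(hostname)
print(f"{hostname} resolves to {ip_address}")
```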

Internet is a short form of the technical term "internetwork",[1] the result of interconnecting computer
networks with special gateways (routers). The Internet is also often referred to as the Net.

The term the Internet, when referring to the entire global system of IP networks, has traditionally been
treated as a proper noun and written with an initial capital letter. In the media and popular culture a
trend has developed to regard it as a generic term or common noun and thus write it as "the internet",
without capitalization.

The terms Internet and World Wide Web are often used in everyday speech without much distinction.
However, the Internet and the World Wide Web are not one and the same. The Internet is a global data
communications system. It is a hardware and software infrastructure that provides connectivity
between computers. In contrast, the Web is one of the services communicated via the Internet. It is a
collection of interconnected documents and other resources, linked by hyperlinks and URLs.[2]
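
The distinction can be made concrete with a few lines of standard-library Python: the Internet provides the connectivity, while the Web is the particular service reached here through a URL and the HTTP protocol (the URL is illustrative and the call requires network access).

```python
from urllib.request import urlopen

# The Web as one service running on top of the Internet: fetch a document by its URL.
with urlopen("http://www.example.com/") as response:
    html = response.read().decode("utf-8", errors="replace")

print(f"Fetched {len(html)} characters of hypertext")
print(html[:120])  # the first few characters of the linked document
```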

In many technical illustrations when the precise location or interrelation of Internet resources is not
important, extended networks such as the Internet are often depicted as a cloud.[3] The verbal image has
been formalized in the newer concept of cloud computing.

REFERENCES

 Callaghan, J (2002). Inside Intranets & Extranets: Knowledge Management AND the Struggle
for Power. Palgrave Macmillan. ISBN 0-333-98743-8.
 Pernice Coyne, Kara; Schwartz, Mathew; Nielsen, Jakob (2007), "Intranet Design Annual
2007", Nielsen Norman Group
 McGovern, Gerry
 Ward, Toby (2006-06-11). "Leading an intranet redesign". IntranetBlog.
http://intranetblog.blogware.com/blog/_archives/2006/6/11/2025170.html. Retrieved 2009-04-
03.
 LaMee, James A. (2002-04-30). "Intranets and Special Libraries: Making the most of inhouse
communications". University of South Carolina.
http://www.libsci.sc.edu/bob/class/clis724/SpecialLibrariesHandbook/Int&SpecLib.html.
Retrieved 2009-04-03.
 Scaplehorn, Geoff (2010-03-01). "Bringing the internet indoors - socialising your intranet".
IntranetBlog. http://www.contentformula.com/articles/2010/bringing-the-internet-indoors-
socialising-your-intranet/. Retrieved 2010-03-03.
 Ward, Toby. "Planning: An Intranet Model for success Intranet".
http://www.prescientdigital.com/articles/intranet-articles/intranet-planning-an-intranet-model-
for-success. Retrieved 2009-04-03.
 "Intranet: Table of Contents – Macmillan Computer Sciences: Internet and Beyond".
Bookrags.com. http://www.bookrags.com/sciences/computerscience/intranet-csci-04.html.
Retrieved 2009-04-03.
 "Intranet benchmarking explained". Intranet Benchmarking Forum. http://www.ibforum.com/?
cmd=CMS_Article_List_View&uuid=Services&article=8f4928b5b6f5584beda884868f3ca458.
Retrieved 2009-04-03.
 McGovern, Gerry (November 18, 2002). "Intranet return on investment case studies".
http://www.gerrymcgovern.com/nt/2002/nt_2002_11_18_intranet_roi.htm. Retrieved 2009-04-
03.
 "Making the most of your corporate intranet". April 2, 2009.
http://www.claromentis.com/blog/2009/04/top-10-ideas-making-the-most-of-your-corporate-
intranet/. Retrieved 2009-04-02.

7. What benefits does electronic commerce yield?

8. What costs do purchasers of digital products incur in their transactions?

9. How could a manager use a PDA in a virtual organization?

ANSWER

In order for organizations to succeed, they must be able to respond with flexibility in a geographically
dispersed environment. Virtual organizations, which can form, disband, and re-form to meet ill-defined
and emerging situations are playing an important role in organizational strategies. For virtual
organizations to be successful, system level elements such as organizational memory, hardware, and
software must be identified. Also, a number of knowledge management tools such as database access,
expert systems, and groupware must be in place. Another critical enabling component to this scenario
is network technology, which physically connects the various modules. This is a multifaceted issue.
This chapter addresses the need for a cross-disciplinary approach to bring the pieces together. A
comprehensive review of current literature in these related areas is given. Also, a framework for
linking relevant technologies and a closer inspection of the database segment are presented. The
chapter concludes with a description of such a framework in place in a virtual organization, and
research issues in this area.

A personal digital assistant (PDA), also known as a palmtop computer,[1][2] is a mobile device that
functions as a personal information manager. Current PDAs often have the ability to connect to the
Internet. A PDA has an electronic visual display, enabling it to include a web browser, but some newer
models also have audio capabilities, enabling them to be used as mobile phones or portable media
players. Many PDAs can access the Internet, intranets or extranets via Wi-Fi or Wireless Wide Area
Networks. Many PDAs employ touchscreen technology.

The term PDA was first used on January 7, 1992 by Apple Computer CEO John Sculley at the
Consumer Electronics Show in Las Vegas, Nevada, referring to the Apple Newton. In 1996, Nokia
introduced the first mobile phone with full PDA functionality, the 9000 Communicator, which grew to
become the world's best-selling PDA. The Communicator spawned a new category of mobile phones:
the smartphone. Today, the vast majority of all PDAs are smartphones. Over 150 million smartphones
are sold each year, while "stand-alone" PDAs without phone functionality sell only about 3 million
units per year. Popular smartphone brands include HTC, Apple, Palm, Nokia Nseries, and RIM.

Typical features


A typical PDA has a touchscreen for entering data, a memory card slot for data storage, and IrDA,
Bluetooth and/or Wi-Fi. However, some PDAs may not have a touch screen, using softkeys, a

52
DANIEL AFEDZI MANAGEMENT INFORMATION SYSTEMSTUDENT NO: WA 10209

directional pad, and a numeric keypad or a thumb keyboard for input; this is typically seen on
telephones that are incidentally PDAs.

In order to have the functions expected of a PDA, a device's software typically includes an
appointment calendar, a to-do list, an address book for contacts, and some sort of memo (or "note")
program. PDAs with wireless data connections also typically include an email client and a Web
browser.
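
A rough sketch of the kinds of records such personal-information-manager software keeps is given below as plain Python data classes; the field names are illustrative and do not follow any particular PDA's actual schema.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import List

@dataclass
class Contact:            # address-book entry
    name: str
    phone: str
    email: str = ""

@dataclass
class Appointment:        # calendar entry
    title: str
    start: datetime
    location: str = ""

@dataclass
class TodoItem:           # to-do list entry
    description: str
    done: bool = False

@dataclass
class Memo:               # free-form note
    text: str

@dataclass
class Pim:                # the personal information manager as a whole
    contacts: List[Contact] = field(default_factory=list)
    appointments: List[Appointment] = field(default_factory=list)
    todos: List[TodoItem] = field(default_factory=list)
    memos: List[Memo] = field(default_factory=list)
```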

Touch screen

Many of the original PDAs, such as the Apple Newton and Palm Pilot, featured a touchscreen for user
interaction, having only a few buttons—usually reserved for shortcuts to often-used programs.
Touchscreen PDAs, including Windows Mobile devices, may have a detachable stylus to facilitate
making selections. The user interacts with the device by tapping the screen to select buttons or issue
commands, or by dragging a finger or the stylus on the screen to make selections or scroll.

Typical methods of entering text on touchscreen PDAs include:

 A virtual keyboard, where a keyboard is shown on the touchscreen. Text is entered by tapping
the on-screen keyboard with a finger or stylus.
 An external keyboard connected via USB, Infrared, or Bluetooth. Some users may choose a
chorded keyboard for one-handed use.
 Handwriting recognition, where letters or words are written on the touchscreen, and the PDA
converts the input to text. Recognition and computation of handwritten horizontal and vertical
formulas, such as "1 + 2 =", may also be a feature.
 Stroke recognition allows the user to make a predefined set of strokes on the touchscreen,
sometimes in a special input area, representing the various characters to be input. The strokes
are often simplified character shapes, making them easier for the device to recognize. One
widely known stroke recognition system is Palm's Graffiti (a minimal lookup-table sketch follows this list).
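
As a toy illustration of mapping a predefined set of strokes to characters, the lookup table below stands in for a real recognizer such as Graffiti; the stroke encodings are invented for this sketch.

```python
# Each stroke is encoded as a sequence of pen directions (invented encoding, illustration only).
STROKE_TABLE = {
    ("down",): "i",
    ("down", "right"): "l",
    ("right", "down", "right"): "z",
    ("down", "up", "down"): "n",
}

def recognize(stroke: tuple) -> str:
    """Return the character for a predefined stroke, or '?' if the stroke is unknown."""
    return STROKE_TABLE.get(stroke, "?")

print(recognize(("down", "right")))   # 'l'
print(recognize(("up", "up")))        # '?' -> unrecognized stroke
```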

Despite rigorous research and development projects, end-users experience mixed results with
handwriting recognition systems. Some find it frustrating and inaccurate, while others are satisfied
with the quality of the recognition.[3]

Touchscreen PDAs intended for business use, such as the BlackBerry and Palm Treo, usually also have full
keyboards and scroll wheels or thumbwheels to facilitate data entry and navigation.

Many touchscreen PDAs support some form of external keyboard as well. Specialized folding
keyboards, which offer a full-sized keyboard but collapse into a compact size for transport, are
available for many models. External keyboards may attach to the PDA directly, using a cable, or may
use wireless technology such as infrared or Bluetooth to connect to the PDA.

Newer PDAs, such as the Apple iPhone, Apple iPod Touch, HTC HD2, and Palm Pre, Palm Pre Plus,
Palm Pixi, Palm Pixi Plus, include more advanced forms of touchscreen that can register multiple
touches simultaneously. These "multi-touch" displays allow for more sophisticated interfaces using
various gestures entered with one or more fingers.

Memory cards

Although many early PDAs did not have memory card slots, now most have either some form of
Secure Digital (SD) slot or a CompactFlash slot. Although originally designed for memory, Secure
Digital Input/Output (SDIO) and CompactFlash cards are available that provide accessories like Wi-Fi
or digital cameras, if the device can support them. Some PDAs also have a USB port, mainly for USB
flash drives. Some PDAs use microSD cards, which are electronically compatible with SD
cards, but have a much smaller physical size.

Wired connectivity

While early PDAs connected to a user's personal computer via serial ports or another proprietary
connection, many today connect via a USB cable. PDAs are not typically able to connect to each
other via USB, as USB requires one machine to act as a "host," which isn't a typical PDA function.

Some early PDAs were able to connect to the Internet indirectly by means of an external modem
connected via the PDA's serial port or "sync" connector,[4] or directly by using an expansion card that
provided an Ethernet port.

Wireless connectivity

Most modern PDAs have Bluetooth, a popular wireless protocol for mobile devices. Bluetooth can be
used to connect keyboards, headsets, GPS receivers, and other nearby accessories. It's also possible to
transfer files between PDAs that have Bluetooth.

Many modern PDAs have Wi-Fi wireless network connectivity, and can connect to Wi-Fi hotspots.

All smartphones, and some other modern PDAs like the Apple iPod touch, can connect to Wireless
Wide Area Networks, such as those provided by cellular telecommunications companies.

Older PDAs typically had an IrDA (infrared) port allowing short-range, line-of-sight wireless
communication. Few current models use this technology, as it has been supplanted by Bluetooth and
Wi-Fi. IrDA allows communication between two PDAs, or between a PDA and any device with an
IrDA port or adapter. Some printers have IrDA receivers,[5] allowing IrDA-equipped PDAs to print to
them, if the PDA's operating system supports it. Most universal PDA keyboards use infrared
technology because many older PDAs have it. Infrared technology is low-cost and has the
advantage of being allowed aboard aircraft.

Synchronization

Most PDAs can synchronize their data with applications on a user's personal computer. This allows the
user to update contact, schedule, or other information on their computer, using software such as
Microsoft Outlook or ACT!, and have that same data transferred to the PDA—or transfer updated
information from the PDA back to the computer. This eliminates the need for the user to update their
data in two places.

Synchronization also prevents the loss of information stored on the device if it is lost, stolen, or
destroyed. When the PDA is repaired or replaced, it can be "re-synced" with the computer, restoring
the user's data.
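
A highly simplified sketch of the two-way merge that such synchronization software performs is shown below; the records, timestamps, and the last-modified-wins rule are assumptions made for illustration, not any vendor's actual conduit logic.

```python
from datetime import datetime

# Each store maps a record id to (value, last-modified timestamp). Data are invented.
pda = {"c1": ("Kofi 024-555-0101", datetime(2010, 12, 1)),
       "c2": ("Ama 020-555-0199",  datetime(2010, 12, 20))}
pc  = {"c1": ("Kofi 024-555-0110", datetime(2010, 12, 15)),
       "c3": ("Yaw 027-555-0123",  datetime(2010, 12, 10))}

def synchronize(a: dict, b: dict) -> dict:
    """Merge two stores: keep every record, and let the newest version win on conflict."""
    merged = dict(a)
    for key, (value, stamp) in b.items():
        if key not in merged or stamp > merged[key][1]:
            merged[key] = (value, stamp)
    return merged

result = synchronize(pda, pc)
pda.update(result)   # both sides end up holding the same, newest data
pc.update(result)
print(result)
```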

Some users find that data input is quicker on their computer than on their PDA, since text input via a
touchscreen or small-scale keyboard is slower than a full-size keyboard. Transferring data to a PDA
via the computer is therefore a lot quicker than having to manually input all data on the handheld
device.

Most PDAs come with the ability to synchronize to a computer. This is done through synchronization
software provided with the handheld, or sometimes with the computer's operating system. Examples of
synchronization software include:

 HotSync Manager, for Palm OS PDAs
 Microsoft ActiveSync, used by Windows XP and older Windows operating systems to
synchronize with Windows Mobile, Pocket PC, and Windows CE PDAs, as well as PDAs
running iOS, Palm OS, and Symbian
 Microsoft Windows Mobile Device Center for Windows Vista, which supports Microsoft
Windows Mobile and Pocket PC devices.
 Apple iTunes, used on Mac OS X and Microsoft Windows to sync iOS devices (such as the
iPhone and iPod touch)
 iSync, included with Mac OS X, can synchronize many SyncML-enabled PDAs
 BlackBerry Desktop Software, used to sync BlackBerry devices

These programs allow the PDA to be synchronized with a personal information manager, which may
be part of the computer's operating system, provided with the PDA, or sold separately by a third party.
For example, the RIM BlackBerry comes with RIM's Desktop Manager program, which can
synchronize to both Microsoft Outlook and ACT!.

Other PDAs come only with their own proprietary software. For example, some early Palm OS PDAs
came only with Palm Desktop, while later Palm PDAs—such as the Treo 650—have the ability to sync
to Palm Desktop or Microsoft Outlook. Microsoft's ActiveSync and Windows Mobile Device Center
only synchronize with Microsoft Outlook or a Microsoft Exchange server.

Third-party synchronization software is also available for some PDAs from companies like
CommonTime and CompanionLink. Third-party software can be used to synchronize PDAs to other
personal information managers that are not supported by the PDA manufacturers (for example,
GoldMine and IBM Lotus Notes).

Wireless synchronization

Some PDAs can synchronize some or all of their data using their wireless networking capabilities,
rather than having to be directly connected to a personal computer via a cable.

Apple iOS devices, like the iPhone, iPod Touch, and iPad, can use Apple's MobileMe subscription
service to synchronize calendar, address book, mail account, Internet bookmark, and other data with
one or more Macintosh or Windows computers using Wi-Fi or cellular data connections.[6]

Palm's webOS smartphones primarily sync with the cloud. For example, if Gmail is used, information
in contacts, email, and calendar can be synchronized between the phone and Google's servers.

RIM sells BlackBerry Enterprise Server to corporations so that corporate BlackBerry users can
wirelessly synchronize their PDAs with the company's Microsoft Exchange Server, IBM Lotus
Domino, or Novell GroupWise servers.[7] Email, calendar entries, contacts, tasks, and memos kept on
the company's server are automatically synchronized with the BlackBerry.[8]

Automobile navigation


Some PDAs include Global Positioning System (GPS) receivers; this is particularly true of
smartphones. Other PDAs are compatible with external GPS-receiver add-ons that use the PDA's
processor and screen to display location information.[9]

PDAs with GPS functionality can be used for automotive navigation. PDAs are increasingly being
fitted as standard on new cars.

PDA-based GPS can also display traffic conditions, perform dynamic routing, and show known
locations of roadside mobile radar guns. TomTom, Garmin, and iGO offer GPS navigation software
for PDAs.

Ruggedized PDAs


Some businesses and government organizations rely upon rugged PDAs, sometimes known as
enterprise digital assistants (EDAs), for mobile data applications. EDAs often have extra features for
data capture, such as barcode readers, radio-frequency identification (RFID) readers, magnetic stripe
card readers, or smart card readers.

Typical applications include:

 supply chain management in warehouses
 package delivery
 route accounting
 medical treatment and recordkeeping in hospitals
 facilities maintenance and management
 parking enforcement
 access control and security
 capital asset maintenance
 meter reading by utilities
 "wireless waitress" applications in restaurants and hospitality venues

Medical and scientific uses


Many companies have developed PDA products aimed at the medical professions' unique needs, such
as drug databases, treatment information, and medical news. Services such as AvantGo translate
medical journals into PDA-readable formats. WardWatch organizes medical records, providing
reminders of information such as the treatment regimens of patients and programs to doctors making
ward rounds. Pendragon and Syware provide tools for conducting research with PDAs, allowing the
user to enter data into a centralized database using their PDA. Microsoft Visual Studio and Sun Java
also provide programming tools for developing survey instruments on the handheld. These
development tools allow for integration with SQL databases that are stored on the handheld and can be
synchronized with a desktop- or server-based database.

PDAs have been shown to aid diagnosis and drug selection and some studies have concluded that
when patients use PDAs to record their symptoms, they communicate more effectively with hospitals
during follow-up visits.

The development of Sensor Web technology may lead to wearable bodily sensors to monitor ongoing
conditions, like diabetes or epilepsy, which would alert patients and doctors when treatment is required
using wireless communication and PDAs.
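
A toy version of such an alerting rule is sketched below; the glucose readings and thresholds are invented for illustration, and the notification step simply prints instead of using any real wireless service.

```python
# Hypothetical blood-glucose readings in mg/dL from a wearable sensor.
readings = [95, 102, 180, 60, 110]

LOW, HIGH = 70, 160   # illustrative alert thresholds, not medical guidance

def check(reading: int) -> None:
    """Alert patient and doctor when a reading falls outside the range (stub notification)."""
    if reading < LOW or reading > HIGH:
        print(f"ALERT: {reading} mg/dL outside {LOW}-{HIGH}; notify patient and doctor")
    else:
        print(f"OK: {reading} mg/dL")

for r in readings:
    check(r)
```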

Educational uses


As mobile technology becomes more common, it is increasingly being used as a learning tool. Some
educational institutions have embraced M-Learning, integrating PDAs into their teaching practices.

PDAs and handheld devices are allowed in many classrooms for digital note-taking. Students can
spell-check, modify, and amend their class notes on the PDA. Some educators distribute course
material through the Internet or infrared file-sharing functions of the PDA. Textbook publishers have
begun to release e-books, or electronic textbooks, which can be uploaded directly to a PDA, reducing
the number of textbooks students must carry.[10]

Software companies have developed PDA programs to meet the instructional needs of educational
institutions, such as dictionaries, thesauri, word processing software, encyclopedias, and digital lesson
planners.

Recreational uses

PDAs may be used by music enthusiasts to play a variety of music file formats. Many PDAs include
the functionality of an MP3 player.

Road rally enthusiasts can use PDAs to calculate distance, speed, and time. This information may be
used for navigation, or the PDA's GPS functions can be used for navigation.

Underwater divers can use PDAs to plan breathing gas mixtures and decompression schedules using
software such as "V-Planner."

PDAs offer varying degrees of accessibility for people with differing abilities, based on the particular
device and service. People with vision, hearing, mobility, or speech impairments may be able to use
PDAs on a limited basis. This use may be enhanced by accessibility software (e.g., speech recognition
for verbal input instead of manual input). Universal design is relevant to PDAs as well as other
technology, and a viable solution for many user-access issues, though it has yet to be consistently
integrated into the design of popular consumer PDA devices.

PDAs are useful for people with traumatic brain injury or posttraumatic stress disorder, as seen in
troops returning home from the Iraq War and Operation Enduring Freedom. PDAs help address
memory problems, helping affected people with daily life organization and reminders. Recently, the
Department of Veterans Affairs has issued thousands of PDAs to troops who need
them. Occupational therapists have taken on a crucial role within this population helping these veterans
return to the normality of life they once had.

For at least the past two decades, networks have been a part of computing. Local
area networks have allowed users to share peripherals and files. The Internet has
enabled communication with electronic mail, remote access via telnet, and
management at a distance. More recently, the Web has become a feature of everyday
life, allowing the publishing and finding of information in ways that had been
dreamed of, but never realized.
However, the Internet is now more than a way of connecting computers (and the
people who use them). Rather than being an infrastructure for computers and their
users, the Internet is an infrastructure for everyone. The way in which people
connect to the Internet is a secondary consideration. Instead, the fact that the
connections can be made has taken on a life of its own, leading to new ways of
thinking about how we work, how we are educated, and how we can use the
Internet itself.
The vision of the future of this kind of networking has been widely publicized.
Access to the Web will be through a multitude of devices, from the ubiquitous
personal computer to personal digital assistants (PDAs), cell phones, and pagers. In
addition, many of the devices we use in everyday life, including our automobiles,
televisions, most appliances, and homes will be able to access the Internet. In fact,
this access will be so common and pervasive that we will be surprised only when we
don’t have it, much like we are now surprised when we find ourselves without
58
DANIEL AFEDZI MANAGEMENT INFORMATION SYSTEMSTUDENT NO: WA 10209

access to the phone system or electrical grid.


Some have questioned if such pervasive computing and networking will really be a
benefit to the users of the network. Given the examples that are often provided, such
skepticism is understandable. These include connecting refrigerators to the Internet
so we can find out, via cell phone or PDA, what is in our own refrigerator while we
are at the store. Likewise, our stove will conspire with the refrigerator to decide
what recipe makes the best use of the available ingredients, then guide us through
preparation of the recipe with the aid of a network-connected food processor and
blender. My car will use the Internet to find an open parking space or the nearest
vegetarian restaurant.
While easy to understand, such examples have an air of the ridiculous about them,
and for good reason. Using the Internet to replace a shopping list hardly seems like
a good reason to increase the cost of an already costly appliance, or the risk of others
gaining access to the contents of my home. Likewise, letting our appliances decide
what we are going to eat reminds us of The Hitchhiker's Guide to the Galaxy,
where the computerized food machines can never quite figure out the tastes of their
organic friends. And it is clear that the examples of "location-aware" uses of the
Internet, like those for the car, are really ways to target advertising, in ways that the
advertising’s targets are not going to be willing to pay for.

An Internet of Things

In fact, it is both highly likely that our appliances will be connected to the Internet,
and highly unlikely that those connections will be used in the ways that have been
mentioned. Rather than connecting the refrigerator to the Internet so I can find out
what I need to buy when I’m at the store, the connection will enable the
manufacturer to monitor the appliance to ensure that it is working correctly and
inform me when it is not. Rather than my stove and refrigerator conspiring in an
effort to determine what I will eat, they will conspire to optimize the energy usage in
my house (neighborhood, region). My car will connect to the Internet to allow the
manufacturer to diagnose problems before they happen, and either inform me of the
needed service or automatically install the necessary (software) repair.
These connections are fundamentally unlike those we associate with networks.
Rather than using the network to connect computers that are being used directly by
people, the coming age of Internet-connected appliances will usher in a time when
most of the communication occurring over networks is between machines and
programs that are not directly monitored by people. The refrigerator will not send
information to a human who is monitoring its performance at the manufacturer, but
rather to another computer that is running a program monitoring all of the
refrigerators made by the manufacturer. The appliances in our homes will not
communicate to someone working for the electrical utilities, but rather to a
computer (or set of computers) running programs that regulate the flow of energy to
all of the utility’s customers.
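
To make this machine-to-machine pattern concrete, the short Python sketch below simulates an appliance that reports its own operating status to a manufacturer-side monitoring program, which escalates only anomalies to a human. The class names, telemetry fields, and threshold value are purely illustrative assumptions; no real appliance protocol or vendor API is implied.

# Minimal sketch of machine-to-machine appliance monitoring (illustrative only).
# All names and threshold values are assumptions for demonstration; a real
# manufacturer would use its own telemetry format and transport protocol.
import random


class Refrigerator:
    """Simulated appliance that reports its own operating status."""

    def __init__(self, serial_number: str):
        self.serial_number = serial_number

    def read_status(self) -> dict:
        # In a real appliance these values would come from onboard sensors.
        return {
            "serial": self.serial_number,
            "compressor_temp_c": random.uniform(35.0, 90.0),
            "door_open_minutes": random.uniform(0.0, 5.0),
        }


class MonitoringService:
    """Manufacturer-side program that watches every registered appliance."""

    COMPRESSOR_LIMIT_C = 80.0  # assumed fault threshold

    def __init__(self):
        self.alerts = []

    def receive(self, report: dict) -> None:
        # No human reads each report; the program only escalates anomalies.
        if report["compressor_temp_c"] > self.COMPRESSOR_LIMIT_C:
            self.alerts.append(
                f"Unit {report['serial']}: compressor running hot "
                f"({report['compressor_temp_c']:.1f} C) - schedule service"
            )


if __name__ == "__main__":
    service = MonitoringService()
    fridge = Refrigerator("FR-000123")
    for _ in range(10):           # ten simulated reporting intervals
        service.receive(fridge.read_status())
    for alert in service.alerts:  # only exceptions ever reach a person
        print(alert)
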
In addition, the majority of these communications will occur in an end-to-end
structure that does not include a human at any point. This can be seen from the
simple mathematics of the Internet. The number of machines connected to it has
been increasing at an exponential rate for some time. It will continue to grow at this
rate as the existing networks of embedded computers, including those that already
exist within our automobiles, are connected to the larger, global network, and as
new networks of embedded devices are constructed in our homes and offices.
And the rate of increase of computers will be faster than the rate of increase of the
human population, so the disparity will continue to grow. It is just a matter of time
before most of the traffic over our networks is sent from, received by, and
interpreted by programs that do not forward that information to a human recipient.
Not only is the network we call the Internet becoming unimaginably large, it is also
becoming much more heterogeneous than networks have been in the past. The
technology we developed for computer networks assumes that the machines
connected by the network are desktops or servers. While there may be significant
differences between such machines, they are all reasonably competent with regards
to the amount of computation they can do, reasonably unlimited in terms of the
amount of memory and electrical power available to them, and connected by
networks that are generally static, and in which bandwidth is high and growing fast.
However, all of these characteristics are about to change. The kinds of devices that
will be used to access the Internet are no longer confined to desktops and servers,
but include small devices with limited user interface facilities (such as cell phones
and PDAs); wireless devices with limited bandwidth, computing power, and
electrical power; and embedded processors with severe limitations on the amount of
memory and computing power available to them. Many of these devices are mobile,
changing not only geographic position, but also their place in the topology of the
network.
Such a network requires that we rethink how we approach the organizing and
constructing of networks. In a network that is in constant flux, with new kinds of
devices coming into the network and old devices leaving, our methods for
administering the network must change dramatically. If they don’t, we are doomed
to spending all of our time doing network administration. In a network where the
differences in available computing power, memory, and stable storage differ by
multiple orders of magnitude, we must control where computation happens, as well
as where information is located. And in a network where most of the communication
is done by computing elements (hardware or software) rather than people, we need
new approaches to the ways in which information and services are discovered and
described, ways that are appropriate for the skills of the computing elements
involved.

Networking at the Edge


One way to envision what this new environment will be like is to imagine what life
will be like at the edge of the Internet. Today, the edge is that part of the network
mostly filled with clients. It is predominantly made up of desktop-class machines
accessing the Web through browsers. Whether the edge is thought of as the machine,
browser software, or user is irrelevant. The edge is made up of components that
have these characteristics in common:
 Access to high levels of computing power, reliable energy sources, and reasonable (and rapidly
growing) network bandwidth.
 Used by people (or are people) who are looking for read-mostly information.
 Change slowly, are not used as a source of information, and stay in a constant location (at least as
far as the network is concerned).
However, as we move to a world where the Internet is used as an infrastructure for
embedded computing, all this will change. As we connect our appliances,
automobiles, and homes to the Internet; as we begin to access the Internet with
mobile, hand-held devices; and as we use the Internet for activities such as
environmental monitoring and automated remote diagnostics, the edge will be made

up of components with these characteristics:


 Many will have small, inexpensive processors with limited memory and little or no persistent
storage.
 They will connect to other computing elements without the direct intervention of users.
 Often, they will be connected by wireless networks.
 They will change rapidly, sometimes by being mobile, sometimes by going on and offline at widely
varying rates. Over time, they will be replaced (or fail) far more rapidly than is now common.
 They will be used as a source of information, often sending that information into the center of the
network to which they are attached.

Conclusion
Networks are expanding and changing in ways that would have been hard to
imagine only a few years ago. Part of this change is due to the rate of change of the
networks themselves. All aspects of the network — the devices that are connected,
the way in which services are offered, the kinds of services offered, and even the
connecting tissues of the protocols and physical transports used by the network
itself — are changing.
This demands a new kind of networking infrastructure, one that is built around the
notion of change. Originally devised to connect devices to the network, Jini
technology is now seen as a way of dealing with the problems posed by a constantly
changing network to the development and deployment of services. Jini technology is
not the final step in the infrastructure’s evolution, but it does point in the right
direction. As our networks become larger, more heterogeneous, and as nonstop
access to services becomes more essential, Jini technology offers a solution. Rather
than being an end point, Jini technology enables an infrastructure built around a
fractal notion of the network, where change is a constant.
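
To give a feel for how such an infrastructure can tolerate constant change, the toy Python sketch below shows lease-based service registration, the idea that Jini-style lookup relies on: a service stays visible only while it keeps renewing its lease, so devices that leave the network simply age out of the registry without any administrator intervention. This is a conceptual sketch only; it is not the real Jini API (which is Java and considerably richer), and every name and the lease duration are assumptions chosen for illustration.

# Toy sketch of lease-based service registration, the core idea behind
# Jini-style discovery in a constantly changing network. NOT the real Jini
# API; all names here are hypothetical stand-ins.
import time


class ServiceRegistry:
    """Services register with a lease; entries that are not renewed expire."""

    def __init__(self, lease_seconds: float):
        self.lease_seconds = lease_seconds
        self._entries = {}  # service name -> lease expiry time

    def register(self, name: str) -> None:
        self._entries[name] = time.time() + self.lease_seconds

    def renew(self, name: str) -> None:
        if name in self._entries:
            self._entries[name] = time.time() + self.lease_seconds

    def lookup(self) -> list:
        # Expired leases are dropped, so departed devices disappear on
        # their own, without anyone cleaning up the registry by hand.
        now = time.time()
        self._entries = {n: t for n, t in self._entries.items() if t > now}
        return sorted(self._entries)


if __name__ == "__main__":
    registry = ServiceRegistry(lease_seconds=0.2)
    registry.register("printer-3rd-floor")
    registry.register("temperature-sensor-7")
    print(registry.lookup())  # both services visible
    time.sleep(0.3)           # neither service renews its lease
    print(registry.lookup())  # both have silently aged out
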

REFERENCES

 Waldo, Jim and the Jini Technology Team, The Jini Specifications (Second Edition), 2001,
Addison Wesley, Reading, MA. Available through www.sun.com/jini/books.
 The Community Resource for Jini Technology, www.jini.org.

10. Which is more important for a manager – computer literacy or information literacy?

ANSWER

Computer literacy is defined as the knowledge and ability to use computers and related technology
efficiently, with a range of skills covering levels from elementary use to programming and advanced
problem solving.[1] Computer literacy can also refer to the comfort level someone has with using
computer programs and other applications that are associated with computers. Another valuable
component of computer literacy is knowing how computers work and operate. Having basic computer
skills is a significant asset in developed countries.

The precise definition of "computer literacy" can vary from group to group. Generally, literate (in the
realm of books) connotes one who can read any arbitrary book in their native language[s], looking up
new words as they are exposed to them. Likewise, an experienced computer professional may consider
the ability to self-teach (i.e. to learn arbitrary new programs or tasks as they are encountered) to be
central to computer literacy. In common discourse, however, "computer literate" often connotes little
more than the ability to use several very specific applications (usually Microsoft Word, Microsoft
Internet Explorer, and Microsoft Outlook) for certain very well-defined simple tasks, largely by rote.
(This is analogous to a child claiming that they "can read" because they have rote-memorized several
small children's books. Real problems can arise when such a "computer literate" person encounters a
new program for the first time, and large degrees of "hand-holding" will likely be required.) Being
"literate" and "functional" are generally taken to mean the same thing.

Computer skills
Computer skills refer to the ability to use a computer's software and hardware.

They include:

Basic computer skills

 Knowing how to switch on the computer


 Being able to use a mouse to interact with elements on the screen
 Being able to use the computer keyboard
 Being able to close down the computer after use

Intermediate skills include being able to use the following:

 Word processors
 E-mail
 Spreadsheets
 Databases
 The internet

Advanced skills include:

 Programming
 Using computers for scientific research

Social implications


The level of computer literacy one must achieve to gain an advantage over others depends both on the
society one is in and one's place in the social hierarchy. Prior to the development of the first computers
in the 1950s, the word computer referred to a person who could count, calculate, and compute. By 2010,
a mere 50 years after the computer's first personal and common business use, the term "computer
literacy" has changed deeply in meaning. On the one hand, technology has grown, and continues to grow,
at an exponential pace; on the other hand, the personal computer has become a practical part of our
everyday life. Computers are no longer just boxes that took up large amounts of space beneath even
bigger monitors. Now we have handheld devices and cell phones to assist us, and in most post-1995
model year cars at least 10 processors can be found controlling major components of the vehicle.

Taking the most common points from earlier forms of literacy into consideration, the subject requires a
formal breakdown of its core components. To evaluate or maintain a consistent rise in the practical
application and social productivity of any technology, we have to understand how computers benefit
humanity as a whole, starting at the local level.

The fear of some educators today is that computer training in schools will serve only to train data-entry
clerks of the next generation, low level workers of the knowledge economy. On the other hand, some
hope that enhanced computer literacy will enable a new generation of cultural producers to make
meanings and circulate those in the public sphere. The wildfire of cultural production associated with
sites such as YouTube seems to support this notion.

Different countries have different needs for computer-literate people due to their societal standards and
level of technology. The world's digital divide is now an uneven one with knowledge nodes such as
India disrupting old North/South dichotomies of knowledge and power.

Computer literacy in the first world

Computer literacy is considered a very important skill to possess in developed countries. Employers
want their workers to have basic computer skills because their companies have become ever more
dependent on computers. Many companies use computers to help run their operations faster and more
cheaply.

Computers are just as common as pen and paper for writing, especially among youth. There seems to
be an inversely proportional relationship between computer literacy and compositional literacy among
first world computer users. For many applications - especially communicating - computers are
preferred over pen, paper, and typewriters because of their ability to duplicate and retain information
and ease of editing.

As personal computers become commonplace and more powerful, the concept of
computer literacy is moving beyond basic functionality to more powerful applications under the
heading of multimedia literacy.

Of course, arguments about computers being commonplace in the first world assume that everyone in
the first world has equal access to the latest forms of technology. However, there is a pronounced
digital divide that separates both physical access to technology and the ability to use that technology
effectively.

Computer education

Where computers are widespread, they are also a part of education. Computers are used in schools for
many applications such as writing papers or searching the Internet for information. Computer skills are
also a subject being specifically taught in many schools, especially from adolescence onward - when
the ability to make abstractions forms.

One problematic element of many (though not all) "computer literacy" or computer education
programs is that they may rely too heavily on rote memorization. Students may be taught, for
example, how to perform several common functions (e.g.: Open a file, Save a file, Quit the program)
in very specific ways, using one specific version of one specific program. When a graduate of such a
program encounters a competing program, or even a different version of the same program, they may
be confused or even frightened by the differences from what they learned. This is one reason why
major computer and software firms such as Apple Computer and Microsoft consider the educational
market important: The often time-limited computer education provided in schools most often lends
itself to rote memorization, creating a sort of vendor lock-in effect whereby graduates are afraid to
switch to competing computer systems.

Graduates of computer education programs based around rote memorization may be heard asking
things such as "just tell me where to click", and may need to rely upon paper notes for some computing
tasks. (Example: A note on the monitor reading "Hit 'enter' after power up.") Many such users may
need tremendous amounts of "hand-holding" even after years or decades of daily computer use. (This
can be especially frustrating for experienced computer users, who are accustomed to figuring out
computers largely on their own.) The primary factor preventing such functionally computer illiterate
users from self-educating may simply be fear (of losing data through doing the "wrong thing") or lack
of motivation; in any case, more technically oriented friends and relatives often find themselves
pressed into service as "free tech support" for such users.

In addition to classes, there are many How-to books that cover various aspects of computer training,
such as the popular 'For Dummies' series. There are also many websites that devote themselves to this
task, such as The Hitchhiker's Guide to the Internet. Such tutorials often aim at gradually boosting
readers' confidence, while teaching them how to troubleshoot computers, fix security issues, set up
networks, and use software.

Aspects of computer literacy


Aspects of computer literacy include:

 what is a computer

 what are its limitations


 what is a program (not necessarily how to program)
 what is an algorithm
 what is computable
 what a computer cannot do
 why computers cannot produce truly random numbers (illustrated in the short sketch after this list)
 some seemingly simple problems are not
 concurrency and issues with shared data
 all computers have the same computing ability with differences in memory capacity and
speed
 performance depends on more than CPU clock speed

 understanding the concept of stored data


 what are the real causes of "computer errors"
 the implications of incorrect (buggy) programs
 the implications of using a program incorrectly (garbage in, garbage out)
 issues rising from distributed computing
 computer security

 trojan horse (computing), computer virus, email spoofing, URL spoofing, phishing,
etc ...
 what to do when a security certificate is questioned
 password creation (how to avoid bad ones)

 social implications/aspects of computing

 Netiquette (or at least E-mail Etiquette)


 identifying urban legends (and not forwarding them)
 critical assessment of internet sources
 criminal access to financial databases

 keyboarding, mousing (using input devices)


 plugging in and turning the computer on
 using/understanding user-interface elements (e.g., windows, menus, icons, buttons, etc.)
 Composing, editing and printing documents
 the ability to communicate with others using computers through electronic mail (email) or
instant messaging services
 managing and editing pictures (from cell phones, digital cameras or even scans)
 Opening files and recognizing different file types
 Multimedia literacy, including, but not limited to:

 making movies
 making sound files
 interactivity
 creating web pages
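
One of the points above, that computers cannot produce truly random numbers, can be demonstrated in a few lines. A pseudo-random generator is a deterministic algorithm, so two generators started from the same seed produce exactly the same "random" sequence. The sketch below uses Python's standard random module; the seed value 42 is arbitrary.

# Why computers cannot produce truly random numbers: a pseudo-random
# generator is a deterministic algorithm, so the same seed always yields
# the same sequence. The seed value 42 is arbitrary.
import random

generator_a = random.Random(42)
generator_b = random.Random(42)

sequence_a = [generator_a.randint(0, 99) for _ in range(5)]
sequence_b = [generator_b.randint(0, 99) for _ in range(5)]

print(sequence_a)                # a sequence of five "random" integers
print(sequence_b)                # exactly the same sequence
print(sequence_a == sequence_b)  # True: the output was fully determined by the seed
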

A higher order of computer literacy involves a user being able to adapt and learn new procedures
through various means while using a computer.

Copyright and fair use laws


Copyright and fair use laws constitute a substantial part of computer literacy.

It might be considered that the understanding of copyright and fair use is part of computer literacy.
That is, a web author might be deprived of agency by not having knowledge of basic copyright and
basic fair use. In the US, in order for an item to be copyrighted, it has to be original and fixed. If that is
true, then copyright protection is automatic. Therefore, much of the content on the web is copyright
protected.

Knowledge of fair use then becomes a crucial part of computer literacy, as to use under fair use is to
use without copyright infringement. Fair use in the US is defined in section 107 of Title 17 of the
copyright act. Four factors are relevant: basically, the purpose of the use, the amount used, the nature
of the copyrighted work, and the impact of the use on the potential market of the copyright holder.

Therefore, in order to compose in digital networks, and in a fashion that is literate, one needs basic
understanding of copyright and fair use.

Future
The ever-growing processing power of modern computers is used to present the user with an interface
that requires minimal computer skills to operate. Modern software often utilizes buttons, icons and
elaborate pictographic interfaces to try to achieve a high level of usability. Most of the time people use
computers, they do not realize that they are doing so. (Examples: ATMs, car navigation systems,
mobile phones, microwave ovens...)

One of the major goals in computer engineering is the construction of a natural language interface,
possibly with speech recognition, body language recognition and automatic visualisation. This would
eliminate the need for computer literacy in everyday work and life in areas where such machines are
available. An example of a futuristic Natural Language Interface can be found throughout the Star
Trek series, where characters simply tell the computer what they want using ordinary English.

As I sit here typing away on my keyboard, I realize that I am as comfortable holding a computer mouse
as I am holding a pen. That wasn't always the case, though. As a high school senior taking my first
class in BASIC, I marveled at the freshmen who were effortlessly creating error-free and complex
programs. I did not reach that level of comfort around computers until many years later, after being
exposed to them a little in college and more in graduate school. Computers have come a long way
since I took that first class in BASIC and, I guess, so have I.

I was shocked and disappointed when a college professor friend of mine told me about his students,
juniors and seniors at a highly competitive university, who knew little about computers and did not
know how to retrieve an emailed file or operate a simple Windows-based program. "How is that
possible?" I asked. Hadn't computers been a standard part of their education — hadn't they learned
reading, writing, and computing?

Why Is Computer Literacy Necessary?


There is a good chance that, if you are reading this, you have some hands-on computer experience.
However, I do know many people for whom surfing the Web was their first contact with a computer.
Many have not gone further than that. This is written for those people.

In most places of business, a computer is standard. In the bank they use computers to look up your
account information. They use computers in the auto repair shop to assess your car. You can't find
books in the library by looking in a card catalog — you must use a computerized database. Doctors'
offices utilize computers to store patient information. The point is this — no matter where you find
employment, there is a good chance a computer will be a basic tool you will have to use. It is in your
best interests to start off computer literate. It will help you get a job and it will help you advance in
your career. Computer literacy does not mean you need to know how to use every single piece of
software you may encounter. It does not mean you need to know how to write programs or network
computers. You just need to know some basics — how to save and open a file, how to use a word
processing program, and how to send and receive email — for starters. It means having some sort of
level of comfort around computers rather than a look of fear and a feeling of foreboding.

How Do I Become Computer Literate?


Basic computer courses are offered by most continuing education programs. They are usually
reasonably priced and conveniently scheduled. These courses can usually be found in your local school
district or community college, on evenings and weekends.

Career retraining programs often offer computer courses for free or at a low fee for those who qualify.
Check with your local Labor Department Office for more information on these programs.

There are also online courses and tutorials available. You don't have a computer? Don't worry. Many
public libraries allow patrons to use computers with Internet access.

Information literacy

Several conceptions and definitions of information literacy have become prevalent. For example, one
conception defines information literacy in terms of a set of competencies that an informed citizen of an
information society ought to possess to participate intelligently and actively in that society (from [7]).

The American Library Association's (ALA) Presidential Committee on Information Literacy, Final
Report states, "To be information literate, a person must be able to recognize when information is
needed and have the ability to locate, evaluate, and use effectively the needed information" (1989).

Jeremy Shapiro & Shelley Hughes (1996) define information literacy as "A new liberal art that extends
from knowing how to use computers and access information to critical reflection on the nature of
information itself, its technical infrastructure and its social, cultural, and philosophical context and
impact." (from [8])

Information literacy is becoming a more important part of K-12 education. It is also a vital part of
university-level education (Association of College Research Libraries, 2007).

On May 28, 2009, California Governor Arnold Schwarzenegger issued an Executive Order
establishing a California ICT Digital Literacy Leadership Council and an ICT Digital Literacy Advisory
Committee. [9]

On October 1, 2009, U.S. President Barack Obama designated October 2009 "National Information
Literacy Awareness Month". [10]

History of the concept


The phrase information literacy first appeared in print in a 1974 report by Paul G. Zurkowski, written
on behalf of the National Commission on Libraries and Information Science. Zurkowski used the
phrase to describe the "techniques and skills" known by the information literate "for utilizing the wide
range of information tools as well as primary sources in molding information solutions to their
problems".[1]

Subsequently, a number of efforts were made to better define the concept and its relationship to other
skills and forms of literacy. Although other educational goals, including traditional literacy, computer
literacy, library skills, and critical thinking skills, were related to information literacy and important
foundations for its development, information literacy itself was emerging as a distinct skill set and a
necessary key to one's social and economic well-being in an increasingly complex information society.
[2]

A seminal event in the development of the concept of information literacy was the establishment of the
American Library Association's Presidential Committee on Information Literacy, whose 1989 final
report outlined the importance of the concept. The report defined information literacy as the ability "to
recognize when information is needed and have the ability to locate, evaluate, and use effectively the
needed information" and highlighted information literacy as a skill essential for lifelong learning and
the production of an informed and prosperous citizenry.

The committee outlined six principal recommendations: to "reconsider the ways we have organized
information institutionally, structured information access, and defined information's role in our lives at
home in the community, and in the work place"; to promote "public awareness of the problems created
by information illiteracy"; to develop a national research agenda related to information and its use; to
ensure the existence of "a climate conducive to students' becoming information literate"; to include
information literacy concerns in teacher education; and to promote public awareness of the relationship
between information literacy and the more general goals of "literacy, productivity, and democracy." [3]

The recommendations of the Presidential Committee led to the creation later that year of the National
Forum on Information Literacy, a coalition of more than 90 national and international organizations.[4]

In 1998, the American Association of School Librarians and the Association for Educational
Communications and Technology published Information Power: Building Partnerships for Learning,
which further established specific goals for information literacy education, defining some nine
standards in the categories of "information literacy", "independent learning", and "social
responsibility".[5]

In 1999, SCONUL, the Society of College, National and University Libraries in the UK, published
"The Seven Pillars of Information Literacy" [11] model, to "facilitate further development of ideas
amongst practitioners in the field ... stimulate debate about the ideas and about how those ideas might
be used by library and other staff in higher education concerned with the development of students'
skills."[6] A number of other countries have developed information literacy standards since then.

In 2003, the National Forum on Information Literacy, together with UNESCO and the National
Commission on Libraries and Information Science, sponsored an international conference in Prague
with representatives from some twenty-three countries to discuss the importance of information
literacy within a global context. The resulting Prague Declaration described information literacy as a
"key to social, cultural and economic development of nations and communities, institutions and
individuals in the 21st century" and declared its acquisition as "part of the basic human right of life
long learning".[7]

On May 28, 2009, California Governor Arnold Schwarzenegger signed Executive Order S-06-09
establishing a California ICT Digital Literacy Leadership Council, which, in turn, is directed to
establish an ICT Digital Literacy Advisory Committee. "The Leadership Council, in consultation with
the Advisory Committee, shall develop an ICT Digital Literacy Policy, to ensure that California
residents are digitally literate." The Executive Order states further: " ICT Digital Literacy is defined as
using digital technology, communications tools and/or networks to access, manage, integrate, evaluate,
create and communicate information in order to function in a knowledge-based economy and
society..." The Governor directs "...The Leadership Council, in consultation with the Advisory
Committee... [to] develop a California Action Plan for ICT Digital Literacy (Action Plan)." He also
directs "The California Workforce Investment Board (WIB)... [to] develop a technology literacy
component for its five-year Strategic State Plan." His Executive Order ends with the following: " I
FURTHER REQUEST that the Legislature and Superintendent of Public Instruction consider adopting
similar goals, and that they join the Leadership Council in issuing a "Call to Action" to schools, higher
education institutions, employers, workforce training agencies, local governments, community
organizations, and civic leaders to advance California as a global leader in ICT Digital Literacy".[8]

Information literacy rose to national consciousness in the U.S. with President Barack Obama's
Proclamation designating October 2009 as National Information Literacy Awareness Month.[9]
President Obama's Proclamation stated that "Rather than merely possessing data, we must also learn
the skills necessary to acquire, collate, and evaluate information for any situation... Though we may
know how to find the information we need, we must also know how to evaluate it. Over the past
decade, we have seen a crisis of authenticity emerge. We now live in a world where anyone can
publish an opinion or perspective, whether true or not, and have that opinion amplified within the
information marketplace. At the same time, Americans have unprecedented access to the diverse and
independent sources of information, as well as institutions such as libraries and universities, that can
help separate truth from fiction and signal from noise."

His Proclamation ends with: "NOW, THEREFORE, I, BARACK OBAMA, President of the United
States of America, by virtue of the authority vested in me by the Constitution and the laws of the
United States, do hereby proclaim October 2009 as National Information Literacy Awareness Month. I
call upon the people of the United States to recognize the important role information plays in our daily
lives, and appreciate the need for a greater understanding of its impact." [10]

Specific aspects of information literacy (Shapiro and Hughes, 1996)
In "Information Literacy as a Liberal Art", Jeremy J. Shapiro and Shelley K. Hughes advocated a more
holistic approach to information literacy education, one that encouraged not merely the addition of
information technology courses as an adjunct to existing curricula, but rather a radically new
conceptualization of "our entire educational curriculum in terms of information".

Drawing upon Enlightenment ideals like those articulated by Enlightenment philosopher Condorcet,
Shapiro and Hughes argued that information literacy education is "essential to the future of democracy,
if citizens are to be intelligent shapers of the information society rather than its pawns, and to
humanistic culture, if information is to be part of a meaningful existence rather than a routine of
production and consumption".

To this end, Shapiro and Hughes outlined a "prototype curriculum" that encompassed the concepts of
computer literacy, library skills, and "a broader, critical conception of a more humanistic sort",
suggesting seven important components of a holistic approach to information literacy:

 Tool literacy, or the ability to understand and use the practical and conceptual tools of current
information technology relevant to education and the areas of work and professional life that
the individual expects to inhabit.
 Resource literacy, or the ability to understand the form, format, location and access methods
of information resources, especially daily expanding networked information resources.
 Social-structural literacy, or understanding how information is socially situated and
produced.
 Research literacy, or the ability to understand and use the IT-based tools relevant to the work
of today's researcher and scholar.
 Publishing literacy, or the ability to format and publish research and ideas electronically, in
textual and multimedia forms … to introduce them into the electronic public realm and the
electronic community of scholars.
 Emerging technology literacy, or the ability to continuously adapt to, understand, evaluate
and make use of the continually emerging innovations in information technology so as not to
be a prisoner of prior tools and resources, and to make intelligent decisions about the adoption
of new ones.

 Critical literacy, or the ability to evaluate critically the intellectual, human and social strengths
and weaknesses, potentials and limits, benefits and costs of information technologies.[11]

Ira Shor further defines critical literacy as "[habits] of thought, reading, writing, and speaking which
go beneath surface meaning, first impressions, dominant myths, official pronouncements, traditional
clichés, received wisdom, and mere opinions, to understand the deep meaning, root causes, social
context, ideology, and personal consequences of any action, event, object, process, organization,
experience, text, subject matter, policy, mass media, or discourse".[12]

National Forum on Information Literacy


Background

In 1983, the seminal report “A Nation at Risk: The Imperative for Educational Reform” declared that a
“rising tide of mediocrity” was eroding the very foundations of the American educational system. It
was, in fact, the genesis of the current educational reform movement within the United States.
Ironically, the report did not include in its set of reform recommendations the academic and/or the
public library as one of the key architects in the redesign of our K-16 educational system. This report
and several others that followed, in conjunction with the rapid emergence of the information society,
led the American Library Association (ALA) to convene a blue ribbon panel of national educators and
librarians in 1987. The ALA Presidential Committee on Information Literacy was charged with the
following tasks: (1) to define information literacy within the higher literacies and its importance to
student performance, lifelong learning, and active citizenship; (2) to design one or more models for
information literacy development appropriate to formal and informal learning environments
throughout people's lifetimes; and (3) to determine implications for the continuing education and
development of teachers. In the release of its Final Report in 1989, the American Library Association
Presidential Committee on Information Literacy summarized in its opening paragraphs the ultimate
mission of the National Forum on Information Literacy:

“How our country deals with the realities of the Information Age will have enormous impact on our
democratic way of life and on our nation's ability to compete internationally. Within America's
information society, there also exists the potential of addressing many long-standing social and
economic inequities. To reap such benefits, people--as individuals and as a nation--must be
information literate. To be information literate, a person must be able to recognize when information is
needed and have the ability to locate, evaluate, and use effectively the needed information. Producing
such a citizenry will require that schools and colleges appreciate and integrate the concept of
information literacy into their learning programs and that they play a leadership role in equipping
individuals and institutions to take advantage of the opportunities inherent within the information
society. Ultimately, information literate people are those who have learned how to learn. They know
how to learn because they know how knowledge is organized, how to find information, and how to use
information in such a way that others can learn from them. They are people prepared for lifelong
learning, because they can always find the information needed for any task or decision at hand."

Acknowledging that the major obstacle to people becoming information literate citizens, who are
prepared for lifelong learning, "is a lack of public awareness of the problems created by information

illiteracy," the report recommended the formation of a coalition of national organizations to promote
information literacy.

Thus, in 1989, the A.L.A. Presidential Committee established the National Forum on Information
Literacy, a volunteer network of organizations committed to raising public awareness on the
importance of information literacy to individuals, to our diverse communities, to our economy, and to
engaged citizenship participation.

The forum today

Since 1989, the National Forum on Information Literacy has evolved steadily under the leadership of
its first chair, Dr. Patricia Senn Breivik. Today, the Forum represents over 90 national and
international organizations, all dedicated to mainstreaming the philosophy of information literacy
across national and international landscapes, throughout every educational, domestic, and workplace
venue.

Although the initial intent of the Forum was to raise public awareness and support on a national level,
over the last several years, the National Forum on Information Literacy has made significant strides
internationally in promoting the importance of integrating information literacy concepts and skills
throughout all educational, governmental, and workforce development programs. For example, the
National Forum co-sponsored with UNESCO and IFLA several “experts meetings”, resulting in the
Prague Declaration (2003) and the Alexandria Proclamation (2005) each underscoring the importance
of information literacy as a basic fundamental human right and lifelong learning skill.

In the United States, however, information literacy skill development has been the exception and not
the rule, particularly as it relates to the integration of information literacy practices within our
educational and workforce development infrastructures. In a 2000 peer-reviewed publication, Nell K.
Duke found that students in first grade classrooms were exposed to an average of 3.6 minutes of
informational text in a school day.[13] In October 2006, the first national Summit on Information
Literacy brought together well over 100 representatives from education, business, and government to
address America’s information literacy deficits as a nation currently competing in a global
marketplace. This successful collaboration was sponsored by the National Forum on Information
Literacy, Committee for Economic Development, Educational Testing Service, the Institute for a
Competitive Workforce, and National Education Association (NEA). The Summit was held at NEA
headquarters in Washington, D.C.

A major outcome of the Summit was the establishment of a national ICT literacy policy council to
provide leadership in creating national standards for ICT literacy in the United States.

As stated on the Forum’s Main Web page, it recognizes that achieving information literacy has been
much easier for those with money and other advantages. For those who are poor, non-White, older,
disabled, living in rural areas or otherwise disadvantaged, it has been much harder to overcome the
digital divide. A number of the Forum’s members address the specific challenges for those
disadvantaged. For example, The Children’s Partnership advocates for the nearly 70 million children
and youth in the country, many of whom are disadvantaged. The Children’s Partnership currently runs
three programs, two of which specifically address the needs of those with low incomes: the Online Content
for Low-Income and Underserved Americans Initiative, and the California Initiative Program. Another
example is the National Hispanic Council on Aging which is:

Dedicated to improving the quality of life for Latino elderly, families, and communities through
advocacy, capacity and institution building, development of educational materials, technical assistance,
demonstration projects, policy analysis and research (National Hispanic Council on Aging, and,
Mission Statement section).

In the final analysis, the National Forum on Information Literacy will continue to work closely with
educational, business, and non-profit organizations in the U.S. to promote information literacy skill
development at every opportunity, particularly in light of the ever growing social, economic, and
political urgency of globalization, prompting us to re-energize our promotional and collaborative
efforts here at home.

Bibliography

Prague Declaration: “Towards an Information Literate Society”. http://www.infolit.org/2003.html

Alexandria Proclamation: A High Level International Colloquium on Information Literacy and
Lifelong Learning. http://www.infolit.org/2005.html

2006 Information Literacy Summit: American Competitiveness in the Internet Age.
http://www.infolit.org/reports.html

1989 Presidential Committee on Information Literacy: Final Report.
http://www.ala.org/ala/mgrps/divs/acrl/publications/whitepapers/presidential.cfm

1983 A Nation at Risk: The Imperative for Educational Reform.
http://www.ed.gov/pubs/NatAtRisk/index.html

Gibson, C. (2004). Information literacy develops globally: The role of the National Forum on
Information Literacy. Knowledge Quest.
http://www.ala.org/ala/aasl/aaslpubsandjournals/kqweb/kqarchives/vol32/324TOC2.cfm

Breivik, P.S. and Gee, E.G. (2006). Higher education in the internet age: Libraries creating a strategic
edge. Westport, CT: Greenwood Publishing.

Educational schemata

One view of the components of information literacy

Based on the Big6 by Mike Eisenberg and Bob Berkowitz (http://big6.com/).

1. The first step in the Information Literacy strategy is to clarify and understand the requirements
of the problem or task for which information is sought. Basic questions asked at this stage:
1. What is known about the topic?
2. What information is needed?
3. Where can the information be found?
2. Locating: The second step is to identify sources of information and to find those resources.
Depending upon the task, sources that will be helpful may vary. Sources may include books,
encyclopedias, maps, almanacs, etc. Sources may be in electronic, print, social bookmarking
tools, or other formats.
3. Selecting/analyzing: Step three involves examining the resources that were found. The
information must be determined to be useful or not useful in solving the problem. The useful
resources are selected and the inappropriate resources are rejected.
4. Organizing/synthesizing: In the fourth step, the information which has been selected is
organized and processed so that knowledge and solutions are developed. Examples of basic
steps in this stage are:
1. Discriminating between fact and opinion
2. Basing comparisons on similar characteristics
3. Noticing various interpretations of data
4. Finding more information if needed
5. Organizing ideas and information logically
5. Creating/presenting: In step five the information or solution is presented to the appropriate
audience in an appropriate format. A paper is written. A presentation is made. Drawings,
illustrations, and graphs are presented.
6. Evaluating: The final step in the Information Literacy strategy involves the critical evaluation
of the completion of the task or the new understanding of the concept. Was the problem
solved? Was new knowledge found? What could have been done differently? What was done
well?

The Big6 skills have been used in a variety of settings to help those with a variety of needs. For
example, the library of Dubai Women’s College, in Dubai, United Arab Emirates which is an English
as a second language institution, uses the Big6 model for its information literacy workshops.
According to Story-Huffman (2009), using Big6 at the college “has transcended cultural and physical
boundaries to provide a knowledge base to help students become information literate” (para. 8). In
primary grades, Big6 has been found to work well with a variety of cognitive and language levels found
in the classroom.

Differentiated instruction and the Big6 appear to be made for each other. While it seems as though all
children will be on the same Big6 step at the same time during a unit of instruction, there is no reason
students cannot work through steps at an individual pace. In addition, the Big6 process allows for
seamless differentiation by interest (Jansen, 2009, p.32).

A number of weaknesses in the Big6 approach have been highlighted by Philip Doty:

This approach is problem-based, is designed to fit into the context of Benjamin Bloom’s taxonomy of
cognitive objectives, and aims toward the development of critical thinking. While the Big6 approach
has a great deal of power, it also has serious weaknesses. Chief among these are the fact that users
often lack well-formed statements of information needs, as well as the model’s reliance on problem-
solving rhetoric. Often, the need for information and its use are situated in circumstances that are not
as well-defined, discrete, and monolithic as problems (Doty, 2003).

Eisenberg (2004) has recognized that there are a number of challenges to effectively applying the Big6
skills, not the least of which is information overload which can overwhelm students. Part of
Eisenberg’s solution is for schools to help students become discriminating users of information.

Another conception of information literacy

This conception, used primarily in the library and information studies field, and rooted in the concepts
of library instruction and bibliographic instruction, is the ability "to recognize when information is
needed and have the ability to locate, evaluate and use effectively the needed information"
(Presidential Committee on Information Literacy. 1989, p. 1). In this view, information literacy is the
basis for life-long learning.

In the publication Information power: Building partnerships for learning (AASL and AECT, 1998),
three categories, nine standards, and twenty-nine indicators are used to describe the information literate
student. The categories and their standards are as follows:

Category 1: Information Literacy

Standards:

1. The student who is information literate accesses information efficiently and effectively.

2. The student who is information literate evaluates information critically and competently.

3. The student who is information literate uses information accurately and creatively.

Category 2: Independent Learning

Standards:

1. The student who is an independent learner is information literate and pursues information related to
personal interests.

2. The student who is an independent learner is information literate and appreciates literature and other
creative expressions of information.

3. The student who is an independent learner is information literate and strives for excellence in
information seeking and knowledge generation.

Category 3: Social Responsibility

Standards:
1. The student who contributes positively to the learning community and to society is information
literate and recognizes the importance of information to a democratic society.

2. The student who contributes positively to the learning community and to society is information
literate and practices ethical behavior in regard to information and information technology.

3. The student who contributes positively to the learning community and to society is information
literate and participates effectively in groups to pursue and generate information. (AASL and AECT,
1998)

Since information may be presented in a number of formats, the term "information" applies to more
than just the printed word. Other literacies such as visual, media, computer, network, and basic
literacies are implicit in information literacy.

Many of those who are in most need of information literacy are often amongst those least able to
access the information they require:

Minority and at-risk students, illiterate adults, people with English as a second language, and
economically disadvantaged people are among those most likely to lack access to the information that
can improve their situations. Most are not even aware of the potential help that is available to them.
(Presidential Committee on Information Literacy. 1989, para. 7)

As the Presidential Committee report points out, members of these disadvantaged groups are often
unaware that libraries can provide them with the access, training and information they need. As
Osborne (2004) describes, many libraries around the country are finding numerous ways to reach these
disadvantaged groups by discovering their needs in their own environments (including prisons) and
offering them specific services in the libraries themselves.

Evolution of the economy


The change from an economy based on labor and capital to one based on information requires
information literate workers who will know how to interpret information.

Barner's (1996) study of the new workplace indicates significant changes will take place in the future:

 The work force will become more decentralized


 The work force will become more diverse
 The economy will become more global
 The use of temporary workers will increase

These changes will require that workers possess information literacy skills. The SCANS (1991) report
identifies the skills necessary for the workplace of the future. Rather than report to a hierarchical
management structure, workers of the future will be required to actively participate in the management
of the company and contribute to its success. To survive in this information society, workers will need
to possess skills beyond those of reading, writing and arithmetic.
Effect on education


The rapidly evolving information landscape means that education methods and practices must evolve
and adapt accordingly. Information literacy must become a key focus of educational institutions at all
levels. This requires a commitment to lifelong learning and an ability to seek out and identify
innovations that will be needed to keep pace with or outpace changes.[14] Educational methods and
practices, within our increasingly information-centric society, must facilitate and enhance a student's
ability to harness the power of information. Key to harnessing the power of information is the ability to
evaluate information, to ascertain among other things its relevance, authenticity and modernity. The
information evaluation process is a crucial life skill and a basis for lifelong learning.[15] Evaluation
consists of several component processes including metacognition, goals, personal disposition,
cognitive development, deliberation, and decision-making. This is both a difficult and complex
challenge and underscores the importance of being able to think critically.

Critical thinking is an important educational outcome for students.[15] Education institutions have
experimented with several strategies to help foster critical thinking, as a means to enhance information
evaluation and information literacy among students. When evaluating evidence, students should be
encouraged to practice formal argumentation.[16] Debates and formal presentations must also be
encouraged to analyze and critically evaluate information.

Education professionals must underscore the importance of high information quality. Students must be
trained to distinguish between fact and opinion. They must be encouraged to use cue words such as "I
think" and "I feel" to help distinguish between factual information and opinions. Information related
skills that are complex or difficult to comprehend must be broken down into smaller parts. Another
approach would be to train students in familiar contexts. Education professionals should encourage
students to examine "causes" of behaviors, actions and events. Research shows that people evaluate
more effectively if causes are revealed, where available.[14] Such initiatives would help educators make
people more information literate. As a society, we must critically evaluate information to
establish a public demand for high information quality.

Because information literacy skills are vital to future success:

 Information literacy skills must be taught in the context of the overall process.
 Instruction in information literacy skills must be integrated into the curriculum and reinforced
both within and outside of the educational setting.

Education in the USA


Standards

National content standards, state standards, and information literacy skills terminology may vary, but
all have common components relating to information literacy.

Information literacy skills are critical to several of the National Education Goals outlined in the Goals
2000: Educate America Act, particularly in the act's aims to increase "school readiness", "student
achievement and citizenship", and "adult literacy and lifelong learning".[17] Of specific relevance are
the "focus on lifelong learning, the ability to think critically, and on the use of new and existing
information for problem solving", all of which are important components of information literacy.[18]

In 1998, the American Association of School Librarians and the Association for Educational
Communications and Technology published "Information Literacy Standards for Student Learning",
which identified nine standards that librarians and teachers in K-12 schools could use to describe
information literate students and define the relationship of information literacy to independent learning
and social responsibility:

 Standard One: The student who is information literate accesses information efficiently and
effectively.
 Standard Two: The student who is information literate evaluates information critically and
competently.
 Standard Three: The student who is information literate uses information accurately and
creatively.
 Standard Four: The student who is an independent learner is information literate and pursues
information related to personal interests.
 Standard Five: The student who is an independent learner is information literate and
appreciates literature and other creative expressions of information.
 Standard Six: The student who is an independent learner is information literate and strives for
excellence in information seeking and knowledge generation.
 Standard Seven: The student who contributes positively to the learning community and to
society is information literate and recognizes the importance of information to a democratic
society.
 Standard Eight: The student who contributes positively to the learning community and to
society is information literate and practices ethical behavior in regard to information and
information technology.
 Standard Nine: The student who contributes positively to the learning community and to
society is information literate and participates effectively in groups to pursue and generate
information.[5]

In 2007 AASL expanded and restructured the standards that school librarians should strive for in their
teaching. These were published as "Standards for the 21st Century Learner" and address several
literacies: information, technology, visual, textual, and digital. These aspects of literacy were
organized within four key goals: that learners "use skills, resources, & tools" to "inquire, think
critically, and gain knowledge"; to "draw conclusions, make informed decisions, apply knowledge to
new situations, and create new knowledge"; to "share knowledge and participate ethically and
productively as members of our democratic society"; and to "pursue personal and aesthetic growth".[19]

In 2000, the Association of College and Research Libraries (ACRL), a division of the American
Library Association (ALA), released "Information Literacy Competency Standards for Higher
Education", describing five standards and numerous performance indicators considered best practices
for the implementation and assessment of postsecondary information literacy programs. The five
standards are:

 Standard One: The information literate student determines the nature and extent of the
information needed.
 Standard Two: The information literate student accesses needed information effectively and
efficiently.
 Standard Three: The information literate student evaluates information and its sources critically
and incorporates selected information into his or her knowledge base and value system.
 Standard Four: The information literate student, individually or as a member of a group, uses
information effectively to accomplish a specific purpose.
 Standard Five: The information literate student understands many of the economic, legal, and
social issues surrounding the use of information and accesses and uses information ethically
and legally.[20]

These standards are meant to span from the simple to more complicated, or in terms of Bloom's
Taxonomy of Educational Objectives, from the "lower order" to the "higher order". Lower order skills
would involve for instance being able to use an online catalog to find a book relevant to an information
need in an academic library. Higher order skills would involve critically evaluating and synthesizing
information from multiple sources into a coherent interpretation or argument.

K-12 education restructuring

Educational reform and restructuring make information literacy skills a necessity as students seek to
construct their own knowledge and create their own understandings. Today instruction methods have
changed drastically from the mostly one-directional teacher-student model, to a more collaborative
approach in which the students themselves feel empowered. Much of this change is being guided by
the American Association of School Librarians, which published new standards for student learning
in 2007.

Within the K-12 environment, effective curriculum development is vital to imparting Information
Literacy skills to students. Given the already heavy load on students, efforts must be made to avoid
curriculum overload.[21] Eisenberg strongly recommends adopting a collaborative approach to
curriculum development among classroom teachers, librarians, technology teachers, and other
educators. Staff must be encouraged to work together to analyze student curriculum needs, develop a
broad instruction plan, set information literacy goals, and design specific unit and lesson plans that
integrate the information skills and classroom content. These educators can also collaborate on
teaching and assessment duties

Educators are selecting various forms of resource-based learning (authentic learning, problem-based
learning and work-based learning) to help students focus on the process and to help students learn from
the content. Information literacy skills are necessary components of each. Within a school setting, it is
very important that students' specific needs as well as the situational context be kept in mind when
selecting topics for integrated information literacy skills instruction. The primary goal should be to
provide frequent opportunities for students to learn and practice information problem solving.[21] To
this end, it is also vital to facilitate repetition of information-seeking actions and behavior. The
importance of repetition in information literacy lesson plans cannot be overstated, since we tend to
learn through repetition. Students' proficiency will improve over time if they are afforded regular
opportunities to learn and to apply the skills they have learnt.

The process approach to education requires new forms of student assessment. Students demonstrate
their skills, assess their own learning, and evaluate the processes by which this learning has been
achieved by preparing portfolios, learning and research logs, and using rubrics.

Efforts in K-12 education

Information literacy efforts are underway on individual, local, and regional bases.

Many states have either fully adopted AASL information literacy standards [12][13] or have adapted
them to suit their needs.[14] States such as Oregon (OSLIS, 2009) [14] increasingly rely on these
guidelines for curriculum development and setting information literacy goals. Virginia,[22] on the other
hand, chose to undertake a comprehensive review, involving all relevant stakeholders, and to formulate
its own guidelines and standards for information literacy. At an international level, UNESCO and the
IFLA (International Federation of Library Associations and Institutions) jointly produced two framework
documents that laid the foundations for defining the educational role to be played by school libraries,
among them the School Library Manifesto (1999).[23]

Another immensely popular approach to imparting information literacy is the Big6 set of skills.[21]
Eisenberg claims that the Big6 is the most widely used model in K-12 education. This set of skills
seeks to articulate the entire information seeking life cycle. The Big6 is made up of six major stages,
with two sub-stages under each major stage. It defines the six steps as being: task definition,
information seeking strategies, location and access, use of information, synthesis, and evaluation. Such
approaches seek to cover the full range of information problem-solving actions that a person would
normally undertake, when faced with an information problem or with making a decision based on
available resources.

Imaginative Web-based information literacy tutorials such as TILT [24] are being created and integrated
with curriculum areas, or being used for staff development purposes.

"Library media programs" are fostering information literacy by integrating the presentation of
information literacy skills with curriculum at all grade levels. Information literacy efforts are not
limited to the library field, however; they are also being pursued by regional educational consortia.

Efforts in higher education

Information literacy instruction in higher education can take a variety of forms: stand-alone courses or
classes, online tutorials, workbooks, course-related instruction, or course-integrated instruction.

State-wide university systems and individual colleges and universities are undertaking strategic
planning to determine information competencies, to incorporate instruction in information competence
throughout the curriculum and to add information competence as a graduation requirement for
students. The six regional accreditation boards have added information literacy to their standards.[25]
Librarians are often required to teach the concepts of information literacy during
"one shot" classroom lectures. There are also credit courses offered by academic librarians to prepare
college students to become information literate.

Academic library programs are preparing faculty to facilitate their students' mastery of information
literacy skills so that the faculty can in turn provide information literacy learning experiences for the
students enrolled in their classes.

Technology

Information Technology is the great enabler. It provides, for those who have access to it, an extension
of their powers of perception, comprehension, analysis, thought, concentration, and articulation
through a range of activities that include: writing, visual images, mathematics, music, physical
movement, sensing the environment, simulation, and communication (Carpenter, 1989, p. 2).

Technology, in all of its various forms, offers users the tools to access, manipulate, transform,
evaluate, use, and present information.

Technology in schools includes computers, televisions, video cameras, video editing equipment, and
TV studios.

Two approaches to technology in K-12 schools are the technology-as-object-of-instruction approach
and the technology-as-tool-of-instruction approach.

Schools are starting to incorporate technology skills instruction in the context of information literacy
skills. This is called technology information literacy.

Technology is changing the way higher education institutions are offering instruction. The use of the
Internet is being taught in the contexts of subject area curricula and the overall information literacy
process.

There is some empirical indication that students who use technology as a tool may become better at
managing information, communicating, and presenting ideas.

Distance education

Now that information literacy has become a part of the core curriculum at many post-secondary
institutions, it is incumbent upon the library community to be able to provide information literacy
instruction in a variety of formats, including online learning and distance education. The Association
of College and Research Libraries (ACRL) addresses this need in its Guidelines for Distance
Education Services (2000):

“Library resources and services in institutions of higher education must meet the needs of all their
faculty, students, and academic support staff, wherever these individuals are located, whether on a
main campus, off campus, in distance education or extended campus programs—or in the absence of a
campus at all, in courses taken for credit or non-credit; in continuing education programs; in courses
attended in person or by means of electronic transmission; or any other means of distance education.”

Within the e-learning and distance education worlds, providing effective information literacy programs
brings together the challenges of both distance librarianship and instruction. With the prevalence of
course management systems such as WebCT and Blackboard, library staff are embedding information
literacy training within academic programs and within individual classes themselves (Presti, 2002).

Global Information Literacy


The International Federation of Library Associations and Institutions (IFLA) has established an
Information Literacy Section. The Section has, in turn, developed and mounted an Information
Literacy Resources Directory, called InfoLit Global. Librarians, educators and information
professionals may self-register and upload information-literacy-related materials (IFLA, Information
Literacy Section, n.d.). According to the IFLA website, "The primary purpose of the Information
Literacy Section is to foster international cooperation in the development of information literacy
education in all types of libraries and information institutions." http://www.ifla.org/en/about-
information-literacy.

Virtual Organisations
Virtual organisations are loosely coupled, largely independent organisations cooperating in
an ad hoc manner for a specific goal. These characteristics allow for switching between
partners who can play a similar role. Mowshowitz [1] even considers the systematic use and
management of this flexibility to offer services where and when they are most needed, as
the fundamental aspect of virtual organisations. Even a virtual organisation requires
individuals and organisations to exchange communication, information and resources (e.g.
money) for various reasons such as synchronisation, agreements, to establish common goals
and for redistributions of rewards and efforts. If we consider the existence of agreements as
the essential aspect of an organisation, we see that there is a continuum between real and
virtual organisations: formal existing agreements between partners can be re-negotiated and
new agreements can emerge on the fly in both real and virtual organisations. From this
perspective, many human activities in commerce, healthcare, government and science
involve a virtual organisation.
The goal of this paper is to describe the design and results of the TeleCare pilot project
in which general practitioners and other healthcare professionals associated with the Stroke
Service Enschede (The Netherlands) are given mobile and ad hoc access to patient records.
We identify the stroke service as a virtual organisation with ad hoc collaboration around a
common task, but with some existing formal agreements to facilitate the process. The pilot
project provided a rudimentary virtual administrative domain centred around the patient
that spans several information systems and crosses organisation boundaries (technological
aspect). We discuss the changes in people’s working practices caused or required by this
technology (social aspect). The societal importance of this research is emphasised by the
finding of a nationwide Dutch survey [2] that a significant number of medical mistakes result
from incorrect or incomplete information in patient records and the lack of easy
communication and collaboration agreements to check the patient files. This illustrates the
urgency of improving support for virtual organisations in the medical field.


REFERENCES

 Computerized Manufacturing Automation: Employment, Education and the Workplace.
Washington, DC: US Congress, Office of Technology Assessment, OTA CIT-235, April 1984, page 234.

 American Association of School Librarians and Association for Educational Communications
and Technology. (1988). Information power: Guidelines for school library media programs.
Chicago: Author. (ED 315 028)

 American Library Association and Association for Educational Communications and
Technology. (1998). Information power: Building partnerships for learning. Chicago: Author.

 American Library Association Presidential Committee on Information Literacy. (1989). Final
report. Chicago: Author. (ED 315 074)

 American Library Association. Association of College Research Libraries. (2000). Information
literacy competency standards for higher education. Retrieved November 3, 2007, from
http://www.ala.org/ala/acrl/acrlstandards/informationliteracycompetency.htm

 Association of College Research Libraries (2007). The First-Year Experience and Academic
Libraries: A Select, Annotated Bibliography. Retrieved April 20, 2008, from
http://www.ala.org/ala/acrlbucket/is/publicationsacrl/tmcfyebib.cfm

 Barner, R. (1996, March/April). Seven changes that will challenge managers-and workers. The
Futurist, 30(2), 14-18.

 Breivik. P. S., & Senn, J. A. (1998). Information literacy: Educating children for the 21st
century. (2nd ed.). Washington, DC: National Education Association.

 Carpenter, J. P. (1989). Using the new technologies to create links between schools throughout
the world: Colloquy on computerized school links. (Exeter, Devon, United Kingdom, 17–20
October 1988).

 Doty, P. (2003). Bibliographic instruction: The digital divide and resistance of users to
technologies. Retrieved July 12, 2009, from
http://www.ischool.utexas.edu/~l38613dw/website_spring_03/readings/BiblioInstruction.html


 Doyle, C.S. (1992). Outcome Measures for Information Literacy Within the National
Education Goals of 1990. Final Report to National Forum on Information Literacy. Summary
of Findings.

 Eisenberg, M. (2004). Information literacy: The whole enchilada [PowerPoint Presentation].
Retrieved July 14, 2009, from http://www.big6.com/presentations/sreb/

 Eisenberg, M., Lowe, C., & Spitzer, K. (2004). Information Literacy: Essential Skills for the
Information Age. 2nd. edition. Libraries Unlimited.

 Grassian, E. (2004). Information Literacy: Building on Bibliographic Instruction. American
Libraries, 35(9), 51-53.

 Grassian, E.S., & Kaplowitz, J.R. (2001). Information Literacy Instruction: Theory and
Practice. New York: Neal-Schuman Publishers, Inc.

 ______. (2009). Information Literacy Instruction: Theory and Practice. 2d ed. New York:
Neal-Schuman Publishers, Inc.

 Hashim, E. (1986). Educating students to think: The role of the school library media program,
an introduction. In Information literacy: Learning how to learn. A collection of articles from
School Library Media Quarterly, (15)1, 17-18.

 IFLA, Information Literacy Section. n.d. InfoLit Global. Retrieved February 23, 2008 from
http://www.infolitglobal.info/

 Jansen, B. A. (2009). Differentiating instruction in the primary grades with the Big6. Library
media connection, 27(4), 32-33.

 Koechlin, C., & Zwaan, S. (2003). Build your own information literate school. Salt Lake City,
UT: Hi Willow Research & Publishing.

 Kuhlthau, C. C. (1987). Information skills for an information society: A review of research.
Syracuse, NY: ERIC Clearinghouse on Information Resources. (ED 297 740)

 Lorenzen. M. (2001). The Land of Confusion? High School Students and Their Use of the Web
for Research. Research Strategies 18(2): 151-163.

 National Commission of Excellence in Education. (1983). A Nation at risk: The imperative for
educational reform. Washington, DC: U.S. Government Printing Office. (ED 226 006)

 National Hispanic Council on Aging. (nd). Mission statement. Retrieved July 13, 2009, from
National Forum on Information Literacy Web site.


 Obama, B. (2009). Presidential Proclamation: National Information Literacy Awareness
Month, 2009. Washington, DC: U.S. Government Printing Office. Retrieved October 27, 2009
from http://www.whitehouse.gov/assets/documents/2009literacy_prc_rel.pdf

 Osborne, R. (Ed.). (2004). From outreach to equity: Innovative models of library policy and
practice. Chicago: American Library Association.

 Presti, P. (2002). Incorporating information literacy and distance learning within a course
management system: a case study. Ypsilanti, MI: Loex News, (29)2-3, 3-12-13. Retrieved
February 3, 2004 from http://www.emich.edu/public/loex/news/ln290202.pdf

 Ryan, J., & Capra, S. (2001). Information literacy toolkit. Chicago: American Library
Association.

 Schwarzenegger, A. (2009). Executive order S-06-09. Sacramento, CA. Retrieved October 27,
2009 from http://gov.ca.gov/executive-order/12393/

 SCONUL. (2007). The Seven Pillars of Information Literacy model. Retrieved November 3,
2010 from http://www.sconul.ac.uk/groups/information_literacy/sp/model.html

 Secretary's Commission on Achieving Necessary Skills. (1991). What work requires of
schools: A SCANS report for America 2000. Washington, DC: U.S. Government Printing
Office. (ED 332 054)

 Shapiro, J., & Hughes, S (1996). Information Literacy as a Liberal Art: Enlightenment
proposals for a new curriculum. Educom Review
(http://www.educause.edu/pub/er/review/reviewarticles/31231.html).

 Story-Huffman, R. (2009). Big6 and higher education: Big6 transcends boundaries. Retrieved
July 13, 2009, from http://www.big6.com/2009/06/17/big6-and-higher-education-big6-transcends-boundaries-1022/

11. What special type of risk must be addressed by e-commerce systems? Analyze and Discuss.

ANSWER

Introduction


The protection of electronic commerce systems pulls together a lot of the topics discussed
in previous chapters. Failures come from misconfigured access control, implementation
blunders, theft of network services, inappropriate use of cryptology—you
name it. In this chapter, I’ll cover some protection issues specific to e-commerce, such
as how online credit card payments are handled, and what goes wrong.
If you are a programmer building e-commerce systems for a dot-com startup, much
of the material in this chapter should be fairly familiar to you. You are much more
likely to get value from the chapters on access control, network security, and (particularly)
on banking. The most likely attacks on your business don’t involve the vulnerabilities
in the Internet protocol suite or the payment infrastructure—and you can’t do
anything about those anyway.
The typical e-business startup appears to be most at risk from internal fraud. This is
where most frauds in normal businesses come from, despite things like double-entry
bookkeeping, which have evolved over centuries to control them. Many startups have
none of these internal controls. They begin as a few people who all know each other
and, if successful at raising capital, rapidly hire a lot of new staff who’re focused on
money and not very carefully screened. A survey found in October 2000, for example,
that 37% of dot-com executives have shady pasts, compared with 10% found in due
diligence checks on normal companies [257].

A Telegraphic History of E-Commerce

Many of the problems afflicting e-businesses stem from the popular notion that e-commerce
is something completely new, invented in the mid-1990s. This is simply
untrue.
Various kinds of visual signalling were deployed from classical times. Systems included
heliographs (which used mirrors to flash sunlight at the receiver), semaphores
(which used the positions of moving arms to signal letters and numbers), and flags.
Land-based systems sent messages along chains of beacon towers, and naval systems
between ships. To begin with, their use was military, but after the Napoleonic War, the
French government opened its heliograph network to commercial use. Very soon, the
first frauds were carried out. For two years, until they were discovered in 1836, two
bankers bribed an operator to signal the movements of the stock market to them covertly
by making errors in transmissions, which they could observe from a safe distance.
Other techniques were devised to signal the results of horseraces. Various laws were
passed to criminalize this kind of activity, but they were ineffective. The only solution
for the bookies was to “call time” by a clock, rather than waiting for the result and
hoping that they were the first to hear it.
From the 1760s to the 1840s, the electric telegraph was developed by a number of
pioneers, of whom the most influential was Samuel Morse. He persuaded Congress in
1842 to fund an experimental line from Washington to Baltimore; this so impressed
people that serious commercial investment started, and by the end of that decade, there
were 12,000 miles of line being operated by 20 companies. This was remarkably like
the Internet boom of the late 1990s [729].
Banks were the first big users of the telegraph, and they decided that they needed
technical protection mechanisms to prevent transactions being altered by crooked operators
en route. (I discussed the test key systems they developed for the purpose in the
chapter on banking systems.) Telegrams were also used to create national markets. For
the first time, commodity traders in New York could find out within minutes the prices
that had been set in auctions in Chicago; likewise, fishing skippers arriving in Boston
could find out the price of cod in Gloucester. A recent history of the period shows that
most of the concepts and problems of electronic commerce were familiar to the Victorians
[729]. How do you know who you’re speaking to? How do you know if they’re
trustworthy? How do you know whether the goods will be delivered, and whether
payments will arrive? The answers found in the nineteenth century involved intermediaries—
principally banks that helped businesses manage risk using instruments such as
references, guarantees, and letters of credit.
In the 1960s, banks in many countries computerized their bookkeeping, and introduced
national interbank systems for handling direct payments to customer accounts,
enabling banks to offer services such as payroll to corporate customers. In the early
1970s, this was extended to international payments, as described in the banking systems
chapter. The next large expansion of electronic commerce took place in the late
1970s to mid-1980s with the spread of electronic data interchange (EDI). Companies
ranging from General Motors to Marks and Spencer built systems that enabled them to
link up their computers to their suppliers’, so that goods could be ordered automatically.
Travel agents built similar systems to order tickets in real time from airlines.

In 1985, the first retail electronic banking system was offered by the Bank of Scotland,
whose customers could use Prestel, a proprietary email system operated by British
Telecom, to make payments. When Steve Gold and Robert Schifreen hacked
Prestel—as described in the chapter on passwords—it initially terrified the press and
the bankers. They realized that the hackers could easily have captured and altered
transactions. But once the dust settled and people thought through the detail, it became
clear there was little real risk. The system allowed only payments between your own
accounts and to accounts which you’d previously notified to the bank, such as your gas
and electricity suppliers.
This pattern, of high-profile hacks—which caused great consternation but which, on
sober reflection, turned out to be not really a big deal—has continued ever since.
To resume this brief history, the late 1980s and early 1990s saw the rapid growth of
call centers, which—despite all the hoopla about the Web—remain in 2000 by far the
largest delivery channel for business-to-consumer electronic commerce. As for the Internet,
it was not something that suddenly sprung into existence in 1995, as a Martian
monitoring our TV channels might believe. The first time I used an online service to
sell software I’d written was in 1984 or 1985; and I first helped the police investigate
an online credit card fraud in 1987. In the latter case, the bad guy got a list of hot credit
card numbers from his girlfriend, who worked in a supermarket; he used them to buy
software from companies in California, which he downloaded to order for his customers.
This worked because hot card lists at the time carried only those cards that were
being used fraudulently in that country; it also guaranteed that the bank would not be
able to debit an innocent customer. As it happens, the criminal quit before there was
enough evidence to nail him. A rainstorm washed away the riverbank opposite his
house and exposed a hide which the police had built to stake him out.
The use of credit cards to buy stuff electronically “suddenly” became mainstream in
about 1994 or 1995, when the public started to go online in large numbers. Suddenly
there was a clamor that the Internet was insecure, that credit card numbers could be
harvested on a huge scale, and that encryption would be needed.

An Introduction to Credit Cards

For many years after their invention in the 1950s, credit cards were treated by most
banks as a loss leader used to attract high-value customers. Eventually, in most countries,
the number of merchants and cardholders reached critical mass, and the transaction
volume took off. In Britain, it took almost 20 years before most banks found the
business profitable; then all of a sudden it became extremely profitable. The credit card
system is now extremely well entrenched as the payment mechanism used on the Net.
Because of the huge investment involved in rolling out a competitor to tens of thousands
of banks, millions of merchants, and billions of customers worldwide, any new
payment mechanism is likely to take some time to get established—with a possible
exception, which I’ll discuss shortly. When you use a credit card to pay for a purchase in a store, the transaction
flows from the merchant to her bank (the acquiring bank),
which pays her after deducting a merchant discount of, typically, 4–5%. If the card was
issued by a different bank, the transaction next flows to a switching center run by the
brand (such as VISA), which takes a commission and passes it to the issuing bank for
payment. Daily payments between the banks and the brands settle the net cash flows.
The issuer also gets a slice of the merchant discount, but makes most of its money from
extending credit to cardholders at rates usually much higher than the interbank rate.
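
As a rough illustration of the flow just described, the following Python sketch shows how a single
purchase might be split between the parties. Only the 4-5% merchant discount comes from the text;
the brand commission and the issuer's slice are invented placeholder figures, so this is a sketch of
the structure rather than of real interchange economics.

def settle_purchase(amount, merchant_discount=0.045, brand_fee=0.001, issuer_share=0.015):
    """Illustrative split of one card purchase between the parties described above.

    Only the 4-5% merchant discount is taken from the text; the brand commission
    and the issuer's share of the discount are invented placeholder values.
    """
    discount = amount * merchant_discount     # withheld from the merchant
    merchant_receives = amount - discount
    brand_commission = amount * brand_fee     # taken by the switching brand (e.g. VISA)
    issuer_slice = amount * issuer_share      # issuer's slice of the merchant discount
    acquirer_keeps = discount - brand_commission - issuer_slice
    return {
        "merchant": round(merchant_receives, 2),
        "acquiring_bank": round(acquirer_keeps, 2),
        "brand": round(brand_commission, 2),
        "issuing_bank": round(issuer_slice, 2),
    }

print(settle_purchase(100.00))
# {'merchant': 95.5, 'acquiring_bank': 2.9, 'brand': 0.1, 'issuing_bank': 1.5}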

Fraud

The risk of fraud from stolen cards was traditionally managed by a system of hot card
lists and merchant floor limits. Each merchant gets a local hot card list—formerly on
paper, now stored in her terminal—plus a limit set by their acquiring bank, above
which they have to call for authorization. The call center, or online service, which she
uses for this has access to a national hot card list; above a higher limit, they will contact
the brand which has a complete list of all hot cards being used internationally;
above a still higher limit, the transaction will be checked all the way back to the card
issuer.
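
The following Python sketch models the tiered checking just described; the limit values, list names
and return strings are invented placeholders, and a real authorization network is considerably more
involved.

def authorize(card_number, amount,
              local_hot_list, national_hot_list, international_hot_list,
              floor_limit=50, national_limit=500, international_limit=5000):
    """Toy model of the escalating hot-card checks described in the text.

    All limits and list names are illustrative placeholders, not real scheme rules.
    """
    if card_number in local_hot_list:          # list held in the merchant's own terminal
        return "decline"
    if amount <= floor_limit:                  # below the floor limit: no call needed
        return "approve offline"
    if card_number in national_hot_list:       # acquirer / call-centre check
        return "decline"
    if amount <= national_limit:
        return "approve (national check)"
    if card_number in international_hot_list:  # brand-level check against the full list
        return "decline"
    if amount <= international_limit:
        return "approve (brand check)"
    return "refer to issuer"                   # highest tier: check back to the issuing bank
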
The introduction of mail order and telephone order (MOTO) transactions in the
1970s meant that the merchant did not have the customer present, and was not able to
inspect the card. What was to stop a crook ordering goods using a credit card number
he’d picked up from a discarded receipt?
Banks managed the risk by using the expiry date as a password, lowering the floor
limits, increasing the merchant discount, and insisting on delivery to a cardholder address,
which is supposed to be checked during authorization. But the main change was
to shift liability so that the merchant bore the full risk of disputes. If you challenge an
online credit card transaction (or in fact any transaction made under MOTO rules), the
full amount is immediately debited back to the merchant, together with a significant
handling fee. The same procedure applies whether the debit is a fraud, a dispute, or a
return.
Of course, having the cardholder present doesn’t guarantee that fraud will be rare.
For many years, most fraud was done in person with stolen cards, and the stores that
got badly hit tended to be those selling goods that can be easily fenced, such as jewelry
and consumer electronics. Banks responded by lowering their floor limits. More recently,
as technical protection mechanisms have improved, there has been an increase
in scams involving cards that were never received by genuine customers. This pre-issue
fraud can involve thefts from the mail of the many “pre-approved” cards that arrive
in junk mail, or even applications made in the names of people who exist and are
creditworthy, but are not aware of the application (identity theft). These attacks on the
system are intrinsically hard to tackle using purely technical means.

Forgery

In the early 1980s, electronic terminals were introduced through which a sales clerk
could swipe a card and get an authorization automatically. But the sales draft was still
captured from the embossing, so crooks figured out how to re-encode the magnetic
strip of a stolen card with the account number and expiry date of a valid card, which
they often got by fishing out discarded receipts from the trash cans of expensive restaurants.
A re-encoded card would authorize perfectly, but when the merchant submitted

the draft for payment, the account number didn't match the authorization code (a six-digit
number typically generated by encrypting the account number, date, and amount).
The merchants didn’t get paid, and raised hell.
Banks responded in the mid-1980s by introducing terminal draft capture, where a
sales draft is printed automatically using the data on the card strip. The crooks’ response
was a flood of forged cards, many produced by Triad gangs: between 1989 and
1992, magnetic strip counterfeiting grew from an occasional nuisance into half the total
fraud losses [6]. VISA’s response was card verification values (CVVs), three-digit
MACs computed on the card strip contents (account number, version number, expiry
date) and written at the end of the strip. They worked well at first; in the first quarter of
1994, VISA International’s fraud losses dropped by 15.5%, while Mastercard’s rose
67% [165]. Subsequently, Mastercard adopted similar checksums, too.
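
The real CVV algorithm is DES-based and proprietary, so the Python sketch below uses HMAC-SHA-256
purely as a stand-in; it only illustrates the general construction of a short MAC computed over the
strip contents (account number, version number, expiry date) and truncated to three digits.

import hmac, hashlib

def card_verification_value(issuer_key: bytes, pan: str, version: str, expiry: str) -> str:
    """Illustrative three-digit MAC over the magnetic-strip fields named in the text.

    This is NOT the real VISA/Mastercard algorithm; HMAC-SHA-256 stands in for it.
    """
    strip_contents = f"{pan}|{version}|{expiry}".encode()
    mac = hmac.new(issuer_key, strip_contents, hashlib.sha256).digest()
    # Truncate the MAC to three decimal digits, as the value written on the strip is short.
    return f"{int.from_bytes(mac[:4], 'big') % 1000:03d}"

print(card_verification_value(b"issuer-secret-key", "4000123412341234", "1", "12/27"))
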
The crooks’ response was skimming—operating businesses where genuine customer
cards were swiped through an extra, unauthorized, terminal to grab a copy of the magnetic
strip, which would then be re-encoded on a genuine card. The banks’ response
was intrusion detection systems, which in the first instance tried to identify criminal
businesses by correlating the previous purchase histories of customers who complained.
In the late 1990s, credit card fraud rose sharply due to another simple innovation in
criminal technology: the crooked businesses that skim card data absorb the cost of the
customer’s transaction rather than billing it. You have a meal at a Mafia-owned restaurant,
offer a card, sign the voucher, and fail to notice when the charge doesn’t appear
on your bill. Perhaps a year later, there is suddenly a huge bill for jewelry, electronic
goods, or even casino chips. By then you’ve completely forgotten about the meal, and
the bank never had a record of it [318].

Automatic Fraud Detection

Consequently, a lot of work was done in the 1990s on beefing up intrusion detection.
There are a number of generic systems that do anomaly detection, using techniques
such as neural networks, but it’s unclear how effective they are. When fraud is down
one year, it’s hailed as a success for the latest fraud-spotting system [61]; when the
figures go up a few years later, the vendors let the matter pass quietly [714].
More convincing are projects undertaken by specific store chains that look for
known patterns of misuse. For example, an electrical goods chain in the New York area
observed that offender profiling (by age, sex, race, and so on) was ineffective, and used
purchase profiling instead to cut fraud by 82% in a year. Its technique involved not just
being suspicious of high-value purchases, but training staff to be careful when customers
were careless about purchases and spent less than the usual amount of time discussing
options and features. These factors can be monitored online, too, but one
important aspect of the New York success is harder for a Web site: employee rewarding.
Banks give a $50 reward per bad card captured, which many stores just keep, so
their employees don’t make an effort to spot cards or risk embarrassment by confronting
a customer. In New York, some store staff were regularly earning a weekly bonus
of $150 or more [525].
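
A toy Python version of the purchase-profiling idea described above follows; the features (an unusually
large purchase, little time spent choosing, no interest in options or features) come from the text, but
the weights and thresholds are invented, and a real system would be tuned on the store's own history.

def purchase_risk_score(amount, seconds_browsing, questions_asked, typical_amount=120.0):
    """Toy purchase-profiling rule in the spirit of the New York example above."""
    score = 0
    if amount > 3 * typical_amount:    # suspiciously large purchase
        score += 2
    if seconds_browsing < 60:          # careless, hurried selection
        score += 2
    if questions_asked == 0:           # no interest in options or features
        score += 1
    return "refer for manual review" if score >= 3 else "ok"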

With the human out of the loop at the sales end, the only psychology a
site designer can leverage is that of the villain. It has been suggested that an e-commerce
site should have an unreasonably expensive “platinum” option, which few
genuine customers will want to buy [721]. This performs two functions. First, it allows

Economics

There’s a lot of misinformation about credit card fraud, with statistics quoted selectively
to make points. In one beautiful example, VISA was reported to have claimed
that card fraud was up and that card fraud was down, on the same day [380].
However, a consistent pattern of figures can be dug out of the trade publications.
The actual cost of credit card fraud, before the recent rise, was about 0.15% of all international
transactions processed by VISA and MasterCard [652], while national rates
varied from 1% in America to 0.2% in the U.K. to under 0.1% in France and Spain.
The prevailing business culture has a large effect on the rate. U.S. banks, for example,
are much more willing to send out huge junk mailings of pre-approved cards to increase
their customer base, and write off the inevitable pre-issue fraud as a cost of doing
business. In other countries, banks are more risk-averse.
France is interesting, as it seems, at first sight, to be an exceptional case, in which a
particular technology has brought real benefits. French banks introduced chip cards for
all domestic transactions in the late 1980s, and this reduced losses from 0.269% of
turnover in 1987 to 0.04% in 1993 and 0.028% in 1995. However, there is now an increasing
amount of cross-border fraud. French villains use foreign magnetic stripe
cards— particularly from Britain [315, 652]—while French chip cards are used at merchants
in non-chip countries [166]. But the biggest reduction in Europe was not in
France but in Spain, where the policy was to reduce all merchant floor limits to zero
and make all transactions online. This cut their losses from 0.21% of turnover in 1988
to 0.008% in 1991 [73].
The lessons appear to be that, first, card fraud is cyclical, as new defenses are introduced
and the villains learn to defeat them; and second, that the most complicated and
expensive technological solution doesn’t necessarily work best in the field.

Online Credit Card Fraud: The Hype and the Reality

We turn now from traditional credit card fraud to the online variety. There was great
anxiety in the mid-1990s that the use of credit cards on the Internet would lead to an
avalanche of fraud, as “Evil Hackers” intercepted emails and Web forms, and harvested credit card numbers by the
million. These fears drove banks and software vendors
to devise two protocols to protect Web-based credit card transactions: SSL and
SET, which I’ll explain in the next section.
The hype surrounding this type of fraud has been grossly overdone. Intercepting
email is indeed possible, but it’s surprisingly difficult in practice—so much so that
governments are bullying ISPs to install snooping devices on their networks to make
court-authorized wiretaps easier [114]. But the cost of such devices is so high that the
ISPs are resisting this pressure as forcefully as they can. I’ll go into this further in
Chapter 21. And although it is possible to redirect a popular Web page to your own site
using tricks such as DNS cache poisoning, it’s a lot simpler to tap the plain old telephone
system—and no one worries much about the few credit card numbers that get
harvested from hotel guests this way.
Credit card numbers are indeed available on the Net, but usually because someone
hacked the computer of a merchant who disobeyed the standard bank prohibition
against retaining customer credit card numbers after being paid. (As this book was going
to press, VISA announced that, starting in 2001, all its merchants will have to obey
10 new security rules; for example, they must install a firewall, keep security patches
up to date, encrypt stored and transmitted data, and regularly update antivirus software
[752].) Likewise, fraudulent Web-based transactions do occur, but mainly because of
poor implementation of the system whereby cardholder addresses are checked during
authorization. The real problem facing dot-coms is disputes.
It is easy to repudiate a transaction. Basically, all the customer has to do is call the
credit card company and say, “I didn’t authorize that,” and the merchant will be saddled
with the bill. This was workable in the days when almost all credit card transactions
took place locally, and most were for significant amounts. If a customer
fraudulently repudiated a transaction, the merchant would pursue them through the
courts and harass them using local credit reference agencies. In addition, the banks’
systems are often quite capable of verifying local cardholder addresses.
But the Internet differs from the old mail order/telephone order regime, in that many
transactions are international, amounts often are small, and verifying overseas addresses
via the credit card system is problematic. Often, all the call center operator can
do is check that the merchant seems confident when reading an address in the right
country. Thus, the opportunity for repudiating transactions—and getting away with
it—is hugely increased. There are particularly high rates of repudiation of payment to
porn sites. No doubt some of these disputes happen when a transaction made under the
influence of a flush of hormones turns up on the family credit card bill, and the cardholder
has to repudiate it to save his marriage; but many are the result of blatant fraud
by operators.
At the time of writing, the press was reporting that the Federal Trade Commission
was prosecuting the operators of scores of adult Web sites, including playboy.com, for
billing thousands of users for supposedly free services. The scam was to offer a “free
tour” of the site, demand a credit card number, supposedly to verify that the user was
over 18, and then bill him anyway. Some sites billed other consumers who have never
visited them at all [389]. (Of course, none of this should have surprised the student of
more traditional telecomms fraud, as it’s just cramming in a new disguise.) If even apparently
large and “respectable” Web sites such as playboy.com indulge in such practices,
it’s much easier for consumers to get away with fraudulently repudiating transactions.

The critical importance of this for online businesses is that, if more than a small percentage
of your transactions are challenged by customers, your margins will be eroded;
and in extreme cases your bank may withdraw your card acquisition service. It has
been reported that the collapse of sportswear merchant boo.com was because it had too
many returns: its business model assumed a no-quibble exchange or refund policy. But
too many of its shipments were the wrong size, or the wrong color, or just didn’t appeal
to the customers. In the end, the credit card penalties were the straw that broke the
camel’s back [721].
This history suggests that technological fixes may not be as easy as many vendors
claim, and that the main recourses will be essentially procedural. American Express
has announced that it will offer its customers credit card numbers that can be used once
only; this will protect customers from some of the scams. In order not to run out of
numbers, they will issue them one at a time to customers via their Web site (which will
drive lots of traffic to them) [204]. Many other bankers are already coming to the conclusion
that the way forward lies with better address verification, rather than with
cryptography [62].
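
To make the idea of once-only card numbers concrete, here is a hedged Python sketch of an issuer-side
vault that hands out a fresh surrogate number per purchase and rejects any replay; the class and field
names are invented, and this is not a description of the actual American Express scheme.

import secrets

class OneTimeNumberVault:
    """Toy issuer-side vault for single-use card numbers (illustrative only)."""

    def __init__(self):
        self._live = {}                      # one-time number -> real account

    def issue(self, real_account: str) -> str:
        one_time = "9" + "".join(secrets.choice("0123456789") for _ in range(15))
        self._live[one_time] = real_account  # remember which account it stands for
        return one_time

    def redeem(self, number_presented: str):
        # Valid exactly once: pop() removes the mapping as it is used.
        return self._live.pop(number_presented, None)

vault = OneTimeNumberVault()
n = vault.issue("real-account-42")
assert vault.redeem(n) == "real-account-42"
assert vault.redeem(n) is None               # a harvested, already-spent number is worthless
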
However, if you’re working as a security engineer, then a lot of your clients will
want to talk about technical matters, such as encrypting credit card numbers, so we’ll
look at the available crypto mechanisms anyway.


Protecting E-Commerce Systems


C → S: {Order}KC, {Payment}KB, sigKC{h(Order), h(Payment)}
S → B: {Summary}KB, {Payment}KB
B → S: sigKB{Auth_response}
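
The third element of the customer's first message, sigKC{h(Order), h(Payment)}, is SET's "dual
signature": the customer signs the pair of hashes so that the merchant can verify the order half and
the bank the payment half without either seeing the other document. A minimal Python sketch of that
construct, assuming the third-party cryptography package and invented message contents:

from hashlib import sha256
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

# Customer's signing key (KC above); generated here purely for illustration.
customer_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

order = b"order: 1 x widget, ship to ..."           # placeholder order document
payment = b"payment: card 4000..., amount 19.99"    # placeholder payment instruction

# Sign the concatenated hashes rather than the documents themselves, so each recipient
# needs only its own document plus the other document's hash to check the signature.
digest_pair = sha256(order).digest() + sha256(payment).digest()
dual_signature = customer_key.sign(digest_pair, padding.PKCS1v15(), hashes.SHA256())

# The merchant, holding Order and h(Payment), rebuilds digest_pair and verifies:
customer_key.public_key().verify(dual_signature, digest_pair, padding.PKCS1v15(), hashes.SHA256())
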
SET appears to have met its specifications, but failed to succeed in the marketplace.
The reasons are instructive.
First, the benefits turned out to be less than expected. Many large merchants
were breaking their cardholder agreements by retaining customer credit card
numbers—principally for use as indexes in marketing databases—and were not
prepared to stop using them. So a feature was added whereby merchants could
get the credit card number from the acquiring bank. This was thought to negate
much of the hoped-for security improvement. (In fact, it wasn’t that bad as
banks could have issued credit card numbers that were valid only for SET
transactions, so stealing them wouldn’t have mattered.)
Second, the costs were too high. Building a public key infrastructure to issue
all credit cardholders with public key certificates would have been enormously
expensive. Performance was also an issue.
Third, there was nothing in it for the customers. Customers trading on the Web
under MOTO rules could reverse the transaction if they were unhappy—not
just about the payment, but about the service, the product, or anything else.
Using SET transferred them to cardholder-present rules, and in many countries
removed this protection. Thus, customers were much worse off and would
have been insane to use SET. Also, installing SET usually involved downloading
megabytes of SET wallet and going through a laborious certification
procedure.
In the end, SET cost too much and delivered too little; and as far as the customers
were concerned, it was a disaster. It is being allowed to expire quietly. The main lesson
is, perhaps, that when designing systems for e-business, you should deal with issues as
they are in practice, rather than in theory, and think about how your design will affect
the interests of principals other than your client.

Many governments are thinking of issuing their citizens with public key certificates,
probably in smartcards, as next-generation identity cards. Although
most businesses are really only interested in whether they will be paid, governments
offer a range of services (such as tax and welfare) that can be
cheated by people who can masquerade as more than one person. So there is
much government interest in promoting the use of PKI technology, and this
has led to legislation, which I’ll discuss in Chapter 21.
The classic examples of closed PKIs are to be found in the networks operated by
military agencies and by banking service providers such as SWIFT, which use asymmetric
cryptography but do not publish any keys. Now that Win2K includes SSL as an
authentication mechanism, it can be used to set up secure wide area networking across
a number of scattered sites in a company; and if the number of sites is at all large, this
may involve the company operating its own PKI to manage the keys. So closed PKIs
may become much more common; and even where a service using asymmetric cryptography
is offered to the public, there may be no keys published. An example is the
Mondex electronic purse, which uses RSA cryptography and further protects the keys
in tamper-resistant smartcards.
At the time of writing, PKI was one of the most heavily promoted protection technologies.
However, it has a number of intrinsic limitations, many of which have to do
with the first interpretation—namely that the infrastructure is provided as a public
service that anyone can use. I discussed many of the underlying problems in Chapter 6.
Naming is difficult; and a certificate saying, “Ross Anderson has the right to administer
the machine foo.com” means little in a world with dozens of people of that name.
One way to solve the naming problem is for each business to run its own closed PKI,
which might be thought of at the system level as giving customers a unique account
number which isn’t shared with anyone else. This leads to the “one key or many” debate.
Should I expect to have a single signing key to replace each of the metal keys,
credit cards, swipe access cards, and other tokens that I currently carry around? Or
should each of these be replaced by a different signing key? The second option is more
convenient for business as sharing access tokens can lead to huge administrative costs
and liability issues. It also protects the customer: I don’t want to have to use a key with
which I can remortgage my house to make calls from a payphone. It’s just too easy to
dupe me into signing a message by having the equipment display another, innocuous,
one. (I don’t know how to be confident even of a digital signature I make on my own
PC, and I’ve worked in security for over fifteen years. Checking all the software in the
critical path between the display and the signature software is way beyond my patience.)
But the existing PKI machinery was largely developed to provide an electronic
replacement for the telephone book, and tends to assume that everyone will have a
unique name and a unique key. This in turn means an open PKI architecture.
This leads to political issues, such as, which CAs do we trust, and why? Various attempts
have been made by governments to license certification authorities and to impose
a condition that there be “back doors” for law enforcement access. Governments
overwhelmingly favor the one-key-fits-all model of the world. It’s also possible that
open PKIs will be favored by network economics, which I discuss in Section 19.6:
once a single PKI becomes dominant, the pressure on everyone to use it could lead to
its being entrenched as a monopoly. (This is the reason for the high stock market
valuation of VeriSign.)

There are numerous issues of implementation detail. For example, the dominant certificate
format (X.509) does not have the kind of flexible and globally scalable ‘hot
card’ system that the credit card industry has developed over the years. It rather assumes
that anyone relying on a certificate can download a certificate revocation list
from the issuing authority. This is tiresome and inefficient. Better ways of managing
certificate revocation have been proposed; the question is whether they’ll get implemented.
Also, X.509 is designed to certify names, when for most purposes people want
to certify an authorization.
There are many other limitations of certificates:

 Most users disable the security features on their browsers, even if these weren’t disabled by
default when the software shipped. Recall that the third step of the SSL protocol was for the
client browser to check the certificate against its stored root certificates. If the check fails, the
browser may ask the client for permission to proceed; but the way most browsers are configured,
it will just proceed anyway. This lets many e-commerce sites save themselves money by using
expired certificates or even self-signed certificates; most users don’t see the warnings (and
wouldn’t know how to respond if they did). A minimal sketch of this check in code appears
after this list.
 The main vendors’ certificates bind a company name to a DNS name, but are not authorities on
either; and they go out of their way to deny all liability.
 Competition in the certificate markets is blocked by the need to get a new root certificate into
Microsoft Internet Explorer (VeriSign stockholders will consider this to be not a bug but a
feature).
 Even when you do get a valid certificate, it may be for a company and/or DNS name different
from that of the site you thought you were shopping at, because the Web site hosting or the
credit card acquisition was outsourced.
 U.S. export regulations have meant that large numbers of sites use weak encryption. A recent
survey of SSL security showed that of 8,081 different secure Web servers, 32% weren’t, for
various reasons—too-short keys, weak ciphersuites, and expired certificates being the main
causes [567].
There is also a serious problem with consumer protection law. Introducing a presumption
that digital signatures are valid undermines the signer’s rights; in paper systems
the risk of fraud is borne by the party who relies on the signature. In the absence
of such a presumption, it makes no difference to the cardholder’s liability whether the
sites at which he shopped had valid certificates with appropriate names; and in any
case, there’s no convenient way for him to record his transactions to show that he exercised
due diligence. I go into some of these issues at greater length in Chapter 21.
In short, while public key infrastructures can be useful in some applications, they are
unlikely to be the universal solution to security problems as their advocates seem to
believe. They don’t tackle most of the really important issues at all.

Summary

Most of the problems facing online businesses are no different from those facing other organizations,
and the network security risks are not much different from those facing traditional businesses. The real
increased risks to an e-business have to do with ways in which traditional risk management
mechanisms don’t scale properly from a world of local physical transactions to one of worldwide,
dematerialized ones. Credit card transaction repudiation is the main example at present. There are also
significant risks to rapidly growing companies that have hired a lot of new staff but that don’t have the
traditional internal controls in place.

12. Assume that you are the newly hired corporate information systems security officer for a small
Midwest manufacturing firm and you are shocked to learn that there is no virus protection
software. Write a memo to your president, stating the case for implementing such software.

ANSWER

Afedzi Consultancies Limited

Memo
To: Mr John Edward Afedzi
From: Daniel Afedzi
CC: Mr Anthony Nkunyimdzihene Afedzi
Date: 8 December 2010
Subject: Implementation of Virus Protection Software


The study of antivirus solutions has been ongoing for some time.

Stephen Cobb notes that organizations can increase the benefits of antivirus products and
technologies by considering the security management view of antivirus deployment. This includes:

 The security context of implementing antivirus solutions into a network computing configuration
 The security configuration options of antivirus technologies
 The tools or processes related to operating, administering, maintaining or providing control
metrics about an antivirus deployment

"If you do the right things with AV software, you're well protected," Cobb explains. "But an awful
lot of people don't configure or install it properly, nor do they update. We need things that can
protect and defend systems automatically. That isn't being done at the moment." (Saita)

One important aspect of security management is to consider the relative placement and interaction
of antivirus products, adjunct tools, and processes within the context of a defense-in-depth strategy.

Depending on the rigor needed to control virus infections, there might also be a need to restrict the
user community from accepting mail attachments that could be accessed outside the antivirus
deployment design. Figure 1, Node 1.b can be configured to deny POP and IMAP access to external
email servers. Some organizations might have to go so far as to restrict access to web mail services
with firewall controls. Experience shows that blocking all web mail servers is an arduous task. Even
blocking the web mail servers of the major providers like AOL, Hotmail, and Yahoo by DNS name
involves periodic maintenance.
If there is a need to control the administrative overhead associated with propagating virus infections
outside an organization, you can configure a firewall’s access control filters to deny the use of
non-organization SMTP servers from user email clients. Depending on the structure of an
organization’s Intranet, similar IMAP, POP and SMTP access controls can also be placed at the
border router(s) associated with Layer 5.

Antivirus or anti-virus software is used to prevent, detect, and remove computer viruses, worms, and
trojan horses. It may also prevent and remove adware, spyware, and other forms of malware. The focus
here is on the software used for the prevention and removal of such threats, rather than on computer
security implemented by other software methods.

A variety of strategies are typically employed. Signature-based detection involves searching for known
patterns of data within executable code. However, it is possible for a computer to be infected with new
malware for which no signature is yet known. To counter such so-called zero-day threats, heuristics
can be used. One type of heuristic approach, generic signatures, can identify new viruses or variants of
existing viruses by looking for known malicious code, or slight variations of such code, in files. Some
antivirus software can also predict what a file will do by running it in a sandbox and analyzing what it
does to see if it performs any malicious actions.

No matter how useful antivirus software can be, it can sometimes have drawbacks. Antivirus software
can impair a computer's performance. Inexperienced users may also have trouble understanding the
prompts and decisions that antivirus software presents them with. An incorrect decision may lead to a
security breach. If the antivirus software employs heuristic detection, success depends on achieving the
right balance between false positives and false negatives. False positives can be as destructive as false
negatives. Finally, antivirus software generally runs at the highly trusted kernel level of the operating
system, creating a potential avenue of attack.

There are several methods which antivirus software can use to identify malware.

Signature based detection is the most common method. To identify viruses and other malware,
antivirus software compares the contents of a file to a dictionary of virus signatures. Because viruses
can embed themselves in existing files, the entire file is searched, not just as a whole, but also in
pieces.[11]
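
As a small illustration of signature-based detection, the Python sketch below scans a file in
overlapping chunks for known byte patterns, so that a pattern straddling a chunk boundary is still
found (the "searched in pieces" point above). The signatures shown are invented examples; real
products use far larger databases and much more sophisticated matching.

SIGNATURES = {
    "example-worm-A": b"\xde\xad\xbe\xef\x13\x37",      # invented byte pattern
    "example-dropper-B": b"\x13\x37\xfe\xed\xfa\xce",   # invented byte pattern
}

def scan_file(path, chunk_size=1 << 20, overlap=64):
    """Report which known signatures occur anywhere in the file at `path`."""
    hits = set()
    with open(path, "rb") as f:
        tail = b""
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                break
            window = tail + chunk                       # re-check across the chunk boundary
            for name, pattern in SIGNATURES.items():
                if pattern in window:
                    hits.add(name)
            tail = window[-overlap:]
    return sorted(hits)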

Heuristic-based detection, like malicious activity detection, can be used to identify unknown viruses.

File emulation is another heuristic approach. File emulation involves executing a program in a virtual
environment and logging what actions the program performs. Depending on the actions logged, the
antivirus software can determine if the program is malicious or not and then carry out the appropriate
disinfection actions.[12]

Yours Sincerely

Daniel Afedzi


Implementation of Virus Protection Software

P.O.Box FN 110, Asafo Kumasi, Ghana. +233 24 3351203 deafedzi@yahoo.co.uk
