
Risk Analysis for Information Technology

REX KELLY RAINER, JR., CHARLES A. SNYDER,
and HOUSTON H. CARR

REX KELLY RAINER, JR., is Assistant Professor in the Department of Management at Auburn University. His research interests include executive information systems, end-user computing, and current technology underlying information systems. He has published in the Journal of Management Information Systems and MIS Quarterly, among other journals.

CHARLES A. SNYDER is Professor and head of the Department of Management at Auburn University. His research interests include information resource management, end-user computing, and telecommunications management. He has published in the Journal of Management Information Systems, Information and Management, and the Academy of Management Review, as well as other journals.

HOUSTON H. CARR is Associate Professor of Management and Coordinator of MIS Programs at Auburn University. His research interests include end-user computing and telecommunications management. He has published in the Journal of Management Information Systems, MIS Quarterly, and Information and Management, among other journals. He is the author of Managing End User Computing.

ABSTRACT: As Information Technology (IT) has become increasingly important to the competitive position of firms, managers have grown more sensitive to their organization's overall IT risk management. Recent publicity concerning losses incurred by companies because of problems with their sophisticated information systems has focused attention on the importance of these systems to the organization. In an attempt to minimize or avoid such losses, managers are employing various qualitative and quantitative risk analysis methodologies. The risk analysis literature, however, suggests that these managers typically utilize a single methodology, not a combination of methodologies. This paper proposes a risk analysis process that employs a combination of qualitative and quantitative methodologies. This process should provide managers with a better approximation of their organization's overall information technology risk posture. Practicing managers can use this proposed process as a guideline in formulating new risk analysis procedures and/or evaluating their current risk analysis procedures.

KEY WORDS AND PHRASES: computer security, MIS risk analysis, risk management.

An earlier version of this paper was presented at the 14th Symposium on Operations Research, Ulm, Germany, September 1989.

Copyright © M.E. Sharpe, Inc., 1991

INFORMATION TECHNOLOGY (IT) RESOURCES are becoming increasingly essential for the firm's daily operations and strategic objectives. Risk management for IT resources has therefore assumed greater importance. As companies become more dependent upon IT, the consequences of loss of IT assets can be critical, as the following examples demonstrate:
AT&T's nationwide network suffered the most widespread malfunction in its history due to a software failure. [10]
Robert Morris, Jr. was convicted of breaking federal law when he introduced a computer virus into the Internet, affecting more than 6,000 computers. [23]
Transition to a new companywide computer system introduced system errors that caused reduced net income for the fourth quarter at Sun Microsystems Inc. [21]
American Airlines' Sabre reservation system crashed for 13 hours when data from an application program wiped out vital information. [45]

Parker stated the importance of IT to an organization when he noted that the amount of time that an organization can go without computer services, or the "mean time to belly-up," was steadily decreasing [36].
While IT risk management is a relatively new field, it is a natural extension of management's concern for the organization's overall risk posture. The objective of IT risk management is to minimize the total expected cost of loss by selecting and implementing an optimal combination of security measures [14, 20, 22, 34, 35]. In spite of the growing importance of IT risk management, a majority of companies do not have a tested, up-to-date risk management program [19, 27, 28, 30, 36, 50].
The purpose of this paper is to examine risk analysis methodologies. First, the risk analysis process is placed in the context of the overall risk management process. The various risk analysis methodologies are discussed. The article then proposes a risk analysis process employing a combination of methodologies that practicing managers can use in their organizations.

The Risk Management Process


FOR EVERY ORGANIZATION THERE IS SOME COMBINATION of optimum loss prevention and reasonable cost. The purpose of risk management is to find that combination [16, 17]. Simply stated, risk management seeks to avoid or lessen loss. Loss implies injury to, denial of access to, or destruction of, assets. The opportunity for a threat to impact an asset adversely is called a vulnerability. Risk is present when an asset is vulnerable to a threat. Assets associated with IT include data, hardware, software, personnel, and facilities. Facilities consist of computer sites, the communications network plant, and associated subsystem installations [5, 36].
Many authors have discussed the varied threats to IT resources [4, 19, 31, 34, 36, 49]. Table 1 lists these threats and shows that they may originate from physical sources, unauthorized access, and authorized access. Further, threats may originate from internal and external sources. The threats arising from authorized access are the most difficult to find and assess.
The risk management life cycle (see Figure 1) begins with the risk analysis process, which analyzes IT assets, threats to those assets, and vulnerabilities of those assets. Risk analysis is discussed in the next section of this paper.
Following risk analysis, several alternative security measures (or combinations

Table 1. Potential Threats to IT

(1) Physical Threats
• Equipment failure [19, 20]
• Power interruption [19, 20]
• Contaminants in the air [19, 20]
• Weather [19, 20]
• Fire [19, 20]
• Humidity [19, 20]
• Destruction of or damage to facility or equipment by humans [19, 20]
• Death or injury to key personnel [11, 13]
• Personnel turnover [11, 12]

(2) Unauthorized physical or electronic access
• Microcomputer theft [10]
• Theft of data [20, 42]
• Disclosure, modification, and/or destruction of data [15, 52, 53]
• Hackers [10]
• Viruses, bombs, worms [52]
• EDI fraud [2, 6, 31]
• Phantom nodes on network [2, 6, 9, 31, 46]
• Voice mail fraud [2, 6, 31]
• Software piracy [10]

(3) Authorized physical or electronic access
• I/S applications portfolio may be outdated or obsolete [8, 29]
• Increase in end-user computing [1, 19, 26, 41, 44]
  - increased end-user access to corporate data
  - proliferation of end-user-developed applications

thereof) that address a particular risk are presented to management for an implementation decision. The cost of the security measures will be weighed against their effectiveness in reducing risk. Because 100 percent IT security is impossible, managers must evaluate the choice of security measures. In general, any security measure or combination of such measures must not cost more than it would cost to tolerate the problem addressed by the measure(s) [33]. Figure 2 indicates the trade-offs between increased costs and increased security measures. This figure also shows that there is some optimal point between security and cost.

Figure 1. The Risk Management Life Cycle

Figure 2. Trade-offs between Cost of Protection and Cost of Loss

After management has decided on appropriate security measures, the implementation process is initiated and the security measures are installed. Next, a surveillance and audit process is necessary; this should incorporate testing and evaluation of the IT security system. Data are gathered so that the effectiveness of the security measures in reducing risk may be determined. There are two basic strategies for surveillance and audit of security systems [53]: (1) actively monitoring control systems as they are challenged (e.g., control programs that monitor user logon procedures and keep a record of logon failures); (2) challenging the security system under controlled, simulated conditions (e.g., hiring outside personnel to attempt to infiltrate an organization's security mechanisms).
The risk management process is cyclical for two reasons. First, the changing environment will generate new external threats for IT assets. Second, the security

surveillance and audit process will uncover new internal threats to IT assets. Therefore, management must periodically reevaluate the organization's exposure to loss.

The Risk Analysis Process


RISK ANALYSIS IS THE PROCESS MANAGERS USE to examine the threats facing their IT assets and the vulnerabilities of those assets to the risks (see Figure 3). Risk analysis consists of identifying IT assets, identifying threats to those assets, and determining the vulnerability of asset(s) to threat(s).
Risk analysis is the basis on which risk management decisions are made. However, risk analysis is also the point in the risk management process where the most difficulty arises. The fact that risk must often be expressed in perceptions makes any measure of risk highly subjective. The high degree of subjectivity associated with perception of risk means that management is often skeptical of risk analysis results, and is unwilling to make important decisions based on them.
There are many methodologies currently in use that attempt to measure the loss exposure of IT assets. These methodologies may be categorized as quantitative or qualitative.
Regardless of the methodology used, it should have certain desirable properties. First, it should be acceptable to management, the user community, and the information systems department. Second, even though no single risk analysis methodology can consider all risks, it should be as comprehensive as possible, and be able to handle new technologies. Third, it should be logically sound. Fourth, it should be practical, meaning that it should deliver optimum protection for the cost. Fifth, it should be open to continuing evaluation from all parties. Sixth, it should be conducive to learning, accompanied by clear documentation and records of deliberations.

Quantitative Risk Analysis Methodologies


Most quantitative methods are based on regarding loss exposure as a function of the vulnerability of an asset to a threat multiplied by the probability of the threat becoming a reality. These methods are called "expected value analyses," and include annualized loss expectancy (ALE) [34, 37, 38, 40], Courtney [37, 38], the Livermore Risk Analysis Methodology (LRAM) [22], and Stochastic Dominance [40].
It is important that managers involved in the risk analysis process reach consensus regarding the value of IT assets and probability estimates. Delphi techniques may be used in conjunction with any of the four quantitative methodologies to elicit values of IT assets as well as probability estimates of threat occurrence.
The Delphi approach begins with an initial open solicitation step followed by multiple rounds of feedback, and may be used to identify issues and obtain consensus among participants. This technique is effective when participants are not in physical proximity, a situation typical with busy managers.
For example, the risk analysis process using Delphi techniques might follow this scenario. Participants place initial values on IT assets and threat probabilities and the results are averaged. Each participant receives a list showing his or her individual value in relation to the average values. Participants may now change their values or provide a rationale(s) for not doing so. In subsequent rounds, participants receive the new average value, the previous average ranking(s), and their previous individual ranking(s). The process continues until consensus is reached or until consensus cannot be reached because individuals refuse to change their rankings.

Figure 3. The Risk Analysis Process
The Delphi technique is not the only approach that may be used to reach consensus. Managers may meet to brainstorm and negotiate. Group decision support systems would be valuable in these meetings for anonymous input and rapid attainment of consensus. Although only Delphi techniques are noted in the remainder of the paper, meetings (with or without GDSS) may be employed.
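A minimal sketch of one Delphi feedback round as described above: participants' dollar estimates for an asset are averaged, and each participant is shown the group average alongside his or her own previous value. The participant labels and figures are invented for illustration.

```python
def delphi_round(estimates):
    """One Delphi round: average the estimates and build feedback for each participant."""
    average = sum(estimates.values()) / len(estimates)
    feedback = {
        name: {"your_estimate": value, "group_average": round(average, 2)}
        for name, value in estimates.items()
    }
    return average, feedback

# Hypothetical first-round estimates of one IT asset's value (dollars).
round_1 = {"mgr_a": 500_000, "mgr_b": 750_000, "mgr_c": 400_000}
average, feedback = delphi_round(round_1)
print(feedback["mgr_b"])  # {'your_estimate': 750000, 'group_average': 550000.0}
```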

Annualized Loss Expectancy


Annualized Loss Expectancy (ALE) [34, 37, 38, 40] first lists all IT assets. Then, with the help of users and other knowledgeable parties such as MIS/DP personnel and general management, potential threats to those assets are analyzed along with the loss that would result from the realization of those threats. The vulnerability of each asset to a threat is expressed as some probability of occurrence per year. Multiplying the probability of occurrence per year by the expected loss yields the expected loss per year from a particular threat/vulnerability pair. The summation of the expected losses represents the total IT risk exposure. This figure represents what management may reasonably spend for security and preventive measures.

Annualized Loss Expectancy Formula


Total IT risk exposure = Σ_i (P_i × EL_i),

where vulnerability = P_i = probability of occurrence per year, and expected loss = EL_i = expected loss of the ith threat/vulnerability pair.
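A minimal sketch of the ALE computation just defined: each threat/vulnerability pair contributes P_i × EL_i, and the pairs are summed into the total IT risk exposure. The threats and figures are hypothetical.

```python
# Hypothetical threat/vulnerability pairs: (name, annual probability of occurrence, expected loss in dollars).
pairs = [
    ("fire in computer room",       0.01, 2_000_000),
    ("theft of customer data",      0.10,   500_000),
    ("extended power interruption", 0.25,    40_000),
]

def annualized_loss_expectancy(pairs):
    """Total IT risk exposure = sum of P_i * EL_i over all threat/vulnerability pairs."""
    return sum(p * el for _, p, el in pairs)

print(annualized_loss_expectancy(pairs))  # 80000.0
```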

Courtney
Courtney [37, 38] modified the standard ALE approach by adopting scales of magnitude. In Courtney's method, dollar loss is expressed as a power of ten, and the estimated frequency of occurrence is selected from a range of magnitudes. The resulting estimates are used in a formula that yields a dollar estimate of the annualized expected loss or exposure that an organization might reasonably expect.

Courtney Formula

Total IT exposure = 10^(p + v - 3) / 3,

where p = an integer representing the order of magnitude of the estimated frequency of a loss, and v = an integer representing the order of magnitude of the dollar impact of an asset's loss.
Table 2, associated with the Courtney methodology, shows the integer values for v and p, and the corresponding values associated with each integer.
Table 3, also associated with the Courtney methodology, shows the values of p and v, and the expected dollar loss associated with each pair of p and v values. In Table 3, K represents $1,000 and M represents $1,000,000. For example, if the dollar impact of a threat to an asset is $10,000 (v = 4), and the estimated frequency of occurrence is once in three years (p = 3), the expected loss is $3,333.33.
The Courtney method results in a generalized measure of annualized expected loss and is therefore best suited for initial risk analysis. This method saves time, effort, and money, but it is inexact. Therefore, if greater accuracy is required, other quantitative risk analysis methodologies must be used.
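A small sketch of the Courtney formula as reconstructed above, using the worked example from the text (v = 4, p = 3, expected loss of $3,333.33) as a check.

```python
def courtney_exposure(p, v):
    """Courtney annualized exposure: 10 ** (p + v - 3) / 3.

    p -- order-of-magnitude frequency index (see Table 2)
    v -- order-of-magnitude dollar-impact index (see Table 2)
    """
    return 10 ** (p + v - 3) / 3

# Example from the text: $10,000 impact (v = 4) occurring once in three years (p = 3).
print(round(courtney_exposure(p=3, v=4), 2))  # 3333.33
```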

Livermore Risk Analysis Methodology


The Livermore Risk Analysis Methodology (LRAM) [22] operates similarly to ALE. First, the LRAM methodology collects data regarding IT assets and then considers

Table 2. Ranges of Magnitude for Courtney Methodology

v    Dollar impact        p    Frequency of occurrence
0    $0                   0    never
1    $10                  1    once in 300 years
2    $100                 2    once in 30 years
3    $1,000               3    once in 3 years
4    $10,000              4    once in 100 days
5    $100,000             5    once in 10 days
6    $1,000,000           6    once per day
7    $10,000,000          7    10 times per day

Table 3. Estimated Dollar Losses from Courtney Methodology

                      Values of p
             1      2      3      4      5      6      7
        1                                300    3K     30K
        2                         300    3K     30K    300K
Values  3                  300    3K     30K    300K   3M
of v    4           300    3K     30K    300K   3M     30M
        5    300    3K     30K    300K   3M     30M    300M
        6    3K     30K    300K   3M     30M    300M
        7    30K    300K   3M     30M    300M

risk elements. Risk elements are combinations of risk initiators, their propagation paths (i.e., the means by which they can affect IT assets), possible resulting consequences, and applicable controls (see Figure 4). LRAM differs from ALE, however, in that it does not attempt to derive a total risk measure, but focuses instead on the risk produced by individual risk elements involving the occurrence of single-event losses.

LRAM Formula

R(RE_i) = MPL(C_i) × PCF(PMC_i) × EF(T_i),

where R(RE_i) is the annualized measure of risk associated with the ith risk element; MPL(C_i) is the maximum potential loss (MPL) that can be estimated to result from unmitigated consequences (C_i) of a threat to an asset; PCF(PMC_i) is the probability of a control failure (PCF) of a combined set of preventive and mitigative controls (PMC_i); and EF(T_i) is the expected frequency of a threat expressed as an annual probability.
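A minimal sketch of the LRAM single-risk-element calculation: annualized risk is the product of the maximum potential loss, the probability that the combined preventive and mitigative controls fail, and the expected annual frequency of the threat. The figures are hypothetical.

```python
def lram_risk(max_potential_loss, prob_control_failure, expected_frequency):
    """R(RE_i) = MPL(C_i) * PCF(PMC_i) * EF(T_i) for a single risk element."""
    return max_potential_loss * prob_control_failure * expected_frequency

# Hypothetical risk element: $1M maximum loss, controls fail 5% of the time,
# threat expected twice a year on average.
print(lram_risk(1_000_000, 0.05, 2.0))  # 100000.0
```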

Stochastic Dominance
Stochastic Dominance [40] initially assumes that some disaster or risk has already occurred. The effects of the disaster are then analyzed over time by examining all areas of the organization that are susceptible to losses if IT assets are damaged or destroyed. Stochastic dominance describes these loss functions mathematically and uses computer simulation to analyze them.
The stochastic dominance methodology answers the specific question of what type of contingency plan should be used if disaster strikes. Management does not have to estimate the probability that disaster might strike and damage IT assets. Rather, management estimates how long it will take to recover from a disaster, and how much the business will suffer during that time period.
The stochastic dominance methodology defines three sequential stages in recovery from a disaster. Stage I is the time period between the initial loss of processing capability and the actual operation of the contingency system. Stage II begins when the contingency system starts operating, and ends when processing capability is first restored. Stage III is the time period necessary for full recovery of the information system to normal operations.

Daily Loss Formulae for the Three Stages of Stochastic Dominance

Stage I. Daily loss = R[1 - e^(-t)],
where R = predisaster daily revenue, and t = number of days spent in Stage I. Total Stage I loss = summation of daily losses.
Stage II. Daily loss is equal to the loss incurred during the last day of Stage I. Total Stage II loss = summation of daily losses.
Stage III. Daily loss = daily loss in Stage II (DL2), if 0 < t < 0.25T;
daily loss = DL2 × e^(-10(t - 0.25T)/T), if t > 0.25T,
where DL2 = daily loss in Stage II, and T = total time spent in Stage III.
The stochastic dominance approach considers only major disasters, that is, catastrophes that cause the loss of mainframe computing for the organization. This approach does not consider other, smaller threats, such as theft or modification of data (see Table 1).
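The sketch below simulates daily losses over the three recovery stages in the spirit of the formulae above. The exact decay constants of the published model could not be recovered from this copy, so the loss functions, revenue figure, and stage lengths used here are illustrative assumptions rather than the authors' exact equations.

```python
import math

def simulate_recovery_losses(daily_revenue, days_stage1, days_stage2, days_stage3):
    """Illustrative three-stage daily-loss simulation (not the exact published equations).

    Stage I:   losses grow toward full daily revenue until the contingency system runs.
    Stage II:  losses hold at the level reached on the last day of Stage I.
    Stage III: losses hold for the first quarter of the stage, then decay toward zero.
    """
    losses = []

    # Stage I: daily loss approaches R as the outage drags on.
    for t in range(1, days_stage1 + 1):
        losses.append(daily_revenue * (1 - math.exp(-t)))

    stage2_daily = losses[-1] if losses else 0.0

    # Stage II: constant loss while running on the contingency system.
    losses.extend(stage2_daily for _ in range(days_stage2))

    # Stage III: constant for the first 0.25 * T days, then exponential decay (assumed rate).
    for t in range(1, days_stage3 + 1):
        if t <= 0.25 * days_stage3:
            losses.append(stage2_daily)
        else:
            losses.append(stage2_daily * math.exp(-10 * (t - 0.25 * days_stage3) / days_stage3))

    return sum(losses)

print(round(simulate_recovery_losses(daily_revenue=100_000, days_stage1=2,
                                     days_stage2=5, days_stage3=20)))
```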

Advantages of Quantitative Risk Analysis Methodologies

Quantitative risk analysis methodologies have several advantages. Participants must identify specific IT assets that are most susceptible to damage or disaster. Further, participants must identify IT assets that are most critical to the operation of the organization. Generating and testing contingency plans shows management where problems are likely to occur if damage or disaster occurs. Finally, testing the contingency plans will graphically demonstrate the value of IT assets to management.

Disadvantages of Quantitative Risk Analysis Methodologies

Quantitative risk analysis methodologies also have disadvantages. Estimating the probability of damage or loss of each IT asset is imprecise. In addition, the probability distribution of losses is highly skewed. Many circumstances can cause minor problems, but few circumstances can cause major problems. Quantitative risk analysis tends to average these events, thus blurring the differences between the extremes and implying similar solutions. Quantitative risk analysis techniques cannot literally define the contingency plan an organization should use. Finally, quantitative methodologies result in point estimates, which are statistically too high 50 percent of the time, and too low 50 percent of the time.

Qualitative Risk Analysis Methodologies


It may be neither necessary nor desirable to spend the time and effort required to perform a quantitative risk analysis. Management may decide that only a quick evaluation of the firm's IT risk posture is needed. In such cases, qualitative risk analysis approaches may be used.
Qualitative methodologies attempt to express risk in terms of descriptive variables, rather than in precise dollar terms. These approaches are based on the assumption that certain threat or loss data cannot be appropriately expressed in dollars or discrete events, and that precise information may be unobtainable. These methodologies include Scenario Analysis [23, 27, 34, 35], Fuzzy Metrics [23, 34, 35], and questionnaires [27, 34, 35]. Delphi techniques could be used with any of the three methodologies presented here to clarify descriptive or natural language variables.

Scenario Analysis [23, 27, 34, 35]


In this methodology, a group of experts identifies IT assets and potential threats. The group then develops various scenarios describing how those assets might be subject to loss from the threats. These scenarios can be ranked in order of importance and will quickly identify the weakest parts of a security program.
Scenarios are an excellent communication tool, in that they can graphically explain how a loss could result. Management can therefore visualize the risk. Scenarios can be especially useful in identifying vulnerability to intentional threats.

Fuzzy Metrics [23, 34, 35]

This methodology uses natural language values to describe assets, threats, and security mechanisms. Fuzzy metrics is statistically valid, but requires absolutely consistent definitions and understanding of the linguistic variables. There is also much debate about the best way to model the natural language expressions mathematically.
Fuzzy metrics utilizes fuzzy descriptors. For example, assets may have values of large, medium, and small. Also, threats may have probabilities of occurrence of high, medium, and low. The simplest way for all participants in the risk analysis process to understand the descriptors is by labeling them. Participants may define "large" valued assets to be those from $1 million to $2 million, "medium" from $100,000 to $1 million, and "small" less than $100,000. Further, participants may define "high" probabilities of threats to be from 0.7 to 1.0, "medium" from 0.35 to 0.7, and "low" less than 0.35.
The most elementary method for mathematically modeling these descriptors is to use the mean of the range of each descriptor. In our example, the mean of "large" valued assets is $1.5 million, that of "medium" assets is $550,000, and that of "small" assets is $50,000. The mean of "high" probabilities is 0.85, "medium" is 0.525, and "low" is 0.175. Therefore, the expected loss of a large asset under high probability of a threat equals $1.5 million multiplied by 0.85, or $1.275 million.
Another method that can be used to yield expected losses is to calculate the ranges of such losses. For example, a large asset under high probability of a threat will yield expected losses from $700,000 to $2 million:

low estimate = $1 million x 0.7 = $700,000;
high estimate = $2 million x 1.0 = $2 million.

The difficulty of mathematically modeling fuzzy descriptors is illustrated by noting that the midpoint of the range of expected losses is $1.35 million. This figure is higher than that obtained above by multiplying the mean of the large asset range and the mean of the high probability range ($1.275 million). Both figures are "correct."
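A small sketch of the two calculations worked through above: an expected loss computed from the means of the fuzzy ranges, and a range of expected losses computed from the range endpoints. The descriptor ranges are the ones used in the example.

```python
# Fuzzy descriptor ranges from the example: asset values in dollars, threat probabilities.
ASSET_RANGES = {"large": (1_000_000, 2_000_000), "medium": (100_000, 1_000_000), "small": (0, 100_000)}
PROB_RANGES = {"high": (0.7, 1.0), "medium": (0.35, 0.7), "low": (0.0, 0.35)}

def midpoint(rng):
    return (rng[0] + rng[1]) / 2

def expected_loss_from_means(asset, prob):
    """Mean of the asset range times mean of the probability range."""
    return midpoint(ASSET_RANGES[asset]) * midpoint(PROB_RANGES[prob])

def expected_loss_range(asset, prob):
    """Low and high expected-loss estimates from the range endpoints."""
    (a_lo, a_hi), (p_lo, p_hi) = ASSET_RANGES[asset], PROB_RANGES[prob]
    return a_lo * p_lo, a_hi * p_hi

print(expected_loss_from_means("large", "high"))  # 1275000.0
print(expected_loss_range("large", "high"))       # (700000.0, 2000000.0)
```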

Questionnaires [27, 34, 35]


Questionnaires regarding risk analysis are available from computer vendors, security companies, and publications on computer security. Questions are usually segregated into functional areas such as input, communications, processing, and output. They may also be listed by asset, such as hardware, software, personnel, etc.
Questionnaires do provide an advantage. They typically can identify glaring weaknesses often present in a firm where security has recently become a concern or where it has been neglected for a period of time. However, questionnaires do have disadvantages. They are generic, while companies have unique IT assets. They do not consider the probabilities associated with potential losses, nor do they consider the magnitude of those potential losses.

Advantages of Qualitative Risk Analysis Methodologies


These methodologies save time, effort, and expense over quantitative methodologies because IT assets need not have exact dollar values, nor do threats need to have exact probabilities. Further, qualitative methodologies are valuable in identifying gross weaknesses in a risk management portfolio.

Disadvantages of Qualitative Risk Analysis Methodologies

Qualitative methodologies are inexact. The variables used (i.e., low, medium, and high) must be labeled and understood by all parties involved in the risk analysis, including management. Management may consider qualitative methodologies suspect because they do not provide "exact" dollar values and probabilities.

A Proposed Risk Analysis Process


THERE ARE MANY RISK ANALYSIS METHODOLOGIES available to an organization. These methodologies may be applied singly or in combination to help determine the risk posture of the firm. However, the advantages and disadvantages of each methodology suggest that each one may be best applied to certain types of threats or certain areas of the organization. Therefore, a combination of methodologies provides the optimum process for risk analysis in the firm.

Step 1: Use the Value Chain to Enumerate the Organization's Activities


Before beginning the risk analysis process, the firm must clearly understand all its various activities and the IT component of each. The value chain is a systematic way of examining all the activities a firm performs, and their interaction [39]. This concept divides the firm's activities into value activities, those essential, distinct, and interdependent actions that bring a product or service to a customer. These primary activities consist of inbound logistics, operations, outbound logistics, marketing and sales, and service.
Inbound logistics consists of those activities associated with receiving, storing, and disseminating inputs to the product, such as material handling, inventory control, and returns to suppliers. Operations consists of those activities associated with transforming inputs into final product form, for example, machining, packaging, and assembly. Outbound logistics consists of those activities associated with collecting, storing, and physically distributing the product to buyers, such as warehousing, material handling, and order processing. Marketing and sales consists of those activities associated with providing a means by which buyers can purchase the product and inducing them to do so, including advertising, promotion, sales force, pricing, etc. Service consists of those activities associated with providing service to enhance or maintain the value of the product, such as installation, repair, training, and parts supply.

Step 2: Use the Value Chain to Enumerate the IT Component of Each Value Activity
The concept of the value chain has been employed to help managers understand how information technology can be used to support their business activities [36]. Each value activity has a physical component and an information-processing component. The physical component consists of the physical tasks necessary to perform the activity. The information-processing component includes the steps required to capture, manipulate, and channel the data to support the activity. Information systems technology is particularly extensive in the value chain, because every value activity creates and uses information [39].
Examples of IT components of inbound logistics' value activities include inventory, purchasing, and order-processing systems. Examples of IT components of operations' value activities include computer-assisted design, computer-aided manufacturing, and robotics. Examples of IT components of outbound logistics' value activities include inventory, materials handling, and order-processing systems. Examples of IT components of marketing and sales' value activities include multimedia and telecommunications. Examples of IT components of service value activities include telecommunications, desktop publishing, and scheduling the service force.
To list and describe the IT assets that support each value activity, participants (users, MIS/DP personnel, general management) can use a qualitative methodology or combination of qualitative methodologies such as scenarios or questionnaires. Further, they can employ Delphi techniques to refine the completeness of the IT asset list and the characterization of IT assets.
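A minimal sketch of the kind of catalog Step 2 produces: each value activity mapped to the IT assets that support it. The activity names follow the paper's examples; the specific entries are placeholders that a participant group would supply.

```python
# Step 2 output sketch: value activities mapped to supporting IT assets (placeholder entries).
it_assets_by_activity = {
    "inbound logistics":   ["inventory system", "purchasing system", "order-processing system"],
    "operations":          ["computer-assisted design", "computer-aided manufacturing", "robotics"],
    "outbound logistics":  ["inventory system", "materials-handling system", "order-processing system"],
    "marketing and sales": ["multimedia", "telecommunications"],
    "service":             ["telecommunications", "desktop publishing", "service-force scheduling"],
}

for activity, assets in it_assets_by_activity.items():
    print(f"{activity}: {', '.join(assets)}")
```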

Step 3: Use the Value Chain to Enumerate the Linkages between Value Activities and to Determine the IT Assets that Support Each Linkage
Linkages are relationships between the way one value activity is performed and the cost or performance of another. For example, in a fast-food store, the timing of promotional campaigns influences capacity utilization [39].
IT resources have an important role in linkages among value activities of all types, because the coordination and optimization of linkages requires information flow among activities. A good example of IT support of linkages is American Airlines' Sabre reservations system. American leases terminals to travel agents, which allows automated reservations and ticketing. The system is also used inside American for ticketing and issuing boarding passes, as well as in route scheduling. American also sells listings on the system to other airlines.
The important point of linkages is that the organization may notice the importance of IT assets in areas that might, in isolation, be considered noncritical. The importance of IT in some of the activities, such as operations, may be relatively obvious. At the same time, essential IT components in other activities may not readily be associated with the overall performance of the business. The performance of any activity may affect the performance of any other. Thus, the value chain can clearly indicate to managers the IT component of each activity and, through linkage examination, reveal its importance to the total business system.
To list and describe the IT assets that support each linkage, participants (users, MIS/DP personnel, general management) should use a qualitative methodology or combination of qualitative methodologies such as scenarios or questionnaires. Further, they should employ Delphi techniques to refine the completeness of the IT asset list and the characterization of IT assets.

Step 4: Use the Value Chain to Examine the Organizational Value System and to Determine IT Assets that Support Interorganizational Linkages
Linkages exist between a firm's value chain and the value chains of suppliers and customers. For example, a firm's inbound logistics' activities interact with a supplier's order entry system, a supplier's engineering staff works with a firm's technology development and manufacturing activities, or frequent supplier shipments can reduce a firm's inventory needs. Similar examples exist for linkages between the firm's value chain and customer value chains. The best example of IT resources that support interorganizational linkages is electronic data interchange (EDI).
To list and describe the IT assets that support the value system's interorganizational linkages, participants (users, MIS/DP personnel, general management) should use a qualitative methodology or combination of qualitative methodologies such as scenarios and/or questionnaires. Further, they can employ Delphi techniques to refine the completeness of the IT asset list and the characterization of IT assets.

Step 5: Determine the Value of the IT Assets Listed and Described in Steps 1 through 4
The organization has employed the value chain in an effort to catalog and describe all IT assets. The participants in the risk analysis process must now assign a value to each IT asset. There are two methods for assigning such values. The first is for the participants to assign an exact dollar value to each IT asset. Delphi techniques may be employed to refine assigned dollar values by helping the participants reach consensus. The second method is to employ fuzzy metrics and assign fuzzy descriptors such as high, medium, and low as values of IT assets. Again, Delphi techniques may be used with fuzzy metrics.

Step 6: Enumerate the Possible Threats to IT Assets


To list and describe the threats to IT assets, participants (users, MIS/DP personnel, general management) should use a qualitative methodology or combination of qualitative methodologies such as scenarios and/or questionnaires. Further, they may employ Delphi techniques to refine the completeness of the threat list and the characterization of the threats.

Step 7: Determine the Vulnerability of IT Assets to Potential Threats


At this stage of the risk analysis process, the organization has a list of all IT assets, values assigned to each IT asset, and a list of potential threats to IT assets. To determine the vulnerability of IT assets to threats, participants must first note that any asset may be vulnerable to more than one threat, and that one threat may impact more than one asset. Therefore, the participants should consider asset/threat pairs in the following manner. The assets and threats should be listed in side-by-side columns, and arrows drawn to represent threats that may impact an asset, as shown in Figure 5.
The vulnerability of the asset to the threat must be assigned for each asset/threat pair. The participants may define these vulnerabilities precisely by designating a number (e.g., the probability of threat 1 impacting asset 1 is 0.33), or by using fuzzy descriptors (e.g., the probability of threat 1 impacting asset 1 is medium). The participants may use Delphi techniques to refine probability numbers or fuzzy descriptors.
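A minimal sketch of the asset/threat pairing in Step 7, mirroring Figure 5: each pair carries either a numeric probability or a fuzzy descriptor. The assets, threats, and values are hypothetical.

```python
# Step 7 sketch: asset/threat pairs with either numeric or fuzzy vulnerability estimates.
asset_threat_pairs = [
    {"asset": "order-processing system", "threat": "power interruption", "vulnerability": 0.33},
    {"asset": "order-processing system", "threat": "virus infection",    "vulnerability": "medium"},
    {"asset": "customer database",       "threat": "theft of data",      "vulnerability": "high"},
]

for pair in asset_threat_pairs:
    print(f"{pair['threat']} -> {pair['asset']}: vulnerability = {pair['vulnerability']}")
```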

Step 8: Determine the IT Risk Exposure for the Organization


Participants now have the information they need to determine the overall IT risk exposure for their organization. Participants have two possible paths, depending on the risk analysis process to this point. If the participants have assigned exact numerical values to IT assets, and the vulnerabilities of those assets to threats, they may use the quantitative methodologies of ALE, Courtney, or LRAM. The ALE, Courtney, and LRAM methods will result in precise dollar estimates for the firm's risk exposure. The Courtney method, however, is not as exact as the other two because it uses ranges of values in its calculations. An important point is that if the participants are performing risk analysis for a complete disaster, they should employ stochastic dominance and not one of the other quantitative methods.
If the participants have assigned fuzzy descriptors to IT assets and the vulnerabilities of those assets to threats, they must use fuzzy metrics to describe the firm's risk exposure. Fuzzy metrics will result in descriptions (e.g., high, medium, and low) that will categorize the firm's risk exposure.
Table 4 summarizes the risk analysis process and the methodologies applicable in each step.
As yet, there is no empirical evidence to support the proposed risk analysis process. Substantiating the effectiveness of this process will require a field study of one or more organizations. The field study would examine the risk analysis process each firm uses. In addition, the study would ask each company how clearly it feels that its risk analysis process portrays the firm's overall IT risk posture.

Figure 4. LRAM Risk Elements

Figure 5. Asset-Threat Pairs

Conclusion

ONE HUNDRED PERCENT SECURITY IS IMPOSSIBLE. It simply costs too much and is too inconvenient. Zalubski [52] notes that at the root of the problem for the risk management process is the overall lack of awareness, attention, concern, and commitment from management. Further, Zimmerman [53] notes that, as a result of buying security, the firm will not be any better; it will merely be less likely to be any worse. Security personnel want management to invest corporate resources in system security measures that will be unpopular with staff (because they bring new rules and restrictions), and that will show no apparent return on investment. The most common situations that management faces are those in which threats are believed possible, but no empirical evidence is available. The best possible scenario for security personnel is a disaster that happens to the organization next door, because such an occurrence will graphically provide empirical evidence to management.
The proposed risk analysis process using a combination of methodologies seems to be more effective than the use of any single methodology. A single risk analysis methodology is not flexible enough to properly consider the wide variety of IT assets, threats, and vulnerabilities, and still give management a reasonable estimate of the organization's overall IT risk exposure. In addition, the proposed risk analysis process includes management in every step, thereby ensuring management participation. By using a combination of risk analysis methodologies, the firm can overcome these problems.
In particular, it is important to note that the proposed risk analysis process does not use quantitative methodologies until the last step (Step 8). The reason for this late appearance is that a large amount of information must be determined before quantitative methodologies can be used with even rudimentary accuracy. Qualitative "brainstorming" methodologies are used to obtain this often imprecise information. Too many organizations, if they have a formal risk analysis process at all, simply use a single qualitative or quantitative methodology for the entire process. Such an approach is too simplistic for a process that is based solely on informed estimates.
The risk analysis process proposed here can help with the difficulty of convincing management to invest in security measures. Properly used, this process will provide management with an idea of the importance and value of their IT assets, the threats to those assets, and the probability that those threats will succeed in harming the assets. This risk analysis process will provide management with a basis for logical and prudent investment in a risk management program.
Table 4. The Risk Analysis Process and Applicable Methodologies

References
1. Alavi, M., and Weiss, I.R. Managing the risks associated with end-user computing. Journal of Management Information Systems 2, 3 (Winter 1985-86), 5-20.
2. Bacon, M. Assessing public network security. Telecommunications 23, 12 (December 1989), 19-20.
3. Banking World. The management of risk. October 1988, 34-36.
5. Beheshti, H.M., and Maison, M.R. Computer based management information ... (1984), 66-67.
6. Briere, D., and Walton, L.T. The best way to prevent a disaster: plan for one. Network World 6, 47 (November 27, 1989), 1, 31, 34.
7. Business Week. How personal computers can trip up executives. September 24, 1984, 94-102.
8. Cash, J.I.; McFarlan, F.W.; and McKenney, J.L. Corporate Information Systems Management, 2d ed. Homewood, IL: Richard D. Irwin, 1988.
9. Cohen, F. Design and protection of an information network under a partial ordering: a case study. Computers and Security 6 (1987), 332-338.
10. Communications Week. Hacker's doings are costly. January 29, 1990, 14.
11. Crockford, N. An Introduction to Risk Management. Cambridge, MA: Woodhead-Faulkner, 1980.
12. Crouch, E.A.C., and Wilson, R. Risk/Benefit Analysis. Cambridge, MA: Ballinger, 1982.
13. Doherty, N.A. Corporate Risk Management. New York: McGraw-Hill, 1985.
14. Emmett, A. Managing risk. Network World, November 21, 1988, 37-38, 47.
15. Even-Tsur, D., and Shulman, D. Designing built-in system controls. Journal of Information Systems Management (Winter 1989), 28-36.
16. Farthing, D. How risk management is driven by insurance. National Underwriter, November 2, 1987, 9, 14-15.
17. Farthing, D. Is risk management essential to corporate survival? Risk Management (1988), 34-37.
18. Gerard, T. Evaluating a disaster recovery plan. Datacenter Manager 2, 1 (January/February 1990), 36-41.
19. Gonnella, G. Making expensive decisions. Information Center 4, 10 (October 1985), 32-35.
20. Gottfried, I.S. When disaster strikes. Journal of Information Systems Management (Spring 1989), 86-89.
21. Greenstein, I. MIS snafu lost orders, could mean Sun loss. Management Information Systems Week 10, 23 (June 5, 1989), 4.
22. Guarro, S.B. Principles and procedures of the LRAM approach to information systems risk analysis and management. Computers and Security 6 (1987), 493-504.
23. Hammond, R. Improving productivity through risk management. In Umbaugh, R.E., ed., Handbook of MIS Management, 2d ed. Boston: Auerbach, 1988, 655-665.
24. Housel, T.J.; El Sawy, O.A.; and Donovan, P.F. Information systems for crisis management. MIS Quarterly 10, 4 (December 1986), 389-402.
25. Keller, J.J. Software bug closes AT&T's network, cutting phone service for millions in U.S. Wall Street Journal, January 16, 1990, A2.
26. King, J.L. Coping with the perils of expanding PC use. Journal of Information Systems Management (Fall 1986), 66-70.
27. Lobel, J. Risk analysis in the 1980's. American Federation of Information Processing Societies Proceedings (National Computer Conference) 49 (May 19-22, 1980), 831-836.
28. Mansur, B.J. The night the lights went out in Georgia. Telecommunications 23, 12 (December 1989), 67-68.
29. McFarlan, F.W., and McKenney, J. IS technology organization issues. In McFarlan, F.W., and McKenney, J., Corporate Information Systems Management. Homewood, IL: Irwin, 1983, 27-48.
30. Meall, L. Survival of the finest. Accountancy (March 1989), 140-141.
31. Morrisey, J. New security risks seen for '90s. PC Week, December 11, 1989, 55.
32. Murray, W. How much is enough? Expert says security efforts should pay, not cost. Computerworld, April 6, 1988, 30.
33. National Bureau of Standards. Guidelines for ADP risk analysis. Washington, DC: U.S. Department of Commerce, FIPS Publication 87, March 1981.
34. Newton, J.D. Developing and implementing an EDP disaster contingency plan for a small national bank. Unpublished master's thesis, Auburn University, 1987.
35. Newton, J.D., and Snyder, C.A. Risk analysis for computerized information systems. Proceedings, Southern Management Association (1987), 306-308.
36. Parker, D.B. Computer Security Management. Reston, VA: Reston Publishing, 1981.
37. Perschke, G.A.; Karabin, S.J.; and Brock, T.L. Four steps to information security. Journal of Accountancy (April 1986), 104-111.
38. Pickard, R. Computer crime. Information Center 5, 9 (September 1989), 18-27.
39. Porter, M.E., and Millar, V.E. How information gives you competitive advantage. Harvard Business Review (July-August 1985), 149-160.
40. Post, G.V., and Diltz, J.D. A stochastic dominance approach to risk analysis of computer systems. MIS Quarterly 10, 4 (December 1986), 363-375.
41. Pyburn, P.J. Managing personal computer use: the role of corporate management information systems. Journal of Management Information Systems 3, 2 (Winter 1986-87), 49-70.
42. Radding, A. Plans for a safer system. Computer Decisions (April 6, 1987), 36-38.
43. Riemer, M.S. Fighting computer viruses through systems management. Information Center 3, 9 (September 1989), 11-17.
44. Rivard, S., and Huff, S.L. An empirical study of users as application developers. Information and Management 8, 2 (January 1985), 89-102.
45. Scheier, R.L. American Airlines still shoring up SABRE. PC Week, June 26, 1989.
46. Semilof, M. Network disaster planning. CommunicationsWeek, February 12, 1990, 33-35.
47. Sobol, M. DP alliance bolsters security. Computerworld, December 16, 1985, 59-60.
48. Stern, E. The lessons of San Francisco. Datacenter Manager 2, 1 (January/February 1990), 30-35.
49. Tate, P. Risk! The third factor. Datamation, April 15, 1988, 58-64.
50. Vitale, M.R. The growing risks of information systems success. MIS Quarterly 10, 4 (December 1986), 327-334.
51. Wood, C.C. The human immune system as an information systems security model. Computers and Security 6 (1987), 511-516.
52. Zalubski, J. Threat of viruses must be taken seriously. Network World, July 31, 1989.
53. Zimmerman, J.S. Is your computer insecure? Datamation, May 15, 1985, 119-128.