
A Comparison of Commercial and Military Computer Security Policies

David D. Clark*        David R. Wilson**

* Senior Research Scientist, MIT Laboratory for Computer Science,
545 Technology Square, Cambridge, MA 02139

** Director, Information Security Services, Ernst & Whinney,
2000 National City Center, Cleveland, OH 44114

ABSTRACT

Most discussions of computer security focus on control of disclosure. In particular, the U.S. Department of Defense has developed a set of criteria for computer mechanisms to provide control of classified information. However, for that core of data processing concerned with business operation and control of assets, the primary security concern is data integrity. This paper presents a policy for data integrity based on commercial data processing practices, and compares the mechanisms needed for this policy with the mechanisms needed to enforce the lattice model for information security. We argue that a lattice model is not sufficient to characterize integrity policies, and that distinct mechanisms are needed to control disclosure and to provide integrity.

INTRODUCTION

Any discussion of mechanisms to enforce computer security must involve a particular security policy that specifies the security goals the system must meet and the threats it must resist. For example, the high-level security goals most often specified are that the system should prevent unauthorized disclosure or theft of information, should prevent unauthorized modification of information, and should prevent denial of service. Traditional threats that must be countered are system penetration by unauthorized persons, unauthorized actions by authorized persons, and abuse of special privileges by systems programmers and facility operators. These threats may be intentional or accidental.

Imprecise or conflicting assumptions about desired policies often confuse discussions of computer security mechanisms. In particular, in comparing commercial and military systems, a misunderstanding about the underlying policies the two are trying to enforce often leads to difficulty in understanding the motivation for certain mechanisms that have been developed and espoused by one group or the other. This paper discusses the military security policy, presents a security policy valid in many commercial situations, and then compares the two policies to reveal important differences between them.

The military security policy we are referring to is a set of policies that regulate the control of classified information within the government. This well-understood, high-level information security policy is that all classified information shall be protected from unauthorized disclosure or declassification. Mechanisms used to enforce this policy include the mandatory labeling of all documents with their classification level, and the assigning of user access categories based on the investigation (or "clearing") of all persons permitted to use this information. During the last 15 to 20 years, considerable effort has gone into determining which mechanisms should be used to enforce this policy within a computer. Mechanisms such as identification and authorization of users, generation of audit information, and association of access control labels with all information objects are well understood. This policy is defined in the Department of Defense Trusted Computer System Evaluation Criteria [DOD], often called the "Orange Book" from the color of its cover. It articulates a standard for maintaining confidentiality of information and is, for the purposes of our paper, the "military" information security policy. The term "military" is perhaps not the most descriptive characterization of this policy; it is relevant to any situation in which access rules for sensitive material must be enforced. We use the term "military" as a concise tag which at least captures the origin of the policy.

CH2416-6/87/0000/0184$01.00 © 1987 IEEE
In the commercial environment, preventing disclosure is often important, but preventing unauthorized data modification is usually paramount. In particular, for that core of commercial data processing that relates to management and accounting for assets, preventing fraud and error is the primary goal. This goal is addressed by enforcing the integrity rather than the privacy of the information. For this reason, the policy we will concern ourselves with is one that addresses integrity rather than disclosure. We will call this a commercial policy, in contrast to the military information security policy. We are not suggesting that integrity plays no role in military concerns. However, to the extent that the Orange Book is the articulation of the military information security policy, there is a clear difference of emphasis in the military and commercial worlds.

While the accounting principles that are the basis of fraud and error control are well known, there is as yet no Orange Book for the commercial sector that articulates how these policies are to be implemented in the context of a computer system. This makes it difficult to answer the question of whether the mechanisms designed to enforce military information security policies also apply to enforcing commercial integrity policies. It would be very nice if the same mechanisms could meet both goals, thus enabling the commercial and military worlds to share the development costs of the necessary mechanisms. However, we will argue that two distinct classes of mechanism will be required, because some of the mechanisms needed to enforce disclosure controls and integrity controls are very different.

Therefore, the goal of this paper is to defend two conclusions. First, there is a distinct set of security policies, related to integrity rather than disclosure, which are often of highest priority in the commercial data processing environment. Second, some separate mechanisms are required for enforcement of these policies, disjoint from those of the Orange Book.

MILITARY SECURITY POLICY

The policies associated with the management of classified information, and the mechanisms used to enforce these policies, are carefully defined and well understood within the military. However, these mechanisms are not necessarily well understood in the commercial world, which normally does not have such a complex requirement for control of unauthorized disclosure. Because the military security model provides a good starting point, we begin with a brief summary of computer security in the context of classified information control.

The top-level goal for the control of classified information is very simple: classified information must not be disclosed to unauthorized individuals. At first glance, it appears the correct mechanism to enforce this policy is a control over which individuals can read which data items. This mechanism, while certainly needed, is much too simplistic to solve the entire problem of unauthorized information release. In particular, enforcing this policy requires a mechanism to control writing of data as well as reading it. Because the control of writing data is superficially associated with ensuring integrity rather than preventing theft, and the classification policy concerns the control of theft, confusion has arisen about the fact that the military mechanism includes strong controls over who can write which data.

Informally, the line of reasoning that leads to this mechanism is as follows. To enforce this policy, the system must protect itself from the authorized user as well as the unauthorized user. There are a number of ways for the authorized user to declassify information. He can do so as a result of a mistake, as a deliberate illegal action, or because he invokes a program on his behalf that, without his knowledge, declassifies data as a malicious side effect of its execution.

This class of program, sometimes called a "Trojan Horse" program, has received much attention within the military. To understand how to control this class of problem in the computer, consider how a document can be declassified in a noncomputerized context. The simple technique involves copying the document, removing the classification labels from the document with a pair of scissors, and then making another copy that does not have the classification labels. This second copy, which physically appears to be unclassified, can then be carried past security guards who are responsible for controlling the theft of classified documents. Declassification occurs by copying.

To prevent this in a computer system, it is necessary to control the ability of an authorized user to copy a data item. In particular, once a computation has read a data item of a certain security level, the system must ensure that any data items written by that computation have a security label at least as restrictive as the label of the item previously read.
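The write rule just described lends itself to a small illustration. The following sketch (the lattice of levels, the class name, and the high-water-mark bookkeeping are all hypothetical illustrations, not a mechanism taken from the Orange Book itself) tracks the most restrictive level a computation has read and refuses any write to a less restrictive label:

```python
# Sketch of a mandatory label check: a computation that has read data at
# some level may write only items labeled at least as restrictively.
# The four-level lattice below is a hypothetical illustration.

LEVELS = {"UNCLASSIFIED": 0, "CONFIDENTIAL": 1, "SECRET": 2, "TOP_SECRET": 3}

class Computation:
    def __init__(self):
        self.high_water = "UNCLASSIFIED"   # most restrictive level read so far

    def read(self, item_label: str) -> None:
        # Remember the most restrictive label this computation has seen.
        if LEVELS[item_label] > LEVELS[self.high_water]:
            self.high_water = item_label

    def write(self, item_label: str) -> None:
        # Mandatory check: the target must be at least as restrictive as
        # anything previously read, or declassification-by-copying occurs.
        if LEVELS[item_label] < LEVELS[self.high_water]:
            raise PermissionError("write would declassify data")

c = Computation()
c.read("SECRET")
c.write("TOP_SECRET")       # permitted: at least as restrictive
# c.write("UNCLASSIFIED")   # would raise PermissionError
```

Note that the check is applied by the system on every write, not left to the program: a Trojan Horse running on the user's behalf is constrained in exactly the same way the user is.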

It is this mandatory check of the security level of all data items whenever they are written that enforces the high-level security policy.

An important component of this mechanism is that checking the security level on all reads and writes is mandatory and enforced by the system, as opposed to being at the discretion of the individual user or application. In a typical time-sharing system not intended for multilevel secure operation, the individual responsible for a piece of data determines who may read or write that data. Such discretionary controls are not sufficient to enforce the military security rules because, as suggested above, the authorized user (or programs running on his behalf) cannot be trusted to enforce the rules properly. The mandatory controls of the system constrain the individual user so that any action he takes is guaranteed to conform to the security policy. Most systems intended for military security provide traditional discretionary control in addition to the mandatory classification checking to support what is informally called "need to know." By this mechanism, it is possible for the user to further restrict the accessibility of his data, but it is not possible to increase the scope in a manner inconsistent with the classification levels.

In 1983, the U.S. Department of Defense produced the Orange Book, which attempts to organize and document mechanisms that should be found in a computer system designed to enforce military security policies. This document stresses the importance of mandatory controls if effective enforcement of a policy is to be achieved within a system. To enforce the particular policy of the Orange Book, the mandatory controls relate to data labels and user access categories. Systems in Division C have no requirement for mandatory controls, while systems in Divisions A and B specifically have these mandatory maintenance and checking controls for labels and user rights. (Systems in Division A are distinguished from those in B, not by additional function, but by having been designed to permit formal verification of the security principles of the system.)

Several security systems used in the commercial environment, specifically RACF, ACF/2, and CA-TopSecret, were recently evaluated using the Orange Book criteria. The C ratings that these security packages received would indicate that they did not meet the mandatory requirements of the security model as described in the Orange Book. Yet, these packages are used commonly in industry and viewed as being rather effective in their meeting of industry requirements. This would suggest that industry views security requirements somewhat differently than the security policy described in the Orange Book. The next section of the paper begins a discussion of this industry view.

COMMERCIAL SECURITY POLICY FOR INTEGRITY

Clearly, control of confidential information is important in both the commercial and military environments. However, a major goal of commercial data processing, often the most important goal, is to ensure integrity of data to prevent fraud and errors. No user of the system, even if authorized, may be permitted to modify data items in such a way that assets or accounting records of the company are lost or corrupted. Some mechanisms in the system, such as user authentication, are an integral part of enforcing both the commercial and military policies. However, other mechanisms are very different.

The high-level mechanisms used to enforce commercial security policies related to data integrity were derived long before computer systems came into existence. Essentially, there are two mechanisms at the heart of fraud and error control: the well-formed transaction, and separation of duty among employees.

The concept of the well-formed transaction is that a user should not manipulate data arbitrarily, but only in constrained ways that preserve or ensure the integrity of the data. A very common mechanism in well-formed transactions is to record all data modifications in a log so that actions can be audited later. (Before the computer, bookkeepers were instructed to write in ink, and to make correcting entries rather than erase in case of error. In this way the books themselves, being write-only, became the log, and any evidence of erasure was indication of fraud.)

Perhaps the most formally structured example of well-formed transactions occurs in accounting systems, which model their transactions on the principles of double entry bookkeeping. Double entry bookkeeping ensures the internal consistency of the system's data items by requiring that any modification of the books comprises two parts, which account for or balance each other. For example, if a check is to be written (which implies an entry in the cash account) there must be a matching entry on the accounts payable account.
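The double entry discipline can be sketched in a few lines. The account names and the `post`/`in_balance` helpers below are illustrative inventions, not part of the paper's model; the point is only that a well-formed transaction records both balancing halves and appends to a write-only journal:

```python
# Sketch of a well-formed transaction in the double entry style: every
# modification of the books has two parts that balance each other, and
# every change is appended to a write-only journal.

books = {"cash": 0, "accounts_payable": 0}
journal = []  # the log: append-only, never erased, like books kept in ink

def post(debit: str, credit: str, amount: int) -> None:
    """A well-formed transaction: both halves are recorded, or neither."""
    if amount <= 0:
        raise ValueError("amount must be positive")
    books[debit] += amount
    books[credit] -= amount
    journal.append((debit, credit, amount))  # the audit trail

def in_balance() -> bool:
    """The independent test: entries must cancel when the books balance."""
    return sum(books.values()) == 0

post("cash", "accounts_payable", 500)  # e.g., recording a check
assert in_balance()
books["cash"] += 100       # a one-sided, ill-formed modification...
assert not in_balance()    # ...is detected when the books are balanced
```

The direct assignment in the last lines stands in for exactly the kind of arbitrary manipulation the commercial controls must prevent; it is caught only after the fact, by the balancing test.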

If an entry is not performed properly, so that the parts do not match, this can be detected by an independent test (balancing the books). It is thus possible to detect such frauds as the simple issuing of unauthorized checks.

The second mechanism to control fraud and error, separation of duty, attempts to ensure the external consistency of the data objects: the correspondence between the data object and the real world object it represents. Because computers do not normally have direct sensors to monitor the real world, computers cannot verify external consistency directly. Rather, the correspondence is ensured indirectly by separating all operations into several subparts and requiring that each subpart be executed by a different person. For example, the process of purchasing some item and paying for it might involve subparts: authorizing the purchase order, recording the arrival of the item, recording the arrival of the invoice, and authorizing payment. The last subpart, or step, should not be executed unless the previous three are properly done. If each step is performed by a different person, the external and internal representation should correspond unless some of these people conspire. If one person can execute all of these steps, then a simple form of fraud is possible, in which an order is placed and payment made to a fictitious company without any actual delivery of items. In this case, the books appear to balance; the error is in the correspondence between real and recorded inventory.

Perhaps the most basic separation of duty rule is that any person permitted to create or certify a well-formed transaction may not be permitted to execute it (at least against production data). This rule ensures that at least two people are required to cause a change in the set of well-formed transactions.

The separation of duty method is effective except in the case of collusion among employees. For this reason, a standard auditing disclaimer is that the system is certified correct under the assumption that there has been no collusion. While this might seem a risky assumption, the method has proved very effective in practical control of fraud. Separation of duty can be made very powerful by thoughtful application of the technique, such as random selection of the sets of people to perform some operation, so that any proposed collusion is only safe by chance. Separation of duty is thus a fundamental principle of commercial data integrity control.

Therefore, for a computer system to be used for commercial data processing, specific mechanisms are needed to enforce these two rules. To ensure that data items are manipulated only by means of well-formed transactions, it is first necessary to ensure that a data item can be manipulated only by a specific set of programs. These programs must be inspected for proper construction, and controls must be provided on the ability to install and modify these programs, so that their continued validity is ensured. To ensure separation of duties, each user must be permitted to use only certain sets of programs. The assignment of people to programs must again be inspected to ensure that the desired controls are actually met.

These integrity mechanisms differ in a number of important ways from the mandatory controls for military security as described in the Orange Book. First, with these integrity controls, a data item is not necessarily associated with a particular security level, but rather with a set of programs permitted to manipulate it. Second, a user is not given authority to read or write certain data items, but to execute certain programs on certain data items. The distinction between these two mechanisms is fundamental. With the Orange Book controls, a user is constrained by what data items he can read and write. If he is authorized to write a particular data item he may do so in any way he chooses. With commercial integrity controls, the user is constrained by what programs he can execute, and the manner in which he can read or write data items is implicit in the actions of those programs. Because of separation of duties, it will almost always be the case that a user, even though he is authorized to write a data item, can do so only by using some of the transactions defined for that data item. Other users, with different duties, will have access to different sets of transactions related to that data.

MANDATORY COMMERCIAL CONTROLS

The concept of mandatory control is central to the mechanisms for military security, but the term is not usually applied to commercial systems. That is, commercial systems have not reflected the idea that certain functions, central to the enforcement of policy, must be designed as a fundamental characteristic of the system. However, it is important to understand that the mechanisms described in the previous section, in some respects, are mandatory controls. They are mandatory in that the user of the system should not, by any sequence of operations, be able to modify the list of programs permitted to manipulate a particular data item or to modify the

list of users permitted to execute a given program. If the individual user could do so, then there would be no control over the ability of an untrustworthy user to alter the system for fraudulent ends.

In the commercial integrity environment, the owner of an application and the general controls implemented by the data processing organization are responsible for ensuring that all programs are well-formed transactions. As in the military environment, there is usually a designated separate staff responsible for assuring that users can execute transactions only in such a way that the separation of duty rule is enforced. The system ensures that the user cannot circumvent these controls. This is a mandatory rather than a discretionary control.

The two mandatory controls, military and commercial, are very different mechanisms. They do not enforce the same policy. The military mandatory control enforces the correct setting of classification levels. The commercial mandatory control enforces the rules that implement the well-formed transaction and separation of duty model. When constructing a computer system to support these mechanisms, very different low-level tools are implemented.

An interesting example of these two sets of mechanisms can be found in the Multics operating system, marketed by Honeywell Information Systems and evaluated by the Department of Defense in Class B2 of its evaluation criteria. A certification in Division B implies that Multics has mandatory mechanisms to enforce security levels, and indeed those mechanisms were specifically implemented to make the system usable in a military multilevel secure environment [WHITMORE]. However, those mechanisms do not provide a sufficient basis for enforcing a commercial integrity model. In fact, Multics has an entirely different set of mechanisms, called protection rings, that were developed specifically for this purpose [SCHROEDER]. Protection rings provide a means for ensuring that data bases can be manipulated only by programs authorized to use them. Multics thus has two complete sets of security mechanisms, one oriented toward the military and designed specifically for multilevel operation, and the other designed for the commercial model of integrity.

The analogy between the two forms of mandatory control is not perfect. In the integrity control model, there must be more discretion left to the administrator of the system, because the determination of what constitutes proper separation of duty can be done only by a comparison with application-specific criteria. The separation of duty determination can be rather complex, because the decisions for all the transactions interact. This greater discretion means that there is also greater scope for error by the security officer or system owner, and that the system is less able to prevent the security officer, as opposed to the user, from misusing the system. To the system user, however, the behavior of the two mandatory controls is similar. The rules are seen as a fundamental part of the system, and may not be circumvented, only further restricted, by any other discretionary control that exists.

COMMERCIAL EVALUATION CRITERIA

As discussed earlier, RACF, ACF/2, and CA-TopSecret were all reviewed using the Department of Defense evaluation criteria described in the Orange Book. Under these criteria, these systems did not provide any mandatory controls. However, these systems, especially when executed in the context of a telecommunications monitor system such as CICS or IMS, constitute the closest approximation the commercial world has to the enforcement of a mandatory integrity policy. There is thus a strong need for a commercial equivalent of the military evaluation criteria to provide a means of categorizing systems that are useful for integrity control.

Extensive study is needed to develop a document with the depth of detail associated with the Department of Defense evaluation criteria. But, as a starting point, we propose the following criteria, which we compare to the fundamental computer security requirements from the "Introduction" to the Orange Book. First, the system must separately authenticate and identify every user, so that his actions can be controlled and audited. (This is similar to the Orange Book requirement for identification.) Second, the system must ensure that specified data items can be manipulated only by a restricted set of programs, and the data center controls must ensure that these programs meet the well-formed transaction rule. Third, the system must associate with each user a valid set of programs to be run, and the data center controls must ensure that these sets meet the separation of duty rule. Fourth, the system must maintain an auditing log that records every program executed and the name of the authorizing user.
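The third criterion admits a minimal sketch. The checker below encodes the purchasing example from the previous section as a rule over user-to-program assignments; the user names, step names, and the checking function itself are hypothetical illustrations of the kind of inspection the data center controls would perform:

```python
# Sketch of a separation-of-duty check over user-to-program assignments,
# using the purchasing example: no single user may be assigned every step,
# since such a user could place an order and pay a fictitious company alone.

PURCHASING_STEPS = {"authorize_order", "record_item",
                    "record_invoice", "authorize_payment"}

assignments = {
    "alice": {"authorize_order", "record_item"},
    "bob":   {"record_invoice"},
    "carol": {"authorize_payment"},
}

def meets_separation_of_duty(assignments) -> bool:
    # No user's program set may cover all steps single-handedly.
    return all(not PURCHASING_STEPS <= progs
               for progs in assignments.values())

assert meets_separation_of_duty(assignments)
assignments["alice"] |= {"record_invoice", "authorize_payment"}
assert not meets_separation_of_duty(assignments)  # alice could act alone
```

A real inspection would be application-specific and, as the text notes, considerably more complex, because the decisions for all the transactions interact.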

(This is superficially similar to the Orange Book requirement for accountability, but the events to be audited are quite different.)

In addition to these criteria, the military and commercial environments share two requirements. First, the computer system must contain mechanisms to ensure that the system enforces its requirements. And second, the mechanisms in the system must be protected against tampering and unauthorized change. These two requirements, which ensure that the system actually does what it asserts it does, are clearly an integral part of any security policy. These are generally referred to as the "general" or "administrative" controls in a commercial data center.

A FORMAL MODEL OF INTEGRITY

In this section, we introduce a more formal model for data integrity within computer systems, and compare our work with other efforts in this area. We use as examples the specific integrity policies associated with accounting practices, but we believe our model is applicable to a wide range of integrity policies.

To begin, we must identify and label those data items within the system to which the integrity model must be applied. We call these "Constrained Data Items," or CDIs. The particular integrity policy desired is defined by two classes of procedures: Integrity Verification Procedures, or IVPs, and Transformation Procedures, or TPs. The purpose of an IVP is to confirm that all of the CDIs in the system conform to the integrity specification at the time the IVP is executed. In the accounting example, this corresponds to the audit function, in which the books are balanced and reconciled to the external environment. The TP corresponds to our concept of the well-formed transaction. The purpose of the TPs is to change the set of CDIs from one valid state to another. In the accounting example, a TP would correspond to a double entry transaction.

To maintain the integrity of the CDIs, the system must ensure that only a TP can manipulate the CDIs. It is this constraint that motivated the term Constrained Data Item. Given this constraint, we can argue that, at any given time, the CDIs meet the integrity requirements. (We call this condition a "valid state.") We can assume that at some time in the past the system was in a valid state, because an IVP was executed to verify this. Reasoning forward from this point, we can examine the sequence of TPs that have been executed. For the first TP executed, we can assert that it left the system in a valid state as follows. By definition it will take the CDIs into a valid state if they were in a valid state before execution of the TP. But this precondition was ensured by execution of the IVP. For each TP in turn, we can repeat this necessary step to ensure that, at any point after a sequence of TPs, the system is still valid. This proof method resembles the mathematical method of induction, and is valid provided the system ensures that only TPs can manipulate the CDIs.[1]

[1] There is an additional detail which the system must enforce, which is to ensure that TPs are executed serially, rather than several at once. During the mid-point of the execution of a TP, there is no requirement that the system be in a valid state. If another TP begins execution at this point, there is no assurance that the final state will be valid. To address this problem, most modern data base systems have mechanisms to ensure that TPs appear to have executed in a strictly serial fashion, even if they were actually executed concurrently for efficiency reasons.

While the system can ensure that only TPs manipulate CDIs, it cannot ensure that the TP performs a well-formed transformation. The validity of a TP (or an IVP) can be determined only by certifying it with respect to a specific integrity policy. In the case of the bookkeeping example, each TP would be certified to implement transactions that lead to properly segregated double entry accounting. The certification function is usually a manual operation, although some automated aids may be available.

Integrity assurance is thus a two-part process: certification, which is done by the security officer, system owner, and system custodian with respect to an integrity policy; and enforcement, which is done by the system. Our model to this point can be summarized in the following three rules:

C1: (Certification) All IVPs must properly ensure that all CDIs are in a valid state at the time the IVP is run.

C2: All TPs must be certified to be valid. That is, they must take a CDI to a valid final state, given that it is in a valid state to begin with. For each TP, and each set of CDIs that it may manipulate, the security officer must specify a "relation," which defines that execution.
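The certification/enforcement split and the induction argument can be sketched as follows. Here "certification" is reduced to registering a TP in a table, and the IVP is a trivial validity test on a single CDI; every name is an illustrative invention, and real certification is, as the text notes, largely a manual inspection:

```python
# Sketch of the certification/enforcement split: an IVP establishes that
# the CDIs are in a valid state, and thereafter only TPs that have been
# certified (here, simply registered) may manipulate them.

cdis = {"ledger": 0}   # a trivial constrained data item
certified_tps = {}     # TP name -> function; filled in by "certification"

def ivp() -> bool:
    """Integrity Verification Procedure: defines the valid state."""
    return cdis["ledger"] >= 0

def run_tp(name: str, *args) -> None:
    """Enforcement: only certified TPs may touch the CDIs."""
    if name not in certified_tps:
        raise PermissionError(f"{name} is not a certified TP")
    certified_tps[name](*args)

def deposit(amount: int) -> None:
    # Certified by inspection: preserves ledger >= 0 for any valid input.
    if amount < 0:
        raise ValueError("amount must be non-negative")
    cdis["ledger"] += amount

certified_tps["deposit"] = deposit   # the (manual) certification step

assert ivp()            # a valid state is established by the IVP...
run_tp("deposit", 10)   # ...and preserved, inductively, by each TP in turn
assert ivp()
```

The two assertions around the TP execution are exactly the base case and inductive step of the argument in the text: valid before, therefore valid after.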

A relation is thus of the form: (TPi, (CDIa, CDIb, CDIc, ...)), where the list of CDIs defines a particular set of arguments for which the TP has been certified.

E1: (Enforcement) The system must maintain the list of relations specified in rule C2, and must ensure that the only manipulation of any CDI is by a TP, where the TP is operating on the CDI as specified in some relation.

The above rules provide the basic framework to ensure internal consistency of the CDIs. To provide a mechanism for external consistency, the separation of duty mechanism, we need additional rules to control which persons can execute which programs on specified CDIs:

E2: The system must maintain a list of relations of the form: (UserID, TPi, (CDIa, CDIb, CDIc, ...)), which relates a user, a TP, and the data objects that TP may reference on behalf of that user. It must ensure that only executions described in one of the relations are performed.

C3: The list of relations in E2 must be certified to meet the separation of duty requirement.

Formally, the relations specified for rule E2 are more powerful than those of rule E1, so E1 is unnecessary. However, for both philosophical and practical reasons, it is helpful to have both sorts of relations. Philosophically, keeping E1 and E2 separate helps to indicate that there are two basic problems to be solved: internal and external consistency. As a practical matter, the existence of both forms together permits complex relations to be expressed with shorter lists, by use of identifiers within the relations that use "wild card" characters to match classes of TPs or CDIs.

The above relation made use of UserID, an identifier for a user of the system. This implies the need for a rule to define these:

E3: The system must authenticate the identity of each user attempting to execute a TP.

Rule E3 is relevant to both commercial and military systems. However, those two classes of systems use the identity of the user to enforce very different policies. The relevant policy in the military context, as described in the Orange Book, is based on level and category of clearance, while the commercial policy is likely to be based on separation of responsibility among two or more users.

There may be other restrictions on the validity of a TP. In each case, this restriction will be manifested as a certification rule and an enforcement rule. For example, if a TP is valid only during certain hours of the day, then the system must provide a trustworthy clock (an enforcement rule) and the TP must be certified to read the clock properly.

Almost all integrity enforcement systems require that all TP execution be logged to provide an audit trail. However, no special enforcement rule is needed to implement this facility; the log can be modeled as another CDI, with an associated TP that only appends to the existing CDI value. The only rule required is:

C4: All TPs must be certified to write to an append-only CDI (the log) all information necessary to permit the nature of the operation to be reconstructed.

There is only one more critical component to this integrity model. Not all data is constrained data. In addition to CDIs, most systems contain data items not covered by the integrity policy that may be manipulated arbitrarily, subject only to discretionary controls. These Unconstrained Data Items, or UDIs, are relevant because they represent the way new information is fed into the system. For example, information typed by a user at the keyboard is a UDI; it may have been entered or modified arbitrarily. To deal with this class of data, it is necessary to recognize that certain TPs may take UDIs as input values, and may modify or create CDIs based on this information. This implies a certification rule:

C5: Any TP that takes a UDI as an input value must be certified to perform only valid transformations, or else no transformations, for any possible value of the UDI. The transformation should take the input from a UDI to a CDI, or the UDI is rejected. Typically, this is an edit program.

For this model to be effective, the various certification rules must not be bypassed. For example, if a user can create and run a new TP without having it certified, the system cannot meet its goals.

190
ensure certain additional constraints. Other examples exist. Separation of
Most obviously: duty might be enforced by analysis of
sets of accessible CDIS for each user.
E4 : Only the agent permitted to We believe that further research on
certify entities may change the specific aspects of integrity policy
list of such entities would lead to a new generation of tools
associated with other for integrity control.
entities: specifically, the
associated with a TP. An agent OTHER MODELS OF INTEGRITY
that can certify an entity may
not have any execute rights Other attempts to model integrity
with respect to that entity. have tried to follow more closely the
structure for data security defined by
This last rule makes this integrity Bell and LaPadula [BELLI, the formal
enforcement mechanism mandatory rather basis of the military security
than discretionary. For this structure mechanisms. Biba [BIBA] defined an
to work overall, the ability to change integrity model that is the inverse of
permission lists must be coupled to the the Bell and LaPadula model. His model
ability to certify, and not to some states that data items exist at
other ability, such as the ability to different levels of integrity, and that
execute a TP. This coupling is the the system should prevent lower level
critical feature that ensures that the data from contaminating higher level
certification rules govern what actually data. In particular, once a program
happens when the system is run. reads lower level data, the system
Together, these nine rules define a prevents that program from writing to
system that enforces a consistent (and thus contaminating ) higher level
integrity policy. The rules are data.
summarized in Figure 1, which shows the Our model has two levels of
way the rules control the system integrity: the lower level UDIS and the
operation. The figure shows a TP that higher level CDIS. CDIS would be
takes certain CDIS as input and produces considered higher level because they can
new versions of certain CDIS as output. be verified using an IVP. In Biba’s
two model, any conversion of a UDI to a CDI
These two sets of CDIS represent
successive valid states of the system. could be done only by a security officer
The figure also shows an IVP reading the or trusted process. This restriction is
collected CDIS in the system in order to clearly unrealistic; data input is the
verify the CDIS’ validity. Associated most common system function, and should
with each part of the system is the rule not be done by a mechanism essentially
that governs it to ensure outside the security model. Our model
(or rules)
integrity. permits the security officer to certify
Central to this model is the idea the method for integrity upgrade (in our
that there are two classes of rules: ‘terms? those TPs that take UDIS as
enforcement rules and certification input values), and-thus recognizes ch e
rules. Enforcement rules correspond to fundamental cole of the TP (i.e.,
the application-independent security trusted process) in our model. More
functions, while certification rules generally, Bibs’s model lacks any
permit the application-specific equivalent of rule El (CDIS changed only
integrity definitions to be incorporated by authorized TP), and thus cannot
into the model. It is desirable to provide the specific idea of constrained
minimize certification rules, because data.
the certification process is complex, Another attempt to describe
prone to error, and must be repeated integrity using the Bell and LaPadula
after each program change. In extending model is Lipner [LIPNER]. He recognizes
this model , therefore, an important that the cakegory facility of this model
research goal must be to shift as much can be used to distinguish the general
of the security burden as possible from user from the systems programmer or the
certification to enforcement. security officer. Lipner also
For example, a common integrity recognizes that data should be
constraint is that TP S are to be manipulated only by certified
executed in a certain order. (production) programs. In attempting to
model (and in most systems of ~~da~?~ express this in terms of the lattice
this idea can be captured only by model, he is constrained to attach lists
storing control information in some CDI, of users to programs and data
and executing explicit program steps in separately, rather than attaching a list
each TP to test this information. The of programs to a data item. His model
result of this style is that the desired thus has no way to express our rule El.
policy is hidden within the program, By combining a lattice security model
rather than being stated as an explicit with the Biba integrity model, he more
rule that the system can then enforce. closely approximates the desired model,

191
Figure 1: Summary of System Integrity Rules
USERS

Cl: lVPvalidatea CDl atate

C5: TPsvatidate UDl

C2: TPspreserve valid state

$
CDI E4: Authorization lists

CDI
IVP

CDI CDI
D

System in
some state

El:
==%/CDlschanged only byauthorized T

192
but still cannot effectively express the idea that data may be manipulated only by specified programs (rule E1).

Our integrity model is less related to the Bell and LaPadula model than it is to the models constructed in support of security certification of systems themselves. The iterative process we use to argue that TPs preserve integrity, which starts with a known valid state and then validates incremental modifications, is also the methodology often used to verify that a system, while executing, continues to meet its requirements for enforcing security. In this comparison, our CDIs would correspond to the data structures of the system, and the TPs to the system code. This comparison suggests that the certification tools developed for system security certification may be relevant for the certifications that must be performed on this model.

For example, if an Orange Book for industry were created, it also might have rating levels. Existing systems such as ACF/2, RACF, and CA-TopSecret would certainly be found wanting in comparison to the model. This model would suggest that, to receive higher ratings, these security systems must provide: better facilities for end-user authentication; segregation of duties within the security officer functions, such as the ability to segregate the person who adds and deletes users from those who write a user's rules, and restriction of the security function from user passwords; and much better rule capabilities to govern the execution of programs and transactions.

The commercial sector would be very interested in a model that would lead to and measure these kinds of changes. Further, for the commercial world, these changes would be much more valuable than taking existing operating systems and security packages to B or A levels as defined in the Orange Book.

CONCLUSION

With the publication of the Orange Book, a great deal of public and governmental interest has focused on the evaluation of computer systems for security. However, it has been difficult for the commercial sector to evaluate the relevance of the Orange Book criteria, because there is no clear articulation of the goals of commercial security. This paper has attempted to identify and describe one such goal, information integrity, a goal that is central to much of commercial data processing.

In using the words commercial and military in describing these models, we do not mean to imply that the commercial world has no use for control of disclosure, or that the military is not concerned with integrity. Indeed, much data processing within the military exactly matches commercial practices. However, taking the Orange Book as the most organized articulation of military concerns, there is a clear difference in priority between the two sectors. For the core of traditional commercial data processing, preservation of integrity is the central and critical goal.

This difference in priority has impeded the introduction of Orange Book mechanisms into the commercial sector. If the Orange Book mechanisms could enforce commercial integrity policies as well as those for military information control, the difference in priority would not matter, because the same system could be used for both. Regrettably, this paper argues that there is not an effective overlap between the mechanisms needed for the two. The lattice model of Bell and LaPadula cannot directly express the idea that manipulation of data must be restricted to well-formed transformations, and that separation of duty must be based on control of subjects' access to these transformations.

The evaluation of RACF, ACF/2, and CA-TopSecret against the Orange Book criteria has made clear to the commercial sector that many of these criteria are not central to the security concerns of the commercial world. What is needed is a new set of criteria that would be more revealing with respect to integrity enforcement. This paper offers a first cut at such a set of criteria. We hope that we can stimulate further effort to refine and formalize an integrity model, with the eventual goal of providing better security systems and tools in the commercial sector.

There is no reason to believe that this effort would be irrelevant to military concerns. Indeed, incorporation of some form of integrity controls into the Orange Book might lead to systems that better meet the needs of both groups.

ACKNOWLEDGMENTS

The authors would like to thank Robert G. Andersen, Frank S. Smith, III, and Ralph S. Poore (Ernst & Whinney, Information Security Services) for their assistance in preparing this paper. We also thank Steve Lipner (Digital Equipment Corporation) and the referees of the paper for their very helpful comments.
REFERENCES

[DoD] Department of Defense, Trusted Computer System Evaluation Criteria, CSC-STD-001-83, Department of Defense Computer Security Center, Fort Meade, MD, August 1983.

[Bell] Bell, D. E. and L. J. LaPadula, "Secure Computer Systems," ESD-TR-73-278 (Vol. I-III) (also Mitre TR-2547), Mitre Corporation, Bedford, MA, April 1974.

[Biba] Biba, K. J., "Integrity Considerations for Secure Computer Systems," Mitre TR-3153, Mitre Corporation, Bedford, MA, April 1977.

[Lipner] Lipner, S. B., "Non-Discretionary Controls for Commercial Applications," Proceedings of the 1982 IEEE Symposium on Security and Privacy, Oakland, CA, April 1982.

[Schroeder] Schroeder, M. D. and J. H. Saltzer, "A Hardware Architecture for Implementing Protection Rings," Comm. ACM, Vol. 15, No. 3, March 1972.

[Whitmore] Whitmore, J. C. et al., "Design for Multics Security Enhancements," ESD-TR-74-176, Honeywell Information Systems, 1974.
