Computer Security # CoSc4171 PDF
COMPUTER SECURITY
(Module Code: CoSc-M4171 & Course Code: CoSc4171)
AY 2017-2018
CHAPTER ONE
FUNDAMENTALS OF COMPUTER SECURITY
1.1 What is Computer Security?
Computer security is the protection of the items you value, called the assets of a computer or
computer system. There are many types of assets, involving hardware, software, data, people,
processes, or combinations of these. To determine what to protect, we must first identify what has
value and to whom.
A computer device (including hardware, added components, and accessories) is certainly an asset.
Because most computer hardware is pretty useless without programs, the software is also an asset.
Software includes the operating system, utilities and device handlers; applications such as word
processing, media players or email handlers; and even programs that you may have written
yourself. Much hardware and software is off-the-shelf, meaning that it is commercially available
(not custom-made for your purpose) and that you can easily get a replacement. The thing that
makes your computer unique and important to you is its content: photos, tunes, papers, email
messages, projects, calendar information, e-books (with your annotations), contact information,
code you created, and the like. Thus, data items on a computer are assets, too. Unlike most
hardware and software, data can be hard—if not impossible—to recreate or replace. These assets
are all shown in Figure 1.2.
These three things—hardware, software, and data—contain or express things like the design for
your next new product, the photos from your recent vacation, the chapters of your new book, or
the genome sequence resulting from your recent research. All of these things represent intellectual
endeavor or property, and they have value that differs from one person or organization to another.
It is that value that makes them assets worthy of protection, and they are the elements we want to
protect. Other assets—such as access to data, quality of service, processes, human users, and
network connectivity—deserve protection, too; they are affected or enabled by the hardware,
software, and data. So in most cases, protecting hardware, software, and data covers these other
assets as well.
Computer systems—hardware, software, and data—have value and deserve security protection.
Values of Assets
After identifying the assets to protect, we next determine their value. We make value-based
decisions frequently, even when we are not aware of them. For example, when you go for a swim
you can leave a bottle of water and a towel on the beach, but not your wallet or cell phone. The
difference relates to the value of the assets.
The value of an asset depends on the asset owner’s or user’s perspective, and it may be independent
of monetary cost, as shown in Figure 1.3. Your photo of your sister, worth only a few cents in
terms of paper and ink, may have high value to you and no value to your roommate. Other items’
value depends on replacement cost; some computer data are difficult or impossible to replace. For
example, that photo of you and your friends at a party may have cost you nothing, but it is
invaluable because there is no other copy. On the other hand, the DVD of your favorite film may
have cost a significant portion of your take-home pay, but you can buy another one if the DVD is
stolen or corrupted. Similarly, timing has bearing on asset value. For example, the value of the
plans for a company’s new product line is very high, especially to competitors. But once the new
product is released, the plans’ value drops dramatically.
Assets’ values are personal, time dependent, and often imprecise.
The Vulnerability–Threat–Control Paradigm
The goal of computer security is protecting valuable assets. To study different ways of protection,
we use a framework that describes how assets may be harmed and how to counter or mitigate that
harm.
A vulnerability is a weakness in the system, for example, in procedures, design, or
implementation that might be exploited to cause loss or harm. For instance, a particular system
may be vulnerable to unauthorized data manipulation because the system does not verify a user’s
identity before allowing data access.
A threat to a computing system is a set of circumstances that has the potential to cause loss or
harm. To see the difference between a threat and a vulnerability, consider the illustration in Figure
1.4. Here, a wall is holding water back. The water to the left of the wall is a threat to the man on
the right of the wall: The water could rise, overflowing onto the man, or it could stay beneath the
height of the wall, causing the wall to collapse. So the threat of harm is the potential for the man
to get wet, get hurt, or be drowned. For now, the wall is intact, so the threat to the man is unrealized.
A vulnerability is a weakness that could be exploited to cause harm.
A threat is a set of circumstances that could cause harm.
However, we can see a small crack in the wall—a vulnerability that threatens the man’s security.
If the water rises to or beyond the level of the crack, it will exploit the vulnerability and harm the
man.
There are many threats to a computer system, including human-initiated and computer-initiated
ones. We have all experienced the results of inadvertent human errors, hardware design flaws, and
software failures. But natural disasters are threats, too; they can bring a system down when the
computer room is flooded or the data center collapses from an earthquake, for example.
A human who exploits a vulnerability perpetrates an attack on the system. An attack can also be
launched by another system, as when one system sends an overwhelming flood of messages to
another, virtually shutting down the second system’s ability to function. Unfortunately, we have
seen this type of attack frequently, as denial-of-service attacks deluge servers with more messages
than they can handle.
How do we address these problems? We use a control or countermeasure as protection. That is, a
control is an action, device, procedure, or technique that removes or reduces a vulnerability. In
general, we can describe the relationship between threats, controls, and vulnerabilities in this way:
A threat is blocked by control of a vulnerability. Controls prevent threats from exercising
vulnerabilities.
Before we can protect assets, we need to know the kinds of harm we have to protect them against,
so now we explore threats to valuable assets.
1.2 The Challenges of Computer Security
Computer and network security is both fascinating and complex. Some of the reasons follow:
1. Security is not as simple as it might first appear to the novice. The requirements seem to
be straightforward; indeed, most of the major requirements for security services can be
given self-explanatory, one-word labels: confidentiality, authentication, nonrepudiation, or
integrity. But the mechanisms used to meet those requirements can be quite complex, and
understanding them may involve rather subtle reasoning.
2. In developing a particular security mechanism or algorithm, one must always consider
potential attacks on those security features. In many cases, successful attacks are designed
by looking at the problem in a completely different way, thereby exploiting an unexpected
weakness in the mechanism.
3. Because of point 2, the procedures used to provide particular services are often
counterintuitive. Typically, a security mechanism is complex, and it is not obvious from
the statement of a particular requirement that such elaborate measures are needed. It is only
when the various aspects of the threat are considered that elaborate security mechanisms
make sense.
4. Having designed various security mechanisms, it is necessary to decide where to use them.
This is true both in terms of physical placement (e.g., at what points in a network are certain
security mechanisms needed) and in a logical sense [e.g., at what layer or layers of an
architecture such as TCP/IP (Transmission Control Protocol/Internet Protocol) should
mechanisms be placed].
5. Security mechanisms typically involve more than a particular algorithm or protocol. They
also require that participants be in possession of some secret information (e.g., an
encryption key), which raises questions about the creation, distribution, and protection of
that secret information. There also may be a reliance on communications protocols whose
behavior may complicate the task of developing the security mechanism. For example, if
the proper functioning of the security mechanism requires setting time limits on the transit
time of a message from sender to receiver, then any protocol or network that introduces
variable, unpredictable delays may render such time limits meaningless.
6. Computer and network security is essentially a battle of wits between a perpetrator who
tries to find holes and the designer or administrator who tries to close them. The great
advantage that the attacker has is that he or she need only find a single weakness, while the
designer must find and eliminate all weaknesses to achieve perfect security.
7. There is a natural tendency on the part of users and system managers to perceive little
benefit from security investment until a security failure occurs.
8. Security requires regular, even constant, monitoring, and this is difficult in today’s short-
term, overloaded environment.
9. Security is still too often an afterthought to be incorporated into a system after the design
is complete rather than being an integral part of the design process.
10. Many users and even security administrators view strong security as an impediment to
efficient and user-friendly operation of an information system or use of information.
1.3 Security Attacks
A useful means of classifying security attacks, used both in X.800 and RFC 2828, is in terms of
passive attacks and active attacks. A passive attack attempts to learn or make use of information
from the system but does not affect system resources. An active attack attempts to alter system
resources or affect their operation.
Passive Attacks
Passive attacks are in the nature of eavesdropping on, or monitoring of, transmissions. The goal of
the opponent is to obtain information that is being transmitted. Two types of passive attacks are
the release of message contents and traffic analysis.
The release of message contents is easily understood. A telephone conversation, an electronic
mail message, and a transferred file may contain sensitive or confidential information. We would
like to prevent an opponent from learning the contents of these transmissions.
A second type of passive attack, traffic analysis, is subtler. Suppose that we had a way of masking
the contents of messages or other information traffic so that opponents, even if they captured the
message, could not extract the information from the message. The common technique for masking
contents is encryption. If we had encryption protection in place, an opponent might still be able to
observe the pattern of these messages. The opponent could determine the location and identity of
communicating hosts and could observe the frequency and length of messages being exchanged.
This information might be useful in guessing the nature of the communication that was taking
place.
Passive attacks are very difficult to detect, because they do not involve any alteration of the data.
Typically, the message traffic is sent and received in an apparently normal fashion, and neither the
sender nor receiver is aware that a third party has read the messages or observed the traffic pattern.
However, it is feasible to prevent the success of these attacks, usually by means of encryption.
Thus, the emphasis in dealing with passive attacks is on prevention rather than detection.
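The contrast between release of message contents and traffic analysis can be illustrated with a minimal one-time-pad sketch. This is an illustration only, not a production cipher, and the message text is a hypothetical example:

```python
import secrets

def otp_encrypt(data: bytes, key: bytes) -> bytes:
    # XOR each byte with the corresponding key byte; applying the same
    # operation again with the same key decrypts
    assert len(key) >= len(data)
    return bytes(b ^ k for b, k in zip(data, key))

message = b"transfer funds to account 42"
key = secrets.token_bytes(len(message))  # random key, used only once

ciphertext = otp_encrypt(message, key)

# Contents are masked: the eavesdropper cannot read the message...
assert ciphertext != message
# ...but traffic analysis still works: message length (along with timing
# and addresses) remains observable
assert len(ciphertext) == len(message)

# The intended receiver, holding the same key, recovers the plaintext
assert otp_encrypt(ciphertext, key) == message
```

Note that even this perfect masking of contents leaves the pattern of communication, who talks to whom, how often, and how much, exposed, which is exactly what traffic analysis exploits.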
Active Attacks
Active attacks involve some modification of the data stream or the creation of a false stream and
can be subdivided into four categories: masquerade, replay, modification of messages, and denial
of service.
A Masquerade takes place when one entity pretends to be a different entity. A masquerade attack
usually includes one of the other forms of active attack. For example, authentication sequences
can be captured and replayed after a valid authentication sequence has taken place, thus enabling
an authorized entity with few privileges to obtain extra privileges by impersonating an entity that
has those privileges.
Replay involves the passive capture of a data unit and its subsequent retransmission to produce an
unauthorized effect.
Modification of Messages simply means that some portion of a legitimate message is altered, or
that messages are delayed or reordered, to produce an unauthorized effect. For example, a
message meaning “Allow John Smith to read confidential file accounts” is modified to mean
“Allow Fred Brown to read confidential file accounts.”
The Denial of Service prevents or inhibits the normal use or management of communications
facilities. This attack may have a specific target; for example, an entity may suppress all messages
directed to a particular destination (e.g., the security audit service). Another form of service denial
is the disruption of an entire network, either by disabling the network or by overloading it with
messages so as to degrade performance.
Active attacks present the opposite characteristics of passive attacks. Whereas passive attacks are
difficult to detect, measures are available to prevent their success.
On the other hand, it is quite difficult to prevent active attacks absolutely because of the wide
variety of potential physical, software, and network vulnerabilities. Instead, the goal is to detect
active attacks and to recover from any disruption or delays caused by them. If the detection has a
deterrent effect, it may also contribute to prevention.
1.4 Threats
We can consider potential harm to assets in two ways: First, we can look at what bad things can
happen to assets, and second, we can look at who or what can cause or allow those bad things to
happen. These two perspectives enable us to determine how to protect assets.
Think for a moment about what makes your computer valuable to you. First, you use it as a tool
for sending and receiving email, searching the web, writing papers, and performing many other
tasks, and you expect it to be available for use when you want it. Without your computer these
tasks would be harder, if not impossible. Second, you rely heavily on your computer’s integrity.
When you write a paper and save it, you trust that the paper will reload exactly as you saved it.
Similarly, you expect that the photo a friend passes you on a flash drive will appear the same when
you load it into your computer as when you saw it on your friend’s computer. Finally, you expect
the “personal” aspect of a personal computer to stay personal, meaning you want it to protect your
confidentiality. For example, you want your email messages to be just between you and your listed
recipients; you don’t want them broadcast to other people. And when you write an essay, you
expect that no one can copy it without your permission.
These three aspects, confidentiality, integrity, and availability, make your computer valuable to
you. But viewed from another perspective, they are three possible ways to make it less valuable,
that is, to cause you harm. If someone steals your computer, scrambles data on your disk, or looks
at your private data files, the value of your computer has been diminished or your computer use
has been harmed. These characteristics are both basic security properties and the objects of security
threats.
We can define these three properties as follows.
Confidentiality: the ability of a system to ensure that an asset is viewed only by authorized
parties.
Integrity: the ability of a system to ensure that an asset is modified only by authorized
parties.
Availability: the ability of a system to ensure that an asset can be used by any authorized
parties.
Taken together (and rearranged), the properties are called the C-I-A triad or the security triad. ISO
7498-2 [ISO89] adds to them two more properties that are desirable, particularly in communication
networks:
Authentication: the ability of a system to confirm the identity of a sender
Nonrepudiation or accountability: the ability of a system to confirm that a sender cannot
convincingly deny having sent something
What can happen to harm the confidentiality, integrity, or availability of computer assets? If a thief
steals your computer, you no longer have access, so you have lost availability; furthermore, if the
thief looks at the pictures or documents you have stored, your confidentiality is compromised. And
if the thief changes the content of your music files but then gives them back with your computer,
the integrity of your data has been harmed. You can envision many scenarios based around these
three properties.
Integrity
Examples of integrity failures are easy to find. A number of years ago a malicious macro in a Word
document inserted the word “not” after some random instances of the word “is;” you can imagine
the havoc that ensued. Because the document was generally syntactically correct, people did not
immediately detect the change. In another case, a model of the Pentium computer chip produced
an incorrect result in certain circumstances of floating-point arithmetic. Although the
circumstances of failure were rare, Intel decided to manufacture and replace the chips. Many of us
receive mail that is misaddressed because someone typed something wrong when transcribing
from a written list. A worse situation occurs when that inaccuracy is propagated to other mailing
lists such that we can never seem to correct the root of the problem. Other times we find that a
spreadsheet seems to be wrong, only to find that someone typed “space 123” in a cell, changing it
from a numeric value to text, so the spreadsheet program misused that cell in computation.
Suppose someone converted numeric data to roman numerals: One could argue that IV is the same
as 4, but IV would not be useful in most applications, nor would it be obviously meaningful to
someone expecting 4 as an answer. These cases show some of the breadth of examples of integrity
failures. For example, if we say that we have preserved the integrity of an item, we may mean that
the item is:
precise
accurate
unmodified
modified only in acceptable ways
modified only by authorized people
modified only by authorized processes
consistent
internally consistent
meaningful and usable
Integrity can also mean two or more of these properties. Welke and Mayfield recognize three
particular aspects of integrity—authorized actions, separation and protection of resources, and
error detection and correction. Integrity can be enforced in much the same way as can
confidentiality: by rigorous control of who or what can access which resources in what ways.
Availability
A computer user’s worst nightmare: You turn on the switch and the computer does nothing. Your
data and programs are presumably still there, but you cannot get at them. Fortunately, few of us
experience that failure. Many of us do experience overload, however: access gets slower and
slower; the computer responds but not in a way we consider normal or acceptable.
Availability applies both to data and to services (that is, to information and to information
processing), and it is similarly complex. As with the notion of confidentiality, different people
expect availability to mean different things. For example, an object or service is thought to be
available if the following are true:
It is present in a usable form.
It has enough capacity to meet the service’s needs.
It is making clear progress, and, if in wait mode, it has a bounded waiting time.
The service is completed in an acceptable period of time.
We can construct an overall description of availability by combining these goals. Following are
some criteria to define availability.
There is a timely response to our request.
Resources are allocated fairly so that some requesters are not favored over others.
Concurrency is controlled; that is, simultaneous access, deadlock management, and
exclusive access are supported as required.
The service or system involved follows a philosophy of fault tolerance, whereby hardware
or software faults lead to graceful cessation of service or to workarounds rather than to
crashes and abrupt loss of information. (Cessation does mean end; whether it is graceful or
not, ultimately the system is unavailable. However, with fair warning of the system’s
stopping, the user may be able to move to another system and continue work.)
The service or system can be used easily and in the way it was intended to be used. (This
is a characteristic of usability, but an unusable system may also cause an availability
failure.)
In Figure 1.6 we depict some of the properties with which availability overlaps. Indeed, the security
community is just beginning to understand what availability implies and how to ensure it.
A person or system can do three basic things with a data item: view it, modify it, or use it. Thus,
viewing (confidentiality), modifying (integrity), and using (availability) are the basic modes of
access that computer security seeks to preserve.
A paradigm of computer security is access control: To implement a policy, computer security
controls all accesses by all subjects to all protected objects in all modes of access. A small,
centralized control of access is fundamental to preserving confidentiality and integrity, but it is not
clear that a single access control point can enforce availability. Indeed, experts on dependability
will note that single points of control can become single points of failure, making it easy for an
attacker to destroy availability by disabling the single control point. Much of computer security’s
past success has focused on confidentiality and integrity; there are models of confidentiality and
integrity.
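The access-control paradigm just described, controlling all accesses by all subjects to all protected objects in all modes, can be sketched as a small access matrix. The subjects, objects, and rights below are hypothetical examples:

```python
# Access matrix: (subject, object) -> set of permitted modes.
# The modes mirror the text: "view" (confidentiality), "modify"
# (integrity), and "use" (availability).
acl = {
    ("alice", "payroll.xlsx"): {"view", "modify", "use"},
    ("bob", "payroll.xlsx"): {"view"},
}

def check_access(subject: str, obj: str, mode: str) -> bool:
    # Default-deny: any (subject, object) pair not listed gets no access
    return mode in acl.get((subject, obj), set())

assert check_access("alice", "payroll.xlsx", "modify")
assert not check_access("bob", "payroll.xlsx", "modify")   # view only
assert not check_access("eve", "payroll.xlsx", "view")     # default deny
```

The default-deny choice reflects the centralized-control idea: an access is allowed only if the policy explicitly grants it, rather than denied only if explicitly forbidden.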
Configuration Weaknesses
Network administrators or network engineers need to learn what the configuration weaknesses are
and correctly configure their computing and network devices to compensate. Table 2.2 lists some
common configuration weaknesses.
Table 2.2 Configuration Weaknesses
Security Policy Weaknesses
Security policy weaknesses can create unforeseen security threats. Users can pose security
risks to the network if they do not follow the security policy. Table 2.3 lists some common security
policy weaknesses and how those weaknesses are exploited.
Table 2.3 Security Policy Weaknesses
2.3 Threats
There are four primary classes of threats to network security, as Figure 2.1 depicts: unstructured
threats, structured threats, external threats, and internal threats.
2.4 Attacks
Four primary classes of attacks exist:
■ Reconnaissance
■ Access
■ Denial of Service
■ Worms, Viruses, and Trojan Horses
Reconnaissance
Reconnaissance is the unauthorized discovery and mapping of systems, services, or vulnerabilities
(see Figure 2.2). It is also known as information gathering and, in most cases, it precedes an actual
access or denial-of-service (DoS) attack. Reconnaissance is somewhat analogous to a thief casing
a neighborhood for vulnerable homes to break into, such as an unoccupied residence, easy-to-open
doors, or open windows.
Figure 2.2 Reconnaissance
Access
System access is the ability for an unauthorized intruder to gain access to a device for which the
intruder does not have an account or a password. Entering or accessing systems to which one does
not have authority to access usually involves running a hack, script, or tool that exploits a known
vulnerability of the system or application being attacked.
Denial of Service (DoS)
Denial of service implies that an attacker disables or corrupts networks, systems, or services with
the intent to deny services to intended users. DoS attacks involve either crashing the system or
slowing it down to the point that it is unusable. But DoS can also be as simple as deleting or
corrupting information. In most cases, performing the attack simply involves running a hack or
script. The attacker does not need prior access to the target; a way to reach it over the network is
usually all that is required. For these reasons, DoS attacks are the most feared.
Worms, Viruses, and Trojan Horses
Malicious software is inserted onto a host to damage a system; corrupt a system; replicate itself;
or deny services or access to networks, systems, or services. They can also allow sensitive
information to be copied or echoed to other systems.
Trojan horses can be used to ask the user to enter sensitive information in a commonly trusted
screen. For example, an attacker might log in to a Windows box and run a program that looks like
the true Windows logon screen, prompting a user to type his username and password. The program
would then send the information to the attacker and then give the Windows error for bad password.
The user would then log out, and the correct Windows logon screen would appear; the user is none
the wiser that his password has just been stolen.
Even worse, the nature of all these threats is changing: from the relatively simple viruses of the
1980s to the more complex and damaging viruses, DoS attacks, and hacking tools in recent years.
Today, these hacking tools are powerful and widespread, with the new dangers of self-spreading
blended worms such as Slammer and Blaster and network DoS attacks. Also, the old days of
attacks that take days or weeks to spread are over. Threats now spread worldwide in a matter of
minutes. The Slammer worm of January 2003 spread around the world in less than 10 minutes.
The next generations of attacks are expected to spread in just seconds. These worms and viruses
could do more than just wreak havoc by overloading network resources with the amount of traffic
they generate; they could also be used to deploy damaging payloads that steal vital information or
erase hard drives. Also, there is a strong concern that the threats of tomorrow will be directed at
the very infrastructure of the Internet.
Attack Examples
Several types of attacks are used today, and this section looks at a representative sample in more
detail.
Reconnaissance Attacks
Reconnaissance attacks can consist of the following:
■ Packet Sniffers
■ Port Scans
■ Ping Sweeps
■ Internet Information Queries
A malicious intruder typically ping sweeps the target network to determine which IP addresses are
alive. After this, the intruder uses a port scanner, as shown in Figure 2.2, to determine what network
services or ports are active on the live IP addresses. From this information, the intruder queries the
ports to determine the application type and version, and the type and version of operating system
running on the target host. Based on this information, the intruder can determine whether a possible
vulnerability exists that can be exploited.
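The port-scanning step described above can be sketched as a minimal TCP connect scan using Python's standard socket module. Scan only hosts you are authorized to test; the host address and port list in the commented example are placeholders:

```python
import socket

def scan_ports(host: str, ports: list, timeout: float = 0.5) -> list:
    """Return the subset of ports that accept a TCP connection."""
    open_ports = []
    for port in ports:
        # A successful connect() means a service is listening on that port
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports

# Example usage (against your own machine only):
# print(scan_ports("127.0.0.1", [22, 80, 443]))
```

Real scanners such as Nmap add many refinements (SYN scans, service and OS fingerprinting), but the core idea is this same probe of which ports respond.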
Figure 2.2 NMAP
Using, for example, the Nslookup and Whois software utilities (see Figure 2.3), an attacker can
easily determine the IP address space assigned to a given corporation or entity. The ping command
tells the attacker what IP addresses are alive. Network snooping and packet sniffing are common
terms for eavesdropping. Eavesdropping is listening in to a conversation, spying, prying, or
snooping. The information gathered by eavesdropping can be used to pose other attacks to the
network.
Figure 3.1 compares some of the distinct features of secret-key and public-key systems
Cryptography
Cryptographic systems are characterized along three independent dimensions:
1. The type of operations used for transforming plaintext to ciphertext. All encryption
algorithms are based on two general principles: substitution, in which each element in the
plaintext (bit, letter, group of bits or letters) is mapped into another element, and
transposition, in which elements in the plaintext are rearranged. The fundamental
requirement is that no information be lost (that is, that all operations are reversible). Most
systems, referred to as product systems, involve multiple stages of substitutions and
transpositions.
2. The number of keys used. If both sender and receiver use the same key, the system is
referred to as symmetric, single-key, secret-key, or conventional encryption. If the sender
and receiver use different keys, the system is referred to as asymmetric, two-key, or public-
key encryption.
3. The way in which the plaintext is processed. A block cipher processes the input one
block of elements at a time, producing an output block for each input block. A stream
cipher processes the input elements continuously, producing output one element at a time,
as it goes along.
Note that the alphabet is wrapped around, so that the letter following Z is A. We can define the
transformation by listing all possibilities, as follows:
plain:  a b c d e f g h i j k l m n o p q r s t u v w x y z
cipher: D E F G H I J K L M N O P Q R S T U V W X Y Z A B C
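A substitution cipher of this kind, the classic Caesar cipher that replaces each letter with the letter three positions later (wrapping from z back to a), can be sketched as:

```python
def caesar_encrypt(plaintext: str, shift: int = 3) -> str:
    # Substitute each letter with the letter `shift` positions later,
    # wrapping around the alphabet; ciphertext is shown in uppercase
    out = []
    for ch in plaintext.lower():
        if ch.isalpha():
            out.append(chr((ord(ch) - ord("a") + shift) % 26 + ord("A")))
        else:
            out.append(ch)  # leave spaces and punctuation unchanged
    return "".join(out)

def caesar_decrypt(ciphertext: str, shift: int = 3) -> str:
    # Shifting forward by 26 - shift undoes the original substitution
    return caesar_encrypt(ciphertext, 26 - shift).lower()
```

For example, caesar_encrypt("meet me after the toga party") yields "PHHW PH DIWHU WKH WRJD SDUWB". Since only 25 nontrivial shifts exist, a brute-force search breaks the cipher instantly, which is why simple substitution alone is never used in practice.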
1. It is computationally easy for a party B to generate a pair (public key PUb, private key PRb).
2. It is computationally easy for a sender A, knowing the public key and the message to be
encrypted, M, to generate the corresponding cipher text:
C = E(PUb, M)
3. It is computationally easy for the receiver B to decrypt the resulting cipher text using the
private key to recover the original message:
M = D(PRb, C)
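These operations can be demonstrated with a toy RSA key pair. The tiny primes below are the standard textbook example and are wholly insecure; real keys use primes hundreds of digits long:

```python
# Toy RSA -- illustrative only, far too small to be secure
p, q = 61, 53
n = p * q                # modulus, part of both keys
phi = (p - 1) * (q - 1)  # Euler totient of n
e = 17                   # public exponent: PUb = (e, n)
d = pow(e, -1, phi)      # private exponent: PRb = (d, n)

M = 65                   # message, encoded as an integer smaller than n

# C = E(PUb, M): easy to compute given the public key
C = pow(M, e, n)

# M = D(PRb, C): easy to compute given the private key
assert pow(C, d, n) == M
```

The asymmetry rests on a further requirement of such systems: deriving d from (e, n) alone should be computationally infeasible, which for RSA amounts to the hardness of factoring n.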
Efficiency of Operation
o Generally, for any hash function h with input x, computation of h(x) is a fast operation.
o Computationally, hash functions are much faster than symmetric encryption.
Properties of Hash Functions
In order to be an effective cryptographic tool, a hash function is desired to possess the following
properties:
Pre-Image Resistance
o This property means that it should be computationally hard to reverse a hash function.
o In other words, if a hash function h produced a hash value z, then it should be a
difficult process to find any input value x that hashes to z.
o This property protects against an attacker who only has a hash value and is trying to
find the input.
Second Pre-Image Resistance
o This property means given an input and its hash, it should be hard to find a different
input with the same hash.
o In other words, if a hash function h for an input x produces hash value h(x), then it
should be difficult to find any other input value y such that h(y) = h(x).
o This property of hash function protects against an attacker who has an input value
and its hash, and wants to substitute different value as legitimate value in place of
original input value.
Collision Resistance
o This property means it should be hard to find two different inputs of any length that
result in the same hash. A hash function with this property is referred to as a collision-free
hash function.
o In other words, for a hash function h, it is hard to find any two different inputs x and
y such that h(x) = h(y).
o Since a hash function is a compressing function with a fixed hash length, it is impossible
for a hash function not to have collisions. Collision resistance only ensures that these
collisions should be hard to find.
o This property makes it very difficult for an attacker to find two input values with the
same hash.
o Also, if a hash function is collision-resistant then it is second pre-image resistant.
o MD5 digests have been widely used in the software world to provide assurance about the
integrity of transferred files. For example, file servers often provide a pre-computed MD5
checksum for their files, so that a user can compare the checksum of the downloaded file
against it.
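The checksum comparison described above can be sketched with Python's hashlib; the filename and published digest are placeholders:

```python
import hashlib

def md5_checksum(path: str) -> str:
    """Compute the MD5 digest of a file, reading it in chunks."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        # Read in 8 KiB chunks so large files need not fit in memory
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# The user compares the computed checksum against the one the server
# published (both names here are hypothetical):
# assert md5_checksum("download.iso") == published_checksum
```

Given the collision attacks described next, such a check detects accidental corruption but should no longer be relied on against a deliberate attacker; SHA-256 is the usual replacement.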
o In 2004, collisions were found in MD5. An analytical attack was reported to succeed in
about an hour using a computer cluster. This collision attack compromised MD5, and it is
no longer recommended for use.
o SHA-1 is the most widely used of the existing SHA hash functions. It is employed in several
widely used applications and protocols including Secure Socket Layer (SSL) security.
o In 2005, a method was found for uncovering collisions for SHA-1 within a practical time frame,
making the long-term viability of SHA-1 doubtful.
o The SHA-2 family has four further variants, SHA-224, SHA-256, SHA-384, and SHA-512,
named for the number of bits in their hash values. No successful attacks have yet been
reported on the SHA-2 hash functions.
o Although SHA-2 is a strong hash function and differs significantly from SHA-1, its basic
design still follows that of SHA-1. Hence, NIST called for new competitive hash function designs.
o In October 2012, NIST chose the Keccak algorithm as the new SHA-3 standard. Keccak
offers many benefits, such as efficient performance and good resistance to attacks.
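The checksum comparison described above can be sketched with Python's hashlib module, which implements MD5 and the SHA-2 family; the file contents and published digest below are purely illustrative.

```python
import hashlib

def checksum(data: bytes, algorithm: str = "md5") -> str:
    """Return the hex digest of the given data."""
    return hashlib.new(algorithm, data).hexdigest()

downloaded = b"contents of the downloaded file"
published = checksum(downloaded)                 # digest the server would list
assert checksum(downloaded) == published         # unmodified file passes
assert checksum(downloaded + b"!") != published  # any change is detected

# Digest length in bits matches the number in each SHA-2 variant's name.
for name in ("sha224", "sha256", "sha384", "sha512"):
    assert len(hashlib.new(name, downloaded).digest()) * 8 == int(name[3:])
```

Note that the comparison only detects accidental or naive modification; because MD5 collisions can be constructed, a strong variant such as SHA-256 should be preferred for integrity checks.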
3.8 Message Authentication
We discussed data integrity threats and the use of hashing techniques to detect whether any
modification attacks have taken place on the data. Another type of threat that exists for data is the
lack of message authentication: the user is not sure about the originator of the message. Message
authentication can be provided using cryptographic techniques that use secret keys, as is done in
encryption.
Message Authentication Code (MAC)
A MAC algorithm is a symmetric-key cryptographic technique for providing message authentication.
To establish the MAC process, the sender and receiver share a symmetric key K. Essentially, a
MAC is an encrypted checksum generated over the underlying message and sent along with the
message to ensure message authentication. The process of using a MAC for authentication is
depicted in the following illustration:
Similar to a hash function, a MAC function compresses an arbitrarily long input into a fixed-length
output. The major difference between a hash and a MAC is that a MAC uses a secret key during
the compression.
The sender forwards the message along with the MAC. Here, we assume that the message is
sent in the clear, as we are concerned with providing message-origin authentication, not
confidentiality. If confidentiality is required, then the message needs encryption.
On receipt of the message and the MAC, the receiver feeds the received message and the
shared secret key K into the MAC algorithm and re-computes the MAC value.
The receiver now checks equality of freshly computed MAC with the MAC received from the
sender. If they match, then the receiver accepts the message and assures himself that the
message has been sent by the intended sender.
If the computed MAC does not match the MAC sent by the sender, the receiver cannot
determine whether the message has been altered or the origin has been falsified. As a bottom
line, the receiver safely assumes that the message is not genuine.
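The generate-then-verify process above can be sketched with Python's hmac module. HMAC is one standard MAC construction (not necessarily the checksum scheme the text describes); the key and message bytes are illustrative.

```python
import hmac
import hashlib

def make_mac(key: bytes, message: bytes) -> bytes:
    """Sender side: compute a MAC over the message with the shared key."""
    return hmac.new(key, message, hashlib.sha256).digest()

def verify_mac(key: bytes, message: bytes, tag: bytes) -> bool:
    """Receiver side: re-compute the MAC and compare in constant time."""
    return hmac.compare_digest(make_mac(key, message), tag)

shared_key = b"pre-established secret"        # must be agreed on beforehand
msg = b"transfer 100 birr to account 42"      # sent in the clear
tag = make_mac(shared_key, msg)               # sent along with the message

assert verify_mac(shared_key, msg, tag)             # genuine message accepted
assert not verify_mac(shared_key, msg + b"0", tag)  # altered message rejected
```

The constant-time comparison (`hmac.compare_digest`) avoids leaking information through timing differences during verification.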
Limitations of MAC
There are two major limitations of MAC, both due to its symmetric nature of operation:
Establishment of Shared Secret.
o It can provide message authentication only among pre-decided legitimate users who
share a key.
o This requires establishment of a shared secret prior to the use of the MAC.
No restrictions apply to the usage of information when the user has received it.
The privileges for accessing objects are decided by the owner of the object, rather than
through a system-wide policy that reflects the organization’s security requirements.
ACLs and owner/group/other access control mechanisms are by far the most common mechanism
for implementing DAC policies. Other mechanisms, even though not designed with DAC in mind,
may have the capabilities to implement a DAC policy.
Non-Discretionary Access Control
In general, all access control policies other than DAC are grouped in the category of
nondiscretionary access control (NDAC). As the name implies, policies in this category have rules
that are not established at the discretion of the user. Non-discretionary policies establish controls
that cannot be changed by users, but only through administrative action. Separation of duty (SOD)
policy can be used to enforce constraints on the assignment of users to roles or tasks. An example
of such a static constraint is the requirement that two roles be mutually exclusive; if one role
requests expenditures and another approves them, the organization may prohibit the same user
from being assigned to both roles.
So, membership in one role may prevent the user from being a member of one or more other roles,
depending on the SOD rules, such as Work Flow and Role-Based Access Control (see the
following sections). Another example is a history-based SOD policy that regulates, for example,
whether the same subject (role) can access the same object a certain number of times. Three
popular non-discretionary access control policies are discussed in this section.
Mandatory Access Control (MAC)
Mandatory Access Control (MAC) policy means that access control policy decisions are made by
a central authority, not by the individual owner of an object, and the owner cannot change access
rights. An example of MAC occurs in military security, where an individual data owner does not
decide who has a Top Secret clearance, nor can the owner change the classification of an object
from Top Secret to Secret. MAC is the most mentioned NDAC policy.
The need for a MAC mechanism arises when the security policy of a system dictates that:
1. Protection decisions must not be decided by the object owner.
2. The system must enforce the protection decisions (i.e., the system enforces the security
policy over the wishes or intentions of the object owner).
Usually a labeling mechanism and a set of interfaces are used to determine access based on the
MAC policy; for example, a user who is running a process at the Secret classification should not
be allowed to read a file with a label of Top Secret. This is known as the “simple security rule,” or
“no read up.” Conversely, a user who is running a process with a label of Secret should not be
allowed to write to a file with a label of Confidential. This rule is called the “*-property”
(pronounced “star property”) or “no write down.”
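The two label rules above can be sketched as a small Python check over ordered security levels (a minimal sketch that ignores compartments; the level names follow the text's examples):

```python
# Security levels ordered from lowest to highest.
LEVELS = {"Unclassified": 0, "Confidential": 1, "Secret": 2, "Top Secret": 3}

def can_read(subject_level: str, object_level: str) -> bool:
    # Simple security rule: "no read up".
    return LEVELS[subject_level] >= LEVELS[object_level]

def can_write(subject_level: str, object_level: str) -> bool:
    # *-property: "no write down".
    return LEVELS[subject_level] <= LEVELS[object_level]

assert not can_read("Secret", "Top Secret")      # reading up is denied
assert not can_write("Secret", "Confidential")   # writing down is denied
assert can_read("Secret", "Confidential")        # reading down is allowed
assert can_write("Secret", "Top Secret")         # writing up is allowed
```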
The *-property is required to maintain system security in an automated environment. A variation
on this rule called the “strict *-property” requires that information can be written at, but not above,
the subject’s clearance level. Multilevel security models such as the Bell-La Padula Confidentiality
and Biba Integrity models are used to formally specify this kind of MAC policy.
However, information can pass through a covert channel in MAC, where information of a higher
security class is deduced by inference such as assembling and intelligently combining information
of a lower security class.
Figure 4.1 A Mandatory Protection System: The protection state is defined in terms of
labels and is immutable. The immutable labeling state and transition state enable the
definition and management of labels for system subjects and objects.
Role-Based Access Control
Although RBAC is technically a form of non-discretionary access control, recent computer
security texts often list RBAC as one of the three primary access control policies (the others are
DAC and MAC). In RBAC, access decisions are based on the roles that individual users have as
part of an organization. Users take on assigned roles (such as doctor, nurse, teller, or manager).
Access rights are grouped by role name, and the use of resources is restricted to individuals
authorized to assume the associated role.
For example, within a hospital system, the role of doctor can include operations to perform a
diagnosis, prescribe medication, and order laboratory tests; the role of researcher can be limited to
gathering anonymous clinical information for studies. The use of roles to control access can be an
effective means for developing and enforcing enterprise-specific security policies and for
streamlining the security management process.
Under RBAC, users are granted membership into roles based on their competencies and
responsibilities in the organization. The operations that a user is permitted to perform are based on
the user's role. User membership into roles can be revoked easily and new memberships established
as job assignments dictate. Role associations can be established when new operations are
instituted, and old operations can be deleted as organizational functions change and evolve. This
simplifies the administration and management of privileges; roles can be updated without updating
the privileges for every user on an individual basis.
When a user is associated with a role, the user can be given no more privilege than is necessary to
perform the job; since many of the responsibilities overlap between job categories, maximum
privilege for each job category could cause unauthorized access. This concept of least privilege
requires identifying the user's job functions, determining the minimum set of privileges required
to perform those functions, and restricting the user to a domain with those privileges and nothing
more. In less precisely controlled systems, least privilege is often difficult or costly to achieve
because it is difficult to tailor access based on various attributes or constraints. Role hierarchies
can be established to provide for the natural structure of an enterprise. A role hierarchy defines
roles that have unique attributes and that may contain other roles; that is, one role may implicitly
include the operations that are associated with another role.
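The role-based decision process can be sketched as follows, using the hospital roles from the text; the user names and permission strings are hypothetical:

```python
# Hypothetical role-to-permission mapping for the hospital example.
ROLE_PERMISSIONS = {
    "doctor": {"perform_diagnosis", "prescribe_medication", "order_lab_tests"},
    "researcher": {"read_anonymous_clinical_data"},
}
USER_ROLES = {"alice": {"doctor"}, "bob": {"researcher"}}

def is_authorized(user: str, operation: str) -> bool:
    """Access is decided by the user's roles, not by per-user grants."""
    return any(operation in ROLE_PERMISSIONS.get(r, set())
               for r in USER_ROLES.get(user, ()))

assert is_authorized("alice", "prescribe_medication")
assert not is_authorized("bob", "prescribe_medication")

# Revoking a role revokes all the permissions that came with it.
USER_ROLES["alice"].discard("doctor")
assert not is_authorized("alice", "prescribe_medication")
```

This indirection is what simplifies administration: updating `ROLE_PERMISSIONS["doctor"]` changes the privileges of every doctor at once.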
Temporal Constraints
Temporal constraints are formal statements of access policies that involve time-based restrictions
on access to resources; they are required in several application scenarios. In some applications,
temporal constraints may be required to limit resource use. In other types of applications, they may
be required for controlling time-sensitive activities. It is these time-based constraints (in addition
to other constraints like workflow precedence relationships) that must be evaluated for generating
dynamic authorizations during workflow execution time. Temporal constraints may be required
in non-workflow environments as well.
For example, in a commercial banking enterprise, an employee should be able to assume the role
of a teller (to perform transactions on customer accounts) only during designated banking hours
(such as 9 a.m. to 2 p.m., Monday through Friday, and 9 a.m. to 12 p.m. on Saturday). To meet
this requirement, it is necessary to specify temporal constraints that limit role availability and
activation capability only to those designated banking hours.
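The banking-hours constraint above can be sketched as a role-availability check (a sketch; the function name and sample dates are ours):

```python
from datetime import datetime

def teller_role_available(now: datetime) -> bool:
    """Teller role can be activated only during designated banking hours."""
    weekday, hour = now.weekday(), now.hour  # Monday == 0
    if weekday <= 4:                         # Monday-Friday, 9 a.m.-2 p.m.
        return 9 <= hour < 14
    if weekday == 5:                         # Saturday, 9 a.m.-12 p.m.
        return 9 <= hour < 12
    return False                             # Sunday: role unavailable

assert teller_role_available(datetime(2018, 5, 7, 10))      # Monday 10 a.m.
assert not teller_role_available(datetime(2018, 5, 7, 15))  # Monday 3 p.m.
```

An access control mechanism would evaluate such a predicate at role-activation time, denying the activation outside the permitted window.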
Popular access control policies related to temporal constraints are the history-based access control
policies, which are not supported by any standard access control mechanism but have practical
application in many business operations such as task transactions and separation of conflicts of
interest. History-based access control is defined in terms of subjects and events, where the events
of the system are specified as the object access operations associated with activity at a particular
security level. This assures that the security policy is defined in terms of the sequence of events
over time, and that the security policy decides which events of the system are permitted to ensure
that information does not “flow” in an unauthorized manner. Popular history-based access control
policies are Workflow and Chinese wall.
'Physical' controls are present to prevent a person from entering office spaces, buildings
or structures.
'Logical' controls are designed to limit electronic access to a network, a computer, digital
files, and stored data.
An Access Control Matrix is a single digital file or written record that lists 'subjects' and 'objects'
and identifies what actions, if any, each subject is permitted to perform on each object. In simple
terms, the matrix allows
only certain people (subjects) to access certain information (objects). As shown in the table below,
the matrix consists of one or more subjects (i.e., people) along one axis and the associated objects
(i.e., files) along the other axis. Certain people are allowed to Read (R), Write (W), execute (E)
and delete (D) files. Restricted access is indicated by a dash.
For example, you can see that Gary is allowed to delete the promotion list, whereas Sandy is
allowed to Read or execute it. Tonya has full access, while Samantha has no access whatsoever.
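The matrix can be sketched as a nested mapping; the entries below follow the examples in the text (Gary can delete, Sandy can read or execute, Tonya has full access, Samantha has none), and any rights beyond those stated are assumptions for illustration:

```python
# One row per subject, one set of rights per object; a missing entry
# means no access. Rights: R(ead), W(rite), E(xecute), D(elete).
MATRIX = {
    "Gary":     {"promotion_list": {"R", "W", "E", "D"}},  # rights beyond D assumed
    "Sandy":    {"promotion_list": {"R", "E"}},
    "Tonya":    {"promotion_list": {"R", "W", "E", "D"}},  # full access
    "Samantha": {},                                        # no access whatsoever
}

def allowed(subject: str, obj: str, right: str) -> bool:
    return right in MATRIX.get(subject, {}).get(obj, set())

assert allowed("Gary", "promotion_list", "D")
assert allowed("Sandy", "promotion_list", "R")
assert not allowed("Sandy", "promotion_list", "D")
assert not allowed("Samantha", "promotion_list", "R")
```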
Resource-oriented access controls, such as access control lists (ACLs), are the most common
access control mechanism in use and by far the most common mechanism for implementing DAC
policies. An ACL is an example of a policy concept that is frequently implemented directly or as
an approximation in modern operating systems. An ACL associates the permitted operation to an
object and specifies all the subjects that can access the object, along with their rights to the object.
That is, each entry in the list is a pair (subject, set of rights). From another point of view, an ACL
corresponds to a column of the access control matrix. ACLs provide a straightforward way of
granting or denying access for a specified user or groups of users. Without the use of access control
lists, granting access to the level of granularity of a single user can be cumbersome.
The composition of an ACL entry is as follows:
Tag type specifies that the ACL entry is one of the following: the file owner, the owning
group, a specific named user, a specific named group, or other (meaning all other users).
Qualifier field describes the specific instance of the tag type. For specific named users and
specific named groups, the qualifier field contains the userid and groupid, respectively.
Qualifier fields for the owner entry and owning group entries are not relevant because this
information is specified elsewhere.
Set of permissions specifies the access rights such as read, write, or execute for the entry.
For example:
user::rwx
File owner tag (with empty qualifier): the owning user has read, write, and execute
permission.
user:userid:rw-
Named user tag: the user userid (qualifier) has read and write permission.
group::r-x
Group owner tag (with empty qualifier): the owning group has read and execute
permission.
group:groupid:--x
Named group tag: the group groupid (qualifier) has execute permission.
other::r--
Other tag (with empty qualifier): all other users (other than the owning and named users and
groups) have read permission.
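Parsing such entries can be sketched as follows, assuming the POSIX-style colon-separated form `tag:qualifier:permissions` (a sketch; the sample ACL strings are illustrative):

```python
def parse_acl_entry(entry: str) -> dict:
    """Split an ACL entry of the form tag:qualifier:permissions.

    An empty qualifier marks the file-owner / owning-group entries.
    """
    tag, qualifier, perms = entry.split(":")
    return {"tag": tag,
            "qualifier": qualifier or None,
            "perms": {c for c in perms if c != "-"}}

acl = ["user::rwx", "user:alice:rw-", "group::r-x", "other::r--"]
entries = [parse_acl_entry(e) for e in acl]

assert entries[0] == {"tag": "user", "qualifier": None,
                      "perms": {"r", "w", "x"}}
assert entries[1]["qualifier"] == "alice"
assert entries[1]["perms"] == {"r", "w"}
```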
Another type of access control is the capability list or access list. A capability is a key to a specific
object, along with a mode of access (read, write, or execute). In a capability system, access to an
object is allowed if the subject that is requesting access possesses a capability for the object. It can
thus be thought of as the inverse of an access control list: an ACL is attached to an object and
specifies which subjects may access the object, while a capability list is attached to a subject and
specifies which objects the subject may access. A capability is a protected identifier that both
identifies the object and specifies the operations to be allowed to the user who possesses the
capability. This approach corresponds to storing the access control matrix by rows.
Table 4.3 Presents an example of a capability list
The capability list in the table is associated with a subject and specifies the subject’s rights. Each
entry in the list is a capability - a pair <object, set of operations>. A subject possessing a capability
is proof of the subject having the access privileges. So, it can be useful to think of capabilities as
being similar to tickets. However, unlike tickets, in some systems capabilities can be copied, and
there may be the potential for the possessor of a capability to give a copy to someone else (this
capacity is itself often represented as a “right”).
A capability list corresponds to a row of the access control matrix. The principal advantage of
capabilities is that it is easy to review all accesses that are authorized for a given subject. The most
successful use of capabilities is at lower levels in the system, where capabilities provide the
underlying protection mechanism and not the user-visible access control scheme. At the highest
levels in the system, the system maintains a list of capabilities for each user. Users cannot add
capabilities to this list except to cover new files that they create. Users might, however, be allowed
to give access to objects by passing copies of their own capabilities to other users, and they might
be able to revoke access to their own objects by taking away capabilities from others (although
revocation can be difficult to implement).
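Viewed as rows of the access control matrix, a capability list can be sketched as a mapping from each subject to its <object, set of operations> pairs; the subject and object names below are hypothetical:

```python
# A capability list is attached to a subject: each entry pairs an object
# with the set of operations the subject may perform on it.
capabilities = {
    "process_A": {("file1", frozenset({"read", "write"})),
                  ("printer", frozenset({"execute"}))},
}

def has_capability(subject: str, obj: str, op: str) -> bool:
    return any(o == obj and op in ops
               for o, ops in capabilities.get(subject, ()))

assert has_capability("process_A", "file1", "read")
assert not has_capability("process_A", "file1", "execute")

# Delegation: copying a capability gives another subject the same rights,
# which is why revocation is hard once copies have been handed out.
capabilities["process_B"] = {("file1", frozenset({"read", "write"}))}
assert has_capability("process_B", "file1", "write")
```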
The Bell-LaPadula Model (abbreviated BLP) is a state machine model used for enforcing access
control in government and military applications. It was developed by David Elliott Bell and
Leonard J. LaPadula, subsequent to strong guidance from Roger R. Schell to formalize the U.S.
Department of Defense (DoD) multilevel security (MLS) policy.
The model is a formal state transition model of computer security policy that describes a set of
access control rules which use security labels on objects and clearances for subjects. Security labels
range from the most sensitive (e.g., "Top Secret") down to the least sensitive (e.g., "Unclassified"
or "Public"). The Bell-LaPadula model is an example of a model where there is no clear distinction
between protection and security.
Features
The Bell-LaPadula model focuses on data confidentiality and controlled access to classified
information, in contrast to the Biba Integrity Model which describes rules for the protection of data
integrity. In this formal model, the entities in an information system are divided into subjects and
objects. The notion of a "secure state" is defined, and it is proven that each state transition preserves
security by moving from secure state to secure state, thereby inductively proving that the system
satisfies the security objectives of the model. The Bell-La Padula model is built on the concept of
a state machine with a set of allowable states in a computer network system. The transition from
one state to another state is defined by transition functions.
A system state is defined to be "secure" if the only permitted access modes of subjects to objects
are in accordance with a security policy. To determine whether a specific access mode is allowed,
the clearance of a subject is compared to the classification of the object (more precisely, to the
combination of classification and set of compartments, making up the security level) to determine
if the subject is authorized for the specific access mode. The clearance/classification scheme is
expressed in terms of a lattice. The model defines two mandatory access control (MAC) rules and
one discretionary access control (DAC) rule with three security properties:
1. The Simple Security Property - a subject at a given security level may not read an object
at a higher security level (no read-up).
2. The *-property (read "star"-property) - a subject at a given security level must not
write to any object at a lower security level (no write-down). The *-property is also known
as the Confinement property.
3. The Discretionary Security Property - use of an access matrix to specify discretionary
access control.
With Bell-LaPadula, users can create content only at or above their own security level (i.e. secret
researchers can create secret or top-secret files but may not create public files; no write-down).
Conversely, users can view content only at or below their own security level (i.e. secret researchers
can view public or secret files, but may not view top-secret files; no read-up).
Limitations
Only addresses confidentiality, control of writing (one form of integrity), *-property and
discretionary access control
Covert channels are mentioned but are not addressed comprehensively
The tranquility principle limits its applicability to systems where security levels do not
change dynamically. It allows controlled copying from high to low via trusted subjects.
Classification Levels
Although the classification systems vary from country to country, most have levels corresponding
to the following British definitions (from the highest level to lowest):
Based on the strict integrity condition, a number of other policies were established. Each of the
other policies is less restrictive than the strict integrity policy. The model has a number of dynamic
policies that lower the integrity levels of either subjects or objects based on the access mode
operation. The Biba model is not without its problems. The model does not support the granting
and revocation of authorizations. Another problem is that the model is used strictly for integrity;
it does nothing to enforce confidentiality. For these reasons, to use the Biba model, it should be
combined with other security models. The Lipner Integrity model is one model that combines the
Biba model and the Bell-LaPadula models together to enforce both integrity and confidentiality.
There are two key consequences of the Biba model, called the low water mark principle and the
high water mark principle. The low water mark principle means that an asset's integrity is only as
good as the subject with least integrity that can write it. The high water mark means that when
assets combine they take on the highest level of their contributors. In practice the high water mark
principle means that over time assets will tend to take on higher classifications.
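The low water mark principle can be sketched as a demotion rule on integrity levels (a minimal sketch; the level names and function name are ours):

```python
# Low water mark: a subject that reads lower-integrity data is demoted to
# that level, so an asset is only as trustworthy as its least-trusted source.
LEVELS = {"low": 0, "medium": 1, "high": 2}

def read_with_low_water_mark(subject_level: str, object_level: str) -> str:
    """Return the subject's integrity level after reading the object."""
    return min(subject_level, object_level, key=LEVELS.get)

level = "high"
level = read_with_low_water_mark(level, "medium")  # demoted to medium
level = read_with_low_water_mark(level, "high")    # stays at medium
assert level == "medium"
```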
The execute property is not always included in descriptions of Biba, but it is added here to point
out that integrity would be violated if computer processes could elevate in privilege. In fact many
viruses attempt to do just that. Together, the BLP and Biba models point out the difficulty of
attaining security in a multi-level model: the characteristics that ensure confidentiality fail to
ensure integrity, and the characteristics that ensure integrity fail to ensure confidentiality.
2. The AS responds with a message, encrypted with a key derived from the user’s password, that
contains the ticket for the TGS. The encrypted message also contains a copy of the session key
used by C and the TGS. In this way, only the user’s client can read it. The same session key is
included in the ticket, which can be read only by the TGS. Now C and the TGS share a common
key.
3. Armed with the ticket and the session key, C is ready to approach the TGS. C sends the TGS
a message that includes the ticket plus the ID of the requested service. In addition, C transmits
an authenticator, which includes the ID and address of C’s user and a timestamp; it is encrypted
with the session key known only by C and the TGS. Unlike the ticket, which is reusable, the
authenticator is intended to be used only once and has a very short lifetime. The TGS can decrypt
the ticket with the key that it shares with the AS. The ticket indicates that the user C has been
provided with the session key Kc. Thus the TGS decrypts the authenticator with Kc obtained from
the ticket and checks the name and address from the authenticator against those in the ticket and
against the network address of the incoming message. If all match, then the TGS is assured that
the sender of the ticket is indeed the ticket’s real owner.
4. Then the TGS replies to the client, sending a message encrypted with the common key that
they share. It includes a new session key to be used with the server V that must provide the
service, the ID of V, the ticket valid for a specific service and the timestamp of the ticket.
The ticket itself includes the new session key.
5. C now has a reusable service-granting ticket for V. When C presents this ticket, it also
sends an authenticator. The server can decrypt the ticket, recover the session key and
decrypt the authenticator.
Finally, at the conclusion of this process, the client and the server share a secret key. This key can
be used to encrypt future messages between the two or to exchange a new session key for that
purpose.
Another relatively general-purpose solution is to implement security just above TCP (Figure 5.1b).
The foremost example of this approach is the Secure Sockets Layer (SSL) and the follow-on
Internet standard known as Transport Layer Security (TLS). At this level, there are two
implementation choices. For full generality, SSL (or TLS) could be provided as part of the
underlying protocol suite and therefore be transparent to applications. Alternatively, SSL can be
embedded in specific packages. For example, the Netscape and Microsoft Internet Explorer browsers come
equipped with SSL, and most Web servers have implemented the protocol.
Application-specific security services are embedded within the particular application. Figure 5.1c
shows examples of this architecture. The advantage of this approach is that the service can be
tailored to the specific needs of a given application.
5.2 Secure Socket Layer and Transport Layer Security
SSL is designed to make use of TCP to provide a reliable end-to-end secure service. SSL is not a
single protocol but rather two layers of protocols, as illustrated in Figure 5.2. The SSL Record
Protocol provides basic security services to various higher-layer protocols. In particular, the
Hypertext Transfer Protocol (HTTP), which provides the transfer service for Web client/server
interaction, can operate on top of SSL. Three higher-layer protocols are defined as part of SSL:
the Handshake Protocol, the Change Cipher Spec Protocol, and the Alert Protocol. These SSL-
specific protocols are used in the management of SSL exchanges.
Two important SSL concepts are the SSL session and the SSL connection, which are defined in
the specification as follows.
Connection: A connection is a transport (in the OSI layering model definition) that provides
a suitable type of service. For SSL, such connections are peer-to-peer relationships. The
connections are transient. Every connection is associated with one session.
Session: An SSL session is an association between a client and a server. Sessions are created
by the Handshake Protocol. Sessions define a set of cryptographic security parameters which
can be shared among multiple connections. Sessions are used to avoid the expensive
negotiation of new security parameters for each connection.
Between any pair of parties (applications such as HTTP on client and server), there may be
multiple secure connections. In theory, there may also be multiple simultaneous sessions between
parties, but this feature is not used in practice. There are a number of states associated with each
session. Once a session is established, there is a current operating state for both read and write (i.e.,
receive and send). In addition, during the Handshake Protocol, pending read and write states are
created. Upon successful conclusion of the Handshake Protocol, the pending states become the
current states.
SSL Record Protocol
The SSL Record Protocol provides two services for SSL connections:
Confidentiality: The Handshake Protocol defines a shared secret key that is used for
conventional encryption of SSL payloads.
Message Integrity: The Handshake Protocol also defines a shared secret key that is
used to form a message authentication code (MAC).
Figure 5.3 indicates the overall operation of the SSL Record Protocol. The Record Protocol takes
an application message to be transmitted, fragments the data into manageable blocks, optionally
compresses the data, applies a MAC, encrypts, adds a header, and transmits the resulting unit in a
TCP segment. Received data are decrypted, verified, decompressed, and reassembled before being
delivered to higher-level users.
The first step is fragmentation. Each upper-layer message is fragmented into blocks of 2^14 bytes
(16,384 bytes) or less. Next, compression is optionally applied. Compression must be lossless and
may not increase the content length by more than 1024 bytes. In SSLv3 (as well as the current
version of TLS), no compression algorithm is specified, so the default compression algorithm is
null. The next step in processing is to compute a message authentication code over the
compressed data. For this purpose, a shared secret key is used.
Next, the compressed message plus the MAC are encrypted using symmetric encryption.
Encryption may not increase the content length by more than 1024 bytes, so that the total length
may not exceed 2^14 + 2048 bytes. The following encryption algorithms are permitted:
For stream encryption, the compressed message plus the MAC are encrypted. Note that the MAC
is computed before encryption takes place and that the MAC is then encrypted along with the
plaintext or compressed plaintext.
For block encryption, padding may be added after the MAC prior to encryption. The padding is in
the form of a number of padding bytes followed by a one-byte indication of the length of the
padding. The total amount of padding is the smallest amount such that the total size of the data to
be encrypted (plaintext plus MAC plus padding) is a multiple of the cipher’s block length.
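The padding rule above can be sketched as a small calculation (a sketch; the function name is ours, and the SHA-1 MAC length and 8-byte block size are the values used in the text's example):

```python
def ssl_padding_length(plaintext_len: int, mac_len: int, block_len: int) -> int:
    """Smallest number of padding bytes such that plaintext + MAC +
    padding + the one-byte padding-length field fills whole blocks."""
    unpadded = plaintext_len + mac_len + 1   # +1 for the padding-length byte
    return (-unpadded) % block_len

# 58-byte plaintext, 20-byte SHA-1 MAC, 8-byte block cipher:
# 58 + 20 + 1 = 79 bytes so far, so 1 padding byte brings the total to 80.
pad = ssl_padding_length(58, 20, 8)
assert pad == 1
assert (58 + 20 + pad + 1) % 8 == 0  # 80 bytes, an integer multiple of 8
```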
An example is a plaintext (or compressed text if compression is used) of 58 bytes, with a MAC of
20 bytes (using SHA-1), that is encrypted using a block length of 8 bytes (e.g., DES). With the
padding-length byte, this yields a total of 79 bytes. To make the total an integer multiple of 8, one
byte of padding is added. The final step of SSL Record Protocol processing is to prepare a header
consisting of the following fields:
Content Type (8 bits): The higher-layer protocol used to process the enclosed fragment.
Major Version (8 bits): Indicates major version of SSL in use. For SSLv3, the value is 3.
Minor Version (8 bits): Indicates minor version in use. For SSLv3, the value is 0.
Compressed Length (16 bits): The length in bytes of the plaintext fragment (or compressed
fragment if compression is used). The maximum value is 2^14 + 2048.
Change Cipher Spec Protocol
The Change Cipher Spec Protocol is one of the three SSL-specific protocols that use the SSL
Record Protocol, and it is the simplest. This protocol consists of a single message (Figure 5.5a),
which consists of a single byte with the value 1. The sole purpose of this message is to cause the
pending state to be copied into the current state, which updates the cipher suite to be used on this
connection.
Handshake Protocol
The most complex part of SSL is the Handshake Protocol. This protocol allows the server and
client to authenticate each other and to negotiate an encryption and MAC algorithm and
cryptographic keys to be used to protect data sent in an SSL record. The Handshake Protocol is
used before any application data is transmitted. The Handshake Protocol consists of a series of
messages exchanged by client and server. Each message has three fields.
IPsec provides an easy mechanism for implementing a Virtual Private Network (VPN) for such
organizations. VPN technology allows an organization’s inter-office traffic to be sent over the
public Internet by encrypting the traffic before it enters the public Internet and logically
separating it from other traffic.
Overview of IPsec
The IPsec suite can be considered to have two separate operations which, when performed in
unison, provide a complete set of security services. These two operations are IPsec
Communication and Internet Key Exchange.
IPsec Communication
It is typically associated with standard IPsec functionality. It involves encapsulating, encrypting,
and hashing the IP datagrams and handling all packet processing. It is responsible for managing
the communication according to the available Security Associations (SAs) established between
the communicating parties. It uses security protocols such as Authentication Header (AH) and
Encapsulating Security Payload (ESP). IPsec communication is not involved in the creation of
keys or their management. The IPsec communication operation itself is commonly referred to as IPsec.
Internet Key Exchange (IKE)
IKE is the automatic key management protocol used for IPsec. Technically, key management is
not essential for IPsec communication and the keys can be manually managed. However, manual
key management is not desirable for large networks. IKE is responsible for creation of keys for
IPsec and providing authentication during key establishment process.
Although IPsec can be used with other key management protocols, IKE is used by default. IKE defines two protocols (Oakley and SKEME) to be used with the already defined key management
framework Internet Security Association Key Management Protocol (ISAKMP). ISAKMP is not
IPsec specific, but provides the framework for creating SAs for any protocol.
IPsec Communication has two modes of functioning; transport and tunnel modes. These modes
can be used in combination or used individually depending upon the type of communication
desired.
Transport Mode
In transport mode, IPsec does not encapsulate the packet received from the upper layer. The original IP header is maintained and the data is forwarded based on the attributes set by the upper-layer protocol.
Tunnel Mode
This mode of IPsec provides encapsulation services along with other security services. In tunnel mode, the entire packet from the upper layer is encapsulated before the security protocol is applied, and a new IP header is added. Tunnel mode is typically associated with gateway activities.
The encapsulation provides the ability to send several sessions through a single gateway. As far as
the endpoints are concerned, they have a direct transport layer connection.
The datagram from one system forwarded to the gateway is encapsulated and then forwarded to
the remote gateway. The remote associated gateway de-encapsulates the data and forwards it to
the destination endpoint on the internal network. Using IPsec, the tunneling mode can be
established between the gateway and individual end system as well.
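The difference between the two modes can be sketched with labeled byte strings standing in for real headers (all addresses and header contents are made up for illustration):

```python
# Stand-ins for real packet headers; values are made up.
orig_ip_hdr = b"[IP src=10.0.0.5 dst=10.0.1.9]"
payload     = b"[TCP segment]"
esp_hdr     = b"[ESP]"

# Transport mode: the original IP header is kept; only the
# upper-layer payload is protected.
transport = orig_ip_hdr + esp_hdr + payload

# Tunnel mode: the entire original packet is encapsulated and a
# new gateway-to-gateway IP header is prepended.
new_ip_hdr = b"[IP src=gw1 dst=gw2]"
tunnel = new_ip_hdr + esp_hdr + orig_ip_hdr + payload

assert transport.startswith(orig_ip_hdr)
assert tunnel.startswith(new_ip_hdr) and orig_ip_hdr in tunnel
```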
IPsec is a suite of protocols for securing network connections. It is rather a complex mechanism,
because instead of giving straightforward definition of a specific encryption algorithm and
authentication function, it provides a framework that allows an implementation of anything that
both communicating ends agree upon. Authentication Header (AH) and Encapsulating Security
Payload (ESP) are the two main communication protocols used by IPsec. While AH only authenticates, ESP can both encrypt and authenticate the data transmitted over the connection.
Transport Mode provides a secure connection between two endpoints without changing the IP
header. Tunnel Mode encapsulates the entire payload IP packet and adds a new IP header. The latter
is used to form a traditional VPN, as it provides a virtual secure tunnel across an untrusted Internet.
Setting up an IPsec connection involves all kinds of crypto choices. Authentication is usually built
on top of a cryptographic hash such as MD5 or SHA-1. Common encryption algorithms include DES, 3DES, Blowfish, and AES; other algorithms are possible too.
Both communicating endpoints need to know the secret values used in hashing or encryption.
Manual keying requires manual entry of the secret values on both ends, presumably conveyed by some out-of-band mechanism, while IKE (Internet Key Exchange) is a sophisticated mechanism for doing this online.
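The hash-based authentication mentioned above is, in practice, an HMAC; AH and ESP commonly use HMAC-SHA1 truncated to 96 bits (HMAC-SHA1-96). A minimal sketch with Python's standard library, using made-up key and packet bytes:

```python
import hashlib
import hmac

# The shared secret and packet contents are made up for illustration.
key = b"shared-secret-from-IKE-or-manual-keying"
packet = b"IP payload to be authenticated"

# HMAC-SHA1 produces a 20-byte tag; AH/ESP commonly truncate it to
# 96 bits (12 bytes) before placing it in the packet.
tag = hmac.new(key, packet, hashlib.sha1).digest()[:12]

# The receiver recomputes the tag and compares in constant time.
expected = hmac.new(key, packet, hashlib.sha1).digest()[:12]
assert hmac.compare_digest(tag, expected)
assert len(tag) == 12
```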
PPTP
Point-to-Point Tunneling Protocol (PPTP) is the oldest of the three protocols used in VPNs. It was
originally designed as a secure extension to Point-to-Point Protocol (PPP). PPTP works at the data
link layer of the OSI model. PPTP offers two different methods of authenticating the user: Extensible Authentication Protocol (EAP) and Challenge Handshake Authentication Protocol (CHAP). EAP was actually designed specifically for PPTP and is not proprietary. CHAP is a three-way
process whereby the client sends a code to the server, the server authenticates it, and then the server
responds to the client. CHAP also periodically re-authenticates a remote client, even after the
connection is established. PPTP uses Microsoft Point-to-Point Encryption (MPPE) to encrypt packets. MPPE is based on the RC4 stream cipher and supports 40-bit and 128-bit keys.
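The CHAP three-way exchange described above can be sketched as follows. The MD5 response computation (a hash over the identifier, the shared secret, and the challenge) follows the standard CHAP definition in RFC 1994; the identifier and secret here are made up:

```python
import hashlib
import os

def chap_response(identifier: int, secret: bytes, challenge: bytes) -> bytes:
    # RFC 1994: response = MD5(identifier || secret || challenge)
    return hashlib.md5(bytes([identifier]) + secret + challenge).digest()

secret = b"shared-secret"   # known to both client and server

# 1. Server sends a random challenge with an identifier.
identifier, challenge = 7, os.urandom(16)
# 2. Client hashes the challenge with the shared secret and replies.
response = chap_response(identifier, secret, challenge)
# 3. Server computes the same hash and authenticates the client.
assert response == chap_response(identifier, secret, challenge)
```

Because only the hash crosses the wire, the shared secret itself is never transmitted; periodic re-challenges reuse the same mechanism.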
L2TP
Layer 2 Tunneling Protocol (L2TP) was explicitly designed as an enhancement to PPTP. Like
PPTP, it works at the data link layer of the OSI model. It offers several improvements over PPTP. First, it offers more and varied methods for authentication: PPTP offers two, whereas L2TP offers five.
In addition to CHAP and EAP, L2TP offers PAP, SPAP, and MS-CHAP.
In addition to more authentication protocols available for use, L2TP offers other enhancements.
PPTP will only work over standard IP networks, whereas L2TP will work over X.25 networks (a
common protocol in phone systems) and ATM (asynchronous transfer mode, a high-speed
networking technology) systems. L2TP also uses IPsec for its encryption.
The simplest way of sending an e-mail would be sending a message directly from the sender’s
machine to the recipient’s machine. In this case, it is essential for both the machines to be running
on the network simultaneously.
However, this setup is impractical as users may occasionally connect their machines to the
network. Hence, the concept of setting up e-mail servers arrived. In this setup, the mail is sent to
a mail server which is permanently available on the network. When the recipient’s machine
connects to the network, it reads the mail from the mail server.
In general, the e-mail infrastructure consists of a mesh of mail servers, also termed Message Transfer Agents (MTAs), and client machines running an e-mail program comprising a User Agent (UA) and a local MTA. Typically, an e-mail message is forwarded from its UA, passes through the mesh of MTAs, and finally reaches the UA on the recipient's machine.
Simple Mail Transfer Protocol (SMTP) is used for forwarding e-mail messages.
Post Office Protocol (POP) and Internet Message Access Protocol (IMAP) are used by the recipient to retrieve messages from the server.
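The UA-to-MTA handoff can be sketched with Python's standard library: the user agent builds a message and submits it to a mail server over SMTP, and the recipient later retrieves it with POP or IMAP. The addresses and server name below are made up, so the actual send is left commented out:

```python
import smtplib
from email.message import EmailMessage

# The user agent (UA) builds the message...
msg = EmailMessage()
msg["From"] = "alice@example.com"
msg["To"] = "bob@example.org"
msg["Subject"] = "Hello"
msg.set_content("A plain-text message body.")

# ...and hands it to a mail server (MTA) over SMTP.
def submit(message: EmailMessage, server: str = "mail.example.com") -> None:
    with smtplib.SMTP(server) as smtp:   # port 25 by default
        smtp.send_message(message)

# submit(msg)  # uncomment only with a reachable mail server
```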
MIME
The basic Internet e-mail standard was written in 1982 and describes the format of e-mail messages exchanged on the Internet. It mainly supports e-mail messages written as text in the basic Roman alphabet. By 1992, the need was felt to improve it. Hence, an additional standard, Multipurpose Internet Mail Extensions (MIME), was defined as a set of extensions to the basic Internet e-mail standard. MIME provides the ability to send e-mail using characters other than those of the basic Roman alphabet, such as the Cyrillic alphabet (used in Russian), the Greek alphabet, or even the ideographic characters of Chinese. Another need fulfilled by MIME is sending non-text content, such as images or video clips. Due to these features, the MIME standard became widely adopted along with SMTP for e-mail communication.
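The two MIME capabilities just described, non-Roman text and non-text attachments, can be sketched with Python's standard email package (the attachment bytes are a placeholder, not a real image):

```python
from email.message import EmailMessage

msg = EmailMessage()
msg["Subject"] = "MIME demo"
# Non-ASCII body text: MIME negotiates a suitable charset (UTF-8 here).
msg.set_content("Привет, delivered intact thanks to MIME.")
# Non-text content: a binary part carrying its own MIME type.
msg.add_attachment(b"\x89PNG placeholder bytes", maintype="image",
                   subtype="png", filename="photo.png")

# Attaching the image turned the message into a multipart/mixed entity.
assert msg.is_multipart()
assert msg.get_content_type() == "multipart/mixed"
```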
Growing use of e-mail communication for important and crucial transactions demands the provision of certain fundamental security services, such as the following:
Confidentiality: E-mail message should not be read by anyone but the intended recipient.
Authentication: E-mail recipient can be sure of the identity of the sender.
Integrity: Assurance to the recipient that the e-mail message has not been altered since it
was transmitted by the sender.
Non-repudiation: E-mail recipient is able to prove to a third party that the sender really
did send the message.
Proof of submission: E-mail sender gets the confirmation that the message is handed to
the mail delivery system.
Proof of delivery: Sender gets a confirmation that the recipient received the message.
Security services such as privacy, authentication, message integrity, and non-repudiation are
usually provided by using public key cryptography.
PGP
Pretty Good Privacy (PGP) is an e-mail encryption scheme. It has become the de-facto standard
for providing security services for e-mail communication. As discussed above, it uses public key
cryptography, symmetric key cryptography, hash function, and digital signature. It provides:
Privacy
Sender Authentication
Message Integrity
Non-repudiation
Along with these security services, it also provides data compression and key management support.
PGP uses existing cryptographic algorithms such as RSA, IDEA, and MD5 rather than inventing new ones.
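PGP's core design is hybrid encryption: a random one-time session key encrypts the message with a fast symmetric cipher, and only that small session key is encrypted with the recipient's public RSA key. The toy sketch below shows the symmetric half only, using a SHA-256 counter keystream as a stand-in for IDEA/AES; it illustrates the message flow and is not a real cipher, and the RSA step is omitted:

```python
import hashlib
import secrets

def keystream(key: bytes, n: int) -> bytes:
    # Toy stand-in for a real symmetric cipher: SHA-256 in counter mode.
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def encrypt(key: bytes, data: bytes) -> bytes:
    # XOR with the keystream; the same call decrypts.
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

# A fresh session key per message; in real PGP this key would then be
# encrypted with the recipient's RSA public key and sent alongside.
session_key = secrets.token_bytes(16)
ct = encrypt(session_key, b"attack at dawn")
assert encrypt(session_key, ct) == b"attack at dawn"
```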
PGP Certificate
A PGP key certificate is normally established through a chain of trust. For example, A's public key is signed by B using his private key, and B's public key is signed by C using his private key. As this
process goes on, it establishes a web of trust. In a PGP environment, any user can act as a certifying
authority. Any PGP user can certify another PGP user's public key. However, such a certificate is
only valid to another user if the user recognizes the certifier as a trusted introducer.
Several issues exist with such a certification method. It may be difficult to find a chain leading from a known and trusted public key to the desired key. Also, there might be multiple chains which can lead to different keys for the desired user. PGP can also use the PKI infrastructure with a
certification authority and public keys can be certified by CA (X.509 certificate).
S/MIME
S/MIME stands for Secure Multipurpose Internet Mail Extension. S/MIME is a secure e-mail
standard. It is based on an earlier non-secure e-mailing standard called MIME.
Working of S/MIME
S/MIME approach is similar to PGP. It also uses public key cryptography, symmetric key
cryptography, hash functions, and digital signatures. It provides similar security services as PGP
for e-mail communication. The most common symmetric ciphers used in S/MIME are RC2 and
TripleDES. The usual public key method is RSA, and the hashing algorithm is SHA-1 or MD5.
S/MIME specifies additional MIME types, such as “application/pkcs7-mime”, for enveloping data after encryption. The whole MIME entity is encrypted and packed into an object.
S/MIME has standardized cryptographic message formats (different from PGP). In fact, MIME is
extended with some keywords to identify the encrypted and/or signed parts in the message.
S/MIME relies on X.509 certificates for public key distribution. It needs top-down hierarchical
PKI for certification support.
Employability of S/MIME
Because implementation requires a certificate from a certification authority, not all users can take advantage of S/MIME; some may wish to encrypt a message with a public/private key pair without the involvement or administrative overhead of certificates.
In practice, although most e-mailing applications implement S/MIME, the certificate enrollment process is complex. PGP support, by contrast, usually requires adding a plug-in, and that plug-in comes with all that is needed to manage keys. The Web of Trust is not really used; people exchange their
public keys over another medium. Once obtained, they keep a copy of public keys of those with
whom e-mails are usually exchanged.
One of the schemes, either PGP or S/MIME, is used depending on the environment. Secure e-mail communication in a captive network can be provided by adopting PGP. For e-mail security over the Internet, where mails are often exchanged with new, unknown users, S/MIME is considered a good option.
CHAPTER SIX
DATABASE SECURITY
6.1 Introduction to Database Security
Data is the most valuable asset in today's world, as it is used in day-to-day life by everyone from a single individual to large organizations. To make the retrieval and maintenance of data easy and efficient, it is stored in a database. Databases are a favorite target for attackers because of the data they contain and because of their volume. There are many ways a database can be compromised.
The goal of database security is to prevent unauthorized or accidental access to data. Because the
database environment has become more complex and more decentralized, management of data
security and integrity has become a more complex and time consuming job for data administrators.
Threats to the database can be numerous. The threat can be accidental or intentional. In either case,
security of the database and the entire system, including the network, operating system, the
physical area where the database resides and the personnel allowed access, must all be considered.
A database designer must consider a security strategy to address potential threats to data. Generally
a data security plan includes such procedures, including physical protection methods and use of
data management software.
Theft and fraud: Includes intentional security breaches of data and unauthorized data manipulation because of inadequate access controls or inadequate physical security of the database.
Loss of privacy and confidentiality: Loss of privacy refers to loss of protection of an individual's data; loss of confidentiality refers to loss of protection of organizational data. Failure to control these types of losses may lead to blackmail, public embarrassment, bribery, and loss of competitiveness.
Loss of data integrity: Includes data corruption and invalid data. Failure to control data corruption can severely impact an organization.
Loss of data availability: Includes damage to the hardware, the network, or applications that may cause data to become unavailable to users.
Controlling outside access and threats to the database is another form of security.
These methods are described below.
Physical protection: This includes the actual physical space that houses the database and/or a computer system. Physical protection may include password entry for personnel in the form of key card or keypad entry, video cameras in the area, lead-based doors, adequate lighting, etc.
Active Attacks
In an active attack, actual database values are modified. These attacks are more problematic than passive attacks because they can mislead a user; for example, a user may get wrong information as the result of a query. There are some ways of performing this kind of attack, which are mentioned below:
Spoofing – In this type of attack, a cipher text value is replaced by a generated value.
Splicing – Here, a cipher text value is replaced by a different cipher text value.
Replay – A cipher text value is replaced with an old version that was previously updated or deleted.
6.3 Security Threats to Database
Databases are a favorite target for attackers because of their data. There are many ways in which
a database can be compromised. There are various types of attacks and threats from which a
database should be protected. Solutions to most of these threats have been found, although some solutions are good while others are only temporary.
Excessive Privilege Abuse: When users (or applications) are granted database access privileges
that exceed the requirements of their job function, these privileges may be abused for malicious
purposes. For example, a computer operator in an organization who requires only the ability to change employee contact information may take advantage of excessive database update privileges to change salary information.
Legitimate Privilege Abuse: Legitimate privilege abuse is when an authorized user misuses their
legitimate database privileges for unauthorized purposes. Legitimate privilege abuse can be in the
form of misuse by database users, administrators or a system manager doing any unlawful or
unethical activity. It includes, but is not limited to, any misuse of sensitive data or unjustified use of privileges.
Privilege Elevation: Sometimes there are vulnerabilities in database software and attackers may
take advantage of that to convert their access privileges from an ordinary user to those of an
administrator, which could result in bogus accounts, transfer of funds, and misinterpretation of
certain sensitive analytical information.
Database Rootkits: A database rootkit is a program or procedure that is hidden inside the database and that provides administrator-level privileges to gain access to the data in the database. These rootkits
may even turn off alerts triggered by Intrusion Prevention Systems (IPS). It is possible to install a
rootkit only after compromising the underlying operating system.
Platform Vulnerabilities: Vulnerabilities in operating systems and additional services installed
on a database server may lead to unauthorized access, data corruption, or denial of service. For
example, the Blaster Worm took advantage of a Windows 2000 vulnerability to create denial of
service conditions.
Inference: Even in secure DBMSs, it is possible for users to draw inferences from the information
they obtain from the database. A user can draw inference from a database when the user can guess
or conclude more sensitive information from the retrieved information from the database or
additionally with some prior knowledge. An inference presents a security breach if more highly
classified information can be inferred from less classified information. There are two important
cases of the inference problem, which often arise in database systems.
1) Aggregation problem: occurs when a collection of data items is more sensitive i.e. classified
at a higher level than the levels of individual data items. For example in an organization the profit
of each branch is not sensitive but total profit of organization is at higher level of classification.
2) Data association problem: occurs whenever two values seen together are classified at a higher
level than the classification of either value individually. As an example, the list containing the
names of all employees and the list containing all employee salaries are unclassified, while a
combined list giving employee names with their salaries is classified.
SQL Injection: In a SQL injection attack, an attacker typically inserts (or “injects”) unauthorized
SQL statements into a vulnerable SQL data channel. Typically targeted data channels include
stored procedures and Web application input parameters. These injected statements are then passed
to the database where they are executed. For example, in a web application, an attacker inserts a SQL fragment instead of his name. Using SQL injection, attackers may gain unrestricted access to an entire database.
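The attack can be demonstrated with Python's built-in sqlite3 module. The table and the injected string below are made up; the fix, binding the input as a parameter, is the standard defense:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, salary INT)")
conn.execute("INSERT INTO users VALUES ('alice', 50000), ('bob', 60000)")

# The attacker types this instead of a name.
evil = "nobody' OR '1'='1"

# Vulnerable: user input is concatenated into the SQL text, so the
# injected predicate makes the WHERE clause always true.
leaked = conn.execute(
    f"SELECT * FROM users WHERE name = '{evil}'").fetchall()
assert len(leaked) == 2          # the whole table leaks

# Safe: a parameterized query treats the input as a value, not as SQL.
safe = conn.execute(
    "SELECT * FROM users WHERE name = ?", (evil,)).fetchall()
assert safe == []                # no user has that literal name
```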
Unpatched DBMS: As new vulnerabilities are discovered and exploited by attackers, database vendors release patches so that sensitive information in databases remains protected. Once these patches are released, they should be applied immediately. If left unpatched, hackers can reverse engineer the patch, or can often find information online on how to exploit the unpatched vulnerabilities, leaving a DBMS even more vulnerable than before the patch was released.
Unnecessary DBMS Features Enabled: A DBMS has many unneeded features which are enabled by default and which should be turned off; otherwise they provide avenues for some of the most effective attacks on a database.
Misconfigurations: Unnecessary features are left on because of poor configuration at the database
level. Database misconfigurations provide weak access points for hackers to bypass authentication
methods and gain access to sensitive information. These flaws become the main targets for
criminals to execute certain types of attacks. Default settings may not have been properly re-set,
unencrypted files may be accessible to non-privileged users, and unpatched flaws may lead to
unauthorized access of sensitive data.
Buffer Overflow: When a program or process tries to store more data in a buffer than it was intended to hold, the situation is called a buffer overflow. Since a buffer contains only a finite amount of space, the extra data, which has to go somewhere, can overflow into adjacent locations, corrupting or overwriting the valid data held in those locations. For example, suppose a program is waiting for a user to enter his or her name. Rather than entering the name, an attacker enters an executable command that exceeds the size of the buffer. The command is usually something short.
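Python itself bounds-checks its containers, but the ctypes module exposes C-style fixed-size buffers, which makes it possible to sketch the length check that unsafe C code omits (the buffer size and input below are arbitrary):

```python
import ctypes

buf = ctypes.create_string_buffer(8)   # room for 7 bytes plus a NUL
overflow_caught = False
try:
    # In C, an unchecked strcpy of this input would spill past the
    # buffer into adjacent memory; ctypes checks the length instead.
    buf.value = b"A" * 64
except ValueError:
    overflow_caught = True

assert overflow_caught
assert buf.value == b""                # the buffer was left untouched
```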
Weak Audit Trails: A database audit policy ensures automated, timely, and proper recording of database transactions. Such a policy should be part of the database security considerations, since it gives all sensitive database transactions an automated record; the absence of such a record poses a serious risk to the organization's databases and may cause instability in operations. A weak database audit policy represents a serious organizational risk on many levels.
Denial of Service: In this type of attack all users (including legitimate users) are denied access to
data in the database. Denial of service (DOS) conditions may be created via many techniques -
many of which are related to the other mentioned vulnerabilities.
For example, DOS may be achieved by taking advantage of a database platform vulnerability to
crash a database server. Other common DOS techniques include data corruption, network flooding,
and server resource overload (memory, CPU, etc.).
Covert Channel: A covert channel is an indirect means of communication in a computer system
which can be used to weaken the system's security policy. A program running at a secret level is prevented from writing directly to an unclassified data item. There are, however, other ways of communicating information to unclassified programs.
For example, the secret program wants to know the amount of memory available. Even if the
unclassified program is prevented from directly observing the amount of free memory, it can do
so indirectly by making a request for a large amount of memory itself. Granting or denial of this
request will convey some information about free memory to the unclassified program. Such
indirect methods of communication are called covert channels.
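The memory-probing channel above can be simulated with a toy shared resource: the secret program signals one bit per round by either exhausting the resource or leaving it free, and the unclassified program infers the bit from whether its own request is granted. All names here are made up for illustration:

```python
class Resource:
    """A shared resource with limited capacity (e.g. free memory)."""
    def __init__(self, capacity: int = 1):
        self.capacity = capacity
        self.used = 0
    def request(self) -> bool:
        if self.used < self.capacity:
            self.used += 1
            return True
        return False
    def release(self) -> None:
        self.used -= 1

def send_bit(res: Resource, bit: int) -> None:
    if bit:                 # signal "1" by exhausting the resource
        res.request()

def receive_bit(res: Resource) -> int:
    granted = res.request()   # probe: was the resource free?
    if granted:
        res.release()
    return 0 if granted else 1

secret_bits = [1, 0, 1, 1]
received = []
for bit in secret_bits:       # one fresh resource per signaling round
    res = Resource(capacity=1)
    send_bit(res, bit)
    received.append(receive_bit(res))
assert received == secret_bits   # the bits crossed the security boundary
```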
Database Communication Protocol Vulnerabilities: A large number of security weaknesses have been identified in the database communication protocols of all database vendors. Fraudulent activities targeting these vulnerabilities can range from illegal data access to data exploitation, denial of service, and more.
Advanced Persistent Threats: This type of threat occurs when large, well-funded organizations make highly focused assaults on large stores of critical data. These attacks are relentless, defined, and perpetrated by skilled, motivated, organized, and well-funded groups. Organized criminals and state-sponsored cyber-professionals target databases where they can harvest data in bulk, focusing on large repositories of personal and financial information.
Once stolen, these data records can be sold on the information black market or used and
manipulated by other governments.
Insider Mistakes: Some attacks are not intentional, they just happen unknowingly, by mistake.
This type of attack can be called an “unintentional authorized user attack” or insider mistake. It
can occur in two situations. The first one is when an authorized user inadvertently accesses
sensitive data and mistakenly modifies or deletes the information.
The second can occur when a user makes an unauthorized copy of sensitive information for the purpose of backup or “taking work home.” Although it is not a malicious act, organizational security policies are violated, and the result is data residing on a storage device which, if compromised, could lead to an unintentional security breach. For example, a laptop containing sensitive information can be stolen.
Social Engineering: In this, users unknowingly provide information to an attacker via a web
interface like a compromised website or through an email response to what appears to be a
legitimate request. An example of this is the RSA breach, which occurred when legitimate users
unknowingly provided security keys to attackers as a result of sophisticated phishing techniques.
Weak Authentication: Weak authentication schemes allow attackers to assume the identity of
legitimate database users by stealing or otherwise obtaining login credentials. An attacker may
employ any number of strategies to obtain credentials.
1) Brute Force: In this strategy, attacker repeatedly enters username/password combinations until
he finds the correct one. The brute force process may involve simple guesswork or systematic enumeration of all possible username/password combinations. The attacker can often use automated programs to accelerate the brute force process.
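Systematic enumeration can be sketched in a few lines. The 3-letter lowercase password and the unsalted MD5 credential store below are deliberately unrealistic, chosen only so the search finishes instantly:

```python
import hashlib
import itertools
import string

# A made-up stored credential: MD5 of the password "cab".
stored = hashlib.md5(b"cab").hexdigest()

def brute_force(stored_hash: str,
                alphabet: str = string.ascii_lowercase,
                max_len: int = 3):
    """Try every candidate password up to max_len characters."""
    for length in range(1, max_len + 1):
        for candidate in itertools.product(alphabet, repeat=length):
            word = "".join(candidate).encode()
            if hashlib.md5(word).hexdigest() == stored_hash:
                return word.decode()
    return None

assert brute_force(stored) == "cab"
```

Real alphabets and password lengths make the search space astronomically larger, which is why rate limiting, lockouts, and slow password hashes are the usual countermeasures.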
2) Direct Credential Theft: An attacker may steal login credentials from the authorized user.
Backup Data Exposure: Backup database storage media are often completely unprotected from attack, as well as from natural calamities such as floods and earthquakes. As a result, several high-profile security breaches have involved theft of database backup tapes and hard disks.
Certifications for security professionals are a way of ensuring one's skills are up to date and of standing out from the crowd. Some of the popular certifications available include:
Certified Information Systems Security Professional (CISSP)
System Security Certified Practitioner (SSCP)
Certified Information Systems Auditor (CISA)
Global Information Assurance Certification (GIAC)
Web Links:
https://www.isc2.org/cissp/default.aspx
https://www.isc2.org/sscp/default.aspx
https://www.isaca.org/
https://www.giac.org/
Glossary
This section contains terms used by both hackers and security professionals. To truly understand computer security, you must be familiar with both worlds. General networking terms are also included in this glossary.
Audit: A check of a system’s security. This check usually includes a review of documents,
procedures, and system configurations.
Authentication: The process of proving that someone is who he or she claims to be.
Back Door: A hole in the security system deliberately left by the creator of the system.
Black Hat Hacker: Someone who uses hacking skills for malicious and illegal purposes.
Blowfish: A well-known symmetric key encryption algorithm that uses a variable-length key and was invented by Bruce Schneier.
Brute Force: To try to crack a password by simply trying every possible combination.
Caesar cipher: One of the oldest encryption algorithms. It uses a basic mono-alphabetic cipher.
Code: The source code for a program; or the act of programming, as in “to code an algorithm.”
Cookie: A small bit of data, often in plain text, that is stored by web browsers.
Computer Security: The process of preventing and detecting unauthorized use of your computer. It involves safeguarding your computer resources against intruders using them for malicious intent or their own gain (or even gaining access to them accidentally).
Cracker: One who breaks into a system in order to do something malicious, illegal, or harmful.
Synonymous with black hat hacker.
Demigod: A hacker with years of experience, who has a national or international reputation.
DDoS: Distributed Denial of Service attacks are denial of service attacks launched from multiple
source locations.
Denial of Service (DoS): An attack that prevents legitimate users from accessing a resource. This
is usually done by overloading the target system with more workload than it can handle.
DES: Data Encryption Standard, a block cipher that was developed in the 1970s. It uses a 56-bit
key on 64-bit blocks. It is no longer considered secure enough.
Encrypting File System: Also known as EFS, this is Microsoft’s file system that allows users to
encrypt individual files. It was first introduced in Windows 2000.
Encryption: The act of encrypting a message. Encryption usually involves altering a message so
that it cannot be read without the key and the decryption algorithm.
Ethical Hacker: One who hacks into systems to accomplish some goal that he feels is ethically
valid.
Firewall: A device or software that provides a barrier between your machine or network and the
rest of the world.
Gray Hat Hacker: A hacker who usually obeys the law but in some instances will cross the line
into black hat hacking.
Hacker: One who tries to learn about a system by examining it in detail by reverse-engineering.
Hash: An algorithm that takes variable-length input, produces fixed-length output, and is not reversible.
Honey Pot: A system or server designed to be very appealing to hackers, when in fact it is a trap
to catch them.
IP spoofing: Making packets seem to come from a different IP address than they really originated
from.
MAC Address: The physical address of a network card. It is a 6-byte hexadecimal number. The
first 3 bytes define the vendor.
Malware: Any software that has a malicious purpose, such as a virus or Trojan horse.
Multi-Alphabet Substitutions: Encryption methods that use more than one substitution alphabet.
Packet Filter Firewall: A firewall that scans incoming packets and either allows them to pass or
rejects them. It only examines the header, not the data, and does not consider the context of the
data communication.
Penetration testing: Assessing the security of a system by attempting to break into the system.
Penetration testing is the activity of most sneakers.
Proxy Server: A device that hides your internal IP addresses and presents a single IP address to
the outside world.
RSA: A public key encryption method developed in 1977 by three mathematicians: Ron Rivest,
Adi Shamir, and Leonard Adleman. The name RSA is derived from the first letter of each
mathematician’s last name.
RST cookie: A simple method for alleviating the danger of certain types of DoS attacks.
Script kiddy: A slang term for an unskilled person who purports to be a skilled hacker.
Sneaker: Someone who is attempting to compromise a system in order to assess its vulnerability.
Sniffer: A program that captures data as it travels across a network. Also called a packet sniffer.
Social Engineering: The use of persuasion on human users in order to gain information required
to access a system.
SPAP: Shiva Password Authentication Protocol. SPAP is a proprietary version of PAP. This
protocol basically adds encryption to PAP.
Spoofing: Pretending to be something else, as when a packet might spoof another return IP address
(as in the smurf attack) or when a website is spoofing a well-known e-commerce site.
Stack Tweaking: A complex method for protecting a system against DoS attacks. This method
involves reconfiguring the operating system to handle connections differently.
Stateful Packet Inspection: A type of firewall that not only examines packets but knows the
context within which the packet was sent.
Symmetric Key System: An encryption method where the same key is used to encrypt and decrypt
the message.
SYN Flood: Sending a stream of SYN packets (requests for connection) then never responding,
thus leaving the connection half open.
Trojan horse: Software that appears to have a valid and benign purpose but really has another,
nefarious purpose.
War-Dialing: Dialing phones waiting for a computer to pick up. War-dialing is usually done via
some automated system.
War-driving: Driving and scanning for wireless networks that can be compromised.
White Hat Hacker: A hacker who does not break the law, often synonymous with ethical hacker.
References