
College of Computing & Informatics

Department of Computer Science

COMPUTER SECURITY
(Module Code: CoSc-M4171 & Course Code: CoSc4171)

AY 2017-2018
TABLE OF CONTENTS

Chapter 1: Fundamentals of Computer Security

Chapter 2: Computer Security Attacks and Threats

Chapter 3: Cryptography and Encryption Techniques

Chapter 4: Access Control

Chapter 5: Network and Internet Security

Chapter 6: Database Security

Chapter 7: Security Administration and Evaluation

Glossary

References
CHAPTER ONE
FUNDAMENTALS OF COMPUTER SECURITY
1.1 What is Computer Security?
Computer security is the protection of the items you value, called the assets of a computer or
computer system. There are many types of assets, involving hardware, software, data, people,
processes, or combinations of these. To determine what to protect, we must first identify what has
value and to whom.
A computer device (including hardware, added components, and accessories) is certainly an asset.
Because most computer hardware is pretty useless without programs, the software is also an asset.
Software includes the operating system, utilities and device handlers; applications such as word
processing, media players or email handlers; and even programs that you may have written
yourself. Much hardware and software is off-the-shelf, meaning that it is commercially available
(not custom-made for your purpose) and that you can easily get a replacement. The thing that
makes your computer unique and important to you is its content: photos, tunes, papers, email
messages, projects, calendar information, e-books (with your annotations), contact information,
code you created, and the like. Thus, data items on a computer are assets, too. Unlike most
hardware and software, data can be hard—if not impossible—to recreate or replace. These assets
are all shown in Figure 1.2.

These three things—hardware, software, and data—contain or express things like the design for
your next new product, the photos from your recent vacation, the chapters of your new book, or
the genome sequence resulting from your recent research. All of these things represent intellectual
endeavor or property, and they have value that differs from one person or organization to another.
It is that value that makes them assets worthy of protection, and they are the elements we want to
protect. Other assets—such as access to data, quality of service, processes, human users, and
network connectivity—deserve protection, too; they are affected or enabled by the hardware,
software, and data. So in most cases, protecting hardware, software, and data covers these other
assets as well.
Computer systems—hardware, software, and data—have value and deserve security protection.
Values of Assets
After identifying the assets to protect, we next determine their value. We make value based
decisions frequently, even when we are not aware of them. For example, when you go for a swim
you can leave a bottle of water and a towel on the beach, but not your wallet or cell phone. The
difference relates to the value of the assets.
The value of an asset depends on the asset owner’s or user’s perspective, and it may be independent
of monetary cost, as shown in Figure 1.3. Your photo of your sister, worth only a few cents in
terms of paper and ink, may have high value to you and no value to your roommate. Other items’
value depends on replacement cost; some computer data are difficult or impossible to replace. For
example, that photo of you and your friends at a party may have cost you nothing, but it is
invaluable because there is no other copy. On the other hand, the DVD of your favorite film may
have cost a significant portion of your take-home pay, but you can buy another one if the DVD is
stolen or corrupted. Similarly, timing has bearing on asset value. For example, the value of the
plans for a company’s new product line is very high, especially to competitors. But once the new
product is released, the plans’ value drops dramatically.
Assets’ values are personal, time dependent, and often imprecise.
The Vulnerability–Threat–Control Paradigm
The goal of computer security is protecting valuable assets. To study different ways of protection,
we use a framework that describes how assets may be harmed and how to counter or mitigate that
harm.
A vulnerability is a weakness in the system, for example, in procedures, design, or
implementation that might be exploited to cause loss or harm. For instance, a particular system
may be vulnerable to unauthorized data manipulation because the system does not verify a user’s
identity before allowing data access.
A threat to a computing system is a set of circumstances that has the potential to cause loss or
harm. To see the difference between a threat and a vulnerability, consider the illustration in Figure
1.4. Here, a wall is holding water back. The water to the left of the wall is a threat to the man on
the right of the wall: The water could rise, overflowing onto the man, or it could stay beneath the
height of the wall, causing the wall to collapse. So the threat of harm is the potential for the man
to get wet, get hurt, or be drowned. For now, the wall is intact, so the threat to the man is unrealized.
A vulnerability is a weakness that could be exploited to cause harm.
A threat is a set of circumstances that could cause harm.
However, we can see a small crack in the wall—a vulnerability that threatens the man’s security.
If the water rises to or beyond the level of the crack, it will exploit the vulnerability and harm the
man.
There are many threats to a computer system, including human-initiated and computer-initiated
ones. We have all experienced the results of inadvertent human errors, hardware design flaws, and
software failures. But natural disasters are threats, too; they can bring a system down when the
computer room is flooded or the data center collapses from an earthquake, for example.
A human who exploits a vulnerability perpetrates an attack on the system. An attack can also be
launched by another system, as when one system sends an overwhelming flood of messages to
another, virtually shutting down the second system’s ability to function. Unfortunately, we have
seen this type of attack frequently, as denial-of-service attacks deluge servers with more messages
than they can handle.
How do we address these problems? We use a control or countermeasure as protection. That is, a
control is an action, device, procedure, or technique that removes or reduces a vulnerability. In
general, we can describe the relationship between threats, controls, and vulnerabilities in this way:
A threat is blocked by control of a vulnerability. Controls prevent threats from exercising
vulnerabilities.
Before we can protect assets, we need to know the kinds of harm we have to protect them against,
so now we explore threats to valuable assets.
1.2 The Challenges of Computer Security
Computer and network security is both fascinating and complex. Some of the reasons follow:
1. Security is not as simple as it might first appear to the novice. The requirements seem to
be straightforward; indeed, most of the major requirements for security services can be
given self-explanatory, one-word labels: confidentiality, authentication, nonrepudiation, or
integrity. But the mechanisms used to meet those requirements can be quite complex, and
understanding them may involve rather subtle reasoning.
2. In developing a particular security mechanism or algorithm, one must always consider
potential attacks on those security features. In many cases, successful attacks are designed
by looking at the problem in a completely different way, thereby exploiting an unexpected
weakness in the mechanism.
3. Because of point 2, the procedures used to provide particular services are often
counterintuitive. Typically, a security mechanism is complex, and it is not obvious from
the statement of a particular requirement that such elaborate measures are needed. It is only
when the various aspects of the threat are considered that elaborate security mechanisms
make sense.
4. Having designed various security mechanisms, it is necessary to decide where to use them.
This is true both in terms of physical placement (e.g., at what points in a network are certain
security mechanisms needed) and in a logical sense [e.g., at what layer or layers of an
architecture such as TCP/IP (Transmission Control Protocol/Internet Protocol) should
mechanisms be placed].
5. Security mechanisms typically involve more than a particular algorithm or protocol. They
also require that participants be in possession of some secret information (e.g., an
encryption key), which raises questions about the creation, distribution, and protection of
that secret information. There also may be a reliance on communications protocols whose
behavior may complicate the task of developing the security mechanism. For example, if
the proper functioning of the security mechanism requires setting time limits on the transit
time of a message from sender to receiver, then any protocol or network that introduces
variable, unpredictable delays may render such time limits meaningless.
6. Computer and network security is essentially a battle of wits between a perpetrator who
tries to find holes and the designer or administrator who tries to close them. The great
advantage that the attacker has is that he or she need only find a single weakness, while the
designer must find and eliminate all weaknesses to achieve perfect security.
7. There is a natural tendency on the part of users and system managers to perceive little
benefit from security investment until a security failure occurs.
8. Security requires regular, even constant, monitoring, and this is difficult in today’s short-
term, overloaded environment.
9. Security is still too often an afterthought to be incorporated into a system after the design
is complete rather than being an integral part of the design process.
10. Many users and even security administrators view strong security as an impediment to
efficient and user-friendly operation of an information system or use of information.
1.3 Security Attacks
A useful means of classifying security attacks, used both in X.800 and RFC 2828, is in terms of
passive attacks and active attacks. A passive attack attempts to learn or make use of information
from the system but does not affect system resources. An active attack attempts to alter system
resources or affect their operation.
Passive Attacks
Passive attacks are in the nature of eavesdropping on, or monitoring of, transmissions. The goal of
the opponent is to obtain information that is being transmitted. Two types of passive attacks are
the release of message contents and traffic analysis.
The release of message contents is easily understood. A telephone conversation, an electronic
mail message, and a transferred file may contain sensitive or confidential information. We would
like to prevent an opponent from learning the contents of these transmissions.
A second type of passive attack, traffic analysis, is subtler. Suppose that we had a way of masking
the contents of messages or other information traffic so that opponents, even if they captured the
message, could not extract the information from the message. The common technique for masking
contents is encryption. If we had encryption protection in place, an opponent might still be able to
observe the pattern of these messages. The opponent could determine the location and identity of
communicating hosts and could observe the frequency and length of messages being exchanged.
This information might be useful in guessing the nature of the communication that was taking
place.
Passive attacks are very difficult to detect, because they do not involve any alteration of the data.
Typically, the message traffic is sent and received in an apparently normal fashion, and neither the
sender nor receiver is aware that a third party has read the messages or observed the traffic pattern.
However, it is feasible to prevent the success of these attacks, usually by means of encryption.
Thus, the emphasis in dealing with passive attacks is on prevention rather than detection.
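To make the idea of masking message contents concrete, here is a minimal sketch (not part of the original module) using the third-party Python "cryptography" package's Fernet recipe for symmetric encryption. A passive attacker who captures the ciphertext learns nothing about the contents, although message lengths and timing remain observable, which is exactly what traffic analysis exploits.

# Minimal sketch: symmetric encryption masks message contents from an
# eavesdropper. Requires the third-party "cryptography" package
# (pip install cryptography); the message text is only an example.
from cryptography.fernet import Fernet

key = Fernet.generate_key()             # secret shared by sender and receiver
cipher = Fernet(key)

plaintext = b"Allow John Smith to read confidential file accounts"
ciphertext = cipher.encrypt(plaintext)  # what a passive attacker would capture

print("Seen on the wire:", ciphertext[:40], b"...")
print("Recovered by receiver:", cipher.decrypt(ciphertext).decode())
# Encryption hides what is said, but not how often or how much is said,
# so traffic analysis is still possible.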
Active Attacks
Active attacks involve some modification of the data stream or the creation of a false stream and
can be subdivided into four categories: masquerade, replay, modification of messages, and denial
of service.
A Masquerade takes place when one entity pretends to be a different entity. A masquerade attack
usually includes one of the other forms of active attack. For example, authentication sequences
can be captured and replayed after a valid authentication sequence has taken place, thus enabling
an authorized entity with few privileges to obtain extra privileges by impersonating an entity that
has those privileges.
Replay involves the passive capture of a data unit and its subsequent retransmission to produce an
unauthorized effect.
Modification of Messages simply means that some portion of a legitimate message is altered, or
that messages are delayed or reordered, to produce an unauthorized effect. For example, a
message meaning “Allow John Smith to read confidential file accounts” is modified to mean
“Allow Fred Brown to read confidential file accounts.”
The Denial of Service prevents or inhibits the normal use or management of communications
facilities. This attack may have a specific target; for example, an entity may suppress all messages
directed to a particular destination (e.g., the security audit service). Another form of service denial
is the disruption of an entire network, either by disabling the network or by overloading it with
messages so as to degrade performance.
Active attacks present the opposite characteristics of passive attacks. Whereas passive attacks are
difficult to detect, measures are available to prevent their success.
On the other hand, it is quite difficult to prevent active attacks absolutely because of the wide
variety of potential physical, software, and network vulnerabilities. Instead, the goal is to detect
active attacks and to recover from any disruption or delays caused by them. If the detection has a
deterrent effect, it may also contribute to prevention.

1.4 Threats
We can consider potential harm to assets in two ways: First, we can look at what bad things can
happen to assets, and second, we can look at who or what can cause or allow those bad things to
happen. These two perspectives enable us to determine how to protect assets.
Think for a moment about what makes your computer valuable to you. First, you use it as a tool
for sending and receiving email, searching the web, writing papers, and performing many other
tasks, and you expect it to be available for use when you want it. Without your computer these
tasks would be harder, if not impossible. Second, you rely heavily on your computer’s integrity.
When you write a paper and save it, you trust that the paper will reload exactly as you saved it.
Similarly, you expect that the photo a friend passes you on a flash drive will appear the same when
you load it into your computer as when you saw it on your friend’s computer. Finally, you expect
the “personal” aspect of a personal computer to stay personal, meaning you want it to protect your
confidentiality. For example, you want your email messages to be just between you and your listed
recipients; you don’t want them broadcast to other people. And when you write an essay, you
expect that no one can copy it without your permission.
These three aspects, confidentiality, integrity, and availability, make your computer valuable to
you. But viewed from another perspective, they are three possible ways to make it less valuable,
that is, to cause you harm. If someone steals your computer, scrambles data on your disk, or looks
at your private data files, the value of your computer has been diminished or your computer use
has been harmed. These characteristics are both basic security properties and the objects of security
threats.
We can define these three properties as follows.
 Confidentiality: the ability of a system to ensure that an asset is viewed only by authorized
parties.
 Integrity: the ability of a system to ensure that an asset is modified only by authorized
parties.
 Availability: the ability of a system to ensure that an asset can be used by any authorized
parties.
Taken together (and rearranged), the properties are called the C-I-A triad or the security triad. ISO
7498-2 [ISO89] adds to them two more properties that are desirable, particularly in communication
networks:
 Authentication: the ability of a system to confirm the identity of a sender
 Nonrepudiation or accountability: the ability of a system to confirm that a sender cannot
convincingly deny having sent something
What can happen to harm the confidentiality, integrity, or availability of computer assets? If a thief
steals your computer, you no longer have access, so you have lost availability; furthermore, if the
thief looks at the pictures or documents you have stored, your confidentiality is compromised. And
if the thief changes the content of your music files but then gives them back with your computer,
the integrity of your data has been harmed. You can envision many scenarios based around these
three properties.

Figure 1.4 Four Acts to Cause Security Harm


The C-I-A triad can be viewed from a different perspective: the nature of the harm caused to assets.
Harm can also be characterized by four acts: interception, interruption, modification, and
fabrication. These four acts are depicted in Figure 1.4. From this point of view, confidentiality can
suffer if someone intercepts data, availability is lost if someone or something interrupts a flow of
data or access to a computer, and integrity can fail if someone or something modifies data or
fabricates false data. Thinking of these four kinds of acts can help you determine what threats
might exist against the computers you are trying to protect.
To analyze harm, we next refine the C-I-A triad, looking more closely at each of its elements.
C-I-A triad: confidentiality, integrity, availability
Confidentiality
Some things obviously need confidentiality protection. For example, students’ grades, financial
transactions, medical records, and tax returns are sensitive. A proud student may run out of a
classroom screaming “I got an A!” but the student should be the one to choose whether to reveal
that grade to others. Other things, such as diplomatic and military secrets, companies’ marketing
and product development plans, and educators’ tests, also must be carefully controlled. Sometimes,
however, it is not so obvious that something is sensitive. For example, a military food order may
seem like innocuous information, but a sudden increase in the order could be a sign of incipient
engagement in conflict. Purchases of food, hourly changes in location, and access to books are not
things you would ordinarily consider confidential, but they can reveal something that someone
wants to be kept confidential.
The definition of confidentiality is straightforward: Only authorized people or systems can access
protected data. However, as we see in later chapters, ensuring confidentiality can be difficult. For
example, who determines which people or systems are authorized to access the current system?
By “accessing” data, do we mean that an authorized party can access a single bit? The whole
collection? Pieces of data out of context? Can someone who is authorized disclose data to other
parties? Sometimes there is even a question of who owns the data: If you visit a web page, do you
own the fact that you clicked on a link, or does the web page owner, the Internet provider, someone
else, or all of you?
In spite of these complicating examples, confidentiality is the security property we understand best
because its meaning is narrower than that of the other two. We also understand confidentiality well
because we can relate computing examples to those of preserving confidentiality in the real world.
Confidentiality relates most obviously to data, although we can think of the confidentiality of a
piece of hardware (a novel invention) or a person (the whereabouts of a wanted criminal). Here
are some properties that could mean a failure of data confidentiality:
 An unauthorized person accesses a data item.
 An unauthorized process or program accesses a data item.
 A person authorized to access certain data accesses other data not authorized (which is a
specialized version of “an unauthorized person accesses a data item”).
 An unauthorized person accesses an approximate data value (for example, not knowing
someone’s exact salary but knowing that the salary falls in a particular range or exceeds a
particular amount).
 An unauthorized person learns the existence of a piece of data (for example, knowing that a
company is developing a certain new product or that talks are underway about the merger of
two companies).
Notice the general pattern of these statements: A person, process, or program is (or is not)
authorized to access a data item in a particular way. We call the person, process, or program a
subject, the data item an object, the kind of access (such as read, write, or execute) an access mode,
and the authorization a policy, as shown in Figure 1.5. These four terms reappear throughout this
book because they are fundamental aspects of computer security.
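The subject/object/access-mode/policy vocabulary maps naturally onto a small access check. The following Python sketch is illustrative only (the names AccessPolicy and is_allowed are ours, not from the module); it authorizes specific (subject, object, mode) combinations and denies everything else by default.

# Minimal sketch of the subject / object / access-mode / policy terms.
# Class and method names are illustrative, not from the module.
class AccessPolicy:
    def __init__(self):
        self.rules = set()            # authorized (subject, object, mode) triples

    def allow(self, subject, obj, mode):
        self.rules.add((subject, obj, mode))

    def is_allowed(self, subject, obj, mode):
        # Default deny: any access not explicitly authorized is refused.
        return (subject, obj, mode) in self.rules

policy = AccessPolicy()
policy.allow("john", "accounts.txt", "read")

print(policy.is_allowed("john", "accounts.txt", "read"))    # True
print(policy.is_allowed("fred", "accounts.txt", "read"))    # False: unauthorized subject
print(policy.is_allowed("john", "accounts.txt", "write"))   # False: unauthorized mode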
One word that captures most aspects of confidentiality is view, although you should not take that
term literally. A failure of confidentiality does not necessarily mean that someone sees an object
and, in fact, it is virtually impossible to look at bits in any meaningful way (although you may look
at their representation as characters or pictures). The word view does connote another aspect of
confidentiality in computer security, through the association with viewing a movie or a painting in
a museum: look but do not touch. In computer security, confidentiality usually means obtaining
but not modifying. Modification is the subject of integrity, which we consider in the next section.

Figure 1.5 Access Control

Integrity
Examples of integrity failures are easy to find. A number of years ago a malicious macro in a Word
document inserted the word “not” after some random instances of the word “is;” you can imagine
the havoc that ensued. Because the document was generally syntactically correct, people did not
immediately detect the change. In another case, a model of the Pentium computer chip produced
an incorrect result in certain circumstances of floating-point arithmetic. Although the
circumstances of failure were rare, Intel decided to manufacture and replace the chips. Many of us
receive mail that is misaddressed because someone typed something wrong when transcribing
from a written list. A worse situation occurs when that inaccuracy is propagated to other mailing
lists such that we can never seem to correct the root of the problem. Other times we find that a
spreadsheet seems to be wrong, only to find that someone typed “space 123” in a cell, changing it
from a numeric value to text, so the spreadsheet program misused that cell in computation.
Suppose someone converted numeric data to Roman numerals: One could argue that IV is the same
as 4, but IV would not be useful in most applications, nor would it be obviously meaningful to
someone expecting 4 as an answer. These cases show some of the breadth of examples of integrity
failures. For example, if we say that we have preserved the integrity of an item, we may mean that
the item is:
 precise
 accurate
 unmodified
 modified only in acceptable ways
 modified only by authorized people
 modified only by authorized processes
 consistent
 internally consistent
 meaningful and usable

Integrity can also mean two or more of these properties. Welke and Mayfield recognize three
particular aspects of integrity—authorized actions, separation and protection of resources, and
error detection and correction. Integrity can be enforced in much the same way as can
confidentiality: by rigorous control of who or what can access which resources in what ways.
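One common way to detect unauthorized modification is a keyed hash (HMAC) over the data. The sketch below, using only the Python standard library, is an illustration of the general idea rather than a mechanism prescribed by the module; the key and messages are placeholders.

# Minimal sketch: detecting modification with an HMAC (keyed hash).
import hmac, hashlib

key = b"shared-secret-key"             # placeholder secret
message = b"Allow John Smith to read confidential file accounts"
tag = hmac.new(key, message, hashlib.sha256).hexdigest()   # stored or sent with the data

# Later, or at the receiver, recompute the tag over what was actually received.
received = b"Allow Fred Brown to read confidential file accounts"  # tampered copy
ok = hmac.compare_digest(tag, hmac.new(key, received, hashlib.sha256).hexdigest())
print("Integrity check passed:", ok)   # False: the message was modified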

Availability
A computer user’s worst nightmare: You turn on the switch and the computer does nothing. Your
data and programs are presumably still there, but you cannot get at them. Fortunately, few of us
experience that failure. Many of us do experience overload, however: access gets slower and
slower; the computer responds but not in a way we consider normal or acceptable.
Availability applies both to data and to services (that is, to information and to information
processing), and it is similarly complex. As with the notion of confidentiality, different people
expect availability to mean different things. For example, an object or service is thought to be
available if the following are true:
 It is present in a usable form.
 It has enough capacity to meet the service’s needs.
 It is making clear progress, and, if in wait mode, it has a bounded waiting time.
 The service is completed in an acceptable period of time.
We can construct an overall description of availability by combining these goals. Following are
some criteria to define availability; a short sketch after the list illustrates the bounded-waiting-time criterion.
 There is a timely response to our request.
 Resources are allocated fairly so that some requesters are not favored over others.
 Concurrency is controlled; that is, simultaneous access, deadlock management, and
exclusive access are supported as required.
 The service or system involved follows a philosophy of fault tolerance, whereby hardware
or software faults lead to graceful cessation of service or to workarounds rather than to
crashes and abrupt loss of information. (Cessation does mean end; whether it is graceful or
not, ultimately the system is unavailable. However, with fair warning of the system’s
stopping, the user may be able to move to another system and continue work.)
 The service or system can be used easily and in the way it was intended to be used. (This
is a characteristic of usability, but an unusable system may also cause an availability
failure.)
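As a small illustration of the bounded-waiting-time criterion, the sketch below (an illustrative example, not part of the module) checks whether a service answers within a fixed time and fails gracefully instead of hanging; the host name and port are placeholders.

# Minimal sketch: bound the waiting time for a service instead of blocking forever.
import socket

def service_reachable(host, port, timeout_seconds=2.0):
    try:
        # Give up after a bounded wait rather than hanging indefinitely.
        with socket.create_connection((host, port), timeout=timeout_seconds):
            return True
    except OSError:
        return False

print(service_reachable("example.org", 80))   # placeholder host and port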
In Figure 1.6 we depict some of the properties with which availability overlaps. Indeed, the security
community is just beginning to understand what availability implies and how to ensure it.
A person or system can do three basic things with a data item: view it, modify it, or use it. Thus,
viewing (confidentiality), modifying (integrity), and using (availability) are the basic modes of
access that computer security seeks to preserve.
A paradigm of computer security is access control: To implement a policy, computer security
controls all accesses by all subjects to all protected objects in all modes of access. A small,
centralized control of access is fundamental to preserving confidentiality and integrity, but it is not
clear that a single access control point can enforce availability. Indeed, experts on dependability
will note that single points of control can become single points of failure, making it easy for an
attacker to destroy availability by disabling the single control point. Much of computer security’s
past success has focused on confidentiality and integrity; there are models of confidentiality and
integrity.

Figure 1.6 Availability and related Aspects


We have just described the C-I-A triad and the three fundamental security properties it represents.
Our description of these properties was in the context of things that need protection. To motivate
your understanding we gave some examples of harm and threats to cause harm. Our next step is to
think about the nature of threats themselves.
Types of Threats
For some ideas of harm, look at Figure 1.7, taken from Willis Ware’s report [WAR70]. Although
it was written when computers were so big, so expensive, and so difficult to operate that only large
organizations like universities, major corporations, or government departments would have one,
Ware’s discussion is still instructive today. Ware was concerned primarily with the protection of
classified data, that is, preserving confidentiality. In the figure, he depicts humans such as
programmers and maintenance staff gaining access to data, as well as radiation by which data can
escape as signals. From the figure you can see some of the many kinds of threats to a computer
system.
One way to analyze harm is to consider the cause or source. We call a potential cause of harm a
threat. Harm can be caused by either nonhuman events or humans. Examples of nonhuman
threats include natural disasters like fires or floods; loss of electrical power; failure of a component
such as a communications cable, processor chip, or disk drive; or attack by a wild boar.

Figure 1.7 Computer [Network] Vulnerabilities (from [WAR70])


Human threats can be either benign (nonmalicious) or malicious. Nonmalicious kinds of harm
include someone’s accidentally spilling a soft drink on a laptop, unintentionally deleting text,
inadvertently sending an email message to the wrong person, and carelessly typing “12” instead
of “21” when entering a phone number or clicking “yes” instead of “no” to overwrite a file. These
inadvertent, human errors happen to most people; we just hope that the seriousness of harm is not
too great, or if it is, that we will not repeat the mistake.
Most computer security activity relates to malicious, human-caused harm: A malicious person
actually wants to cause harm, and so we often use the term attack for a malicious computer security
event. Malicious attacks can be random or directed. In a random attack the attacker wants to harm
any computer or user; such an attack is analogous to accosting the next pedestrian who walks down
the street. An example of a random attack is malicious code posted on a website that could be
visited by anybody.
In a directed attack, the attacker intends harm to specific computers, perhaps at one organization
(think of attacks against a political organization) or belonging to a specific individual (think of
trying to drain a specific person’s bank account, for example, by impersonation). Another class of
directed attack is against a particular product, such as any computer running a particular browser.
(We do not want to split hairs about whether such an attack is directed—at that one software
product—or random, against any user of that product; the point is not semantic perfection but
protecting against the attacks.) The range of possible directed attacks is practically unlimited.
Threats can be malicious or not.
Although the distinctions shown in Figure 1.8 seem clear-cut, sometimes the nature of an attack is
not obvious until the attack is well underway, or perhaps even ended. A normal hardware failure
can seem like a directed, malicious attack to deny access, and hackers often try to conceal their
activity to look like ordinary, authorized users. As computer security experts we need to anticipate
what bad things might happen, instead of waiting for the attack to happen or debating whether the
attack is intentional or accidental.

Figure 1.8 Kinds of Threats


Neither this note nor any checklist or method can show you all the kinds of harm that can happen
to computer assets. There are too many ways to interfere with your use of these assets. Two
retrospective lists of known vulnerabilities are of interest, however. The Common Vulnerabilities
and Exposures (CVE) list (see http://cve.mitre.org/) is a dictionary of publicly known security
vulnerabilities and exposures. CVE’s common identifiers enable data exchange between security
products and provide a baseline index point for evaluating coverage of security tools and services.
To measure the extent of harm, the Common Vulnerability Scoring System (CVSS) (see
http://nvd.nist.gov/cvss.cfm) provides a standard measurement system that allows accurate and
consistent scoring of vulnerability impact.
CHAPTER TWO
COMPUTER SECURITY ATTACKS AND THREATS
2.1 Introduction to Vulnerabilities, Threats and Attacks
When discussing network security, the three common terms used are as follows:
 Vulnerability—A weakness that is inherent in every network and device. This includes
routers, switches, desktops, servers, and even security devices themselves.
 Threats—The people eager, willing, and qualified to take advantage of each security
weakness, and they continually search for new exploits and weaknesses.
 Attacks—The threats use a variety of tools, scripts, and programs to launch attacks against
networks and network devices. Typically, the network devices under attack are the
endpoints, such as servers and desktops.
The sections that follow discuss vulnerabilities, threats, and attacks in further detail.
2.2 Vulnerabilities
Vulnerabilities in network security can be summed up as the “soft spots” that are present in every
network. The vulnerabilities are present in the network and individual devices that make up the
network.
Networks are typically plagued by one or all of three primary vulnerabilities or weaknesses:
 Technology weaknesses
 Configuration weaknesses
 Security policy weaknesses
The sections that follow examine each of these weaknesses in more detail.
Technological Weaknesses
Computer and network technologies have intrinsic security weaknesses. These include TCP/IP
protocol weaknesses, operating system weaknesses, and network equipment weaknesses. Table
2.1 describes these three weaknesses.
Table 2.1 Network Security Weaknesses

Configuration Weaknesses
Network administrators or network engineers need to learn what the configuration weaknesses are
and correctly configure their computing and network devices to compensate. Table 2.2 lists some
common configuration weaknesses.
Table 2.2 Configuration Weaknesses
Security Policy Weaknesses
Security policy weaknesses can create unforeseen security threats. A security policy can pose
risks to the network if users do not follow it. Table 2.3 lists some common security
policy weaknesses and how those weaknesses are exploited.
Table 2.3 Security Policy Weaknesses
2.3 Threats
There are four primary classes of threats to network security, as Figure 2.1 depicts. The list that
follows describes each class of threat in more detail.

Figure 2.1 Variety of Threats


 Unstructured threats: Unstructured threats consist of mostly inexperienced individuals
using easily available hacking tools such as shell scripts and password crackers. Even
unstructured threats that are only executed with the intent of testing and challenging a
hacker’s skills can still do serious damage to a company. For example, if an external
company website is hacked, the integrity of the company is damaged. Even if the external
website is separate from the internal information that sits behind a protective firewall, the
public does not know that. All the public knows is that the site is not a safe environment to
conduct business.
 Structured threats: Structured threats come from hackers who are more highly motivated
and technically competent. These people know system vulnerabilities and can understand
and develop exploit code and scripts. They understand, develop, and use sophisticated
hacking techniques to penetrate unsuspecting businesses. These groups are often involved
with the major fraud and theft cases reported to law enforcement agencies.
 External threats: External threats can arise from individuals or organizations working
outside of a company. They do not have authorized access to the computer systems or
network. They work their way into a network mainly from the Internet or dialup access
servers.
 Internal threats: Internal threats occur when someone has authorized access to the network
with either an account on a server or physical access to the network. According to the FBI,
internal access and misuse account for 60 percent to 80 percent of reported incidents.
As the types of threats, attacks, and exploits have evolved, various terms have been coined to
describe different groups of individuals. Some of the most common terms are as follows:
o Hacker: Hacker is a general term that has historically been used to describe a computer
programming expert. More recently, this term is commonly used in a negative way to
describe an individual who attempts to gain unauthorized access to network resources with
malicious intent.
o Cracker: Cracker is the term that is generally regarded as the more accurate word that is
used to describe an individual who attempts to gain unauthorized access to network
resources with malicious intent.
o Phreaker: A phreaker is an individual who manipulates the phone network to cause it to
perform a function that is normally not allowed. A common goal of phreaking is breaking
into the phone network, usually through a payphone, to make free long-distance calls.
o Spammer: A spammer is an individual who sends large numbers of unsolicited e-mail
messages. Spammers often use viruses to take control of home computers to use these
computers to send out their bulk messages.
o Phisher: A phisher uses e-mail or other means in an attempt to trick others into providing
sensitive information, such as credit card numbers or passwords. The phisher masquerades
as a trusted party that would have a legitimate need for the sensitive information.
o White hat: White hat is a term used to describe individuals who use their abilities to find
vulnerabilities in systems or networks and then report these vulnerabilities to the owners
of the system so that they can be fixed.
o Black hat: Black hat is another term for individuals who use their knowledge of computer
systems to break into systems or networks that they are not authorized to use.

2.4 Attacks
Four primary classes of attacks exist:
■ Reconnaissance
■ Access
■ Denial of Service
■ Worms, Viruses, and Trojan Horses

Reconnaissance
Reconnaissance is the unauthorized discovery and mapping of systems, services, or vulnerabilities
(see Figure 2.2). It is also known as information gathering and, in most cases, it precedes an actual
access or denial-of-service (DoS) attack. Reconnaissance is somewhat analogous to a thief casing
a neighborhood for vulnerable homes to break into, such as an unoccupied residence, easy-to-open
doors, or open windows.
Figure 2.2 Reconnaissance
Access
System access is the ability for an unauthorized intruder to gain access to a device for which the
intruder does not have an account or a password. Entering or accessing systems to which one does
not have authority to access usually involves running a hack, script, or tool that exploits a known
vulnerability of the system or application being attacked.
Denial of Service (DoS)
Denial of service implies that an attacker disables or corrupts networks, systems, or services with
the intent to deny services to intended users. DoS attacks involve either crashing the system or
slowing it down to the point that it is unusable. But DoS can also be as simple as deleting or
corrupting information. In most cases, performing the attack simply involves running a hack or
script. The attacker does not need prior access to the target, because a way to reach it is usually
all that is required. For these reasons, DoS attacks are the most feared.
Worms, Viruses, and Trojan Horses
Malicious software is inserted onto a host to damage a system; corrupt a system; replicate itself;
or deny services or access to networks, systems or services. They can also allow sensitive
information to be copied or echoed to other systems.
Trojan horses can be used to ask the user to enter sensitive information in a commonly trusted
screen. For example, an attacker might log in to a Windows box and run a program that looks like
the true Windows logon screen, prompting a user to type his username and password. The program
would then send the information to the attacker and then give the Windows error for bad password.
The user would then log out, and the correct Windows logon screen would appear; the user is none
the wiser that his password has just been stolen.
Even worse, the nature of all these threats is changing, from the relatively simple viruses of the
1980s to the more complex and damaging viruses, DoS attacks, and hacking tools in recent years.
Today, these hacking tools are powerful and widespread, with the new dangers of self-spreading
blended worms such as Slammer and Blaster and network DoS attacks. Also, the old days of
attacks that take days or weeks to spread are over. Threats now spread worldwide in a matter of
minutes. The Slammer worm of January 2003 spread around the world in less than 10 minutes.
The next generations of attacks are expected to spread in just seconds. These worms and viruses
could do more than just wreak havoc by overloading network resources with the amount of traffic
they generate, they could also be used to deploy damaging payloads that steal vital information or
erase hard drives. Also, there is a strong concern that the threats of tomorrow will be directed at
the very infrastructure of the Internet.
Attack Examples
Several types of attacks are used today, and this section looks at a representative sample in more
detail.

Reconnaissance Attacks
Reconnaissance attacks can consist of the following:
■ Packet Sniffers
■ Port Scans
■ Ping Sweeps
■ Internet Information Queries
A malicious intruder typically ping sweeps the target network to determine which IP addresses are
alive. After this, the intruder uses a port scanner, as shown in Figure 2.2, to determine what network
services or ports are active on the live IP addresses. From this information, the intruder queries the
ports to determine the application type and version, and the type and version of operating system
running on the target host. Based on this information, the intruder can determine whether a possible
vulnerability exists that can be exploited.
Figure 2.2 NMAP
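For a sense of what a port scanner does, the following bare-bones TCP connect scan (an illustrative sketch, not the module's tool; real scanners such as Nmap are far more capable) tries each port and records the ones that accept a connection. It should only ever be pointed at hosts you are authorized to test; the host and port range shown are placeholders.

# Minimal sketch of a TCP "connect" scan. Use only against hosts you are
# authorized to test; host and port range below are placeholders.
import socket

def scan(host, ports, timeout_seconds=0.5):
    open_ports = []
    for port in ports:
        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        sock.settimeout(timeout_seconds)
        try:
            if sock.connect_ex((host, port)) == 0:   # 0 means the connection succeeded
                open_ports.append(port)
        finally:
            sock.close()
    return open_ports

print(scan("127.0.0.1", range(20, 1025)))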
Using, for example, the Nslookup and Whois software utilities (see Figure 2.3), an attacker can
easily determine the IP address space assigned to a given corporation or entity. The ping command
tells the attacker what IP addresses are alive. Network snooping and packet sniffing are common
terms for eavesdropping. Eavesdropping is listening in to a conversation, spying, prying, or
snooping. The information gathered by eavesdropping can be used to mount other attacks against
the network.

Figure 2.3 ARIN Whois


An example of data susceptible to eavesdropping is SNMP Version 1 community strings, which
are sent in clear text. An intruder could eavesdrop on SNMP queries and gather valuable data on
network equipment configuration. Another example is the capture of usernames and passwords as
they cross a network.
Types of Eavesdropping
A common method for eavesdropping on communications is to capture TCP/IP or other protocol
packets and decode the contents using a protocol analyzer or similar utility, as shown in Fig 2.4.

Figure 2.4 Protocol Analyzer


Two common uses of eavesdropping are as follows:
■ Information Gathering: Network intruders can identify usernames, passwords, or information
carried in the packet such as credit card numbers or sensitive personal information.
■ Information Theft: Network eavesdropping can lead to information theft. The theft can occur
as data is transmitted over the internal or external network. The network intruder can also steal
data from networked computers by gaining unauthorized access. Examples include breaking into
or eavesdropping on financial institutions and obtaining credit card numbers. Another example is
using a computer to crack a password file.
Tools Used to Perform Eavesdropping
The following tools are used for eavesdropping:
■ Network or protocol analyzers
■ Packet capturing utilities on networked computers
Methods to Counteract Eavesdropping
Three of the most effective methods for counteracting eavesdropping are as follows:
■ Implementing and enforcing a policy directive that forbids the use of protocols with known
susceptibilities to eavesdropping
■ Using encryption that meets the data security needs of the organization without imposing an
excessive burden on the system resources or the users
■ Using switched networks
Encrypted Data for Protection against Reconnaissance Attacks
Encryption provides protection for data susceptible to eavesdropping attacks, password crackers,
or manipulation. Some benefits of data encryption are as follows:
■ Almost every company has transactions, which, if viewed by an eavesdropper, could have
negative consequences. Encryption ensures that when sensitive data passes over a medium
susceptible to eavesdropping, it cannot be altered or observed.
■ Decryption is necessary when the data reaches the router or other termination device on the far-
receiving LAN where the destination host resides.
■ By encrypting after the UDP or TCP headers, so that only the IP payload data is encrypted, Cisco
IOS network-layer encryption allows all intermediate routers and switches to forward the traffic
as they would any other IP packets. Payload-only encryption allows flow switching and all access
list features to work with the encrypted traffic just as they would with plain text traffic, thereby
preserving desired quality of service (QoS) for all data.
Most encryption algorithms can be broken and the information can be revealed if the attacker has
enough time, desire, and resources. A realistic goal of encryption is to make obtaining the
information too work-intensive to be worth it to the attacker.

2.5 Access Attacks


Access attacks exploit known vulnerabilities in authentication services, FTP services, and web
services to gain entry to web accounts, confidential databases, and other sensitive information.
Access attacks can consist of the following:
■ Password Attacks
■ Trust Exploitation
■ Port Redirection
■ Man-in-the-Middle Attacks
■ Social Engineering
■ Phishing
Password Attacks
Password attacks can be implemented using several methods, including brute-force attacks, Trojan
horse programs, IP spoofing, and packet sniffers. Although packet sniffers and IP spoofing can
yield user accounts and passwords, password attacks usually refer to repeated attempts to identify
a user account, password, or both (see Figure 2.5 for an illustration of an attempt to attack using
the administrator’s profile). These repeated attempts are called brute-force attacks.
Figure 2.5 Password Attack Example
Often a brute-force attack is performed using a program that runs across the network and attempts
to log in to a shared resource, such as a server. When an attacker gains access to a resource, he has
the same access rights as the user whose account has been compromised. If this account has
sufficient privileges, the attacker can create a back door for future access, without concern for any
status and password changes to the compromised user account. In fact, not only would the attacker
have the same rights as the compromised user, he could also attempt privilege escalation.
The following are the two methods for computing passwords:
■ Dictionary cracking: The hashes of all the words in a dictionary file are computed and compared
against the users' password hashes. This method is extremely fast and finds simple passwords.
■ Brute-force computation: This method uses a particular character set, such as A to Z, or A to Z
plus 0 to 9, and computes the hash for every possible password made up of those characters. It
always computes the password if that password is made up of the character set you have selected
to test. The downside is that time is required for completion of this type of attack.
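The difference between the two methods can be seen in a few lines of code. The sketch below (illustrative only; the wordlist path and the use of plain SHA-256 are assumptions made for brevity) performs dictionary cracking against an unsalted hash, which is why simple passwords fall almost instantly; real systems should store salted, deliberately slow hashes such as bcrypt or scrypt.

# Minimal sketch of dictionary cracking against an unsalted SHA-256 hash.
# The wordlist path is a placeholder; for authorized testing and study only.
import hashlib

def dictionary_crack(target_hash, wordlist_path="wordlist.txt"):
    with open(wordlist_path, encoding="utf-8", errors="ignore") as f:
        for line in f:
            candidate = line.strip()
            if hashlib.sha256(candidate.encode()).hexdigest() == target_hash:
                return candidate
    return None   # not in the dictionary; brute force over a character set would be next

stolen = hashlib.sha256(b"letmein").hexdigest()   # pretend this hash was captured
print(dictionary_crack(stolen))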
Trust Exploitation

Figure 2.6 Trust Exploitation


Although it is more of a technique than a hack itself, trust exploitation, as shown in Figure 2.6
refers to an attack in which an individual takes advantage of a trust relationship within a network.
The classic example is a perimeter network connection from a corporation. These network
segments often house Domain Name System (DNS), SMTP, and HTTP servers. Because all these
servers reside on the same segment, the compromise of one system can lead to the compromise of
other systems because these systems usually trust other systems attached to the same network.
Another example is a system on the outside of a firewall that has a trust relationship with a system
on the inside of a firewall. When the outside system is compromised, it can take advantage of that
trust relationship to attack the inside network. Another form of an access attack involves privilege
escalation. Privilege escalation occurs when a user obtains privileges or rights to objects that were
not assigned to the user by an administrator. Objects can be files, commands, or other components
on a network device. The intent is to gain access to information or execute unauthorized
procedures. Attackers use this information to gain administrative privileges to a system or device
and then use those privileges to install sniffers, create backdoor accounts, or delete log files.
Trust exploitation-based attacks can be mitigated through tight constraints on trust levels within a
network. Systems on the outside of a firewall should never be absolutely trusted by systems on the
inside of a firewall. Such trust should be limited to specific protocols and should be authenticated
by something other than an IP address where possible.
Port Redirection

Figure 2.7 Port Redirection


Port redirection attacks, as shown in Figure 2.7, are a type of trust exploitation attack that uses a
compromised host to pass traffic through a firewall that would otherwise be dropped. Consider a
firewall with three interfaces and a host on each interface. The host on the outside can reach the
host on the public services segment, but not the host on the inside. This publicly accessible segment
is commonly referred to as a demilitarized zone (DMZ). The host on the public services segment
can reach the host on both the outside and the inside.
If hackers were able to compromise the public services segment host, they could install software
to redirect traffic from the outside host directly to the inside host. Although neither communication
violates the rules implemented in the firewall, the outside host has now achieved connectivity to
the inside host through the port redirection process on the public services host. An example of an
application that can provide this type of access is Netcat.
Port redirection can be mitigated primarily through the use of proper trust models, which are
network specific (as mentioned earlier). Assuming a system under attack, a host-based IDS can
help detect a hacker and prevent installation of such utilities on a host.
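To make the mechanism concrete, the toy relay below (a sketch for lab study only; host names, ports, and function names are ours, and tools such as Netcat do the same job far more conveniently) shows how a compromised public-services host could forward an outside connection to an inside host, which is exactly the traffic path the firewall was meant to prevent.

# Toy single-connection TCP relay illustrating port redirection.
# Hosts and ports are placeholders; for authorized lab use only.
import socket, threading

def pipe(src, dst):
    while True:
        data = src.recv(4096)
        if not data:
            break
        dst.sendall(data)

def relay(listen_port, inside_host, inside_port):
    listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    listener.bind(("0.0.0.0", listen_port))
    listener.listen(1)
    outside, _ = listener.accept()                        # connection arriving from outside
    inside = socket.create_connection((inside_host, inside_port))
    threading.Thread(target=pipe, args=(outside, inside), daemon=True).start()
    pipe(inside, outside)                                 # forward replies back out

# relay(8080, "10.0.0.5", 80)   # hypothetical invocation on the compromised DMZ host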
Man-in-the-Middle Attacks
A man-in-the-middle attack requires that the hacker have access to network packets that come
across a network. An example could be someone who is working for an Internet service provider
(ISP) and has access to all network packets transferred between the ISP network and any other
network.
Such attacks are often implemented using network packet sniffers and routing and transport
protocols. The possible uses of such attacks are theft of information, hijacking of an ongoing
session to gain access to private network resources, traffic analysis to derive information about a
network and its users, denial of service, corruption of transmitted data, and introduction of new
information into network sessions.
Man-in-the-middle attack mitigation is achieved by encrypting traffic in an IPsec tunnel, which
would allow the hacker to see only cipher text.
Social Engineering
The easiest hack (social engineering) involves no computer skill at all. If an intruder can trick a
member of an organization into giving over valuable information, such as the locations of files
and servers, and passwords, the process of hacking is made immeasurably easier.
Perhaps the simplest, but still effective, attack is tricking a user into thinking one is an
administrator and requesting a password for various purposes. Users of Internet systems frequently
receive messages that request password or credit card information to “set up their account” or
“reactivate settings.” Users of these systems must be warned early and frequently not to divulge
sensitive information, passwords or otherwise, to people claiming to be administrators. In reality,
administrators of computer systems rarely, if ever, need to know the user’s password to perform
administrative tasks.
Phishing
Phishing is a type of social-engineering attack that involves using e-mail or other types of
messages in an attempt to trick others into providing sensitive information, such as credit card
numbers or passwords. The phisher masquerades as a trusted party that has a seemingly legitimate
need for the sensitive information. Frequent phishing scams involve sending out spam e-mails that
appear to be from common online banking or auction sites. These e-mails contain hyperlinks that
appear to be legitimate but actually cause users to visit a phony site set up by the phisher to capture
their information. The site appears to belong to the party that was faked in the e-mail, and when
users enter their information it is recorded for the phisher to use.
Denial-of-Service (DoS) Attacks
Certainly the most publicized form of attack, DoS attacks are also among the most difficult to
completely eliminate. Even within the hacker community, DoS attacks are regarded as trivial and
considered bad form because they require so little effort to execute. Still, because of their ease of
implementation and potentially significant damage, DoS attacks deserve special attention from
security administrators. If you are interested in learning more about DoS attacks, researching the
methods employed by some of the better-known attacks can be useful. DoS attacks take many
forms. Ultimately, they prevent authorized people from using a service by using up system
resources, as shown in Figure 2.8.

Figure 2.8 Denial of Service

The following are some examples of common DoS threats:


■ Ping of Death: This attack modifies the IP portion of the header, indicating that there is more
data in the packet than there actually is, causing the receiving system to crash, as shown in Fig 2.9.
Figure 2.9 Ping of Death
■ SYN Flood Attack: This attack randomly opens up many TCP ports, tying up the network
equipment or computer with so many bogus requests that sessions are thereby denied to others.
This attack is accomplished with protocol analyzers or other programs.
The SYN flood attack sends TCP connection requests faster than a machine can process them; a short simulation sketch after this list of DoS threats illustrates the effect.
The SYN flood attack follows these steps:
 An attacker creates a random source address for each packet.
 The SYN flag set in each packet is a request to open a new connection to the server
from the spoofed IP address.
 The victim responds to the spoofed IP address and then waits for a confirmation that never
arrives (typically for about three minutes).
 The victim’s connection table fills up waiting for replies.
 After the table fills up, all new connections are ignored.
 Legitimate users are ignored, too, and cannot access the server.
 When the attacker stops flooding the server, it usually returns to a normal state (SYN
floods rarely crash servers).
Newer operating systems manage resources better, making it more difficult to overflow tables, but
still are vulnerable. The SYN flood can be used as part of other attacks, such as disabling one side
of a connection in TCP hijacking, or by preventing authentication or logging between servers.
■ Packet fragmentation and reassembly: This attack exploits a buffer–overrun bug in hosts or
internetworking equipment.
■ E-mail bombs: Programs can send bulk e-mails to individuals, lists, or domains, monopolizing
e-mail services.
■ CPU hogging: These attacks constitute programs such as Trojan horses or viruses that tie up
CPU cycles, memory, or other resources.
■ Malicious applets: These attacks are Java, JavaScript, or ActiveX programs that act as Trojan
horses or viruses to cause destruction or tie up computer resources.
■ Misconfiguring routers: Misconfiguring routers to reroute traffic can disable web traffic.
■ The chargen attack: This attack establishes a connection between UDP services, producing a
high character output. The host chargen service is connected to the echo service on the same or
different systems, causing congestion on the network with echoed chargen traffic.
■ Out-of-band attacks such as WinNuke: These attacks send out-of-band data to port 139 on
Windows 95 or Windows NT machines. The attacker needs the victim’s IP address to launch this
attack, as shown in Figure 2.10.

Figure 2.10 WinNuke


 DoS: DoS can occur accidentally because of misconfigurations or misuse by legitimate
users or system administrators.
 Land.c: This program sends a TCP SYN packet that specifies the target host address as
both source and destination. The program also uses the same port (such as 113 or 139) on
the target host as both source and destination, causing the target system to stop functioning.
 Teardrop.c: In this attack, the fragmentation process of the IP is implemented in such a
way that reassembly problems can cause machines to crash.
 Targa.c: This attack is a multiplatform DoS attack that integrates bonk, jolt, land, nestea,
netear, syndrop, teardrop, and WinNuke all into one exploit.
Masquerade/IP Spoofing Attacks
With a masquerade attack, the network intruder can manipulate TCP/IP packets by IP spoofing,
falsifying the source IP address, thereby appearing to be another user. The intruder assumes the
identity of a valid user and gains that user’s access privileges by IP spoofing. IP spoofing occurs
when intruders create IP data packets with falsified source addresses. During an IP spoofing attack,
an attacker outside the network pretends to be a trusted computer. The attacker may either use an
IP address that is within the range of IP addresses for the network or use an authorized external IP
address that is trusted and provides access to specified resources on the network.
Normally, an IP spoofing attack is limited to the injection of data or commands into an existing
stream of data passed between a client and server application or a peer-to-peer network connection.
To enable bidirectional communication, the attacker must change all routing tables to point to the
spoofed IP address. Another approach the attacker could take is simply not to worry about
receiving any response from the applications.
If attackers manage to change the routing tables, they can receive all the network packets that are
addressed to the spoofed address, and reply just as any trusted user can. Like packet sniffers, IP
spoofing is not restricted to people who are external to the network.
Some tools used to perform IP spoofing attacks are as follows:
■ Protocol analyzers, also called password sniffers
■ Sequence number modification
■ Scanning tools that probe TCP ports for specific services, network or system architecture,
and the operating system
After obtaining information through scanning tools, the intruder looks for vulnerabilities
associated with those entities.
Distributed Denial-of-Service Attacks
Distributed denial-of-service (DDoS) attacks are designed to saturate network links with
spurious data. This data can overwhelm an Internet link, causing legitimate traffic to be dropped.
DDoS uses attack methods similar to standard DoS attacks but operates on a much larger scale.
Typically hundreds or thousands of attack points attempt to overwhelm a target, as shown below.

Figure 2.11 DDoS Attack


Examples of DDoS attacks include the following:
 Smurf
 Tribe Flood Network (TFN)
 Stacheldraht
The sections that follow describe each of these DDoS attacks in more detail.
Smurf Attacks
The Smurf attack starts with a perpetrator sending a large number of spoofed ICMP echo, or ping,
requests to broadcast addresses, hoping that these packets will be magnified and sent to the spoofed
addresses, as shown in Figure 2.12. If the routing device delivering traffic to those broadcast
addresses performs the Layer 3 broadcast-to-Layer 2 broadcast function, most hosts on that IP
network will each reply to the ICMP echo request with an ICMP echo reply, multiplying the traffic
by the number of hosts responding. On a multi-access broadcast network, there could potentially
be hundreds of machines replying to each echo packet.

Figure 2.12 Smurf Attack


Assume the network has 100 hosts and that the attacker has a T1 link. The attacker sends a 768-
kbps stream of ICMP echo, or ping packets, with a spoofed source address of the victim, to the
broadcast address of the “bounce site.” These ping packets hit the bounce site broadcast network
of 100 hosts, and each takes the packet and responds to it, creating 100 outbound ping replies. A
total of 76.8 Mbps of bandwidth is used outbound from the bounce site after the traffic is
multiplied. This is then sent to the victim, or the spoofed source of the originating packets.
Turning off directed broadcast capability in the network infrastructure prevents the network from
being used as a bounce site.
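
To make the amplification arithmetic above concrete, the short Python sketch below recomputes the traffic multiplication for the hypothetical 100-host bounce network; the numbers are the ones assumed in the example, not measurements.

# Rough amplification estimate for the Smurf scenario described above
# (hypothetical numbers taken from the example, not real measurements).
attacker_stream_kbps = 768        # spoofed ICMP echo traffic sent by the attacker
responding_hosts = 100            # hosts on the bounce network that reply

victim_stream_kbps = attacker_stream_kbps * responding_hosts
print(victim_stream_kbps / 1000, "Mbps")   # 76.8 Mbps directed at the victim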
Tribe Flood Network (TFN)
Tribe Flood Network (TFN) and Tribe Flood Network 2000 (TFN2K) are distributed tools used to
launch coordinated DoS attacks from many sources against one or more targets. A TFN attack can
generate packets with spoofed source IP addresses. To carry out a DoS attack using a TFN network,
an intruder instructs a master to send attack instructions to a list of TFN servers or daemons.
The daemons then generate the specified type of DoS attack against one or more target IP
addresses. Source IP addresses and source ports can be randomized, and packet sizes can be
altered. Use of the TFN master requires an intruder-supplied list of IP addresses for the daemons.
Stacheldraht Attack
Stacheldraht, German for “barbed wire,” combines features of several DoS attacks, including TFN.
It also adds features such as encryption of communication between the attacker and Stacheldraht
masters and automated update of the agents. There is an initial mass-intrusion phase, in which
automated tools are used to remotely root-compromise large numbers of systems to be used in the
attack. This is followed by a DoS attack phase, in which these compromised systems are used to
attack one or more sites. Figure 2.13 illustrates a Stacheldraht attack.

Figure 2.13 Stacheldraht Attack


Malicious Code
The primary vulnerabilities for end-user workstations are worm, virus, and Trojan horse attacks.
A worm executes arbitrary code and installs copies of itself in the infected computer’s memory,
which infects other hosts. A virus is malicious software that is attached to another program to
execute a particular unwanted function on a user’s workstation. A Trojan horse differs only in that
the entire application was written to look like something else, when in fact it is an attack tool.
Examples of attack types include the following:
■ Trojan horse—An application written to look like something else that in fact is an attack tool
■ Worm—An application that executes arbitrary code and installs copies of itself in the memory
of the infected computer, which then infects other hosts
■ Virus—Malicious software that is attached to another program to execute a particular unwanted
function on the user workstation
Worms
The anatomy of a worm attack is as follows:
1. The enabling vulnerability: A worm installs itself using an exploit vector on a vulnerable
system.
2. Propagation mechanism: After gaining access to devices, a worm replicates and selects
new targets.
3. Payload: After the device is infected with a worm, the attacker has access to the host—
often as a privileged user. Attackers could use a local exploit to escalate their privilege level
to administrator.
Typically, worms are self-contained programs that attack a system and try to exploit a specific
vulnerability in the target. Upon successful exploitation of the vulnerability, the worm copies its
program from the attacking host to the newly exploited system to begin the cycle again. A virus
normally requires a vector to carry the virus code from one system to another. The vector can be a
word processing document, an e-mail message, or an executable program. The key element that
distinguishes a computer worm from a computer virus is that human interaction is required to
facilitate the spread of a virus.
Worm attack mitigation, as shown in Figure 2.14, requires diligence on the part of system and
network administration staff. Coordination between system administration, network engineering,
and security operations personnel is critical in responding effectively to a worm incident. The
following are the recommended steps for worm attack mitigation:
Step 1. Containment
Step 2. Inoculation
Step 3. Quarantine
Step 4. Treatment
Figure 2.14 Worm Attack Propagation
Viruses and Trojan Horses
A virus is malicious software that is attached to another program to execute a particular unwanted
function on a user’s workstation. An example of a virus is a program that is attached to
command.com (the primary interpreter for Windows systems) that deletes certain files and infects
any other versions of command.com that it can find.
A Trojan horse differs only in that the entire application was written to look like something else,
when in fact it is an attack tool. An example of a Trojan horse is a software application that runs a
simple game on the user’s workstation. While the user is occupied with the game, the Trojan horse
mails a copy of itself to every user in the user’s address book. The other users receive the game
and then play it, thus spreading the Trojan horse.

Figure 2.15 Virus and Trojan Horse Attack Mitigation


These kinds of applications can be contained through the effective use of antivirus software at the
user level and potentially at the network level, as depicted by Figure 2.15. Antivirus software can
detect most viruses and many Trojan horse applications and prevent them from spreading in the
network. Keeping current with the latest developments in these sorts of attacks can also lead to a
more effective posture against these attacks. As new virus or Trojan applications are released,
enterprises need to keep current with the latest antivirus software and application versions.
CHAPTER THREE
CRYPTOGRAPHY AND ENCRYPTION TECHNIQUES
3.1 Introduction
Cryptography is a branch of mathematics based on the transformation of data. It provides an
important tool for protecting information and is used in many aspects of computer security. For
example, cryptography can help provide data confidentiality, integrity, electronic signatures, and
advanced user authentication. Although modern cryptography relies upon advanced mathematics,
users can reap its benefits without understanding its mathematical underpinnings.
Cryptography is traditionally associated only with keeping data secret. However, modern
cryptography can be used to provide many security services, such as electronic signatures and
ensuring that data has not been modified.
Cryptography relies upon two basic components: an algorithm (or cryptographic methodology)
and a key. In modern cryptographic systems, algorithms are complex mathematical formulae and
keys are strings of bits. For two parties to communicate, they must use the same algorithm (or
algorithms that are designed to work together). In some cases, they must also use the same key.
Many cryptographic keys must be kept secret; sometimes algorithms are also kept secret. There
are two basic types of cryptography: secret key systems (also called symmetric systems) and public
key systems (also called asymmetric systems).
Symmetric encryption, also referred to as conventional encryption or single-key encryption, was
the only type of encryption in use prior to the development of public key encryption in the 1970s.
It remains by far the more widely used of the two types of encryption. In this chapter, we begin
with a look at a general model for the symmetric encryption process; this will enable us to
understand the context within which the algorithms are used.
Before beginning, we define some terms. An original message is known as the plaintext, while
the coded message is called the cipher text. The process of converting from plaintext to cipher
text is known as enciphering or encryption; restoring the plaintext from the cipher text is
deciphering or decryption. The many schemes used for encryption constitute the area of study
known as cryptography. Such a scheme is known as a cryptographic system or a cipher.
Techniques used for deciphering a message without any knowledge of the enciphering details fall
into the area of cryptanalysis. Cryptanalysis is what the layperson calls “breaking the code.” The
areas of cryptography and cryptanalysis together are called cryptology.

3.2 Private (Secret) Key Encryption


In secret key cryptography, two (or more) parties share the same key, and that key is used to
encrypt and decrypt data. As the name implies, secret key cryptography relies on keeping the key
secret. If the key is compromised, the security offered by cryptography is severely reduced or
eliminated. Secret key cryptography assumes that the parties who share a key rely upon each other
not to disclose the key and protect it against modification.
Secret key cryptography has been in use for centuries.
Early forms merely transposed the written characters to hide the message.
The best known secret key system is the Data Encryption Standard (DES), published by NIST as
Federal Information Processing Standard (FIPS) 46-2. Although the adequacy of DES has been
questioned over the years, chiefly because of its relatively short 56-bit key, it became the most
widely accepted, publicly available cryptographic system. The American National Standards
Institute (ANSI) has adopted DES as the basis for encryption, integrity, access control, and key
management standards.
The following figure shows the comparison between Symmetric (private key) cryptography and
Asymmetric (public key) cryptography.

Figure 3.1 Compares some of the distinct features of secret and public key systems

3.2.1 Symmetric Cipher Model


A symmetric encryption scheme has five ingredients. (See Figure 3.1):
 Plaintext: This is the original intelligible message or data that is fed into the algorithm as input.
 Encryption algorithm: The encryption algorithm performs various substitutions and
transformations on the plaintext.
 Secret key: The secret key is also input to the encryption algorithm. The key is a value
independent of the plaintext and of the algorithm. The algorithm will produce a different output
depending on the specific key being used at the time. The exact substitutions and
transformations performed by the algorithm depend on the key.
 Cipher text: This is the scrambled message produced as output. It depends on the plaintext and
the secret key. For a given message, two different keys will produce two different cipher texts.
The cipher text is an apparently random stream of data and, as it stands, is unintelligible.
 Decryption algorithm: This is essentially the encryption algorithm run in reverse. It takes the
cipher text and the secret key and produces the original plaintext.
There are two requirements for secure use of conventional encryption:
1) We need a strong encryption algorithm. At a minimum, we would like the algorithm to be
such that an opponent who knows the algorithm and has access to one or more cipher texts
would be unable to decipher the cipher text or figure out the key. This requirement is
usually stated in a stronger form: The opponent should be unable to decrypt cipher text or
discover the key even if he or she is in possession of a number of cipher texts together with
the plaintext that produced each cipher text.
2) Sender and receiver must have obtained copies of the secret key in a secure fashion and
must keep the key secure. If someone can discover the key and knows the algorithm, all
communication using this key is readable.

Figure 3.1 Simplified Model of Symmetric Encryption


We assume that it is impractical to decrypt a message on the basis of the cipher text plus knowledge
of the encryption/decryption algorithm. In other words, we do not need to keep the algorithm
secret; we need to keep only the key secret. This feature of symmetric encryption is what makes it
feasible for widespread use. The fact that the algorithm need not be kept secret means that
manufacturers can and have developed low-cost chip implementations of data encryption
algorithms. These chips are widely available and incorporated into a number of products. With the
use of symmetric encryption, the principal security problem is maintaining the secrecy of the key.
Let us take a closer look at the essential elements of a symmetric encryption scheme, using Figure
3.2. A source produces a message in plaintext, X = [X1, X2, ..., XM]. The elements of X are
letters in some finite alphabet. Traditionally, the alphabet usually consisted of the 26 capital letters.
Nowadays, the binary alphabet {0, 1} is typically used. For encryption, a key of the form
K = [K1, K2, ..., KJ] is generated.
If the key is generated at the message source, then it must also be provided to the destination by
means of some secure channel. Alternatively, a third party could generate the key and securely
deliver it to both source and destination.

Figure 3.2 Model of Symmetric Cryptosystem


With the message and the encryption key as input, the encryption algorithm forms the cipher text
Y = [Y1, Y2, ..., YN]. We can write this as
Y = E(K, X)
This notation indicates that Y is produced by using encryption algorithm E as a function of the plain
text X, with the specific function determined by the value of the key K. The intended receiver, in
possession of the key, is able to invert the transformation:
X = D(K, Y)
An opponent, observing Y but not having access to K or X, may attempt to recover X or K or
both X and K. It is assumed that the opponent knows the encryption (E) and decryption (D)
algorithms. If the opponent is interested in only this particular message, then the focus of the effort
is to recover X by generating a plaintext estimate X’. Often, however, the opponent is interested
in being able to read future messages as well, in which case an attempt is made to recover K by
generating an estimate K’.

Cryptography
Cryptographic systems are characterized along three independent dimensions:
1. The type of operations used for transforming plaintext to ciphertext. All encryption
algorithms are based on two general principles: substitution, in which each element in the
plaintext (bit, letter, group of bits or letters) is mapped into another element, and
transposition, in which elements in the plaintext are rearranged. The fundamental
requirement is that no information be lost (that is, that all operations are reversible). Most
systems, referred to as product systems, involve multiple stages of substitutions and
transpositions.
2. The number of keys used. If both sender and receiver use the same key, the system is
referred to as symmetric, single-key, secret-key, or conventional encryption. If the sender
and receiver use different keys, the system is referred to as asymmetric, two-key, or public-
key encryption.
3. The way in which the plaintext is processed. A block cipher processes the input one
block of elements at a time, producing an output block for each input block. A stream
cipher processes the input elements continuously, producing output one element at a time,
as it goes along.

3.3 Substitution Techniques


In this section and the next, we examine a sampling of what might be called classical encryption
techniques. A study of these techniques enables us to illustrate the basic approaches to symmetric
encryption used today and the types of cryptanalytic attacks that must be anticipated. The two
basic building blocks of all encryption techniques are substitution and transposition. We examine
these in the next two sections. Finally, we discuss a system that combines both substitution and
transposition.
A substitution technique is one in which the letters of plaintext are replaced by other letters or by
numbers or symbols. If the plaintext is viewed as a sequence of bits, then substitution involves
replacing plaintext bit patterns with cipher text bit patterns.
Caesar Cipher
The earliest known, and the simplest, use of a substitution cipher was by Julius Caesar. The Caesar
cipher involves replacing each letter of the alphabet with the letter standing three places further
down the alphabet. For example,
plain:  meet me after the toga party
cipher: PHHW PH DIWHU WKH WRJD SDUWB
Note that the alphabet is wrapped around, so that the letter following Z is A. We can define the
transformation by listing all possibilities, as follows:
plain:  a b c d e f g h i j k l m n o p q r s t u v w x y z
cipher: D E F G H I J K L M N O P Q R S T U V W X Y Z A B C
Let us assign a numerical equivalent to each letter (a = 0, b = 1, ..., z = 25).
Then the algorithm can be expressed as follows. For each plaintext letter P, substitute the cipher
text letter C.
C = E (3, p) = (p + 3) mod 26
A shift may be of any amount, so that the general Caesar algorithm is
C= E (k, p) = (p + k) mod 26 (3.1)
where k takes on a value in the range 1 to 25. The decryption algorithm is simply
p = D (k, C) = (C - k) mod 26 (3.2)
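
As a small illustration of equations (3.1) and (3.2), the following Python sketch implements the general Caesar cipher for the letters a to z; it is a toy example for study, not a secure cipher.

import string

ALPHABET = string.ascii_lowercase   # numerical equivalents: a = 0, ..., z = 25

def caesar_encrypt(plaintext: str, k: int) -> str:
    # C = (p + k) mod 26 for each letter; other characters pass through unchanged.
    return "".join(
        ALPHABET[(ALPHABET.index(ch) + k) % 26].upper() if ch in ALPHABET else ch
        for ch in plaintext.lower())

def caesar_decrypt(ciphertext: str, k: int) -> str:
    # p = (C - k) mod 26 for each letter.
    return "".join(
        ALPHABET[(ALPHABET.index(ch) - k) % 26] if ch in ALPHABET else ch
        for ch in ciphertext.lower())

print(caesar_encrypt("meet me after the toga party", 3))   # PHHW PH DIWHU WKH WRJD SDUWB
print(caesar_decrypt("PHHW PH DIWHU WKH WRJD SDUWB", 3))   # meet me after the toga party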

3.4 Symmetric Block Encryption Algorithms


The most commonly used symmetric encryption algorithms are block ciphers. A block cipher
processes the plaintext input in fixed-sized blocks and produces a block of cipher text of equal size
for each plaintext block. This section focuses on the three most important symmetric block ciphers:
Data Encryption Standard (DES), Triple DES (3DES), and Advanced Encryption Standard (AES).

3.4.1 Data Encryption Standard


The most widely used encryption scheme is based on the Data Encryption Standard (DES) issued
in 1977, as Federal Information Processing Standard 46 (FIPS 46) by the National Bureau of
Standards, now known as the National Institute of Standards and Technology (NIST). The
algorithm itself is referred to as the Data Encryption Algorithm (DEA).
The plaintext is 64 bits in length and the key is 56 bits in length; longer plaintext amounts are
processed in 64-bit blocks. The DES structure is a minor variation of the Feistel network. There
are 16 rounds of processing. From the original 56-bit key, 16 sub keys are generated, one of which
is used for each round.
The process of decryption with DES is essentially the same as the encryption process. The rule is
as follows: Use the cipher text as input to the DES algorithm, but use the sub keys Ki in reverse
order. That is, use K16 on the first iteration, K15 on the second iteration, and so on until K1 is used
on the 16th and last iteration.
THE STRENGTH OF DES: Concerns about the strength of DES fall into two categories:
concerns about the algorithm itself and concerns about the use of a 56-bit key. The first concern
refers to the possibility that cryptanalysis is possible by exploiting the characteristics of the DES
algorithm. Over the years, there have been numerous attempts to find and exploit weaknesses in
the algorithm, making DES the most-studied encryption algorithm in existence. Despite numerous
approaches, no one has so far succeeded in discovering a fatal weakness in DES. A more serious
concern is key length. With a key length of 56 bits, there are 2^56 possible keys, which is
approximately 7.2 × 10^16 keys. Thus, on the face of it, a brute force attack appears impractical.
Assuming that on average half the key space has to be searched, a single machine performing one
DES encryption per microsecond would take more than a thousand years to break the cipher.
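
The brute-force estimate above can be reproduced with a few lines of Python; the rate of one DES trial per microsecond is the assumption used in the text.

# Reproduce the brute-force estimate: search half of the 2**56 key space
# at an assumed rate of one DES encryption per microsecond.
keys_to_try = 2**56 // 2                    # on average, half the key space
seconds = keys_to_try / 1_000_000           # one trial per microsecond
years = seconds / (60 * 60 * 24 * 365)
print(f"approximately {years:,.0f} years")  # roughly 1,142 years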

Figure 3.4.1.1 Structure of DES Algorithm


Triple DES
Triple DES (3DES) was first standardized for use in financial applications in ANSI standard X9.17
in 1985. 3DES was incorporated as part of the Data Encryption Standard in 1999 with the
publication of FIPS 46-3.

Figure 3.4.1.2 Triple DES


3DES uses three keys and three executions of the DES algorithm. The function follows an encrypt-
decrypt-encrypt (EDE) sequence (Figure 3.4.1.2.a):
C = E (K3, D (K2, E (K1, P)))
Where
C = cipher text
P = plain text
E [K, X] = encryption of X using key K
D [K, Y] = decryption of Y using key K
Decryption is simply the same operation with the keys reversed (Figure 3.4.1.2.b):
P = D (K1, E(K2, D(K3, C)))
There is no cryptographic significance to the use of decryption for the second stage of 3DES
encryption. Its only advantage is that it allows users of 3DES to decrypt data encrypted by users
of the older single DES:
C = E(K1, D(K1, E(K1, P))) = E(K1, P)
With three distinct keys, 3DES has an effective key length of 168 bits. FIPS 46-3 also allows for
the use of two keys, with K1= K3; this provides for a key length of 112 bits. FIPS 46-3 includes
the following guidelines for 3DES.
 3DES is the FIPS approved symmetric encryption algorithm of choice.
 The original DES, which uses a single 56-bit key, is permitted under the standard for legacy
systems only. New procurements should support 3DES.
 Government organizations with legacy DES systems are encouraged to transition to 3DES.
 It is anticipated that 3DES and the Advanced Encryption Standard (AES) will coexist as
FIPS-approved algorithms, allowing for a gradual transition to AES
It is easy to see that 3DES is a formidable algorithm. Because the underlying cryptographic
algorithm is DEA, 3DES can claim the same resistance to cryptanalysis as DEA. Furthermore,
with a 168-bit key length, brute-force attacks are effectively impossible. Ultimately, AES is
intended to replace 3DES, but this process will take a number of years. NIST anticipates that 3DES
will remain an approved algorithm (for U.S. government use) for the foreseeable future.
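
The EDE sequence described above can be sketched directly on top of a single-DES primitive. The snippet below assumes the third-party PyCryptodome package is installed and operates on one 8-byte block in ECB mode purely to show the composition; production code would use a library's built-in 3DES with an appropriate mode of operation.

from Crypto.Cipher import DES  # PyCryptodome single-DES primitive (assumed installed)

def ede_encrypt_block(k1: bytes, k2: bytes, k3: bytes, block: bytes) -> bytes:
    # C = E(K3, D(K2, E(K1, P))) applied to one 8-byte block.
    step1 = DES.new(k1, DES.MODE_ECB).encrypt(block)
    step2 = DES.new(k2, DES.MODE_ECB).decrypt(step1)
    return DES.new(k3, DES.MODE_ECB).encrypt(step2)

def ede_decrypt_block(k1: bytes, k2: bytes, k3: bytes, block: bytes) -> bytes:
    # P = D(K1, E(K2, D(K3, C))), i.e. the same operations with the keys reversed.
    step1 = DES.new(k3, DES.MODE_ECB).decrypt(block)
    step2 = DES.new(k2, DES.MODE_ECB).encrypt(step1)
    return DES.new(k1, DES.MODE_ECB).decrypt(step2)

k1, k2, k3 = b"key1-8by", b"key2-8by", b"key3-8by"   # illustrative 8-byte DES keys
block = b"8bytes!!"                                   # one 64-bit plaintext block
assert ede_decrypt_block(k1, k2, k3, ede_encrypt_block(k1, k2, k3, block)) == block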

3.4.2 Advanced Encryption Standard (AES)


3DES has two attractions that assure its widespread use over the next few years. First, with its 168-
bit key length, it overcomes the vulnerability to brute-force attack of DEA. Second, the underlying
encryption algorithm in 3DES is the same as in DEA. This algorithm has been subjected to more
scrutiny than any other encryption algorithm over a longer period of time, and no effective
cryptanalytic attack based on the algorithm rather than brute force has been found. Accordingly,
there is a high level of confidence that 3DES is very resistant to cryptanalysis. If security were the
only consideration, then 3DES would be an appropriate choice for a standardized encryption
algorithm for decades to come.
3DES, which has three times as many rounds as DEA, is correspondingly slower. A secondary
drawback is that both DEA and 3DES use a 64-bit block size. For reasons of both efficiency and
security, a larger block size is desirable.
Because of these drawbacks, 3DES is not a reasonable candidate for long-term use. As a
replacement, NIST in 1997 issued a call for proposals for a new Advanced Encryption Standard
(AES), which should have a security strength equal to or better than 3DES and significantly
improved efficiency. In addition to these general requirements, NIST specified that AES must be
a symmetric block cipher with a block length of 128 bits and support for key lengths of 128, 192,
and 256 bits. Evaluation criteria included security, computational efficiency, memory
requirements, hardware and software suitability, and flexibility.
OVERVIEW OF THE ALGORITHM: AES uses a block length of 128 bits and a key length that
can be 128, 192, or 256 bits. In the description of this section, we assume a key length of 128 bits,
which is likely to be the one most commonly implemented.
Figure 3.4.2 shows the overall structure of AES. The input to the encryption and decryption
algorithms is a single 128-bit block. In FIPS PUB 197, this block is depicted as a square matrix of
bytes. This block is copied into the State array, which is modified at each stage of encryption or
decryption. After the final stage, State is copied to an output matrix. Similarly, the 128-bit key is
depicted as a square matrix of bytes. This key is then expanded into an array of key schedule
words: each word is four bytes and the total key schedule is 44 words for the 128-bit key.
The ordering of bytes within a matrix is by column. So, for example, the first four bytes of a 128-
bit plaintext input to the encryption cipher occupy the first column of the input (in) matrix, the second
four bytes occupy the second column, and so on. Similarly, the first four bytes of the expanded
key, which form a word, occupy the first column of the w matrix.
The features of AES are as follows:
 Symmetric key symmetric block cipher
 128-bit data, 128/192/256-bit keys
 Stronger and faster than Triple-DES
 Provide full specification and design details
 Software implementable in C and Java
AES is an iterative rather than a Feistel cipher. It is based on a substitution-permutation network.
It comprises a series of linked operations, some of which involve replacing inputs with specific
outputs (substitutions) and others of which involve shuffling bits around (permutations).

Figure. 3.4.2 AES Encryption & Decryption
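
As a usage illustration (not part of the standard itself), the following sketch encrypts one 16-byte block with AES-128; it assumes the PyCryptodome package and uses ECB mode only because a single block is involved.

from Crypto.Cipher import AES
from Crypto.Random import get_random_bytes

key = get_random_bytes(16)            # 128-bit AES key
plaintext = b"ATTACK AT DAWN!!"       # exactly one 16-byte (128-bit) block

cipher = AES.new(key, AES.MODE_ECB)   # ECB is acceptable here only for a single block
ciphertext = cipher.encrypt(plaintext)

# Decryption with the same key recovers the original block.
assert AES.new(key, AES.MODE_ECB).decrypt(ciphertext) == plaintext
print(ciphertext.hex())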


3.5 Public Key Encryption
Public-key encryption, first publicly proposed by Diffie and Hellman in 1976 [DIFF76], is the first
truly revolutionary advance in encryption in literally thousands of years. Public-key algorithms are
based on mathematical functions rather than on simple operations on bit patterns, such as are used
in symmetric encryption algorithms. More important, public-key cryptography is asymmetric,
involving the use of two separate keys—in contrast to the symmetric conventional encryption,
which uses only one key. The use of two keys has profound consequences in the areas of
confidentiality, key distribution, and authentication.
There are several common misconceptions about public-key encryption. One is that public-key encryption is more secure from cryptanalysis than conventional encryption.
In fact, the security of any encryption scheme depends on (1) the length of the key and (2) the
computational work involved in breaking a cipher. There is nothing in principle about either
conventional or public-key encryption that makes one superior to another from the point of view
of resisting cryptanalysis. A second misconception is that public-key encryption is a general-
purpose technique that has made conventional encryption obsolete.
Public key cryptography is a modern invention and requires the use of advanced mathematics.
Whereas secret key cryptography uses a single key shared by two (or more) parties, public key
cryptography uses a pair of keys for each party. One of the keys of the pair is "public" and the
other is "private." The public key can be made known to other parties; the private key must be kept
confidential and must be known only to its owner. Both keys, however, need to be protected against
modification.
Public key cryptography is particularly useful when the parties wishing to communicate cannot
rely upon each other or do not share a common key. There are several public key cryptographic
systems. One of the first public key systems is RSA, which can provide many different security
services. The Digital Signature Standard (DSS), described later in the chapter, is another example
of a public key system.
Symmetric cryptography was well suited for organizations such as governments, the military, and big
financial corporations that were involved in classified communication. With the spread of insecure
computer networks in the last few decades, however, a genuine need was felt to use cryptography on a
larger scale. Symmetric (secret key) cryptography was found to be impractical because of the key
management challenges it presents. This gave rise to public key cryptosystems.
The most important properties of a public key encryption scheme are:
 Different keys are used for encryption and decryption. This property sets the scheme apart
from symmetric encryption.
 Each receiver possesses a unique decryption key, generally referred to as his private key.
The receiver also publishes an encryption key, referred to as his public key.
 Some assurance of the authenticity of a public key is needed in this scheme to avoid
spoofing by an adversary posing as the receiver. Generally, this type of cryptosystem involves a
trusted third party which certifies that a particular public key belongs to a specific person or
entity only.
 The encryption algorithm is complex enough to prevent an attacker from deducing the
plaintext from the cipher text and the encryption (public) key.
 Though the private and public keys are related mathematically, it is not feasible to calculate
the private key from the public key. In fact, the clever part of any public-key cryptosystem
is designing the relationship between the two keys.
A public-key encryption scheme has six ingredients:
Plaintext: This is the readable message or data that is fed into the algorithm as input.
Encryption Algorithm: The encryption algorithm performs various transformations on the
plaintext.
Public and Private Key: This is a pair of keys that have been selected so that if one is used for
encryption, the other is used for decryption. The exact transformations performed by the
encryption algorithm depend on the public or private key that is provided as input.
Cipher Text: This is the scrambled message produced as output. It depends on the plaintext and
the key. For a given message, two different keys will produce two different cipher texts.
Decryption Algorithm: This algorithm accepts the cipher text and the matching key and produces
the original plaintext.
As the names suggest, the public key of the pair is made public for others to use, while the private
key is known only to its owner. A general-purpose public-key cryptographic algorithm relies on
one key for encryption and a different but related key for decryption.

Figure 3.5 The process of encryption and decryption in public cryptography


The essential steps are the following:
1. Each user generates a pair of keys to be used for the encryption and decryption of messages.
2. Each user places one of the two keys in a public register or other accessible file. This is the
public key. The companion key is kept private. Each user maintains a collection of public
keys obtained from others.
3. If Bob wishes to send a private message to Alice, Bob encrypts the message using Alice’s
public key.
4. When Alice receives the message, she decrypts it using her private key. No other recipient
can decrypt the message because only Alice knows Alice’s private key.
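
A minimal sketch of these four steps, assuming the PyCryptodome package; RSA with OAEP padding is used here as one concrete public-key scheme.

from Crypto.PublicKey import RSA
from Crypto.Cipher import PKCS1_OAEP

# Steps 1 and 2: Alice generates a key pair and publishes the public half.
alice_key = RSA.generate(2048)
alice_public = alice_key.publickey()

# Step 3: Bob encrypts a message to Alice using her public key.
ciphertext = PKCS1_OAEP.new(alice_public).encrypt(b"Meet me at noon")

# Step 4: only Alice, who holds the private key, can decrypt it.
plaintext = PKCS1_OAEP.new(alice_key).decrypt(ciphertext)
assert plaintext == b"Meet me at noon"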

3.5.1 Applications for Public-Key Cryptosystems


Public-key systems are characterized by the use of a cryptographic type of algorithm with two
keys, one held private and one available publicly. Depending on the application, the sender uses
either the sender’s private key, the receiver’s public key, or both to perform some type of
cryptographic function. In broad terms, we can classify the use of public-key cryptosystems into
three categories:
 Encryption/Decryption: The sender encrypts a message with the recipient’s public key.
 Digital Signature: The sender “signs” a message with its private key. Signing is achieved
by a cryptographic algorithm applied to the message or to a small block of data that is a
function of the message.
 Key Exchange: Two sides cooperate to exchange a session key. Several different
approaches are possible, involving the private key(s) of one or both parties.

3.5.2 Requirements for Public-Key Cryptography

1. It is computationally easy for a party B to generate a pair (public key PUb, private key PRb).
2. It is computationally easy for a sender A, knowing the public key and the message to be
encrypted, M, to generate the corresponding cipher text:

C= E (PUb, M)
3. It is computationally easy for the receiver B to decrypt the resulting cipher text using the
private key to recover the original message:

M= D (PRb,C) =D[PRb, E(PUb ,M)]


4. It is computationally infeasible for an opponent, knowing the public key, PUb, to determine
the private key, PRb.
5. It is computationally infeasible for an opponent, knowing the public key, PUb, and a cipher
text, C, to recover the original message, M.
6. Either of the two related keys can be used for encryption, with the other used for decryption.

M= D[PUb, E(PRb, M)]= D[PRb,E(PUb,M)]


3.5.3 RSA Cryptosystem
One of the first public-key schemes was developed in 1977 by Ron Rivest, Adi Shamir, and Len
Adleman at MIT and first published in 1978 [RIVE78]. The RSA scheme has since that time
reigned supreme as the most widely accepted and implemented approach to public-key encryption.
RSA is a block cipher in which the plaintext and cipher text are integers between 0 and n-1 for
some n. Encryption and decryption are of the following form, for some plaintext block M
and cipher text block C:
C = M^e mod n
M = C^d mod n = (M^e)^d mod n = M^(ed) mod n
Both sender and receiver must know the values of n and e, and only the receiver knows the value
of d. This is a public-key encryption algorithm with a public key of PU = {e, n} and a private key of
PR = {d, n}. For this algorithm to be satisfactory for public-key encryption, the following
requirements must be met.
1. It is possible to find values of e, d, and n such that M^(ed) mod n = M for all M < n.
2. It is relatively easy to calculate M^e mod n and C^d mod n for all values of M < n.
3. It is infeasible to determine d given e and n.

Figure 3.5.3 RSA Cryptosystem


Figure 3.5.3 summarizes the RSA algorithm. Begin by selecting two prime numbers p and q and
calculating their product n, which is the modulus for encryption and decryption. Next, we need the
quantity Ø(n), referred to as the Euler totient of n, which is the number of positive integers less
than n and relatively prime to n. Then select an integer e that is relatively prime to Ø(n) [i.e., the
greatest common divisor of e and Ø(n) is 1]. Finally, calculate d as the multiplicative inverse of e,
modulo Ø(n). It can be shown that d and e have the desired properties.
For this example, the keys were generated as follows:
1. Select two prime numbers, p=17 and q =11
2. Calculate n=pq=17 ×11=187
3. Calculate Ø(n)=(p-1)(q-1) =16 ×10 =160
4. Select e such that e is relatively prime to Ø(n)=160 and less than Ø(n); we choose e=7.
5. Determine d such that de mod 160=1 and d< 160. The correct value is d=23,
because 23 ×7=161 =(1 ×160)+1.
The resulting keys are public key PU {7, 187} and private key PR {23,187}.
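
The worked example above can be checked with a few lines of Python; the small primes are, of course, only for illustration and far too small for real use.

# Verify the textbook example: p = 17, q = 11, e = 7, d = 23.
p, q = 17, 11
n = p * q                         # n = 187
phi = (p - 1) * (q - 1)           # phi(n) = 160
e, d = 7, 23
assert (d * e) % phi == 1         # d is the multiplicative inverse of e modulo phi(n)

M = 88                            # a sample plaintext block, M < n
C = pow(M, e, n)                  # encryption: C = M^e mod n
assert pow(C, d, n) == M          # decryption: C^d mod n recovers M
print("ciphertext:", C)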

3.6 Digital Signature


Today's computer systems store and process increasing numbers of paper-based documents in
electronic form. Having documents in electronic form permits rapid processing and transmission
and improves overall efficiency. However, approval of a paper document has traditionally been
indicated by a written signature. What is needed, therefore, is the electronic equivalent of a written
signature that can be recognized as having the same legal status as a written signature. In addition
to the integrity protections, discussed above, cryptography can provide a means of linking a
document with a particular person, as is done with a written signature.
Electronic signatures can use either secret key or public key cryptography; however, public key
methods are generally easier to use. Digital signatures are the public-key primitives of message
authentication. In the physical world, it is common to use handwritten signatures on handwritten
or typed messages. They are used to bind the signatory to the message.
An electronic (digital) signature is a cryptographic mechanism that performs a similar function to
a written signature. It is used to verify the origin and contents of a message. For example, a
recipient of data (e.g., an e-mail message) can verify who signed the data and that the data was not
modified after being signed. This also means that the originator (e.g., sender of an e-mail message)
cannot falsely deny having signed the data.
Cryptographic signatures provide extremely strong proof that a message has not been altered and
was signed by a specific key. However, there are other mechanisms besides cryptographic based
electronic signatures that perform a similar function. These mechanisms provide some assurance
of the origin of a message, some verification of the message's integrity, or both. Simply taking a
digital picture of a written signature does not provide adequate security. Such a digitized written
signature could easily be copied from one electronic document to another with no way to determine
whether it is legitimate. Electronic signatures, on the other hand, are unique to the message being
signed and will not verify if they are copied to another document.
A digital signature is a cryptographic value that is calculated from the data and a secret key known
only to the signer. In other words, a digital signature is a technique that binds a person or entity to the
digital data. This binding can be independently verified by the receiver as well as any third party. In
the real world, the receiver of a message needs assurance that the message originated with the sender
and that the sender cannot repudiate the origination of that message. This requirement is crucial in
business applications, since the likelihood of a dispute over exchanged data is high.

Secret Key Electronic Signatures


An electronic signature can be implemented using secret key message authentication codes
(MACs). For example, if two parties share a secret key, and one party receives data with a MAC
that is correctly verified using the shared key, that party may assume that the other party signed
the data. This assumes, however, that the two parties trust each other. Thus, through the use of a
MAC, in addition to data integrity, a form of electronic signature is obtained. Using additional
controls, such as key notarization and key attributes, it is possible to provide an electronic signature
even if the two parties do not trust each other.

Public Key Electronic Signatures


Another type of electronic signature called a digital signature is implemented using public key
cryptography. Data is electronically signed by applying the originator's private key to the data. To
increase the speed of the process, the private key is applied to a shorter form of the data, called a
"Hash" or "Message Digest," rather than to the entire set of data. The resulting digital signature
can be stored or transmitted along with the data. The signature can be verified by any party using
the public key of the signer. This feature is very useful, for example, when distributing signed
copies of virus-free software. Any recipient can verify that the program remains virus-free. If the
signature verifies properly, then the verifier has confidence that the data was not modified after
being signed and that the owner of the public key was the signer.
As mentioned earlier, the digital signature scheme is based on public key cryptography. The model
of digital signature scheme is depicted in the following illustration:

Figure 3.6 Model of Digital Signature


The following points explain the entire process in detail:
 Each person adopting this scheme has a public-private key pair.
 Generally, the key pairs used for encryption/decryption and signing/verifying are different.
The private key used for signing is referred to as the signature key and the public key as the
verification key.
 Signer feeds data to the hash function and generates hash of data.
 Hash value and signature key are then fed to the signature algorithm which produces the digital
signature on given hash. Signature is appended to the data and then both are sent to the verifier.
 Verifier feeds the digital signature and the verification key into the verification algorithm. The
verification algorithm gives some value as output.
 Verifier also runs same hash function on received data to generate hash value.
 For verification, this hash value and output of verification algorithm are compared. Based on
the comparison result, verifier decides whether the digital signature is valid.
 Since the digital signature is created with the signer's private key, which no one else can have,
the signer cannot repudiate signing the data in the future.
It should be noted that instead of signing the data directly with the signing algorithm, usually a hash of
the data is created. Since the hash of the data is a unique representation of the data, it is sufficient to sign the
hash in place of the data. The most important reason for signing a hash rather than the data directly
is the efficiency of the scheme. Signing large data through modular exponentiation is computationally
expensive and time consuming. The hash of the data is a relatively small digest of the data, hence
signing a hash is more efficient than signing the entire data.
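
The hash-then-sign flow described above can be sketched as follows, assuming the PyCryptodome package; RSA with the PKCS#1 v1.5 signature scheme stands in for the generic signature algorithm.

from Crypto.PublicKey import RSA
from Crypto.Signature import pkcs1_15
from Crypto.Hash import SHA256

signer_key = RSA.generate(2048)            # signature key (private) and verification key (public)
message = b"Transfer 500 birr to account 42"

# Signer: hash the data, then sign the hash with the private (signature) key.
signature = pkcs1_15.new(signer_key).sign(SHA256.new(message))

# Verifier: re-hash the received data and verify the signature with the public key.
try:
    pkcs1_15.new(signer_key.publickey()).verify(SHA256.new(message), signature)
    print("signature valid: origin and integrity confirmed")
except ValueError:
    print("signature invalid: data altered or wrong signer")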

Importance of Digital Signature


Out of all cryptographic primitives, the digital signature using public key cryptography is
considered a very important and useful tool for achieving information security. Apart from the ability to
provide non-repudiation of a message, the digital signature also provides message authentication
and data integrity. Let us briefly see how this is achieved by the digital signature:
 Message Authentication: When the verifier validates the digital signature using the public key
of the sender, he is assured that the signature has been created only by the sender, who possesses
the corresponding secret private key, and no one else.
 Data Integrity: In case an attacker has access to the data and modifies it, the digital
signature verification at the receiver end fails. The hash of the modified data and the output provided
by the verification algorithm will not match. Hence, the receiver can safely reject the message,
assuming that data integrity has been breached.
 Non-repudiation: Since it is assumed that only the signer has knowledge of the signature
key, only the signer can create a unique signature on given data. Thus the receiver can present the data
and the digital signature to a third party as evidence if any dispute arises in the future.
By adding public-key encryption to digital signature scheme, we can create a cryptosystem that
can provide the four essential elements of security namely: Privacy, Authentication, Integrity, and
Non-repudiation.
3.7 Hash Functions
Hash functions are extremely useful and appear in almost all information security applications. A
hash function is a mathematical function that converts a numerical input value into another
compressed numerical value. The input to the hash function is of arbitrary length but output is
always of fixed length. Values returned by a hash function are called Message Digest or simply
Hash Values. The following picture illustrates a hash function:

Figure 3.7.1 Hash Function


The typical features of hash functions are:
Fixed Length Output (Hash Value)
o Hash function converts data of arbitrary length to a fixed length. This process is often
referred to as hashing the data.
o In general, the hash is much smaller than the input data, hence hash functions are
sometimes called compression functions.
o Since a hash is a smaller representation of a larger data, it is also referred to as a digest.
o Hash function with n bit output is referred to as an n-bit hash function. Popular hash
functions generate values between 160 and 512 bits.

Efficiency of Operation
o Generally for any hash function h with input x, computation of h(x) is a fast operation.
o Computationally, hash functions are much faster than symmetric encryption.
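
A quick illustration of these features, using Python's standard hashlib library; SHA-256 is used here as one example of an n-bit hash function.

import hashlib

short_msg = b"hi"
long_msg = b"a much longer message " * 1000

# Both digests are 256 bits (64 hex characters), regardless of the input length.
print(hashlib.sha256(short_msg).hexdigest())
print(hashlib.sha256(long_msg).hexdigest())
print(len(hashlib.sha256(long_msg).hexdigest()))   # 64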
Properties of Hash Functions
In order to be an effective cryptographic tool, the hash function should possess the following
properties:
 Pre-Image Resistance

o This property means that it should be computationally hard to reverse a hash function.
o In other words, if a hash function h produced a hash value z, then it should be a
difficult process to find any input value x that hashes to z.
o This property protects against an attacker who only has a hash value and is trying to
find the input.

 Second Pre-Image Resistance

o This property means given an input and its hash, it should be hard to find a different
input with the same hash.
o In other words, if a hash function h for an input x produces hash value h(x), then it
should be difficult to find any other input value y such that h(y) = h(x).
o This property of hash function protects against an attacker who has an input value
and its hash, and wants to substitute different value as legitimate value in place of
original input value.

 Collision Resistance

o This property means it should be hard to find two different inputs of any length that
result in the same hash. This property is also referred to as collision free hash
function.
o In other words, for a hash function h, it is hard to find any two different inputs x and
y such that h(x) = h(y).
o Since a hash function is a compressing function with a fixed output length, it is impossible
for a hash function not to have collisions. Collision resistance only means that these
collisions should be hard to find.
o This property makes it very difficult for an attacker to find two input values with the
same hash.
o Also, if a hash function is collision-resistant then it is second pre-image resistant.

Design of Hashing Algorithms


At the heart of hashing is a mathematical function that operates on two fixed-size blocks of data
to create a hash code. This hash function forms the part of the hashing algorithm. The size of each
data block varies depending on the algorithm. Typically the block sizes are from 128 bits to 512
bits. The following illustration demonstrates hash function:
Figure 3.7.2 Process in Hash function
The hashing algorithm involves rounds of the above hash function, like a block cipher. Each round takes
an input of a fixed size, typically a combination of the most recent message block and the output
of the last round. This process is repeated for as many rounds as are required to hash the entire
message. Schematic of hashing algorithm is depicted in the following illustration:

Figure 3.7.3 Hash Algorithm


The hash value of the first message block becomes an input to the second hash operation, the output
of which alters the result of the third operation, and so on. This effect is known as the avalanche
effect of hashing. The avalanche effect results in substantially different hash values for two messages
that differ by even a single bit of data.
It is important to understand the difference between the hash function and the hashing algorithm. The hash function
generates a hash code by operating on two blocks of fixed-length binary data. The hashing algorithm
is the process for using the hash function, specifying how the message will be broken up and how
the results from previous message blocks are chained together.
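
The chaining idea can be sketched with a deliberately simple toy compression function; this is only an illustration of the structure and is in no way a secure hash.

def toy_compression(block: bytes, chaining_value: int) -> int:
    # Toy compression function: mixes an 8-byte message block into a 64-bit state.
    return ((chaining_value * 1099511628211) ^ int.from_bytes(block, "big")) % 2**64

def toy_hash(message: bytes) -> int:
    # Pad to a whole number of 8-byte blocks, then chain the compression function:
    # the output of each round feeds the next, so a one-bit change early on
    # propagates through every later round (the avalanche effect).
    message += b"\x00" * (-len(message) % 8)
    state = 0xCBF29CE484222325                      # fixed initial chaining value
    for i in range(0, len(message), 8):
        state = toy_compression(message[i:i + 8], state)
    return state

print(hex(toy_hash(b"message one")))
print(hex(toy_hash(b"message two")))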
Popular Hash Functions

Message Digest (MD)


MD5 was the most popular and widely used hash function for many years.
o The MD family comprises the hash functions MD2, MD4, MD5, and MD6. MD5 was adopted as
Internet Standard RFC 1321. It is a 128-bit hash function.

o MD5 digests have been widely used in the software world to provide assurance about the integrity
of a transferred file. For example, file servers often provide a pre-computed MD5 checksum for
their files so that a user can compare the checksum of the downloaded file against it, as in the sketch after this list.

o In 2004, collisions were found in MD5. An analytical attack was reported to succeed in
only an hour using a computer cluster. This collision attack compromised MD5,
and hence it is no longer recommended for use.
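
For the integrity-checking use mentioned in the second bullet, a downloaded file's checksum can be recomputed as sketched below with the standard hashlib module; the file name and published checksum are placeholders. Given the collisions noted above, a SHA-2 checksum should be preferred when the server offers one.

import hashlib

def file_checksum(path: str, algorithm: str = "md5", chunk_size: int = 8192) -> str:
    # Hash the file in chunks so that large downloads do not have to fit in memory.
    digest = hashlib.new(algorithm)
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Compare against the checksum published alongside the download, e.g.:
# assert file_checksum("installer.iso") == published_checksum   # value from the file server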

Secure Hash Function (SHA)


The SHA family comprises four SHA algorithms: SHA-0, SHA-1, SHA-2, and SHA-3. Though
from the same family, they are structurally different.
o The original version, SHA-0, a 160-bit hash function, was published by the National Institute
of Standards and Technology (NIST) in 1993. It had a few weaknesses and did not become very
popular. Later, in 1995, SHA-1 was designed to correct the alleged weaknesses of SHA-0.

o SHA-1 is the most widely used of the existing SHA hash functions. It is employed in several
widely used applications and protocols, including Secure Sockets Layer (SSL) security.

o In 2005, a method was found for uncovering collisions for SHA-1 within a practical time frame,
making the long-term employability of SHA-1 doubtful.

o The SHA-2 family has four further SHA variants, SHA-224, SHA-256, SHA-384, and SHA-512,
named for the number of bits in their hash values. No successful attacks have yet been
reported on the SHA-2 hash functions.

o Though SHA-2 is a strong hash function and significantly different from SHA-1, its basic design
still follows that of SHA-1. Hence, NIST called for new competitive hash function designs.

o In October 2012, NIST chose the Keccak algorithm as the new SHA-3 standard. Keccak
offers many benefits, such as efficient performance and good resistance to attacks.
3.8 Message Authentication
We discussed the data integrity threats and the use of hashing techniques to detect whether any
modification attacks have taken place on the data. Another type of threat that exists for data is the
lack of message authentication, in which the user is not sure about the originator of the
message. Message authentication can be provided using cryptographic techniques that use
secret keys, as is done in the case of encryption.
Message Authentication Code (MAC)
A MAC algorithm is a symmetric key cryptographic technique for providing message authentication.
To establish the MAC process, the sender and receiver share a symmetric key K. Essentially, a
MAC is an encrypted checksum generated on the underlying message that is sent along with the
message to ensure message authentication. The process of using a MAC for authentication is
depicted in the following illustration:

Figure 3.8 Message Authentication Code model


The entire process of MAC in detail is as follows:
 The sender uses some publicly known MAC algorithm, inputs the message and the secret
key K and produces a MAC value.

 Similar to hash, MAC function also compresses an arbitrary long input into a fixed length
output. The major difference between hash and MAC is that MAC uses secret key during
the compression.

 The sender forwards the message along with the MAC. Here, we assume that the message is
sent in the clear, as we are concerned with providing message origin authentication, not
confidentiality. If confidentiality is required, then the message needs encryption.

 On receipt of the message and the MAC, the receiver feeds the received message and the
shared secret key K into the MAC algorithm and re-computes the MAC value.
 The receiver now checks equality of freshly computed MAC with the MAC received from the
sender. If they match, then the receiver accepts the message and assures himself that the
message has been sent by the intended sender.

 If the computed MAC does not match the MAC sent by the sender, the receiver cannot
determine whether it is the message that has been altered or the origin that has been
falsified. As a bottom line, the receiver safely assumes that the message is not genuine.
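
A minimal sketch of this process using Python's standard hmac module; HMAC-SHA256 is used here as one concrete MAC algorithm, and the shared key is a placeholder that would in practice be established over a secure channel.

import hmac
import hashlib

shared_key = b"pre-shared secret K"           # placeholder shared key
message = b"Pay 250 birr to vendor 7"

# Sender: compute the MAC over the message and send (message, mac) in the clear.
mac = hmac.new(shared_key, message, hashlib.sha256).digest()

# Receiver: recompute the MAC over the received message with the shared key
# and compare it with the received MAC using a constant-time comparison.
recomputed = hmac.new(shared_key, message, hashlib.sha256).digest()
if hmac.compare_digest(recomputed, mac):
    print("MACs match: message accepted as authentic")
else:
    print("MACs differ: message or origin cannot be trusted")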
Limitations of MAC
There are two major limitations of MAC, both due to its symmetric nature of operation:
 Establishment of Shared Secret.
o A MAC can provide message authentication only among pre-decided legitimate users who
have a shared key.
o This requires establishment of a shared secret prior to the use of MAC.

 Inability to Provide Non-Repudiation


o Non-repudiation is the assurance that a message originator cannot deny any
previously sent messages and commitments or actions.
o MAC technique does not provide a non-repudiation service. If the sender and
receiver get involved in a dispute over message origination, MACs cannot provide
a proof that a message was indeed sent by the sender.
o Though no third party can compute the MAC, the sender could still deny having sent
the message and claim that the receiver forged it, since it is impossible to determine
which of the two parties computed the MAC.
N.B: Both these limitations can be overcome by using the public key based digital signature.

3.9 Key Distribution and Management


The most distinctive feature of public key cryptography (PKC) is that it uses a pair of keys to achieve
the underlying security service. The key pair comprises a private key and a public key. Since
public keys are in open domain, they are likely to be abused. It is, thus, necessary to establish and
maintain some kind of trusted infrastructure to manage these keys.
Key Management
It goes without saying that the security of any cryptosystem depends upon how securely its keys
are managed. Without secure procedures for the handling of cryptographic keys, the benefits of
the use of strong cryptographic schemes are potentially lost.
It is observed that cryptographic schemes are rarely compromised through weaknesses in their
design. However, they are often compromised through poor key management. There are some
important aspects of key management which are as follows:
 Cryptographic keys are nothing but special pieces of data.
 Key management refers to the secure administration of cryptographic keys.
 Key management deals with entire key lifecycle as depicted in the following illustration:

Figure 3.9.1 Lifecycle in Key Management


There are two specific requirements of key management for public key cryptography.
 Secrecy of private keys. Throughout the key lifecycle, private keys must remain secret from
all parties except the owner and those authorized to use them.
 Assurance of public keys. In public key cryptography, the public keys are in open domain
and seen as public pieces of data. By default there are no assurances of whether a public
key is correct, with whom it can be associated, or what it can be used for. Thus key
management of public keys needs to focus much more explicitly on assurance of purpose
of public keys.
The most crucial requirement, ‘assurance of public key’, can be achieved through the public-key
infrastructure (PKI), a key management system for supporting public-key cryptography.
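As an illustration of the key pair and of the two requirements above, the following Python sketch generates an RSA key pair using the third-party cryptography package; the key size, passphrase, and serialization formats are assumptions made only for this example.

from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.hazmat.primitives import serialization

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# The private key must remain secret with its owner for its whole lifecycle,
# so it is serialized under a passphrase (an illustrative value).
pem_private = private_key.private_bytes(
    encoding=serialization.Encoding.PEM,
    format=serialization.PrivateFormat.PKCS8,
    encryption_algorithm=serialization.BestAvailableEncryption(b"owner-passphrase"),
)

# The public key goes into the open domain; by itself it carries no assurance
# of ownership or purpose, which is exactly what a PKI must provide.
pem_public = private_key.public_key().public_bytes(
    encoding=serialization.Encoding.PEM,
    format=serialization.PublicFormat.SubjectPublicKeyInfo,
)
print(pem_public.decode())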
Public Key Infrastructure (PKI)
PKI provides assurance of public keys. It provides for the identification of public keys and their
distribution. The anatomy of a PKI comprises the following components.
 Public Key Certificate, commonly referred to as ‘digital certificate’.
 Private Key tokens.
 Certification Authority.
 Registration Authority.
 Certificate Management System.
Digital Certificate
For analogy, a certificate can be considered as an ID card issued to a person. People use ID
cards such as a driver's license or passport to prove their identity. A digital certificate does the same
basic thing in the electronic world, but with one difference: digital certificates are not only
issued to people, they can also be issued to computers, software packages or anything else that needs
to prove its identity in the electronic world.
 Digital certificates are based on the International Telecom Union (ITU) standard X.509
which defines a standard certificate format for public key certificates and certification
validation. Hence digital certificates are sometimes also referred to as X.509 certificates.
 The public key pertaining to the client is stored in the digital certificate by the Certification
Authority (CA) along with other relevant information such as client information, expiration
date, usage, issuer, etc.
 The CA digitally signs this entire information and includes the digital signature in the certificate.
 Anyone who needs assurance about a client's public key and associated information carries
out the signature validation process using the CA's public key (a minimal sketch of this check
appears below). Successful validation assures that the public key given in the certificate
belongs to the person whose details are given in the certificate.
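The signature validation step can be sketched in Python with the third-party cryptography package; the file names are hypothetical, an RSA-signed certificate is assumed, and real validation also checks validity dates, the certificate chain, and revocation status.

from cryptography import x509
from cryptography.hazmat.primitives.asymmetric import padding
from cryptography.exceptions import InvalidSignature

with open("client_cert.pem", "rb") as f:          # certificate issued to the client
    client_cert = x509.load_pem_x509_certificate(f.read())
with open("ca_cert.pem", "rb") as f:              # certificate of the issuing CA
    ca_cert = x509.load_pem_x509_certificate(f.read())

try:
    # Verify the CA's signature over the to-be-signed portion of the certificate.
    ca_cert.public_key().verify(
        client_cert.signature,
        client_cert.tbs_certificate_bytes,
        padding.PKCS1v15(),
        client_cert.signature_hash_algorithm,
    )
    print("Signature valid: the public key belongs to", client_cert.subject.rfc4514_string())
except InvalidSignature:
    print("Signature invalid: do not trust this certificate")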
The process of obtaining Digital Certificate by a person/entity is depicted in the following figure.

Figure 3.9.2 Process in Digital Certificate


As shown in the illustration, the CA accepts the application from a client to certify his public key.
The CA, after duly verifying identity of client, issues a digital certificate to that client.
Certifying Authority (CA)
The CA issues certificates to clients and assists other users in verifying those certificates. The CA takes
responsibility for correctly identifying the client asking for a certificate to be issued,
ensures that the information contained within the certificate is correct, and digitally signs it.
The key functions of a CA are as follows:
 Generating key pairs: The CA may generate a key pair independently or jointly with the
client.
 Issuing digital certificates: The CA could be thought of as the PKI equivalent of a passport
agency: it issues a certificate after the client provides the credentials needed to confirm his
identity. The CA then signs the certificate to prevent modification of the details contained
in the certificate.
 Publishing Certificates: The CA needs to publish certificates so that users can find them.
There are two ways of achieving this. One is to publish certificates in the equivalent of an
electronic telephone directory. The other is to send your certificate out to those people you
think might need it by one means or another.
 Verifying Certificates: The CA makes its public key available in the environment to assist
verification of its signature on clients’ digital certificates.
 Revocation of Certificates: At times, the CA revokes a certificate it has issued, for reasons
such as compromise of the user's private key or loss of trust in the client. After revocation,
the CA maintains a list of all revoked certificates, which is available to the environment.
Registration Authority (RA)
The CA may use a third-party Registration Authority (RA) to perform the necessary checks on the
person or company requesting the certificate to confirm their identity. The RA may appear to the
client as a CA, but it does not actually sign the certificate that is issued.
Certificate Management System (CMS)
It is the management system through which certificates are published, temporarily or permanently
suspended, renewed, or revoked. Certificate management systems do not normally delete
certificates because it may be necessary to prove their status at a point in time, perhaps for legal
reasons. A CA, along with its associated RA, runs a certificate management system to be able to track
its responsibilities and liabilities.
CHAPTER FOUR
ACCESS CONTROL
Access control is concerned with determining the allowed activities of legitimate users, mediating
every attempt by a user to access a resource in the system. A given information technology (IT)
infrastructure can implement access control systems in many places and at different levels.
Operating systems use access control to protect files and directories. Database management
systems (DBMSs) apply access control to regulate access to tables and views. Most commercially
available application systems implement access control, often independent of the operating
systems and/or DBMSs on which they are installed.
The objectives of an access control system are often described in terms of protecting system
resources against inappropriate or undesired user access. From a business perspective, this
objective could just as well be described in terms of the optimal sharing of information. After all,
the main objective of IT is to make information available to users and applications. It may seem that a
greater degree of sharing gets in the way of resource protection; in reality, a well-managed and
effective access control system actually facilitates sharing. A sufficiently fine-grained access
control mechanism can enable selective sharing of information where, in its absence, sharing may
be considered too risky altogether.
An access enforcement mechanism authorizes requests (e.g., system calls) from multiple subjects
(e.g., users, processes, etc.) to perform operations (e.g., read, write, etc.) on objects (e.g., files,
sockets, etc.). An operating system provides such an access enforcement mechanism. In this chapter, we
define the fundamental concepts of access control: a protection system that defines the access
control specification, and a reference monitor, the access enforcement mechanism
that enforces this specification.

4.1 Basic Concepts


This section introduces some of the concepts that are commonly used in the access control.
Object: An entity that contains or receives information. Access to an object potentially implies
access to the information it contains. Examples of objects are records, fields (in a database record),
blocks, pages, segments, files, directories, directory trees, process, and programs, as well as
processors, video displays, keyboards, clocks, printers, and network nodes. Devices such as
electrical switches, disc drives, relays, and mechanical components connected to a computer
system may also be included in the category of objects.
Subject: An active entity, generally in the form of a person, process, or device, that causes
information to flow among objects or changes the system state.
Operation: An active process invoked by a subject; for example, when an automatic teller
machine (ATM) user enters a card and the correct personal identification number (PIN), the control
program operating on the user’s behalf is a subject, but the user can initiate more than one
operation: deposit, withdrawal, balance inquiry, etc.
Permission (Privilege) / Access Right: An authorization to perform some action on the system.
In most computer security literature, the term permission refers to some combination of object and
operation. A particular operation used on two different objects represents two distinct permissions,
and similarly, two different operations applied to a single object represent two distinct permissions.
For example, a bank teller may have permissions to execute debit and credit operations on
customer records through transactions, while an accountant may execute debit and credit
operations on the general ledger, which consolidates the bank’s accounting data.
Access Control List (ACL): A list associated with an object that specifies all the subjects that can
access the object, along with their rights to the object. Each entry in the list is a pair (subject, set
of rights). An ACL corresponds to a column of the access control matrix. ACLs are frequently
implemented directly or as an approximation in modern operating systems.
Access Control Matrix: A table in which each row represents a subject, each column represents
an object, and each entry is the set of access rights for that subject to that object. In general, the
access control matrix is sparse: most subjects do not have access rights to most objects. Therefore,
different representations have been proposed. The access control matrix can be represented as a
list of triples of the form <subject, object, rights>. Searching a large number of these triples is
inefficient enough that this implementation is seldom used. Rather, the matrix is typically
subdivided into columns (ACLs) or rows (capabilities).
Policies, Models, and Mechanisms
When planning an access control system, three abstractions of controls should be considered:
access control policies, models, and mechanisms. Access control policies are high-level
requirements that specify how access is managed and who, under what circumstances, may access
what information. While access control policies can be application-specific and thus taken into
consideration by the application vendor, policies are just as likely to pertain to user actions within
the context of an organizational unit or across organizational boundaries. For instance, policies
may pertain to resource usage within or across organizational units or may be based on need-to-
know, competence, authority, obligation, or conflict-of-interest factors. Such policies may span
multiple computing platforms and applications.
At a high level, access control policies are enforced through a mechanism that translates a user’s
access request, often in terms of a structure that a system provides. There are a wide variety of
structures; for example, a simple table lookup can be performed to grant or deny access. Although
no well-accepted standard yet exists for determining their policy support, some access control
mechanisms are direct implementations of formal access control policy concepts.
Rather than attempting to evaluate and analyze access control systems exclusively at the
mechanism level, security models are usually written to describe the security properties of an
access control system. A model is a formal presentation of the security policy enforced by the
system and is useful for proving theoretical limitations of a system. Access control models are of
general interest to both users and vendors. They bridge the rather wide gap in abstraction between
policy and mechanism. Access control mechanisms can be designed to adhere to the properties of
the model. Users see an access control model as an unambiguous and precise expression of
requirements. Vendors and system developers see access control models as design and
implementation requirements. On one extreme, an access control model may be rigid in its
implementation of a single policy. On the other extreme, a security model will allow for the
expression and enforcement of a wide variety of policies and policy classes. As stated previously,
the focus of this document is on the practical side of the access control system; detailed
descriptions of access control models are not included in this publication.
Generating a list of access control policies is of limited value, since business objectives, tolerance
for risk, corporate culture, and the regulatory responsibilities that influence policy differ from
enterprise to enterprise, and even from organizational unit to organizational unit. The access
control policies within a hospital may pertain to privacy and competency (e.g., only doctors and
nurse practitioners may prescribe medication), and hospital policies will differ greatly from those
of a military system or a financial institution. Even within a specific business domain, policy will
differ from institution to institution. Furthermore, access control policies are dynamic in nature, in
that they are likely to change over time in reflection of ever-evolving business factors, government
regulations, and environmental conditions. There are several well-known access control policies,
which can be categorized as discretionary or non-discretionary. Typically, discretionary access
control policies are associated with identity-based access control, and non-discretionary access
controls are associated with rule-based controls (for example, mandatory security policy).
Discretionary Access Control (DAC)
DAC leaves a certain amount of access control to the discretion of the object's owner or anyone
else who is authorized to control the object's access. For example, it is generally used to limit a
user's access to a file; it is the owner of the file who controls other users' accesses to the file. Only
those users specified by the owner may have some combination of read, write, execute, and other
permissions to the file. DAC policy tends to be very flexible and is widely used in the commercial
and government sectors. However, DAC is known to be inherently weak for two reasons.
First, granting read access is transitive; for example, when Ann grants Bob read access to a file,
nothing stops Bob from copying the contents of Ann’s file to an object that Bob controls. Bob may
now grant any other user access to the copy of Ann’s file without Ann’s knowledge. Second, DAC
policy is vulnerable to Trojan horse attacks. Because programs inherit the identity of the invoking
user, Bob may, for example, write a program for Ann that, on the surface, performs some useful
function, while at the same time destroys the contents of Ann’s files. When investigating the
problem, the audit files would indicate that Ann destroyed her own files. Thus, formally, the
drawbacks of DAC are as follows:
 Information can be copied from one object to another; therefore, there is no real assurance
on the flow of information in a system.

 No restrictions apply to the usage of information when the user has received it.

 The privileges for accessing objects are decided by the owner of the object, rather than
through a system-wide policy that reflects the organization’s security requirements.
ACLs and owner/group/other access control mechanisms are by far the most common mechanism
for implementing DAC policies. Other mechanisms, even though not designed with DAC in mind,
may have the capabilities to implement a DAC policy.
Non-Discretionary Access Control
In general, all access control policies other than DAC are grouped in the category of
nondiscretionary access control (NDAC). As the name implies, policies in this category have rules
that are not established at the discretion of the user. Non-discretionary policies establish controls
that cannot be changed by users, but only through administrative action. Separation of duty (SOD)
policy can be used to enforce constraints on the assignment of users to roles or tasks. An example
of such a static constraint is the requirement that two roles be mutually exclusive; if one role
requests expenditures and another approves them, the organization may prohibit the same user
from being assigned to both roles.
So, membership in one role may prevent the user from being a member of one or more other roles,
depending on the SOD rules, such as Work Flow and Role-Based Access Control (see the
following sections). Another example is a history-based SOD policy that regulates, for example,
whether the same subject (role) can access the same object a certain number of times. Three
popular non-discretionary access control policies are discussed in this section.
Mandatory Access Control (MAC)
Mandatory Access Control (MAC) policy means that access control policy decisions are made by
a central authority, not by the individual owner of an object, and the owner cannot change access
rights. An example of MAC occurs in military security, where an individual data owner does not
decide who has a Top Secret clearance, nor can the owner change the classification of an object
from Top Secret to Secret. MAC is the most mentioned NDAC policy.
The need for a MAC mechanism arises when the security policy of a system dictates that:
1. Protection decisions must not be decided by the object owner.
2. The system must enforce the protection decisions (i.e., the system enforces the security
policy over the wishes or intentions of the object owner).
Usually a labeling mechanism and a set of interfaces are used to determine access based on the
MAC policy; for example, a user who is running a process at the Secret classification should not
be allowed to read a file with a label of Top Secret. This is known as the “simple security rule,” or
“no read up.” Conversely, a user who is running a process with a label of Secret should not be
allowed to write to a file with a label of Confidential. This rule is called the “*-property”
(pronounced “star property”) or “no write down.”
The *-property is required to maintain system security in an automated environment. A variation
on this rule called the “strict *-property” requires that information can be written at, but not above,
the subject’s clearance level. Multilevel security models such as the Bell-La Padula Confidentiality
and Biba Integrity models are used to formally specify this kind of MAC policy.
However, information can pass through a covert channel in MAC, where information of a higher
security class is deduced by inference such as assembling and intelligently combining information
of a lower security class.

Figure 4.1 A Mandatory Protection System: The protection state is defined in terms of
labels and is immutable. The immutable labeling state and transition state enable the
definition and management of labels for system subjects and objects.
Role-Based Access Control
Although RBAC is technically a form of non-discretionary access control, recent computer
security texts often list RBAC as one of the three primary access control policies (the others are
DAC and MAC). In RBAC, access decisions are based on the roles that individual users have as
part of an organization. Users take on assigned roles (such as doctor, nurse, teller, or manager).
Access rights are grouped by role name, and the use of resources is restricted to individuals
authorized to assume the associated role.
For example, within a hospital system, the role of doctor can include operations to perform a
diagnosis, prescribe medication, and order laboratory tests; the role of researcher can be limited to
gathering anonymous clinical information for studies. The use of roles to control access can be an
effective means for developing and enforcing enterprise-specific security policies and for
streamlining the security management process.
Under RBAC, users are granted membership into roles based on their competencies and
responsibilities in the organization. The operations that a user is permitted to perform are based on
the user's role. User membership into roles can be revoked easily and new memberships established
as job assignments dictate. Role associations can be established when new operations are
instituted, and old operations can be deleted as organizational functions change and evolve. This
simplifies the administration and management of privileges; roles can be updated without updating
the privileges for every user on an individual basis.
When a user is associated with a role, the user can be given no more privilege than is necessary to
perform the job; since many of the responsibilities overlap between job categories, maximum
privilege for each job category could cause unauthorized access. This concept of least privilege
requires identifying the user's job functions, determining the minimum set of privileges required
to perform those functions, and restricting the user to a domain with those privileges and nothing
more. In less precisely controlled systems, least privilege is often difficult or costly to achieve
because it is difficult to tailor access based on various attributes or constraints. Role hierarchies
can be established to provide for the natural structure of an enterprise. A role hierarchy defines
roles that have unique attributes and that may contain other roles; that is, one role may implicitly
include the operations that are associated with another role.
Temporal Constraints
Temporal constraints are formal statements of access policies that involve time-based restrictions
on access to resources; they are required in several application scenarios. In some applications,
temporal constraints may be required to limit resource use. In other types of applications, they may
be required for controlling time-sensitive activities. It is these time-based constraints (in addition
to other constraints like workflow precedence relationships) that must be evaluated for generating
dynamic authorizations during workflow execution time. Temporal constraints may be required
in non-workflow environments as well.
For example, in a commercial banking enterprise, an employee should be able to assume the role
of a teller (to perform transactions on customer accounts) only during designated banking hours
(such as 9 a.m. to 2 p.m., Monday through Friday, and 9 a.m. to 12 p.m. on Saturday). To meet
this requirement, it is necessary to specify temporal constraints that limit role availability and
activation capability only to those designated banking hours.
Popular access control policies related to temporal constraints are the history-based access control
policies, which are not supported by any standard access control mechanism but have practical
application in many business operations such as task transactions and separation of conflicts-of-
interests. History-based access control is defined in terms of subjects and events where the events
of the system are specified as the object access operations associated with activity at a particular
security level. This assures that the security policy is defined in terms of the sequence of events
over time, and that the security policy decides which events of the system are permitted to ensure
that information does not “flow” in an unauthorized manner. Popular history-based access control
policies are Workflow and Chinese wall.

4.2 Reference Monitor


A reference monitor is the classical access enforcement mechanism. Figure 4.2 presents a
generalized view of a reference monitor. It takes a request as input, and returns a binary response
indicating whether the request is authorized by the reference monitor’s access control policy. We
identify three distinct components of a reference monitor: (1) its interface; (2) its authorization
module; and (3) its policy store. The interface defines where the authorization module needs to be
invoked to perform an authorization query to the protection state, a labeling query to the labeling
state, or a transition query to the transition state. The authorization module determines the exact
queries that are to be made to the policy store. The policy store responds to authorization, labeling,
and transition queries based on the protection system that it maintains.
Reference Monitor Interface: The reference monitor interface defines where protection system
queries are made to the reference monitor. In particular, it ensures that all security-sensitive
operations are authorized by the access enforcement mechanism. By a security-sensitive operation,
we mean an operation on a particular object (e.g., file, socket, etc.) whose execution may violate
the system’s security requirements.
For example, an operating system implements file access operations that would allow one user to
read another’s secret data (e.g., private key) if not controlled by the operating system. Labeling
and transitions may be executed for authorized operations.

Figure 4.2 Reference Monitor


A reference monitor is a component that authorizes access requests at the reference monitor
interface defined by individual hooks that invoke the reference monitor’s authorization module to
submit an authorization query to the policy store. The policy store answers authorization queries,
labeling queries, and label transition queries using the corresponding states.
The reference monitor interface determines where access enforcement is necessary and the
information that the reference monitor needs to authorize that request. In a traditional UNIX file
open request, the calling process passes a file path and a set of operations. The reference monitor
interface must determine what to authorize (e.g., directory searches, link traversals, and finally the
operations for the target file’s inode), where to perform such authorizations (e.g., authorize a
directory search for each directory inode in the file path), and what information to pass to the
reference monitor to authorize the open (e.g., an inode reference). Incorrect interface design may
allow an unauthorized process to gain access to a file.
Authorization Module: The core of the reference monitor is its authorization module. The
authorization module takes interface’s inputs (e.g., process identity, object references, and system
call name), and converts these to a query for the reference monitor’s policy store. The challenge
for the authorization module is to map the process identity to a subject label, the object references
to an object label, and determine the actual operations to authorize (e.g., there may be multiple
operations per interface). The protection system determines the choices of labels and operations,
but the authorization module must develop a means for performing the mapping to execute the
“right” query.
For the open request above, the module responds to the individual authorization requests from the
interface separately. For example, when a directory in the file path is requested, the authorization
module builds an authorization query. The module must obtain the label of the subject responsible
for the request (i.e., requesting process), the label of the specified directory object (i.e., the
directory inode), and the protection state operations implied by the request (e.g., read or search the
directory). In some cases, if the request is authorized by the policy store, the module may make
subsequent requests to the policy store for labeling (i.e., if a new object were created) or label
transitions.
Policy Store: The policy store is a database for the protection state, labeling state, and transition
state. An authorization query from the authorization module is answered by the policy store. These
queries are of the form {subject_label, object_label, operation_set} and return a binary
authorization reply. Labeling queries are of the form {subject_label, resource} where the
combination of the subject and, optionally, some system resource attributes determine the resultant
resource label returned by the query. For transitions, queries include the {subject_label,
object_label, operation, resource}, where the policy store determines the resultant label of the
resource. The resource may be either an active entity (e.g., a process) or a passive object (e.g.,
a file). Some systems also execute queries to authorize transitions.
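The three components can be put together in a highly simplified Python sketch; the labels, operations, and protection state below are assumptions for illustration, not a description of any particular operating system.

class PolicyStore:
    def __init__(self):
        # Protection state: (subject_label, object_label) -> set of permitted operations.
        self.protection_state = {
            ("user_t", "user_file_t"): {"read", "write"},
            ("user_t", "system_file_t"): {"read"},
        }

    def authorize(self, subject_label, object_label, operation_set):
        # Binary authorization reply to a {subject_label, object_label, operation_set} query.
        allowed = self.protection_state.get((subject_label, object_label), set())
        return operation_set <= allowed


class ReferenceMonitor:
    def __init__(self, policy_store):
        self.policy_store = policy_store

    def hook(self, process_label, object_label, operations):
        # Interface hook: every security-sensitive operation must pass through here.
        return self.policy_store.authorize(process_label, object_label, frozenset(operations))


monitor = ReferenceMonitor(PolicyStore())
print(monitor.hook("user_t", "system_file_t", {"read"}))            # True
print(monitor.hook("user_t", "system_file_t", {"read", "write"}))   # False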

4.3 Access Control Model


Access control models provide a model for developers who need to implement access control
functionality in their software and devices. Instead of having to reinvent the wheel for every system
and design a complex access control system, developers can write a system based on existing well
thought-out models. There are three different types of access control models, which you need to
be able to explain and differentiate: MAC, DAC, and RBAC.

Access Control Matrix


An Access Control Matrix is a table that maps the permissions of a set of subjects to act upon a set
of objects within a system. The matrix is a two-dimensional table with subjects down the rows
and objects across the columns. The permissions of a subject to act upon a particular object are found
in the cell that maps that subject to that object.
Access to any type of information is regulated by organizations having either physical or logical
access controls in place, with some organizations offering both.

 'Physical' controls are present to prevent a person from entering office spaces, buildings
or structures.
 'Logical' controls are designed to limit electronic access to a network, a computer, digital
files, and stored data.

An Access Control Matrix is a single digital file or written record having 'subjects' and 'objects'
that identifies what actions, if any, are permitted for particular individuals. In simple terms, the matrix allows
only certain people (subjects) to access certain information (objects). As shown in the table below,
the matrix consists of one or more subjects (i.e., people) along one axis and the associated objects
(i.e., files) along the other axis. Certain people are allowed to read (R), write (W), execute (E)
and delete (D) files. Restricted access is indicated by a dash.

For example, you can see that Gary is allowed to delete the promotion list, whereas Sandy is
allowed to Read or execute it. Tonya has full access, while Samantha has no access whatsoever.

Figure 4.3 Access Control Matrix
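A minimal Python sketch of the matrix just described follows; the cell contents are reconstructed from the example above (a dash in the table corresponds to an empty set), and any rights not mentioned in the text are simply omitted.

matrix = {
    # subject ->  { object: set of rights }
    "Gary":     {"promotion_list": {"D"}},
    "Sandy":    {"promotion_list": {"R", "E"}},
    "Tonya":    {"promotion_list": {"R", "W", "E", "D"}},   # full access
    "Samantha": {"promotion_list": set()},                  # no access whatsoever
}

def allowed(subject, obj, right):
    return right in matrix.get(subject, {}).get(obj, set())

print(allowed("Gary", "promotion_list", "D"))      # True
print(allowed("Samantha", "promotion_list", "R"))  # False

# A column of the matrix is an ACL; a row is a capability list.
acl_for_promotion_list = {s: rights["promotion_list"] for s, rights in matrix.items()}
capability_list_for_sandy = matrix["Sandy"]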

Access Control List (ACL)

Resource-oriented access controls, such as access control lists (ACLs), are the most common
access control mechanism in use and by far the most common mechanism for implementing DAC
policies. An ACL is an example of a policy concept that is frequently implemented directly or as
an approximation in modern operating systems. An ACL associates the permitted operation to an
object and specifies all the subjects that can access the object, along with their rights to the object.
That is, each entry in the list is a pair (subject, set of rights). From another point of view, an ACL
corresponds to a column of the access control matrix. ACLs provide a straightforward way of
granting or denying access for a specified user or groups of users. Without the use of access control
lists, granting access to the level of granularity of a single user can be cumbersome.
The composition of an ACL entry is as follows:

 Tag type specifies that the ACL entry is one of the following: the file owner, the owning
group, a specific named user, a specific named group, or other (meaning all other users).
 Qualifier field describes the specific instance of the tag type. For specific named users and
specific named groups, the qualifier field contains the userid and groupid, respectively.
Qualifier fields for the owner entry and owning group entries are not relevant because this
information is specified elsewhere.
 Set of permissions specifies the access rights such as read, write, or execute for the entry.

For example

user: rwx (r is read, w is write, and x is execute)

File owner tag (with empty qualifier) specifies the owning user has read, write, and execute
permission.

user.userid:rw-

Named user tag specifies the user userid (qualifier) has read and write permission.

group:r-x

Group owner tag (with empty qualifier) specifies the owning group has read and execute
permission.

group.groupid:--x

Named group tag specifies the group groupid (qualifier) has execute permission.

other:r--

Other tag (with empty qualifier) specifies all other users (than the owning and named users and
groups) have the read permission.
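The entries above can be interpreted with a small Python sketch; it is a simplification that ignores the ACL mask and the exact POSIX evaluation order, and the user and group identifiers are hypothetical.

# ACL for one file, keyed by (tag type, qualifier); None means an empty qualifier.
acl = {
    ("user", None): "rwx",        # file owner
    ("user", "alice"): "rw-",     # named user (hypothetical userid)
    ("group", None): "r-x",       # owning group
    ("group", "staff"): "--x",    # named group (hypothetical groupid)
    ("other", None): "r--",       # all other users
}

def permits(perms: str, wanted: str) -> bool:
    # True if a permission string such as "rw-" grants the wanted right ("r", "w" or "x").
    return wanted in perms

def check(userid, groupids, wanted):
    # Simplified order: named user entry, then named group entries, then other.
    if ("user", userid) in acl:
        return permits(acl[("user", userid)], wanted)
    for g in groupids:
        if ("group", g) in acl:
            return permits(acl[("group", g)], wanted)
    return permits(acl[("other", None)], wanted)

print(check("alice", ["staff"], "w"))   # True: named user entry rw-
print(check("bob", ["staff"], "w"))     # False: named group entry --x has no write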

Capability List (CL)

Another type of access control is the capability list or access list. A capability is a key to a specific
object, along with a mode of access (read, write, or execute). In a capability system, access to an
object is allowed if the subject that is requesting access possesses a capability for the object. It can
thus be thought of as the inverse of an access control list: an ACL is attached to an object and
specifies which subjects may access the object, while a capability list is attached to a subject and
specifies which objects the subject may access. A capability is a protected identifier that both
identifies the object and specifies the operations to be allowed to the user who possesses the
capability. This approach corresponds to storing the access control matrix by rows.
Table 4.3 Presents an example of a capability list

The capability list in the table is associated with a subject and specifies the subject’s rights. Each
entry in the list is a capability - a pair <object, set of operations>. A subject possessing a capability
is proof of the subject having the access privileges. So, it can be useful to think of capabilities as
being similar to tickets. However, unlike tickets, in some systems capabilities can be copied, and
there may be the potential for the possessor of a capability to give a copy to someone else (this
capacity is itself often represented as a “right”).

A capability list corresponds to a row of the access control matrix. The principal advantage of
capabilities is that it is easy to review all accesses that are authorized for a given subject. The most
successful use of capabilities is at lower levels in the system, where capabilities provide the
underlying protection mechanism and not the user-visible access control scheme. At the highest
levels in the system, the system maintains a list of capabilities for each user. Users cannot add
capabilities to this list except to cover new files that they create. Users might, however, be allowed
to give access to objects by passing copies of their own capabilities to other users, and they might
be able to revoke access to their own objects by taking away capabilities from others (although
revocation can be difficult to implement).
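A minimal Python sketch of a capability list follows; the subjects, objects, and the copying of a capability to another subject are illustrative assumptions.

# Each subject holds a list of capabilities: <object, set of operations> pairs.
capabilities = {
    "process_A": [("file1", {"read", "write"}), ("printer", {"execute"})],
    "process_B": [("file1", {"read"})],
}

def access(subject, obj, operation):
    # Access is granted only if the subject possesses a capability covering the request.
    return any(o == obj and operation in ops for o, ops in capabilities.get(subject, []))

print(access("process_B", "file1", "read"))    # True
print(access("process_B", "file1", "write"))   # False

# In some systems a capability can be copied and passed to another subject,
# which is why reviewing who can reach a given object is harder than with ACLs.
capabilities.setdefault("process_C", []).append(("file1", {"read"}))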

4.4 Bell-LaPadula Model

The Bell-LaPadula Model (abbreviated BLP) is a state machine model used for enforcing access
control in government and military applications. It was developed by David Elliott Bell and
Leonard J. LaPadula, subsequent to strong guidance from Roger R. Schell to formalize the U.S.
Department of Defense (DoD) multilevel security (MLS) policy.

The model is a formal state transition model of computer security policy that describes a set of
access control rules which use security labels on objects and clearances for subjects. Security labels
range from the most sensitive (e.g."Top Secret"), down to the least sensitive (e.g., "Unclassified"
or "Public"). The Bell-LaPadula model is an example of a model where there is no clear distinction
of protection and security.
Features

The Bell-LaPadula model focuses on data confidentiality and controlled access to classified
information, in contrast to the Biba Integrity Model which describes rules for the protection of data
integrity. In this formal model, the entities in an information system are divided into subjects and
objects. The notion of a "secure state" is defined, and it is proven that each state transition preserves
security by moving from secure state to secure state, thereby inductively proving that the system
satisfies the security objectives of the model. The Bell-La Padula model is built on the concept of
a state machine with a set of allowable states in a computer network system. The transition from
one state to another state is defined by transition functions.

A system state is defined to be "secure" if the only permitted access modes of subjects to objects
are in accordance with a security policy. To determine whether a specific access mode is allowed,
the clearance of a subject is compared to the classification of the object (more precisely, to the
combination of classification and set of compartments, making up the security level) to determine
if the subject is authorized for the specific access mode. The clearance/classification scheme is
expressed in terms of a lattice. The model defines two mandatory access control (MAC) rules and
one discretionary access control (DAC) rule with three security properties:

1. The Simple Security Property - a subject at a given security level may not read an object
at a higher security level (no read-up).

2. The *-property (read "star"-property) - a subject at a given security level must not
write to any object at a lower security level (no write-down). The *-property is also known
as the Confinement property.

3. The Discretionary Security Property - use of an access matrix to specify the
discretionary access control.

The transfer of information from a high-sensitivity document to a lower-sensitivity document may
happen in the Bell-LaPadula model via the concept of trusted subjects. Trusted subjects are not
restricted by the *-property; untrusted subjects are. Trusted subjects must be shown to be
trustworthy with regard to the security policy. This security model is directed toward access control
and is characterized by the phrase: "no read up, no write down." Compare the Biba model, the
Clark-Wilson model and the Chinese wall model.

With Bell-LaPadula, users can create content only at or above their own security level (i.e. secret
researchers can create secret or top-secret files but may not create public files; no write-down).
Conversely, users can view content only at or below their own security level (i.e. secret researchers
can view public or secret files, but may not view top-secret files; no read-up).
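A minimal Python sketch of the two mandatory rules follows, assuming a simple linear ordering of levels; compartments, trusted subjects, and the discretionary property are omitted.

LEVELS = {"Unclassified": 0, "Confidential": 1, "Secret": 2, "Top Secret": 3}

def may_read(subject_level: str, object_level: str) -> bool:
    # Simple security property: no read up.
    return LEVELS[subject_level] >= LEVELS[object_level]

def may_write(subject_level: str, object_level: str) -> bool:
    # *-property: no write down.
    return LEVELS[subject_level] <= LEVELS[object_level]

print(may_read("Secret", "Top Secret"))     # False: reading up is denied
print(may_read("Secret", "Unclassified"))   # True:  reading down is permitted
print(may_write("Secret", "Confidential"))  # False: writing down is denied
print(may_write("Secret", "Top Secret"))    # True:  writing up is permitted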
Limitations

 Only addresses confidentiality, control of writing (one form of integrity), *-property and
discretionary access control
 Covert channels are mentioned but are not addressed comprehensively
 The tranquility principle limits its applicability to systems where security levels do not
change dynamically. It allows controlled copying from high to low via trusted subjects.

Classification Levels

Although the classification systems vary from country to country, most have levels corresponding
to the following British definitions (from the highest level to lowest):

Top Secret (TS)
The highest level of classification of material on a national level. Such material would cause
"exceptionally grave damage" to national security if made publicly available.
Secret
Such material would cause "grave damage" to national security if it were publicly available.
Confidential
Such material would cause "damage" or be "prejudicial" to national security if publicly available.
Restricted
Such material would cause "undesirable effects" if publicly available. Some countries do not have
such a classification.
Unclassified
Technically not a classification level, but is used for government documents that do not have a
classification listed above. Such documents can sometimes be viewed by those without security
clearance.

4.5 Biba Model


The motivation for creating the Biba model was for preserving the integrity in a computer system.
The model prevents the unauthorized modification of data and maintains the consistency of the
data. The Biba model assigns the subjects and objects an integrity label. The integrity label is used
to tell the degree of confidence that may be placed in the data. The Biba model contains four access
modes: modify, observe, invoke, and execute.
The Biba model defines a family of different policies that can be used to enforce integrity. The
Biba model supports five mandatory policies: strict integrity policy, low watermark policy for
subjects, low-watermark for objects, low-watermark integrity audit policy, and ring policy. Three
discretionary policies have been established: access control lists, object hierarchy, and ring. In
practice the mandatory policies of the Biba model are most often used.
The most restrictive of the policies of the Biba model is the strict integrity property. The strict
integrity property is essentially a reverse of the Bell-LaPadula model. The strict integrity model is
composed of three properties. First, the simple integrity condition allows a subject to observe
(read) an object only if the integrity level of the subject is less than the integrity level of the object.
The simple integrity condition enforces “no read down”. Second, the integrity star property states
that a subject can modify (write to) an object only if the integrity level of the object is less than the
integrity level of the subject.
The integrity star property enforces “no write up”. Last, in the invocation property, a subject can
invoke another subject only if the integrity level of the subject being invoked is less than that of the
subject that invoked it. Fig. 1 and Fig. 2 illustrate the simple integrity condition and the integrity
star property, respectively.
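A minimal Python sketch of the simple integrity condition and the integrity star property follows, assuming a simple linear ordering of integrity levels; the invocation property and the dynamic (low-watermark) policies are omitted.

INTEGRITY = {"Untrusted": 0, "User": 1, "System": 2}

def may_observe(subject_level: str, object_level: str) -> bool:
    # Simple integrity condition: no read down.
    return INTEGRITY[subject_level] <= INTEGRITY[object_level]

def may_modify(subject_level: str, object_level: str) -> bool:
    # Integrity star property: no write up.
    return INTEGRITY[subject_level] >= INTEGRITY[object_level]

print(may_observe("System", "Untrusted"))  # False: a high-integrity subject must not read low-integrity data
print(may_modify("Untrusted", "System"))   # False: a low-integrity subject must not write high-integrity data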

Based on the strict integrity condition, a number of other policies were established. Each of the
other policies is less restrictive than the strict integrity policy. The model has a number of dynamic
policies that lower the integrity levels of either subjects or objects based on the access mode
operation. The Biba model is not without its problems. The model does not support the granting
and revocation of authorizations. Another problem is that the model is used strictly for integrity;
it does nothing to enforce confidentiality. For these reasons, to use the Biba model, it should be
combined with other security models. The Lipner Integrity model is one model that combines the
Biba model and the Bell-LaPadula models together to enforce both integrity and confidentiality.
There are two key consequences of the Biba model, called the low water mark principle and the
high water mark principle. The low water mark principle means that an asset’s integrity is only as
good as that of the least-trusted subject that can write to it. The high water mark principle means that when
assets combine they take on the highest level of their contributors. In practice the high water mark
principle means that over time assets will tend to take on higher classifications.
The execute property is not always included in descriptions of Biba, but it is added here to point
out that integrity would be violated if computer processes could elevate in privilege. In fact many
viruses attempt to do just that. Together, the BLP and Biba models point out the difficulty of
attaining security in a multi-level model. The characteristics that ensure confidentiality fail to
ensure integrity, and the characteristics that ensure integrity fail to ensure proper
confidentiality.

4.6 What is a Firewall?


A firewall is a system that enforces an access control policy between two networks—such as your
private LAN and the unsafe, public Internet. The firewall determines which inside services can be
accessed from the outside, and vice versa. The actual means by which this is accomplished varies
widely, but in principle, the firewall can be thought of as a pair of mechanisms: one to block traffic,
and one to permit traffic. A firewall is more than the locked front door to your network—it’s your
security guard as well.
Firewalls are also important because they provide a single “choke point” where security and audits
can be imposed. A firewall can provide a network administrator with data about what kinds and
amount of traffic passed through it, how many attempts were made to break into it, and so on. Like
a closed circuit security TV system, your firewall not only prevents access, but also monitors who’s
been sniffing around, and assists in identifying those who attempt to breach your security.

Basic Purpose of a Firewall


Basically, a firewall does three things to protect your network:
 It blocks incoming data that might contain a hacker attack.
 It hides information about the network by making it seem that all outgoing traffic originates
from the firewall rather than the network. This is called Network Address Translation (NAT).
 It screens outgoing traffic to limit Internet use and/or access to remote sites.
Screening Levels
A firewall can screen both incoming and outgoing traffic. Because incoming traffic poses a greater
threat to the network, it’s usually screened more closely than outgoing traffic. When you are
looking at firewall hardware or software products, you’ll probably hear about three types of
screening that firewalls perform:
 Screening that blocks any incoming data not specifically ordered by a user on the network
 Screening by the address of the sender
 Screening by the contents of the communication
Think of screening levels as a process of elimination. The firewall first determines whether the
incoming transmission is something requested by a user on the network, rejecting anything else.
Anything that is allowed in is then examined more closely. The firewall checks the sender’s
computer address to ensure that it is a trusted site. It also checks the contents of the transmission.
Firewall Technologies
Firewalls come in all shapes, sizes, and prices. Choosing the correct one depends mainly on your
business requirements and the size of your network. This section discusses the different types of
firewall technologies and formats available.
Above all, no matter what type of firewall you choose or its functionality, you must ensure that it
is secure and that a trusted third party, such as the International Computer Security Association
(ICSA), has certified it. The ICSA classifies firewalls into three categories: packet filter firewalls,
application-level proxy servers, and stateful packet inspection firewalls.
Packet Filter Firewall
Every computer on a network has an address commonly referred to as an IP address. A packet
filter firewall checks the address of incoming traffic and turns away anything that doesn’t match
the list of trusted addresses. The packet filter firewall uses rules to deny access according to
information located in each packet such as: the TCP/IP port number, source/destination IP address,
or data type. Restrictions can be as tight or as loose as you want.
An ordinary router on a network may be able to screen traffic by address, but hackers have a little
trick called source IP spoofing that makes data appear to come from a trusted source, even from
your own network. Unfortunately, packet filter firewalls are prone to IP spoofing and are also
difficult and confusing to configure. And any mistake in configuration could potentially leave you
wide open to attack.
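A minimal Python sketch of rule-based packet filtering follows; the rule set and addresses are illustrative, and note that a spoofed source address would pass such a check, which is exactly the weakness described above.

RULES = [
    # (action, source address prefix, protocol, destination port)
    ("allow", "192.168.1.", "tcp", 80),   # trusted LAN to the web server
    ("allow", "192.168.1.", "tcp", 25),   # trusted LAN to the mail server
    ("deny",  "",           "tcp", 23),   # block telnet from anywhere
]
DEFAULT_ACTION = "deny"                    # anything not explicitly allowed is rejected

def filter_packet(src_ip: str, proto: str, dst_port: int) -> str:
    for action, prefix, rule_proto, rule_port in RULES:
        if src_ip.startswith(prefix) and proto == rule_proto and dst_port == rule_port:
            return action
    return DEFAULT_ACTION

print(filter_packet("192.168.1.10", "tcp", 80))   # allow
print(filter_packet("203.0.113.7", "tcp", 23))    # deny (explicit rule)
print(filter_packet("203.0.113.7", "tcp", 80))    # deny (default action)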
Application-Level Proxy Server
An application-level proxy server examines the application used for each individual IP packet to
verify its authenticity. Traffic from each application—such as HTTP for Web, FTP for file
transfers, and SMTP/POP3 for e-mail—typically requires the installation and configuration of a
different application proxy. Proxy servers often require administrators to reconfigure their network
settings and applications (i.e., Web browsers) to support the proxy, and this can be a labor intensive
process.
Stateful Packet Inspection Firewall
This is the latest generation in firewall technology. Stateful packet inspection is considered by
Internet experts to be the most advanced and secure firewall technology because it examines all
parts of the IP packet to determine whether to accept or reject the requested communication. The
firewall keeps track of all requests for information that originate from your network. Then it scans
each incoming communication to see if it was requested, and rejects anything that wasn’t.
Requested data proceeds to the next level of screening. The screening software determines the
state of each packet of data, hence the term stateful packet inspection.
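The request-tracking idea can be sketched in Python as follows; the connection tuples and addresses are illustrative, and real stateful inspection also tracks TCP state, sequence numbers, and timeouts.

# Outbound requests that are still outstanding: (inside host, inside port, outside host, outside port).
outstanding = set()

def outbound(src_host, src_port, dst_host, dst_port):
    outstanding.add((src_host, src_port, dst_host, dst_port))

def inbound_allowed(src_host, src_port, dst_host, dst_port):
    # An incoming packet is accepted only if it mirrors a request that originated inside.
    return (dst_host, dst_port, src_host, src_port) in outstanding

outbound("192.168.1.10", 51000, "198.51.100.5", 443)                 # a user opens a web page
print(inbound_allowed("198.51.100.5", 443, "192.168.1.10", 51000))   # True: this reply was requested
print(inbound_allowed("203.0.113.9", 443, "192.168.1.10", 51000))    # False: unsolicited traffic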
Additional Firewall Features and Functionality
In addition to the security capability of a firewall, a wide range of additional features and
functionalities are being integrated into standard firewall products. These include support for
public Web and e-mail servers, normally referred to as a demilitarized zone (DMZ), content
filtering, virtual private networking (VPN) encryption support, and antivirus support.
Demilitarized Zone Firewalls
A firewall that provides DMZ protection is effective for companies that invite customers to contact
their network from any external source, through the Internet or any other route—for example, a
company that hosts a Web site or sells its products or services over the Internet. The deciding
factors for a DMZ firewall would be the number of outsiders or external users who access
information on the network and how often they access it. A DMZ firewall creates a protected
(“demilitarized”) information area on the network. Outsiders can get to the protected area but can’t
get to the rest of the network. This allows outside users to get to the information you want them to
have and prevents them from getting to the information you don’t want them to have.

Figure 4.6 Types of Firewalls


Content Filtering
A Web site filter or content filter extends the firewall’s capability to block access to certain Web
sites. You can use this add-on to ensure that employees do not access particular content, such as
pornography or racially intolerant material. With this functionality you can define categories of
unwelcome material and obtain a service that lists thousands of Web sites that include such
material. You can then choose whether to totally block those sites, or to allow access but log it.
Such a service should automatically update its list of banned Web sites on a regular basis.

4.7 What is Intrusion Detection System - IDS?


Intrusion Detection is the art of detecting inappropriate, incorrect, or anomalous activity. Among
other tools, an Intrusion Detection System can be used to determine if a computer network or
server has experienced an unauthorized intrusion.
Intrusion detection systems (IDSs) are usually deployed along with other preventive security
mechanisms, such as access control and authentication, as a second line of defense that protects
information systems. There are several reasons that make intrusion detection a necessary part of
the entire defense system. First, many traditional systems and applications were developed without
security in mind. In other cases, systems and applications were developed to work in a different
environment and may become vulnerable when deployed Intrusion detection complements these
protective mechanisms to improve the system security.
Moreover, even if the preventive security mechanisms can protect information systems
successfully, it is still desirable to know what intrusions have happened or are happening, so that
we can understand the security threats and risks and thus be better prepared for future attacks.

Figure 4.7 Intrusion Detection System


4.8 Kerberos
Kerberos is an authentication service developed at MIT (Massachusetts Institute of
Technology) that uses symmetric key encryption techniques and a key distribution center; it is an
add-on system that can be used with an existing network. Kerberos provides a means of verifying the
identities of principals on an open (unprotected) network. This is accomplished without relying on
authentication by the host operating system, without basing trust on host address, without requiring
physical security of all the hosts on the network, and under the assumption that packets travelling
along the network can be read, modified, and inserted at will.
Kerberos performs authentication under these conditions as a trusted third party authentication
service by using conventional cryptography. It is trusted in the sense that each of its clients believes
Kerberos’s judgment as to the identity of each of its other clients to be accurate. The problem that
Kerberos addresses is this: a distributed system in which users at workstations wish to access
services on servers distributed throughout the network. We would like servers to be able to
restrict access to authorized users and to be able to authenticate requests for service. In this
system the following three threats exist:
1. A user may gain access to a particular workstation and pretend to be another user operating
from that workstation.
2. A user may alter the network address of a workstation so that the requests sent from the
altered workstation appear to come from the impersonated workstation.
3. A user may eavesdrop on exchanges and use a replay attack to gain entrance to a server or
to disrupt operations.
In any of these cases, an unauthorized user may be able to gain access to services and data that he
or she is not authorized to access. Kerberos provides a centralized authentication server whose
function is to authenticate users to servers and servers to users. Kerberos meets the following
requirements:
Secure: A network eavesdropper should not be able to obtain the necessary information to
impersonate a user.
Reliable: Kerberos should be highly reliable and should employ a distributed server architecture,
with one system able to back up another.
Transparent: The user should not be aware that the authentication is taking place, beyond the
requirement to enter a password.
Scalable: The system should be capable of supporting large numbers of clients and servers.
A full-service Kerberos environment, consisting of a Kerberos server, a number of clients and a
number of application servers, requires that the Kerberos server must have the user ID (UID) and
hashed passwords of all participating users in its database. All users are registered with the
Kerberos server. Such an environment is referred to as a realm. Moreover, the Kerberos server must
share a secret key with each server and every server is registered with the Kerberos server.
A simple authentication procedure must involve three steps:
1. The client C requests the user password and then sends a message to the AS of the Kerberos
system that includes the user’s ID, the server’s ID and the user’s password.
2. The AS checks its database to see if the user has supplied the proper password for this user
ID and whether this user is permitted access to the server V. If both tests are passed, the
AS accepts the user as authentic and must now convince the server that this user is authentic.
Thus the AS creates and sends back to C a ticket that contains the user’s ID and network
address and the server’s ID. Then it is encrypted with the secret key shared by the AS and
the server V.
3. C can now apply to V for the service. It sends a message to V containing C’s ID and the
ticket. V decrypts the ticket and verifies that the user ID in the ticket is the same as the one
that came with the ticket. If these two match, the server grants the requested service to
the client.
This scheme is correct, but it is not enough in a distributed environment: it has some security leaks
and requires that the user enter the password every time he needs to access a
service. Indeed, the client sends its password unencrypted to the AS; an eavesdropper could
capture the password and use any services accessible to the victim. Moreover, it is better to
minimize the number of times that the user has to enter a password.
To solve these additional problems, we introduce a scheme for avoiding plaintext passwords and
a new server, known as the Ticket-Granting Server (TGS). The new service issues tickets to users
who have been authenticated to the AS. Each time the user requires access to a new service, the client
applies to the TGS using the ticket supplied by the AS to authenticate itself. The TGS then grants
a ticket to the particular service and the client saves this ticket for future use.
1. The client requests a ticket-granting ticket on behalf of the user by sending the user’s ID to the
AS, together with the TGS ID, indicating a request to use the TGS service.

2. The AS responds with a message, encrypted with a key derived from the user’s password that
contains the ticket for the TGS. The encrypted message also contains a copy of the session key
used by C and the TGS. In this way, only the user’s client can read it. The same session key is
included in the ticket, which can be read only by the TGS. Now C and the TGS share a common
key.

3. Armed with the ticket and the session key, C is ready to approach the TGS. C sends the TGS
a message that includes the ticket plus the ID of the requested service. In addition, C transmits
an authenticator, which includes the ID and address of C’s user and a timestamp; it is encrypted
with the session key known only by C and the TGS. Unlike the ticket, which is reusable, the
authenticator is intended to be used only once and has a very short lifetime. The TGS can decrypt
the ticket with the key that it shares with the AS. This ticket indicates that the user C has been
provided with the session key Kc. Thus the TGS decrypts the authenticator with Kc obtained from
the ticket and checks the name and the address from the authenticator against those in the ticket and
against the network address of the incoming message. If all match, then the TGS is assured that
the sender of the ticket is indeed the ticket’s real owner.
4. Then the TGS replies to the client, sending a message encrypted with the common key that
they share. It includes a new session key to be used with the server V that must provide the
service, the ID of V, the ticket valid for a specific service and the timestamp of the ticket.
The ticket itself includes the new session key.

5. C now has a reusable service-granting ticket for V. When C presents this ticket, it also
sends an authenticator. The server can decrypt the ticket, recover the session key and
decrypt the authenticator.
Finally, at the conclusion of this process, the client and the server share a secret key. This key can
be used to encrypt future messages between the two or to exchange a new session key for that
purpose.
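The check performed by the TGS in step 3 (and again by server V in step 5) can be sketched as follows. Decryption of the ticket and the authenticator is omitted; the field names and the five-minute lifetime are assumptions made for illustration, not the exact Kerberos message formats.

import time

def accept_request(ticket, authenticator, source_addr, max_age=300.0):
    # The ID and address in the authenticator must match those sealed in the
    # ticket, and both must match the network address of the incoming message.
    if authenticator["id_c"] != ticket["id_c"]:
        return False
    if authenticator["ad_c"] != ticket["ad_c"] or ticket["ad_c"] != source_addr:
        return False
    # The authenticator is single-use and short-lived: reject stale timestamps.
    return abs(time.time() - authenticator["timestamp"]) <= max_age

ticket = {"id_c": "alice", "ad_c": "10.0.0.7", "id_tgs": "tgs", "session_key": "Kc"}
authenticator = {"id_c": "alice", "ad_c": "10.0.0.7", "timestamp": time.time()}
print(accept_request(ticket, authenticator, source_addr="10.0.0.7"))  # True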

Figure 4.8 Kerberos Authentication Dialogue


CHAPTER FIVE
NETWORK AND INTERNET SECURITY
5.1 Web Security
The World Wide Web is fundamentally a client/server application running over the Internet and
TCP/IP intranets. The reasons why Web security is needed include the following.
 The Internet is two-way. Unlike traditional publishing environments, and even electronic
publishing systems involving teletext, voice response, or fax-back, the Web is vulnerable to
attacks on the Web servers over the Internet.
 The Web is increasingly serving as a highly visible outlet for corporate and product
information and as the platform for business transactions. Reputations can be damaged and
money can be lost if the Web servers are subverted.
 Although Web browsers are very easy to use, Web servers are relatively easy to configure
and manage, and Web content is increasingly easy to develop, the underlying software is
extraordinarily complex. This complex software may hide many potential security flaws.
The short history of the Web is filled with examples of new and upgraded systems, properly
installed, that are vulnerable to a variety of security attacks.
 A Web server can be exploited as a launching pad into the corporation’s or agency’s entire
computer complex. Once the Web server is subverted, an attacker may be able to gain access
to data and systems not part of the Web itself but connected to the server at the local site.
 Casual and untrained (in security matters) users are common clients for Web-based services.
Such users are not necessarily aware of the security risks that exist and do not have the tools
or knowledge to take effective countermeasures.
Table 5.1 provides a summary of the types of security threats faced when using the Web. One way
to group these threats is in terms of passive and active attacks. Passive attacks include
eavesdropping on network traffic between browser and server and gaining access to information
on a Web site that is supposed to be restricted. Active attacks include impersonating another user,
altering messages in transit between client and server, and altering information on a Web site.
Another way to classify Web security threats is in terms of the location of the threat: Web server,
Web browser, and network traffic between browser and server. Issues of server and browser
security fall into the category of computer system security; the treatment of system security in
general elsewhere in this material is also applicable to Web system security.
A number of approaches to providing Web security are possible. The various approaches that have
been considered are similar in the services they provide and, to some extent, in the mechanisms
that they use, but they differ with respect to their scope of applicability and their relative location
within the TCP/IP protocol stack. Figure 5.1 illustrates this difference. One way to provide Web
security is to use IP security (IPsec) (Figure 5.1a). The advantage of using IPsec is that it is
transparent to end users and applications and provides a general-purpose solution.
Furthermore, IPsec includes a filtering capability so that only selected traffic need incur the
overhead of IPsec processing.

Another relatively general-purpose solution is to implement security just above TCP (Figure 5.1b).
The foremost example of this approach is the Secure Sockets Layer (SSL) and the follow-on
Internet standard known as Transport Layer Security (TLS). At this level, there are two
implementation choices. For full generality, SSL (or TLS) could be provided as part of the
underlying protocol suite and therefore be transparent to applications. Alternatively, SSL can be
embedded in specific packages. For example, Netscape and Microsoft Explorer browsers come
equipped with SSL, and most Web servers have implemented the protocol.
Application-specific security services are embedded within the particular application. Figure 5.1c
shows examples of this architecture. The advantage of this approach is that the service can be
tailored to the specific needs of a given application.
5.2 Secure Socket Layer and Transport Layer Security
SSL is designed to make use of TCP to provide a reliable end-to-end secure service. SSL is not a
single protocol but rather two layers of protocols, as illustrated in Figure 5.2. The SSL Record
Protocol provides basic security services to various higher-layer protocols. In particular, the
Hypertext Transfer Protocol (HTTP), which provides the transfer service for Web client/server
interaction, can operate on top of SSL. Three higher-layer protocols are defined as part of SSL:
the Handshake Protocol, the Change Cipher Spec Protocol, and the Alert Protocol. These SSL-
specific protocols are used in the management of SSL exchanges.
Two important SSL concepts are the SSL session and the SSL connection, which are defined in
the specification as follows.
 Connection: A connection is a transport (in the OSI layering model definition) that provides
a suitable type of service. For SSL, such connections are peer-to-peer relationships. The
connections are transient. Every connection is associated with one session.
 Session: An SSL session is an association between a client and a server. Sessions are created
by the Handshake Protocol. Sessions define a set of cryptographic security parameters which
can be shared among multiple connections. Sessions are used to avoid the expensive
negotiation of new security parameters for each connection.

Between any pair of parties (applications such as HTTP on client and server), there may be
multiple secure connections. In theory, there may also be multiple simultaneous sessions between
parties, but this feature is not used in practice. There are a number of states associated with each
session. Once a session is established, there is a current operating state for both read and write (i.e.,
receive and send). In addition, during the Handshake Protocol, pending read and write states are
created. Upon successful conclusion of the Handshake Protocol, the pending states become the
current states.
SSL Record Protocol
The SSL Record Protocol provides two services for SSL connections:
 Confidentiality: The Handshake Protocol defines a shared secret key that is used for
conventional encryption of SSL payloads.
 Message Integrity: The Handshake Protocol also defines a shared secret key that is
used to form a message authentication code (MAC).
Figure 5.3 indicates the overall operation of the SSL Record Protocol. The Record Protocol takes
an application message to be transmitted, fragments the data into manageable blocks, optionally
compresses the data, applies a MAC, encrypts, adds a header, and transmits the resulting unit in a
TCP segment. Received data are decrypted, verified, decompressed, and reassembled before being
delivered to higher-level users.
The first step is fragmentation. Each upper-layer message is fragmented into blocks of 2^14 bytes
(16,384 bytes) or less. Next, compression is optionally applied. Compression must be lossless and
may not increase the content length by more than 1024 bytes. In SSLv3 (as well as the current
version of TLS), no compression algorithm is specified, so the default compression algorithm is
null. The next step in processing is to compute a message authentication code over the
compressed data. For this purpose, a shared secret key is used.
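The MAC step can be illustrated with the following sketch, which uses HMAC-SHA-1 (the construction adopted by TLS) as a stand-in for the slightly different SSLv3 MAC; the key and the sequence-number handling are simplified.

import hashlib
import hmac
import struct
import zlib

mac_write_secret = b"shared secret negotiated by the Handshake Protocol"
seq_num = 0
fragment = zlib.compress(b"GET /index.html HTTP/1.1\r\nHost: example.com\r\n\r\n")

# MAC over the sequence number and the (compressed) fragment, before encryption.
mac_input = struct.pack(">Q", seq_num) + fragment
mac = hmac.new(mac_write_secret, mac_input, hashlib.sha1).digest()
print(len(mac))  # 20 bytes, the SHA-1 MAC length used in the example below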

Next, the compressed message plus the MAC are encrypted using symmetric encryption.
Encryption may not increase the content length by more than 1024 bytes, so that the total length
may not exceed 2^14 + 2048 bytes. Both stream and block encryption algorithms are permitted.
For stream encryption, the compressed message plus the MAC are encrypted. Note that the MAC
is computed before encryption takes place and that the MAC is then encrypted along with the
plaintext or compressed plaintext.
For block encryption, padding may be added after the MAC prior to encryption. The padding is in
the form of a number of padding bytes followed by a one-byte indication of the length of the
padding. The total amount of padding is the smallest amount such that the total size of the data to
be encrypted (plaintext plus MAC plus padding) is a multiple of the cipher’s block length.
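A small helper reproducing this padding rule is shown below; it computes the smallest amount of padding such that the plaintext, MAC, padding, and one-byte padding-length field together fill a whole number of cipher blocks, using the same figures as the worked example that follows.

def ssl_padding_length(plaintext_len, mac_len, block_len):
    # +1 accounts for the padding-length byte that ends the record.
    total = plaintext_len + mac_len + 1
    return (-total) % block_len  # smallest padding reaching a block multiple

# 58-byte plaintext, 20-byte SHA-1 MAC, 8-byte blocks (e.g., DES).
pad = ssl_padding_length(58, 20, 8)
print(pad, 58 + 20 + pad + 1)  # 1 padding byte, 80 bytes in total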
An example is a plaintext (or compressed text if compression is used) of 58 bytes, with a MAC of
20 bytes (using SHA-1), that is encrypted using a block length of 8 bytes (e.g., DES). With the
padding-length byte, this yields a total of 79 bytes. To make the total an integer multiple of 8, one
byte of padding is added. The final step of SSL Record Protocol processing is to prepare a header
consisting of the following fields:
Content Type (8 bits): The higher-layer protocol used to process the enclosed fragment.
Major Version (8 bits): Indicates major version of SSL in use. For SSLv3, the value is 3.
Minor Version (8 bits): Indicates minor version in use. For SSLv3, the value is 0.
Compressed Length (16 bits): The length in bytes of the plaintext fragment (or compressed
fragment if compression is used). The maximum value is 2^14 + 2048.
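Packing these header fields can be sketched as follows; the content-type value of 23 (application data) and the 80-byte length are illustrative.

import struct

content_type = 23          # application data, e.g., an HTTP fragment
major, minor = 3, 0        # SSLv3
fragment_length = 80       # e.g., the padded block-cipher example above

# One byte each for type, major and minor version, then a 16-bit length,
# all in network (big-endian) byte order.
header = struct.pack(">BBBH", content_type, major, minor, fragment_length)
print(header.hex())        # 1703000050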
Change Cipher Spec Protocol
The Change Cipher Spec Protocol is one of the three SSL-specific protocols that use the SSL
Record Protocol, and it is the simplest. This protocol consists of a single message (Figure 5.5a),
which consists of a single byte with the value 1. The sole purpose of this message is to cause the
pending state to be copied into the current state, which updates the cipher suite to be used on this
connection.

Handshake Protocol
The most complex part of SSL is the Handshake Protocol. This protocol allows the server and
client to authenticate each other and to negotiate an encryption and MAC algorithm and
cryptographic keys to be used to protect data sent in an SSL record. The Handshake Protocol is
used before any application data is transmitted. The Handshake Protocol consists of a series of
messages exchanged by client and server. Each message has three fields.

5.3 Virtual Private Network


Ideally, any organization would want its own private network for communication to ensure
security. However, it may be very costly to establish and maintain such a private network over a
geographically dispersed area. It would require managing a complex infrastructure of
communication links, routers, DNS, and so on.

IPsec provides an easy mechanism for implementing Virtual Private Network (VPN) for such
organizations. VPN technology allows organization’s inter-office traffic to be sent over public
Internet by encrypting traffic before entering the public Internet and logically separating it from
other traffic.

Figure 5.3 Virtual Private Network

Overview of IPsec
The IPsec suite can be considered to have two separate operations that, when performed in unison,
provide a complete set of security services. These two operations are IPsec Communication and
Internet Key Exchange.

IPsec Communication
It is typically associated with standard IPsec functionality. It involves encapsulation, encryption,
and hashing the IP datagrams and handling all packet processes. It is responsible for managing the
communication according to the available Security Associations (SAs) established between
communicating parties. It uses security protocols such as Authentication Header (AH) and
Encapsulating Security Payload (ESP). IPsec communication is not involved in the creation of keys or their
management. The IPsec communication operation itself is commonly referred to as IPsec.
Internet Key Exchange (IKE)
IKE is the automatic key management protocol used for IPsec. Technically, key management is
not essential for IPsec communication and the keys can be manually managed. However, manual
key management is not desirable for large networks. IKE is responsible for creation of keys for
IPsec and providing authentication during key establishment process.
Though IPsec can be used with other key management protocols, IKE is used by default. IKE
combines two protocols (Oakley and SKEME) with the already defined key management
framework, the Internet Security Association and Key Management Protocol (ISAKMP). ISAKMP is not
IPsec-specific, but provides the framework for creating SAs for any protocol.

IPsec Communication Modes

IPsec Communication has two modes of functioning: transport mode and tunnel mode. These modes
can be used in combination or used individually depending upon the type of communication
desired.

Transport Mode

In transport mode, IPsec does not encapsulate the packet received from the upper layer. The original IP header is maintained
and the data is forwarded based on the original attributes set by the upper-layer protocol.

Tunnel Mode

This mode of IPsec provides encapsulation services along with other security services. In tunnel
mode operations, the entire packet from the upper layer is encapsulated before the security
protocol is applied, and a new IP header is added. Tunnel mode is typically associated with gateway activities.
The encapsulation provides the ability to send several sessions through a single gateway. As far as
the endpoints are concerned, they have a direct transport layer connection.
The datagram from one system forwarded to the gateway is encapsulated and then forwarded to
the remote gateway. The remote associated gateway de-encapsulates the data and forwards it to
the destination endpoint on the internal network. Using IPsec, the tunneling mode can be
established between the gateway and individual end system as well.

Features of IPsec Protocol

IPsec is a suite of protocols for securing network connections. It is a rather complex mechanism,
because instead of giving a straightforward definition of a specific encryption algorithm and
authentication function, it provides a framework that allows an implementation of anything that
both communicating ends agree upon. Authentication Header (AH) and Encapsulating Security
Payload (ESP) are the two main communication protocols used by IPsec. While AH only
authenticates, ESP can both encrypt and authenticate the data transmitted over the connection.
Transport Mode provides a secure connection between two endpoints without changing the IP
header. Tunnel Mode encapsulates the entire payload IP packet. It adds new IP header. The latter
is used to form a traditional VPN, as it provides a virtual secure tunnel across an untrusted Internet.
Setting up an IPsec connection involves all kinds of cryptographic choices. Authentication is usually built
on top of a cryptographic hash such as MD5 or SHA-1. Common encryption algorithms are DES, 3DES,
Blowfish, and AES. Other algorithms are possible too.

Both communicating endpoints need to know the secret values used in hashing or encryption.
Manual keying requires manual entry of the secret values on both ends, presumably conveyed by
some out-of-band mechanism, whereas IKE (Internet Key Exchange) is a sophisticated mechanism for
establishing these values online.

PPTP

Point-to-Point Tunneling Protocol (PPTP) is the oldest of the three protocols used in VPNs. It was
originally designed as a secure extension to Point-to-Point Protocol (PPP). PPTP works at the data
link layer of the OSI model. PPTP offers two different methods of authenticating the user:

 Extensible Authentication Protocol (EAP)

 Challenge Handshake Authentication Protocol (CHAP)

EAP was actually designed specifically for PPTP and is not proprietary. CHAP is a three-way
process whereby the client sends a code to the server, the server authenticates it, and then the server
responds to the client. CHAP also periodically re-authenticates a remote client, even after the
connection is established. PPTP uses Microsoft Point-to-Point Encryption (MPPE) to encrypt
packets. MPPE is actually a version of DES. DES is still useful for many situations; however,
newer variants of DES, such as Triple DES (3DES), are preferred.

L2TP

Layer 2 Tunneling Protocol (L2TP) was explicitly designed as an enhancement to PPTP. Like
PPTP, it works at the data link layer of the OSI model. It has several improvements to PPTP. First,
it offers more and varied methods for authentication—PPTP offers two, whereas L2TP offers five.
In addition to CHAP and EAP, L2TP offers PAP, SPAP, and MS-CHAP.

In addition to more authentication protocols available for use, L2TP offers other enhancements.
PPTP will only work over standard IP networks, whereas L2TP will work over X.25 networks (a
common protocol in phone systems) and ATM (asynchronous transfer mode, a high-speed
networking technology) systems. L2TP also uses IPsec for its encryption.

5.4 E-mail Security


Nowadays, e-mail has become a very widely used network application. Let us briefly discuss the e-
mail infrastructure before proceeding to the e-mail security protocols.
E-mail Infrastructure

The simplest way of sending an e-mail would be sending a message directly from the sender’s
machine to the recipient’s machine. In this case, it is essential for both the machines to be running
on the network simultaneously.

However, this setup is impractical because users connect their machines to the network only
occasionally. Hence, the concept of setting up e-mail servers arrived. In this setup, the mail is sent to
a mail server which is permanently available on the network. When the recipient’s machine
connects to the network, it reads the mail from the mail server.

In general, the e-mail infrastructure consists of a mesh of mail servers, also termed Message
Transfer Agents (MTAs), and client machines running an e-mail program comprising a User
Agent (UA) and a local MTA. Typically, an e-mail message gets forwarded from its UA, goes
through the mesh of MTAs and finally reaches the UA on the recipient’s machine.

The protocols used for e-mail are as follows:

 Simple Mail Transfer Protocol (SMTP) used for forwarding e-mail messages.
 Post Office Protocol (POP) and Internet Message Access Protocol (IMAP) are used to
retrieve the messages by recipient from the server.

MIME

The basic Internet e-mail standard was written in 1982, and it describes the format of e-mail messages
exchanged on the Internet. It mainly supports e-mail messages written as text in the basic Roman
alphabet. By 1992, the need was felt to improve the standard. Hence, an additional standard,
Multipurpose Internet Mail Extensions (MIME), was defined. It is a set of extensions to the basic
Internet e-mail standard. MIME provides the ability to send e-mail using characters other than
those of the basic Roman alphabet, such as the Cyrillic alphabet (used in Russian), the Greek alphabet,
or even the ideographic characters of Chinese. Another need fulfilled by MIME is to send non-text
content, such as images or video clips. Due to these features, the MIME standard became widely
adopted with SMTP for e-mail communication.
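The following sketch builds a MIME message that carries both non-Roman text and a binary attachment, the two needs described above, using Python's standard email package; the addresses and file name are made up.

from email.mime.application import MIMEApplication
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText

msg = MIMEMultipart()
msg["From"] = "alice@example.com"
msg["To"] = "bob@example.com"
msg["Subject"] = "MIME example"

# A text part in a non-Roman alphabet, declared as UTF-8.
msg.attach(MIMEText("Привет, Bob!", "plain", "utf-8"))

# A binary part (for example an image), base64-encoded automatically.
msg.attach(MIMEApplication(b"\x89PNG...not a real image", Name="photo.png"))

print(msg.as_string()[:200])  # headers show multipart/mixed and the boundary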

E-Mail Security Services

The growing use of e-mail communication for important and crucial transactions demands the provision
of certain fundamental security services, such as the following:

 Confidentiality: E-mail message should not be read by anyone but the intended recipient.
 Authentication: E-mail recipient can be sure of the identity of the sender.
 Integrity: Assurance to the recipient that the e-mail message has not been altered since it
was transmitted by the sender.
 Non-repudiation: E-mail recipient is able to prove to a third party that the sender really
did send the message.
 Proof of submission: E-mail sender gets the confirmation that the message is handed to
the mail delivery system.
 Proof of delivery: Sender gets a confirmation that the recipient received the message.

Security services such as privacy, authentication, message integrity, and non-repudiation are
usually provided by using public key cryptography.

PGP

Pretty Good Privacy (PGP) is an e-mail encryption scheme. It has become the de-facto standard
for providing security services for e-mail communication. As discussed above, it uses public key
cryptography, symmetric key cryptography, hash function, and digital signature. It provides:

 Privacy
 Sender Authentication
 Message Integrity
 Non-repudiation

Along with these security services, it also provides data compression and key management support.
PGP uses existing cryptographic algorithms such as RSA, IDEA, MD5, etc., rather than inventing
new ones.
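The order of operations in a PGP-style message (sign, compress, then encrypt under a one-time session key) can be sketched as follows. The XOR stream cipher and the HMAC "signature" are toy stand-ins for IDEA and RSA; they are not PGP's actual algorithms or packet formats.

import hashlib
import hmac
import os
import zlib

def xor_stream(key, data):
    # Toy keystream cipher standing in for IDEA; not secure, illustration only.
    stream, block = b"", hashlib.sha256(key).digest()
    while len(stream) < len(data):
        stream += block
        block = hashlib.sha256(block).digest()
    return bytes(a ^ b for a, b in zip(data, stream))

message = b"The merger closes on Friday."
sender_private_key = b"stand-in for the sender's RSA private key"

# 1. Sign a hash of the message (in real PGP, with the sender's private RSA key).
signature = hmac.new(sender_private_key, hashlib.sha1(message).digest(), hashlib.sha1).digest()

# 2. Compress the signed message (PGP compresses before encrypting).
compressed = zlib.compress(signature + message)

# 3. Encrypt with a fresh one-time session key; in real PGP the session key is
#    then itself encrypted with the recipient's RSA public key.
session_key = os.urandom(16)
ciphertext = xor_stream(session_key, compressed)
print(len(message), len(compressed), len(ciphertext))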

PGP Certificate

A PGP key certificate is normally established through a chain of trust. For example, A's public key
is signed by B using B's private key, and B's public key is signed by C using C's private key. As this
process goes on, it establishes a web of trust. In a PGP environment, any user can act as a certifying
authority. Any PGP user can certify another PGP user's public key. However, such a certificate is
only valid to another user if the user recognizes the certifier as a trusted introducer.

Several issues exist with such a certification method. It may be difficult to find a chain leading
from a known and trusted public key to the desired key. Also, there might be multiple chains that
lead to different keys for the desired user. PGP can also use the PKI infrastructure with a
certification authority, and public keys can be certified by a CA (X.509 certificate).

S/MIME

S/MIME stands for Secure Multipurpose Internet Mail Extension. S/MIME is a secure e-mail
standard. It is based on an earlier non-secure e-mailing standard called MIME.

Working of S/MIME

S/MIME approach is similar to PGP. It also uses public key cryptography, symmetric key
cryptography, hash functions, and digital signatures. It provides similar security services as PGP
for e-mail communication. The most common symmetric ciphers used in S/MIME are RC2 and
TripleDES. The usual public key method is RSA, and the hashing algorithm is SHA-1 or MD5.
S/MIME specifies the additional MIME type, such as “application/pkcs7-mime”, for data
enveloping after encrypting. The whole MIME entity is encrypted and packed into an object.
S/MIME has standardized cryptographic message formats (different from PGP). In fact, MIME is
extended with some keywords to identify the encrypted and/or signed parts in the message.
S/MIME relies on X.509 certificates for public key distribution. It needs top-down hierarchical
PKI for certification support.

Employability of S/MIME

Because implementation requires a certificate from a certification authority, not all users
can take advantage of S/MIME; some may wish to encrypt a message with a public/private key
pair without the involvement or administrative overhead of certificates.

In practice, although most e-mail applications implement S/MIME, the certificate enrollment
process is complex. PGP support, in contrast, usually requires adding a plug-in, and that plug-in comes
with everything needed to manage keys. The Web of Trust is not really used in practice; people exchange their
public keys over another medium. Once obtained, they keep a copy of the public keys of those with
whom e-mails are usually exchanged.

One of the schemes, either PGP or S/MIME, is used depending on the environment. Secure
e-mail communication within a captive network can be provided by adopting PGP. For e-mail security
over the Internet, where mails are very often exchanged with new, unknown users, S/MIME is
considered a good option.
CHAPTER SIX
DATABASE SECURITY
6.1 Introduction to Database Security
Data is the most valuable asset in today's world, as it is used in day-to-day life by everyone from a single
individual to large organizations. To make the retrieval and maintenance of data easy and efficient,
it is stored in a database. Databases are a favorite target for attackers because of the data they
contain and also because of their volume. There are many ways a database can be compromised.

The goal of database security is to prevent unauthorized or accidental access to data. Because the
database environment has become more complex and more decentralized, management of data
security and integrity has become a more complex and time consuming job for data administrators.

Threats to the database can be numerous. The threat can be accidental or intentional. In either case,
security of the database and the entire system, including the network, operating system, the
physical area where the database resides and the personnel allowed access, must all be considered.

A database designer must consider a security strategy to address potential threats to data. Generally
a data security plan includes such procedures, including physical protection methods and use of
data management software.

The following threats must be addressed:

Accidental losses: Includes losses as a result of human error, software problems, or
hardware problems.

Theft and fraud: Includes intentional security breaches of data and unauthorized
data manipulation because of inadequate access controls or
inadequate physical security of the database.

Loss of privacy and confidentiality: Includes loss of protection of an individual's data files; loss of
confidentiality refers to loss of protection of organizational data.
Failure to control these types of losses may lead to blackmail,
public embarrassment, bribery, and loss of competitiveness.

Loss of data integrity: Includes data corruption and invalid data. Failure to control data
corruption can severely impact an organization.

Loss of data availability: Includes damage to the hardware, the network, or applications that
may cause data to become unavailable to users.
Controlling outside access and threats to the database is another form of security.
These methods are described below.

Backup and recovery: Includes procedures for backup and recovery of the database, as well
as journal and checkpoint facilities. Backup and recovery provides a
means of recovering the database if data is corrupted or made
invalid in some way.

Virus checking: Generally third-party software used to detect incoming viruses from
e-mail and other Internet or intranet activity. Virus checking software
may also be able to cleanse the system of a virus.

Authorization rules: Includes authorization rules at the operating system or network
level. This is typically the first level of authorization
and includes passwords, user groups and access privileges.

Firewalls: These are programs that are installed to prevent outside threats to a
computer network. Firewalls filter traffic to and from networks, can
reject or accept encrypted data and can notify users of attempted
security breaches.

Physical protection: This includes the actual physical space that houses the database and/or
a computer system. Physical protection may include password entry for
personnel in the form of key card or keypad entry, video cameras in
the area, lead-based doors, adequate lighting, etc.

Figure 6.1 Threats to Data Security


Database security is a growing concern evidenced by an increase in the number of reported
incidents of loss of or unauthorized exposure to sensitive data. As the amount of data collected,
retained and shared electronically expands, so does the need to understand database security. At
its core, database security strives to ensure that only authenticated users perform authorized
activities at authorized times. It includes the system, processes, and procedures that protect a
database from unintended activity. Traditionally database security focused on user authentication
and managing user privileges to database objects.

6.2 Database Vulnerabilities


A vulnerability database is a platform aimed at collecting, maintaining, and disseminating
information about discovered vulnerabilities targeting real computer systems. Currently, there are
many vulnerability databases that are widely used to collect data from different sources
on software vulnerabilities (e.g., bugs). These data essentially include the description of the
discovered vulnerability, its exploitability, its potential impact, and the workaround to be applied
over the vulnerable system. Examples of web-based vulnerabilities databases are the National
Vulnerability Database and the Open Source Vulnerability Database.
Considering the importance of data it is essential to secure it. Security in today’s world is one of
the important and challenging tasks that people are facing all over the world in every aspect of
their lives. Similarly, security in the electronic world has great significance. Protecting the
confidential/sensitive data stored in a repository is what database security actually means. There are
various security layers in a database.
These layers are: database administrator, system administrator, security officer, developers and
employee and security can be breached at any of these layers by an attacker. An attacker can be
categorized into three classes:
Intruder: An intruder is an unauthorized user who illegally accesses a computer system and tries
to extract valuable information.
Insider: An insider is a person who belongs to the group of trusted users and abuses his or her
privileges, trying to get information beyond his or her own access rights.
Administrator: An administrator is a person who has privileges to administer a computer system,
but uses those administration privileges illegally, in violation of the organization's security policy,
to spy on DBMS behavior and to get valuable information.
After breaching all levels of protection, an attacker will try one of the two
following attacks:
Direct attacks: A direct attack means attacking the target directly. These are obvious attacks and
are successful only if the database does not implement any protection mechanism. If this attack
fails, the attacker moves to the next.
Indirect attacks: Indirect attacks are the attacks that are not directly executed on the target but
information from or about the target can be received through other intermediate objects.
Combinations of queries are used some of them having the purpose to cheat the security
mechanisms. These attacks are difficult to track.
The attacker executes the above attacks in different ways. Attacks on database can also be
classified into passive and active attacks:

Figure 6.2 Database Vulnerabilities


Passive Attack
In passive attack, attacker only observes data present in the database.
Passive attack can be done in following three ways:
1) Static leakage: In this type of attack, information about database plaintext values can be
obtained by observing the snapshot of database at a particular time.
2) Linkage leakage: Here, information about plain text values can be obtained by linking the
database values to position of those values in index.
3) Dynamic leakage: In this, changes performed in database over a period of time can be observed
and analyzed and information about plain text values can be obtained.

Active Attacks
In an active attack, actual database values are modified. These are more problematic than passive
attacks because they can mislead a user; for example, a user may get wrong information as the result
of a query. There are some ways of performing this kind of attack, which are mentioned below:
Spoofing – In this type of attack, a ciphertext value is replaced by a generated value.
Splicing – Here, a ciphertext value is replaced by a different ciphertext value.
Replay – Here, a ciphertext value is replaced with an old version that was previously
updated or deleted.
6.3 Security Threats to Database
Databases are a favorite target for attackers because of their data. There are many ways in which
a database can be compromised. There are various types of attacks and threats from which a
database should be protected. Solutions to most of the threats listed below have been found,
although some solutions are good while others are only temporary.
Excessive Privilege Abuse: When users (or applications) are granted database access privileges
that exceed the requirements of their job function, these privileges may be abused for malicious
purposes. For example, a computer operator who requires only the ability to change
employee contact information may take advantage of excessive database update privileges to
change salary information.
Legitimate Privilege Abuse: Legitimate privilege abuse is when an authorized user misuses their
legitimate database privileges for unauthorized purposes. Legitimate privilege abuse can be in the
form of misuse by database users, administrators, or a system manager doing any unlawful or
unethical activity. It includes, but is not limited to, any misuse of sensitive data or unjustified use of
privileges.
Privilege Elevation: Sometimes there are vulnerabilities in database software and attackers may
take advantage of that to convert their access privileges from an ordinary user to those of an
administrator, which could result in bogus accounts, transfer of funds, and misinterpretation of
certain sensitive analytical information.
A database rootkit is such a program or a procedure that is hidden inside the database and that
provides administrator-level privileges to gain access to the data in the database. These rootkits
may even turn off alerts triggered by Intrusion Prevention Systems (IPS). It is possible to install a
rootkit only after compromising the underlying operating system.
Platform Vulnerabilities: Vulnerabilities in operating systems and additional services installed
on a database server may lead to unauthorized access, data corruption, or denial of service. For
example, the Blaster Worm took advantage of a Windows 2000 vulnerability to create denial of
service conditions.
Inference: Even in secure DBMSs, it is possible for users to draw inferences from the information
they obtain from the database. A user draws an inference when he or she can guess
or conclude more sensitive information from the information retrieved from the database, possibly
combined with some prior knowledge. An inference presents a security breach if more highly
classified information can be inferred from less classified information. There are two important
cases of the inference problem, which often arise in database systems.
1) Aggregation problem: occurs when a collection of data items is more sensitive i.e. classified
at a higher level than the levels of individual data items. For example in an organization the profit
of each branch is not sensitive but total profit of organization is at higher level of classification.
2) Data association problem: occurs whenever two values seen together are classified at a higher
level than the classification of either value individually. As an example, the list containing the
names of all employees and the list containing all employee salaries are unclassified, while a
combined list giving employee names with their salaries is classified.
SQL Injection: In a SQL injection attack, an attacker typically inserts (or “injects”) unauthorized
SQL statements into a vulnerable SQL data channel. Typically targeted data channels include
stored procedures and Web application input parameters. These injected statements are then passed
to the database where they are executed. For example in a web application the user inserts a query
instead of his name. Using SQL injection, attackers may gain unrestricted access to an entire
database.
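The following sketch demonstrates the attack with an in-memory SQLite database; the table, query, and injected value are illustrative.

import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE users (name TEXT, salary INTEGER)")
con.executemany("INSERT INTO users VALUES (?, ?)",
                [("alice", 50000), ("bob", 62000)])

# A web form asks for a name; the attacker types a crafted value instead.
user_input = "nobody' OR '1'='1"

# Vulnerable: the input is concatenated directly into the SQL statement.
query = "SELECT name, salary FROM users WHERE name = '" + user_input + "'"
print(con.execute(query).fetchall())  # returns every row, not just 'nobody'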
Unpatched DBMS: As new database vulnerabilities are discovered and exploited
by attackers, database vendors release patches so that the sensitive information in databases remains
protected from threats. Once these patches are released, they should be applied immediately. If left
unpatched, hackers can reverse engineer the patch, or can often find information online on how to
exploit the unpatched vulnerabilities, leaving a DBMS even more vulnerable than before the patch
was released.
Unnecessary DBMS Features Enabled: In a DBMS there are many unneeded features which are
enabled by default and which should be turned off; otherwise they provide openings for some of the
most effective attacks on a database.
Misconfigurations: Unnecessary features are left on because of poor configuration at the database
level. Database misconfigurations provide weak access points for hackers to bypass authentication
methods and gain access to sensitive information. These flaws become the main targets for
criminals to execute certain types of attacks. Default settings may not have been properly re-set,
unencrypted files may be accessible to non-privileged users, and unpatched flaws may lead to
unauthorized access of sensitive data.
Buffer Overflow: When a program or process tries to store more data in a buffer than it was
intended to hold, this situation is called a buffer overflow. Since a buffer can hold only a finite
amount of data, the extra data - which has to go somewhere - can overflow into adjacent locations,
corrupting or overwriting the valid data held in those locations. For example, a program is waiting
for a user to enter his or her name. Rather than entering the name, the hacker would enter an
executable command that exceeds the size of buffer. The command is usually something short.
Weak Audit Trails: A database audit policy ensures automated, timely and proper recording of
database transactions. Such a policy should be part of the database security considerations, since it
ensures that all sensitive database transactions have an automated record; the absence of such records
poses a serious risk to the organization's databases and may cause instability in operations. A weak
database audit policy represents a serious organizational risk on many levels.
Denial of Service: In this type of attack all users (including legitimate users) are denied access to
data in the database. Denial of service (DOS) conditions may be created via many techniques -
many of which are related to the other mentioned vulnerabilities.
For example, DOS may be achieved by taking advantage of a database platform vulnerability to
crash a database server. Other common DOS techniques include data corruption, network flooding,
and server resource overload (memory, CPU, etc.).
Covert Channel: A covert channel is an indirect means of communication in a computer system
which can be used to weaken the system's security policy. A program running at a secret level is
prevented from writing directly to an unclassified data item. There are, however, other ways of
communicating information to unclassified programs.
For example, the secret program wants to know the amount of memory available. Even if the
unclassified program is prevented from directly observing the amount of free memory, it can do
so indirectly by making a request for a large amount of memory itself. Granting or denial of this
request will convey some information about free memory to the unclassified program. Such
indirect methods of communication are called covert channels.
Database Communication Protocol Vulnerabilities: A large number of security weaknesses are
being identified in the database communication protocols of all database vendors. Fraudulent
activities exploiting these vulnerabilities can range from illegal data access to data exploitation,
denial of service, and more.
Advanced Persistent Threats: This type of threat occurs when large, well-funded
organizations make highly focused assaults on large stores of critical data. These attacks are
relentless, defined, and perpetrated by skilled, motivated, organized, and well-funded groups.
Organized criminals and state-sponsored cyber-professionals target databases where
they can harvest data in bulk. They target large repositories of personal and financial information.
Once stolen, these data records can be sold on the information black market or used and
manipulated by other governments.
Insider Mistakes: Some attacks are not intentional; they just happen unknowingly, by mistake.
This type of attack can be called an “unintentional authorized user attack” or insider mistake. It
can occur in two situations. The first one is when an authorized user inadvertently accesses
sensitive data and mistakenly modifies or deletes the information.
The second can occur when a user makes an unauthorized copy of sensitive information
for the purpose of backup or “taking work home.” Although it is not a malicious act, the
organizational security policies are violated, and the result is data residing on a storage device
which, if compromised, could lead to an unintentional security breach. For example, a laptop
containing sensitive information can be stolen.
Social Engineering: In this, users unknowingly provide information to an attacker via a web
interface like a compromised website or through an email response to what appears to be a
legitimate request. An example of this is the RSA breach, which occurred when legitimate users
unknowingly provided security keys to attackers as a result of sophisticated phishing techniques.
Weak Authentication: Weak authentication schemes allow attackers to assume the identity of
legitimate database users by stealing or otherwise obtaining login credentials. An attacker may
employ any number of strategies to obtain credentials.
1) Brute Force: In this strategy, the attacker repeatedly enters username/password combinations until
he finds the correct one. The brute force process may involve simple guesswork or systematic
enumeration of all possible username/password combinations. The attacker can often use
automated programs to accelerate the brute force process.
2) Direct Credential Theft: An attacker may steal login credentials from the authorized user.
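One common countermeasure on the server side is to store passwords only as salted, iterated hashes, which makes both stolen credential stores and brute-force guessing far more expensive. The sketch below uses PBKDF2 from Python's standard library; the iteration count and field layout are illustrative choices.

import hashlib
import hmac
import os

def hash_password(password, iterations=200_000):
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, digest

def verify_password(password, salt, stored, iterations=200_000):
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return hmac.compare_digest(candidate, stored)  # constant-time comparison

salt, stored = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, stored))  # True
print(verify_password("123456", salt, stored))                        # False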
Backup Data Exposure: Backup database storage media are often completely unprotected from
attack as well as from natural calamities such as floods and earthquakes. As a result, several
high-profile security breaches have involved theft of database backup tapes and hard disks.

6.4 Database Security Measures


Database Security means protecting the database from unauthorized access, modification, or
destruction. Privacy is the right of individuals to have some control over information about
themselves, and is protected by law in many countries. Confidentiality refers to the need to keep
certain information from being known. Both privacy and confidentiality can be protected by
database security.
Security violations can be accidental or deliberate, and security breaches can be accomplished in
a variety of ways. A security control plan should begin with physical security measures for the
building and especially for the computer facilities. Security control of workstations involves user
authentication, verifying the identity of users. The operating system normally has some means of
establishing a user's identity, using user profiles, user IDs, passwords, authentication procedures,
badges, keys, or physical characteristics of the user. Additional authentication can be required to
access the database.
Most database management systems designed for multiple users have a security subsystem. These
subsystems provide for authorization, by which users are assigned rights to use database objects.
Most have an authorization language that allows the DBA to write authorization rules specifying
which users have what type of access to database objects. An access control matrix can be used to
identify what types of operations different users are permitted to perform on various database
objects. The DBA can sometimes delegate authorization powers to others.
Views can be used as a simple method for implementing access control. A security log is a journal
for storing records of attempted security violations. An audit trail records all access to the
database, keeping information about the requester, the operation performed, the workstation used,
and the time, data items, and values involved. Triggers can be used to set up an audit trail.
Encryption uses a cipher system that consists of an encrypting algorithm that converts plaintext
into ciphertext, an encryption key, a decrypting algorithm that reproduces plaintext from
ciphertext, and a decryption key. Widely used schemes for encryption are 3DES, AES, and public-key
encryption. DES/AES uses a standard algorithm, which is often implemented in hardware. Public-
key encryption uses a product of primes as a public key and the prime factors of the product as a
private key.
SQL has a Data Control Language, an authorization language to provide security. The GRANT
statement is used for authorization, and the REVOKE statement is used to retract authorization.
Privileges can be given to individuals or to a role, and then the role is given to individuals. SQL
injection poses a significant threat to database applications by taking advantage of vulnerabilities
associated with the dynamic construction of queries with user input that has not been validated.
Database developers can avoid SQL injection attacks by following more secure software
development techniques for the dynamic construction of queries.
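A minimal sketch of such a technique is the parameterized query: the user input is passed as a bound parameter rather than spliced into the SQL text, so the crafted value from the SQL injection example in Section 6.3 no longer matches anything. The table and values are illustrative.

import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE users (name TEXT, salary INTEGER)")
con.executemany("INSERT INTO users VALUES (?, ?)",
                [("alice", 50000), ("bob", 62000)])

user_input = "nobody' OR '1'='1"

# Safe: the placeholder binds the input as a literal value, never as SQL.
rows = con.execute("SELECT name, salary FROM users WHERE name = ?",
                   (user_input,)).fetchall()
print(rows)  # [] -- the whole string is treated as an ordinary name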
When the database is accessible through the Internet, special security techniques are needed. These
include firewalls, IPsec, digital certificates issued by certification authorities such as VeriSign and
used with SSL or S-HTTP, Kerberos or similar protocols for user authentication, stronger protocols
for financial information, and digital signatures.
CHAPTER SEVEN
SECURITY ADMINISTRATION AND EVALUATION
7.1 Software Security
The majority of security incidents result from defects in software design or code. Attackers exploit the
security holes left behind by software developers. Post-deployment security is more popular than
pre-deployment security because:
 Easily understood by administrators
 Difficult to get security “assurance” from vendor
 Vendors are obsessed by “time-to-market”
 Difficult to know/tailor security requirements for general purpose software
Risk Management
Risk: “The possibility of suffering harm or loss”
Management: “The act or art of treating, directing, carrying on, or using for a purpose”
Risk Management is the process concerned with the identification, measurement, control and
minimization of security risks in information systems, to a level commensurate with the value
of the assets protected.
Methods of risk treatment:
 Mitigate or suppress
 Accept
 Transfer (insurance)
 Ignore (poor – often used)
Types of countermeasures:
 Preventive
 Detective
 Corrective
In case of risk acceptance:
 Request documented justification
 Get formal approval (sign-off) by senior management
 Have the decision reviewed after a year
 Use a high quality software engineering methodology
 Risk analysis should be performed at every stage of the development
- Requirement analysis
- Design
- Coding
- Testing, etc
Selecting Technologies
Languages:
The choice of a programming language has an impact on how secure the software will be and
security problems are common for some languages:
- C, C++ => Buffer overflow
- Java => Exception handling, etc
High level languages hide what they are doing (ex. Swapping to disk)
- The programmer doesn’t know that
- The attackers may use this
Operating Systems:
Typical Operating Systems (Windows, Linux, etc) have
- Authentication of users
- Resource access control (authorization & limitation) Memory, Files, etc.
- Integrity of shared resources
Operating systems have different levels of security
Authentication technologies:
- Password
- Host-Based (ex. IP)
- Physical token (ex. Smartcard)
- Biometrics
Open Source or Closed Source:
Free Software - Freedoms to use, copy, study, modify and redistribute both modified and
unmodified copies of software programs.
Open Source Software (OSS) - Similar in idea to "free software" but slightly less rigid
FOSS/FLOSS - Free/Libre/Open-Source Software is the name used by those who wish to be
inclusive
OSS provides a number of benefits to security, because security by obscurity does not work.
Hackers may not always need the code to find security vulnerabilities. Reverse engineering is
possible: Disassemblers / Decompilers.
Open Source Software:
OSS model gives some economic incentives for others to review your code. Users of the software
may want to check the security of the software. Some users who want to make changes to the
software will look at the code. However, you cannot be sure of the security of the software just
because it is OSS. Many vulnerabilities are hard to detect. Some software sources are difficult to
read, and some don't have many readers, so additional vulnerabilities may remain. Code
scanning can also be used by attackers.
Auditing Software:
Auditing software’s functionality is a complex activity. Auditing software’s security is even more
complex. Most software development companies consider security of their software only once or
twice during the development cycle. Software teams prefer to use their time mainly on developing
new functionalities that can be seen. Ideally every software project should have an independent
security person or team. A good time for an initial security analysis is after the preliminary design.
You can avoid security risks in the architecture of your software with limited cost. You will be
more willing to make major changes.

7.2 Policies, Standards and Procedures


Security attacks come from various security threats and vulnerabilities. Security
techniques/solutions are available to minimize the risks. The human factor is a major concern in
security. Organizations need to ensure that the security of their information is protected
irrespective of the employees they may have.
They achieve this objective using: (Policies, Standards and Procedures)
A policy is a high-level statement of enterprise’s beliefs, goals, and procedures; and the general
means for their attainment. Standards are mandatory requirements that support individual policies.
Procedures are mandatory step-by-step, detailed actions required to complete a task successfully.
Guidelines are similar to standards but are not mandatory. The objective of an information security
program is to protect the integrity, confidentiality and availability of information. An information
protection program should be part of an overall asset protection program. Information security
policies, standards and procedures enable organizations to
- Ensure that their security policies are properly addressed
- Every employee knows what he/she needs to do to ensure the information security of the company
- Similar response is given for every problem
Developing policies: A good policy should
- Be easy to understand (By all people who will have to read the policy)
- Be applicable (Don’t copy others’ policy word by word since it may not be applicable to you)
- Be doable (The restrictions should not stop work!)
- Be enforceable (If it cannot be enforced, it will probably remain on paper)
- Be phased in (Organizations need time to digest policy)
- Be proactive (Say what needs to be done rather than what is not allowed)
- Avoid absolutes (Be diplomatic)
- Meet business objectives (Should lower the security risks to a level acceptable by the
organization without hampering the work of the organization to unacceptable level)
Developing policies: There are three types (Tiers) of policies
Global policies (Tier 1) - Used to create the organization’s overall vision and direction
Topic specific policies (Tier 2) - Address particular subject of concern, Ex. Antivirus, E-mail
Application-specific policies (Tier 3) - Decisions taken by management to control particular
applications, Ex. Accounting system
Global Policy
The components of a global policy typically include:
- Scope
- Responsibilities (Who is responsible for what)
- Compliance or consequences (What will happen if you are not compliant)
Writing a policy requires a lot of (multiple) skills and attention. A global policy is developed by a
steering committee established for this purpose.
Examples of Policy Statements
“Exchanges of information and software between a company and any other organization will be
controlled in accordance with its classification. The exchange of information will comply with any
regulatory policies.”
“To ensure protection of corporate information, the owner shall use a formal review process to
classify information into one of the following three classifications: Public, Internal-Use and
Confidential.”
Information Classification Policy
 Why classify?
- Among the information available in the enterprise there are (approx.)
- 10% confidential information
- 80% internal use information
- 10% public information
- It would be a big waste of resources to give the same level of security for all the information
- You don’t put everything you own in a safe!

 What is confidential information?


- Information, if disclosed, could
- Violate privacy of individuals
- Reduce company’s competitive advantage
- Cause damage to the organization
Many organizations classify information into different classes of security. Part of the asset
classification policy. An information or asset classification process is a business decision process.
Example of information classification could be:
Top Secret, Confidential, Restricted, Internal-Use, Public
How to develop classification levels (standards):
 Discuss with other organizations’ specialists and learn from their experiences
 Discuss with the management of the organization
 Prepare a draft and discuss it with the management
 Avoid the temptation of having too many levels
The information classification policy defines the department’s policy with regards to classification,
declassification and reclassification of information.
Developing Standards:
Standards define what is to be accomplished in specific terms. Every industry has standards that
try to ensure some quality of product or service, or enable interoperability. Many industry
standards have information security issues, e.g., banking, healthcare. Some of the standards
become national regulations, and organizations will have to follow them. Organizations can also
develop their own standards (enterprise standards). Standards are easier to update than global
policies. Standards have to be reviewed regularly (every year, for example).
Standards must be:
 Reasonable
 Flexible
 Practical
 Applicable
 Up-to-date
 Reviewed regularly
Standards should enable the enterprise to fulfill its business objectives while minimizing the
security risks.
Developing Procedures:
Developing a procedure should be faster than developing a policy since it does not need to be
approved by management. Procedure writing process:
 Interview with the SME (Subject Matter Expert)
 Preparation of a draft
 Review of the draft by the SME
 Update of the procedures based on the comments
 Final review by SME
 Update of the procedures based on the comments
 Testing of the procedures
 Publishing of the procedures & Procedures should be reviewed regularly
Selling policies, standards, and procedures:
If you write policies, standards, and procedures; publish them and do nothing else, it is very
probable that nobody will use them. You should therefore ensure acceptance of the policies,
standards, and procedures at all levels.
Selling points:
 Formal risk analysis will show the management how important it is to avoid the risks using
your policies, standards, and procedures
 Examples of security problems
 Examples of problems created because of lack of policies, standards, and procedures (ex. Y2K
cost, 9/11 crises)
 Show how policies, standards, and procedures facilitate the work of the employees and not
make it more difficult

7.3 Legal Issues and Information Security


Computer Forensics:
Information security and privacy often become major issues for lawmakers, since they can touch
the fundamental rights of individuals. Computer forensics is a branch of forensic science that deals
with the application of computer investigation and analysis techniques in the interest of
determining potential legal evidence. Computer forensics is also known as digital forensics.
Computer forensics has sub-branches such as firewall forensics, network forensics,
database forensics and mobile device forensics.
Steps taken in Computer Forensics on the subject computer:
 Protect the subject computer system during the forensic examination from any possible
alteration, damage, data corruption, or virus introduction.
 Discover all files on the subject system, including existing normal files, deleted yet remaining
files, hidden files, password-protected files, and encrypted files.
 Recover discovered deleted files.
 Reveal the contents of hidden files as well as temporary or swap files used by both the
application programs and the operating system.
 Access the contents of protected or encrypted files.
 Analyze all possibly relevant data found in special areas of a disk.
 Print out an overall analysis of the subject computer system, as well as a listing of all possibly
relevant files and discovered file data.
 Provide expert consultation and/or testimony.
Cryptography and Wiretapping:
Cryptography has long been treated as a weapon, since it can be a major force in armed conflicts.
Until recently, US law forbade the export of cryptographic equipment unless explicit
authorization was given by the government. International arms agreements bind many governments
to implement export controls on cryptographic equipment.
Wiretapping (Twisted Pair, Thin/Thick Coaxial, Fiber Optics)
 Governments have always wanted to control communication services,
e.g., post, telegraph, telephone, Internet
 Wiretapping by governments has become such a problem that lawmakers have been obliged
to enact laws limiting its use
 Traffic analysis is a major source of police intelligence
Protection of Privacy:
The European Data Protection Directive was adopted in 1995. European laws give data subjects the
right to inspect data held about them. In the US, data protection is mostly left to self-regulation.
Evidential Issues:
There is still no consensus on whether computer evidence is admissible in court. Some countries
admit computer evidence; others admit it only conditionally (e.g., the computer must be
accompanied by a certificate stating that it was working properly).
Reliability of evidence: when a computer is seized as evidence, law enforcement officers
generally mirror its hard disk for subsequent examination. However, since today’s operating
systems and application software products are very complex, interpreting what is found can also be
complex. More and more countries are now accepting electronic signatures and contracts.
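One routine step in preserving the reliability of a mirrored disk image is to record a cryptographic fingerprint of it at acquisition time, so that later examinations can show the image is unchanged. The following minimal Python sketch computes such a fingerprint; the file name and the choice of SHA-256 are assumptions made for illustration, and actual practice varies by agency and jurisdiction.

# Compute a fingerprint (hash) of a mirrored disk image so that later
# examinations can demonstrate the image has not been altered.
# "evidence.img" and SHA-256 are assumptions made for this illustration.
import hashlib

def image_digest(path, chunk_size=1024 * 1024):
    """Read the image file in chunks and return its SHA-256 hex digest."""
    digest = hashlib.sha256()
    with open(path, "rb") as image:
        for chunk in iter(lambda: image.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

if __name__ == "__main__":
    print(image_digest("evidence.img"))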
Cybercrime laws: (Example Romania)
Art. 42
(a) Illegal access to a computer system is a crime and is punished with
imprisonment from 6 months to 3 years.
(b) If the act mentioned at item (a) is committed by infringing security
measures, the punishment is imprisonment from 3 to 12 years.
Art. 43
(a) The illegal interception of any non-public transmission of computer data
to, from or within a computer system is a criminal offence and is
punished with imprisonment from 2 to 7 years.
(b) The same punishment also applies to the illegal interception of
electromagnetic emissions from a computer system carrying non-public
computer data.

7.4 Security Evaluation


The main purpose of a security evaluation is to discover weak points in the architecture of the IT
infrastructure, as described below:
Figure 7.4 Three Important Steps in Security Evaluation

Parameters for Security Evaluation (Security Assessments/ Security Audits)


• Network Architecture and Configuration
• Hardware Firewalls and Routers Configuration
• User Authentication and Access Management
• Updates and Patches Management
• System Configuration
• System Services and Applications Configuration
• Antivirus Software Management
• Confidential Data Handling and Encryption
• Backup System Management
• Local Security Policy Review
• Presence and Qualification of Internal Incident Response Team
• Physical Security
The assessment results identify the most important and critical IT threats and risks so that they
can subsequently be eliminated.
Security Assessment Components
Major security components involved in the assessment process:
a) Network Security
b) System Security
c) Application Security
d) Operational Security
e) Physical Security

a) Network Security Assessment


Gather and evaluate network maps, installation procedures, and checklists. Scan networks and
networked systems (a minimal port-probe sketch follows the target selection list below).
 Vulnerability Scanners: Nessus, OpenVAS, Retina, Nipper, SAINT, CoreImpact, etc.
 Port Scanners: Nmap, Hping, Amap, TCPDump, Metasploit, etc.
 Application Scanners: whisker, nikto
Target Selection
 Key systems (where the goodies are stored)
 Exposed systems (where the bad guys play)
 Gateway systems (intersection of networks)
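The scanners listed above automate this work. As an illustration only, the following minimal Python sketch shows the basic idea behind a TCP connect probe; the target address and ports are hypothetical, real assessments should rely on dedicated tools such as Nmap, and scanning must only be done against systems you are authorized to test.

# Minimal TCP connect-probe sketch (illustration only; real assessments use
# dedicated tools such as Nmap and require written authorization).
import socket

def probe(host, ports, timeout=1.0):
    """Return a dictionary {port: is_open} using plain TCP connect attempts."""
    results = {}
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            # connect_ex() returns 0 when the connection succeeds (port open).
            results[port] = (s.connect_ex((host, port)) == 0)
    return results

if __name__ == "__main__":
    # Hypothetical target and ports chosen only for the example.
    print(probe("192.0.2.10", [22, 80, 443, 3306]))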
b) System Security Assessment
Gather and evaluate software/system inventory information, security standards, checklists,
management procedures. Review configurations with the administrator. Use a security checklist
to evaluate current configuration.
Target Selection:
 Database Systems and File Servers
 Network Application Servers
 A typical Desktop
c) Application Security Assessment
 Gather and evaluate application and internal development documents and source code
 Review source code for common programming flaws
 Use static code analysis tools (Fortify, RATS, ITS4, FlawFinder); a minimal pattern-match
sketch follows this list
 Skill dependent task; time consuming
 At minimum, evaluate development procedures
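As a rough illustration of what the simplest static checks do, the following Python sketch pattern-matches C source lines for a few classically risky library calls. The set of functions flagged is an illustrative assumption; real analyzers such as FlawFinder or Fortify perform far deeper analysis.

# Very simplified "grep-style" static check for a few classically risky C calls.
# The RISKY_CALLS set is an illustrative assumption, not a complete rule set.
import re
import sys

RISKY_CALLS = {"gets", "strcpy", "strcat", "sprintf", "system"}
PATTERN = re.compile(r"\b(" + "|".join(RISKY_CALLS) + r")\s*\(")

def scan(path):
    """Print a warning for every line of a source file that calls a risky function."""
    with open(path, errors="replace") as src:
        for lineno, line in enumerate(src, start=1):
            match = PATTERN.search(line)
            if match:
                print(f"{path}:{lineno}: possible risky call '{match.group(1)}'")

if __name__ == "__main__":
    # Usage: python check_calls.py file1.c file2.c ...
    for filename in sys.argv[1:]:
        scan(filename)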
d) Operational Security Assessment
 Gather and evaluate procedures and contingency plans
 Evaluate overall security management
 Review backup and disposal procedures
 Examine business continuity and disaster recovery plans
 Look at automated security tasks (virus updates, patches, integrity checks)
 Look at administrator security practices

e) Physical Security Assessment


Gather policy and procedure documents. Examine facility and take pictures.
Building:
 Life Safety (fire/smoke detection, alarms)
 Burglar alarms, security guards, police response time
Security Perimeter:
 Strong doors, locks, visitor areas, sign-in procedures
Server Rooms:
 Environmental controls and monitoring
 Sufficient power and HVAC (Heating, Ventilation, and Air Conditioning)
 Locked cabinets and equipment

Reporting and Follow-up


 Once the assessment is complete, a report is needed to inform the client of issues found
 Report should explain findings in simple terms (remember the audience)
 Reports should incorporate suggestions on how to overcome vulnerabilities and threats
 Be available to answer questions and provide explanations

Evaluation Is Not a Guarantee of Security


Evaluating a product against a standard or a set of criteria does not mean the product is assured
to be secure; no evaluation of any product can guarantee such security. However, an evaluated
product can demonstrate, through the features and assurances defined by the evaluation criteria,
that it does have certain security measures in place to counter specific threats.
The development of new security standards and criteria will no doubt continue to result in better
ways of evaluating and certifying computer products, and will therefore enhance the security of
computer systems.

7.5 Organizations and Certifications


Numerous security advisory and support organizations exist that every security professional should
be aware of, including:
 SANS Institute
 Center for Internet Security (CIS)
 CERT Coordination Center (CERT/CC)
 Forum of Incident Response and Security Teams (FIRST)
 Microsoft Security Response Center (MSRC)

Certifications for security professionals are a way of showing that one’s skills are up to date and
of standing out from the crowd. Some of the popular certifications available include:
 Certified Information Systems Security Professional (CISSP)
 Systems Security Certified Practitioner (SSCP)
 Certified Information Systems Auditor (CISA)
 Global Information Assurance Certification (GIAC)

Web Links:
https://www.isc2.org/cissp/default.aspx
https://www.isc2.org/sscp/default.aspx
https://www.isaca.org/
https://www.giac.org/
Glossary

This section contains terms from both hackers and security professionals. To truly understand
Computer Security, you must be familiar with both worlds. General networking terms are also
included in this glossary.

Admin: Short for system administrator.

Adware: Software that is used to display advertisements.

Audit: A check of a system’s security. This check usually includes a review of documents,
procedures, and system configurations.

Authentication: The process of proving that someone is who he or she claims to be.

Back Door: A hole in the security system deliberately left by the creator of the system.

Black Hat Hacker: Someone who uses hacking skills for malicious and illegal purposes.

Blowfish: A well-known symmetric key encryption algorithm that uses a variable-length key and
was invented by Bruce Schneier.

Braindump: The act of telling someone everything one knows.

Breach: To successfully break into a system, to breach the security.

Brute Force: To try to crack a password by simply trying every possible combination.

Bug: A flaw in a system.

Caesar cipher: One of the oldest encryption algorithms. It uses a basic mono-alphabetic cipher.

CHAP: Challenge Handshake Authentication Protocol, a commonly used authentication protocol.

CIA triangle: A common security acronym: confidentiality-integrity-availability.

Cipher: Synonym for cryptographic algorithm.

Cipher text: Encrypted text.

Code: The source code for a program; or the act of programming, as in “to code an algorithm.”

Cookie: A small bit of data, often in plain text, that is stored by web browsers.
Computer Security: The process of preventing and detecting unauthorized use of your
computer. It involves safeguarding your computer resources against intruders who would use them
for malicious intent or for their own gain (or who might even gain access to them accidentally).

Cracker: One who breaks into a system in order to do something malicious, illegal, or harmful.
Synonymous with black hat hacker.

Cracking: Breaking into a system or code.

Crash: A sudden and unintended failure, as in “My computer crashed.”

Cryptography: The study of encryption and decryption.

Cyber Fraud: Using the Internet to defraud someone.

Cyber Stalking: Using the Internet to harass someone.

Demigod: A hacker with years of experience, who has a national or international reputation.

DDoS: Distributed Denial of Service attacks are denial of service attacks launched from multiple
source locations.

Denial of Service (DoS): An attack that prevents legitimate users from accessing a resource. This
is usually done by overloading the target system with more workload than it can handle.

DES: Data Encryption Standard, a block cipher that was developed in the 1970s. It uses a 56-bit
key on 64-bit blocks. It is no longer considered secure enough.

Encrypting File System: Also known as EFS, Microsoft’s file-encryption feature of NTFS that
allows users to encrypt individual files. It was first introduced in Windows 2000.

Elliptic Curve: A class of algorithms that provide asymmetric encryption.

Encryption: The act of encrypting a message. Encryption usually involves altering a message so
that it cannot be read without the key and the decryption algorithm.

Espionage: Illicitly gathering information, usually from a government or corporate source.

Ethical Hacker: One who hacks into systems to accomplish some goal that he feels is ethically
valid.

Firewall: A device or software that provides a barrier between your machine or network and the
rest of the world.

Gray Hat Hacker: A hacker who usually obeys the law but in some instances will cross the line
into black hat hacking.
Hacker: One who tries to learn about a system by examining it in detail, for example by reverse-engineering it.

Hash: An algorithm that takes variable-length input, produces fixed-length output, and is not
reversible.

Honey Pot: A system or server designed to be very appealing to hackers, when in fact it is a trap
to catch them.

Hub: A device for connecting computers.

IKE: Internet Key Exchange, a method for managing the exchange of encryption keys (used with IPsec).

Information Warfare: Attempts to influence political or military outcomes via information
manipulation.

Intrusion-Detection System (IDS): A system for detecting attempted intrusions.

IP: Internet Protocol, one of the primary protocols used in networking.

IPsec: Internet Protocol Security, a method used to secure VPNs.

IP spoofing: Making packets seem to come from a different IP address than they really originated
from.

Key Logger: Software that logs key strokes on a computer.

MAC Address: The physical address of a network card. It is a 6-byte hexadecimal number. The
first 3 bytes define the vendor.

Malware: Any software that has a malicious purpose, such as a virus or Trojan horse.

MS-CHAP: A Microsoft Extension to CHAP.

Multi-Alphabet Substitutions: Encryption methods that use more than one substitution alphabet.

NIC: Network interface card.

Packet Filter Firewall: A firewall that scans incoming packets and either allows them to pass or
rejects them. It only examines the header, not the data, and does not consider the context of the
data communication.

Penetration testing: Assessing the security of a system by attempting to break into the system.
Penetration testing is the activity of most sneakers.

Phreaker: Someone who hacks into phone systems.


Port Scan: Sequentially probing ports (attempting connections) to see which ones are open.

PPP: Point-to-Point Protocol, a somewhat older connection protocol.

PPTP: Point-to-Point Tunneling Protocol, an extension to PPP for VPNs.

Proxy Server: A device that hides your internal IP addresses and presents a single IP address to
the outside world.

Router: A device that connects two networks.

RSA: A public key encryption method developed in 1977 by three mathematicians: Ron Rivest,
Adi Shamir, and Leonard Adleman. The name RSA is derived from the first letter of each
mathematician’s last name.

RST cookie: A simple method for alleviating the danger of certain types of DoS attacks.

Script kiddy: A slang term for an unskilled person who purports to be a skilled hacker.

Smurf: A specific type of distributed denial of service attack.

Sneaker: Someone who is attempting to compromise a system in order to assess its vulnerability.

Sniffer: A program that captures data as it travels across a network. Also called a packet sniffer.

Snort: A widely used, open source, intrusion-detection system.

Social Engineering: The use of persuasion on human users in order to gain information required
to access a system.

SPAP: Shiva Password Authentication Protocol. SPAP is a proprietary version of PAP. This
protocol basically adds encryption to PAP.

Spoofing: Pretending to be something else, as when a packet might spoof another return IP address
(as in the smurf attack) or when a website is spoofing a well-known e-commerce site.

Spyware: Software that monitors computer use.

Stack Tweaking: A complex method for protecting a system against DoS attacks. This method
involves reconfiguring the operating system to handle connections differently.

Stateful Packet Inspection: A type of firewall that not only examines packets but knows the
context within which the packet was sent.
Symmetric Key System: An encryption method where the same key is used to encrypt and decrypt
the message.

SYN Cookie: A method for ameliorating the dangers of SYN floods.

SYN Flood: Sending a stream of SYN packets (requests for connection) then never responding,
thus leaving the connection half open.

Tribal Flood Network: A tool used to execute DDoS attacks.

Trin00: A tool used to execute DDoS attacks.

Trojan horse: Software that appears to have a valid and benign purpose but really has another,
nefarious purpose.

Virus: Software that is self-replicating and spreads like a biological virus.

War-Dialing: Dialing phones waiting for a computer to pick up. War-dialing is usually done via
some automated system.

War-driving: Driving and scanning for wireless networks that can be compromised.

White Hat Hacker: A hacker who does not break the law, often synonymous with ethical hacker.

Worm: A virus that can spread without human intervention.

References

 Security in Computing, Charles P. Pfleeger and Shari L. Pfleeger.
 Computer Security, Dieter Gollmann, John Wiley & Sons.
 Computer Security: Art and Science, Matt Bishop, Addison-Wesley.
 Principles of Information Security, Whitman, Thomson.
 Network Security, Kaufman, Perlman and Speciner, Pearson Education.
 Cryptography and Network Security, 5th Edition, William Stallings, Pearson Education.
 Introduction to Cryptography, Buchmann, Springer.
 Encyclopedia of Security, Mitch Tulloch, Microsoft Press.
