Wolkite University College of Computing and Informatics Department of IT

Chapter Four
Security Techniques
Introduction
Origin of Cryptography
Human beings have always had two inherent needs − (a) to communicate and share information,
and (b) to communicate selectively. These two needs gave rise to the art of coding messages
in such a way that only the intended people could access the information. Unauthorized
people could not extract any information, even if the scrambled messages fell into their hands.
The art and science of concealing messages to introduce secrecy in information security is
recognized as cryptography.
The word ‘cryptography’ was coined by combining two Greek words, ‘kryptos’ meaning hidden
and ‘graphein’ meaning writing.
History of Cryptography
The art of cryptography is considered to have been born along with the art of writing. As
civilizations evolved, human beings got organized in tribes, groups, and kingdoms. This led to
the emergence of ideas such as power, battles, supremacy, and politics. These ideas further
fueled the natural need of people to communicate secretly with selective recipients, which in
turn ensured the continuous evolution of cryptography as well.
The roots of cryptography are found in Roman and Egyptian civilizations.
Hieroglyph − the Oldest Cryptographic Technique
The first known evidence of cryptography can be traced to the use of ‘hieroglyphs’. Some 4000
years ago, the Egyptians used to communicate by messages written in hieroglyphs. This code
was a secret known only to the scribes who transmitted messages on behalf of the kings.
One such hieroglyph is shown below.

Later, scholars moved on to using simple mono-alphabetic substitution ciphers during 500
to 600 BC. This involved replacing the letters of a message with other letters according to some
secret rule. This rule became the key to retrieve the original message from the garbled one.
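Such a mono-alphabetic substitution can be sketched in Python (the shuffled alphabet below is an arbitrary illustrative key, not a historical one) −

```python
import string

# A mono-alphabetic substitution cipher: each letter is replaced
# according to a fixed secret mapping (the key). The shuffled
# alphabet here is an arbitrary example, not a historical key.
PLAIN = string.ascii_uppercase
KEY = "QWERTYUIOPASDFGHJKLZXCVBNM"  # example secret rule

enc_table = str.maketrans(PLAIN, KEY)
dec_table = str.maketrans(KEY, PLAIN)

cipher = "ATTACK".translate(enc_table)
print(cipher)                       # → QZZQEA (the garbled message)
print(cipher.translate(dec_table))  # → ATTACK (key holder recovers it)
```

Only a party who knows the full letter mapping (the key) can reverse the substitution.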

1
IAS Chapter 4 Lecture Note Prepared by Abraham A

An early Roman method of cryptography, popularly known as the Caesar Shift Cipher, relies on
shifting the letters of a message by an agreed number (three was a common choice). The
recipient of the message would then shift the letters back by the same number to obtain the
original message.
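A minimal sketch of the Caesar shift in Python −

```python
def caesar(text: str, shift: int) -> str:
    # Shift each letter by `shift` positions, wrapping around the
    # alphabet; non-letters pass through unchanged.
    out = []
    for ch in text:
        if ch.isalpha():
            base = ord('A') if ch.isupper() else ord('a')
            out.append(chr((ord(ch) - base + shift) % 26 + base))
        else:
            out.append(ch)
    return "".join(out)

cipher = caesar("HELLO", 3)
print(cipher)              # → KHOOR
print(caesar(cipher, -3))  # → HELLO (the recipient shifts back)
```

The agreed number (here, three) plays the role of the secret key.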

Steganography
Steganography is similar to cryptography but adds another dimension to it. In this method,
people not only want to protect the secrecy of information by concealing it, but they also want
to make sure any unauthorized person gets no evidence that the information even exists.
For example, invisible watermarking.
In steganography, an unintended recipient or an intruder is unaware of the fact that observed
data contains hidden information. In cryptography, an intruder is normally aware that data is
being communicated, because they can see the coded/scrambled message.


Evolution of Cryptography
During and after the European Renaissance, various Italian and Papal states led the rapid
proliferation of cryptographic techniques. Various analysis and attack techniques were
researched in this era to break the secret codes.
 Improved coding techniques such as Vigenere Coding came into existence in the
15th century, which shifted the letters in a message by a variable number of places
instead of shifting them all by the same number.
 Only after the 19th century did cryptography evolve from ad hoc approaches to
encryption to the more sophisticated art and science of information security.
 In the early 20th century, the invention of mechanical and electromechanical machines,
such as the Enigma rotor machine, provided more advanced and efficient means of
coding the information.
 During the period of World War II, both cryptography and cryptanalysis became
heavily mathematical.
With the advances taking place in this field, government organizations, military units, and some
corporate houses started adopting the applications of cryptography. They used cryptography to
guard their secrets from others. Now, the arrival of computers and the Internet has brought
effective cryptography within the reach of common people.
Modern Cryptography
Modern cryptography is the cornerstone of computer and communications security. Its
foundation is based on various concepts of mathematics such as number theory, computational-
complexity theory, and probability theory.
Characteristics of Modern Cryptography
There are three major characteristics that separate modern cryptography from the classical
approach.

Classic Cryptography
o It manipulates traditional characters, i.e., letters and digits, directly.
o It is mainly based on ‘security through obscurity’. The techniques employed for coding
were kept secret, and only the parties involved in communication knew about them.
o It requires the entire cryptosystem for communicating confidentially.

Modern Cryptography
o It operates on binary bit sequences.
o It relies on publicly known mathematical algorithms for coding the information. Secrecy
is obtained through a secret key, which is used as the seed for the algorithms. The
computational difficulty of the algorithms, the absence of the secret key, etc., make it
computationally infeasible for an attacker to obtain the original information even if he
knows the algorithm used for coding.
o It requires the parties interested in secure communication to possess only the secret key.

Context of Cryptography
Cryptology, the study of cryptosystems, can be subdivided into two branches −

o Cryptography
o Cryptanalysis

What is Cryptography?
Cryptography is the art and science of making a cryptosystem that is capable of providing
information security.
Cryptography deals with the actual securing of digital data. It refers to the design of
mechanisms based on mathematical algorithms that provide fundamental information security
services.
What is Cryptanalysis?
The art and science of breaking cipher text is known as cryptanalysis.
Cryptanalysis is the sister branch of cryptography, and the two co-exist. The cryptographic
process results in cipher text for transmission or storage; cryptanalysis involves the study of
cryptographic mechanisms with the intention of breaking them. Cryptanalysis is also used
during the design of new cryptographic techniques to test their security strength.


Note − Cryptography concerns the design of cryptosystems, while cryptanalysis studies the
breaking of cryptosystems.
Security Services of Cryptography
The primary objective of using cryptography is to provide the following four fundamental
information security services.
 Confidentiality
Confidentiality is the fundamental security service provided by cryptography. It is a security
service that keeps information from unauthorized persons. It is sometimes referred to
as privacy or secrecy.
Confidentiality can be achieved through numerous means, ranging from physical securing to
the use of mathematical algorithms for data encryption.
 Data Integrity
It is a security service that deals with identifying any alteration to the data. The data may get
modified by an unauthorized entity intentionally or accidentally. The integrity service
confirms whether data is intact since it was last created, transmitted, or stored by an authorized
user.
Data integrity cannot prevent the alteration of data, but provides a means for detecting whether
data has been manipulated in an unauthorized manner.
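A sketch of such detection using a cryptographic hash (SHA-256 from Python's standard library; the messages are invented examples) −

```python
import hashlib

# Detecting unauthorized modification: compare the digest computed
# when the data was stored with the digest of the data as received.
original = b"transfer 100 birr"
stored_digest = hashlib.sha256(original).hexdigest()

received = b"transfer 900 birr"  # tampered in transit
if hashlib.sha256(received).hexdigest() != stored_digest:
    print("integrity check failed: data was altered")
```

The hash cannot prevent the alteration, but any change to the data yields a different digest, so the manipulation is detected.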
 Authentication
Authentication provides the identification of the originator. It confirms to the receiver that the
data received has been sent only by an identified and verified sender.
Authentication service has two variants −
 Message authentication identifies the originator of the message without regard to the
router or system that has sent the message.
 Entity authentication is assurance that data has been received from a specific entity, say
a particular website.
 Non-repudiation
It is a security service that ensures that an entity cannot deny the ownership of a previous
commitment or an action. It is an assurance that the original creator of the data cannot deny the
creation or transmission of the said data to a recipient or third party.
Non-repudiation is a property that is most desirable in situations where there are chances of a
dispute over the exchange of data. For example, once an order is placed electronically, a
purchaser cannot deny the purchase order, if non-repudiation service was enabled in this
transaction.

Cryptography Primitives
Cryptography primitives are nothing but the tools and techniques in Cryptography that can be
selectively used to provide a set of desired security services −

o Encryption
o Hash functions
o Message Authentication codes (MAC)
o Digital Signatures
The following table shows the primitives that can achieve a particular security service on their
own.
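Two of these primitives can be sketched directly with Python's standard library (the key and message below are arbitrary examples) −

```python
import hashlib
import hmac

data = b"important message"

# Hash function: a fixed-size digest of the data; any change to the
# data changes the digest (data integrity).
digest = hashlib.sha256(data).hexdigest()
print(digest)

# Message Authentication Code (MAC): a keyed hash, so only holders
# of the shared secret key can produce or verify the tag
# (integrity plus message authentication).
key = b"shared secret key"  # example key
tag = hmac.new(key, data, hashlib.sha256).hexdigest()
expected = hmac.new(key, data, hashlib.sha256).hexdigest()
print(hmac.compare_digest(tag, expected))  # → True
```

Encryption and digital signatures, the other two primitives, require cipher and public-key machinery beyond the standard library and are discussed later in this chapter.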

Cryptosystems
A cryptosystem is an implementation of cryptographic techniques and their accompanying
infrastructure to provide information security services. A cryptosystem is also referred to as
a cipher system.
Let us discuss a simple model of a cryptosystem that provides confidentiality to the information
being transmitted. This basic model is depicted in the illustration below −

The illustration shows a sender who wants to transfer some sensitive data to a receiver in such a
way that any party intercepting or eavesdropping on the communication channel cannot extract
the data.
The objective of this simple cryptosystem is that at the end of the process, only the sender and
the receiver will know the plaintext.
Components of a Cryptosystem
The various components of a basic cryptosystem are as follows −


 Plaintext. It is the data to be protected during transmission.


 Encryption Algorithm. It is a mathematical process that produces a cipher text for any
given plaintext and encryption key. It is a cryptographic algorithm that takes plaintext
and an encryption key as input and produces a cipher text.
 Cipher text. It is the scrambled version of the plaintext produced by the encryption
algorithm using a specific encryption key. The cipher text is not guarded; it flows on the
public channel. It can be intercepted or compromised by anyone who has access to the
communication channel.
 Decryption Algorithm. It is a mathematical process that produces a unique plaintext for
any given cipher text and decryption key. It is a cryptographic algorithm that takes a
cipher text and a decryption key as input, and outputs a plaintext. The decryption
algorithm essentially reverses the encryption algorithm and is thus closely related to it.
 Encryption Key. It is a value that is known to the sender. The sender inputs the
encryption key into the encryption algorithm along with the plaintext in order to compute
the cipher text.
 Decryption Key. It is a value that is known to the receiver. The decryption key is related
to the encryption key, but is not always identical to it. The receiver inputs the decryption
key into the decryption algorithm along with the cipher text in order to compute the
plaintext.
For a given cryptosystem, a collection of all possible decryption keys is called a key space.
An interceptor (an attacker) is an unauthorized entity who attempts to determine the plaintext.
Types of Cryptosystems
Fundamentally, there are two types of cryptosystems based on the manner in which encryption-
decryption is carried out in the system –

 Private Key Cryptosystem (Symmetric Key Encryption)

 Public Key Cryptosystem (Asymmetric Key Encryption)

Private Key Cryptosystem (Symmetric Key Encryption)


The encryption process where the same key is used for encrypting and decrypting the
information is known as Symmetric Key Encryption.
The study of symmetric cryptosystems is referred to as symmetric cryptography. Symmetric
cryptosystems are also sometimes referred to as secret key cryptosystems.


A few well-known examples of symmetric key encryption methods are − Data Encryption
Standard (DES), Triple DES (3DES), International Data Encryption Algorithm (IDEA), and
BLOWFISH.

The salient features of cryptosystems based on symmetric key encryption are −

 Persons using symmetric key encryption must share a common key prior to the
exchange of information.
 Keys are recommended to be changed regularly to prevent any attack on the system.
 A robust mechanism needs to exist to exchange the key between the communicating
parties. As keys are required to be changed regularly, this mechanism becomes
expensive and cumbersome.
 In a group of n people, to enable two-party communication between any two persons,
the number of keys required for the group is n × (n − 1)/2.
 The length of the key (number of bits) in this encryption is smaller; hence, the process
of encryption-decryption is faster than asymmetric key encryption.
 The processing power of the computer system required to run a symmetric algorithm
is less.
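The n × (n − 1)/2 growth in the number of shared keys can be checked with a few lines of Python −

```python
# Number of pairwise secret keys needed so that any two of n users
# can communicate in a symmetric-key system: one key per pair.
def keys_needed(n: int) -> int:
    return n * (n - 1) // 2

print(keys_needed(10))   # → 45 keys for 10 users
print(keys_needed(100))  # → 4950 keys for 100 users
```

This quadratic growth is one reason key distribution becomes expensive and cumbersome as the group grows.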
Challenge of Symmetric Key Cryptosystem
There are two restrictive challenges of employing symmetric key cryptography.
 Key establishment − Before any communication, both the sender and the receiver need
to agree on a secret symmetric key. This requires a secure key establishment mechanism
to be in place.
 Trust Issue − since the sender and the receiver use the same symmetric key, there is an
implicit requirement that the sender and the receiver ‘trust’ each other. For example, it
may happen that the receiver has lost the key to an attacker and the sender is not
informed.


Public Key Cryptosystem (Asymmetric Key Encryption)


The encryption process where different keys are used for encrypting and decrypting the
information is known as Asymmetric Key Encryption. Though the keys are different, they are
mathematically related and hence, retrieving the plaintext by decrypting cipher text is feasible.
The process is depicted in the following illustration −

The salient features of this encryption scheme are as follows −


 Every user in this system needs to have a pair of dissimilar keys, private key and public
key. These keys are mathematically related − when one key is used for encryption, the
other can decrypt the cipher text back to the original plaintext.
 It requires putting the public key in a public repository and keeping the private key as a
well-guarded secret. Hence, this scheme of encryption is also called Public Key Encryption.
 Though the public and private keys of a user are related, it is computationally infeasible
to find one from the other. This is a strength of this scheme.
 When Host1 needs to send data to Host2, it obtains the public key of Host2 from the
repository, encrypts the data, and transmits it.
 Host2 uses its private key to extract the plaintext.
 The length of keys (number of bits) in this encryption is large; hence, the process of
encryption-decryption is slower than symmetric key encryption.
 The processing power of the computer system required to run an asymmetric algorithm
is higher.
Challenge of Public Key Cryptosystem
Public-key cryptosystems have one significant challenge − the user needs to trust that the public
key that he is using in communications with a person really is the public key of that person and
has not been spoofed by a malicious third party.


This is usually accomplished through a Public Key Infrastructure (PKI) consisting of a trusted
third party. The third party securely manages and attests to the authenticity of public keys.
When the third party is requested to provide the public key for any communicating person X,
they are trusted to provide the correct public key.
The third party satisfies itself about user identity by the process of attestation, notarization, or
some other process − that X is the one and only, or globally unique, X. The most common
method of making the verified public keys available is to embed them in a certificate which is
digitally signed by the trusted third party.
Relation between Encryption Schemes
A summary of basic key properties of two types of cryptosystems is given below –
                      Private (Symmetric) Key      Public (Asymmetric) Key
                      Cryptosystems                Cryptosystems
Relation b/n keys     Same                         Different, but mathematically related
Encryption key        Secret (shared) key          Public key
Decryption key        Secret (shared) key          Private key

Data Encryption Standard (DES)


The Data Encryption Standard (DES) is a symmetric-key block cipher published by the
National Institute of Standards and Technology (NIST).
DES is an implementation of a Feistel cipher. It uses a 16-round Feistel structure. The block size
is 64 bits. Though the key length is 64 bits, DES has an effective key length of 56 bits, since 8
of the 64 bits of the key are not used by the encryption algorithm (they function as check bits
only).
General Structure of DES is depicted in the following illustration −


Since DES is based on the Feistel Cipher, all that is required to specify DES is −

o Round function
o Key schedule
o Any additional processing − Initial and final permutation
Initial and Final Permutation
The initial and final permutations are straight permutation boxes (P-boxes) that are inverses of
each other. They have no cryptographic significance in DES. The initial and final permutations
are shown as follows −

Round Function
The heart of this cipher is the DES function, f. The DES function applies a 48-bit key to the
rightmost 32 bits to produce a 32-bit output.


 Expansion Permutation Box − Since the right input is 32 bits and the round key is 48
bits, we first need to expand the right input to 48 bits. The permutation logic is
graphically depicted in the following illustration −

 The graphically depicted permutation logic is generally described as a table in the DES
specification, as shown −

 XOR (Whitener) − After the expansion permutation, DES performs an XOR operation
on the expanded right section and the round key. The round key is used only in this
operation.
 Substitution Boxes − The S-boxes carry out the real mixing (confusion). DES uses 8 S-
boxes, each with a 6-bit input and a 4-bit output. Refer to the following illustration −

 The S-box rule is illustrated below −


 There are a total of eight S-box tables. The output of all eight S-boxes is then combined
into a 32-bit section.
DES Analysis
DES satisfies both of the desired properties of a block cipher. These two properties make the
cipher very strong.
 Avalanche effect − A small change in the plaintext results in a very large change in the
cipher text.
 Completeness − Each bit of the cipher text depends on many bits of the plaintext.
During the last few years, cryptanalysts have found some weaknesses in DES when the
selected keys are weak keys. Such keys shall be avoided.
DES has proved to be a very well designed block cipher. There have been no significant
cryptanalytic attacks on DES other than exhaustive key search.
 Triple DES
The speed of exhaustive key searches against DES after 1990 began to cause discomfort
amongst users of DES. However, users did not want to replace DES as it takes an enormous
amount of time and money to change encryption algorithms that are widely adopted and
embedded in large security architectures.
The pragmatic approach was not to abandon the DES completely, but to change the manner in
which DES is used. This led to the modified schemes of Triple DES (sometimes known as
3DES).
There are two variants of Triple DES known as 3-key Triple DES (3TDES) and 2-key Triple
DES (2TDES).
 3-KEY Triple DES
Before using 3TDES, the user first generates and distributes a 3TDES key K, which consists of
three different DES keys K1, K2 and K3. This means that the actual 3TDES key has length
3 × 56 = 168 bits. The encryption scheme is illustrated as follows −


The encryption-decryption process is as follows −


 Encrypt the plaintext blocks using single DES with key K1.

 Now decrypt the output of step 1 using single DES with key K2.

 Finally, encrypt the output of step 2 using single DES with key K3.

 The output of step 3 is the cipher text.

 Decryption of a cipher text is a reverse process. The user first decrypts using K3, then
encrypts with K2, and finally decrypts with K1.
Due to this design of Triple DES as an encrypt–decrypt–encrypt process, it is possible to use a
3TDES (hardware) implementation for single DES by setting K1, K2, and K3 to be the same
value. This provides backwards compatibility with DES.
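The encrypt-decrypt-encrypt pattern and its backwards-compatibility property can be sketched with a toy XOR "cipher" standing in for single DES (an illustration of the structure only, not real DES and not secure) −

```python
# Toy stand-ins for single DES encryption/decryption (XOR with the
# key; XOR is its own inverse). Real 3DES uses actual DES here.
def toy_encrypt(block: int, key: int) -> int:
    return block ^ key

def toy_decrypt(block: int, key: int) -> int:
    return block ^ key

# Triple DES EDE: encrypt with K1, decrypt with K2, encrypt with K3.
def ede_encrypt(block, k1, k2, k3):
    return toy_encrypt(toy_decrypt(toy_encrypt(block, k1), k2), k3)

# Decryption reverses the steps: decrypt K3, encrypt K2, decrypt K1.
def ede_decrypt(block, k1, k2, k3):
    return toy_decrypt(toy_encrypt(toy_decrypt(block, k3), k2), k1)

msg = 0b10110011
assert ede_decrypt(ede_encrypt(msg, 5, 9, 7), 5, 9, 7) == msg
# Setting K1 = K2 = K3 collapses EDE to a single encryption, which is
# the backwards compatibility with single DES described above.
assert ede_encrypt(msg, 5, 5, 5) == toy_encrypt(msg, 5)
```

The middle decryption step exists precisely so that equal keys cancel out, leaving plain single-DES behaviour.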
The second variant of Triple DES (2TDES) is identical to 3TDES except that K3 is replaced by
K1. In other words, the user encrypts plaintext blocks with key K1, then decrypts with key K2,
and finally encrypts with K1 again. Therefore, 2TDES has a key length of 112 bits.
Triple DES systems are significantly more secure than single DES, but they are clearly a much
slower process than encryption using single DES.
Advanced Encryption Standard
The most popular and widely adopted symmetric encryption algorithm likely to be encountered
nowadays is the Advanced Encryption Standard (AES). It is found to be at least six times faster
than Triple DES.
A replacement for DES was needed as its key size was too small. With increasing computing
power, it was considered vulnerable to exhaustive key search attack. Triple DES was designed
to overcome this drawback, but it was found to be slow.
The features of AES are as follows −

 Symmetric key symmetric block cipher


 128-bit data, 128/192/256-bit keys
 Stronger and faster than Triple-DES
 Full specification and design details publicly available
Operation of AES
AES is an iterative rather than a Feistel cipher. It is based on a ‘substitution–permutation
network’. It comprises a series of linked operations, some of which involve replacing inputs
with specific outputs (substitutions) and others of which involve shuffling bits around
(permutations).
Interestingly, AES performs all its computations on bytes rather than bits. Hence, AES treats the
128 bits of a plaintext block as 16 bytes. These 16 bytes are arranged in four columns and four
rows for processing as a matrix −
Unlike DES, the number of rounds in AES is variable and depends on the length of the key.
AES uses 10 rounds for 128-bit keys, 12 rounds for 192-bit keys and 14 rounds for 256-bit
keys. Each of these rounds uses a different 128-bit round key, which is calculated from the
original AES key.
The schematic of AES structure is given in the following illustration −

Encryption Process
Here, we restrict ourselves to a description of a typical round of AES encryption. Each round
comprises four sub-processes. The first-round process is depicted below −


 Byte Substitution (Sub Bytes)


The 16 input bytes are substituted by looking them up in a fixed table (S-box) given in the
design. The result is a matrix of four rows and four columns.

 Shift Rows
Each of the four rows of the matrix is shifted to the left. Any entries that ‘fall off’ are re-
inserted on the right side of the row. The shift is carried out as follows −
 First row is not shifted.

 Second row is shifted one (byte) position to the left.

 Third row is shifted two positions to the left.

 Fourth row is shifted three positions to the left.

 The result is a new matrix consisting of the same 16 bytes but shifted with respect to
each other.
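A sketch of these row shifts on an example 4×4 state (the byte values are arbitrary) −

```python
# AES ShiftRows on a 4x4 state matrix, represented as a list of rows.
# Row r is rotated left by r positions; entries that 'fall off' the
# left re-enter on the right. Row 0 is not shifted.
def shift_rows(state):
    return [row[r:] + row[:r] for r, row in enumerate(state)]

state = [
    [0, 1, 2, 3],
    [4, 5, 6, 7],
    [8, 9, 10, 11],
    [12, 13, 14, 15],
]
print(shift_rows(state))
# → [[0, 1, 2, 3], [5, 6, 7, 4], [10, 11, 8, 9], [15, 12, 13, 14]]
```

The result contains the same 16 bytes, only rearranged, as described above.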
 Mix Columns
Each column of four bytes is now transformed using a special mathematical function. This
function takes as input the four bytes of one column and outputs four completely new bytes,
which replace the original column. The result is another new matrix consisting of 16 new bytes.
It should be noted that this step is not performed in the last round.
 Add round key
The 16 bytes of the matrix are now considered as 128 bits and are XORed to the 128 bits of the
round key. If this is the last round then the output is the cipher text. Otherwise, the resulting 128
bits are interpreted as 16 bytes and we begin another similar round.
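The AddRoundKey step is a plain byte-wise XOR, which can be sketched as follows (the state and round-key bytes are arbitrary examples; a real implementation derives round keys from the key schedule) −

```python
# AES AddRoundKey: the 128-bit state (16 bytes) is XORed byte-by-byte
# with the 128-bit round key.
def add_round_key(state_bytes: bytes, round_key: bytes) -> bytes:
    return bytes(s ^ k for s, k in zip(state_bytes, round_key))

state = bytes(range(16))        # example state
round_key = bytes([0xFF] * 16)  # example round key
mixed = add_round_key(state, round_key)

# XORing with the same round key again restores the state, which is
# why decryption can undo this step exactly.
assert add_round_key(mixed, round_key) == state
```

Because XOR is its own inverse, this is the one AES step that is identical in encryption and decryption.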
Decryption Process
The process of decryption of an AES cipher text is similar to the encryption process in the
reverse order. Each round consists of the four processes conducted in the reverse order −


 Add round key


 Mix columns
 Shift rows
 Byte substitution
Since the sub-processes in each round occur in reverse order, unlike in a Feistel cipher, the
encryption and decryption algorithms need to be implemented separately, although they are
very closely related.
AES Analysis
In present-day cryptography, AES is widely adopted and supported in both hardware and
software. To date, no practical cryptanalytic attacks against AES have been discovered.
Additionally, AES has built-in flexibility of key length, which allows a degree of ‘future-
proofing’ against progress in the ability to perform exhaustive key searches.
However, just as for DES, the AES security is assured only if it is correctly implemented and
good key management is employed.
Digital signatures
Digital signatures are the public-key primitives of message authentication. In the physical
world, it is common to use handwritten signatures on handwritten or typed messages. They are
used to bind the signatory to the message.
Similarly, a digital signature is a technique that binds a person or entity to digital data. This
binding can be independently verified by the receiver as well as by any third party.
A digital signature is a cryptographic value that is calculated from the data and a secret key
known only to the signer.
In the real world, the receiver of a message needs assurance that the message belongs to the
sender, and the sender should not be able to repudiate the origination of that message. This
requirement is very crucial in business applications, since the likelihood of a dispute over
exchanged data is very high.
Model of Digital Signature
As mentioned earlier, the digital signature scheme is based on public key cryptography. The
model of digital signature scheme is depicted in the following illustration −


The following points explain the entire process in detail −


 Each person adopting this scheme has a public-private key pair.
 Generally, the key pairs used for encryption/decryption and signing/verifying are
different. The private key used for signing is referred to as the signature key and the
public key as the verification key.
 The signer feeds the data to the hash function and generates a hash of the data.
 The hash value and the signature key are then fed to the signature algorithm, which
produces the digital signature on the given hash. The signature is appended to the data,
and then both are sent to the verifier.
 The verifier feeds the digital signature and the verification key into the verification
algorithm. The verification algorithm gives some value as output.
 The verifier also runs the same hash function on the received data to generate a hash
value.
 For verification, this hash value and the output of the verification algorithm are
compared. Based on the comparison result, the verifier decides whether the digital
signature is valid.
 Since the digital signature is created with the ‘private’ key of the signer and no one else
can have this key, the signer cannot repudiate signing the data in the future.
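The hash-then-sign flow above can be sketched with toy textbook RSA (the tiny primes and message are illustrative only; real systems use large keys and padding schemes such as RSA-PSS, never raw RSA like this) −

```python
import hashlib

# Toy key pair: textbook RSA with tiny primes (utterly insecure sizes,
# for illustration of the model only).
p, q = 1009, 1013                   # toy primes
n = p * q                           # public modulus
e = 17                              # public (verification) exponent
d = pow(e, -1, (p - 1) * (q - 1))   # private (signature) exponent

def sign(data: bytes) -> int:
    # Hash the data, then apply the private signature key to the hash.
    h = int.from_bytes(hashlib.sha256(data).digest(), "big") % n
    return pow(h, d, n)

def verify(data: bytes, signature: int) -> bool:
    # Recompute the hash and compare with the signature raised to the
    # public verification exponent.
    h = int.from_bytes(hashlib.sha256(data).digest(), "big") % n
    return pow(signature, e, n) == h

msg = b"pay 100 birr to Alice"
sig = sign(msg)
print(verify(msg, sig))                       # valid signature
print(verify(b"pay 900 birr to Alice", sig))  # tampered data fails
```

Only the holder of the private exponent d can produce a signature that verifies, which is what gives the scheme its non-repudiation property.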
Importance of Digital Signature
Apart from the ability to provide non-repudiation of a message, the digital signature also
provides message authentication and data integrity. Let us briefly see how this is achieved by
the digital signature −
 Message authentication − When the verifier validates the digital signature using the
public key of the sender, he is assured that the signature has been created only by the
sender, who possesses the corresponding secret private key, and no one else.
 Data Integrity − In case an attacker has access to the data and modifies it, the digital
signature verification at the receiver's end fails. The hash of the modified data and the
output provided by the verification algorithm will not match. Hence, the receiver can
safely reject the message, assuming that data integrity has been breached.
 Non-repudiation − Since it is assumed that only the signer has knowledge of the
signature key, only he can create a unique signature on given data. Thus, the receiver
can present the data and the digital signature to a third party as evidence if any dispute
arises in the future.
Encryption with Digital Signature
In many digital communications, it is desirable to exchange encrypted messages rather than
plaintext to achieve confidentiality. In a public key encryption scheme, the public (encryption)
key of the receiver is available in the open domain, and hence anyone can spoof a sender's
identity and send an encrypted message to the receiver.
This makes it essential for users employing PKC for encryption to seek digital signatures along
with encrypted data to be assured of message authentication and non-repudiation.
This can be achieved by combining digital signatures with the encryption scheme. Let us
briefly discuss how to achieve this requirement. There are two possibilities: sign-then-
encrypt and encrypt-then-sign.
o However, a cryptosystem based on sign-then-encrypt can be exploited by the receiver to
spoof the sender's identity and send that data to a third party. Hence, this method is not
preferred.
o The process of encrypt-then-sign is more reliable and widely adopted. This is depicted in
the following illustration −

The receiver, after receiving the encrypted data and the signature on it, first verifies the
signature using the sender's public key. After ensuring the validity of the signature, he then
retrieves the data through decryption using his private key.
Benefits of Cryptography
Cryptography is an essential information security tool. It provides the four most basic services
of information security −


 Confidentiality − Encryption techniques can guard information and communication
from unauthorized revelation and access.
 Authentication − Cryptographic techniques such as MACs and digital signatures can
protect information against spoofing and forgeries.
 Data Integrity − Cryptographic hash functions play a vital role in assuring users about
data integrity.
 Non-repudiation − The digital signature provides the non-repudiation service to guard
against disputes that may arise due to the sender's denial of passing a message.
All these fundamental services offered by cryptography have enabled the conduct of business
over networks using computer systems in an extremely efficient and effective manner.
Drawbacks of Cryptography
Apart from the four fundamental elements of information security, there are other issues that
affect the effective use of information −
 A strongly encrypted, authentic, and digitally signed information can be difficult to
access even for a legitimate user at a crucial time of decision-making. The network or
the computer system can be attacked and rendered non-functional by an intruder.
 High availability, one of the fundamental aspects of information security, cannot be
ensured through the use of cryptography. Other methods are needed to guard against the
threats such as denial of service or complete breakdown of information system.
 Another fundamental need of information security of selective access control also
cannot be realized through the use of cryptography. Administrative controls and
procedures are required to be exercised for the same.
 Cryptography does not guard against the vulnerabilities and threats that emerge from
the poor design of systems, protocols, and procedures. These need to be fixed through
proper design and setting up of a defensive infrastructure.
 Cryptography comes at cost. The cost is in terms of time and money −
o Addition of cryptographic techniques in the information processing leads to
delay.
o The use of public key cryptography requires setting up and maintenance of public
key infrastructure requiring the handsome financial budget.
 The security of cryptographic technique is based on the computational difficulty of
mathematical problems. Any breakthrough in solving such mathematical problems or
increasing the computing power can render a cryptographic technique vulnerable.
Access Control
 Access control is a way of limiting access to a system or to physical or virtual resources.
That is, access control refers to the much more general way of controlling access to web
resources, including restrictions based on things like the time of day, the IP address of the
HTTP client browser, the domain of the HTTP client browser, the type of encryption the
HTTP client can support, the number of times the user has authenticated that day, the
possession of any number of types of hardware/software tokens, or any other derived
variables that can be extracted or calculated easily.
 It is a process by which users are granted access and certain privileges to systems,
resources, or information.
A more secure method of access control involves two-factor authentication. The person who
desires access must show credentials and a second factor to corroborate identity. The second
factor could be an access code, a PIN, or even a biometric reading.

Access control mechanisms are a necessary and crucial design element to any application's
security. In general, a web application should protect front-end and back-end data and system
resources by implementing access control restrictions on what users can do, which resources they
have access to, and what functions they are allowed to perform on the data. Ideally, an access
control scheme should protect against the unauthorized viewing, modification, or copying of
data. Additionally, access control mechanisms can also help limit malicious code execution, or
unauthorized actions through an attacker exploiting infrastructure dependencies (DNS server,
ACE server, etc.).
There are three factors that can be used for authentication:

 Something only known to the user, such as a password or PIN
 Something that is part of the user, such as a fingerprint, retina scan, or another biometric
measurement
 Something that belongs to the user, such as a card or a key
Before choosing the access control mechanisms specific to your web application, several steps
can help expedite and clarify the design process:

1. Try to quantify the relative value of the information to be protected in terms of
confidentiality, sensitivity, classification, privacy, and integrity related to the
organization as well as the individual users.

2. Determine the relative interaction that data owners and creators will have within the web
application.

3. Specify the process for granting and revoking user access control rights on the system,
whether it be a manual process, automatic upon registration or account creation, or
through an administrative front-end tool.

4. Clearly delineate the types of role-driven functions the application will support. Try to
determine which specific user functions should be built into the web application (logging
in, viewing their information, modifying their information, sending a help request, etc.)
as well as administrative functions (changing passwords, viewing any user's data,
performing maintenance on the application, viewing transaction logs, etc.).

5. Try to align your access control mechanisms as closely as possible to your organization's
security policy.

Types of Access Control

Discretionary Access Control (DAC)
 It is a means of restricting access to information based on the identity of users and/or
membership in certain groups. Access decisions are typically based on the authorizations
granted to a user based on the credentials he presented at the time of authentication (user
name, password, hardware/software token, etc.). In most typical DAC models, the owner
of information or any resource is able to change its permissions at his discretion (thus the
name).
 DAC is a kind of access control system that holds the owner responsible for deciding
who is allowed into a premises or unit.
 DAC has the drawback of the administrators not being able to centrally manage these
permissions on files/information stored on the web server.
A DAC access control model often exhibits one or more of the following attributes.
o Data Owners can transfer ownership of information to other users
o Data Owners can determine the type of access given to other users (read, write,
copy, etc.)
o Repetitive authorization failures to access the same resource or object generate
an alarm and/or restrict the user's access
o Special add-on or plug-in software may be required on the HTTP client to prevent
indiscriminate copying by users ("cutting and pasting" of information)
o Users who do not have access to information should not be able to determine its
characteristics (file size, file name, directory path, etc.)
o Access to information is determined based on authorizations to access control lists
based on user identifier and group membership.
A typical example of this system is Access Control Lists (ACLs).
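An ACL-based DAC check can be sketched in a few lines of Python. The resource name, users, and permission sets below are invented for illustration; the point is that the owner, not an administrator, changes permissions at their discretion:

```python
# Toy ACL table: resource -> {user: set of permissions}.
acl = {
    "report.docx": {"alice": {"read", "write"}, "bob": {"read"}},
}

def check_access(user, resource, permission):
    # Access decision: look up the user's authorizations on this resource.
    return permission in acl.get(resource, {}).get(user, set())

def grant(owner, user, resource, permission):
    # The "D" in DAC: the owner transfers access at their own discretion.
    # (Toy ownership check: anyone with "write" on the resource may grant.)
    if "write" in acl[resource].get(owner, set()):
        acl[resource].setdefault(user, set()).add(permission)

assert check_access("bob", "report.docx", "read")
assert not check_access("bob", "report.docx", "write")
grant("alice", "bob", "report.docx", "write")   # alice shares write access
assert check_access("bob", "report.docx", "write")
```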
Mandatory Access Control (MAC)
 The MAC system doesn't permit owners to have a say in the entities having access to a
unit or facility. Typically, it classifies all users and entities and provides labels that
permit them to pass through security and gain entry.
 MAC secures information by assigning sensitivity labels to information and comparing
them to the level of sensitivity at which a user is operating.
 MAC ensures that the enforcement of organizational security policy does not rely on
voluntary web application user compliance.
 In general, MAC access control mechanisms are more secure than DAC yet have
tradeoffs in performance and convenience to users. MAC mechanisms assign a security
level to all information, assign a security clearance to each user, and ensure that all users
only have access to the data for which they have a clearance. MAC is usually appropriate
for extremely secure systems, including multilevel secure military organizations or
mission-critical data applications.
A MAC access control model often exhibits one or more of the following attributes.

o Only administrators, not data owners, make changes to a resource's security label.

o All data is assigned a security level that reflects its relative sensitivity, confidentiality, and
protection value.

o All users can read from a lower classification than the one they are granted (A "secret"
user can read an unclassified document).

o All users can write to a higher classification (A "secret" user can post information to a
Top-Secret resource).

o All users are given read/write access to objects only of the same classification (a "secret"
user can only read/write to a secret document).

o Access is authorized or restricted to objects based on the time of day depending on the
labeling on the resource and the user's credentials (driven by policy).

o Access is authorized or restricted to objects based on the security characteristics of the
HTTP client (e.g. SSL bit length, version information, originating IP address or domain,
etc.)
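The read/write rules above can be sketched as a toy label comparison, in the style of a Bell-LaPadula multilevel check; the level names and their ordering are illustrative:

```python
# Toy multilevel security (MLS) check, illustrative only.
LEVELS = {"unclassified": 0, "confidential": 1, "secret": 2, "top-secret": 3}

def can_read(user_clearance, data_label):
    # A user may read data at or below their clearance
    # (a "secret" user can read an unclassified document).
    return LEVELS[user_clearance] >= LEVELS[data_label]

def can_write(user_clearance, data_label):
    # A user may write to data at or above their clearance
    # (a "secret" user can post information to a top-secret resource).
    return LEVELS[user_clearance] <= LEVELS[data_label]

assert can_read("secret", "unclassified")       # read down: allowed
assert not can_read("secret", "top-secret")     # read up: denied
assert can_write("secret", "top-secret")        # write up: allowed
assert not can_write("secret", "confidential")  # write down: denied
```

Only an administrator would be allowed to change the `LEVELS` assignments, matching the first MAC attribute listed above.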

Role-Based Access Control Technology (RBAC)

 Also known as Non-Discretionary Access Control, this is probably one of the most
demanded and successful technologies utilized in access control systems.
 In the RBAC model, the access provided is stringently based on the subject’s role in the
organization.

 In Role-Based Access Control (RBAC), access decisions are based on an individual's
roles and responsibilities within the organization or user base. The process of defining
roles is usually based on analyzing the fundamental goals and structure of an organization
and is usually linked to the security policy.
 An RBAC access control framework should provide web application security
administrators with the ability to determine who can perform what actions, when, from
where, in what order, and in some cases under what relational circumstances.

The following aspects exhibit RBAC attributes to an access control model.

o Roles are assigned based on organizational structure with emphasis on the organizational
security policy

o Roles are assigned by the administrator based on relative relationships within the
organization or user base. For instance, a manager would have certain authorized
transactions over his employees. An administrator would have certain authorized
transactions over his specific realm of duties (backup, account creation, etc.)

o Each role is designated a profile that includes all authorized commands, transactions, and
allowable information access.

o Roles are granted permissions based on the principle of least privilege.

o Roles are activated statically and dynamically as appropriate to certain relational triggers
(help desk queue, security alert, initiation of a new project, etc.)

o Roles can only be transferred or delegated using strict sign-offs and procedures.

o Roles are managed centrally by a security administrator or project leader.
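The role-to-permission indirection at the heart of RBAC can be sketched as follows; all role, user, and permission names below are invented for illustration:

```python
# Toy RBAC model: roles carry permissions; users are assigned roles and
# never receive permissions directly.
role_permissions = {
    "manager": {"approve_leave", "view_reports"},
    "admin":   {"create_account", "run_backup"},
}
user_roles = {"alice": {"manager"}, "bob": {"admin"}}

def is_authorized(user, permission):
    # Least privilege: a user holds only the permissions of assigned roles.
    return any(permission in role_permissions.get(role, set())
               for role in user_roles.get(user, set()))

assert is_authorized("alice", "view_reports")   # manager role grants this
assert not is_authorized("alice", "run_backup") # admin-only transaction
```

Revoking a role from `user_roles` instantly removes every permission it carried, which is what makes central management by a security administrator practical.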

Firewall
A firewall is a network device that isolates an organization's internal network from a larger
outside network such as the Internet. It can be hardware, software, or a combined system that
prevents unauthorized access to or from the internal network.
All data packets entering or leaving the internal network pass through the firewall, which
examines each packet and blocks those that do not meet the specified security criteria.

Deploying firewall at network boundary is like aggregating the security at a single point. It is
analogous to locking an apartment at the entrance and not necessarily at each door.
Firewall is considered as an essential element to achieve network security for the following
reasons −
 Internal network and hosts are unlikely to be properly secured.
 Internet is a dangerous place with criminals, users from competing companies,
disgruntled ex-employees, spies from unfriendly countries, vandals, etc.
 To prevent an attacker from launching denial of service attacks on network resource.
 To prevent illegal modification/access to internal data by an outsider attacker.

A firewall is categorized into three basic types −

 Packet filter (stateless and stateful)
 Application-level gateway
 Circuit-level gateway
These three categories, however, are not mutually exclusive. Modern firewalls have a mix of
abilities that may place them in more than one of the three categories.

Packet Filtering (Stateless & Stateful) Firewall
In this type of firewall deployment, the internal network is connected to the external
network/Internet via a router firewall. The firewall inspects and filters data packet by packet.
It allows or blocks packets mostly based on criteria such as source and/or destination IP
addresses, protocol, source and/or destination port numbers, and various other parameters
within the IP header.
The decision can also be based on factors other than IP header fields, such as the ICMP message
type or the TCP SYN and ACK bits.
A packet filter rule has two parts −
 Selection criteria − This part is used as a condition and pattern match for decision making.
 Action field − This part specifies the action to be taken if an IP packet meets the selection
criteria. The action could be either to block (deny) or to permit (allow) the packet across the
firewall.
Packet filtering is generally accomplished by configuring Access Control Lists (ACL) on
routers or switches. ACL is a table of packet filter rules.
As traffic enters or exits an interface, the firewall applies the ACL from top to bottom to each
incoming packet, finds the matching criteria, and either permits or denies the individual packet.
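The top-to-bottom, first-match evaluation of an ACL can be sketched as a toy packet filter; the rules and packet fields below are illustrative, not a real firewall configuration:

```python
# Toy stateless packet filter: rules applied top to bottom, first match wins.
rules = [
    {"proto": "tcp", "dst_port": 80,   "action": "allow"},  # permit web traffic
    {"proto": "tcp", "dst_port": 23,   "action": "deny"},   # block telnet
    {"proto": "any", "dst_port": None, "action": "deny"},   # implicit deny-all
]

def filter_packet(packet):
    for rule in rules:
        proto_ok = rule["proto"] in ("any", packet["proto"])
        port_ok = rule["dst_port"] in (None, packet["dst_port"])
        if proto_ok and port_ok:
            return rule["action"]   # first matching rule decides
    return "deny"                   # default deny if nothing matches

assert filter_packet({"proto": "tcp", "dst_port": 80}) == "allow"
assert filter_packet({"proto": "tcp", "dst_port": 23}) == "deny"
assert filter_packet({"proto": "udp", "dst_port": 53}) == "deny"
```

Note that each packet is judged in isolation, which is exactly the rigidity of a stateless filter described next.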

A stateless firewall is a kind of rigid tool. It looks at a packet and allows it if it meets the criteria,
even if it is not part of any established, ongoing communication.
Stateful firewalls offer a more in-depth inspection method than the purely ACL-based packet
inspection of stateless firewalls.
Stateful firewall monitors the connection setup and teardown process to keep a check on
connections at the TCP/IP level. This allows them to keep track of connections state and
determine which hosts have open, authorized connections at any given point in time.

They reference the rule base only when a new connection is requested. Packets belonging to
existing connections are compared to the firewall's state table of open connections, and a decision
to allow or block is taken. This process saves time and provides added security as well. No
packet is allowed to cross the firewall unless it belongs to an already established connection. The
firewall can time out inactive connections, after which it no longer admits packets for that
connection.
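The state-table logic described above can be sketched as follows; the addresses and the one-entry rule base are invented for illustration:

```python
# Toy stateful inspection: new connections consult the rule base once;
# later packets are checked against the connection (state) table instead.
allowed_new = {("10.0.0.5", "93.184.216.34", 443)}   # illustrative rule base
state_table = set()                                   # open connections

def handle(packet):
    key = (packet["src"], packet["dst"], packet["dst_port"])
    if key in state_table:
        return "allow"            # belongs to an established connection
    if packet["syn"] and key in allowed_new:
        state_table.add(key)      # record the connection setup
        return "allow"
    return "deny"                 # not established, not an allowed new setup

# Connection setup consults the rule base; later packets hit the state table.
assert handle({"src": "10.0.0.5", "dst": "93.184.216.34",
               "dst_port": 443, "syn": True}) == "allow"
assert handle({"src": "10.0.0.5", "dst": "93.184.216.34",
               "dst_port": 443, "syn": False}) == "allow"
# A packet from another host with no established connection is dropped.
assert handle({"src": "10.0.0.9", "dst": "93.184.216.34",
               "dst_port": 443, "syn": False}) == "deny"
```

A real firewall would also watch the teardown (FIN/RST) and time out idle entries; this sketch keeps only the lookup order.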
Application Level Gateways
An application-level gateway acts as a relay node for application-level traffic. It intercepts
incoming and outgoing packets, runs proxies that copy and forward information across
the gateway, and functions as a proxy server, preventing any direct connection between a
trusted server or client and an untrusted host.
The proxies are application specific. They can filter packets at the application layer of the OSI
model.
Application-specific Proxies

An application-specific proxy accepts only packets generated by the application for which
it is designed to copy, forward, and filter. For example, only a Telnet proxy can copy,
forward, and filter Telnet traffic.
If a network relies only on an application-level gateway, incoming and outgoing packets cannot
access services that have no proxies configured. For example, if a gateway runs FTP and
Telnet proxies, only packets generated by these services can pass through the firewall. All other
services are blocked.
Application-level Filtering
An application-level proxy gateway examines and filters individual packets, rather than simply
copying them and blindly forwarding them across the gateway. Application-specific proxies
check each packet that passes through the gateway, verifying the contents of the packet up
through the application layer. These proxies can filter particular kinds of commands or
information in the application protocols.
Circuit-Level Gateway
The circuit-level gateway is an intermediate solution between the packet filter and the
application gateway. It runs at the transport layer and hence can act as proxy for any
application.
Similar to an application gateway, the circuit-level gateway also does not permit an end-to-end
TCP connection across the gateway. It sets up two TCP connections and relays the TCP
segments from one network to the other. But it does not examine the application data like an
application gateway does. Hence, it is sometimes called a 'Pipe Proxy'.
SOCKS
SOCKS is a circuit-level gateway. It is a networking proxy mechanism that enables hosts on one
side of a SOCKS server to gain full access to hosts on the other side without requiring direct IP
reachability. The client connects to the SOCKS server at the firewall. Then the client enters a
negotiation for the authentication method to be used, and authenticates with the chosen method.
The client sends a connection relay request to the SOCKS server, containing the desired
destination IP address and transport port. The server accepts the request after checking that the
client meets the basic filtering criteria. Then, on behalf of the client, the gateway opens a
connection to the requested untrusted host and then closely monitors the TCP handshaking that
follows.
The SOCKS server informs the client, and in case of success, starts relaying the data between
the two connections. Circuit level gateways are used when the organization trusts the internal
users, and does not want to inspect the contents or application data sent on the Internet.
Firewall Deployment with DMZ
A firewall is a mechanism used to control network traffic ‘into’ and ‘out’ of an organizational
internal network. In most cases these systems have two network interfaces, one for the external
network such as the Internet and the other for the internal side.
The firewall process can tightly control what is allowed to traverse from one side to the other.
An organization that wishes to provide external access to its web server can restrict all traffic
arriving at the firewall except for port 80 (the standard HTTP port). All other traffic, such as mail
traffic, FTP, SNMP, etc., is not allowed across the firewall into the internal network.
An example of a simple firewall is shown in the following diagram.
[Diagram: a simple firewall with two network interfaces between the internal network and the Internet]
In the above simple deployment, though all other accesses from outside are blocked, it is
possible for an attacker to contact not only a web server but any other host on internal network
that has left port 80 open by accident or otherwise.
Hence, the problem most organizations face is how to enable legitimate access to public
services such as web, FTP, and e-mail while maintaining tight security of the internal network.
The typical approach is deploying firewalls to provide a Demilitarized Zone (DMZ) in the
network.
In this setup (illustrated in following diagram), two firewalls are deployed; one between the
external network and the DMZ, and another between the DMZ and the internal network. All
public servers are placed in the DMZ.
[Diagram: firewall deployment with a DMZ, using two firewalls and public servers in the DMZ]
Intrusion Detection Systems (IDS)
 Intrusion detection systems sit off to the side of the network, monitoring traffic at many
different points, and provide visibility into the security state of the network. When an IDS
reports an anomaly, corrective actions are initiated by the network administrator or by
another device on the network.

 An intrusion detection system (IDS) is a device or software application that monitors
network or system activities for malicious activities or policy violations and produces
reports to a management station. Some systems may attempt to stop an intrusion attempt,
but this is neither required nor expected of a monitoring system.
 Intrusion detection systems are primarily focused on identifying possible incidents,
logging information about them, and reporting attempts. In addition, organizations use
IDPSes for other purposes, such as identifying problems with security policies,
documenting existing threats and deterring individuals from violating security policies.
IDPSes have become a necessary addition to the security infrastructure of nearly every
organization.
 IDPSs typically record information related to observed events, notify security
administrators of important observed events, and produce reports. Many IDPSes can also
respond to a detected threat by attempting to prevent it from succeeding. They use
several response techniques, which involve the IDPS stopping the attack itself, changing
the security environment (e.g. reconfiguring a firewall), or changing the attack’s content.
Types of IDS: There are two main types of IDS:
Host-Based Detection
By installing a software agent on all hosts, host-based IDSs monitor all system activities, log
files, and network traffic received. Host-based systems might also monitor the OS, system calls,
audit logs, and error messages on the host system. While network probes can detect an attack,
only host-based systems can determine whether the attack was successful. Additionally, host-
based systems can record what the attacker performed on the compromised host.
Not all attacks are performed over the network. By gaining physical access to a computer
system, an intruder can attack a system or data without generating any network traffic. Host-
based systems can detect attacks that are out-of-band or performed from the console, but a savvy
intruder with knowledge of the IDSs can quickly disable any detection software once physical
access is accomplished.
Another advantage for host-based monitoring is its capability to detect stealth attacks. Some
network-based IDS systems can be fooled by playing with the fragmentation or TTL of the
packets. For a host to process traffic, it must be received and reassembled, and, therefore, it’s
subject to monitoring by the host-based IDS.
Host-Based Drawbacks
Host-based IDSs have four key disadvantages or weaknesses:
 Manageability
 Compromised hosts
 Micro view of network attacks
 Operating system limitations
Network-Based IDSs
 Network-based IDSs uses probes or sensors installed throughout the network. These
probes sniff the network looking for traffic that matches a defined profile or signature.
Sensors receive and analyze traffic in real time. Once a traffic pattern or signature is
recognized, the probe sends an alert to the management platform and can be configured
to take action to prevent any additional access.
 Sniffing the network is a term used to describe a network interface that receives all
network traffic.
 Network-based detection has the advantage of seeing intrusions from a network
perspective. While host-based detection can’t detect a ping sweep or a port scan across
multiple hosts, network-based IDSs can easily detect such reconnaissance attacks.
Network-based sensors generate an alert when these reconnaissance attacks are
discovered.
 Network-based IDSs don’t depend on the resources of a host machine and, therefore,
won’t use valuable network and CPU cycles on mission-critical servers. Additionally,
because there’s no software to install, there are no issues with OS compatibility. With
network-based IDSs, the IDS software runs only on the director and the sensors.
To ensure proper security, the network-based IDS should have a separate and highly secure
management network. The director platforms and sensors will communicate with one another via
this secured management network. Each probe will use this network to send alarms to the
director platform, and the director platform will use this management network to configure and
update each network sensor.
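The signature-matching step that a network-based sensor performs on sniffed traffic can be sketched as follows. The two "signatures" below are simplified illustrations for a toy detector, not real detection rules:

```python
import re

# Toy signature database: name -> compiled pattern to match against payloads.
signatures = {
    "sql-injection": re.compile(rb"union\s+select", re.IGNORECASE),
    "path-traversal": re.compile(rb"\.\./\.\./"),
}

def inspect(payload: bytes):
    # Return the names of all signatures the payload matches; a real sensor
    # would raise an alert to the director platform for each match.
    return [name for name, pattern in signatures.items()
            if pattern.search(payload)]

assert inspect(b"GET /?id=1 UNION SELECT password FROM users") == ["sql-injection"]
assert inspect(b"GET /../../etc/passwd") == ["path-traversal"]
assert inspect(b"GET /index.html") == []
```

This also makes the encryption drawback concrete: if the payload arrives encrypted, `inspect` sees only ciphertext and the patterns never match.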
Network-Based Drawbacks
Network-based intrusion detection has four basic weaknesses:
 Bandwidth
 TTL manipulation
 Packet fragmentation and reassembly
 Encryption

Authentication

 Authentication is about validating your credentials like User Name/User ID and password
to verify your identity. The system determines whether you are what you say you are
using your credentials. In public and private networks, the system authenticates the user
identity via login passwords.

 Authentication is used by a server when the server needs to know exactly who is
accessing their information or site. Usually, authentication by a server entails the use of a
user name and password. Other ways to authenticate can be through cards, retina scans,
voice recognition, and fingerprints.

 Authentication is also used by a client when the client needs to know that the server is the
system it claims to be. Authenticating the server usually involves the server giving a certificate
to the client, in which a trusted third party such as Verisign or Thawte states that the server
belongs to the entity (such as a bank) that the client expects it to.

 In authentication, the user or computer has to prove its identity to the server or client.

 Authentication does not determine what tasks the individual can do or what files the
individual can see. Authentication merely identifies and verifies who the person or
system is.

 Authentication is the first step of authorization, so it always comes first.

For example, students of a particular university are required to authenticate themselves before
accessing the student link of the university’s official website. This is called authentication.

Based on the security level, authentication factor can vary from one of the following:

o Single-Factor Authentication – It's the simplest authentication method, commonly
relying on a simple password to grant user access to a particular system such as
a website or a network. The person can request access to the system using only one of the
credentials to verify his identity. The most common example of single-factor
authentication is login credentials which only require a password against a
username.

o Two-Factor Authentication – It's a two-step verification process which requires not only
a username and password but also an additional piece of confidential information, such as
an ATM PIN, to ensure an extra level of security. Using a username and password along
with an additional piece of confidential information makes it far harder for fraudsters to
steal valuable data.

o Multi-Factor Authentication – It's the most advanced method of authentication, which
uses two or more levels of security from independent categories of authentication to grant
user access to the system. All the factors should be independent of each other to eliminate
any vulnerability in the system. Financial organizations, banks, and law enforcement
agencies use multi-factor authentication to safeguard their data and applications from
potential threats.
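A common second factor in practice is a time-based one-time password (TOTP), the mechanism behind authenticator apps. The sketch below follows the HOTP/TOTP construction (RFC 4226/6238) using only the standard library; the shared secret is illustrative:

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, timestamp: int, step: int = 30, digits: int = 6) -> str:
    # Counter = number of 30-second steps since the epoch (RFC 6238).
    counter = struct.pack(">Q", timestamp // step)
    mac = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                 # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

secret = b"shared-2fa-secret"   # illustrative; provisioned out of band
now = int(time.time())

# Server and client derive the same short-lived code from the shared secret.
assert totp(secret, now) == totp(secret, now)
assert len(totp(secret, now)) == 6 and totp(secret, now).isdigit()
```

Because the code depends on the current time step, a stolen code is useless a minute later, which is what makes it a meaningful second factor alongside the password.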

For example, when you insert your ATM card into the ATM machine, the machine asks you to
enter your PIN. After you enter the PIN correctly, the bank confirms your identity: that the
card really belongs to you and you're the rightful owner of the card. By validating your ATM
card PIN, the bank actually verifies your identity, which is called authentication. It merely
identifies who you are, nothing else.
Authorization
 Authorization is a process by which a server determines if the client has permission to
use a resource or access a file.
 The type of authentication required for authorization may vary; passwords may be
required in some cases but not in others.

 Authorization is used when a person shows his or her boarding pass to the flight attendant
so he or she can board the specific plane he or she is supposed to be flying on. A flight
attendant must authorize a person so that person can then see the inside of the plane and
use the resources the plane has to fly from one place to the next.

 Authorization occurs after your identity is successfully authenticated by the system,
which ultimately gives you permission to access resources such as information,
files, databases, funds, and locations. In simple terms, authorization
determines your ability to access the system and to what extent. Once your identity is
verified by the system after successful authentication, you are then authorized to access
the resources of the system.

For example, the process of verifying and confirming an employee's ID and password in an
organization is called authentication, but determining which employee has access to which floor
is called authorization. Let's say you are traveling and you're about to board a flight. When you
show your ticket and some identification before checking in, you receive a boarding pass which
confirms that the airport authority has authenticated your identity. But that’s not it. A flight
attendant must authorize you to board the flight you’re supposed to be flying on, allowing you
access to the inside of the plane and its resources.

Access to a system is protected by both authentication and authorization. Any attempt to access
the system might be authenticated by entering valid credentials, but it can only be accepted after
successful authorization. If the attempt is authenticated but not authorized, the system will deny
access to the system.
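The two steps, and their fixed order, can be sketched together; the usernames, password, and floor permissions below are invented for illustration:

```python
import hashlib

# Stored credentials (password hash) and per-user access rights.
users = {"alice": hashlib.sha256(b"s3cret").hexdigest()}
permissions = {"alice": {"floor-2", "floor-3"}}

def authenticate(username, password: bytes):
    # Step 1: verify identity by comparing password hashes.
    return users.get(username) == hashlib.sha256(password).hexdigest()

def authorize(username, resource):
    # Step 2: only after authentication, check access rights.
    return resource in permissions.get(username, set())

assert authenticate("alice", b"s3cret")       # identity verified
assert authorize("alice", "floor-2")          # access granted
assert not authorize("alice", "floor-9")      # authenticated but not authorized
```

An attempt that passes `authenticate` but fails `authorize` is exactly the "authenticated but not authorized" case described above, and the system denies it.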
