Riassunto Computer Security
Computer security deals with the prevention and detection of unauthorized actions by users of a computer
system.
Information security is more general: it deals with information independently of computer systems.
Security concerns the protection of assets (= valuable resources) from threats, which are the potential for abuse of assets. Owners value (= consider important) their assets and want to protect them from threat agents, who seek to abuse them. Owners analyse threats to determine which ones apply; these are the risks. This guides the selection of countermeasures, which reduce the vulnerabilities as much as possible.
Security properties:
There is also authorization. The difference between authentication and authorization is that the former
verifies the identity of a user or service whilst the latter determines their access rights.
Protection countermeasures:
Prevention: Try to prevent security breaches by system design and employing appropriate security
technologies as defences (e.g., firewall).
Detection: In the event of a security breach, we try to ensure that it will be detected.
Response: In the event of a security breach, we must respond or recover the assets.
Security solution:
Security analysis: surveys the threats which pose risks to assets, and then proposes policy and
solutions at an appropriate cost.
Threat model: documents the possible threats to a system, imagining all the vulnerabilities which
might be exploited.
Risk assessment: studies the likelihood of each threat in the system environment and assigns a cost
value, to find the risks.
Security policy: addresses the threats and describes a coherent set of countermeasures. The costs of the countermeasures are compared against the risks and juggled to make a sensible trade-off.
Introduction to Cryptography
Fundamental concepts (cryptography, cryptanalysis, general cryptographic schema)
Cryptography is the technology that enables us to turn untrustworthy channels of communication into
trustworthy ones, achieving confidentiality, integrity, and authentication (and sometimes non-repudiation
as well). It’s the science of secret writing. Contrary to steganography, the message we want to convey is not
hidden, it’s scrambled.
Security depends on secrecy of the key used to encrypt or decrypt the message, not of the algorithm. There
are two types of cryptographic algorithms:
Symmetric algorithms, in which the same key is used to encrypt and decrypt a message, or two distinct keys are used that are easily derived from each other.
Asymmetric or public key algorithms: Different keys, which cannot be derived from each other.
Public key can be published without compromising private key.
Unconditional security: measured using information theory and corresponds to the case in which
even if the adversary has unbounded computing power, they are not able to break the system.
Conditional security: measured using complexity theory and corresponds to the case in which the system can be broken in principle, but this requires more computing power than a realistic adversary would have.
Cryptanalysis is the science of recovering the plaintext (or preferably the key) from ciphertext without the
key. There are two approaches:
Brute-force attack: a method of defeating a cryptographic scheme by trying every key. It’s always
possible, but its cost depends on key size and it assumes that plaintext is known or recognisable.
Cryptanalytic attack: there are different kinds of cryptanalytic attacks:
o Ciphertext only: From the ciphertext, deduce the plaintext or the algorithm to compute it.
o Known plaintext: Given an original message and its ciphertext, deduce the inverse key or an algorithm to compute any original message from its ciphertext.
o Chosen plaintext: Same as above, but the cryptanalyst can choose the message to start
with.
o Adaptive chosen plaintext: The cryptanalyst can not only choose the plaintext, but can also adapt subsequent plaintexts based on previous encryption results.
o Chosen ciphertext: Cryptanalyst can choose different ciphertexts to be decrypted and gets
access to the decrypted plaintext.
Model of attack:
To construct an encryption scheme requires fixing a message space M (which is a subset of A*, where A is
the alphabet and is a finite set), a ciphertext space C, and a key space K, as well as encryption
transformations ({Ee : e ϵ K}, a set of bijective functions from M to C) and corresponding decryption
transformations ({Dd : d ϵ K}, a set of bijective functions from C to M). (e, d) forms a key pair, whose two components can be identical. The message m ϵ M is also called plaintext.
An encryption scheme ({Ee : e ϵ K}, {Dd : d ϵ K}) is symmetric-key if for each pair (e, d), e and d are computationally “easy” to derive from each other. Basically, sender and recipient share a common key. All classical algorithms are of this type, since public-key cryptography was only invented in the 1970s, and symmetric encryption is by far the most used type of encryption.
Substitution cryptography:
Cipher: Replace letters. A block cipher is an encryption scheme that breaks up the plaintext
message into strings (blocks) of fixed length t and encrypts one block at a time. If the fixed length t
is one, it’s called stream cipher.
Code: Replace words. The translation is given by a code book that associates each word of the
message with a code.
Caesar cipher: Each plaintext character is replaced by the character three to the right modulo 26 (0
= A and 25 = Z).
ROT13: shift each letter by 13.
Alphanumeric: substitute numbers for letters.
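The shift ciphers above can be sketched in a few lines; a minimal illustration, assuming an uppercase A–Z alphabet:

```python
# Toy Caesar-style shift cipher (assumes uppercase A-Z, with 0 = A and 25 = Z).
def shift_cipher(text, shift):
    return "".join(chr((ord(c) - ord("A") + shift) % 26 + ord("A")) for c in text)

# Caesar is shift = 3; ROT13 is shift = 13 and is its own inverse.
```

Decryption is the same function applied with the negated shift (or, for ROT13, the same shift again).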
Monoalphabetic substitution ciphers: We can generalise the Caesar cipher by allowing an arbitrary substitution. In this case, the key space K is the set of all permutations on alphabet A. Another example of monoalphabetic substitution ciphers are affine ciphers, in which e(m) = (a·m + b) mod |A| and d(c) = a⁻¹(c − b) mod |A|. The positive integers a and b are the key of the cipher, and the numbers a and |A| must be relatively prime, that is, the only positive integer that divides both of them must be 1, in order to be able to decrypt the message using the modular multiplicative inverse of a, written a⁻¹.
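A sketch of the affine cipher just described, assuming A is the uppercase Latin alphabet (so |A| = 26):

```python
# Toy affine cipher over A-Z: e(m) = (a*m + b) mod 26, d(c) = a^-1 * (c - b) mod 26.
# a must be relatively prime with 26 so that its modular inverse exists.
def affine_encrypt(msg, a, b):
    return "".join(chr((a * (ord(c) - 65) + b) % 26 + 65) for c in msg)

def affine_decrypt(ct, a, b):
    a_inv = pow(a, -1, 26)  # modular multiplicative inverse of a (Python 3.8+)
    return "".join(chr((a_inv * (ord(c) - 65 - b)) % 26 + 65) for c in ct)
```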
Security of substitution ciphers: Although key spaces are typically huge, they can be easily cracked using
frequency analysis (e.g., counting frequent letters and digrams in the ciphertext and comparing them to
frequent letters, like E and T in English, and digrams, like TH in English).
A way to make frequency analysis more difficult is to use a homophonic substitution cipher, which replaces
each a (character from alphabet A) with a randomly chosen string from the set H(a). For example, if A = {x,
y} and H(x) = {00, 10} and H(y) = {01, 11}, the plaintext “xy” encrypts to one of 0001, 0011, 1001, 1011. The
cons of this technique are data expansion and more work for decryption.
Polyalphabetic substitution ciphers: They are block ciphers with block length t where the encryption of a message m of length n (m = m1 m2 … mn) under key e = (e1 e2 … et) is Ee(m) = c1 c2 … cn, where ci = e_(i mod t)(mi) for i = 1, 2, …, n. In Vigenère ciphers, e_(i mod t)(mi) = (mi + kj) mod |A|, with j = i mod t. In most examples, t = 3.
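The Vigenère rule above can be sketched directly, assuming an uppercase A–Z alphabet and an uppercase key:

```python
# Vigenère cipher sketch: c_i = (m_i + k_(i mod t)) mod 26 over A-Z.
# Encrypt with sign=1, decrypt with sign=-1.
def vigenere(text, key, sign=1):
    t = len(key)
    return "".join(
        chr((ord(c) - 65 + sign * (ord(key[i % t]) - 65)) % 26 + 65)
        for i, c in enumerate(text)
    )
```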
Malleability of one-time pads: In the corollary, C1 corresponds to E(K, M1) and F is the function that xors its input C1 with (M1 xor M2). Then F(C1) = K xor M2, which corresponds to C2 = E(K, G(M2)) with G being the identity function (like M2 xor 0). In other words, an attacker who knows M1 can transform the ciphertext of M1 into a valid ciphertext of any chosen M2 without knowing K.
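The malleability argument can be made concrete; a toy demonstration with byte strings (the key and messages are arbitrary examples):

```python
# One-time-pad malleability: from C1 = K xor M1, an attacker who knows M1
# can forge C2 = C1 xor (M1 xor M2) = K xor M2 without ever learning K.
def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

key = b"\x13\x37\xc0\xff\xee"     # illustrative pad
m1 = b"PAY10"                     # legitimate message
m2 = b"PAY99"                     # attacker's chosen message
c1 = xor(key, m1)                 # legitimate ciphertext
c2 = xor(c1, xor(m1, m2))         # attacker's forgery
```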
Transposition cryptography: For block length t, let K be the set of permutations of {1…t}. For each e ϵ K and m ϵ M (where m is a block of length t of the original message), Ee(m) = m_e(1) m_e(2) … m_e(t). To decrypt the message, it’s necessary to apply the inverse permutation to the ciphertext. Letters are left unchanged, so one can still exploit frequency analysis on digrams, trigrams, words, etc. In the example below, t = 4.
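A minimal sketch of block transposition with an explicit inverse permutation (the permutation below is an arbitrary example):

```python
# Transposition cipher sketch for block length t:
# E(m) = m[e(0)] m[e(1)] ... m[e(t-1)]; decryption uses the inverse permutation.
def transpose(block, perm):
    return "".join(block[i] for i in perm)

def inverse(perm):
    inv = [0] * len(perm)
    for i, p in enumerate(perm):
        inv[p] = i          # position p of the ciphertext came from position i
    return inv
```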
Composite ciphers: Ciphers based on just substitutions or just transpositions are not secure, and ciphers can be combined. However, two substitutions (or two transpositions) in sequence amount to a single, merely more complex, substitution (or transposition). A substitution followed by a transposition, though, makes a new, harder cipher that is difficult to apply by hand, which led to the invention of cipher machines (such as Enigma).
Symmetric Cryptography
Block and stream ciphers:
Block ciphers process messages in blocks (64 bits or more), each of which is then en/decrypted. It’s like a substitution on a very large alphabet. Many current ciphers are block ciphers, since they have a broader range of applications.
Stream ciphers process messages a bit or byte at a time when en/decrypting.
Ideal block cipher: Would need a table of 2^n entries for an n-bit block, and hence a “key” size of n × 2^n bits. A total of (2^n)! transformations are possible. Even though the statistical information of the plaintext is lost, this cipher is infeasible in practice.
Substitution-Permutation Ciphers: S-P nets are based on the two primitive cryptographic operations seen
before when differentiating substitution and transposition ciphers:
Substitution (S-box): Confuse input bits. Confusion makes relationship between ciphertext and key
as complex as possible.
Permutation (P-box): Diffuse bits across S-box inputs. Diffusion dissipates statistical structure of
plaintext over bulk of ciphertext.
Encryption:
To be able to decrypt ciphertext and recover messages efficiently, we approximate the ideal block cipher using the concept of a product cipher (a combination of simple ciphers such that the result is cryptographically stronger than any of the component ciphers).
In practice, we develop a block cipher with a key length of k bits and a block length of n bits, allowing a total of 2^k possible transformations, rather than the (2^n)! transformations available with the ideal block cipher.
This cipher implements the S-P net concept by partitioning the initial input block into two halves (L0 and R0) and processing them through multiple rounds. Each round performs a substitution on the left data half based on a round function of the right half and the subkey (F and xor), and then a permutation that swaps the two halves. The round function F can be an S-P network, or any (not necessarily invertible) function.
Decryption:
Encryption and decryption are structurally identical, though the subkeys used during encryption at each
round are taken in reverse order during decryption.
Design parameters of a Feistel cipher:
block size
key size
number of rounds
subkey generation algorithm
round function
fast software en/decryption
ease of analysis
DES: Data Encryption Standard is the first encryption standard, and it’s heavily
used in banking applications. It’s a block cipher, encrypting 64-bit blocks. It uses
56-bit keys expressed as 64-bit numbers (the 8 remaining bits are for parity
checking).
Its overall form is composed of a 16-round Feistel cipher and a key scheduler, which uses an algorithm that derives the subkeys Ki from the original key K. It performs an initial permutation at the start and the inverse permutation at the end, and f consists of two permutations and an S-box substitution.
Overall form vs A single round (i is the number of the round, starting from 1)
DES presents a strong avalanche effect, a desirable property of cryptographic algorithms wherein a small change in an input (either the key or the plaintext) causes a drastic change in the output (ciphertext). In this case, a change of one input or key bit results in changing approximately half of the output bits.
Security of DES:
Key size: Even though the number of possible keys is large, it’s now possible to brute-force them in a few hours.
Analytic attack: Utilise some deep structure of the cipher by gathering information about encryptions. Attackers can eventually recover some of the sub-key bits and then exhaustively search for the rest.
Timing attack: Use knowledge of the implementation of the cipher to derive information about some or all subkey bits, specifically the fact that calculations can take varying times depending on the values of their inputs.
AES: Based on the Rijndael cipher, it’s now used worldwide and supersedes DES. It’s based on substitution-
permutation network but unlike DES, AES does not use a Feistel network. While AES has a fixed block size of
128 bits, and a key size of 128, 192, or 256 bits, Rijndael works with block and key sizes that may be any
multiple of 32 bits, both with a minimum of 128 and a maximum of 256 bits.
The key size used for an AES cipher specifies the number of transformation rounds that convert the plaintext into ciphertext: 10 rounds for 128-bit keys, 12 rounds for 192-bit keys, and 14 rounds for 256-bit keys.
Modes of operation
There are different modes of operation to use a block cipher when the messages exceed block-length:
Electronic Code Book (ECB): The message is split into m blocks and each block is encrypted individually. The limitations are:
o Information leak: identical plaintext blocks map to identical ciphertext blocks.
o Limited integrity: decryption doesn’t indicate whether ciphertext blocks have been changed, deleted, or duplicated.
Cipher-Block Chaining (CBC): The cipher input is the xor of the plaintext block with the preceding ciphertext block. The first block is chained with an initialization vector (C0).
Properties:
o Identical plaintext blocks are mapped to different ciphertexts.
o Chaining dependencies: Cj depends on all preceding plaintexts, hence encryption can’t be parallelized.
o Self-synchronizing: if an error occurs (changed bits, dropped blocks) in Cj but not in Cj+1, then Cj+2 is correctly decrypted.
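The ECB leak and the CBC fix can be seen side by side; a sketch with a hypothetical toy block cipher E (a byte-wise bijection standing in for DES/AES, with the key folded into E and a 4-byte block size):

```python
# Toy bijective "block cipher" (7 is coprime with 256, so this is invertible).
def E(block):
    return bytes((b * 7 + 3) % 256 for b in block)

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def ecb_encrypt(blocks):
    return [E(m) for m in blocks]          # each block encrypted independently

def cbc_encrypt(blocks, iv):
    out, prev = [], iv
    for m in blocks:
        prev = E(xor(m, prev))             # C_j = E(M_j xor C_{j-1})
        out.append(prev)
    return out
```

With two identical plaintext blocks, ECB produces identical ciphertext blocks (the information leak), while CBC does not.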
Stream Ciphers: Same idea as the Vernam cipher, but using a pseudorandom generator (in place of a truly random one) with the seed as the key (RC4 is an example).
Stream vs block ciphers: Stream ciphers are usually faster and easier to implement and, with a properly designed pseudorandom number generator, a stream cipher can be as secure as a block cipher of comparable key length. But while block-cipher keys can be reused, with stream ciphers, if two plaintexts are encrypted with the same key, the xor of the two ciphertexts equals the xor of the two plaintexts (as seen before).
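The key-reuse weakness can be checked directly; a sketch with an illustrative fixed keystream (in practice it would come from the PRG seeded with the key):

```python
# Keystream reuse: C1 xor C2 = (M1 xor KS) xor (M2 xor KS) = M1 xor M2,
# so two ciphertexts under the same keystream leak the xor of the plaintexts.
def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

keystream = b"\x42\x99\x17\x03\x5a"   # illustrative; normally PRG output
m1, m2 = b"HELLO", b"WORLD"
c1, c2 = xor(m1, keystream), xor(m2, keystream)
```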
Placement of encryption: Encryption can be placed at various layers of the OSI Reference Model. As we move higher, less information is encrypted and it’s more secure, but it’s also more complex, with more entities and keys.
End-to-end encryption: At layers 3, 4, 6 and 7. When using this kind of encryption, headers must be left clear so the network can correctly route information.
Link encryption: At layers 1 or 2. End-to-end encryption can protect contents but cannot prevent traffic flows between parties from being monitored; link encryption does so, although overall traffic volumes in networks and at end-points are still visible. Traffic padding can further obscure flows, but at the cost of continuous traffic.
The key distribution problem: Symmetric schemes require both parties to share a common secret key. The
issue is how to securely distribute this key. Given parties A and B have various key distribution alternatives:
Session key: used for encryption of data between users for one logical session then discarded
Master key: used to encrypt session keys shared by user and key distribution centre (KDC)
Weakness: B cannot check freshness of Ks (3). If Ks is compromised, then it can be replayed by the attacker.
Hierarchies of KDC’s required for large networks, but must trust each other
Session key lifetimes should be limited for greater security
Use of automatic key distribution on behalf of users, but must trust system
Use of decentralized key distribution
Controlling key usage
Public-Key Cryptography
Public-key cryptography was born in the 1970s, the child of two problems: the key distribution problem
and the problem of signatures. The first is solved using public and private keys. The public key is a
trapdoor one-way function, and the private key is its inverse.
A one-way function is a function f that is “easy” to compute while its inverse f⁻¹ is “hard”. A trapdoor one-way function is a one-way function in which both y = f(x) and x = f⁻¹(y) are easy to compute if x or y and the key k are known, but f⁻¹(y) = x is “hard” if k is not known. k is called the trapdoor information.
Secrecy (confidentiality): If A encrypts a message using B’s public key, only B can decrypt it, using its private key.
Authentication (and non-repudiation): B can verify that a message from A is authentic by decrypting the message using A’s public key, since A is the only party that could have used the private key to encrypt the message.
Public-key Cryptanalysis:
Brute-force attacks: A countermeasure is using large keys, but there is a trade-off, as the complexity of encryption/decryption may not scale linearly with the length of the key. In practice, public-key encryption is confined to key management and digital signature.
Computing private key from public key: There is no proof that this attack is unfeasible.
Probable-message attack: A relatively short message m is encrypted using a public key. An attacker
can try to encrypt with the public key all possible plaintexts and when the result matches with the
ciphertext, they have discovered the plaintext. A countermeasure is appending some random bits
to m.
Conversely we can determine the greatest common divisor by comparing their prime factorizations and
using least powers.
Modular arithmetic: We write the remainder r of the integer division of a by n as a mod n. The integers a and b are congruent modulo n if a mod n = b mod n (written a ≡n b). Two properties that were used in the exams:
The RSA algorithm: The most popular public-key algorithm, named after its inventors Rivest, Shamir and Adleman; its security comes from the difficulty of factoring large numbers. In fact, keys are functions of a pair of large (≥ 100 digits) prime numbers.
Useful concepts:
Break the message into blocks of length floor(log2(n)) bits, each representing a number Mi smaller than n (example: n = 16, blocks of length 4 that contain numbers from 0 to 15).
Compute Ci = Mi^e mod n.
It is possible to find values of e, d and n such that M = M^(ed) mod n for all M < n.
It’s relatively easy to calculate C and M using their respective formulas.
It’s unfeasible to determine d given e and n.
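The whole key-generation/encryption/decryption cycle can be walked through with tiny illustrative primes (far too small for real use, where primes have hundreds of digits):

```python
# Toy RSA walk-through with illustrative parameters.
p, q = 61, 53
n = p * q                    # modulus, part of both keys
phi = (p - 1) * (q - 1)      # phi(n) = (p-1)(q-1)
e = 17                       # public exponent, relatively prime with phi
d = pow(e, -1, phi)          # private exponent: e*d = 1 mod phi (Python 3.8+)

M = 65                       # a message block, M < n
C = pow(M, e, n)             # encryption: C = M^e mod n
```

Knowing the factorization p, q is exactly what makes computing phi, and hence d, easy here.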
RSA security: Computing the secret d given (e, n) is as difficult as factorization (if we can factor n as p·q, we can calculate φ(n) and therefore also d). There is no known polynomial-time algorithm but, given progress in factoring, n should have at least 1024 bits. There is no proof that computing M given C and (e, n) is unfeasible.
RSA is malleable. For this reason, RSA is commonly used together with a padding method.
Public-key encryption algorithms can be used to support symmetric cryptography, which is faster, by distributing the secret key. Two approaches seen:
Secret key distribution with RSA: We encrypt the key k using the RSA algorithm and then concatenate it with the message encrypted with a symmetric algorithm that uses the key k.
Strengths of Diffie-Hellman:
Weakness of Diffie-Hellman: Keys are unauthenticated, and thus it is vulnerable to the following man-in-the-middle attack: the attacker shares secret key K2 with A and K1 with B, while A and B think they are communicating with each other. The attacker does so by intercepting Ya and sending Yd1 to B instead, and by intercepting Yb and sending Yd2 to A instead. B calculates its key as Yd1^Xb and the attacker calculates the same key as Yb^Xd1. The same happens to A, which calculates its key as Yd2^Xa while the attacker calculates it as Ya^Xd2. A countermeasure is to sign the exponents, but this requires shared keys.
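The honest Diffie-Hellman exchange itself can be sketched with toy parameters (p = 23, g = 5 are illustrative; real deployments use far larger primes):

```python
# Diffie-Hellman sketch: each party publishes Y = g^X mod p, keeps X secret,
# and computes the shared key as (other party's Y)^X mod p.
p, g = 23, 5
xa, xb = 6, 15                          # private exponents of A and B
ya, yb = pow(g, xa, p), pow(g, xb, p)   # exchanged public values
ka = pow(yb, xa, p)                     # A computes (g^xb)^xa mod p
kb = pow(ya, xb, p)                     # B computes (g^xa)^xb mod p
```

Both sides obtain g^(xa·xb) mod p; the man-in-the-middle attack above works precisely because nothing ties ya and yb to A and B.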
Message Authentication and Digital Signatures
Message authentication is concerned with:
1. Message encryption: The ciphertext of the entire message serves as its authenticator.
2. Message Authentication Code (MAC): A function of the message and a secret key that produces a
fixed-length value that serves as authenticator.
3. Cryptographic Hash function: A function that maps a message of any length into a fixed-length
hash value, which serves as authenticator.
1. Message encryption:
(a) Provides confidentiality because only A and B know the key K. K also provides a degree of authentication, since the message could only come from A and has not been altered in transit by C, but it requires some formatting/redundancy. It doesn’t provide signature, though, since A could deny sending the message, given that B can forge one.
(b) Provides confidentiality since only B can open the message, but anyone could have used B’s public key to encrypt it, so it doesn’t provide authentication.
(c) Provides authentication and signature, since only A could have used its private key to encrypt the message, it has not been altered in transit by C, and it requires some formatting/redundancy. In addition, any party can verify its authenticity by using A’s public key.
(d) Provides both confidentiality and authentication. See points (b) and (c).
Plaintext needs some structure that is easily recognised but cannot be replicated without the encryption
function to automatically determine if incoming ciphertext decrypts to intelligible plaintext, such as
appending a checksum to the message before encryption (|| means concatenation):
2. Message Authentication Code (MAC):
A MAC function accepts a variable-size message M and a secret key K as input and produces a fixed-size output C(M, K), which is called message authentication code (MAC) or cryptographic checksum. It is a many-to-one function: potentially many messages have the same MAC, but finding them is very difficult.
The Data Authentication Algorithm is based on DES and splits the input message into 64-bit blocks. It’s one of the most widely used MACs, but security weaknesses have been discovered.
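A widely used concrete MAC today is HMAC, which fits the same C(M, K) mold (HMAC is not the DES-based algorithm above; it is named here as a modern stand-in). A sketch using the Python standard library:

```python
# MAC sketch: HMAC-SHA256 as a concrete instance of C(M, K).
import hmac
import hashlib

def mac(key, message):
    # Fixed-size tag (32 bytes for SHA-256) over a variable-size message.
    return hmac.new(key, message, hashlib.sha256).digest()
```

Any change to the message (or the key) yields a different tag, which the receiver detects by recomputing the MAC.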
3. Cryptographic Hash function:
H can be applied to a block of data of any size and produces a fixed-length output (like a MAC).
H(x) is relatively easy to compute for any given x.
One-way property: For any given value y, it is computationally infeasible to find x such that H(x) = y. It’s important in message authentication techniques involving the use of a secret value.
Weak collision resistance or 2nd pre-image resistance: For any value x, it’s computationally infeasible to find y ≠ x such that H(y) = H(x). It prevents forgery when an encrypted hash code is used. It’s also useful to protect password files (for password p, store H(p) instead of p, or even better the pair (s, H(s||p)) to protect against dictionary attacks, since if p = p’ then H(p) = H(p’)).
Strong collision resistance or simply collision resistance: It is computationally infeasible to find any pair (x, y) such that H(x) = H(y). It’s useful against the birthday attack.
S/Key: One-time password system developed for logging in from dumb terminals and untrusted public computers without typing a long-term password. Passwords are printed or computed by a portable device and, because they are used only once, they are useless to password sniffers.
The initial secret W is generated, and the hash function is applied to it n times, creating a set of n one-time passwords. W is discarded, while H^n(W), the nth password, is the only password stored on the server. The user instead uses all passwords from n−1 down to 1, never using the nth password. That’s because the server, to authenticate the user, applies the hash function once to the password the user provides and compares the result with the password it has stored. In the first round, the server has stored the nth password, H^n(W), while the user provides H^(n−1)(W).
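The hash chain can be sketched directly (the seed is hypothetical, and SHA-256 stands in for the hash function; the original S/Key used older hashes):

```python
# S/Key hash-chain sketch: the server stores only H^n(W); the user's i-th
# login presents H^(n-i)(W), which the server hashes once and compares.
import hashlib

def H(x):
    return hashlib.sha256(x).digest()

def chain(w, k):
    for _ in range(k):
        w = H(w)
    return w

n = 100
w = b"initial secret W"        # illustrative seed, discarded after setup
stored = chain(w, n)           # server-side value H^n(W)
otp1 = chain(w, n - 1)         # first one-time password the user presents
```

After a successful check H(otp1) == stored, the server replaces its stored value with otp1, ready for the next password H^(n−2)(W).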
Happy birthday! Let our hash function H have 2^m possible outputs. H must be applied to about 2^(m/2) inputs so that the probability of a collision is greater than 0.5. Proof below:
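The bound can also be sanity-checked numerically with the standard approximation (a sketch, not the formal proof):

```python
# Birthday bound: with 2^m possible outputs, the collision probability among
# k random inputs is approximately 1 - exp(-k*(k-1) / 2^(m+1)),
# which crosses 0.5 around k = 2^(m/2).
import math

def collision_prob(k, m):
    return 1 - math.exp(-k * (k - 1) / (2.0 * 2 ** m))
```

For m = 64, the probability is still below 0.5 at k = 2^32 inputs but well above it at k = 2^33, matching the 2^(m/2) rule of thumb.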
It divides the original message into L blocks from Y0 to YL−1. Some examples of protocols that use this scheme are MD5 and SHA (the first has known weaknesses while the second is assumed secure).
Digital signature: Message authentication protects two parties who exchange messages from any third
party, but does not protect the two parties against themselves, such as in the point (a) of message
encryption using a symmetric algorithm.
Provide the means to verify the author and the date and time of the signature.
Authenticate the contents at the time of the signature.
Be verifiable by third parties, to resolve disputes.
Direct Digital Signature: The destination must know the public key of the source, because the entire message (as in point (c) of message encryption, using the private key to authenticate a party) or the hash of the message will be encrypted using the private key of the source. The security of the scheme depends on the security of the sender’s private key. Every signed message should contain a timestamp (date and time), and compromised keys should be promptly reported to a central authority. But an opponent can steal a private key at time T and then forge messages stamped with a time before T.
Arbitrated Digital Signature: The problem associated with direct digital signatures can be
addressed by using an arbiter (trusted third party). The main role of the arbiter is to add the
timestamp.
Secure mail
PGP: Pretty good privacy.
(a): The hash H of the message M is encrypted using A’s private key and then concatenated with the original message. The zip of the two combined is sent; B unzips it, computes the hash of the received message, and compares it with the decrypted hash.
(b): The message is zipped to make it smaller. Then it’s encrypted with a symmetric algorithm under the key Ks. The key Ks is encrypted as well, using B’s public key, and then concatenated with the encrypted message. B decrypts Ks using its private key, then decrypts the message and unzips it.
(c): First, A authenticates as seen in (a), but instead of sending the concatenation of the encrypted hash and the message, it zips them both, encrypts the result with a symmetric algorithm, and sends the key Ks encrypted with a public-key algorithm.
A summary of the above diagrams: the message digest corresponds to the hash H, and the rectangular blocks correspond to a database in which the private and public keys of A and B are stored.
Public Key Infrastructure (PKI)
Digital certificates are digital objects that serve for key distribution and authentication (with non-
repudiation). They play a key role in securing the Web.
Symmetric vs public-key encryption: The former is faster and uses a single short key to encrypt and decrypt the message, while the latter uses the public key of the receiver to encrypt, and the receiver uses its private key to decrypt. Public-key keys are longer, and it’s a computationally expensive mechanism.
Man-in-the-middle attack: It’s possible for an attacker to impersonate someone else by telling A that their own public key is B’s public key, and then communicating with B using B’s real public key as if they were A.
The Public Key Infrastructure (PKI) provides the technical and legal elements that enable the secure usage
of the Web.
But what about the security of the CA’s public key? The client gets it from a message from the CA that is self-signed using its private key and contains its public key. In this way, the client can verify whether the key used to sign a message they receive is correct or not.
Domain Validation (DV) certificates are by far the most common type. The only validation the CA is
required to perform is to verify that the requester has effective control of the domain. The CA is
not required to attempt to verify the requester's real-world identity. This is not sufficient to avoid
phishing attacks, for example, since an attacker could use a domain that is similar to the website
they are trying to impersonate in order to mislead users into trusting them.
Organization Validation (OV) and Extended Validation (EV) certificates, where the process is
intended to also verify the real-world identity of the requester.
Security Protocols
Basic notions:
A protocol consists of a set of rules (conventions) that determine the exchange of messages between two or more principals; in short, a distributed algorithm with emphasis on communication. Security (or cryptographic) protocols use cryptographic mechanisms to achieve security objectives. Designing them is analogous to programming Satan’s computer, since the channels are untrustworthy.
Notation
Example: In a remote keyless system, the first security goal is that the receiver (R) sends unlock command
to actuator only if car owner previously pressed unlock button on key fob (KF).
Attempts:
1. KF -> R: unlock, SN: Attacker can overhear SN and replay it subsequently. The secrecy of SN is
compromised, and R cannot check authenticity.
2. KF -> R: {unlock, SN}k: Even though R now can check authenticity since only KF could have
encrypted the message using the common key k, attacker can still overhear and replay the
encrypted request despite not being able to access SN anymore.
3. KF -> R: {unlock, T}k: The original security goal is modified in order to meet the freshness
requirement by substituting the word “previously” with “recently”. In fact, the timestamp T used
prevents replay attack, but it requires synchronized clocks on KF and R. There is no need anymore
to use SN because KF can be identified by shared key.
4. R sends KF a challenge, a nonce (number used only once), which prevents replay attacks, since it works in a similar way to a timestamp (it expires after a certain time). Synchronized clocks are not needed anymore, since there is no timestamp, but additional steps are necessary.
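Attempt 4 can be sketched as a challenge-response exchange (all names are illustrative; HMAC stands in here for the shared-key encryption {unlock, N}k of the notes):

```python
# Challenge-response sketch: R issues a fresh nonce N; KF binds the unlock
# command to N under the shared key, so a recorded reply cannot be replayed.
import hmac
import hashlib

K = b"key shared by KF and R"   # illustrative shared key

def kf_respond(nonce):
    # Key fob's reply to R's challenge.
    return hmac.new(K, b"unlock" + nonce, hashlib.sha256).digest()

def r_verify(nonce, reply):
    # Receiver recomputes the tag; a replayed reply fails for a new nonce.
    expected = hmac.new(K, b"unlock" + nonce, hashlib.sha256).digest()
    return hmac.compare_digest(reply, expected)
```

A reply recorded by an eavesdropper is useless later, because R will have issued a different nonce by then.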
Protocol notation:
Dolev-Yao attacker model: If the protocol can withstand an active attacker, about whom the only optimistic assumption is that they don’t know how to break the cryptography, then it can withstand all other types of attackers, because they’re all weaker. An active attacker can intercept and read all messages, and can also decompose and build messages.
Kinds of attacks:
Examples of protocols:
1. NSPK
2. Otway-Rees
3. Andrew Secure RPC
4. Denning & Sacco
1. Needham-Schroeder Public Key Authentication Protocol (NSPK): Its goal is mutual (principal) authentication. Alice sends an identifier A and a nonce encrypted using Bob’s public key. Bob decrypts the message and, since he’s able to do that, he must be Bob. After that, Bob sends a message with his nonce to Alice, encrypted using her public key. Since Alice can read the nonce Bob sent and send it back, she must be Alice. Recall that principals can be involved in multiple runs; the goal should hold in all interleaved protocol runs.
2. Otway-Rees Protocol:
It’s a server-based protocol providing authenticated key distribution (with key authentication and key
freshness). The letter I is an integer, the protocol run identifier.
In M1 and M2, A and B prepare information to give to the server S, encrypting their nonces using the keys they each share with the server.
In M3 and M4, S sends back their nonces and the session key A and B can use to communicate, encrypted under the keys they share with it. B receives both his and A’s message in M3, so he needs to forward A’s message to her in M4.
Attack 1: Mallory, the attacker, uses a reflection/type flaw attack. A type flaw is when A sends a message to B and B accepts it as valid, but interprets the bits differently than A intended. In this case, Mallory intercepts the message from A to B in step M1 and reflects the encrypted part back to A (concatenated with I), omitting steps M2 and M3. A doesn’t realize the message is wrong, because the length of |I, A, B| is the same as |Kab|, so she ends up accepting I, A, B as the key.
Attack 2: Like the previous attack but even more powerful; in this one, Mallory plays the role of the server by reflecting the encrypted components of M2, with the identifier I, back to B in step M3. In this way, A and B accept a wrong key and Mallory can decrypt their subsequent communication. Key authentication fails in this case.
3. Andrew Secure RPC Protocol: Exchanges a fresh, authenticated, secret, shared key for a new session between two principals sharing a symmetric key (plus a nonce for future usage). Here a man-in-the-middle attack is possible as well: Mallory can record M2, then intercept M3 and send M2 back to A. In this way, A is fooled into thinking that Na+1 is the session key, so authentication is violated, even though secrecy is not, contrary to attack 2 of the previous example.
4. Denning & Sacco Protocol: The server S provides the certificates C for A and B. A then sends B the certificates together with an authenticated message (signed with A's private key) encrypted under B's public key. This message contains a session key for A and B and a timestamp that limits the key's validity.
Man-in-the-middle attack: Alice shares a session key with Charlie using the protocol, but Charlie then runs the protocol with Bob as well, impersonating Alice by replaying her last message. In this way, Bob ends up using Charlie and Alice's session key to encrypt his messages, so the key is neither authenticated nor secret. To avoid this attack, simply put the principals' names in the last step.
Kerberos: It's a protocol for authentication/access control in client/server applications. In Greek mythology, Kerberos is the 3-headed dog guarding the entrance to Hades. Modern Kerberos was intended to have three components guarding a network's gate: authentication, accounting, and audit. The last two heads were never implemented.
Kerberos' architecture (remember that authentication determines the identity of a given party, whilst authorization determines who can do what inside a system):
The Kerberos authentication protocol is inspired by NSSK; the differences are that the nested encryption in step 2 is avoided and that timestamps are used instead of nonces to ensure key freshness (in particular, to solve the step 3 vulnerability of the NSSK protocol).
Authentication phase:
Alice logs onto a workstation and requests network resources from the KAS. The KAS accesses its database and sends Alice a session key, KA,TGS, which has a lifetime of several hours (the user is logged out when it expires), and an encrypted ticket AuthTicket that will be presented to the TGS in the next phase. A's password is used to decrypt the results (in fact A's key with the KAS is derived from the user's password, e.g., using a hash function) and is then forgotten, while the ticket AuthTicket and the session key KA,TGS are saved.
Authorization phase:
A presents the ticket received in message 2 to the TGS together with a new authenticator, which will be stored by the server. The authenticator has a lifetime of only a few seconds in order to prevent replay attacks. The TGS issues Alice a new session key KAB, with a lifetime of a few minutes, to communicate with the network resource B, and a new ticket ServTicket that will be presented to the resource B in the next phase.
Service phase:
Alice presents to the resource B the ServTicket received in message 4 and a new authenticator, for the same reason as before. A might send other information too. B then replies adding 1 to the timestamp, thereby authenticating the service.
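The timestamp checks described above can be sketched as follows. The lifetime constant and the replay cache are illustrative, not real Kerberos parameters; the point is that a verifier accepts an authenticator only if it is both recent and never seen before.

```python
# Sketch of timestamp-based freshness as Kerberos uses it instead of nonces.
import time

AUTHENTICATOR_LIFETIME = 60            # a short lifetime, as in the notes
seen_authenticators = set()            # replay cache kept by the verifier

def accept_authenticator(client, timestamp, now=None):
    """Accept only fresh, never-before-seen authenticators."""
    now = time.time() if now is None else now
    if abs(now - timestamp) > AUTHENTICATOR_LIFETIME:
        return False                   # too old (or clocks too far off): reject
    if (client, timestamp) in seen_authenticators:
        return False                   # replayed within its lifetime: reject
    seen_authenticators.add((client, timestamp))
    return True

t = 1_000_000.0
assert accept_authenticator("alice", t, now=t)            # fresh: accepted
assert not accept_authenticator("alice", t, now=t + 1)    # replay: rejected
assert not accept_authenticator("alice", t, now=t + 600)  # stale: rejected
```

Note that this scheme, unlike nonces, requires the clocks of the parties to be roughly synchronized.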
Limitations of Kerberos:
Logical functionality is built in a layered way (the TCP/IP protocol reference model, for example, or the OSI model, which adds the presentation and session layers and separates out the physical layer):
the i-th layer of one node communicates with the i-th layer of another node, each using services provided by its lower layers. Headers/trailers are added to (or stripped from) packets as they traverse the protocol stack.
Internet: A confederation of networks using the TCP/IP protocols. Neither protocol provides security: there is no authentication or confidentiality, since addresses can be faked and payloads can be read and modified. Different subnetworks may not be trustworthy, and 15+ hops for a packet from source to destination is common. How do we secure communication/applications?
Transmission Control Protocol (TCP): establishes reliable communication between systems across a network; that is, either all data is delivered without loss, duplication, or reordering, or the connection is terminated.
Internet Protocol (IP): delivers data across a network. Packet headers specify source and destination addresses. The protocol computes a path and forwards packets over multiple links from source to destination.
In most implementations of IP stacks, the transport layer and below are implemented in the operating system, and everything above the transport layer in user processes.
Layers at which to implement security:
Application (or end-to-end): No assumptions needed about the security of the protocols used, routers, etc., and security decisions can be based on user-ID, data, etc. The problem is that applications must be designed "security aware".
Between application layer and transport layer (SSL): No modification to the OS and minimal changes to applications, but it might have problems interacting with TCP (SSL may reject data that TCP accepts; in that case, SSL must drop the connection).
Network layer (IPsec): Security without modifying applications, but it only authenticates IP addresses, with no user authentication. More is possible, but it requires changing the API and the applications.
IP security: Provides a secure channel for all applications through encryption and/or authentication of traffic, plus the ability to do filtering based on a policy database (just as if there were a firewall between the two ends). It's installed in operating systems for end-to-end security and in security gateways (firewalls or routers); the latter are used for implementing Virtual Private Networks (VPNs).
Authentication Header (AH): protects the integrity and the authenticity of IP datagrams (but not their confidentiality).
Encapsulating Security Payload (ESP): protects confidentiality and optionally also integrity.
Key Management (IKE): the Internet Key Exchange protocol.
A security association (SA) is a one-way relationship between sender and receiver defining the security services to apply. It specifies things like the authentication algorithm (AH), the encryption algorithm (ESP), keys, etc. It is identified by fields in the AH/ESP headers, including the Security Parameters Index, together with the destination address. An SA is established using IKE, or possibly some other protocol. Implementations store SAs in a security association database.
Authentication header (AH): An extra header between layers 3 and 4 (IP and TCP) providing the destination with enough information to identify the SA. AH guarantees integrity only, but it also protects part of the IP header. The sequence number is initialized to zero and incremented by the sender for each packet.
Transport mode: AH is inserted after the IP header, before the IP payload. A MAC is taken of the entire packet (except for mutable fields). It provides end-to-end protection between IPsec-enabled systems.
Tunnel mode: The entire original packet is authenticated (behind a new outer IP header). The inner header carries the ultimate source/destination addresses, whilst the new outer header is also protected (except mutable fields) and may contain different IP addresses.
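The AH sequence number mentioned above lets the receiver reject replayed packets. A minimal sketch of a receiver-side sliding-window check follows; the window size and the set-based bookkeeping are illustrative (IPsec's actual anti-replay service, specified in RFC 4302, uses a fixed-size bitmap).

```python
# Receiver-side anti-replay check over AH-style sequence numbers.
WINDOW = 64

class ReplayWindow:
    def __init__(self):
        self.highest = 0          # highest sequence number seen so far
        self.seen = set()         # sequence numbers already accepted

    def accept(self, seq):
        if seq <= self.highest - WINDOW:
            return False          # too old: fell out of the window
        if seq in self.seen:
            return False          # duplicate: replayed packet
        self.seen.add(seq)
        self.highest = max(self.highest, seq)
        return True

w = ReplayWindow()
assert w.accept(1) and w.accept(3)
assert not w.accept(3)                       # replayed packet rejected
assert w.accept(2)                           # modest reordering tolerated
assert w.accept(200) and not w.accept(100)   # 100 fell out of the window
```
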
Encapsulating Security Payload (ESP): Header specifies encryption and optional authentication.
Denial of Service (DoS) attack against Diffie-Hellman: The attacker sends a series of request packets, each with a different spoofed source IP address, so that the server R must process each request, performing expensive exponentiations and storing state (the y values).
Cookies against DoS: I and R send "cookies" CI and CR to their partner, so that an attacker must be reachable at an address and complete a cookie exchange for each address it spoofs (which is complicated).
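One way the responder can issue such cookies without storing per-request state is to derive them from the claimed source address with a keyed hash, so only a peer that actually receives traffic at that address can echo the cookie back. This is a sketch of the idea; the key handling and address format are illustrative.

```python
# Stateless anti-DoS cookie: derived from the source address with an HMAC,
# so the responder keeps no state until the cookie is echoed back.
import hmac, hashlib, os

SERVER_SECRET = os.urandom(32)   # would be rotated periodically in practice

def make_cookie(src_addr: str) -> bytes:
    return hmac.new(SERVER_SECRET, src_addr.encode(), hashlib.sha256).digest()

def check_cookie(src_addr: str, cookie: bytes) -> bool:
    return hmac.compare_digest(make_cookie(src_addr), cookie)

c = make_cookie("198.51.100.7")
assert check_cookie("198.51.100.7", c)      # echoed from the real address
assert not check_cookie("203.0.113.9", c)   # spoofed address: exchange fails
```

Only after a valid cookie returns does the server perform the expensive Diffie-Hellman exponentiation.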
HyperText Transfer Protocol (HTTP): It's an application-level protocol that transfers hypertext requests and information between servers and browsers, and it doesn't support sessions. The user navigates through URLs, and their side (the client) initiates all communication.
The server instead delivers data upon request of the client. The data can be static (e.g., HTML pages) or
dynamic (i.e., computed on demand by a web application).
Server-Side (Perl)
Client-Side (JavaScript)
Data is posted to the application through HTTP methods; this data is processed by the relevant script and the result is returned to the user's browser.
Three tier architecture: Application server on second tier and databases on third tier.
On each request, the client sends HTTP headers to the server. Normally headers are sent unencrypted unless HTTPS is used. Headers contain different types of information, including private information that, combined, allows service providers to track the user's behaviour reasonably well:
FROM: the user's email address; critical due to user tracking and address harvesting (spam).
AUTHORIZATION: contains authentication information, such as username and password. (In HTTP, "authorization" means "authentication"!)
COOKIE: a piece of data given to the client by the server and returned by the client to the server in subsequent requests.
REFERER: the page from which the client came, including search terms used in search engines (Google earns money from Nike if you access their website through it).
Cookies and privacy: Cookies can be used to track users, so privacy is attacked from many sides. Cookies are like medical reports: they are confidential information that is not supposed to be understood by the client; they are a means for the doctor to "restore the session". Neither the doctor/server nor the patient/client keeps the reports/cookies, the former because there are too many and the latter because the browser does that for them. There are techniques to prevent cookies from being stolen, like dropping a session if the location of the session id (the cookie) changes in a strange way (for example, Italy and then Singapore). Cookies with very long life spans are suspicious as well.
HTTP Authentication:
Basic authentication: It's widely used and is login/password based. The information is sent unencrypted, and the credentials are sent on every request to the same realm.
Digest authentication: It's seldom used. The server sends a nonce that is then hashed by the client together with the login/password (h(u || p || N)). The client sends only the cryptographic hash over the net.
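The h(u || p || N) idea above can be sketched directly. This follows the simplified scheme in the notes, not the full HTTP Digest algorithm (RFC 7616); the hash function and field concatenation are illustrative.

```python
# Simplified digest authentication: only a hash crosses the network, and
# the server-chosen nonce N makes each response unique.
import hashlib, secrets

def digest_response(user, password, nonce):
    return hashlib.sha256((user + password + nonce).encode()).hexdigest()

# Server side: issue a fresh nonce per challenge.
nonce = secrets.token_hex(16)

# Client side: prove knowledge of the password without sending it.
response = digest_response("alice", "s3cret", nonce)

# Server side: recompute from stored credentials and compare.
assert response == digest_response("alice", "s3cret", nonce)

# A captured response is useless once the server issues a new nonce.
assert digest_response("alice", "s3cret", secrets.token_hex(16)) != response
```
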
Unvalidated inputs: Many of the top 10 web application risks are attacks that take advantage of the fact that web applications use input from HTTP requests, which can be tampered with by anyone, making it possible to send unexpected data. Only server-side input validation can prevent these attacks.
Injection flaws: A special "unvalidated input" attack. The attacker tries to inject commands to be executed in the back-end system, which can be:
the underlying operating system (system commands);
the database servers (SQL commands);
the scripting languages used (e.g., Perl, Python).
Same-origin policy: Since JavaScript can access and modify information in the document (such as the DOM, cookies, or forms), access to read or write data is only granted to documents downloaded from the same site as the script. "Same" means the same protocol (HTTP or HTTPS), the same server name (same domain), and the same port.
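The (protocol, host, port) comparison above can be made concrete with a small sketch; the default-port table is an illustrative assumption.

```python
# Minimal same-origin check over the (scheme, host, port) triple.
from urllib.parse import urlsplit

DEFAULT_PORTS = {"http": 80, "https": 443}

def origin(url):
    parts = urlsplit(url)
    port = parts.port or DEFAULT_PORTS.get(parts.scheme)
    return (parts.scheme, parts.hostname, port)

def same_origin(a, b):
    return origin(a) == origin(b)

assert same_origin("http://example.com/a", "http://example.com:80/b")
assert not same_origin("http://example.com/", "https://example.com/")     # scheme differs
assert not same_origin("http://example.com/", "http://www.example.com/")  # host differs
assert not same_origin("http://example.com/", "http://example.com:8080/") # port differs
```
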
Cross-Site Scripting (XSS): JavaScript code can still be injected without violating the same-origin policy. One way to do so is to submit as input to a website a piece of code between <script> </script> tags; this is then sent back by the server and rendered as if it were part of the HTML code, and thus executed by the client's browser. This becomes a real problem if, instead of being merely reflected to the same user who injected it, the code is stored in a database and then shown to other users (as in a comment section, e.g., on YouTube).
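The standard defence is to escape user input when it is written into the page, so the injected tags render as inert text. A minimal sketch, with an illustrative page template and a hypothetical `steal` function in the payload:

```python
# Reflected XSS in miniature: raw user input embedded in HTML would execute;
# escaping it on output neutralizes it.
import html

def render_comment(comment, escape=True):
    body = html.escape(comment) if escape else comment
    return f"<p>{body}</p>"

payload = "<script>steal(document.cookie)</script>"

unsafe = render_comment(payload, escape=False)
assert "<script>" in unsafe        # a browser would execute this

safe = render_comment(payload)
assert "<script>" not in safe      # rendered as harmless text instead
assert "&lt;script&gt;" in safe
```
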
Cross-Site Request Forgery (CSRF): It's a type of attack that occurs when a malicious web site, email, blog, instant message, or program causes a user's web browser to perform an unwanted action on a trusted site for which the user is currently authenticated. The solution is to give the application strong control over whether the user intended to submit the given requests.
Clickjacking: It's a malicious technique that consists of deceiving a web user into interacting with something different from what the user believes she is interacting with. Since frames of a website can overlap (like advertisements) and the transparency of a frame can be specified, a malicious frame on top can be fully transparent so that the user can't see it.
Security on the server side (unvalidated input, broken authentication and session management, cross-site
scripting, injection flaws, denial of service, ...)
Secure Programming
A buffer is a contiguous region of memory storing data of the same type (for example, characters). A buffer overflow occurs when data is written past a buffer's end; the resulting damage depends on which adjacent memory is overwritten. In C, array indexing and pointer arithmetic are equivalent:
buf[0] = *(buf+0)
&(buf[1]) = buf + 1
buf[1] = *(buf+1)
Memory is named by virtual addresses: each process has its own virtual address space and cannot access memory in the space of another process. But there is still a problem: if the input is too long, then the saved ebp or the return address may be affected.
Where would a malicious attacker jump to? One common target is code that creates a (root) shell.
Where in memory does this code go? A common approach is to place the exploit code on the stack, usually within the very buffer that is overflowed. Alternatively, the attacker places the exploit code:
On the stack: in parameters or other local variables
On the heap: in some dynamically allocated memory region
In environment variables (on the stack)
Another alternative is to abuse existing code, for example, jumping to fragments of the program code or to library functions.
The return address must then point exactly to the exploit's entry point.
Defensive programming:
A canary is a random value (hard for an attacker to guess) or a value composed of different string terminators (CR, LF, Null, -1), placed on the stack, whose value is tested before returning.
Avoid unsafe library functions, e.g., strcpy, gets: Replace with safe variants, e.g., strncpy, fgets.
Always check bound of arrays when iterating over them.
Use a language that is type safe instead of C/C++, if possible.
Avoid buffers on the stack. Instead, use heap storage, e.g., allocate space with malloc(). As the return address is on the stack, it cannot be overwritten by a buffer overflow on the heap. Heap overflows are also a real problem: while they have no effect on control-flow integrity, they can violate data integrity.
Use Non-Executable Buffers, but consider that it has cons as well.
Compiler/hardware support for preventing overflows is available. It helps, but be aware of its limitations and overhead.
Security policy: Defines what is and what is not allowed in a system in terms of high-level rules or
requirements. It’s analogous to a set of laws and describes access restrictions (that is, the
relationship between subjects and objects). They can be grouped into three main classes:
o Discretionary (DAC): (authorization-based) policies control access based on the identity of
the requestor and on access rules stating what requestors are (or are not) allowed to do.
o Mandatory (MAC): policies control access based on mandated regulations determined by a
central authority.
o Role-based (RBAC): policies control access depending on the roles that users have within the
system and on rules stating what accesses are allowed to users in given roles.
DAC and RBAC policies are usually coupled with (or include) an administrative policy that
defines who can specify authorizations/rules governing access control.
Security model: Provides a formal representation of a security policy.
Security mechanism: Defines the low-level hardware and software functions that implement the security model.
Reference architecture:
After the user has been authenticated to the system, every request passes through the reference monitor, a trusted component that is tamper-proof (impossible to alter), non-bypassable (it must mediate all accesses), a security kernel (it must be confined to a limited part of the system), and small (so that rigorous verification methods are easier to apply).
Discretionary Access Control (DAC): The owner of a resource controls access to it at their discretion. Owners can usually transfer ownership to other users as well. It's flexible, but also open to mistakes, negligence, or abuse (like Trojan horses). It also requires that all system users understand and respect the security policy and understand the access control mechanisms.
Access Control Matrix Model: A simple framework for describing a protection system in terms of the privileges of subjects (users) on objects (data). It represents a finite relation AC, a subset of the cartesian product Subjects x Objects x Privileges, given as a matrix.
A protection state is a triple (S, O, M), where S is a set of subjects, O a set of objects and M a matrix defining privileges for each (s, o) ϵ S x O, where M(s, o) = {p ϵ Privileges | (s, o, p) ϵ AC}.
A starting state s0 = (S0, O0, M0) and a set of commands C determine a state-transition system.
State transitions are described by commands that transform one state into another by changing its parts.
These are the six primitive operations:
Access-control (authorization) list (ACL): uses lists to express the (column) view of the AC matrix for each object o. The owner has the sole authority to grant, revoke or decrease other users' access rights to F.
Capability list: the subject (row) view of the AC matrix. Capabilities are less common than ACLs because they are more difficult to manage.
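The relationship between the matrix, ACLs, and capability lists can be sketched concretely: an ACL is a column of the matrix and a capability list is a row. The subjects, objects, and privileges below are illustrative.

```python
# The AC relation as a matrix, with the ACL (per-object) and capability-list
# (per-subject) views derived from it.
AC = {
    ("ann", "file1"): {"read", "write"},
    ("bob", "file1"): {"read"},
    ("bob", "file2"): {"read", "write"},
}

def M(s, o):
    """Matrix entry M(s, o): the privileges s holds on o."""
    return AC.get((s, o), set())

def acl(o):
    """ACL view (column): who can do what to object o."""
    return {s: p for (s, obj), p in AC.items() if obj == o}

def capabilities(s):
    """Capability view (row): what subject s can do, per object."""
    return {obj: p for (subj, obj), p in AC.items() if subj == s}

assert M("bob", "file1") == {"read"}
assert acl("file1") == {"ann": {"read", "write"}, "bob": {"read"}}
assert capabilities("bob") == {"file1": {"read"}, "file2": {"read", "write"}}
```
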
DAC example
Unix provides a suitable ACL-based mechanism. It controls access per object (file, directory, ...) using the owner/group/other permission scheme. Objects are assigned a single user (owner) and a single group (normally the group of the directory containing them); these can be changed using the chown and chgrp commands, respectively. Each user can be assigned to multiple groups using useradd. Permission bits can be assigned to objects by their owners or by the administrator (root) using chmod.
Not all policies can be directly mapped onto this mechanism, since it supports only limited delegation of rights using setuid or setgid, which is open to abuse and can cause many security holes.
Users are passive entities for whom authorizations can be specified. Once connected to the system, users originate processes (subjects) that execute on their behalf and submit requests to the system. Since discretionary policies don't distinguish users from subjects, they are vulnerable to processes executing malicious programs that exploit the authorizations of the user on whose behalf they are executing. The access control system can thus be bypassed by Trojan horses: computer programs containing hidden functions that exploit the legitimate authorizations of the invoking process. It's possible to delete all files of a user or leak information to users not allowed to read it. All this can happen without the cognizance of the data administrator/owner, and despite each single access request being checked against the authorizations.
Example of a Trojan horse: Vicky has the permission to write on John's file. She executes a malicious application that has the hidden function of reading File Market, owned by Vicky and inaccessible to John, and then writing its content to File Stolen, a file that can be accessed by John.
Mandatory Access Control (MAC): AC decisions are made by comparing security labels indicating the sensitivity/criticality of objects with the formal authorizations of subjects. It's more rigid than DAC, but also more secure. There is a system-wide access restriction on objects, and it's called mandatory because subjects may not transfer their access rights. An example is the military, in which clearance levels are assigned to objects and users, and users can only access objects of equal or lower levels.
Models can capture policies for confidentiality (Bell-LaPadula) or for integrity (Biba, Clark-Wilson).
o Read down means a subject's clearance must dominate (>=) the security level of the object being read. Read up is the contrary.
o Write up instead means that a subject's clearance must be dominated by (<=) the security level of the object being written (often it's =, since otherwise the subject couldn't read what they wrote; if the subject needs to write lower data, it will have to lower its clearance). Write down is the contrary.
o For confidentiality it's read down and write up, and for integrity it's read up and write down.
Some models apply to environments with static policies (Bell-LaPadula), others consider dynamic
changes of access rights (Chinese Wall).
Security models can be informal (Clark-Wilson), semi-formal, or formal (Bell-LaPadula, Harrison-
Ruzzo-Ullman)
A lattice (L, <=) consists of a set of security levels L and a partial ordering <=, such that for every two elements a, b ϵ L there exist a least upper bound u ϵ L (the lowest level dominating both) and a greatest lower bound l ϵ L (the highest level dominated by both).
In the first case, if a and b are two objects, the former of level 1 and the latter of level 2, then u = 2, because 1 <= 2 and 2 <= 2 (u is an upper bound), and for each upper bound v (v = 2 or v = 3) we have 2 <= v.
In the second case, if a and b are two subjects, the former of level 2 and the latter of level 3, then l = 2, because 2 <= 2 and 2 <= 3 (l is a lower bound), and for each lower bound k (k = 1 or k = 2) we have k <= 2.
Another example (in blue, (h1, c1) is dominated by (h2, c2) if and only if …):
The Bell-LaPadula (BLP) Model: It models confidentiality aspects of multi-user systems by combining
aspects of DAC and MAC:
Access permissions are defined both through an AC matrix and through security levels.
Multi-level security (MLS): mandatory policies prevent information flowing downwards from a high
security level to a low one.
BLP is a static model: security levels (labels) never change.
Level diagrams are used for BLP, but they cannot represent the lattice properties of the security labels under the dominates (<=) relation (e.g., when a subject tries to access an object with an incomparable security label, that is, neither label dominates the other).
BLP defines different security properties for a state, for example, the following two:
Simple security property (ss-property, no-read-up, NRU): A subject s can read an object o only if
the security level of s dominates the security level of o.
*-property (star property, no-write-down, NWD): A subject s can write an object o only if the security level of o dominates the security level of s. It prevents a high-level subject from sending messages to a low-level one (possible workarounds, where needed, are temporarily downgrading the level of s, or identifying a set of trusted subjects which are permitted to violate this property).
These two properties prevent untrusted subjects from simultaneously having read access to information at
one level and write access to information at a lower level.
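The two properties above can be sketched directly over labels of the form (classification, set of categories), where "dominates" means higher-or-equal classification and a superset of categories. The level names and categories are illustrative.

```python
# BLP checks: ss-property (no read up) and *-property (no write down).
LEVELS = {"unclassified": 0, "confidential": 1, "secret": 2, "top secret": 3}

def dominates(label_a, label_b):
    (cls_a, cats_a), (cls_b, cats_b) = label_a, label_b
    # >= on sets is the superset test
    return LEVELS[cls_a] >= LEVELS[cls_b] and cats_a >= cats_b

def can_read(subject, obj):      # ss-property: subject must dominate object
    return dominates(subject, obj)

def can_write(subject, obj):     # *-property: object must dominate subject
    return dominates(obj, subject)

s = ("secret", {"nuclear"})
assert can_read(s, ("confidential", {"nuclear"}))          # read down: allowed
assert not can_read(s, ("secret", {"nuclear", "crypto"}))  # missing category
assert can_write(s, ("top secret", {"nuclear"}))           # write up: allowed
assert not can_write(s, ("confidential", set()))           # write down: denied
```
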
It does not specify how to change access rights or how to create and delete subjects and objects, a
problem that can be addressed by employing the Harrison-Ruzzo-Ullman model.
It is a static model (controversial tranquility property).
It contains covert channels, that is, information-flows that are not controlled by security
mechanisms.
Tranquility property: Some say BLP needs to be improved because, using a transition, a system can be brought into a state where everyone is allowed to read everything, which is not secure; others say that this is not a problem of BLP but rather of correctly capturing the security requirements (if the user requirements call for such a transition, then it should be allowed in the model; otherwise it should not be implemented). At the root of this disagreement is a state transition that changes security levels (and access rights). BLP is, however, a static model: security levels are fixed.
Strong tranquility property: the security levels of subjects and objects never change during system
operation.
Weak tranquility property: the security levels never change in such a way as to violate a defined security policy (for example, the level of an object never changes while it is being used by some subject).
Covert channels: Sometimes it is not sufficient to hide the contents of objects; their existence must be hidden as well. In BLP, the AC mechanism itself can be used to construct a covert channel (where information flows from a high security level to a low one): in fact, telling a subject that a certain operation is not permitted already constitutes an information flow. The problem can be solved, for example, by letting an object have different values at different security levels (polyinstantiation).
The Biba Model: It's an integrity model with the opposite rules of BLP. Biba and BLP can be combined to model both confidentiality and integrity. It addresses integrity in terms of access by subjects to objects using a model like that of BLP, but unlike BLP there is no single high-level integrity policy; rather, there is a variety of policies.
The Chinese Wall Model: A commercially inspired confidentiality model (whereas most commercial models
focus on integrity). Models access rules in a consultancy business where analysts must make sure that no
conflicts of interest arise when they are dealing with different companies. Informally, conflicts arise
because clients are direct competitors in the same market, or because of the ownership of companies. An
adaptation of Bell-LaPadula, with three levels of abstraction:
ss-property: s is permitted to access an object o only if all the other objects already accessed by s are in the same company dataset as o (cd(o) = cd(o') for all o' with N(s, o') = true, where N returns true if a subject has accessed a given object), or if o belongs to a company dataset that is not present in the conflict-of-interest classes of the objects already accessed by s (cd(o) !ϵ cic(o') for all o' with N(s, o') = true).
*-property: grant s write access to an object o only if no other object o' readable by s is in a different company dataset (cd(o') != cd(o)) and contains unsanitized information (cic(o') != empty set; a sanitized object has cic(o') = empty set because it doesn't contain sensitive information, and thus it can be accessed by every company). Without this property, indirect information flow would be possible: two analysts could access two competing companies in parallel and then both write data about those companies to, say, a Bank dataset (thereby gaining access to the rival company's data as well).
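The ss-property above can be sketched over a subject's access history. The company datasets and conflict-of-interest classes below are illustrative.

```python
# Chinese Wall ss-property: an object is accessible only if it is in the same
# company dataset as, or outside the conflict classes of, everything already
# accessed by the subject.
CD = {"oilA_report": "OilA", "oilB_report": "OilB", "bank_report": "Bank"}
CIC = {"OilA": "oil", "OilB": "oil", "Bank": "banking"}  # conflict class per dataset

def ss_allows(history, obj):
    for prior in history:
        same_dataset = CD[prior] == CD[obj]
        conflict = CIC[CD[prior]] == CIC[CD[obj]]
        if not same_dataset and conflict:
            return False     # a direct competitor was already accessed
    return True

history = ["oilA_report"]
assert ss_allows(history, "oilA_report")      # same company dataset: fine
assert ss_allows(history, "bank_report")      # different conflict class: fine
assert not ss_allows(history, "oilB_report")  # direct competitor: denied
```

Note that, unlike BLP, the decision depends on the subject's history of accesses, not on fixed labels alone.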
The Clark-Wilson Integrity Model: An informal integrity model motivated by the way commercial organizations control the integrity of their data; it is quite different from level-oriented models (like BLP and Biba). In this model, objects (data items) are partitioned into:
(RBAC basic rule) A subject s has permission p on object o if and only if s has a role r and r has permission p on o.
(Role hierarchy) s has permission p on object o if and only if there exist roles r and r' such that r >= r', s has role r, and r' has permission p on o.
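The two RBAC rules above can be sketched as a small check. The role hierarchy is given as "senior role -> roles it inherits from"; the users, roles, and permissions are illustrative.

```python
# RBAC permission check with a role hierarchy (r >= r' means r inherits
# the permissions of r').
USER_ROLES = {"carla": {"manager"}, "dave": {"clerk"}}
ROLE_PERMS = {"clerk": {("read", "ledger")}, "manager": {("approve", "payroll")}}
SENIOR_OF = {"manager": {"clerk"}}     # manager >= clerk

def inherited(role):
    """All roles dominated by `role`, including itself."""
    closure, frontier = {role}, [role]
    while frontier:
        for junior in SENIOR_OF.get(frontier.pop(), set()):
            if junior not in closure:
                closure.add(junior)
                frontier.append(junior)
    return closure

def has_permission(user, perm, obj):
    for role in USER_ROLES.get(user, set()):
        for r in inherited(role):
            if (perm, obj) in ROLE_PERMS.get(r, set()):
                return True
    return False

assert has_permission("dave", "read", "ledger")
assert has_permission("carla", "read", "ledger")   # inherited via the hierarchy
assert not has_permission("dave", "approve", "payroll")
```
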
Further advantages:
Least privilege: Roles allow a user to sign on with the least privilege required for the particular task
she needs to perform. This minimizes the danger of damage due to inadvertent errors or Trojan
Horses.
Separation of duties: no user should be given enough privileges to misuse the system on their own.
For instance, the person authorizing a paycheck should not be the same person who can prepare
them.
Constraints enforcement: Roles provide a basis for the specification and enforcement of further protection requirements. For instance, cardinality constraints restrict the number of users allowed to activate a role or the number of roles allowed to exercise a given privilege.
Administrative Role-Based Access Control (ARBAC): The simple hierarchical relationship is not sufficient to
model the different kinds of relationships that can occur. For example, a secretary may need to be allowed
to write specific documents on behalf of her manager, but neither role is a specialization of the other.
Different ways of propagating privileges (delegation) should then be supported. In some cases, the
requestor's identity needs to be considered: for instance, a doctor may be allowed to specify treatments and access files, but she may be restricted to treatments and files for her own patients, where the doctor-patient relationship is defined based on their identities.
In the URA97 submodel, administrative actions can only modify the User Assignment (UA) relation. Example:
Tips
The properties are (not for security protocols):
- Authentication (which implies integrity): the channel is secure; no third party could have accessed the information or tampered with it.
- Non-repudiation (which implies authentication): neither party can forge or deny a message, because it is strictly linked to one individual, who used their private key to encrypt it.
- Secrecy/Confidentiality: the message can't be read by anyone who does not have the secret k used to encrypt it (like a session key).
- Freshness of the key.
Who authenticates? Authentication is done through challenges. If A sends a challenge to B, the purpose of the protocol is for B to be authenticated to A, and vice versa. If both send challenges, it's mutual authentication.
Replay vs reflection attacks: A replay attack would be to capture the command and just send it again to open the door again, without knowing what the command is (it is encrypted) or knowing how to sign it (it is already signed). A reflection attack is using the target to authenticate its own challenge (nonce). Step 3 could be used for a replay attack (the attacker sends an old Kab to establish a communication with Bob, and Bob would think it was Alice who sent the message), while steps 4 and 5 could have allowed a reflection attack if there were no Nb - 1 (if step 5 contained only Nb, an attacker could replay the step 4 message to Bob).
Nonce vs timestamps: Both can be used to verify the freshness of a key, for example (timestamps are a
better option if the receiver has not communicated with the sender yet). Nonces can be used for
challenges, that is, authentication (for example, A can send an encrypted nonce to B using B’s public key
and if B sends it back, always encrypted with A’s public key, A can be sure it’s B who she’s communicating
with).
The Bell-LaPadula exercises are like algebra: each subject and object has a security label (classification, categories). Since Bell-LaPadula guarantees confidentiality, the rules are read down and write up:
To allow the subject to read the object, check that the classification of the subject is greater than or equal to the classification of the object (unclassified, confidential, secret, top secret) and that the categories of the subject include the categories of the object (set inclusion).
To allow the subject to write the object, check that the classification of the subject is lower than or equal to the classification of the object (unclassified, confidential, secret, top secret) and that the categories of the subject are included in the categories of the object (set inclusion).
A subject might be able to both read and write an object, or be unable to do either (labels not comparable).
Smartcard and PC: A smartcard contains its owner's private key and uses it to sign documents. Although it can only encrypt relatively short bit strings (due to its limited computational power), this is sufficient for digital signatures. In fact:
For signing, the PC computes and sends to the smartcard only the hash of the document to be signed. The smartcard encrypts the received value with the private key stored inside it and sends the result back to the PC. The smartcard never transmits its stored private key to the outside, and no network connection is needed.
For signature verification, a digital certificate of the person who signed the document is needed (the smartcard plays no role, since its owner's private key is no longer required). A network connection is not strictly necessary if one has the signer's digital certificate and a recent Certificate Revocation List (to check that the certificate has not been revoked).
A digital certificate contains the identity of the certificate's owner and of the Certification Authority that produced it, the digital signature of the certificate itself produced by the Certification Authority, and the owner's public key (which is precisely what is used to verify the signature).
IPSec: IPSec makes it possible to build various kinds of VPN. It is not a single protocol but rather a
Network-layer security architecture composed of several protocols, the main ones being three:
Both AH and ESP prevent replay attacks and can be used in two modes:
Transport: the headers of the protocols in use (AH and/or ESP) are inserted between the IP header
and the transport-layer header (TCP or UDP).
Tunnel: the original packet is entirely encapsulated (a new IP header is added, and the AH/ESP
headers are inserted between this new IP header and the original one).
Note that, unlike AH, ESP does not authenticate the IP header; moreover, ESP appends a trailer and an
authentication field at the end of the packet. Both the AH and the ESP header contain the SPI (Security
Parameters Index) field, which identifies the Security Association in use.
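The two ESP modes can be compared by writing out the resulting header order (a sketch; the field names are descriptive labels, not real packet structures):

```python
def esp_layout(mode: str) -> list[str]:
    """Header order of an ESP-protected TCP packet in each mode (illustrative)."""
    inner = ["TCP", "payload", "ESP trailer", "ESP auth"]
    if mode == "transport":
        # ESP header slotted between the original IP header and the TCP header
        return ["IP", "ESP"] + inner
    if mode == "tunnel":
        # the whole original packet is encapsulated behind a new outer IP header
        return ["outer IP", "ESP", "inner IP"] + inner
    raise ValueError(mode)
```

In both layouts the leading IP header stays outside ESP's protection, which is the difference with AH noted above.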
Security Association (SA): a logical connection used to agree on the security mechanisms to be used (the
encryption algorithm, the keys, etc.); it is established with the IKE protocol. SAs are unidirectional, so two
SAs are needed for two hosts to communicate with each other. All the SAs active on a host are stored in a
database called the SAD (Security Association Database), while a separate database, the SPD (Security
Policy Database), holds the security policies. It is through the SPD that the system decides whether a
packet must be discarded, let through in the clear, or processed by IPSec.
Outbound traffic: look in the SPD for a selector that applies to the packet. If IPSec processing is
required, associate the packet with an existing SA in the SAD or use IKE to create a new one. Then
perform the required processing and pass the packet down to the lower layer.
Inbound traffic: identify the packets that must be processed with IPSec (using the identifying
values in the protocol field of the IP header to check for AH or ESP), identify the relevant SA
through the SPI (Security Parameters Index) value, apply the required IPSec processing, and check
the SPD. Finally, the packet is forwarded to the next destination or passed up to the higher layer.
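The outbound decision path (SPD lookup, then SAD lookup or IKE) can be sketched like this; the table layout, the action names, and `ike_negotiate` are illustrative stand-ins, not the real IPSec data structures:

```python
# Ordered SPD: (selector predicate, action); first match wins.
SPD = [
    (lambda pkt: pkt["dst"].startswith("10.0."), "protect"),   # apply IPSec
    (lambda pkt: pkt["dst"] == "192.168.1.1",    "bypass"),    # pass in the clear
    (lambda pkt: True,                           "discard"),   # default: drop
]

# SAD: active SAs, keyed here by (destination, protocol) for illustration.
SAD = {("10.0.0.5", "ESP"): {"spi": 0x1001, "alg": "AES-CBC"}}

def ike_negotiate(dst):
    """Stand-in for IKE: create a new SA and record it in the SAD."""
    sa = {"spi": 0x2000, "alg": "AES-CBC"}
    SAD[(dst, "ESP")] = sa
    return sa

def outbound(pkt):
    for selector, action in SPD:
        if selector(pkt):
            if action != "protect":
                return action
            # reuse an existing SA from the SAD, or run IKE to create one
            sa = SAD.get((pkt["dst"], "ESP")) or ike_negotiate(pkt["dst"])
            return ("protect", sa["spi"])
```

Inbound processing mirrors this: the SPI found in the AH/ESP header replaces the SPD selector as the key into the SAD.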
- Replay
- Reflection: a nonce can be sent back, either because the receiver is unable to associate it with an identity
(the identifier is missing) or because the challenge consists of sending back the same nonce encrypted with
a symmetric key (to prevent this kind of attack the receiver would have to open the message and apply a
function agreed between A and B, but this is not always acceptable).
- Freshness: the receiver cannot tell whether a key is old or new, and the intruder I could send an old key
that it has managed to recover through cryptanalysis. The solution often involves inserting timestamps or
nonces.
- Bits: bit length often matters, because if there are only a few bits it is possible to build a lookup table
associating what is sent with what is received.
- Cleartext messages: anything sent in the clear can be vulnerable, since I can collect the messages and
combine them.
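The reflection weakness can be shown with a toy symmetric challenge-response; the XOR "encryption" and all names here are purely illustrative assumptions, chosen only to make the oracle pattern visible:

```python
# Toy shared symmetric key between A and B (illustrative, insecure).
K = 0b101101

def respond(nonce: int) -> int:
    """Protocol step: answer a challenge by "encrypting" it (here, XOR with K)."""
    return nonce ^ K

def a_accepts(nonce: int, answer: int) -> bool:
    """A accepts the session if the answer matches its own computation."""
    return answer == nonce ^ K

def reflection_attack(challenge_from_a: int) -> int:
    """Intruder I, holding no key: reflect A's challenge back in a second
    session, and A itself computes the valid response for I to replay."""
    return respond(challenge_from_a)   # A is used as an oracle against itself
```

Because the responder cannot tie the nonce to an identity, it cannot tell its own reflected challenge from a fresh one, which is exactly the missing-identifier problem described above.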