
Cryptography

Michele Laurenti
January 18, 2017

Abstract

Notes taken during the Cryptography lectures held by Daniele Venturi
(http://danieleventuri.altervista.org/crypto.shtml) in fall 2016 at Sapienza.

Contents

1 Introduction
  1.1 Perfect secrecy
  1.2 Authentication
  1.3 Randomness Extraction

2 Computational Cryptography
  2.1 One Way Functions
  2.2 Computational Indistinguishability
  2.3 Pseudo Random Generators
  2.4 Hard Core Predicates

3 Symmetric Key Encryption Schemes
  3.1 Chosen Plaintext Attacks and Pseudo Random Functions
  3.2 Computationally Secure MACs
  3.3 Domain Extension
  3.4 Chosen Cyphertext Attacks and Authenticated Encryption
  3.5 Pseudo Random Permutations
  3.6 Collision Resistant Hash Functions
  3.7 Claw-Free Permutations
  3.8 Random Oracle Model

4 Number Theory
  4.1 Elliptic Curves
  4.2 Diffie-Hellman Key Exchange
  4.3 Pseudo Random Generators
  4.4 Pseudo Random Functions (PRFs): Naor-Reingold
  4.5 Number Theoretic Claw-Free Permutation

5 Public Key Encryption
  5.1 Factoring Assumption and RSA
  5.2 Trapdoor Permutation from Factoring
  5.3 Cramer-Shoup Encryption (1998)

6 Signature Schemes
  6.1 Waters Signature Scheme
  6.2 Identification Schemes

List of Definitions

List of Constructions

Acronyms

Figure 1: Message exchange between A and B using symmetric encryption. E
is the eavesdropper.

1 Introduction
Solomon,
I’m concerned about security; I think, when we email each other,
we should use some sort of code.

Confidentiality is our goal. We want to encrypt and decrypt a (plaintext)
message m, using a key, to obtain a cyphertext c. As per Kerckhoffs's principle,
only the key is secret.
Our encryption schemes have the following syntax:

Π = (Gen, Enc, Dec) .

A and B, the actors of our communication exchange (fig. 1), share k, the key,
taken from some key space K. The elements of our encryption scheme play the
following roles:

1. Gen outputs a random key from the key space K, and we write this as
k ← $Gen;
2. Enc : K × M → C is the encryption function, mapping a key and a
message to a cyphertext;
3. Dec : K × C → M is the decryption function, mapping a key and a
cyphertext to a message.

We expect an encryption scheme to be at least correct:

∀k ∈ K, ∀m ∈ M.Dec(k, Enc(k, m)) = m.

1.1 Perfect secrecy


Shannon defined “perfect secrecy”, i.e., the fact that the cyphertext carries no
information about the plaintext.

Definition 1 (Perfect secrecy). Let M be a Random Variable (RV) over M,


and K be a uniform distribution over K.

(Enc, Dec) has perfect secrecy if

∀M, ∀m ∈ M, ∀c ∈ C. Pr [M = m] = Pr [M = m | C = c]

where C = Enc(K, M ) is a third RV. 

We have equivalent definitions for perfect secrecy.

Theorem 1. The following definitions are equivalent:

1. definition 1;
2. M and C are independent;
3. ∀m, m′ ∈ M, ∀c ∈ C,

   Pr [Enc(k, m) = c] = Pr [Enc(k, m′ ) = c]

where k is a random key in K chosen with uniform probability. 

Proof of theorem 1. First, we show that 1 implies 2.

Pr [M = m] = Pr [M = m | C = c]
           = Pr [M = m ∧ C = c] / Pr [C = c]    (by Bayes)
=⇒ Pr [M = m] Pr [C = c] = Pr [M = m ∧ C = c]

which is the definition of independence.


Now we show that 2 implies 3. Fix m ∈ M and c ∈ C.

Pr [Enc(k, m) = c] = Pr [Enc(k, M ) = c | M = m]   (we fixed m)
                   = Pr [C = c | M = m]            (definition of the RV C)
                   = Pr [C = c] .                  (by 2)

Since m is arbitrary, we can do the same for m′ , and obtain

Pr [Enc(k, m′ ) = c] = Pr [C = c]

which gives us 3.

Now we want to show that 3 implies 1. Take any c ∈ C.

Pr [C = c] = Σ_{m′∈M} Pr [C = c ∧ M = m′ ]
           = Σ_{m′∈M} Pr [C = c | M = m′ ] Pr [M = m′ ]          (by Bayes)
           = Σ_{m′∈M} Pr [Enc(k, M ) = c | M = m′ ] Pr [M = m′ ]
           = Σ_{m′∈M} Pr [Enc(k, m′ ) = c] Pr [M = m′ ]
           = Pr [Enc(k, m) = c] Σ_{m′∈M} Pr [M = m′ ]   (by 3; Enc(k, m) is independent of M , so we take it out)
           = Pr [Enc(k, m) = c]                         (the probabilities sum to 1)
           = Pr [Enc(k, M ) = c | M = m] = Pr [C = c | M = m] .

We are left to show that Pr [M = m] = Pr [M = m | C = c], but this is easy with
Bayes.

One Time Pad


Now we’ll see a perfect encryption scheme, the One Time Pad (OTP).

Construction 1 (One Time Pad). The message space, the cyphertext space,
and the key space are all the same, i.e., M = K = C = {0, 1}l , with l ∈ N+ .
Encryption and decryption use the xor operation:

• Enc(k, m) = k ⊕ m = c;
• Dec(k, c) = c ⊕ k = (k ⊕ m) ⊕ k = m.

Seeing that this is correct is immediate.


This can actually be done in any finite abelian group (G, +), where you just
do k + m to encode and c − k to decode.
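A minimal Python sketch of the scheme, working over bytes (so the group is ({0, 1}⁸)ˡ with xor):

```python
import secrets

def otp_gen(l: int) -> bytes:
    # Gen: a uniformly random key, as long as the message
    return secrets.token_bytes(l)

def otp_enc(k: bytes, m: bytes) -> bytes:
    # Enc(k, m) = k xor m
    assert len(k) == len(m)
    return bytes(ki ^ mi for ki, mi in zip(k, m))

def otp_dec(k: bytes, c: bytes) -> bytes:
    # Dec(k, c) = k xor c = k xor (k xor m) = m
    return otp_enc(k, c)

m = b"attack at dawn"
k = otp_gen(len(m))
assert otp_dec(k, otp_enc(k, m)) == m  # correctness
```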

Theorem 2. OTP is perfectly secure. 

Proof of theorem 2. Fix m ∈ M, c ∈ C, and choose a random key.

Pr [Enc(k, m) = c] = Pr [k = c − m] = 1/|G| .

This is true for any m, so we are done.

OTP has two problems:

Figure 2: Message exchange between A and B using symmetric authentication.
E is the eavesdropper.

1. the key is long (as long as the message);

2. we can't reuse the key:

   c = k + m, c′ = k + m′ =⇒ c − c′ = m − m′ =⇒ m′ = m − (c − c′ ).
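The second problem can be seen concretely. A short Python sketch of the two-time pad attack, where the eavesdropper recovers m′ from the two cyphertexts and m:

```python
def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

k  = bytes([0x5A] * 5)   # the same key, reused for two messages
m0 = b"hello"
m1 = b"world"
c0 = xor(k, m0)
c1 = xor(k, m1)

# E never sees k, yet c0 xor c1 = m0 xor m1 ...
assert xor(c0, c1) == xor(m0, m1)
# ... so knowing m0 reveals m1 outright:
assert xor(xor(c0, c1), m0) == m1
```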

Theorem 3 (Shannon, 1949). In any perfectly secure encryption scheme the
size of the key space is at least as large as the size of the message space, i.e.,
|K| ≥ |M|. 

Proof of theorem 3. Assume, for the sake of contradiction, that |K| < |M|. Fix
M to be the uniform distribution over M, which we can do as perfect secrecy
works for any distribution. Take a cyphertext c ∈ C such that Pr [C = c] > 0,
i.e., ∃m, k such that Enc(k, m) = c.
Consider M′ = {Dec(k, c) : k ∈ K}, the set of all messages decrypted from
c using any key. Clearly, |M′ | ≤ |K| < |M|, so ∃m′ ∈ M such that m′ ∉ M′ .
This means that

Pr [M = m′ ] = 1/|M| ≠ Pr [M = m′ | C = c] = 0

in contradiction with perfect secrecy.

In the rest of the course we will forget about perfect secrecy, and strive for
computational security, i.e., bound the computational power of the adversary.

1.2 Authentication
The aim of authentication is to prevent E from tampering with the messages
exchanged between A and B (fig. 2).
A Message Authentication Code (MAC) is defined as a tuple Π = (Gen, Mac, Vrfy),
where:
• Gen, as usual, outputs a random key from some key space K;
• Mac : K × M → Φ maps a key and a message to an authenticator in
some authenticator space Φ;
• Vrfy : K × M × Φ → {0, 1} verifies the authenticator.
As usual, we expect a MAC to be correct, i.e.,
∀m ∈ M, ∀k ∈ K.Vrfy(k, m, Mac(k, m)) = 1.
If the Mac function is deterministic, then it must be that Vrfy(k, m, φ) = 1
if and only if Mac(k, m) = φ.
Security for MACs is that forgery must be hard: you can’t come up with
an authenticator for a message if you don’t know the key.
Definition 2 (Information theoretic MAC). (Mac, Vrfy) has ε-statistical
security if for all (possibly unbounded) adversaries A, and for all m ∈ M,

Pr [Vrfy(k, m′ , φ′ ) = 1 ∧ m′ ≠ m : k ← $KeyGen; φ = Mac(k, m); (m′ , φ′ ) ← A(m, φ)] ≤ ε

i.e., the adversary forges a pair (m′ , φ′ ) that verifies with key k only with low
probability, even if it knows a valid pair (m, φ). 
As an exercise, prove that the above is impossible if ε = 0.
Information theoretic security is also called unconditional security. Later
we’ll see conditional security, based on computational assumptions.
Definition 3 (Pairwise independence). Given a family H = {hk : M → Φ}k∈K
of functions, we say that H is pairwise independent if for all distinct m, m′ we
have that (hk (m), hk (m′ )) ∈ Φ² is uniform over the choice of k ← $K. 

We show straight away a construction of a pairwise independent family of
functions.

Construction 2 (Pairwise independent function). Let p be a prime; the
functions in our family H are defined as

ha,b (m) = am + b mod p

with K = Zp² , and with M = Φ = Zp . 
Theorem 4. Construction 2 is pairwise independent. 
Proof of theorem 4. For any distinct m, m′ and any φ, φ′ , we want to find the
value of

Pr [am + b = φ ∧ am′ + b = φ′ ]

for (a, b) ← $Zp² . This is the same as

Pr_{a,b} [ (m 1; m′ 1) · (a; b) = (φ; φ′ ) ] = Pr_{a,b} [ (a; b) = (m 1; m′ 1)⁻¹ · (φ; φ′ ) ] = 1/|Φ|² .

This is true since the matrix (m 1; m′ 1) is invertible when m ≠ m′ , so
(m 1; m′ 1)⁻¹ · (φ; φ′ ) is just a pair of (constant) numbers, and the probability
of choosing (a, b) equal to those constants is exactly 1/|Φ|² .

If hk is part of a pairwise independent family of functions, then Mac(k, m) =
hk (m), and Vrfy(k, m, φ) is simply computing hk (m) and comparing it with φ,
i.e.,
Vrfy(k, m, φ) = 1 ⇐⇒ hk (m) = φ.
We now prove that this is an information theoretic MAC.
Theorem 5. Any pairwise independent family of functions gives a 1/|Φ|-statistically
secure MAC. 

Proof of theorem 5. Take any two distinct m, m′ , and any φ, φ′ . We show that
the probability that Mac(k, m′ ) = φ′ is small, namely 1/|Φ|.

Pr_k [Mac(k, m) = φ] = Pr_k [hk (m) = φ] = 1/|Φ| .

Now look at the joint probabilities:

Pr_k [Mac(k, m) = φ ∧ Mac(k, m′ ) = φ′ ] = Pr_k [hk (m) = φ ∧ hk (m′ ) = φ′ ]   (by definition)
                                        = 1/|Φ|² = (1/|Φ|) · (1/|Φ|).

The last step comes from the fact that hk is pairwise independent. To see that
the construction is 1/|Φ|-statistically secure:

Pr_k [Mac(k, m′ ) = φ′ | Mac(k, m) = φ] = Pr_k [hk (m′ ) = φ′ | hk (m) = φ]
                                        = Pr_k [hk (m) = φ ∧ hk (m′ ) = φ′ ] / Pr_k [hk (m) = φ]
                                        = 1/|Φ| .

Note that Construction 2 (hk (m) = am + b mod p) is insecure if the same
key k = (a, b) is used for two messages.
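A Python sketch of this MAC over Zp (with a small toy prime p, chosen here only for illustration), including the two-message attack just mentioned: from two authenticated pairs under the same key, solving the linear system reveals (a, b).

```python
p = 101  # toy prime; here M = Phi = Z_p and K = Z_p x Z_p

def mac(key, m):
    a, b = key
    return (a * m + b) % p          # h_{a,b}(m) = am + b mod p

def vrfy(key, m, phi):
    return mac(key, m) == phi       # deterministic Mac: recompute and compare

key = (13, 57)                      # fixed key, for reproducibility
assert vrfy(key, 42, mac(key, 42))  # correctness

# Two authenticated messages under the SAME key leak it completely:
m0, m1 = 5, 9
phi0, phi1 = mac(key, m0), mac(key, m1)
a = (phi0 - phi1) * pow(m0 - m1, -1, p) % p   # a = (phi0 - phi1)/(m0 - m1)
b = (phi0 - a * m0) % p
assert (a, b) == key
```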

Theorem 6. Any t-time 2⁻λ-statistically secure MAC has key of size (t + 1)λ. 

1.3 Randomness Extraction


X is a random source (possibly not uniform); Ext(X) = Y should be a uniform RV.
First, let's see a construction for a binary RV. Let B be a RV such that
Pr [B = 1] = p and Pr [B = 0] = 1 − p, with p ≠ 1 − p. We take two samples,
B1 and B2 , from B, and we want to obtain an unbiased RV B′ .

1. Take two samples, b1 ← $B1 and b2 ← $B2 ;
2. if b1 = b2 , sample again;
3. if (b1 = 1 ∧ b2 = 0), output 1; if (b1 = 0 ∧ b2 = 1), output 0.

It's easy to verify that B′ is uniform:

Pr [B′ = 1] = Pr [B1 = 1 ∧ B2 = 0] = p(1 − p)
Pr [B′ = 0] = Pr [B1 = 0 ∧ B2 = 1] = (1 − p)p.

How many trials do we have to make before outputting something? 2(1 − p)p
is the probability that we output something in a given round, so the probability
that we don't output anything for n rounds is (1 − 2(1 − p)p)ⁿ .
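The extractor above (von Neumann's trick) can be sketched in Python; the bias p = 0.8 below is an arbitrary choice for illustration.

```python
import random

def von_neumann(bits):
    # Consume the biased stream in pairs; (1,0) -> 1, (0,1) -> 0,
    # equal pairs are discarded ("sample again").
    out = []
    it = iter(bits)
    for b1, b2 in zip(it, it):
        if b1 != b2:
            out.append(1 if (b1, b2) == (1, 0) else 0)
    return out

rng = random.Random(0)
biased = [1 if rng.random() < 0.8 else 0 for _ in range(100_000)]  # p = 0.8
unbiased = von_neumann(biased)
# Each output bit appears with probability p(1-p), regardless of p:
assert abs(sum(unbiased) / len(unbiased) - 0.5) < 0.03
```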

Something is missing here on randomness extraction and min-entropy.

2 Computational Cryptography

Some details should be added on negligible functions.

To introduce computational cryptography we first have to define a computational
model. We assume the adversary is efficient, i.e., it is a Probabilistic
Polynomial Time (PPT) adversary.
We want the probability of success of the adversary to be tiny, i.e., negligible
in the security parameter λ ∈ N. A function ε : N → R is negligible if ∀c > 0
∃n0 such that ∀n > n0 . ε(n) < n⁻c .
We rely on computational assumptions, i.e., on tasks believed to be hard
for any efficient adversary. In this setting we make conditional statements, i.e.,
if a certain assumption holds then a certain crypto-scheme is secure.

2.1 One Way Functions


A simple computational assumption is the existence of One Way Functions
(OWFs), i.e., functions that are easy to compute but hard to invert.

Definition 4 (One Way Function). A function f : {0, 1}∗ → {0, 1}∗ is a OWF
if f (x) can be computed in polynomial time for all x, and for all PPT adversaries
A it holds that

Pr [f (x′ ) = y : x ← ${0, 1}∗ ; y = f (x); x′ ← A(1λ , y)] ≤ ε(λ). 

The 1λ given to the adversary A is there to highlight the fact that A runs in
time polynomial in the length of its input (λ).
Russell Impagliazzo proved that OWFs are equivalent to One Way Puzzles,
i.e., pairs (Pgen, Pver) where Pgen(1λ ) → (y, x) gives us a puzzle (y) and a
solution to it (x), while Pver(x, y) → 0/1 verifies whether x is a solution of y.
Another object of interest in this classification are average-hard NP-puzzles,
for which you can only generate an instance, i.e., Pgen(1λ ) → y.
Impagliazzo says we live in one of five worlds:

1. Algorithmica, where P = NP;
2. Heuristica, where there are no average-hard NP-puzzles;
3. Pessiland, where you have average-hard NP-puzzles but no OWFs;
4. Minicrypt, where you have OWFs and one-way puzzles, but no Public
Key Cryptography (PKC);
5. Cryptomania, where you have both OWFs and PKC.

We'll stay in Minicrypt for now.
OWFs are hard to invert on average. Two examples:
• factoring the product of two large prime numbers;
• computing the discrete logarithm: take a finite group (G, ·), and compute
y = g^x for some g ∈ G. Then find x = log_g (y). This is hard to
compute in some groups, e.g., Z∗p .
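A toy Python sketch of this asymmetry: exponentiation is one call to pow, while the only generic way to invert it here is brute force (the toy group Z∗101 is tiny; real groups are far too large to search).

```python
p, g = 101, 2   # toy group Z*_101 with generator 2, for illustration only

x = 37
y = pow(g, x, p)        # the easy direction: modular exponentiation

# The inverse direction (discrete log): nothing better than brute force
# here, and in a group of cryptographic size this search is infeasible.
recovered = next(e for e in range(p - 1) if pow(g, e, p) == y)
assert recovered == x
```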

2.2 Computational Indistinguishability


Definition 5 (Distribution Ensemble). A distribution ensemble X = {Xλ }λ∈N
is a sequence of distributions, where each Xλ is over some space {0, 1}λ . 

Definition 6 (Computational Indistinguishability). Two distribution
ensembles X = {Xλ } and Y = {Yλ } are computationally indistinguishable,
written X ≈c Y, if for all PPT distinguishers D it holds that

|Pr [D(Xλ ) = 1] − Pr [D(Yλ ) = 1]| ≤ ε(λ).

Lemma 1 (Reduction). If X ≈c Y, then for all PPT functions f , f (X ) ≈c f (Y). 

Proof of lemma 1. Assume, for the sake of contradiction, that ∃f such that
f (X ) ≉c f (Y): then we can distinguish X and Y. Since f (X ) ≉c f (Y),
∃p = poly(λ) and a PPT distinguisher D such that, for infinitely many λs,

|Pr [D(f (Xλ )) = 1] − Pr [D(f (Yλ )) = 1]| ≥ 1/p(λ).

D distinguishes f (Xλ ) and f (Yλ ) with non-negligible probability. Consider
the following D′ , which is given a sample

z ← $Xλ   or   z ← $Yλ .

D′ runs D(f (z)) and outputs whatever it outputs; it thus has the same
probability of distinguishing X and Y as D, in contradiction with the fact
that X ≈c Y.

Now we show that computational indistinguishability is transitive.


Lemma 2 (Hybrid Argument). Let X = {Xλ }, Y = {Yλ }, Z = {Zλ } be
distribution ensembles. If Xλ ≈c Yλ and Yλ ≈c Zλ , then Xλ ≈c Zλ . 

Proof of lemma 2. This follows from the triangle inequality.

|Pr [D(Xλ ) = 1] − Pr [D(Zλ ) = 1]|
  = |Pr [D(Xλ ) = 1] − Pr [D(Yλ ) = 1] + Pr [D(Yλ ) = 1] − Pr [D(Zλ ) = 1]|
  ≤ |Pr [D(Xλ ) = 1] − Pr [D(Yλ ) = 1]| + |Pr [D(Yλ ) = 1] − Pr [D(Zλ ) = 1]|
  ≤ 2ε(λ).   (negligible)

We often prove X ≈c Y by defining a sequence H0 , H1 , . . . , Ht of distribution
ensembles such that H0 ≡ X , Ht ≡ Y, and Hi ≈c Hi+1 for all i.

2.3 Pseudo Random Generators


Let's see our first cryptographic primitive. Pseudo Random Generators (PRGs)
take as input a random seed and generate pseudo random sequences with some
stretch, i.e., output longer than the input, yet indistinguishable from a truly
random sequence.
Definition 7 (Pseudo Random Generator). A function G : {0, 1}λ → {0, 1}λ+l(λ)
is a PRG if and only if:

1. G is computable in polynomial time;
2. |G(s)| = λ + l(λ) for all s ∈ {0, 1}λ ;
3. G (Uλ ) ≈c Uλ+l(λ) .

Figure 3: Extending a PRG with 1 bit stretch to a PRG with l bit stretch.

Theorem 7. If ∃ PRG with 1 bit of stretch, then ∃ PRG with l(λ) bits of
stretch, with l(λ) = poly(λ). 

Proof of theorem 7. We'll prove this just for some fixed constant l(λ) = l ∈ N.
First, let's look at the construction (fig. 3). We chain l copies of our PRG G
with 1 bit of stretch. The PRG G^l that we define takes as input s ∈ {0, 1}λ ,
computes (s1 , b1 ) = G(s), where s1 ∈ {0, 1}λ and b1 ∈ {0, 1}, outputs b1 and
feeds s1 to the second copy of G, and so on until the l-th copy.
To show that our construction is a PRG, we define l hybrids, with H0 (λ) ≡
G^l (Uλ ), where G^l : {0, 1}λ → {0, 1}λ+l is our proposed construction. The
hybrid Hi (λ) takes b1 , . . . , bi ← ${0, 1} and si ← ${0, 1}λ , and outputs
(b1 , . . . , bi , G^{l−i} (si )), i.e., the first i bits are truly random and the rest
is the output of our construction restricted to l − i units.
Hl (λ) takes b1 , . . . , bl ← ${0, 1} and sl ← ${0, 1}λ and outputs (b1 , . . . , bl , sl )
directly.
We need to show that Hi (λ) ≈c Hi+1 (λ). To do so, fix some i. The only difference
between the two hybrids is that bi+1 (and the state si+1 ) are pseudo random in
Hi (λ), and truly random in Hi+1 (λ). All bits before them are truly random, all
bits after are pseudo random.
Assume these two hybrids are distinguishable; then we can break the PRG.
Consider the PPT function fi defined by fi (si+1 , bi+1 ) = (b1 , . . . , bl , sl ) such
that b1 , . . . , bi ← ${0, 1} and, for all j ∈ [i + 2, l], (bj , sj ) = G(sj−1 ).
By the security of PRGs we have that G(Uλ ) ≈c Uλ+1 . By reduction (lemma 1),
we also have that fi (G(Uλ )) ≈c fi (Uλ+1 ). Thus, Hi (λ) ≈c Hi+1 (λ).
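The chaining G^l in the proof can be sketched in Python. SHA-256 is used here only as a stand-in for a 1-bit-stretch PRG (an assumption for illustration, not a proven PRG):

```python
import hashlib

LAMBDA = 16  # seed length in bytes, standing in for the security parameter

def prg_1bit(s: bytes):
    # Toy stand-in for a 1-bit-stretch PRG G(s) = (s1, b1).
    d = hashlib.sha256(s).digest()
    return d[:LAMBDA], d[LAMBDA] & 1   # next state s1, output bit b1

def prg_l(s: bytes, l: int):
    # G^l: chain l copies of G, outputting one bit per copy plus the
    # final state, as in fig. 3.
    bits = []
    for _ in range(l):
        s, b = prg_1bit(s)
        bits.append(b)
    return bits, s

bits, final_state = prg_l(b"0123456789abcdef", 8)
assert len(bits) == 8 and len(final_state) == LAMBDA
```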

2.4 Hard Core Predicates


Definition 8 (Hard Core Predicate - I). A polynomial time function h :
{0, 1}n → {0, 1} is hard core for f : {0, 1}n → {0, 1}n if for all PPT adversaries
A

Pr [A(f (x)) = h(x) : x ← ${0, 1}n ] ≤ 1/2 + ε(λ). 


The 1/2 in the upper bound tells us that the adversary can't do better than
guessing.
Definition 9 (Hard Core Predicate - II). A polynomial time function h :
{0, 1}n → {0, 1} is hard core for f : {0, 1}n → {0, 1}n if for all PPT adversaries
A

|Pr [A(f (x), h(x)) = 1 : x ← ${0, 1}n ] − Pr [A(f (x), b) = 1 : x ← ${0, 1}n ; b ← ${0, 1}]| ≤ ε(λ).

Theorem 8. Definition 8 and definition 9 are equivalent. 

The proof of this theorem is left as an exercise.


Luckily for us, every OWF has a Hard Core Predicate (HCP). There isn't,
however, a single HCP h that works for all OWFs f . Suppose such an h exists;
then take any OWF f and let f ′ (x) = h(x)||f (x). If f ′ (x) = b||y for some x,
it will always be that h(x) = b, so h is trivially predictable from f ′ (x).

Theorem 9 (Goldreich-Levin (GL), 1983). Let f : {0, 1}n → {0, 1}n be a
OWF, and define g(x, r) = f (x)||r for r ← ${0, 1}n . Then g is a OWF, and

h(x, r) = <x, r> = Σⁿᵢ₌₁ xi · ri mod 2

is hard core for g. 
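The GL predicate itself is just an inner product over GF(2); a Python sketch:

```python
def inner_product_bit(x, r):
    # h(x, r) = <x, r> = sum_i x_i * r_i mod 2, over bit lists
    assert len(x) == len(r)
    return sum(xi & ri for xi, ri in zip(x, r)) % 2

assert inner_product_bit([1, 0, 1], [1, 1, 1]) == 0   # 1 + 0 + 1 = 0 mod 2
assert inner_product_bit([1, 0, 1], [1, 1, 0]) == 1
```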

Definition 10 (One Way Permutation). We say that f : {0, 1}n → {0, 1}n is
a One Way Permutation (OWP) if f is a OWF, ∀x. |x| = |f (x)|, and for all
distinct x, x′ , f (x) ≠ f (x′ ). 

Corollary 1. Let f be a OWP, and consider g(x, r) = f (x)||r from the GL
theorem (a permutation over {0, 1}2n ). Then G(s) = (g(s), h(s)), for s = (x, r),
is a PRG with stretch 1. 

Proof of corollary 1.

G(U2n ) = (g(x, r), h(x, r))
        = (f (x)||r, <x, r>)
        ≈c (f (x)||r, b)   (GL, with b ← ${0, 1})
        ≈c U2n+1 .

UNCLEAR
Assume instead f is a OWF that is 1-to-1 (injective). Consider
X = g^m (x) = (g(x1 ), h(x1 ), . . . , g(xm ), h(xm )), where x1 , . . . , xm ∈ {0, 1}n
(i.e., x ∈ {0, 1}nm ). You can construct a PRG from a OWF as shown by
H.I.L.L.

Fact 1. X is indistinguishable from some X ′ such that H∞ (X ′ ) ≥ k = n · m + m,
since f is injective. 

Now G(s, x) = (s, Ext(s, g^m (x))) where Ext : {0, 1}d × {0, 1}nm →
{0, 1}l , and l = nm + 1. This works for m = ω(log(n)). You get extraction
error ε ≈ 2−m .

3 Symmetric Key Encryption Schemes


Definition 11 (Symmetric Key Encryption scheme). We call Π = (Gen, Enc, Dec)
a Symmetric Key Encryption (SKE) scheme.
• Gen outputs a key k ← $K;
• Enc(k, m) = c for some m ∈ M, c ∈ C;
• Dec(k, c) = m.
As usual, we want Π to be correct. 
We want to introduce computational security: a bounded adversary cannot
gain information on the message given the cyphertext.
Definition 12 (One time security). A SKE scheme Π = (Gen, Enc, Dec) has
one time computational security if for all Probabilistic Polynomial Time (PPT)
adversaries A ∃ a negligible function ε such that

|Pr [G^one-time_Π,A (λ, 0) = 1] − Pr [G^one-time_Π,A (λ, 1) = 1]| ≤ ε(λ)

where G^one-time_Π,A (λ, b) is the following “game” (or experiment):

1. pick k ← $K;
2. A outputs two messages (m0 , m1 ) ← A(1λ ), where m0 , m1 ∈ M and
|m0 | = |m1 |;
3. c = Enc(k, mb ), with b input of the experiment;
4. output b′ ← A(1λ , c), i.e., the adversary tries to guess which message
was encrypted. 
Let’s look at a construction.

Construction 3 (SKE scheme from PRG). Let G : {0, 1}n → {0, 1}l be a
Pseudo Random Generator (PRG). Set K = {0, 1}n , and M = C = {0, 1}l .
Define Enc(k, m) = G(k) ⊕ m and Dec(k, c) = G(k) ⊕ c. 
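A Python sketch of Construction 3, where a SHA-256-based generator stands in for the PRG G (an assumption for illustration only, not a proven PRG):

```python
import hashlib

def G(k: bytes, l: int) -> bytes:
    # Stand-in for a PRG G : {0,1}^n -> {0,1}^l (SHA-256 in counter mode).
    out = b""
    ctr = 0
    while len(out) < l:
        out += hashlib.sha256(k + ctr.to_bytes(4, "big")).digest()
        ctr += 1
    return out[:l]

def enc(k: bytes, m: bytes) -> bytes:
    # Enc(k, m) = G(k) xor m
    return bytes(p ^ b for p, b in zip(G(k, len(m)), m))

dec = enc  # Dec(k, c) = G(k) xor c: the very same operation

k = b"a 16-byte secret"
m = b"one-time only!"
assert dec(k, enc(k, m)) == m
```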
Theorem 10. If G is a PRG, the SKE in Construction 3 is one-time compu-
tationally secure. 
Proof of theorem 10. Consider the following experiments:

• H0 (λ, b) is like G^one-time_Π,A (λ, b):
  1. k ← ${0, 1}n ;
  2. (m0 , m1 ) ← A(1λ );
  3. c = G(k) ⊕ mb ;
  4. b′ ← A(1λ , c).

• H1 (λ, b) replaces G with something truly random:
  1. (m0 , m1 ) ← A(1λ );
  2. r ← ${0, 1}l ;
  3. c = r ⊕ mb , basically like One Time Pad (OTP);
  4. b′ ← A(1λ , c).

• H2 (λ) is just randomness:
  1. (m0 , m1 ) ← A(1λ );
  2. c ← ${0, 1}l ;
  3. b′ ← A(1λ , c).

First, we show that H0 (λ, b) ≈c H1 (λ, b), for b ∈ {0, 1}. Fix some value for
b, and assume there exists a PPT distinguisher D between H0 (λ, b) and H1 (λ, b):
we can then construct a distinguisher D′ for the PRG.
D′ , on input z, which is either G(k) for some k ← ${0, 1}n or directly
z ← ${0, 1}l , does the following:

• get (m0 , m1 ) ← D(1λ );
• feed z ⊕ mb to D;
• output the result of D.

Now, we show that H1 (λ, b) ≈c H2 (λ), for b ∈ {0, 1}. By perfect secrecy
of OTP, (m0 ⊕ r) and (m1 ⊕ r) are both uniform over {0, 1}l , so
H1 (λ, 0) ≡ H2 (λ) ≡ H1 (λ, 1).
Corollary 2. One-time computationally secure SKE schemes are in Minicrypt.

This scheme is not secure if the adversary knows a pair (m1 , c1 ) and we
reuse the key: take any other m, c, then c ⊕ c1 = m ⊕ m1 , and you can find m.
This motivates security against Chosen Plaintext Attacks (CPAs), which we
will define shortly and achieve using a Pseudo Random Function (PRF).

3.1 Chosen Plaintext Attacks and Pseudo Random
Functions
Definition 13 (Pseudo Random Function). Let F = {Fk : {0, 1}n → {0, 1}l }
be a family of functions, for k ∈ {0, 1}λ . Consider the following two experiments:

• G^real_F,A (λ), defined as:

  1. k ← ${0, 1}λ ;
  2. b′ ← A^Fk(·) (1λ ), where A can query an oracle for values of Fk (·),
     without knowing k.

• G^rand_F,A (λ), defined as:

  1. R ← $R(n → l), i.e., a function R is chosen at random from all
     functions from {0, 1}n to {0, 1}l ;
  2. b′ ← A^R(·) (1λ ), where A can query an oracle for values of R(·).

The family F of functions is a PRF family if for all PPT adversaries A ∃ a
negligible function ε such that

|Pr [G^real_F,A (λ) = 1] − Pr [G^rand_F,A (λ) = 1]| ≤ ε(λ). 

To introduce CPAs and CPA-secure SKE schemes, we first introduce the
CPA game. As usual, a SKE scheme is a tuple Π = (Gen, Enc, Dec).

Definition 14 (CPA-secure SKE scheme). Let Π = (Gen, Enc, Dec) be a SKE
scheme, and consider the game G^cpa_Π,A (λ, b), defined as:

1. k ← ${0, 1}λ ;
2. (m0 , m1 ) ← A^Enc(k,·) (1λ ). A is given access to an oracle for Enc(k, ·), so
   she knows some pairs (m, c), with c = Enc(k, m);
3. c ← Enc(k, mb );
4. b′ ← A^Enc(k,·) (1λ , c).

Π is CPA-secure if for all PPT adversaries A

G^cpa_Π,A (λ, 0) ≈c G^cpa_Π,A (λ, 1). 

Deterministic schemes cannot achieve this: when Enc is deterministic the
adversary can query the oracle on m0 , compare the challenge c with Enc(k, m0 ),
and output 0 if and only if c = Enc(k, m0 ).
Let's construct a CPA-secure SKE scheme using PRFs.
Construction 4 (SKE scheme from PRF). Let F be a PRF family; we define
the following SKE scheme Π = (Gen, Enc, Dec):

• Gen takes k ← ${0, 1}λ ;
• Enc(k, m) = (r, Fk (r) ⊕ m), with r ← ${0, 1}n . Note that, since Fk :
  {0, 1}n → {0, 1}l , we have that M = {0, 1}l and C = {0, 1}n+l ;
• Dec(k, (c1 , c2 )) = Fk (c1 ) ⊕ c2 . 
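A Python sketch of Construction 4; a keyed SHA-256 hash stands in for the PRF Fk (an assumption for illustration). Note how encrypting the same message twice yields different cyphertexts, which is exactly what defeats the deterministic-scheme attack above.

```python
import hashlib, secrets

def F(k: bytes, x: bytes) -> bytes:
    # Stand-in for a PRF F_k: a keyed hash (illustration only)
    return hashlib.sha256(k + x).digest()

def enc(k: bytes, m: bytes):
    r = secrets.token_bytes(16)          # fresh randomness on every call
    pad = F(k, r)[:len(m)]
    return r, bytes(p ^ b for p, b in zip(pad, m))   # (r, F_k(r) xor m)

def dec(k: bytes, c) -> bytes:
    c1, c2 = c
    pad = F(k, c1)[:len(c2)]
    return bytes(p ^ b for p, b in zip(pad, c2))     # F_k(c1) xor c2

k = secrets.token_bytes(16)
m = b"same msg, new c"
c_a, c_b = enc(k, m), enc(k, m)
assert dec(k, c_a) == dec(k, c_b) == m
assert c_a != c_b   # randomized: two encryptions of m differ
```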
Our construction is both one time computationally secure, and secure against
CPAs.
Theorem 11. If F is a PRF, Construction 4 is CPA-secure. 
Proof of theorem 11. First, we define the experiment H0 (λ, b) ≡ G^cpa_Π,A (λ, b)
as follows:

1. k ← ${0, 1}λ ;
2. (m0 , m1 ) ← A^Enc(k,·) (1λ );
3. c∗ ← (r∗ , Fk (r∗ ) ⊕ mb ), where r∗ ← ${0, 1}n ;
4. output b′ ← A^Enc(k,·) (1λ , c∗ ).

Note that in the CPA game the adversary has access to an encryption oracle
using the chosen key.
Now, for the first hybrid H1 (λ, b), we sample a random function R in place
of Fk :

1. R ← $R(n → l);
2. (m0 , m1 ) ← A^Enc(R,·) (1λ ), where now Enc(R, m) = (r, R(r) ⊕ m) for some
   random r;
3. c∗ ← (r∗ , R(r∗ ) ⊕ mb ), where r∗ ← ${0, 1}n ;
4. output b′ ← A^Enc(R,·) (1λ , c∗ ).
Our first claim is that H0 (λ, b) ≈c H1 (λ, b) for b ∈ {0, 1}. As usual, we
assume that there exists an adversary A which can distinguish the experiments,
i.e., that can distinguish the oracles, and use A to create an adversary APRF
that breaks the PRF.
APRF has access to some oracle O(·), which is one of two possibilities: either
O(x) = Fk (x) for k ← ${0, 1}λ , or O(x) = R(x) for R ← $R(n → l).
A gives APRF some message m. APRF picks r ← ${0, 1}n , and queries O(r) to
get z ∈ {0, 1}l . Then it gives (r, z ⊕ m) to A. This is repeated as long as A
asks for encryption queries.
Then A gives APRF the pair (m0 , m1 ), and APRF repeats the same procedure
using m0 as a message (to distinguish H0 (λ, 0) from H1 (λ, 0)) to compute c∗ .
A, after receiving c∗ , asks some more encryption queries, and then outputs b′ .
If b′ = 1, APRF says R(·), otherwise it says Fk (·).

Figure 4: First two levels of a GGM tree.
Now for the third experiment, H2 (λ), which uses Enc(m) = (r1 , r2 ) with
(r1 , r2 ) ← ${0, 1}n+l , i.e., it outputs just randomness.
Our second claim is that H1 (λ, b) ≈c H2 (λ) for b ∈ {0, 1}. To see this, note
that H1 and H2 are identical as long as collisions don't happen when choosing
the rs. It suffices for us to show that collisions happen with small probability.
Call Ei,j the event “random ri collides with random rj ”. The event of a
collision is thus E = ∨i,j Ei,j , and its probability can be upper bounded as
follows:

Pr [E] ≤ Σi,j Pr [Ei,j ] = Σi,j Coll (Un ) ≤ (q choose 2) · 2⁻ⁿ ≤ q² / 2ⁿ

where q is the (polynomial) number of queries that the adversary makes, and
Coll (Un ) = 2⁻ⁿ is the probability that two independent uniform samples
collide.
Theorem 12 (GGM, 1982). PRFs can be constructed from PRGs. 
Corollary 3. PRFs are in Minicrypt. 
Construction 5 (GGM tree). Assume we have a length doubling PRG G :
{0, 1}λ → {0, 1}2λ . We write G(x) = (G0 (x), G1 (x)) to distinguish the first λ
bits of the output from the second λ bits.
Now, to build the PRF we construct a Goldreich-Goldwasser-Micali (GGM)
tree (fig. 4) starting with a key k ∈ {0, 1}λ . On input x = (x1 , . . . , xn ) ∈
{0, 1}n , with n being the height of the tree, the PRF picks a path in the tree:

Fk (x) = Gxn (. . . Gx1 (k) . . . ) .
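A Python sketch of the GGM tree, again with a SHA-256 stand-in for the length-doubling PRG (illustration only):

```python
import hashlib

LAMBDA = 16

def G(s: bytes):
    # Stand-in for a length-doubling PRG: G(s) = (G0(s), G1(s)),
    # each half LAMBDA bytes (SHA-256 based, illustration only).
    d = hashlib.sha256(s).digest()
    return d[:LAMBDA], d[LAMBDA:2 * LAMBDA]

def ggm_prf(k: bytes, x: str) -> bytes:
    # The input bits x = x1...xn pick a root-to-leaf path:
    # F_k(x) = G_{xn}(... G_{x1}(k) ...)
    s = k
    for bit in x:
        s = G(s)[int(bit)]   # left child on 0, right child on 1
    return s

k = b"0123456789abcdef"
assert ggm_prf(k, "0101") == ggm_prf(k, "0101")   # a function of (k, x)
assert ggm_prf(k, "0101") != ggm_prf(k, "0100")   # distinct leaves differ
```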

Lemma 3. Let G : {0, 1}λ → {0, 1}2λ be a PRG. Then for all t(λ) = poly(λ)
we have that

(G(k1 ), . . . , G(kt )) ≈c (U2λ , . . . , U2λ )   (t copies of U2λ ). 

Proof of lemma 3. We define t hybrids, where Hi (λ) is defined as

Hi (λ) = (G(k1 ), . . . , G(kt−i ), U2λ , . . . , U2λ )   (i copies of U2λ )

thus H0 (λ) = (G(k1 ), . . . , G(kt )) and Ht (λ) = (U2λ , . . . , U2λ ). To prove that
H0 (λ) ≈c Ht (λ), we show that for any i it holds that Hi (λ) ≈c Hi+1 (λ). This
relies on the fact that G(kt−i ) ≈c U2λ : assume that there exists a distinguisher
D for Hi (λ) and Hi+1 (λ); we then break the PRG.
We build D′ , which takes as input some z coming from either G(kt−i ) or U2λ .
D′ takes k1 , . . . , kt−(i+1) ← ${0, 1}λ , feeds (G(k1 ), . . . , G(kt−(i+1) ), z, U2λ , . . . , U2λ )
to D, and returns whatever it returns.

Proof that Construction 5 is a PRF. We'll define a series of hybrids to show
that the GGM tree is a PRF. H0 (λ) ≡ our GGM tree.
Hi (λ), for i ∈ [1, n], replaces the tree up to depth i with a truly random
function. Hi (λ) initially has two empty arrays T1 and T2 . On input x ∈ {0, 1}n ,
it checks if the prefix x̄ = (x1 , . . . , xi ) is in T1 . If not, Hi (λ) picks kx̄ ← ${0, 1}λ
and adds x̄ to T1 and kx̄ to T2 . If x̄ ∈ T1 , it just retrieves kx̄ from T2 . Then
Hi (λ) outputs the following:

Gxn (Gxn−1 (. . . Gxi+1 (kx̄ ) . . . )) .

If i = 0 we have that x̄ = ⊥ and that k⊥ ← ${0, 1}λ , so H0 (λ) ≡ the GGM
tree. On the other hand, if i = n, each input x leads to an independent random
output, so Hn (λ) is just a truly random function.
Assume now that there exists an adversary A capable of telling apart Hi (λ)
from Hi+1 (λ): then we could break the PRG.

3.2 Computationally Secure MACs


A computationally secure Message Authentication Code (MAC) should be hard
to forge, even if you see polynomially many authenticated messages.

Definition 15 (UFCMA MAC). Let Π = (Gen, Mac, Vrfy) be a MAC, and
consider the game G^ufcma_Π,A (λ) defined as:

1. pick k ← ${0, 1}λ ;
2. (m∗ , φ∗ ) ← A^Mac(k,·) (1λ ), where the adversary can query an
   authentication oracle;
3. output 1 if Vrfy(k, m∗ , φ∗ ) = 1 and m∗ is “fresh”, i.e., it was never
   queried to Mac.

We say that Π is Unforgeable against Chosen Message Attacks (UFCMA) if
for all PPT adversaries A it holds that

Pr [G^ufcma_Π,A (λ) = 1] ≤ negl (λ) . 

As a matter of fact, any PRF is a MAC.


Construction 6 (MAC from PRF). Let F = {Fk : {0, 1}n → {0, 1}l }k∈{0,1}λ
be a PRF family, and let K = {0, 1}λ . Define Mac(k, m) = Fk (m). 
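A Python sketch of Construction 6, with a keyed-hash stand-in for the PRF (illustration only):

```python
import hashlib

def F(k: bytes, m: bytes) -> bytes:
    # Stand-in for a PRF F_k (keyed hash, illustration only)
    return hashlib.sha256(k + m).digest()

mac = F                              # Mac(k, m) = F_k(m)

def vrfy(k: bytes, m: bytes, phi: bytes) -> bool:
    return mac(k, m) == phi          # deterministic Mac: recompute and compare

k = b"0123456789abcdef"
phi = mac(k, b"pay Bob 10")
assert vrfy(k, b"pay Bob 10", phi)
assert not vrfy(k, b"pay Bob 99", phi)   # a forgery must predict F_k
```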
Theorem 13. If F is a PRF, the MAC shown in Construction 6 is UFCMA.

Proof of theorem 13. Consider the game H(λ) where:

1. R ← $R(n → l) is a random function;
2. (m∗ , φ∗ ) ← A^R(·) (1λ );
3. output 1 if R(m∗ ) = φ∗ and m∗ is “fresh”.

Our first claim is that H(λ) ≈c G^ufcma_Π,A (λ) for all PPT adversaries A.
Assume not; then ∃ a distinguisher D for H(λ) and G^ufcma_Π,A (λ), and we can
construct a distinguisher D′ for the PRF. D′ has access to an oracle O(·) which
is either Fk (·) for some random k, or R(·) for some random function R, and D′
simulates the game for D using O(·).
Our second claim is that Pr [H(λ) = 1] ≤ 2⁻ˡ , since R(m∗ ) is uniform over
{0, 1}l and the only way to predict it is by guessing.
Up to this point we have shown that One Way Function (OWF), PRG,
PRF and MAC are all in Minicrypt.

3.3 Domain Extension


We look now at domain extension. Suppose we have a PRF family F = {Fk :
{0, 1}n → {0, 1}l } as above, and we have a message m = m1 || . . . ||mt , with
mi ∈ {0, 1}n , and with t being the number of blocks of m.
Let's look at some constructions that won't work.

1. φ = Mac(k, m1 ⊕ · · · ⊕ mt ) does not work, since with m = m1 ||m2 we could
   swap the bits in position i of m1 and m2 and get the same authenticator;
2. φi = Mac(k, mi ) and φ = φ1 || . . . ||φt does not work, since we could
   rearrange the blocks of the authenticator and of the message and still get
   a valid pair, e.g., take m′ = m1 ||m3 ||m2 and φ′ = φ1 ||φ3 ||φ2 ;
3. φi = Mac(k, <i>||mi ) and φ = φ1 || . . . ||φt , where <i> is the binary
   representation of the integer i, does not work, since we could cut and paste
   blocks from different message/authenticator pairs to get a fresh
   valid pair.
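The first broken idea can be demonstrated in a few lines (here with whole blocks swapped, which also preserves the xor); the underlying PRF is again a keyed-hash stand-in (illustration only):

```python
import hashlib

def F(k: bytes, x: bytes) -> bytes:
    # Stand-in for the underlying PRF/MAC (illustration only)
    return hashlib.sha256(k + x).digest()

def bad_mac(k: bytes, blocks):
    # Broken idea 1: authenticate only the xor of the blocks
    acc = bytes(len(blocks[0]))
    for blk in blocks:
        acc = bytes(x ^ y for x, y in zip(acc, blk))
    return F(k, acc)

k = b"secret key bytes"
m1 = [b"blockA", b"blockB"]
m2 = [b"blockB", b"blockA"]   # a different message ...
assert m1 != m2
assert bad_mac(k, m1) == bad_mac(k, m2)   # ... with the SAME authenticator
```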

Now, for the real one. To extend the domain of a PRF F we need a function
h : {0, 1}nt → {0, 1}n for which it is hard to find a collision, i.e., two distinct
messages m′ , m′′ such that h(m′ ) = h(m′′ ). To do this, we introduce Collision
Resistant Hash Functions (CRHs), an object found in Cryptomania. We add
a key to the hash function.

Definition 16 (Universal Hash Function). The family of functions H = {hs :
{0, 1}N → {0, 1}n }s∈{0,1}λ is universal (as in Universal Hash Function (UHF))
if for all distinct x, x′ we have that

Pr_{s←${0,1}λ} [hs (x) = hs (x′ )] ≤ ε.

Two cases are possible, depending on what ε is:

• if ε = 2⁻ⁿ , then H is said to be Perfect Universal (PU);
• if ε = negl (λ), with λ = |s|, then H is said to be Almost Universal (AU). 
With UHF we can extend the domain of a PRF.

Theorem 14. If F is a PRF and H is an AU family of hash functions, then
F(H), defined as

F(H) = {Fk (hs (·)) : {0, 1}N → {0, 1}l }k′ =(k,s)

is a PRF. 

Proof of theorem 14. Consider the following games:

• G^real_F(H),A (λ), defined as:

  1. k ← ${0, 1}λ , s ← ${0, 1}λ ;
  2. b′ ← A^Fk(hs(·)) (1λ ).

• G^rand_$,A (λ), defined as:

  1. R ← $R(N → l);
  2. b′ ← A^R(·) (1λ ).

Consider also the hybrid H$,H,A (λ):

1. s ← ${0, 1}λ ;
2. R ← $R(n → l);
3. b′ ← A^R(hs(·)) (1λ ).

The first claim, i.e., that G^{real}_{F(H),A}(λ) ≈c H_{$,H,A}(λ), is left as exercise.
The second claim is that H_{$,H,A}(λ) ≈c G^{rand}_{$,A}(λ). Assume the adversary asks
q distinct queries. Consider the event E = "∃(xi, xj) such that hs(xi) = hs(xj)
with i ≠ j", and with i, j ≤ q. If E doesn't happen, H_{$,H,A}(λ) and G^{rand}_{$,A}(λ)
are the same. This event is the same as getting first all the inputs that the
adversary wants to try, and then sampling s ← ${0, 1}^λ, so its probability can
be bounded as

Pr[E] = Pr[∃i, j : hs(xi) = hs(xj)] ≤ (q choose 2) · ε ≤ q² ε.

Now, let’s look at a construction.

Construction 7 (UHF with Galois field). Let F be a finite field, such as the
Galois field GF(2^n). In the Galois field, a bit string represents the coefficients
of a polynomial of degree n − 1. Addition is coefficient-wise (i.e., bitwise XOR),
while for multiplication an irreducible polynomial p(x) of degree n is fixed, and
the operation is carried out modulo p(x).
We pick s ∈ F, and x = x1|| . . . ||xt with xi ∈ F for all i. The hash function
is defined as

hs(x) = hs(x1|| . . . ||xt) = Σ_{i=1}^t xi · s^{i−1} = Qx(s).

A collision is two distinct x, x' such that

Qx(s) = Q_{x'}(s) ⇐⇒ Q_{x−x'}(s) = 0 ⇐⇒ Σ_{i=1}^t (xi − x'i) · s^{i−1} = 0.

This means that s is a root of Q_{x−x'}, a nonzero polynomial of degree at most
t − 1, which has at most t − 1 roots. So the probability of a collision is:

Pr[hs(x) = hs(x')] ≤ (t − 1)/|F| = (t − 1)/2^n.   (negligible)
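As a concrete sketch of this construction — using the small field GF(2^8) with the AES reduction polynomial as an illustrative choice of F, not a parameter from the notes — the hash is a polynomial evaluation:

```python
def gf_mul(a: int, b: int) -> int:
    """Multiplication in GF(2^8), reducing modulo p(x) = x^8+x^4+x^3+x+1 (0x11B)."""
    r = 0
    while b:
        if b & 1:
            r ^= a          # addition of polynomials over GF(2) is XOR
        a <<= 1
        if a & 0x100:       # degree reached 8: reduce modulo p(x)
            a ^= 0x11B
        b >>= 1
    return r

def h(s: int, blocks: list) -> int:
    """h_s(x1||...||xt) = sum_i x_i * s^(i-1) = Q_x(s), evaluated in GF(2^8)."""
    acc, power = 0, 1       # power holds s^(i-1)
    for x in blocks:
        acc ^= gf_mul(x, power)
        power = gf_mul(power, s)
    return acc
```

For a one-block message hs is the identity, and two messages collide only when s is a root of Q_{x−x'}, exactly as in the analysis above.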


We now look at a computational variant of hash functions. We want hash
functions for which collisions are difficult to find for any PPT adversary A,
i.e., families of functions such that

Pr_s[hs(x) = hs(x') : (x, x') ← A(1^λ)] ≤ ε.

We want to use some PRF family F to define H. Enter Cypher Block
Chain (CBC)-MAC (fig. 5). CBC-MAC is defined as

hs(x1, . . . , xt) = Fs(xt ⊕ Fs(xt−1 ⊕ . . . ⊕ Fs(x1)) . . . ).

Figure 5: Construction of the CBC-MAC.
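A minimal sketch of the chaining in the figure, with a truncated SHA-256 standing in for the PRF Fk (an illustrative stand-in, not a proof-backed PRF):

```python
import hashlib

BLOCK = 16  # block size n in bytes; illustrative

def prf(key: bytes, block: bytes) -> bytes:
    # stand-in for F_k : {0,1}^n -> {0,1}^n
    return hashlib.sha256(key + block).digest()[:BLOCK]

def cbc_mac(key: bytes, msg: bytes) -> bytes:
    """y_i = F_k(x_i XOR y_{i-1}) with y_0 = 0^n; only the last block is output."""
    assert len(msg) % BLOCK == 0
    state = bytes(BLOCK)
    for i in range(0, len(msg), BLOCK):
        block = msg[i:i + BLOCK]
        state = prf(key, bytes(a ^ b for a, b in zip(state, block)))
    return state
```

Note that, unlike CBC-mode encryption, no intermediate chaining value is output.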

Theorem 15. CBC-MAC is a computationally secure AU hash function if F
is a PRF.

There's also the encrypted CBC-MAC, i.e., Fk(CBC-MAC(s, x)).

Theorem 16. CBC-MAC is a PRF.

Theorem 17. CBC-MAC is AU.
CBC-MAC is insecure with variable length messages.
XOR-MAC is defined as follows: take a random value (nonce) η, and output
(η, Fk(η) ⊕ hs(x)). Note that here the input is shrunk to the output size of
the PRF, while before we shrunk to the input size of the PRF.
Suppose the adversary is given a pair (m, (η, v)) from a XOR-MAC. She
could try to output (m', (η, v ⊕ a)), guessing an a such that hs(m) ⊕ a =
hs(m'), so that this is still a valid tag. If a is hard to find (as it should be), we
have "almost xor universality". Almost universality is the special case where
a = 0.
From a PRF family we can get a MAC for Fixed Input Length (FIL) mes-
sages (a FIL-MAC). Table 1 compares the constructions we have seen earlier
for FIL-MACs and Variable Input Length (VIL)-MACs.
CBC-MAC cannot be extended securely to VIL. As an example, take
CBC-MAC(m1 || . . . ||mt ) = Fk (mt ⊕ . . . ⊕ Fk (m1 ) . . . ). (1)
Suppose we have (m1, φ1), with φ1 = Fk(m1). We could then take m2 = m1||(φ1 ⊕ m1),
and φ1 would be a valid authenticator for m2:

CBC-MAC(m2) = Fk((φ1 ⊕ m1) ⊕ Fk(m1)) = Fk(φ1 ⊕ m1 ⊕ φ1) = Fk(m1) = φ1.
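The forgery can be run concretely (self-contained sketch; a truncated SHA-256 stands in for Fk):

```python
import hashlib

BLOCK = 16

def prf(key: bytes, block: bytes) -> bytes:
    # illustrative stand-in for the PRF F_k
    return hashlib.sha256(key + block).digest()[:BLOCK]

def cbc_mac(key: bytes, msg: bytes) -> bytes:
    state = bytes(BLOCK)
    for i in range(0, len(msg), BLOCK):
        state = prf(key, bytes(a ^ b for a, b in zip(state, msg[i:i + BLOCK])))
    return state

key = b"some-secret-key!"
m1 = b"A" * BLOCK
phi1 = cbc_mac(key, m1)                              # tag of the one-block message
m2 = m1 + bytes(a ^ b for a, b in zip(phi1, m1))     # m2 = m1 || (phi1 XOR m1)
assert cbc_mac(key, m2) == phi1                      # phi1 is also a valid tag for m2
```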

            FIL-PRF   FIL-MAC   VIL-MAC
F(H)           X         X
CBC-MAC                  X
E-CBC-MAC      X         X         X
XOR-MAC                  X         X

Table 1: Constructions for FIL-PRF, FIL-MAC, and VIL-MAC.

3.4 Chosen Cyphertext Attacks and Authenticated Encryption
In Chosen Cyphertext Attack (CCA) security, the adversary is allowed to
choose the cyphertext, and to see its decryption.
Definition 17 (CCA-security). Let Π = (Gen, Enc, Dec) be a SKE scheme,
and consider the following game G^{cca}_{Π,A}(λ, b):

1. k ← ${0, 1}^λ;
2. (m0, m1) ← A^{Enc(k,·),Dec(k,·)}(1^λ);
3. c ← Enc(k, mb);
4. b' ← A^{Enc(k,·),Dec*(k,·)}(1^λ, c), where Dec* refuses to decrypt c.

Π is CCA-secure if for all PPT adversaries A we have that
G^{cca}_{Π,A}(λ, 0) ≈c G^{cca}_{Π,A}(λ, 1).

CCA-security implies a property called non-malleability: an adversary that
modifies a few bits of the cyphertext cannot obtain the encryption of a related
message.
Claim 1. The SKE scheme consisting of Enc(k, m) = (r, Fk (r) ⊕ m) (for
random r) and Dec(k, (c1 , c2 )) = Fk (c1 ) ⊕ c2 = m is not CCA-secure. 

Proof of Claim 1.

1. Output m0 = 0^n and m1 = 1^n;
2. get c = (c1, c2) = (r, Fk(r) ⊕ mb);
3. let c'2 = c2 ⊕ 10^{n−1};
4. query Dec(k, (c1, c'2)) (which is different from c);
5. if you get 10^{n−1}, output 0, else output 1.

This always works:

Dec(k, (c1, c'2)) = Fk(c1) ⊕ c'2 = Fk(c1) ⊕ Fk(r) ⊕ mb ⊕ 10^{n−1}
= mb ⊕ 10^{n−1} = 10^{n−1} ⇐⇒ mb = 0^n.

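The attack in the proof can be executed against a toy instantiation of this scheme (a truncated SHA-256 stands in for Fk; all names are illustrative):

```python
import hashlib
import os

N = 16

def prf(k: bytes, x: bytes) -> bytes:
    return hashlib.sha256(k + x).digest()[:N]

def enc(k: bytes, m: bytes):
    r = os.urandom(N)
    return r, bytes(a ^ b for a, b in zip(prf(k, r), m))

def dec(k: bytes, c):
    c1, c2 = c
    return bytes(a ^ b for a, b in zip(prf(k, c1), c2))

k = os.urandom(N)
m0, m1 = bytes(N), b"\xff" * N          # the challenge messages 0^n and 1^n
c1, c2 = enc(k, m0)                     # challenge cyphertext, here for b = 0
flip = b"\x80" + bytes(N - 1)           # the mask 10^{n-1}
mauled = bytes(a ^ b for a, b in zip(c2, flip))
guess = 0 if dec(k, (c1, mauled)) == flip else 1
assert guess == 0                       # one decryption query reveals b
```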
We'll now build Authenticated Encryption. It provides both CPA security and
Integrity of cyphertexts (INT), i.e., it's hard for the adversary to generate a
valid cyphertext not obtained from the encryption oracle.
As an exercise, formalise the fact that CPA and INT imply CCA, i.e.,
reduce CCA to CPA and INT.
Any CPA-secure encryption scheme, together with a strongly secure MAC, gives
you CCA security. This is called an encrypted MAC.
Construction 8 (Encrypted MAC). Consider the encryption scheme Π1 =
(Gen, Enc, Dec), with key space K1, and the MAC Π2 = (Gen, Mac, Vrfy),
with key space K2. We build the encryption scheme Π' = (Gen', Enc', Dec'),
with key space K' = K1 × K2, as follows:

1. Enc'(k', m) = (c, φ) = c', with c ← Enc(k1, m) and φ ← Mac(k2, c);
2. Dec'(k', (c, φ)) checks if Mac(k2, c) = φ: if not, it outputs ⊥, else it
outputs Dec(k1, c).

Theorem 18. If Π1 is CPA-secure and Π2 is strongly UFCMA-secure, then
Π' is CPA-secure and INT.

Recall that strong UFCMA security means the adversary outputs a pair (m*, φ*)
that was never returned by the oracle; in particular, given a valid (m, φ),
outputting (m, φ') with a fresh valid tag φ' ≠ φ already counts as a forgery.

Proof of theorem 18. We need to show that Π' is both CPA and INT.

1. The proof for CPA is just a reduction to the CPA-security of Π1. Assume
A' breaks CPA-security of Π'; we construct A1 which breaks CPA of Π1.
A1 picks a MAC key k2 itself; then, for each message m queried by A', it gets
the encryption c from its own oracle for Π1, computes φ = Mac(k2, c), and
returns (c, φ) to A'. When it receives m0, m1 from A', it forwards them,
receives the challenge c* for Π1, computes its Mac, and gives the result to A'.
Then it outputs whatever A' outputs.
2. For INT, assume A'' breaks INT of Π'; we build A2 which breaks strong
UFCMA of Π2.
When A'' asks for the encryption of m, A2 picks a key k for Π1 itself, computes
c = Enc(k, m), gives c to its Mac(·) oracle to get φ, and returns (c, φ) to A''.
Later on, A'' outputs some (c*, φ*): if A'' breaks INT of Π', this is a valid
pair never returned by the oracle, so A2 has broken Π2.

The Encrypt-then-MAC approach to CCA thus works. Other approaches don't
work in general:

Figure 6: The Feistel permutation.

• Encrypt-and-MAC, which is what Secure Shell (SSH) does, i.e., c ←
Enc(kc, m) and φ ← Mac(ks, m);
• MAC-then-Encrypt, which is what Transport Layer Security (TLS) does,
i.e., φ ← Mac(ks, m) and c ← Enc(kc, m||φ).

3.5 Pseudo Random Permutations
Block cyphers are Pseudo Random Permutations (PRPs), a function family
that is a PRF but also a permutation. A PRP family cannot be distinguished
from a true permutation. For a strong PRP family, the adversary has access
to the inverse of the permutation. From PRFs we can build both PRPs and
strong PRPs.
Definition 18 (Feistel function). Let F : {0, 1}^n → {0, 1}^n; then the Feistel
function (fig. 6) is defined as

ψF(x, y) = (y, x ⊕ F(y)) = (x', y'),

where (x, y), (x', y') ∈ {0, 1}^{2n}.

It's easy to see that the Feistel function is invertible:

ψF^{−1}(x', y') = (F(x') ⊕ y', x') = (F(y) ⊕ F(y) ⊕ x, y) = (x, y).


We can "cascade" several Feistel functions, to create a Feistel network. Take
F1, . . . , Fl, and define the following function:

ψ_{F[l]}(x, y) = ψ_{Fl}(ψ_{F_{l−1}}(. . . ψ_{F1}(x, y) . . . ))
and its inverse:

ψ_{F[l]}^{−1}(x', y') = ψ_{F1}^{−1}(. . . ψ_{F_{l−1}}^{−1}(ψ_{Fl}^{−1}(x', y')) . . . ).
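The network and its inverse can be sketched as follows (the round function is an arbitrary keyed function — here a truncated SHA-256, purely illustrative; note it need not be invertible):

```python
import hashlib

N = 8  # half-block size in bytes; illustrative

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def round_fn(key: bytes, y: bytes) -> bytes:
    # F need not be invertible for the network to be a permutation
    return hashlib.sha256(key + y).digest()[:N]

def feistel(keys, x: bytes, y: bytes):
    """psi_{F[l]}: one round (x, y) -> (y, x XOR F(y)) per key."""
    for k in keys:
        x, y = y, xor(x, round_fn(k, y))
    return x, y

def feistel_inv(keys, x: bytes, y: bytes):
    """Inverse network: undo the rounds in reverse key order."""
    for k in reversed(keys):
        x, y = xor(round_fn(k, x), y), x
    return x, y
```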

Theorem 19 (Luby-Rackoff). If F = {Fk : {0, 1}^n → {0, 1}^n}_{k∈{0,1}^λ} is a
PRF, then ψ_{F[3]} is a PRP and ψ_{F[4]} is a strong PRP.

We will prove only that ψ_{F[3]} is a PRP.

Theorem 20. If F is a PRF, ψ_{F[3]} is a PRP.

Recall that ψ_{F[3]} = ψ_{F_{k3}}(ψ_{F_{k2}}(ψ_{F_{k1}}(x, y))).
Proof of theorem 20. Consider these experiments:
ψFk ψFk ψFk
H0 : (x, y) →1 (x1 , y1 ) →2 (x2 , y2 ) →3 (x3 , y3 )
ψR ψR ψR
H1 : (x, y) →1 (x1 , y1 ) →2 (x2 , y2 ) →3 (x3 , y3 ).
H2 is just like H1, but we stop if y1 "collides", i.e., if two distinct queries
(x, y) ≠ (x', y') map to the same y1 = x ⊕ R1(y).
In H3 we replace y2 = R2 (y1 ) ⊕ x1 with y2 ← ${0, 1}n , and set x2 = y1 .
In H4 we replace y3 = R3 (y2 ) ⊕ x2 with y3 ← ${0, 1}n , and set x3 = y2 .
In H5 we directly map (x, y) → (x3, y3) through a uniformly random
permutation over {0, 1}^{2n}.
First claim: H0 ≈c H1 , since we replaced the PRF with truly random
functions. The proof is the usual proof by hybrids.
Second claim: H1 ≈c H2. Consider the event E of a collision, defined as
"∃(x, y) ≠ (x', y') such that x ⊕ R1(y) = x' ⊕ R1(y'), i.e., x ⊕ x' = R1(y) ⊕
R1(y')". If y = y', then x ≠ x' and there can't be a collision, so we can assume
that y ≠ y'. For each fixed pair the probability of a collision is:

Pr[R1(y) ⊕ R1(y') = x ⊕ x'] ≤ 2^{−n},

and a union bound over the at most (q choose 2) pairs of queries keeps Pr[E] negligible.
Third claim: H2 ≈c H3. In H2, y1 is just a stream of independent values.
Since y1 never collides and R2 is random, all R2(y1) are uniform and
independent, and so is y2 = R2(y1) ⊕ x1.
Fourth claim: H3 ≈c H4. Now y3 is random, and x3 = y2. It suffices that
y2 never collides, since R3(y2) is a sequence of one time pad keys:

Pr[y2 collides] ≤ (q choose 2) · 2^{−n}.
Fifth claim: H4 ≈c H5 . Just notice that H4 is simply a random function
from 2n bits to 2n bits, and H5 is a random permutation. They can only be
distinguished if there is a collision in H4 , which again has negligible probability.

A strong PRP P leads to CCA security with the following construction:

• Enc(k, m) = P(k, m||r), with m, r ∈ {0, 1}^n;
• Dec(k, c) = P^{−1}(k, c), taking the first n bits.

Domain Extension for Pseudo Random Permutations
Assume we have a message m = m1 || . . . ||mt , with mi ∈ {0, 1}n , and you are
given a PRP from n bits to n bits.

• One natural thing to do is Electronic Code Book (ECB). Let c =
c1|| . . . ||ct with ci = Pk(mi). This is very fast and parallelisable, but
insecure (since it's deterministic).
• Cypher Feed Back (CFB): sample c0 at random, then let ci = Pk (ci−1 ) ⊕
mi . This is CPA secure and parallelisable for decryption (but not for
encryption).
• CBC: sample c0 at random, then let ci = Pk (ci−1 ⊕ mi ). This is also
CPA secure. Note that if you output all ci this does not work as a MAC
(recall that CBC-MAC outputs just ct ).
• Counter (CTR)-mode: sample r ← $[N ], with N = 2n . Let ci = Pk (r +
i−1 mod N )⊕mi , and output c0 ||c1 || . . . ||ct with c0 = r. For decryption,
compute mi = Pk (c0 + i − 1 mod N ) ⊕ ci .
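CTR-mode can be sketched as follows (a truncated SHA-256 stands in for the PRF Fk; parameters illustrative — note that for CTR the forward direction of a PRF suffices, no inverse is needed):

```python
import hashlib
import os

N = 16  # block size in bytes

def prf(k: bytes, counter: int) -> bytes:
    # stand-in for F_k over n-bit blocks
    return hashlib.sha256(k + counter.to_bytes(N, "big")).digest()[:N]

def ctr_encrypt(k: bytes, msg: bytes) -> bytes:
    r = int.from_bytes(os.urandom(N), "big")
    out = r.to_bytes(N, "big")                      # c0 = r
    for i in range(0, len(msg), N):
        pad = prf(k, (r + i // N) % (1 << (8 * N)))
        out += bytes(a ^ b for a, b in zip(pad, msg[i:i + N]))
    return out

def ctr_decrypt(k: bytes, c: bytes) -> bytes:
    r = int.from_bytes(c[:N], "big")
    msg = b""
    for i in range(0, len(c) - N, N):
        pad = prf(k, (r + i // N) % (1 << (8 * N)))
        msg += bytes(a ^ b for a, b in zip(pad, c[N + i:N + i + N]))
    return msg
```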
Theorem 21. If F is a PRF family, then CTR-mode is CPA-secure for VIL.
Proof of theorem 21. Consider the game H0 (λ, b), defined as:

1. k ← ${0, 1}λ ;
2. the adversary asks encryption queries, for messages m = m1 || . . . ||mt :
• c0 = r ← $[N ] (with N = 2n );
• ci = Fk (r + i − 1 mod N ) ⊕ mi ;
• output c0 ||c1 || . . . ||ct .
3. challenge: the adversary gives (m?0 , m?1 ), and take m?b = m?b1 || . . . m?bt
(both messages have the same length);
4. compute c? from m?b and output to adversary;
5. adversary asks more encryption queries, then it outputs b0 .

We want to show that H0(λ, 0) ≈c H0(λ, 1).
First, we define the hybrid H1(λ, b), which samples R ← $R(n → n) and
uses R(·) in place of Fk(·). The proof that H0(λ, b) ≈c H1(λ, b) is a usual
proof by reduction to the security of the PRF. Assume there exists a distinguisher
D for H0 and H1; we build a distinguisher D' for the PRF, which has access
to some oracle that is either Fk(·) or R(·) for random R. D' plays the game
defined above with D using the oracle, and outputs whatever D outputs, thus
distinguishing the PRF.

Now consider the hybrid H2(λ), which outputs a uniformly random challenge
cyphertext c* ← ${0, 1}^{n(t+1)}. We claim that H1(λ, b) ≈s H2(λ) for
b ∈ {0, 1}, i.e., they are statistically indistinguishable: we accept unbounded
adversaries that can only ask a polynomial number of queries.
Consider c* and the values R(r*), R(r* + 1), . . . , R(r* + t* − 1). Then consider
the i-th encryption query, ci, and the values R(ri), R(ri + 1), . . . , R(ri +
ti − 1). If the counters ri + j never overlap the counters r* + j', this is basically
OTP. So, calling E the event "∃j1, j2 such that ri + j1 = r* + j2 for some i", it
suffices to show that Pr[E] ≤ negl(λ).
Assume there are q queries, fix a maximum length t of the queried
messages, and let Ei, for i ∈ [q], be the event of an overlap happening at
query i. Clearly Pr[E] ≤ Σ_{i=1}^q Pr[Ei]. For Ei to happen, we must have that
r* − t + 1 ≤ ri ≤ r* + t − 1 (this is not tight!). So the size of the interval in
which ri must fall is 2t − 1. Then Pr[Ei] = (2t − 1)/2^n, and Pr[E] ≤ q(2t − 1)/2^n,
which is negligible.

3.6 Collision Resistant Hash Functions
When we saw domain extensions for PRFs, we showed that F(H) is a PRF if
F is a PRF and H is AU. This doesn’t work for MACs: to extend the domain
of a MAC we need CRHs.
A hash function is used to compress l bits into n bits, with n << l. A
family of hash functions is defined as H = {Hs : {0, 1}l → {0, 1}n }s∈{0,1}λ .
AU means that it’s hard to find a collision for an adversary that does not
know s. Collision resistance means that it’s hard to find a collision even if the
adversary knows s.

Definition 19 (Collision Resistant Hash Function). Let H = {hs : {0, 1}^l →
{0, 1}^n}_{s∈{0,1}^λ} be a family of functions, and consider the game G^{CR}_{H,A}(λ),
defined as:

1. s ← ${0, 1}^λ;
2. (x, x') ← A(1^λ, s);
3. output 1 if x ≠ x' ∧ hs(x) = hs(x').

H is a CRH if for all PPT adversaries A there is a negligible function ε(λ) such
that

Pr[G^{CR}_{H,A}(λ) = 1] ≤ ε(λ).

We’ll show how to construct CRHs in two steps.

1. Start with a compression function hs : {0, 1}2n → {0, 1}n and obtain a
domain extension, i.e., a function h0s : {0, 1}? → {0, 1}n .
2. Build a Collision Resistant (CR) compression function.

Figure 7: The Merkle-Damgard Collision Resistant Hash Function.

There are two famous constructions.

Construction 9 (Merkle-Damgard). It goes from tn bits to n bits, for fixed
t. Let m = x1|| . . . ||xt, and define intermediate outputs yi for i ∈ [0, t]. We
define y0 = 0^n, and yi = hs(xi||yi−1) (fig. 7). The output is y = yt.
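The chaining can be sketched as follows (a truncated SHA-256 plays the role of the keyed compression function hs; sizes illustrative):

```python
import hashlib

N = 16  # chaining-value size n in bytes

def compress(s: bytes, block2n: bytes) -> bytes:
    # stand-in for h_s : {0,1}^{2n} -> {0,1}^n
    return hashlib.sha256(s + block2n).digest()[:N]

def merkle_damgard(s: bytes, msg: bytes) -> bytes:
    assert len(msg) % N == 0
    y = bytes(N)                            # y0 = 0^n
    for i in range(0, len(msg), N):
        y = compress(s, msg[i:i + N] + y)   # y_i = h_s(x_i || y_{i-1})
    return y
```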

Construction 10 (Merkle tree). It goes from 2^d n bits to n bits, with d being
the height of the tree: the 2^d blocks are the leaves, each internal node is hs
applied to the concatenation of its two children, and the output is the root.

With the Merkle tree one could give a partially hashed version of a file.
Theorem 22. Merkle-Damgard (MD) (Construction 9) is a CRH H' = {h's :
{0, 1}^{tn} → {0, 1}^n} if the function H = {hs : {0, 1}^{2n} → {0, 1}^n} is CR.

Proof of theorem 22. Let A' be an adversary capable of outputting x ≠ x' such
that h's(x) = h's(x'). Then we can construct A breaking H (which is CR).
If h's(x) = h's(x') but x ≠ x' (with x = x1|| . . . ||xt and x' = x'1|| . . . ||x't),
there must be some j ∈ [1, t] such that (xj, yj−1) ≠ (x'j, y'j−1), while the
chaining values from position j onward are equal. Then hs(xj||yj−1) =
hs(x'j||y'j−1), which is a collision.

Construction 11 (Strengthened Merkle-Damgard). Strengthened MD is
defined as

h's(x1|| . . . ||xt) = hs(⟨t⟩||hs(xt|| . . . hs(x1||0^n) . . . )).

With the strengthened MD the encoded messages are suffix-free, which gives
us a VIL MD.

Theorem 23. Strengthened MD (Construction 11) is CR. 

Proof of theorem 23. Assume there is a collision x = x1|| . . . ||xt ≠ x' =
x'1|| . . . ||x'_{t'}, i.e., such that h's(x) = h's(x'). Two cases are possible:

1. if t = t', we have already shown this in the proof of theorem 22;
2. if t ≠ t', the collision is on hs(⟨t⟩||yt) and hs(⟨t'⟩||y'_{t'}), since ⟨t⟩ ≠ ⟨t'⟩.

Construction 12 (Davies-Meyer). H_E(s, x) = Es(x) ⊕ x, where E_{(·)}(·) is a
block cypher.

For this construction to be CR, it should be hard to find (x, s) 6= (x0 , s0 )
such that Es (x) ⊕ x = Es0 (x0 ) ⊕ x0 .
There's something strange going on: a block cypher is a PRP, and OWFs
give us PRPs, but we know that it's impossible to get CRHs from OWFs (in a
black-box way). Furthermore, this is actually insecure for concrete PRPs. Let E
be a PRP, and define E' such that E'_{0^n}(0^n) = 0^n and E'_{1^n}(1^n) = 1^n,
and otherwise E'_s(x) = Es(x). This is still a PRP, but E'_{0^n}(0^n) ⊕ 0^n =
E'_{1^n}(1^n) ⊕ 1^n.
We must assume that we have no bad keys. This is called the Ideal Cypher
Model (ICM).
In the ICM, one has a truly random permutation E_{(·)}(·) for every key. When
the adversary asks for (s1, x1), choose random y1 ← ${0, 1}^n and set E_{s1}(x1) =
y1. On a later query (s1, xi), pick yi ← ${0, 1}^n \ ∪_{j=1}^{i−1}{yj}, and set
E_{s1}(xi) = yi.
Theorem 24. Davies-Meyer is CR in ICM. 
Proof of theorem 24. As usual, we consider a series of experiments. Let
H0(λ) ≡ G^{CR}_{DM,A}(λ).
A makes two types of queries: forward queries, of the form (s, x), to get Es(x),
and backward queries, of the form (s, y), to get Es^{−1}(y). Imagine keeping,
for every query, the value z = x ⊕ y.
Now, we define H1(λ): after picking y for some (s, x), we don't remove y
from the range (and the same for backward queries).
The first claim is that, assuming A asks q queries,

|Pr[H0(λ) = 1] − Pr[H1(λ) = 1]| ≤ (q choose 2) · 2^{−n}.

To verify this, note that H0 and H1 are distinguishable if and only if a collision
happens in Es(·) or in Es^{−1}(·).
H2 (λ): check for collisions in z = x ⊕ y. If not all z values are distinct,
abort.
The second claim is that

|Pr[H1(λ) = 1] − Pr[H2(λ) = 1]| ≤ (q choose 2) · 2^{−n}.

Verification is as usual.
H3(λ): when A outputs a collision (x, s), (x', s'), check that (x, s, y, z) and
(x', s', y', z') are in the table. If not, define the entries.
Third claim:

|Pr[H2(λ) = 1] − Pr[H3(λ) = 1]| ≤ 4q/2^n.

y can collide with q values, with probability q/2^n. The same goes for z, y', z', so
we get 4q/2^n.
H4(λ): there are no collisions on z. Pr[H4(λ) = 1] = 0, since the table entries
(x, s, y, z), (x', s', y', z') of a valid collision would need z = z', while in H4 all
z values are distinct.
Finally H3 ≈c H4, and we're done.

3.7 Claw-Free Permutations
We introduce a new assumption: Claw-Free Permutations (CFPs). A claw is
a pair of functions (f1 , f0 ) and two values (x1 , x0 ) such that f1 (x1 ) = f0 (x0 ),
with f1 6= f0 .

Definition 20 (Claw-Free Permutation). A CFP is a tuple Π = (Gen, f0, f1)
where pk ← $Gen(1^λ) defines a domain Xpk, and f0(pk, ·) and f1(pk, ·) are
permutations over Xpk.
Consider the game G^{CF}_{Π,A}(λ), defined as:

1. pk ← $Gen(1^λ);
2. (x0, x1) ← A(pk);
3. output 1 ⇐⇒ f0(pk, x0) = f1(pk, x1).

Π is claw-free if for all PPT adversaries A there is a negligible ε such that

Pr[G^{CF}_{Π,A}(λ) = 1] ≤ ε(λ).

We can now define a CR compression function.

Construction 13 (CR Compression Function from CFP). Let Π = (Gen, f0, f1),
assume Xpk = {0, 1}^n, and let m ∈ {0, 1}^l. Define Hpk as:

Hpk(x||m) = f_{ml}(f_{m_{l−1}}(· · · f_{m1}(x) · · · )).

Theorem 25. If Π is a CFP, Hpk (Construction 13) is collision resistant. 

Proof of theorem 25. Assume H is not CR; from a collision we extract a claw
for Π. So suppose we find (x, m) ≠ (x', m') such that Hpk(x||m) = Hpk(x'||m').
Two cases are possible:

1. m = m': the sequence of permutations is the same, and since every f_b is a
permutation, x and x' must also be the same:

x = f_{m1}^{−1}(· · · f_{ml}^{−1}(y) · · · ) = f_{m'1}^{−1}(· · · f_{m'l}^{−1}(y) · · · ) = x',

contradicting (x, m) ≠ (x', m').

2. m ≠ m': then ∃i such that mi ≠ m'i; take i to be the largest such index, so that

m = ml . . . mi . . . m1,
m' = m'l . . . m'i . . . m'1

are equal from index i + 1 to l.

Assume, without loss of generality, that mi = 0 and m'i = 1. Since from
i + 1 onwards we apply the same sequence of permutations, the collision must
be at position i. Consider

y' = f_{m_{i+1}}^{−1}(. . . f_{ml}^{−1}(y) . . . ),
x0 = f_{m_{i−1}}(. . . f_{m1}(x) . . . ),
x1 = f_{m'_{i−1}}(. . . f_{m'1}(x') . . . ).

x0 and x1 are a claw, since f0(x0) = y' = f1(x1).

3.8 Random Oracle Model
The Random Oracle (RO) is an ideal hash function: the only way to compute
the hash is to query the oracle. Every party is given oracle access to RO(·) ← $R(l → m).
With a RO we can build:

• CRH: define H^{RO}(x) = RO(x);
• PRG: G^{RO}(x) = RO(x||0)||RO(x||1);
• PRF: F^{RO}(k, x) = RO(k||x) = Mac^{RO}(k, x).

The last one is not secure with a real hash function (e.g., Merkle-Damgard
hashes allow length-extension attacks on RO(k||x)).

4 Number Theory
We use modular arithmetic, with modulus some n, in (Zn , +, ·).
(Zn, +) is a group, i.e., it is closed, commutative, associative, has an identity
(or null) element, and every element has an inverse. (Zn, ·) is not a group, as
not every element has an inverse.

Lemma 4. If gcd(a, n) > 1, a is not invertible in (Zn , ·). 

Proof of lemma 4. Assume a is invertible, i.e., there is b such that a · b ≡ 1
mod n; then a · b = q · n + 1 for some q. But then gcd(a, n) divides
a · b − q · n = 1, which contradicts gcd(a, n) > 1.

Operations in Zn (at least multiplication and addition) are efficient. The
inverse of an element can be computed with the extended Euclidean algorithm.

Lemma 5. Let a, b such that a ≥ b > 0, then

gcd(a, b) = gcd(b, a mod b). 

Proof of lemma 5. It suffices to show that a common divisor of a and b also


divides a mod b, and that a common divisor of b and a mod b also divides a.
Write a = q · b + a mod b.

31
For the first implication, a common divisor of a and b divides a − q · b = a
mod b.
For the second implication, a common divisor of b and a mod b divides
b · q + a mod b = a.
Theorem 26. Given a, b we can compute gcd(a, b) in polynomial time in
max{kak , kbk}. Also, we can find u, v such that a · u + b · v = gcd(a, b). 
Proof of theorem 26. We can use lemma 5 recursively:
a = b · q1 + r1 (0 ≤ r1 < b)
gcd(a, b) = gcd(b, r1 ) (r1 = a mod b)
b = r1 · q2 + r2 (0 ≤ r2 < r1 )
gcd(b, r1 ) = gcd(r1 , r2 ) (r2 = b mod r1 )
...
ri = ri−1 · qi+1 + ri+1 .
Repeat until rt+1 = 0, we then have that gcd(a, b) = rt .
We prove that r_{i+2} ≤ ri/2 for all 0 ≤ i ≤ t − 2. Clearly, r_{i+1} < ri. If
r_{i+1} ≤ ri/2, then it's trivial, since r_{i+2} < r_{i+1}. If r_{i+1} > ri/2, then
q_{i+2} = 1 and

r_{i+2} = ri mod r_{i+1} = ri − r_{i+1} < ri − ri/2 = ri/2.
Take a ∈ Zn such that gcd(a, n) = 1, then there are u, v such that a · u +
n · v = gcd(a, n) = 1. But then a · u ≡ 1 mod n, so u is the inverse of a.
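The recursion and the Bezout coefficients can be coded directly (a standard extended Euclid, included as a sketch):

```python
def ext_gcd(a: int, b: int):
    """Return (g, u, v) with a*u + b*v = g = gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, u, v = ext_gcd(b, a % b)        # gcd(a, b) = gcd(b, a mod b)
    return g, v, u - (a // b) * v      # back-substitute the Bezout coefficients

def inverse_mod(a: int, n: int) -> int:
    """Inverse of a in Z*_n, when gcd(a, n) = 1."""
    g, u, _ = ext_gcd(a, n)
    assert g == 1, "a is not invertible modulo n"
    return u % n
```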
Another operation in Zn is modular exponentiation, i.e., a^b mod n. The
square and multiply algorithm is polynomial.
Write b in binary, as bt b_{t−1} . . . b0, so b ∈ {0, 1}^{t+1}. Since b = Σ_{i=0}^t bi · 2^i,
we can write

a^b = a^{Σ_{i=0}^t bi 2^i} = Π_{i=0}^t a^{2^i bi} = Π_{i=0}^t (a^{2^i})^{bi} = a^{b0} · (a²)^{b1} · (a⁴)^{b2} · . . . · (a^{2^t})^{bt}.

We have at most t multiplications and t squarings, so modular exponentiation happens
in polynomial time.
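The square-and-multiply algorithm, scanning the bits of b from the least significant one (it computes the same result as Python's built-in three-argument pow):

```python
def mod_exp(a: int, b: int, n: int) -> int:
    """Compute a^b mod n with t squarings and at most t multiplications."""
    result = 1
    a %= n
    while b:
        if b & 1:                # bit b_i set: multiply in the current power a^(2^i)
            result = (result * a) % n
        a = (a * a) % n          # square: a^(2^i) -> a^(2^(i+1))
        b >>= 1
    return result
```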
Theorem 27 (Prime Number Theorem).

Π(x) = "# of primes ≤ x" ≥ x/(3 log2(x)),

which is roughly x/log(x).

Theorem 27 means that

Pr[x is prime : x ← $[2^λ − 1]] ≥ ((2^λ − 1)/(3 log2(2^λ − 1)))/(2^λ − 1) ≈ 1/(3λ).

Furthermore, primality testing can be done efficiently.

Theorem 28 (Miller-Rabin '80s, Agrawal-Kayal-Saxena '02). You can test in
polynomial time if x is prime.

We can sample x ← $[2^λ − 1], test primality, and eventually resample.

Pr[no output after t samples] ≤ (1 − 1/(3λ))^t ≤ (1/e)^λ ∈ negl(λ)

for t = 3λ², since (1 − 1/x)^x ≤ 1/e.
Now, for some new number theoretic assumptions.
Now, for some new number theoretic assumptions.

Assumption. Integer factorisation of the product of two λ-bit primes is a
One Way Function (OWF). The National Institute of Standards and Technology
(NIST) recommendation is a modulus of about 2048 bits.
We need cyclic groups.

Theorem 29 (Lagrange). If H is a subgroup of G, then |H| divides |G|.

We define the order of a to be the minimum i > 0 such that a^i ≡ 1 mod n.
Consider Zn; note that |Zn| = n. We define

Z*n = {a ∈ Zn : a is invertible} = {a ∈ Zn : gcd(a, n) = 1},

with |Z*n| = ϕ(n). If n is a prime p, then Z*p = {1, 2, . . . , p − 1}, and |Z*p| =
ϕ(p) = p − 1.

Corollary 4. For all a ∈ Z*n, we have

1. a^b = a^{b mod ϕ(n)} mod n;
2. a^{ϕ(n)} = 1 mod n;
3. a^{p−1} = 1 mod p (for prime p).

(Z*p, ·) is cyclic, i.e., ∃ a generator g such that

Z*p = ⟨g⟩ = {g^0, g^1, . . . , g^{p−2}}.
Can we find a generator for Z*p? We can do it if we know a factorisation of
p − 1 = Π_{i=1}^t pi^{αi}. Since pi ≥ 2, we have that 2^t ≤ p − 1 < 2^{λ+1}, thus
t ≤ λ. The algorithm requires t steps.
By theorem 29 we know that the order of any y ∈ Z*p divides p − 1. Thus
y is not a generator ⇐⇒ ∃ 1 ≤ i ≤ t such that

y^{(p−1)/pi} ≡ 1 mod p.

If this is the case, y is a generator of a proper subgroup of Z*p. This leads us to the
next computational assumption.

Assumption. The Discrete Log (DL) problem, that is finding x given g^x ∈ G,
is hard, i.e., for all Probabilistic Polynomial Time (PPT) A,

Pr[y = g^{x'} : (G, g, q) ← GroupGen(1^λ); x ← $Zq; y = g^x; x' ← A(y, G, g, q, 1^λ)] ≤ ε(λ).

4.1 Elliptic Curves

G is a group of points over some cubic curve modulo p, i.e., the pairs (x, y)
satisfying

y² = x³ + ax² + bx + c mod p.

We can define an operation + that makes G a group.
(G, g, q) ← GroupGen(1λ ), and take as an example G = Z?p with · operation
(multiplication). q = p − 1. Take y ∈ Z?p , we have that y = g x for x ∈ Zp−1 .
In the general case, we have y ∈ G, and that y = g x for x ∈ Zq . q is the
order of the group, while g is a generator for G. Since y = g x we have that
logg (y) = x in G.
Z?p is a special case, since modular exponentiation is a One Way Permutation
(OWP).

Definition 21 (Computational Diffie-Hellman). Intuitively, given (g, g^x, g^y),
it's hard to find g^{xy}. Computational Diffie-Hellman (CDH) holds in G if for all
PPT A we have

Pr[z = g^{xy} : params = (G, g, q); x, y ← $Zq; z ← A(params, g^x, g^y)] ≤ negl(λ).

Observation 1. CDH =⇒ DL. 

Proof of Observation 1. Assume DL does not hold: then we can compute
y = logg(g^y), and it's easy to find g^{xy} = (g^x)^y, so CDH does not hold either.

Whether DL =⇒ CDH or not is not known.

Definition 22 (Decisional Diffie-Hellman). Intuitively,

(g, g^x, g^y, g^{xy}) ≈c (g, g^x, g^y, g^z),

where the left-hand side is a DDH tuple and the right-hand side is not. Decisional
Diffie-Hellman (DDH) holds in G if for all PPT A

|Pr_{x,y←$Zq}[A(1^λ, (G, g, q), g, g^x, g^y, g^{xy}) = 1]
− Pr_{x,y,z←$Zq}[A(1^λ, (G, g, q), g, g^x, g^y, g^z) = 1]| ≤ negl(λ).

Observation 2. DDH =⇒ CDH is true. CDH =⇒ DDH is in general not
true, as we show next.

Proposition 1. DDH does not hold in Z?p . 

Proof of proposition 1. Consider the set

QRp = {y ∈ Z*p : y = x², x ∈ Z*p} = {y = g^z : z even}.

We can test if y ∈ QRp by checking if y^{(p−1)/2} ≡ 1 mod p, because if y = g^{2z'}
then y^{(p−1)/2} ≡ (g^{p−1})^{z'} ≡ 1 mod p. If not, then it must be that y = g^{2z'+1},
and as such

y^{(p−1)/2} ≡ (g^{p−1})^{z'} · g^{(p−1)/2} ≡ −1 mod p.

Clearly,

g^{xy} ∈ QRp ⇐⇒ g^x ∈ QRp ∨ g^y ∈ QRp.

Thus g^{xy} ∈ QRp with probability 3/4, but g^z ∈ QRp only with probability 1/2. So
we have a distinguisher.
A_DDH(g, g^x, g^y, g^z) outputs 1 if g^x or g^y are in QRp but g^z ∉ QRp, and 0
otherwise. A_DDH distinguishes with advantage 3/8.
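The bias behind this distinguisher can be checked exhaustively in a toy group (p = 23 with generator 5 — purely illustrative parameters):

```python
p, g = 23, 5                     # 5 generates Z*_23
assert all(pow(g, k, p) != 1 for k in range(1, p - 1))

def is_qr(y: int) -> bool:
    """Euler's criterion: y is a quadratic residue iff y^((p-1)/2) = 1 mod p."""
    return pow(y, (p - 1) // 2, p) == 1

# g^(xy) is a square iff x is even or y is even: 3/4 of all exponent pairs,
# while a uniform g^z is a square only half of the time.
for x in range(p - 1):
    for y in range(p - 1):
        assert is_qr(pow(g, x * y, p)) == (x % 2 == 0 or y % 2 == 0)
```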

This result can be improved by returning 0 if and only if the quadratic
residuosity of g^z is consistent with that of g^x and g^y, obtaining a distinguisher
with advantage 1/2.
If we take G = QRp, with q = (p − 1)/2, and where both q and p are prime
(q is then a Sophie Germain prime), maybe DDH holds there. This is not always true.

4.2 Diffie-Hellman Key Exchange
Take (G, g, q) ← GroupGen(1λ ). Alice samples x ← $Zq , and sends X = g x
to Bob. Bob samples y ← $Zq , and sends Y = g y to Alice. Alice computes
KA = Y x , Bob computes KB = X y . It’s easy to see that KA = KB .
CDH implies that an adversary can’t compute the key. DDH implies that
the keys are indistinguishable from random.
This is not secure against man-in-the-middle attacks. Authentication would be
needed, e.g., via an initial shared key.
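The exchange in one short script (toy parameters in Z*_23; a real instantiation uses a large group):

```python
import random

p, g = 23, 5        # illustrative group parameters
q = p - 1

x = random.randrange(1, q)       # Alice's secret
X = pow(g, x, p)                 # Alice sends X = g^x
y = random.randrange(1, q)       # Bob's secret
Y = pow(g, y, p)                 # Bob sends Y = g^y

K_A = pow(Y, x, p)               # Alice computes Y^x = g^(xy)
K_B = pow(X, y, p)               # Bob computes   X^y = g^(xy)
assert K_A == K_B                # both parties derive the same key
```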

4.3 Pseudo Random Generators
(G, g, q) ← GroupGen(1^λ), x, y ← $Zq. From DDH we know that

G_{g,q}(x, y) = (g^x, g^y, g^{xy}) ≈c U_{G³},

so we have a Pseudo Random Generator (PRG) G_{g,q} : Zq² → G³.

Construction 14 (PRG from DDH). We can construct directly from DDH a
PRG with polynomial stretch,

G_{g,q} : Zq^{l+1} → G^{2l+1},

defined as

G_{g,q}(x, y1, . . . , yl) = (g^x, g^{y1}, g^{xy1}, . . . , g^{yl}, g^{xyl}).
Theorem 30. If DDH holds, Construction 14 is a PRG. 
Proof of theorem 30. Using the standard hybrid argument, since DDH is ε-
hard, we would use l hybrids, obtaining that the above PRG is (ε · l)-secure.
This proof is not tight, and we want to make a tight one.
We make a direct reduction to DDH. Let ADDH be an adversary who is
given (g, g x , g y , g z ) with z = β + xy, where β is either 0 (so that is a DDH
tuple) or β ← $Zq (non-DDH tuple).
We want to prove that
(g x , g y1 , g xy1 , . . . , g yl , g xyl ) ≈c (g x , g y , g z , . . . )
by generating exactly the first if the tuple is DDH, and exactly the other one
if the tuple is not DDH.
For j ∈ [l], pick uj, vj ← $Zq, and define g^{yj} = (g^y)^{uj} · g^{vj} and
g^{zj} = (g^z)^{uj} · (g^x)^{vj}. A_DDH returns the same as

A_PRG(g^x, g^{y1}, g^{z1}, . . . , g^{yl}, g^{zl}).
This trick generates many tuples in the correct way: just look at the exponents.

yj = uj · y + vj
zj = uj · z + vj · x = uj · β + uj · x · y + vj · x = uj · β + x · (uj · y + vj) = uj · β + x · yj,

which is uniformly random over Zq if β ← $Zq, or equal to x · yj (with yj ← $Zq)
if β = 0. So breaking the PRG is equivalent to breaking DDH, since A_DDH
can perfectly simulate the PRG, query A_PRG, and break DDH.

4.4 Pseudo Random Functions (PRFs): Naor-Reingold

Construction 15 (Naor-Reingold). (G, g, q) ← GroupGen(1^λ).

F_NR = {F_{q,g,a} : {0, 1}^n → G}_{a∈Zq^{n+1}},

where a = (a0, a1, . . . , an) is a vector of exponents, and

F_{q,g,a}(x1, . . . , xn) = (g^{a0})^{Π_{i=1}^n ai^{xi}}.

DDH =⇒ F_NR is a PRF. The proof can be sketched in a similar way as
the proof for Goldreich-Goldwasser-Micali (GGM). Note that this is like an
optimised version of GGM.
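An evaluation sketch — the toy group Z*_23 with generator 5 is only for illustration; a real instantiation works in a prime-order group so that exponent arithmetic is over a field:

```python
import random

p, g = 23, 5
q = p - 1       # order of g in this toy group

def nr_keygen(n: int):
    return [random.randrange(1, q) for _ in range(n + 1)]   # a = (a0, ..., an)

def nr_eval(a, x_bits):
    """F_{q,g,a}(x) = (g^{a0})^{prod_{i : x_i = 1} a_i}, computed in the exponent."""
    exp = a[0]
    for ai, xi in zip(a[1:], x_bits):
        if xi:
            exp = (exp * ai) % q
    return pow(g, exp, p)
```

Only one exponentiation is needed per evaluation, which is what makes this an optimised GGM.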

4.5 Number Theoretic Claw-Free Permutation
Construction 16 (Number Theoretic CFP).

1. params = (G, g, q) ← GroupGen(1^λ);
2. y ← $G, pk = (params, y);
3. define f = (f0, f1), with:

f0(pk, x0) = g^{x0},
f1(pk, x1) = y · g^{x1}.

For instance, G = Z*p, q = p − 1, Xpk = Zq. Let (x0, x1) be a claw, i.e.,
f0(pk, x0) = f1(pk, x1). Then

g^{x0} = y · g^{x1} =⇒ y = g^{x0−x1} =⇒ x0 − x1 = logg(y).
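A toy check of this reduction — we plant logg(y) = 13 ourselves, which is exactly what an adversary cannot do under DL (parameters illustrative):

```python
p, g = 23, 5         # toy parameters; g has order q = 22
q = p - 1
y = pow(g, 13, p)    # in the real game y is random and log_g(y) is unknown

def f0(x: int) -> int:
    return pow(g, x, p)

def f1(x: int) -> int:
    return (y * pow(g, x, p)) % p

x1 = 4
x0 = (13 + x1) % q              # a claw, constructible only because we know the log
assert f0(x0) == f1(x1)
assert (x0 - x1) % q == 13      # the claw reveals log_g(y)
```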

Theorem 31. Under DL, Construction 16 is a Claw-Free Permutation (CFP).

Proof of theorem 31. It's a simple reduction: a claw (x0, x1) immediately
yields logg(y) = x0 − x1, as computed above.

Plugging this CFP into Construction 13 gives the hash

Hpk(x||m) = f_{ml}(. . . f_{m1}(x) . . . ).

If l = 1, Hpk(x||b) = fb(pk, x) = y^b · g^x. A collision means (x, b) ≠ (x', b') such
that y^b g^x = y^{b'} g^{x'}. Clearly

b ≠ b' =⇒ y^{b−b'} = g^{x'−x} =⇒ y = g^{(x'−x)(b−b')^{−1}}.

Take G = QRp, p = 2q + 1, with p and q primes: then (b − b')^{−1} exists mod q.
More compactly, this hash reads

Hpk(x1, x2) = y^{x1} · g^{x2},    Hpk : Zq² → G.

5 Public Key Encryption

There's also a thing called Hybrid Encryption: encrypt a key k with pk, then use
k for a symmetric scheme like the Advanced Encryption Standard (AES).

Definition 23 (Public Key Encryption Scheme). A Public Key Encryption
(PKE) scheme is a tuple Π = (Gen, Enc, Dec), where

1. (pk, sk) ← $Gen(1^λ);
2. Enc(pk, m) = c;
3. Dec(sk, c) = m.

We require correctness from a PKE scheme:

Pr[Dec(sk, Enc(pk, m)) = m : (pk, sk) ← $Gen(1^λ)] = 1 − negl(λ).

Now we define games for Chosen Plaintext Attack (CPA), Chosen Cyphertext
Attack (CCA) 1, and CCA-2 security for PKE.
Definition 24 (CPA for PKE). G^{cpa}_{A,Π}(λ, b):

1. (pk, sk) ← $Gen(1^λ);
2. (m0, m1) ← A(1^λ, pk);
3. c ← Enc(pk, mb);
4. b' ← A(pk, c).
Definition 25 (CCA-1 for PKE). G^{cca1}_{A,Π}(λ, b):

1. (pk, sk) ← $Gen(1^λ);
2. (m0, m1) ← A^{Dec(sk,·)}(1^λ, pk);
3. c ← Enc(pk, mb);
4. b' ← A(pk, c).
Definition 26 (CCA-2 for PKE). G^{cca2}_{A,Π}(λ, b):

1. (pk, sk) ← $Gen(1^λ);
2. (m0, m1) ← A^{Dec(sk,·)}(1^λ, pk);
3. c ← Enc(pk, mb);
4. b' ← A^{Dec*(sk,·)}(pk, c), where Dec*(sk, c') outputs ⊥ if c' = c, and
Dec(sk, c') otherwise.
For r ∈ {CPA, CCA1, CCA2} we say that Π is r-secure if for all Probabilistic
Polynomial Time (PPT) A

G^r_{Π,A}(λ, 0) ≈c G^r_{Π,A}(λ, 1).

In CCA-1 no decryption queries can be made after receiving the cyphertext,


while in CCA-2 decryption queries can be made for cyphertexts different from
the challenge cyphertext.

CCA-2 =⇒ CCA-1 =⇒ CPA.

The other way around is not true in general.

ElGamal PKE
Construction 17 (ElGamal). Consider Π = (KGen, Enc, Dec), where:

• KGen(1λ): (G, g, q) ← GroupGen(1λ), x ←$ Zq , h = g^x , pk = ⟨G, q, g, h⟩, sk = x;

• Enc(pk, m): pick r ←$ Zq , output c = (c1 , c2 ) = (g^r , h^r · m);

• Dec(sk, (c1 , c2 )) = c2 / c1^x .

Note that

c2 / c1^x = (h^r · m) / (g^r)^x = m. 
Theorem 32. Under Decisional Diffie-Hellman (DDH), ElGamal (Construc-
tion 17) is CPA-secure. 
Proof of theorem 32. Let H0 (λ, b) = G^{CPA}_{Π,A}(λ, b). Then, let

H1 (λ, b) :  r, z ←$ Zq ; c = (g^r , g^z · mb ); b' ← A(h, (c1 , c2 ))

i.e., we multiply the message by g^z for random z.


First we claim that ∀b ∈ {0, 1}, H0 (λ, b) ≈c H1 (λ, b). To verify it, note that
(g^x , g^r , g^{xr} ) ≈c (g^x , g^r , g^z ) by DDH. Assume A_{H0,H1} distinguishes the two
hybrids; then A_DDH , on input (g^x , g^r , g^z ), with z either random or equal to
x · r, can forward h = g^x to A_{H0,H1} , receive the two messages m0 and m1 ,
return her (g^r , g^z · mb ), and output whatever A_{H0,H1} outputs.
The second claim is that H1 (λ, 0) ≈c H1 (λ, 1). Since g^z is uniform, g^z · mb
is uniform as well, and hides b.

A few properties of ElGamal:

1. homomorphic: given pk, (c1 , c2 ), (c1' , c2' ), with (c1 , c2 ) = Enc(pk, m) and
(c1' , c2' ) = Enc(pk, m'), note that

(c1 · c1' , c2 · c2' ) = (g^{r+r'} , h^{r+r'} · (m · m')) = Enc(pk, m · m', r + r');

2. blindness: given c = (c1 , c2 ) = (g^r , h^r · m), compute m' · c2 = h^r · (m · m').
You have that (c1 , m' · c2 ) = Enc(pk, m · m', r);

3. re-randomisability: given a cyphertext c = (c1 , c2 ) you can always re-
randomise it. Take r' ←$ Zq , and compute

(g^{r'} · c1 , h^{r'} · c2 ) = Enc(pk, m, r + r').

Property 2 implies that ElGamal is not CCA-2 secure: given the challenge
(c1 , c2 ), ask for Dec(sk, (c1 , c2 · m')), and check whether the result is m0 · m'
or m1 · m'.
Something cool comes from property 1: a first step towards fully homomorphic
encryption, since you can compute the encryption of the product of two messages.
What if we could do it for every function? Assume having c = Enc(pk, m),
and some function Eval(pk, c, f ) which outputs c' = Enc(pk, f (m)). Over a ring, it
suffices to support addition and multiplication. You could build a client/server
architecture, where the client C wants the server to compute f (x) without
sharing x. Then C computes c = Enc(pk, x), sends c and f to the server S,
and obtains c' = Enc(pk, f (x)).

5.1 Factoring Assumption and RSA


The factoring assumption will lead us to Rivest-Shamir-Adleman (RSA). But
first, we introduce Trapdoor Permutations (TDPs), which can give us PKE.

Definition 27 (Trapdoor Permutation). A TDP is a tuple (Gen, f, f −1 ),


where:

1. (pk, sk) ← $Gen(1λ ) on some efficiently sampleable domain Xpk ;


2. f (pk, x) = y is a permutation over Xpk ;
3. f −1 (sk, y) = x is the trapdoor.
A TDP has two properties:

1. correctness:
∀x ∈ Xpk .f −1 (sk, f (pk, x)) = x;

2. one-way: for all PPT A,

Pr [x' = x : (pk, sk) ←$ Gen(1λ); x ←$ Xpk ; y = f (pk, x); x' ← A(pk, y)] ≤ negl (λ) . 

Basically, we are able to invert f if we have sk. Note that f is deterministic!


We don’t get PKE directly.
The problem is that randomness is missing. Recall hardcore functions.

Construction 18 (PKE scheme from TDP). Take (Gen, f, f −1 ), and let h(·)
be hardcore for f . Then do the following:

1. (pk, sk) ← $Gen(1λ );


2. Enc(pk, m) = (f (pk, r), h(r) ⊕ m) = (c1 , c2 ) for r ← $Xpk ;
3. Dec(sk, (c1 , c2 )) = h(f −1 (sk, c1 )) ⊕ c2 = m.


Theorem 33. If (Gen, f, f −1 ) is a TDP and h(·) is hardcore for f (·), then
the PKE in Construction 18 is CPA-secure. 

Since (f (pk, r), h(r)) ≈c (f (pk, r), U), this works. How many bits can we
encrypt?
We know from theorem 9 that we have at least a bit, for any TDP. This
can be extended to O(log(n)) bits for any TDP. For specific TDPs we can
get some Ω(λ), like for factoring, RSA, Discrete Log (DL). From a Random
Oracle (RO) one can get anything in {0, 1}? .
Let’s now look at modular operations modulo n (not necessarily prime).
For example, take n = p · q with p, q primes.
Theorem 34 (Chinese Remainder Theorem). Let n1 , . . . , nk be pairwise co-
prime numbers, and take any a1 , . . . , ak such that 1 ≤ ai ≤ ni for all i ∈ [k]. Then
∃!x such that 0 ≤ x < n = Π_{i=1}^{k} ni and such that x ≡ ai mod ni for all
i ∈ [k]. 

Special case: when k = 2, n1 = p and n2 = q are primes, and n = p · q.


Take any x ∈ Zn . To x correspond xp ≡ x mod p and xq ≡ x mod q. This is
a bijection: in fact, Zn ≅ Zp × Zq (i.e., they are isomorphic).
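For the special case k = 2 the bijection can be checked exhaustively with a short sketch (Garner-style reconstruction for the inverse map; toy moduli):

```python
from math import gcd

# CRT for n = p * q: x <-> (x mod p, x mod q) is a bijection Z_n ~ Z_p x Z_q.
def crt(a_p, a_q, p, q):
    # Reconstruct x = a_p + p * ((a_q - a_p) / p mod q), the unique solution
    # in [0, n) of x = a_p mod p and x = a_q mod q.
    assert gcd(p, q) == 1
    t = ((a_q - a_p) * pow(p, -1, q)) % q
    return a_p + p * t

p, q = 11, 13
n = p * q
for x in range(n):
    assert crt(x % p, x % q, p, q) == x   # the map is a bijection
```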
Now, let’s look at fe (x) = xe mod n, for 0 ≤ x < n. Recall that Z?n =
{x ∈ Zn : gcd(x, n) = 1}, and that #Z?n = (p − 1)(q − 1) = ϕ(n).

Fact 2. If gcd(e, ϕ(n)) = 1, then fe (·) is a permutation over Z?n . 

We can indeed invert it. Since gcd(e, ϕ(n)) = 1, ∃d such that d · e ≡ 1
mod ϕ(n). Consider fd^{−1}(x) = x^d mod n.

fd^{−1}(fe (m)) = (fe (m))^d mod n = (m^e )^d mod n = m^{e·d} mod n.

Since d · e ≡ 1 mod ϕ(n), then d · e = t · ϕ(n) + 1 for some t.

m^{e·d} = m^{t·ϕ(n)+1} = m · (m^{ϕ(n)})^t = m mod n,

since m^{ϕ(n)} ≡ 1 mod n.

Assumption. (GenRSA , fRSA , fRSA^{−1}) is a TDP:

• (n, d, e) ←$ GenRSA (1λ) with n = p · q and e · d ≡ 1 mod ϕ(n); (n, e) =
pk, (n, d) = sk;
• fRSA (e, x) = x^e mod n;
• fRSA^{−1}(d, y) = y^d mod n.
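A toy instantiation of this TDP in Python (the primes are tiny and purely illustrative):

```python
import secrets
from math import gcd

# Toy RSA trapdoor permutation: illustrative parameters only.
p, q = 1009, 1013
n, phi = p * q, (p - 1) * (q - 1)
e = 17
assert gcd(e, phi) == 1
d = pow(e, -1, phi)            # d * e = 1 mod phi(n): the trapdoor

def f_rsa(e, x):               # f_RSA(e, x) = x^e mod n
    return pow(x, e, n)

def f_rsa_inv(d, y):           # f_RSA^{-1}(d, y) = y^d mod n
    return pow(y, d, n)

# Sample x in Z_n^* and check correctness of the permutation.
x = secrets.randbelow(n - 2) + 2
while gcd(x, n) != 1:
    x = secrets.randbelow(n - 2) + 2
assert f_rsa_inv(d, f_rsa(e, x)) == x
```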

Figure 8: Optimal Asymmetric Encryption Padding.

We have seen that this is correct. The RSA assumption is that, for all PPT A,

Pr [x' = x : (n, d, e) ←$ GenRSA (1λ); x ←$ Z?n ; y = x^e mod n; x' ← A(e, n, y)] ≤ negl (λ) .
Under this assumption, we have a 1-bit hardcore predicate, so a 1-bit PKE.
This is a different assumption than factoring. If you can factor n you can
compute ϕ(n) and thus d. RSA implies factoring, but the other way around is
not known.
−1
Textbook RSA, i.e., using (GenRSA , fRSA , fRSA^{−1}) as is, does not work: it’s
deterministic! To do RSA encryption in practice pick r ←$ {0, 1}^t and define
Enc(e, m) = (m̂)^e mod n, where m̂ = r||m.

• This is insecure if t is O(log(λ)).
• If m ∈ {0, 1}, this is secure under RSA.
• If t is between log(λ) and |m|, we don’t actually know.
It’s proven to be CCA-secure in the RO model.
Construction 19 (OAEP - PKCS#1 V.2). Optimal Asymmetric Encryption
Padding (OAEP), shown in fig. 8. This is basically a 2-Feistel (G, H), with
G, H hash functions.

• λ0 , λ1 ∈ N;
• m' = m||0^{λ1} , r ←$ {0, 1}^{λ0} ;
• s = m' ⊕ G(r), t = r ⊕ H(s);
• m̂ = s||t;
• Enc(pk, m) = m̂^e mod n. 
Theorem 35. OAEP (Construction 19) is CCA-2 secure under RSA in the
RO model. 

5.2 Trapdoor Permutation from Factoring


Let’s see how to build a TDP from factoring. We’ve seen that fe (x) = x^e
mod n is a TDP when gcd(e, ϕ(n)) = 1. What about e = 2? Since ϕ(n) is even,
there is no d with 2 · d ≡ 1 mod ϕ(n); indeed f2 (x) = x^2 mod n is not a
permutation of Z?n , since the image of f2 (·) is QRn = {y : y = x^2 mod n for
some x ∈ Z?n }, and QRn ⊂ Z?n .
Still, for suitable p and q, f2 (·) is a permutation of QRn . By the Chinese
Remainder Theorem (CRT) (theorem 34), x ↦ (xp , xq ) with xp ≡ x mod p
and xq ≡ x mod q. Let us look at f (x) = x^2 mod p. Writing Z?p in terms of a generator g,

Z?p = {g^0 , g^1 , . . . , g^{(p−1)/2−1} , g^{(p−1)/2} , . . . , g^{p−2} },
QRp = {g^0 , g^2 , . . . , g^{p−3} }.

In QRp there are only even powers of the generator. This means that |QRp | = (p−1)/2 .
Note also that, since g^{p−1} ≡ g^0 ≡ 1 mod p, we have that g^{(p−1)/2} ≡ −1 mod p.
There is an easy way to check if a number is a square modulo p.
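The check alluded to here is Euler's criterion: x ∈ QRp ⇐⇒ x^{(p−1)/2} ≡ 1 mod p. A small sketch, assuming the lecture refers to this criterion:

```python
# Euler's criterion: for an odd prime p and x in Z_p^*,
# x is a square mod p  <=>  x^((p-1)/2) = 1 mod p.
def is_qr(x, p):
    return pow(x, (p - 1) // 2, p) == 1

p = 23
squares = {pow(x, 2, p) for x in range(1, p)}   # ground truth by brute force
for x in range(1, p):
    assert is_qr(x, p) == (x in squares)
assert len(squares) == (p - 1) // 2             # |QR_p| = (p - 1)/2
```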
Fix p, q ≡ 3 mod 4, so p = 4t + 3 for some t ∈ N (and similarly q). Consider
n = p · q, also called a Blum integer. If this condition is met, squaring is a
TDP modulo n.
Let y = x^2 mod p, and consider y^{t+1} mod p; (y^{t+1})^2 = y^{2t+2} . Now, since p = 4t + 3,

2t + 2 = (4t + 3 − 3)/2 + 2 = (p − 3)/2 + 2 = (p + 1)/2 = (p − 1)/2 + 1.

Now,

y^{2t+2} = y^{(p−1)/2+1} = y^{(p−1)/2} · y = 1 · y = y,

using that y ∈ QRp , so y^{(p−1)/2} ≡ 1 mod p. So, x = ±y^{t+1} mod p. Note that
−1 = g^{(p−1)/2} ∉ QRp , because p = 4t + 3 =⇒ (p − 1)/2 = (4t + 2)/2 = 2t + 1,
which is odd, and thus −1 is not a square.
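The computation above gives an explicit square-root algorithm for primes p ≡ 3 mod 4, sketched below with an illustrative prime:

```python
# Square roots modulo p = 4t + 3: if y is a QR mod p, then y^(t+1) = y^((p+1)/4)
# is a square root of y; the other root is its negation.
def sqrt_mod(y, p):
    assert p % 4 == 3
    x = pow(y, (p + 1) // 4, p)
    assert (x * x) % p == y % p            # holds whenever y is a QR mod p
    return x

p = 1019                                   # 1019 = 4 * 254 + 3 is prime
for v in range(2, 30):
    y = pow(v, 2, p)
    x = sqrt_mod(y, p)
    assert x in (v % p, (-v) % p)          # x = ±v mod p
```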
Now, let’s look at fRABIN (x) = x^2 mod n (squaring modulo n, whose image is QRn ).

x^2 ↦ (xp^2 , xq^2 ).

Note that fRABIN^{−1}(y) is one of the following:

{(xp , xq ), (−xp , xq ), (xp , −xq ), (−xp , −xq )}.

But doing this without knowing p and q is not easy. One can show that
y ∈ QRn ⇐⇒ yp ∈ QRp ∧ yq ∈ QRq . Then it’s easy to see that just one of the
above pre-images is a square modulo n, and this is because only one between
xp and −xp is a square modulo p (and the same goes for xq , −xq modulo q).

|QRn | = ϕ(n)/4 .
With p, q you can invert fRABIN (x) = x^2 = y mod n; without them it’s as hard
as factoring.

Lemma 6. Given x, z such that x^2 = z^2 = y mod n, and x ≠ ±z, we can
factor n. 

Proof of lemma 6. Recall that f −1 (y) ∈ {(±xp , ±xq )}. Thus, if x = (xp , xq ),
then z ∉ {(xp , xq ), (−xp , −xq )}, and as such x + z ∈ {(2xp , 0), (0, 2xq )}.
Assume x + z = (2xp , 0). Then x + z ≡ 0 mod q, but x + z ≢ 0 mod p.
This means that gcd(x + z, n) = q: x + z is a multiple of q, which is a
divisor of n. After finding q, we can find p = n/q. Done.
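The proof of lemma 6 can be replayed concretely. In the sketch below we construct the second root z via the CRT (playing the challenger, who knows p and q) and then factor using only the gcd, as the adversary would:

```python
from math import gcd

# Lemma 6 in action: two square roots x, z of the same y mod n = p * q with
# z != ±x reveal a factor of n via a gcd. Tiny primes, illustration only.
p, q = 1009, 1013
n = p * q

x = 123456
y = pow(x, 2, n)

# Build via CRT the root z with z = x mod p and z = -x mod q (an adversary
# would instead obtain z from a Rabin inverter).
t = ((-2 * x) * pow(p, -1, q)) % q
z = (x + p * t) % n
assert pow(z, 2, n) == y and z not in (x, n - x)

d = gcd(x + z, n)                # x + z = 0 mod q, but not mod p
assert d in (p, q) and d * (n // d) == n
```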

Theorem 36. Under factoring over Blum integers, fRABIN is a TDP. 

Proof of theorem 36. Assume not; then ∃A and some p(λ) ∈ poly(λ) such that

Pr [z^2 = y : (p, q, n) ←$ Gen(1λ); x ←$ QRn ; y = x^2 mod n; z ← A(y, n)] ≥ 1/p(λ) .

x is taken from QRn because fRABIN is a permutation over QRn . Consider
A1 (n), trying to factor n. She chooses x ←$ Z?n , and lets y = x^2 mod n (which
is distributed exactly as in the game above). Then she runs A(y, n) to get z.
Since x is uniform among the four square roots of y given A’s view, z ≠ ±x
happens with probability 1/2, and then A1 can factor n by lemma 6, overall
with probability (1/2) · (1/p(λ)).

5.3 Cramer-Shoup Encryption (1998)


We will build a simplified version that is CCA-1. We call it Cramer-Shoup (CS)
lite.

Construction 20 (Cramer-Shoup lite). Consider Π = (KGen, Enc, Dec), where

• KGen(1λ): (G, g, q) ← GroupGen(1λ), then let g1 = g, take α ←$ Zq ,
and let g2 = g1^α . With high probability g2 is a generator. Now pick
x1 , x2 , y1 , y2 ←$ Zq . Let h1 = g1^{x1} · g2^{y1} and h2 = g1^{x2} · g2^{y2} . Then,
pk = (h1 , h2 , g1 , g2 ) and sk = (x1 , y1 , x2 , y2 );

• Enc(pk, m): pick r ←$ Zq , and output

c = (g1^r , g2^r , h1^r · m, h2^r ) = (c1 , c2 , c3 , c4 ).

c4 = h2^r serves the purpose to ensure that the cyphertext is well formed;

• Dec(sk, (c1 , c2 , c3 , c4 )): if c4 ≠ c1^{x2} · c2^{y2} , return ⊥; else, return

c3 / (c1^{x1} · c2^{y1}) .

To verify correctness of the construction:

c4 = h2^r = (g1^{x2} · g2^{y2})^r = (g1^r)^{x2} · (g2^r)^{y2} = c1^{x2} · c2^{y2} ,
c3 = h1^r · m = (g1^{x1} · g2^{y1})^r · m = (g1^r)^{x1} · (g2^r)^{y1} · m = c1^{x1} · c2^{y1} · m.

Theorem 37. Under DDH, CS-lite is CCA-1 secure. 
Proof of theorem 37. Start with H0 (λ, b) = G^{CCA1}_{A,Π}(λ, b), where H0 (λ, b):

1. (G, g, q) ← GroupGen(1λ), α ←$ Zq , x1 , x2 , y1 , y2 ←$ Zq , g1 = g, g2 = g1^α ,
h1 = g1^{x1} · g2^{y1} , h2 = g1^{x2} · g2^{y2} , pk = (g1 , g2 , h1 , h2 ), sk = (x1 , y1 , x2 , y2 );

2. (m?0 , m?1 ) ← A^{Dec(sk,·)}(1λ, pk); then pick r ←$ Zq , and let

c? = (g1^r , g2^r , h1^r · mb , h2^r ) = (c?1 , c?2 , c?3 , c?4 );

3. b' ← A(pk, c? ).

Now consider H1 (λ, b), that changes just the way the cyphertext is com-
puted. Pick r ←$ Zq , and let g3 = g1^r , g4 = g3^α = g2^r . Then

c? = (g3 , g4 , g3^{x1} · g4^{y1} · mb , g3^{x2} · g4^{y2}) .

Clearly, H0 (λ, b) ≡ H1 (λ, b). Just look at c? :

c? = (g1^r , g2^r , (g1^r)^{x1} · (g2^r)^{y1} · mb , (g1^r)^{x2} · (g2^r)^{y2}),

where the third and fourth components are h1^r · mb and h2^r respectively.

Now, we define H2 (λ, b), where instead of using g4 = g2^r we use g4 = g2^{r'}
for an independent random r'. The next claim is that H1 (λ, b) ≈c H2 (λ, b),
by reduction to the DDH.
To finish the proof, we introduce a lemma and two technical claims, that
imply the theorem.
Lemma 7. b is information theoretically hidden in H2 (λ, b) for all A asking
t = poly(λ) Dec queries. 
Proof of lemma 7. Remember that g3 = g1^r , that g4 = g2^{r'} , and that g2 =
g1^α . For the proof we assume that α ≠ 0 and that r ≠ r'.
From h1 = g1^{x1} · g2^{y1} , A knows that

logg1 (h1 ) = x1 + y1 logg1 (g2 ) = x1 + αy1 mod q. (2)

There are q possible pairs (x1 , y1 ) consistent with eq. (2).
We say that query c = (c1 , c2 , c3 , c4 ) is legal if c1 = g1^{r''} and c2 = g2^{r''}
for some r'', or equivalently if logg1 (c1 ) = logg2 (c2 ).
By Claim 2 and Claim 3, before getting c? , A knows only that (x1 , y1 )
satisfy eq. (2).

c? = (g3 , g4 , g3^{x1} · g4^{y1} · mb , g3^{x2} · g4^{y2}).
We want to show that g3^{x1} · g4^{y1} is uniform from A’s point of view. Fix an
arbitrary h in G. If h = g3^{x1} · g4^{y1} , then

logg1 (h) = x1 logg1 (g3 ) + y1 logg1 (g4 ).

Here g3 = g1^r and g4 = g2^{r'} , thus

logg1 (h) = x1 r + αy1 r'. (3)

Equation (2) and eq. (3) are independent: consider the system of equations

B · (x1 , y1 )ᵀ = (logg1 (h1 ), logg1 (h))ᵀ, with B the matrix with rows (1, α) and (r, αr').

det(B) = αr' − αr ≠ 0 since α ≠ 0 and r ≠ r'.


Since h is arbitrary,

Pr [g3^{x1} · g4^{y1} = h] = 1/|G| .

g3^{x1} · g4^{y1} is uniform, and thus the message is hidden.
Claim 2. A obtains additional info about x1 , y1 only if it ever makes a Dec
query for c = (c1 , c2 , c3 , c4 ) such that
• logg1 (c1 ) ≠ logg2 (c2 ), i.e., it’s illegal;
• Dec(sk, c) ≠ ⊥. 
Proof of Claim 2. If Dec(sk, c) = ⊥, then c4 ≠ c1^{x2} · c2^{y2} : you learn nothing
about x1 , y1 from this query.
Assume it’s not ⊥, and the query is legal, with c1 = g1^{r''} . A learns that

m = c3 / (c1^{x1} · c2^{y1})
=⇒
logg1 (m) = logg1 (c3 ) − x1 logg1 (c1 ) − y1 logg1 (c2 )
= logg1 (c3 ) − r'' x1 − r'' y1 α.

Look at the determinant of the matrix with rows (1, α) and (−r'', −αr''):

it is −αr'' + αr'' = 0. The new equation is linearly dependent on eq. (2), so
it does not tell A anything new; the only way to learn something is a query
that is not ⊥ and that is illegal.

Claim 3.

Pr [A makes illegal Dec query c for which Dec(c) ≠ ⊥] ≤ negl (λ) .

Proof of Claim 3. Assume c = (c1 , c2 , c3 , c4 ) is illegal, i.e.,

r1 = logg1 (c1 ) ≠ logg2 (c2 ) = r2 .

For c to be not rejected, we need that c4 = c1^{x2} · c2^{y2} . A knows that

logg1 (h2 ) = x2 + αy2 mod q. (4)

Fix an arbitrary c4 ∈ G. If c4 = c1^{x2} · c2^{y2} , then A also knows that

logg1 (c4 ) = x2 logg1 (c1 ) + y2 logg1 (c2 ) = x2 r1 + y2 r2 α.

These, too, are independent, since the matrix with rows (1, α) and (r1 , r2 α)
has determinant α(r2 − r1 ) ≠ 0. For the first query, A can guess c4 with
probability at most 1/q.


But if a query is rejected, A has excluded one candidate pair. At query i + 1,
A can guess c4 with probability 1/(q − i). By the union bound, the probability
that ∃i such that the i-th query is not rejected is at most

Σ_{i=0}^{t−1} Pr [i-th query is not rejected] ≤ t/(q − t) = negl (λ)

with λ ≈ log(q), where t = poly(λ) is the number of Dec queries.

Since Claim 2 and Claim 3 imply lemma 7, and that in turn implies the
thesis, we are done.

Let’s see where the proof breaks for CCA-2. After getting c? = (c?1 , c?2 , c?3 , c?4 ),
A knows that

logg1 (c?4 ) = x2 logg1 (g3 ) + y2 logg1 (g4 ).

This is independent of eq. (4), so A can recover x2 , y2 and construct an illegal
query that passes the check:

c = (g1^{r1} , g2^{r2} , c3 , (g1^{r1})^{x2} · (g2^{r2})^{y2}) .

A learns m = c3 / (c1^{x1} · c2^{y1}), i.e.,

logg1 (c3 / m) = x1 r1 + αy1 r2 ,

which together with eq. (2) lets her recover x1 , y1 and decrypt c? to find b.

Real Cramer-Shoup
Construction 21 (Cramer-Shoup). Consider the CS PKE scheme Π = (KGen, Enc, Dec),
where

• KGen(1λ ):
1. (G, g, q) ← GroupGen(1λ );
2. let g1 = g, take α ← $Zq , and let g2 = g1α ;
3. let H : {0, 1}? → Zq be a Collision Resistant Hash Function (CRH);
4. pick x1 , x2 , y1 , y2 , x3 , y3 ← $Zq ;
5. let hi = g1^{xi} · g2^{yi} , for i ∈ {1, 2, 3};
6. pk = (g1 , g2 , h1 , h2 , h3 ), sk = (x1 , x2 , x3 , y1 , y2 , y3 );

• Enc(pk, m): pick r ←$ Zq , and output

c = (g1^r , g2^r , h1^r · m, (h2 · h3^β)^r) = (c1 , c2 , c3 , c4 )

where β = H(c1 , c2 , c3 ).
• Dec(sk, (c1 , c2 , c3 , c4 )): check that

c1^{x2+βx3} · c2^{y2+βy3} = c4 .

If not, output ⊥. Otherwise, output

m = c3 / (c1^{x1} · c2^{y1}) .

Correctness:

c1^{x2+βx3} · c2^{y2+βy3} = (g1^r)^{x2} · (g1^r)^{βx3} · (g2^r)^{y2} · (g2^r)^{βy3}
= (g1^{x2} · g2^{y2})^r · (g1^{x3} · g2^{y3})^{βr}
= h2^r · h3^{βr} = (h2 · h3^β)^r .

The proof of CCA-2 security is very similar to the proof of CCA-1
security of CS lite (theorem 37).

Theorem 38. The CS PKE scheme (Construction 21) is CCA-2 secure under DDH.


Proof of theorem 38. Start with a tuple (g, g^α , g^r , g^{αr} ), and build the challenge from it:

c? = (g^r , g^{αr} , g^{x1 r} · g^{y1 αr} · m, (g^{x2} · g^{βx3} · g^{αy2} · g^{αβy3})^r) .

This has the same distribution as a real challenge. The first hybrid uses the
DDH tuple, the second uses the non-DDH tuple (g, g^α , g^r , g^{αr'} ), and the two
are indistinguishable under DDH.
From the pk, A knows that

logg1 (h2 ) = x2 + αy2 ,
logg1 (h3 ) = x3 + αy3 .

Let’s analyse the probability of rejecting illegal queries. Let g3 = g1^r , and let
g4 = g2^{r'} , with r ≠ r'. Given c? , A knows that

logg1 (c?4 ) = (x2 + β? x3 )r + (y2 + β? y3 )αr'.

Take arbitrary c = (c1 , c2 , c3 , c4 ) ≠ c? , and assume c is illegal, i.e., logg1 (c1 ) ≠
logg2 (c2 ). Three cases are possible:

1. (c1 , c2 , c3 ) = (c?1 , c?2 , c?3 ), but c4 ≠ c?4 . Then the check fails, so c is always
rejected;

2. (c1 , c2 , c3 ) ≠ (c?1 , c?2 , c?3 ), but

H(c1 , c2 , c3 ) = β = β? = H(c?1 , c?2 , c?3 ).

This happens only with negligible probability, since H is a CRH;

3. as before, (c1 , c2 , c3 ) ≠ (c?1 , c?2 , c?3 ) and β ≠ β? .

6 Signature Schemes
We are still inside Public Key Cryptography, but we stop with encryption,
and instead look at a new primitive: Signature Schemes (SSs). Basically, it’s a
primitive just like Message Authentication Code (MAC), but this time without
the two parties sharing a secret.
A big difference between MAC and SS is that in SS the signature is publicly
verifiable. What we want from a SS is that it is hard to forge a message together
with a valid signature.

Definition 28 (Signature Scheme). A SS is a tuple Π = (KGen, Sign, Vrfy)


where

• KGen(1λ ) → (pk, sk);


• Sign(sk, m) → σ, with m ∈ Mpk (the message space could depend on
the public key pk);
• Vrfy(pk, (m, σ)) → 0/1.

As usual, we want correctness, i.e., for all (pk, sk) output of KGen, for all
m ∈ Mpk ,

Pr [Vrfy(pk, (m, Sign(sk, m))) = 1] ≥ 1 − negl (λ) .

Sometimes “bad things” happen, and the signature does not work: hence the
probability. 
Definition 29 (UFCMA for SSs). Consider the game G^{ufcma}_{Π,A}(λ):

• (pk, sk) ← KGen(1λ );


• (m? , σ ? ) ← ASign(sk,·) (1λ , pk);
• output 1 ⇐⇒ Vrfy(pk, (m? , σ? )) = 1 and m? is fresh, i.e., it was never
asked as a signature query.
Π is Unforgeable Chosen Message Attack (UFCMA) if for all Probabilistic
Polynomial Time (PPT) A ∃ a negligible ε : N → [0, 1] such that
Pr [G^{ufcma}_{Π,A}(λ) = 1] ≤ ε(λ). 

Identity Based Encryption (IBE) is an idea of Shamir.


A natural idea is to swap pk and sk in Rivest-Shamir-Adleman (RSA), since
they are symmetric.
1. (n, p, q, d, e) ← GenRSA (1λ), n = p · q, d · e ≡ 1 mod ϕ(n), pk = (n, e),
sk = d;
2. Sign(sk, m) = md mod n;
3. Vrfy(pk, (m, σ)) = 1 ⇐⇒ σ e = m mod n.
But this is not secure.

1. Pick σ ←$ Z?n , let m = σ^e mod n, and output the forgery (m, σ).

2. Pick m? ∈ Z?n such that we can find m1 , m2 with m1 · m2 = m? mod n.
Then ask m1 and m2 to the signing oracle: Sign(sk, m1 ) · Sign(sk, m2 ) = Sign(sk, m? ), since

m2 = m?/m1 , σ1 = m1^d , σ2 = (m?/m1)^d , σ? = σ1 · σ2 = (m?)^d mod n.

If we hash the message before signing it, we get this to work in the Random
Oracle (RO) model.
Construction 22 (SS from TDP + RO). We have a Trapdoor Permutation
(TDP) (Gen, f, f −1 ), with Xpk being the domain of the TDP. We assume a
function H : {0, 1}? → Xpk , i.e., the message space is Mpk = {0, 1}? .
1. KGen(1λ ) = (pk, sk) ← Gen(1λ );

2. Sign(sk, m) = f −1 (sk, H(m)) = σ;
3. Vrfy(pk, (m, σ)) = 1 ⇐⇒ f (pk, σ) = H(m). 
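A sketch of Construction 22 instantiated with toy RSA as the TDP, and SHA-256 reduced modulo n standing in for the random oracle (both are illustrative assumptions, not a secure instantiation):

```python
import hashlib

# Full-Domain-Hash-style signatures: sigma = f^{-1}(sk, H(m)).
p_, q_ = 1009, 1013
n, phi = p_ * q_, (p_ - 1) * (q_ - 1)
e = 17
d = pow(e, -1, phi)

def H(m: bytes) -> int:
    # Stand-in for the random oracle into the TDP domain.
    return int.from_bytes(hashlib.sha256(m).digest(), "big") % n

def sign(m: bytes) -> int:               # sigma = H(m)^d mod n
    return pow(H(m), d, n)

def vrfy(m: bytes, sigma: int) -> bool:  # f(pk, sigma) == H(m)
    return pow(sigma, e, n) == H(m)

sigma = sign(b"hello")
assert vrfy(b"hello", sigma)
```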

Theorem 39. The SS in Construction 22 is UFCMA in the RO model if


(Gen, f, f −1 ) is a TDP. 

Proof of theorem 39. Assume there exists a PPT adversary A capable of forging,
i.e., such that

Pr [G^{ufcma}_{Π,A}(λ) = 1] ≥ 1/poly(λ) .
We use her to break the TDP.
Without loss of generality, we make the following assumptions on A:

1. A never repeats her queries;


2. before making signature query mi , A asks mi to the RO.
We describe B which is given pk and y = f (pk, x) for x ←$ Xpk , i.e., B(1λ, pk, y):

1. run A(1λ, pk), and pick j ←$ [qh ], where qh is the number of RO queries
of A. With probability 1/qh (one over a polynomial) we guess the index j
of the RO query used in the forgery;

2. upon RO query mi ∈ {0, 1}? :

a) if i ≠ j, sample xi ←$ Xpk , let yi = f (pk, xi ) and return yi ;
b) if i = j, return y to A;

3. upon signature query mi , return xi as long as i ≠ j, else abort;

4. when A gives (m? , σ? ), output σ? .
B perfectly simulates pk, and also signature queries as long as it does not
abort.
1. yi are random, since xi are random and f (pk, ·) is a permutation;
2. Sign(sk, mi ) = f −1 (sk, H(mi )) = f −1 (sk, f (pk, xi )) = xi .
Additionally, if (m? , σ ? ) is valid it means that σ ? = f −1 (sk, H(m? )) = f −1 (sk, y) =
x, and B inverts the TDP.
The probability of breaking the TDP is the probability of not aborting
times the probability that A breaks the scheme.
Pr [“B wins”] = (1/qh ) · (1/poly(λ)), which is non-negligible.
qh poly(λ)

Theorem 40. SSs exist ⇐⇒ One Way Functions (OWFs) exist. 


But this is inefficient.

6.1 Waters Signature Scheme
We will use the Computational Diffie-Hellman (CDH) assumption in bilinear groups.
Definition 30 (Bilinear Group). (G, GT , q, g, ê) ← Bilin(1λ ).

1. G, GT are groups of order q with some efficient operation “·”. T stands


for target;
2. g is a generator of G;
3. ê : G × G → GT is a bilinear map, i.e.,

a) bilinearity: ∀g, h ∈ G, ∀a, b ∈ Zq ,

ê(g^a , h^b ) = ê(g, h)^{ab} ;

b) non-degeneracy: ê(g, g) ≠ 1 for generator g. 

Observation 3. Such maps arise as pairings on elliptic curves, and CDH is
believed to hold in such groups. On the other hand, Decisional Diffie-Hellman
(DDH) is easy in G given a bilinear map. 

Construction 23 (Waters Signature Scheme). Consider Π = (KeyGen, Sign, Vrfy),


where

• KeyGen(1λ): params = (G, GT , q, g, ê) ←$ Bilin(1λ), a ←$ Zq , g1 = g^a ,
g2 , µ0 , µ1 , . . . , µk ←$ G; pk = (params, g1 , g2 , µ0 , . . . , µk ), sk = g2^a .

• Sign(sk, m): let m = (m[1], . . . , m[k]) ∈ {0, 1}^k , and α(m) = α_{µ0 ,...,µk}(m) =
µ0 · Π_{i=1}^{k} µi^{m[i]} . Pick r ←$ Zq , output σ = (g2^a · α(m)^r , g^r ) = (σ1 , σ2 );
• Vrfy(pk, (m, (σ1 , σ2 ))): check that ê(g, σ1 ) = ê(σ2 , α(m)) · ê(g1 , g2 ).

Correctness:

ê(g, σ1 ) = ê(g, g2^a · α(m)^r ) = ê(g, g2^a ) · ê(g, α(m)^r )
= ê(g^a , g2 ) · ê(g^r , α(m)) = ê(g1 , g2 ) · ê(σ2 , α(m)).

Security: partitioning argument. The message space M = {0, 1}^k is split,
based on some auxiliary information X, into two parts M1^X and M2^X . Given X,
one can simulate signatures on m ∈ M1^X , and moreover with high probability
all queries will belong to M1^X , while the forgery will be in M2^X , which will
allow to break CDH. 
Theorem 41. Under CDH, Waters Signature Scheme (WSS) is UFCMA. 

Proof of theorem 41. Let A be a PPT adversary capable of breaking WSS with
probability greater than 1/p(λ), with p(λ) ∈ poly(λ). Consider the adversary B
given params = (G, GT , q, g, ê), g1 = g^a , g2 = g^b , and which wants to compute
g^{ab} . B(params, g1 , g2 ):

1. set l = 2qs , with qs being the number of signature queries (qs ≪ q);

2. pick integers x0 ←$ [−kl, 0] and x1 , . . . , xk ←$ [0, l], and y0 , . . . , yk ←$ Zq ;

3. set µi = g2^{xi} · g^{yi} for all i ∈ {0, . . . , k}, and define the functions


β(m) = x0 + Σ_{i=1}^{k} m[i] xi ,
γ(m) = y0 + Σ_{i=1}^{k} m[i] yi ;

4. run A(params, g1 , g2 , µ0 , . . . , µk );
5. upon a query m ∈ {0, 1}k from A:
• if β = β(m) = 0 mod q, abort;
• else let γ = γ(m), pick r ←$ Zq , and output

σ = (σ1 , σ2 ) = (g2^{βr} · g^{γr} · g1^{−γβ^{−1}} , g^r · g1^{−β^{−1}} );

6. when A outputs (m? , (σ1? , σ2? )):

• if β(m? ) ≠ 0 mod q, abort;
• else output

σ1? / (σ2?)^{γ(m?)} .
If β(m) ≠ 0, B aborts on output, but answers queries. If β(m) = 0, B aborts
on queries, but outputs the result.
Public key has random distribution as desired: perfect simulation of pk.
Conditioning on B not aborting, the following two holds:
1. signature answers have the right distribution;
2. B breaks CDH.
To verify point 1: let a = logg (g1 ), b = logg (g2 ). A signature σ on m is
σ = (σ1 , σ2 ) = (g2^a · α(m)^r̄ , g^r̄ ) for random r̄ ←$ Zq . Take now r̄ = r − aβ^{−1} ,
and let β = β(m), γ = γ(m).

α(m) = µ0 · Π_{i=1}^{k} µi^{m[i]} = g2^{x0} g^{y0} · Π_{i=1}^{k} (g2^{xi} g^{yi})^{m[i]} = g2^β · g^γ .

σ1 = g2^a · α(m)^r̄ = g2^a · α(m)^{r−aβ^{−1}}
= g2^a · g2^{βr} · g^{γr} · g2^{−a} · g^{−aγβ^{−1}} = g2^{βr} · g^{γr} · g1^{−γβ^{−1}} ,

σ2 = g^r̄ = g^r · g^{−aβ^{−1}} = g^r · g1^{−β^{−1}} .
Since r̄ = r − aβ^{−1} is random, we get the same distribution.
To verify point 2: let (σ1? , σ2? ) be the forgery, with β(m? ) = 0 mod q.
Since the forgery is valid,

ê(g, σ1? ) / ê(σ2? , α(m? )) = ê(g1 , g2 ).

Since ê(g1 , g2 ) = ê(g, g)^{ab} and α(m? ) = g2^{β(m?)} · g^{γ(m?)} = g^{γ(m?)} , we have

ê(g, g)^{ab} = ê(g, σ1? ) / ê(σ2? , g^{γ(m?)}) = ê(g, σ1? / (σ2?)^{γ(m?)})
=⇒ g^{ab} = σ1? / (σ2?)^{γ(m?)} .

Next, we claim that B aborts with negligible probability.
To verify it, we see that B aborts if the following happens:

E = (β(m? ) ≠ 0) ∨ (β(m? ) = 0 ∧ β(m1 ) = 0) ∨ · · · ∨ (β(m? ) = 0 ∧ β(mqs ) = 0).

Since |β(m)| ≤ kl by definition of β(·), and kl < q, we can drop the
modulus. By the union bound

Pr [E] ≤ Pr [β(m? ) ≠ 0] + Σ_{i=1}^{qs} Pr [β(m? ) = 0 ∧ β(mi ) = 0] .

• Fix x1 , . . . , xk and m? ; there is a single x0 ∈ [−kl, 0] such that β(m? ) = 0,
thus

Pr [β(m? ) ≠ 0] = kl/(kl + 1) ;

• fix i ∈ [qs ], and let m̄ = mi . Since m? ≠ mi , ∃j : m? [j] ≠ m̄[j]. Assume
without loss of generality that m? [j] = 1 ∧ m̄[j] = 0. Given arbitrary
x1 , . . . , xj−1 , xj+1 , . . . , xk , then

(β(m? ) = 0 ∧ β(m̄) = 0) ⇐⇒ { x0 + xj = −Σ_{i≠j} xi m? [i] ; x0 = −Σ_{i≠j} xi m̄[i] }

and this system has a unique solution (x0 , xj ), so

Pr [β(m? ) = 0 ∧ β(mi ) = 0] = 1/(kl + 1) · 1/(l + 1) .
So the probability of aborting is:

Pr [E] ≤ kl/(kl + 1) + qs · 1/(kl + 1) · 1/(l + 1)

which gives us the probability of not aborting:

Pr [Ē] ≥ 1 − kl/(kl + 1) − qs · 1/(kl + 1) · 1/(l + 1)
= 1/(kl + 1) · (1 − qs/(l + 1))
≥ 1/(kl + 1) · 1/2 = 1/(4kqs + 2) = 1/p'(λ)       (l = 2qs )

with p'(λ) ∈ poly(λ).


Finally, the probability that B breaks CDH is at least the probability that A
breaks WSS times the probability of not aborting:

Pr [B breaks CDH] ≥ 1/p(λ) · 1/p'(λ) ,

which is non-negligible.

6.2 Identification Schemes


An Identification Scheme (IDS) is an interactive protocol between a prover P
and a verifier V, both PPT.

Definition 31 (Identification Scheme). An IDS is a tuple Π = (Gen, P, V). τ is
the transcript of the messages exchanged between P and V, denoted as
τ ←$ (P(pk, sk) ⇄ V(pk)), with (pk, sk) ←$ Gen(1λ). outV (P(pk, sk) ⇄ V(pk))
is a random variable, and decides if P knows sk or not.
Correctness requirement: for all λ ∈ N, for all (pk, sk) output by Gen(1λ ),

Pr [outV (P(pk, sk)  V(pk)) = 1] = 1.

This is called perfect correctness. 

We also have a security requirement. It should be hard to impersonate
P without knowing the secret key. By itself this does not rule out replaying
one valid transcript over and over. The security requirement is thus that it
should be hard to impersonate P even after seeing many τ s from honest
executions.
Definition 32 (Passively secure IDS). Consider the game G^{id}_{Π,A}(λ), with A =
(A1 , A2 ), defined by:

1. (pk, sk) ← $Gen(1λ );


2. s ← A1^{Trans(pk,sk)}(pk), where Trans(pk, sk) upon empty input outputs
τ ←$ (P(pk, sk) ⇄ V(pk)), and s is some “state”;
3. return outV (A2 (s, pk)  V(pk)).

Π = (Gen, P, V) is passively secure if for all PPT A

Pr [G^{id}_{Π,A}(λ) = 1] ≤ negl (λ) . 
Pr GΠ,A (λ) = 1 ≤ negl (λ) . 

It’s natural to build IDSs using SSs. The idea is to sign random messages
from V.
1. V samples m ← $M, and sends it to P;
2. P computes σ = Sign(sk, m) and sends it to V;
3. V outputs Vrfy(pk, (m, σ)).
Active security is when the IDS resists an adversary that plays the role of V,
and thus chooses what to send to P. The proposed scheme is both passively
and actively secure.
Encrypting and decrypting random messages also works as an IDS.
Definition 33 (Canonical IDS). In a canonical IDS we have that P = (P1 , P2 )
and that V = (V1 , V2 ). Just three messages are exchanged:
1. α ←$ P1 (pk, sk) is sent to V;
2. β ←$ Bλ,pk , the challenge, is sampled by V1 and sent to P;
3. γ ←$ P2 (pk, sk, α, β) is sent to V;
4. V outputs V2 (pk, α, β, γ) ∈ {0, 1}.
The verifier is randomised: this is called a (three-move) “public coins” ver-
ifier. Sometimes P1 and P2 share a state, i.e., (α, a) ← $P1 (pk, sk) and
γ ← $P2 (pk, sk, α, β, a).
From a canonical IDS we want non-degeneracy, i.e., it’s hard to predict the
first message of the prover:

∀α̂. Pr [α = α̂ : α ← $P1 (pk, sk)] ≤ negl (λ) . 

We can construct a SS using an IDS.


Construction 24 (Fiat-Shamir Transform). Let Π = (Gen, P, V) be an IDS.
Consider the SS Π0 = (KGen, Sign, Vrfy) defined as:
• KGen(1λ ) = Gen(1λ );
• Sign(sk, m):
1. α ← $P1 (pk, sk);
2. β = H(α, m) with H : {0, 1}? → Bpk ;
3. γ ← $P2 (pk, sk, α, β);
4. σ = (α, γ);

• Vrfy(pk, (m, σ)) computes β = H(α, m), then returns V2 (pk, α, β, γ).
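Instantiated with the Schnorr protocol (Construction 25), the transform gives Schnorr signatures. A sketch with SHA-256 standing in for the RO and an illustrative toy group (both are assumptions for the demo):

```python
import hashlib, secrets

# Fiat-Shamir: the verifier's random challenge β is replaced by H(α, m).
p, q, g = 2579, 1289, 4          # toy group: order-q subgroup of Z_p^*

x = secrets.randbelow(q - 1) + 1 # sk
y = pow(g, x, p)                 # pk

def H(alpha: int, m: bytes) -> int:
    data = alpha.to_bytes(4, "big") + m
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def sign(m: bytes):
    a = secrets.randbelow(q - 1) + 1
    alpha = pow(g, a, p)                 # prover's first message
    beta = H(alpha, m)                   # "public coins" from the hash
    gamma = (beta * x + a) % q           # prover's answer
    return alpha, gamma                  # sigma = (alpha, gamma)

def vrfy(m: bytes, sig) -> bool:
    alpha, gamma = sig
    beta = H(alpha, m)
    # Schnorr's check: g^gamma * y^{-beta} == alpha
    return pow(g, gamma, p) * pow(y, -beta, p) % p == alpha

assert vrfy(b"a message", sign(b"a message"))
```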

Theorem 42. If Π is passively secure, Π0 as defined in Construction 24 is
UFCMA. 

Proof of theorem 42. We do a reduction from A' forging in G^{ufcma}_{Π',A'}(λ) into A
winning G^{id}_{Π,A}(λ), both PPT.
Assumptions we make: A' does not repeat hash queries, and if (m? , σ? ) is
the forgery, with σ? = (α? , γ? ), then A' queried H(α? , m? ) to the RO.
The reduction A^{Trans(pk,sk)}(pk) is as follows:

1. forward pk to A'. Pick i? ←$ [qh ], where qh is the number of RO queries;

2. query Trans(pk, sk) and get τi = (αi , βi , γi ) for i ∈ [qs ], with qs being the
number of signature queries;

3. upon (αj , mj ) RO query from A':

a) if j = i? , send αj to the verifier V in G^{id}_{Π,A} , receive β? from V and
return β? to A';
b) else, return random βj ←$ Bpk,λ ;

4. upon mi signature query from A':

a) check if αi in τi = (αi , βi , γi ) is such that (αi , mi ) was already queried
to the RO. If it is, abort;
b) else return (αi , γi ) and set H(αi , mi ) = βi in the RO;

5. when A' outputs (m? , (α? , γ? )), forward γ? to the verifier.
If we didn’t abort, A' is successful, and we guessed i? , we have

Pr [A wins] ≥ Pr [A' wins] · Pr [α? = αi? ] · Pr [not abort] ,

where Pr [A' wins] ≥ 1/poly(λ) and Pr [α? = αi? ] = 1/qh .

Let Ei be the event that we abort in the i-th signature query. By the
non-degeneracy property of Π we have that Pr [Ei ] ≤ ε(λ) for a negligible ε,
and thus

Pr [abort] ≤ Σ_{i=1}^{qs} Pr [Ei ] ≤ qs · ε(λ).

Thus ∃µ ∈ negl (λ) such that

Pr [A wins] ≥ 1/poly(λ) · (1 − µ(λ)) · 1/qh

which is non negligible.

Two properties are sufficient for an IDS to have passive security.
Definition 34 (Honest Verifier Zero Knowledge). An IDS is Honest Verifier
Zero Knowledge (HVZK) if there exists a PPT simulator S such that

{((pk, sk), τ ) : (pk, sk) ←$ KGen(1λ); τ ←$ Trans(pk, sk)}
≈c {((pk, sk), τ ) : (pk, sk) ←$ KGen(1λ); τ ←$ S(pk)}.

HVZK says you don’t learn anything from a transcript. This is the simulation
paradigm: if a protocol is HVZK, then there exists a simulator capable of
simulating the transcripts. 
Definition 35 (Special Soundness). Consider the game G^{SP}_{Π,A}(λ), defined as:

1. (pk, sk) ← $KGen(1λ );


2. (α, β, γ, β', γ') ← A(pk);

3. output 1 if β ≠ β' and

V2 (pk, (α, β, γ)) = V2 (pk, (α, β', γ')) = 1.

Special Soundness (SP) is about canonical IDSs: it should be hard to produce
two accepting transcripts (α, β, γ) and (α, β', γ') sharing the first message.
A canonical IDS Π satisfies special soundness if for all PPT A exists a
negligible ε such that

Pr [G^{SP}_{Π,A}(λ) = 1] ≤ ε(λ). 

Theorem 43. Let Π be a canonical IDS, if Π satisfies SP and HVZK, and the
size of Bpk is ω(log(λ)), then Π is passively secure. 

Proof of theorem 43. We want a reduction from Passive Security (PS) to SP.
We do one step first.
Let H0 = G^{id}_{Π,A} , and consider the hybrid H1 where the oracle Trans(pk, sk) is
replaced by S(pk). By HVZK, H0 ≈c H1 . The proof of this is left as an exercise.
Second step: we claim that Pr [H1 (λ) = 1] is negligible. Assume we have
an adversary for PS; we want to break SP. Assume A = (A1 , A2 ) wins H1
with probability ε(λ) ≥ 1/poly(λ), and consider A' breaking SP:
1. recover pk from G^{SP}_{Π,A'}(λ);

2. run A1 (pk). Upon an oracle query from A1 , return (α, β, γ) ←$ S(pk).
Let s be the output of A1 (pk);

3. run A2 (pk, s) and receive α, reply β ←$ Bpk,λ and obtain γ;

4. rewind A2 (pk, s) to the point it already sent α, send a fresh β' and receive
some γ';

5. output (α, β, γ, β', γ').

Denote by z the state of A2 after it sent α, and call pz = Pr [Z = z], where Z
is the Random Variable (RV) corresponding to the state.
Write δz = Pr [H1 (λ) = 1 | Z = z]. By definition,

ε(λ) = Σ_{z∈Z} pz δz = E[δz ].

Let “Good” be the event β ≠ β'. Then

Pr [G^{SP}_{Π,A'}(λ) = 1] = Pr [G^{SP}_{Π,A'}(λ) = 1 ∧ Good]
≥ Pr [G^{SP}_{Π,A'}(λ) = 1 | Good] − Pr [¬Good]
= Pr [G^{SP}_{Π,A'}(λ) = 1 | Good] − |Bpk,λ|^{−1}      (Pr [¬Good] = Pr [β = β'])
= Σ_{z∈Z} pz δz^2 − |Bpk,λ|^{−1} = E[δz^2] − |Bpk,λ|^{−1}
≥ (E[δz ])^2 − |Bpk,λ|^{−1} = ε(λ)^2 − |Bpk,λ|^{−1}
≥ 1/poly(λ) − negl (λ) ≈ 1/poly(λ),

where ε(λ)^2 is non negligible and |Bpk,λ|^{−1} is negligible.
So we break SP with non negligible probability.
Construction 25 (Schnorr Protocol). Schnorr protocol is defined as follows:
• KGen(1λ): (G, g, q) ← GroupGen(1λ), sk = x ←$ Zq, pk = y = g^x;

• P samples a ←$ Zq, computes α = g^a, and sends α to V;

• V samples β ←$ Zq = B_{pk,λ}, and sends β to P;

• P computes γ = βx + a mod q, and sends γ to V;

• V checks that g^γ · y^{−β} = α.
Completeness:

    g^γ · y^{−β} = g^{βx+a} · g^{−βx} = g^{βx − βx + a} = g^a = α.
The Schnorr protocol has a simulator: pick β, γ at random, let α = g^γ · y^{−β},
and output (α, β, γ). This matches the distribution of real transcripts: in a
real execution γ = βx + a is uniform (since a is) regardless of β, and α is
then the unique value such that α = g^γ · y^{−β}.
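The protocol and its simulator can be exercised over a toy group; the
parameters p = 23, q = 11, g = 4 below are illustrative only (far too small
for any real use), chosen so that g generates the order-q subgroup of Z_p^*:

```python
import random

random.seed(1)

# Toy group: g = 4 generates the subgroup of order q = 11 in Z_23^*.
p, q, g = 23, 11, 4

# Key generation: sk = x, pk = y = g^x mod p.
x = random.randrange(q)
y = pow(g, x, p)

# Honest execution of the Schnorr protocol.
a = random.randrange(q)
alpha = pow(g, a, p)          # prover's commitment
beta = random.randrange(q)    # verifier's challenge
gamma = (beta * x + a) % q    # prover's response

# Verifier's check: g^gamma * y^{-beta} == alpha, i.e. g^gamma == alpha * y^beta.
assert pow(g, gamma, p) == (alpha * pow(y, beta, p)) % p

# HVZK simulator: sample beta and gamma first, then solve for alpha.
beta_s = random.randrange(q)
gamma_s = random.randrange(q)
alpha_s = (pow(g, gamma_s, p) * pow(y, -beta_s, p)) % p  # pow(..., -e, p) needs Python >= 3.8
assert pow(g, gamma_s, p) == (alpha_s * pow(y, beta_s, p)) % p
```

The simulated transcript passes the same verification equation as the real
one, which is exactly the observation behind the HVZK argument above.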
SP holds under the Discrete Log (DL) assumption. Assume A breaks SP with
non-negligible probability. A′ gets y = g^x for random x ←$ Zq, and sets
pk = y in a game with A. A outputs (α, β, γ, β′, γ′), with β ≠ β′ and
g^γ · y^{−β} = α = g^{γ′} · y^{−β′}. We get that

    g^γ · y^{−β} = g^{γ′} · y^{−β′}  ⟹  g^{γ−γ′} = y^{β−β′}  ⟹  y = g^{(γ−γ′)(β−β′)^{−1}},

so x = (γ − γ′)(β − β′)^{−1} mod q.
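The extraction formula can be checked on the same kind of toy group
(illustrative parameters p = 23, q = 11, g = 4, not from the notes):

```python
import random

random.seed(2)

p, q, g = 23, 11, 4   # toy parameters: g has order q in Z_p^*

x = random.randrange(1, q)   # secret key
y = pow(g, x, p)             # public key

# One commitment alpha answered correctly under two different challenges.
a = random.randrange(q)
alpha = pow(g, a, p)
beta1, beta2 = 3, 7          # any beta1 != beta2 in Z_q works
gamma1 = (beta1 * x + a) % q
gamma2 = (beta2 * x + a) % q

# Both transcripts are accepting.
assert pow(g, gamma1, p) == (alpha * pow(y, beta1, p)) % p
assert pow(g, gamma2, p) == (alpha * pow(y, beta2, p)) % p

# Special-soundness extractor: x = (gamma1 - gamma2) * (beta1 - beta2)^{-1} mod q.
x_ext = ((gamma1 - gamma2) * pow(beta1 - beta2, -1, q)) % q
assert x_ext == x
```

Note that the inverse is taken modulo q (the group order), not modulo p, since
exponents live in Z_q.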

List of Definitions

Perfect secrecy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
Information theoretic MAC . . . . . . . . . . . . . . . . . . . . . . . 5
Pairwise independence . . . . . . . . . . . . . . . . . . . . . . . . . . 5

One Way Function . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7


Distribution Ensemble . . . . . . . . . . . . . . . . . . . . . . . . . . 8
Computational Indistinguishability . . . . . . . . . . . . . . . . . . . 8
Pseudo Random Generator . . . . . . . . . . . . . . . . . . . . . . . 9
Hard Core Predicate - I . . . . . . . . . . . . . . . . . . . . . . . . . 10
Hard Core Predicate - II . . . . . . . . . . . . . . . . . . . . . . . . . 11
One Way Permutation . . . . . . . . . . . . . . . . . . . . . . . . . . 11

Symmetric Key Encryption scheme . . . . . . . . . . . . . . . . . . . 12


One time security . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
Pseudo Random Function . . . . . . . . . . . . . . . . . . . . . . . . 14
CPA-secure SKE scheme . . . . . . . . . . . . . . . . . . . . . . . . . 14
UFCMA MAC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
Universal Hash Function . . . . . . . . . . . . . . . . . . . . . . . . . 19
CCA-security . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
Feistel function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
Collision Resistant Hash Function . . . . . . . . . . . . . . . . . . . 27
Claw-Free Permutation . . . . . . . . . . . . . . . . . . . . . . . . . 30

Computational Diffie-Hellman . . . . . . . . . . . . . . . . . . . . . . 34
Decisional Diffie-Hellman . . . . . . . . . . . . . . . . . . . . . . . . 34

Public Key Encryption Scheme . . . . . . . . . . . . . . . . . . . . . 37


CPA for PKE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
CCA-1 for PKE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
CCA-2 for PKE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
Trapdoor Permutation . . . . . . . . . . . . . . . . . . . . . . . . . . 40

Signature Scheme . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
UFCMA for SSs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
Bilinear Group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
Identification Scheme . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
Passively secure IDS . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
Canonical IDS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
Honest Verifier Zero Knowledge . . . . . . . . . . . . . . . . . . . . . 58
Special Soundness . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58

List of Constructions

One Time Pad . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3


Pairwise independent function . . . . . . . . . . . . . . . . . . . . . . 5

SKE scheme from PRG . . . . . . . . . . . . . . . . . . . . . . . . . 13


SKE scheme from PRF . . . . . . . . . . . . . . . . . . . . . . . . . 14
GGM tree . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
MAC from PRF . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
UHF with Galois field . . . . . . . . . . . . . . . . . . . . . . . . . . 20
Encrypted MAC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
Merkle-Damgard . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
Merkle tree . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
Strengthened Merkle-Damgard . . . . . . . . . . . . . . . . . . . . . 28
Davies-Meyer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
CR Compression Function from CFP . . . . . . . . . . . . . . . . . . 30

PRG from DDH . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36


Naor-Reingold . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
Number Theoretic CFP . . . . . . . . . . . . . . . . . . . . . . . . . 37

ElGamal . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
PKE scheme from TDP . . . . . . . . . . . . . . . . . . . . . . . . . 40
OAEP - PKCS#1 V.2 . . . . . . . . . . . . . . . . . . . . . . . . . . 42
Cramer-Shoup lite . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
Cramer-Shoup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48

SS from TDP + RO . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
Waters Signature Scheme . . . . . . . . . . . . . . . . . . . . . . . . 52
Fiat-Shamir Transform . . . . . . . . . . . . . . . . . . . . . . . . . . 56
Schnorr Protocol . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59

Acronyms
AES Advanced Encryption Standard
AU Almost Universal
DL Discrete Log
CBC Cypher Block Chain
CCA Chosen Cyphertext Attack
CDH Computational Diffie-Hellman
CFB Cypher Feed Back
CFP Claw-Free Permutation
CPA Chosen Plaintext Attack
CR Collision Resistant
CRH Collision Resistant Hash Function
CRT Chinese Remainder Theorem
CS Cramer-Shoup
CTR Counter
DDH Decisional Diffie-Hellman
DH Diffie-Hellman
ECB Electronic Code Book
FIL Fixed Input Length
GGM Goldreich-Goldwasser-Micali
GL Goldreich-Levin
HCP Hard Core Predicate
HVZK Honest Verifier Zero Knowledge
IBE Identity Based Encryption
ICM Ideal Cypher Model
IDS Identification Scheme
INT Integrity (of cyphertext)
MAC Message Authentication Code

MD Merkle-Damgard
NIST National Institute of Standards and Technology
OAEP Optimal Asymmetric Encryption Padding
OTP One Time Pad
OWF One Way Function
OWP One Way Permutation
PKC Public Key Cryptography
PKE Public Key Encryption
PPT Probabilistic Polynomial Time
PRF Pseudo Random Function
PRG Pseudo Random Generator
PRP Pseudo Random Permutation
PS Passive Security
PU Perfect Universal
RO Random Oracle
RSA Rivest-Shamir-Adleman
RV Random Variable
SKE Symmetric Key Encryption
SP Special Soundness
SS Signature Scheme
SSH Secure Shell
TDP Trapdoor Permutation
TLS Transport Layer Security
UFCMA Unforgeable Chosen Message Attack
UHF Universal Hash Function
VIL Variable Input Length
WSS Waters Signature Scheme

