CRYPTOGRAPHY Student Notes PDF
Michele Laurenti
January 18, 2017
Abstract
Contents
1 Introduction
  1.1 Perfect secrecy
  1.2 Authentication
  1.3 Randomness Extraction
2 Computational Cryptography
  2.1 One Way Functions
  2.2 Computational Indistinguishability
  2.3 Pseudo Random Generators
  2.4 Hard Core Predicates
4 Number Theory
  4.1 Elliptic Curves
  4.2 Diffie-Hellman Key Exchange
  4.3 Pseudo Random Generators
  4.4 Pseudo Random Functions (PRFs): Naor-Reingold
  4.5 Number Theoretic Claw-Free Permutation
5 Public Key Encryption
  5.1 Factoring Assumption and RSA
  5.2 Trapdoor Permutation from Factoring
  5.3 Cramer-Shoup Encryption (1998)
6 Signature Schemes
  6.1 Waters Signature Scheme
  6.2 Identification Schemes
List of Definitions
List of Constructions
Acronyms
Figure 1: Message exchange between A and B using symmetric encryption. E
is the eavesdropper.
1 Introduction
Solomon,
I’m concerned about security; I think, when we email each other,
we should use some sort of code.
A and B, the actors of our communication exchange (fig. 1), share k, the key,
taken from some key space K. The elements of our encryption scheme play the
following roles:
1. Gen outputs a random key from the key space K, and we write this as
k ← $Gen;
2. Enc : K × M → C is the encryption function, mapping a key and a
message to a cyphertext;
3. Dec : K × C → M is the decryption function, mapping a key and a
cyphertext to a message.
1
(Enc, Dec) has perfect secrecy if
∀M, ∀m ∈ M, ∀c ∈ C. Pr[M = m] = Pr[M = m | C = c].
An equivalent condition is that for all m, m′ ∈ M and c ∈ C,
Pr[Enc(k, m) = c] = Pr[Enc(k, m′) = c].
To see that the first condition implies the second, expand with Bayes:
Pr[M = m | C = c] = Pr[M = m ∧ C = c] / Pr[C = c]
so perfect secrecy gives
Pr[M = m] · Pr[C = c] = Pr[M = m ∧ C = c],
i.e., M and C are independent. Conditioning on M = m then yields Pr[Enc(k, m) = c] = Pr[C = c] for every m, and in particular Pr[Enc(k, m′) = c] = Pr[C = c], which gives us the second condition.
Now we want to show that the second condition implies the first. Take any c ∈ C.
Pr[C = c] = Σ_{m′∈M} Pr[C = c ∧ M = m′]
          = Σ_{m′∈M} Pr[C = c | M = m′] · Pr[M = m′]   (by Bayes)
          = Σ_{m′∈M} Pr[Enc(k, M) = c | M = m′] · Pr[M = m′]
          = Σ_{m′∈M} Pr[Enc(k, m′) = c] · Pr[M = m′]
          = Pr[Enc(k, m) = c] · Σ_{m′∈M} Pr[M = m′]
(Enc is independent of M, so we can take it out of the sum, which adds up to 1)
          = Pr[Enc(k, M) = c | M = m] = Pr[C = c | M = m].
We are left to show that Pr[M = m] = Pr[M = m | C = c], but this is easy with Bayes.
Construction 1 (One Time Pad). The message space, the cyphertext space,
and the key space are all the same, i.e., M = K = C = {0, 1}l , with l ∈ N+ .
Encryption and decryption use the xor operation:
• Enc(k, m) = k ⊕ m = c;
• Dec(k, c) = c ⊕ k = (k ⊕ m) ⊕ k = m.
Figure 2: Message exchange between A and B using symmetric authentication.
E is the eavesdropper.
c = k + m
c′ = k + m′
=⇒ c − c′ = m − m′ =⇒ m′ = m − (c − c′).
Proof of theorem 3. Assume, for the sake of contradiction, that |K| < |M|. Fix
M to be the uniform distribution over M, which we can do as perfect secrecy
works for any distribution. Take a cyphertext c ∈ C such that Pr [C = c] > 0,
i.e., ∃m, k such that Enc(k, m) = c.
Consider M′ = {Dec(k, c) : k ∈ K}, the set of all messages decrypted from c using any key. Clearly, |M′| ≤ |K| < |M|, so ∃m′ ∈ M such that m′ ∉ M′.
This means that
Pr[M = m′] = 1/|M| ≠ Pr[M = m′ | C = c] = 0
in contradiction with perfect secrecy.
In the rest of the course we will forget about perfect secrecy, and strive for
computational security, i.e., bound the computational power of the adversary.
1.2 Authentication
The aim of authentication is to avoid tampering of E with the messages ex-
changed between A and B (fig. 2).
A Message Authentication Code (MAC) is defined as a tuple Π = (Gen, Mac, Vrfy),
where:
• Gen, as usual, outputs a random key from some key space K;
• Mac : K × M → Φ maps a key and a message to an authenticator in
some authenticator space Φ;
• Vrfy : K × M × Φ → {0, 1} verifies the authenticator.
As usual, we expect a MAC to be correct, i.e.,
∀m ∈ M, ∀k ∈ K.Vrfy(k, m, Mac(k, m)) = 1.
If the Mac function is deterministic, then it must be that Vrfy(k, m, φ) = 1
if and only if Mac(k, m) = φ.
Security for MACs is that forgery must be hard: you can’t come up with
an authenticator for a message if you don’t know the key.
Definition 2 (Information theoretic MAC). (Mac, Vrfy) has ε-statistical security if for all (possibly unbounded) adversaries A, and for all m ∈ M,
Pr[Vrfy(k, m′, φ′) = 1 ∧ m′ ≠ m : k ← $KeyGen; φ = Mac(k, m); (m′, φ′) ← A(m, φ)] ≤ ε
i.e., the adversary forges a pair (m′, φ′) that verifies with key k only with low probability, even if it knows a valid pair (m, φ).
As an exercise, prove that the above is impossible if ε = 0.
Information theoretic security is also called unconditional security. Later
we’ll see conditional security, based on computational assumptions.
Definition 3 (Pairwise independence). Given a family H = {hk : M → Φ}k∈K of functions, we say that H is pairwise independent if for all distinct m, m′ the pair (hk(m), hk(m′)) ∈ Φ² is uniform over the choice of k ← $K.
We show straight away a construction of a pairwise independent family of functions.
Construction 2 (Pairwise independent function). Let p be a prime. The functions in our family H are defined as
ha,b(m) = a · m + b mod p
with K = Zp², and with M = Φ = Zp.
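For small parameters one can check pairwise independence of Construction 2 exhaustively. A sketch (p = 5 is a toy choice):

```python
from collections import Counter

p = 5  # small prime so we can enumerate the whole key space K = Z_p^2

def h(a, b, m):
    # h_{a,b}(m) = a*m + b mod p
    return (a * m + b) % p

m1, m2 = 1, 3  # any two distinct messages
counts = Counter((h(a, b, m1), h(a, b, m2))
                 for a in range(p) for b in range(p))
# pairwise independence: every pair (phi1, phi2) in Z_p^2 occurs for
# exactly one key (a, b), i.e., with probability 1/p^2 over the key
assert all(counts[(x, y)] == 1 for x in range(p) for y in range(p))
```

Each output pair occurs for exactly one key, because the linear system a·m1 + b = φ1, a·m2 + b = φ2 has a unique solution (a, b) when m1 ≠ m2.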
Theorem 4. Construction 2 is pairwise independent.
Proof of theorem 4. For any m, m′, φ, φ′, we want to find the value of
Pr[a·m + b = φ ∧ a·m′ + b = φ′]
If hk is part of a pairwise independent family of functions, then Mac(k, m) =
hk (m), and Vrfy(k, m, φ) is simply computing hk (m) and comparing it with φ,
i.e.,
Vrfy(k, m, φ) = 1 ⇐⇒ hk (m) = φ.
We now prove that this is an information theoretic MAC.
Theorem 5. Any pairwise independent function gives a 1/|Φ|-statistically secure MAC.
Proof of theorem 5. Take any two distinct m, m′, and two φ, φ′. We show that the probability that Mac(k, m′) = φ′ is exponentially small.
Prk[Mac(k, m) = φ] = Prk[hk(m) = φ] = 1/|Φ|.
The last step comes from the fact that hk is pairwise independent. To see that the construction is 1/|Φ|-statistically secure:
Theorem 6. Any t-time 2⁻λ-statistically secure MAC has a key of size at least (t + 1)λ.
1. Take two samples, b1 ← $B1 and b2 ← $B2 ;
2. if b1 = b2 , sample again;
3. if (b1 = 1 ∧ b2 = 0), output 1; if (b1 = 0 ∧ b2 = 1), output 0.
Pr [B 0 = 1] = Pr [B1 = 1 ∧ B2 = 0] = p(1 − p)
Pr [B 0 = 0] = Pr [B1 = 0 ∧ B2 = 1] = (1 − p)p.
How many trials do we have to make before outputting something? 2(1 − p)p
is the probability that we output something. The probability that we don’t
output anything for n steps is thus (1 − 2(1 − p)p)n .
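The steps above are the von Neumann extractor. A sketch in Python, assuming the biased source is an i.i.d. coin with unknown bias p (the p = 0.9 below is just for the demo):

```python
import random

def von_neumann(biased_bit, max_tries=10_000):
    # turn a biased coin into an unbiased one:
    # sample pairs, output 1 on (1,0), output 0 on (0,1), resample otherwise
    for _ in range(max_tries):
        b1, b2 = biased_bit(), biased_bit()
        if b1 != b2:
            return 1 if (b1, b2) == (1, 0) else 0
    # happens only with probability (1 - 2p(1-p))^max_tries
    raise RuntimeError("no output produced")

random.seed(0)
coin = lambda: 1 if random.random() < 0.9 else 0  # heavily biased, p = 0.9
samples = [von_neumann(coin) for _ in range(2000)]
# the output is balanced despite the 90% input bias
assert 0.4 < sum(samples) / len(samples) < 0.6
```

Both accepted pairs have probability p(1 − p), which is why the output bit is exactly uniform whatever p is.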
2 Computational Cryptography
Definition 4 (One Way Function). A function f : {0, 1}* → {0, 1}* is an OWF if f(x) can be computed in polynomial time for all x, and for all PPT adversaries A it holds that
The 1λ given to the adversary A is there to highlight the fact that A is
polynomial in the length of the input (λ).
Russell Impagliazzo proved that OWFs are equivalent to One Way Puzzles, i.e., pairs (Pgen, Pver) where Pgen(1λ) → (y, x) gives us a puzzle (y) and a solution to it (x), while Pver(x, y) → 0/1 verifies whether x is a solution of y.
Another object of interest in this classification are average hard NP-puzzles,
for which you can only get an instance, i.e., Pgen(1λ ) → y.
Impagliazzo says we live in one of five worlds:
Proof of lemma 1. Assume, for the sake of contradiction, that ∃f such that f(X) ≉c f(Y): then we can distinguish X and Y. Since f(X) ≉c f(Y), ∃p = poly(λ) and D such that, for infinitely many λs,
|Pr[D(f(Xλ)) = 1] − Pr[D(f(Yλ)) = 1]| ≥ 1/p(λ).
D′ runs D(f(z)) and outputs whatever it outputs, and has the same probability of distinguishing X and Y as D, in contradiction with the fact that X ≈c Y.
≤ 2ε(λ). (negligible)
Figure 3: Extending a PRG with 1 bit stretch to a PRG with l bit stretch.
Proof of theorem 7. We'll prove this just for some fixed constant l(λ) = l ∈ N.
First, let's look at the construction (fig. 3). We chain l copies of our PRG G with 1 bit stretch. The PRG G^l that we define takes in input s ∈ {0, 1}λ, computes (s1, b1) = G(s), where s1 ∈ {0, 1}λ and b1 ∈ {0, 1}, outputs b1 and feeds s1 to the second copy of G, and so on until the l-th copy.
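A sketch of this chaining, using SHA-256 as a stand-in for a 1-bit-stretch PRG (a heuristic assumption for the demo, not a proven PRG):

```python
import hashlib

LAM = 16  # seed length in bytes (stands in for lambda bits)

def G(s: bytes):
    # stand-in 1-bit-stretch PRG: maps |s| bytes to (|s| bytes, 1 bit).
    # Treating SHA-256 output as pseudorandom is an assumption.
    d = hashlib.sha256(s).digest()
    return d[:LAM], d[LAM] & 1

def G_l(s: bytes, l: int):
    # iterate G: output bit b_i, feed the state s_i into the next copy
    bits = []
    for _ in range(l):
        s, b = G(s)
        bits.append(b)
    return s, bits  # final state plus l pseudorandom bits

state, bits = G_l(b"\x00" * LAM, 8)
assert len(bits) == 8 and all(b in (0, 1) for b in bits)
```

Outputting the final state together with the l bits gives the λ + l output bits of G^l.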
To show that our construction is a PRG, we define l hybrids, with H0^λ ≡ G^l(Uλ), where G^l : {0, 1}λ → {0, 1}λ+l is our proposed construction, and Hi^λ
The 1/2 in the upper bound tells us that the adversary can't do better than guessing.
Definition 9 (Hard Core Predicate - II). A polynomial time function h : {0, 1}n → {0, 1} is hard core for f : {0, 1}n → {0, 1}n if for all PPT adversaries A
|Pr[A(f(x), h(x)) = 1 : x ← ${0, 1}n] − Pr[A(f(x), b) = 1 : x ← ${0, 1}n; b ← ${0, 1}]| ≤ ε(λ).
Theorem 8. Definition 8 and definition 9 are equivalent.
is hardcore for g.
Definition 10 (One Way Permutation). We say that f : {0, 1}n → {0, 1}n is a One Way Permutation (OWP) if f is an OWF, ∀x. |x| = |f(x)|, and for all distinct x, x′, f(x) ≠ f(x′).
Corollary 1. Let f be a OWP, and consider g : {0, 1}n → {0, 1}n from the
GL theorem. Then G(s) = (g(s), h(s)) is a PRG with stretch 1.
Proof of corollary 1.
UNCLEAR
Assume instead f is an OWF that is 1-to-1 (injective). Consider X = g^m(x) = (g(x1), h(x1), . . . , g(xm), h(xm)), where x1, . . . , xm ∈ {0, 1}n (i.e., x ∈ {0, 1}nm). You can construct a PRG from any OWF, as shown by H.I.L.L.
Now G(s, x) = (s, Ext(s, g^m(x))), where Ext : {0, 1}d × {0, 1}nm → {0, 1}l, and l = nm + 1. This works for m = ω(log(n)). You get extraction error ε ≈ 2⁻m.
where G^{one-time}_{Π,A}(λ, b) is the following "game" (or experiment):
1. pick k ← $K;
2. A outputs two messages (m0, m1) ← A(1λ), where m0, m1 ∈ M and |m0| = |m1|;
3. c = Enc(k, mb), with b the input of the experiment;
4. output b′ ← A(1λ, c), i.e., the adversary tries to guess which message was encrypted.
Let’s look at a construction.
Construction 3 (SKE scheme from PRG). Let G : {0, 1}n → {0, 1}l be a
Pseudo Random Generator (PRG). Set K = {0, 1}n , and M = C = {0, 1}l .
Define Enc(k, m) = G(k) ⊕ m and Dec(k, c) = G(k) ⊕ c.
Theorem 10. If G is a PRG, the SKE in Construction 3 is one-time compu-
tationally secure.
Proof of theorem 10. Consider the following experiments:
• H0(λ, b) is like G^{one-time}_{Π,A}:
1. k ← ${0, 1}n ;
2. (m0 , m1 ) ← A(1λ );
3. c = G(k) ⊕ mb ;
4. b0 ← A(1λ , c).
• H1 (λ, b) replaces G with something truly random:
1. (m0 , m1 ) ← A(1λ );
2. r ← ${0, 1}l ;
3. c = r ⊕ mb , basically like One Time Pad (OTP);
4. b0 ← A(1λ , c).
• H2 (λ) is just randomness:
1. (m0 , m1 ) ← A(1λ );
2. c ← ${0, 1}l ;
3. b0 ← A(1λ , c).
First, we show that H0(λ, b) ≈c H1(λ, b), for b ∈ {0, 1}. Fix some value for b, and assume there exists a PPT distinguisher D between H0(λ, b) and H1(λ, b): we can then construct a distinguisher D′ for the PRG.
D′, on input z, which can be either G(k) for some k ← ${0, 1}n, or directly z ← ${0, 1}l, does the following:
• get (m0, m1) ← D(1λ);
• feed z ⊕ mb to D;
• output the result of D.
Now, we show that H1(λ, b) ≈c H2(λ), for b ∈ {0, 1}. By perfect secrecy of OTP we have that (m0 ⊕ r) ≈ z ≈ (m1 ⊕ r), so H1(λ, 0) ≈c H2(λ) ≈c H1(λ, 1).
Corollary 2. One-time computationally secure SKE schemes are in Minicrypt.
This scheme is not secure if the adversary knows a pair (m1, c1) and we reuse the key: for any other (m, c), c ⊕ c1 = m ⊕ m1, so the adversary can find m. This motivates the Chosen Plaintext Attack (CPA) notion, which we will define shortly and defend against using a Pseudo Random Function (PRF).
3.1 Chosen Plaintext Attacks and Pseudo Random Functions
Definition 13 (Pseudo Random Function). Let F = {Fk : {0, 1}n → {0, 1}l }
be a family of functions, for k ∈ {0, 1}λ . Consider the following two experi-
ments:
• G^real_{F,A}(λ), defined as:
1. k ← ${0, 1}λ;
2. b′ ← A^{Fk(·)}(1λ), where A can query an oracle for values of Fk(·), without knowing k.
• G^rand_{F,A}(λ), defined as:
1. R ← $R(n → l), a uniformly random function from n bits to l bits;
2. b′ ← A^{R(·)}(1λ).
F is a PRF if for all PPT adversaries A, G^real_{F,A}(λ) ≈c G^rand_{F,A}(λ).
Now consider an SKE scheme Π and the CPA game G^cpa_{Π,A}(λ, b):
1. k ← ${0, 1}λ;
2. (m0, m1) ← A^{Enc(k,·)}(1λ). A is given access to an oracle for Enc(k, ·), so she knows some (m, c) pairs, with c = Enc(k, m);
3. c ← Enc(k, mb);
4. b′ ← A^{Enc(k,·)}(1λ, c).
Π is CPA-secure if for all PPT adversaries A
G^cpa_{Π,A}(λ, 0) ≈c G^cpa_{Π,A}(λ, 1).
• Gen takes k ← ${0, 1}λ ;
• Enc(k, m) = (r, Fk (r) ⊕ m), with r ← ${0, 1}n . Note that, since Fk :
{0, 1}n → {0, 1}l , we have that M = {0, 1}l and C = {0, 1}n+l ;
• Dec(k, (c1 , c2 )) = Fk (c1 ) ⊕ c2 .
Our construction is both one time computationally secure, and secure against
CPAs.
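A sketch of Construction 4 in Python, with HMAC-SHA256 standing in for the PRF F (treating HMAC as a PRF is a standard but separate assumption):

```python
import hmac, hashlib, secrets

def F(k: bytes, r: bytes) -> bytes:
    # stand-in PRF F_k : {0,1}^n -> {0,1}^l (HMAC-as-PRF is an assumption)
    return hmac.new(k, r, hashlib.sha256).digest()

def enc(k: bytes, m: bytes):
    r = secrets.token_bytes(16)            # fresh randomness per encryption
    pad = F(k, r)[:len(m)]
    return r, bytes(a ^ b for a, b in zip(pad, m))   # (r, F_k(r) XOR m)

def dec(k: bytes, c):
    r, c2 = c
    pad = F(k, r)[:len(c2)]
    return bytes(a ^ b for a, b in zip(pad, c2))     # F_k(c1) XOR c2

k = secrets.token_bytes(32)
m = b"attack at dawn"
assert dec(k, enc(k, m)) == m
# unlike the one-time scheme, encrypting twice gives different cyphertexts
assert enc(k, m) != enc(k, m)
```

The fresh r per query is what makes repeated encryptions of the same message look independent, which is the heart of the CPA proof below.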
Theorem 11. If F is a PRF, Construction 4 is CPA-secure.
Proof of theorem 11. First, we define the experiment H0(λ, b) ≡ G^cpa_{Π,A}(λ, b) as follows:
1. k ← ${0, 1}λ;
2. (m0, m1) ← A^{Enc(k,·)}(1λ);
3. c* ← (r*, Fk(r*) ⊕ mb), where r* ← ${0, 1}n;
4. output b′ ← A^{Enc(k,·)}(1λ, c*).
Note that in the CPA game the adversary has access to an encryption oracle
using the chosen key.
Now, for the first hybrid H1(λ, b), where we sample a random function R in place of Fk:
1. R ← $R(n → l);
2. (m0, m1) ← A^{Enc(R,·)}(1λ), where now Enc(R, m) = (r, R(r) ⊕ m) for some random r;
3. c* ← (r*, R(r*) ⊕ mb), where r* ← ${0, 1}n;
4. output b′ ← A^{Enc(R,·)}(1λ, c*).
Our first claim is that H0(λ, b) ≈c H1(λ, b) for b ∈ {0, 1}. As usual, we assume there exists an adversary A which can distinguish the experiments, i.e., that can distinguish the oracles, and use A to create an adversary APRF that breaks the PRF.
APRF has access to some oracle O(·), which is one of two possibilities: either O(x) = Fk(x) for k ← ${0, 1}λ, or O(x) = R(x) for R ← $R(n → l).
A gives APRF some message m. APRF picks r ← ${0, 1}n, and queries O(r) to get z ∈ {0, 1}l. Then it gives (r, z ⊕ m) to A. This is repeated as long as A asks for encryption queries.
Then A gives APRF the pair (m0, m1); APRF repeats the same procedure using m0 as a message (to distinguish H0(λ, 0) from H1(λ, 0)) to compute c*. A,
Figure 4: First two levels of a GGM tree.
after receiving c*, asks some more encryption queries, and then outputs b′. If b′ = 1, APRF says R(·); otherwise it says Fk(·).
Now for the third experiment, H2 (λ), which uses Enc(m) = (r1 , r2 ) with
(r1 , r2 ) ← ${0, 1}n+l , i.e., it outputs just randomness.
Our second claim is that H1 (λ, b) ≈c H2 (λ) for b ∈ {0, 1}. To see this, note
that H1 and H2 are identical as long as collisions don’t happen when choosing
the rs. It suffices for us to show that collisions happen with small probability.
Call Ei,j the event "random ri collides with random rj". The event of a collision is thus E = ⋁i,j Ei,j, and its probability can be upper bounded as follows:
Pr[E] ≤ Σi,j Pr[Ei,j] = Σi,j Coll(Un) ≤ (q choose 2) · 2⁻ⁿ ≤ q²/2ⁿ
where q is the (polynomial) number of queries that the adversary makes, and Coll(Un) is the probability of a collision between two uniform samples, which is 2⁻ⁿ.
Theorem 12 (GGM, 1982). PRFs can be constructed from PRGs.
Corollary 3. PRFs are in Minicrypt.
Construction 5 (GGM tree). Assume we have a length doubling PRG G : {0, 1}λ → {0, 1}2λ. We write G(x) ≜ (G0(x), G1(x)) to distinguish the first λ bits from the second λ bits.
Now, to build the PRF we construct a Goldreich-Goldwasser-Micali (GGM) tree (fig. 4) starting with a key k ∈ {0, 1}λ. On input x = (x1, . . . , xn) ∈ {0, 1}n, with n being the height of the tree, the PRF picks a path in the tree:
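A sketch of the GGM tree walk, with a hash standing in for the two halves G0, G1 of a length-doubling PRG (a heuristic stand-in, not a proven PRG):

```python
import hashlib

LAM = 16  # state size in bytes

def G0(s: bytes) -> bytes:
    # left half of the stand-in length-doubling PRG (assumption for the demo)
    return hashlib.sha256(b"0" + s).digest()[:LAM]

def G1(s: bytes) -> bytes:
    # right half
    return hashlib.sha256(b"1" + s).digest()[:LAM]

def ggm_prf(k: bytes, x: str) -> bytes:
    # walk the tree: bit x_i selects which half of the PRG output to follow
    s = k
    for bit in x:
        s = G1(s) if bit == "1" else G0(s)
    return s

k = b"\x42" * LAM
assert ggm_prf(k, "0101") != ggm_prf(k, "0100")  # different leaves differ
assert ggm_prf(k, "0101") == ggm_prf(k, "0101")  # deterministic in (k, x)
```

Each input x selects one leaf of a depth-n binary tree, so the PRF is defined on {0, 1}^n while only the root key k is ever stored.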
Lemma 3. Let G : {0, 1}λ → {0, 1}2λ be a PRG. Then for all t(λ) = poly(λ) we have that
(G(k1), . . . , G(kt)) ≈c (U2λ, . . . , U2λ)   (t copies of U2λ);
thus H0(λ) = (G(k1), . . . , G(kt)) and Ht(λ) = (U2λ, . . . , U2λ). To prove that H0(λ) ≈c Ht(λ), we show that for any i it holds that Hi(λ) ≈c Hi+1(λ). This relies on the fact that G(kt−i) ≈c U2λ: assume there exists a distinguisher D for Hi(λ) and Hi+1(λ); we then break the PRG.
We build D′, which takes in input some z coming either from G(kt−i) or from U2λ. D′ samples k1, . . . , kt−(i+1) ← ${0, 1}λ, feeds (G(k1), . . . , G(kt−(i+1)), z, U2λ, . . . , U2λ) to D, and returns whatever D returns.
We say that Π is Unforgeable under Chosen Message Attack (UFCMA) if for all PPT adversaries A it holds that
Pr[G^ufcma_{Π,A}(λ) = 1] ≤ negl(λ).
Now, for the real one. To extend the domain of a PRF F we need a function h : {0, 1}nt → {0, 1}n for which it is hard to find a collision, i.e., two distinct messages m′, m″ such that h(m′) = h(m″). To do this, we introduce Collision Resistant Hash Functions (CRHs), an object found in Cryptomania. We add a key to the hash function.
is a PRF.
1. R ← $R(N → l);
2. b ← AR(·) (1λ ).
Consider also the hybrid H$,H,A (λ):
1. s ← ${0, 1}λ ;
2. R ← $R(n → l);
3. b ← AR(hs (·)) (1λ ).
The first claim, i.e., that G^real_{F(H),A}(λ) ≈c H$,H,A(λ), is left as an exercise.
The second claim is that H$,H,A(λ) ≈c G^rand_{$,A}(λ). Assume the adversary asks q distinct queries. Consider the event E = "∃(xi, xj) such that hs(xi) = hs(xj) with i ≠ j", with i, j ≤ q. If E doesn't happen, H$,H,A(λ) and G^rand_{$,A}(λ) are the same. This event is the same as first fixing all the inputs that the adversary wants to try, and then sampling s ← ${0, 1}λ, so its probability can be bounded as
Pr[E] = Pr[∃i, j : hs(xi) = hs(xj)] ≤ (q choose 2) · ε ≤ q² · ε.
Construction 7 (UHF with Galois field). Let F be a finite field, such as the Galois field with 2n elements. In the Galois field, a bit string represents the coefficients of a polynomial of degree n − 1. Addition is the usual polynomial addition (bitwise xor), while for multiplication an irreducible polynomial p(x) of degree n is fixed, and the operation is carried out modulo p(x).
We pick s ∈ F, and x = x1 || . . . || xt with xi ∈ F for all i. The hash function is defined as
hs(x) = hs(x1 || . . . || xt) = Σᵢ₌₁ᵗ xi · sⁱ⁻¹ = Qx(s).
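A sketch of Construction 7 over the toy field GF(2⁸): the modulus 0x11B, i.e., x⁸ + x⁴ + x³ + x + 1, is an assumed example of an irreducible polynomial (the AES one); a real UHF would use a much larger n:

```python
def gf_mul(a: int, b: int, mod=0x11B, n=8):
    # carry-less multiplication of polynomials over GF(2),
    # reduced modulo the fixed irreducible polynomial
    res = 0
    while b:
        if b & 1:
            res ^= a          # add (xor) the current shift of a
        b >>= 1
        a <<= 1
        if (a >> n) & 1:      # degree reached n: reduce mod p(x)
            a ^= mod
    return res

def h(s, blocks):
    # h_s(x_1 || ... || x_t) = sum_i x_i * s^(i-1) in GF(2^8)
    acc, power = 0, 1         # power = s^(i-1)
    for x in blocks:
        acc ^= gf_mul(x, power)   # field addition is xor
        power = gf_mul(power, s)
    return acc

assert gf_mul(0x53, 0xCA) == 0x01       # known inverse pair in the AES field
assert h(0x02, [0x01, 0x01]) == 0x03    # x1*1 + x2*s = 1 ^ 2 = 3
```

The hash is the polynomial Qx evaluated at the secret point s; two distinct messages agree only on the roots of Qx − Qx′, hence on at most t − 1 values of s.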
Figure 5: Construction of the CBC-MAC.
              FIL-PRF   FIL-MAC   VIL-MAC
F(H)             ✓         ✓
CBC-MAC          ✓
E-CBC-MAC        ✓         ✓         ✓
XOR-MAC          ✓         ✓
Table 1: Constructions for FIL-PRF, FIL-MAC, and VIL-MAC.
1. k ← ${0, 1}λ ;
2. (m0 , m1 ) ← AEnc(k,·),Dec(k,·) (1λ );
3. c ← Enc(k, mb );
4. b′ ← A^{Enc(k,·),Dec*(k,·)}(1λ, c), where Dec* does not accept c.
Π is CCA-secure if for all PPT adversaries A we have that
G^cca_{Π,A}(λ, 0) ≈c G^cca_{Π,A}(λ, 1).
We'll now build Authenticated Encryption. It has both CPA security and Integrity of cyphertexts (INT), i.e., it's hard for the adversary to generate a valid cyphertext not queried to the encryption oracle.
As an exercise, formalise the fact that CPA and INT imply CCA, i.e., reduce CCA to CPA and INT.
Any CPA-secure encryption scheme, together with a MAC, gives you CCA
security. This is called an encrypted MAC.
Construction 8 (Encrypted MAC). Consider the encryption scheme Π1 = (Gen, Enc, Dec), with key space K1, and the MAC Π2 = (Gen, Mac, Vrfy), with key space K2. We build the encryption scheme Π′ = (Gen′, Enc′, Dec′).
Proof of theorem 18. We need to show that Π0 is both CPA and INT.
Figure 6: The Feistel permutation.
Theorem 19 (Luby-Rackoff). If F = {Fk : {0, 1}n → {0, 1}n }k∈{0,1}λ is a
PRF, then ψF [3] is a PRP and ψF [4] is a strong PRP.
We will prove only that ψF [3] is a PRP.
Theorem 20. If F is a PRF, ψF [3] is a PRP.
Recall that
ψF[3](x, y) = ψFk₃(ψFk₂(ψFk₁(x, y))).
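A sketch of ψF[3] and its inverse, with HMAC standing in for the round PRF (an assumption for the demo). Note that the Feistel network is a permutation even though the round function need not be:

```python
import hmac, hashlib

def F(k: bytes, y: bytes) -> bytes:
    # stand-in round PRF, truncated to the half-block length
    return hmac.new(k, y, hashlib.sha256).digest()[:len(y)]

def feistel_round(k: bytes, x: bytes, y: bytes):
    # one Feistel round: (x, y) -> (y, x XOR F_k(y))
    return y, bytes(a ^ b for a, b in zip(x, F(k, y)))

def psi3(keys, x, y):
    for k in keys:
        x, y = feistel_round(k, x, y)
    return x, y

def psi3_inv(keys, x, y):
    # undo each round in reverse order: (x, y) -> (y XOR F_k(x), x)
    for k in reversed(keys):
        x, y = bytes(a ^ b for a, b in zip(y, F(k, x))), x
    return x, y

keys = [b"k1", b"k2", b"k3"]
x, y = b"left....", b"right..."
assert psi3_inv(keys, *psi3(keys, x, y)) == (x, y)
```

Invertibility needs only xor-ing F_k(y) back out, which is why any PRF yields a permutation here.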
Proof of theorem 20. Consider these experiments:
H0 : (x, y) →[ψFk₁] (x1, y1) →[ψFk₂] (x2, y2) →[ψFk₃] (x3, y3)
H1 : (x, y) →[ψR₁] (x1, y1) →[ψR₂] (x2, y2) →[ψR₃] (x3, y3).
H2 is just like H1 , but we stop if y1 “collides”. A collision happens when
we have (x, y) → (x1 = y, y1 = x ⊕ R1 (y)), and y1 = y.
In H3 we replace y2 = R2 (y1 ) ⊕ x1 with y2 ← ${0, 1}n , and set x2 = y1 .
In H4 we replace y3 = R3 (y2 ) ⊕ x2 with y3 ← ${0, 1}n , and set x3 = y2 .
In H5 we directly map (x, y) →[R] (x3, y3), with R ← $R(2n → 2n) being a random permutation.
First claim: H0 ≈c H1 , since we replaced the PRF with truly random
functions. The proof is the usual proof by hybrids.
Second claim: H1 ≈c H2. Consider the event E of a collision, defined as "∃(x, y) ≠ (x′, y′) such that x ⊕ R1(y) = x′ ⊕ R1(y′), i.e., x ⊕ x′ = R1(y) ⊕ R1(y′)". If y = y′, there can't be a collision, so we can assume that y ≠ y′. So the probability of a collision is:
Pr[E] = Pr[R1(y) ⊕ R1(y′) = x ⊕ x′] ≤ 2⁻ⁿ.
Domain Extension for Pseudo Random Permutations
Assume we have a message m = m1 || . . . ||mt , with mi ∈ {0, 1}n , and you are
given a PRP from n bits to n bits.
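The scheme analysed below encrypts m = m1 || . . . || mt as c0 = r, ci = Fk(r + i − 1 mod N) ⊕ mi, essentially counter mode. A sketch with HMAC as a stand-in PRF (an assumption for the demo):

```python
import hmac, hashlib, secrets

N_BYTES = 8
N = 2 ** (8 * N_BYTES)   # counter space size

def F(k: bytes, ctr: int) -> bytes:
    # stand-in PRF applied to the counter value
    return hmac.new(k, ctr.to_bytes(N_BYTES, "big"), hashlib.sha256).digest()[:16]

def enc(k: bytes, blocks):
    r = secrets.randbelow(N)             # c_0 = r, a random starting counter
    c = [r]
    for i, m in enumerate(blocks):       # c_i = F_k(r + i mod N) XOR m_i
        pad = F(k, (r + i) % N)
        c.append(bytes(a ^ b for a, b in zip(pad, m)))
    return c

def dec(k: bytes, c):
    r, blocks = c[0], c[1:]
    return [bytes(a ^ b for a, b in zip(F(k, (r + i) % N), x))
            for i, x in enumerate(blocks)]

k = secrets.token_bytes(32)
msg = [b"block one.......", b"block two......."]
assert dec(k, enc(k, msg)) == msg
```

Security rests on the counter intervals of different queries rarely overlapping, which is exactly the event bounded in the proof.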
Proof of theorem 21. Consider the game H0 (λ, b), defined as:
1. k ← ${0, 1}λ ;
2. the adversary asks encryption queries, for messages m = m1 || . . . ||mt :
• c0 = r ← $[N ] (with N = 2n );
• ci = Fk (r + i − 1 mod N ) ⊕ mi ;
• output c0 ||c1 || . . . ||ct .
3. challenge: the adversary gives (m*0, m*1), and take m*b = m*b,1 || . . . || m*b,t (both messages have the same length);
4. compute c* from m*b and output it to the adversary;
5. adversary asks more encryption queries, then it outputs b0 .
Now consider the hybrid H2(λ), which outputs a uniformly random challenge cyphertext c* ← ${0, 1}n(t+1). We claim that H1(λ, b) ≈s H2(λ) for b ∈ {0, 1}, i.e., they are statistically indistinguishable: we allow unbounded adversaries that can only ask a polynomial number of queries.
Consider c*, and the values of R(r*), R(r* + 1), . . . , R(r* + t* − 1). Then, consider the i-th encryption query, ci, and the values of R(ri), R(ri + 1), . . . , R(ri + ti − 1). If R(ri + j) is always different from R(r* + j′) for any j′, this is basically OTP. So, calling E the event "∃j1, j2 such that R(ri + j1) = R(r* + j2)", it suffices to show that Pr[E] ≤ negl(λ).
Assume there are q queries, fix a maximum length t of the queried messages, and let Ei, for i ∈ [q], be the event of an overlap happening at query i. Clearly Pr[E] ≤ Σᵢ₌₁q Pr[Ei]. For Ei to happen, we must have that r* − t + 1 ≤ ri ≤ r* + t − 1 (this is not tight!). So the size of the interval in which ri must fall is 2t − 1. Then, Pr[Ei] = (2t − 1)/2ⁿ, and Pr[E] ≤ q · (2t − 1)/2ⁿ, which is negligible.
1. s ← ${0, 1}λ;
2. (x, x′) ← A(1λ, s);
3. output 1 if x ≠ x′ ∧ hs(x) = hs(x′).
H is a CRH if for all PPT adversaries A there is a negligible function ε(λ) such that
Pr[G^CR_{H,A}(λ) = 1] ≤ ε(λ).
1. Start with a compression function hs : {0, 1}2n → {0, 1}n and obtain a domain extension, i.e., a function h′s : {0, 1}* → {0, 1}n.
2. Build a Collision Resistant (CR) compression function.
Figure 7: The Merkle-Damgard Collision Resistant Hash Function.
With the Merkle tree one could give a partially hashed version of a file.
Theorem 22. Merkle-Damgard (MD) (Construction 9) gives a CRH H′ = {h′s : {0, 1}tn → {0, 1}n} if the function H = {hs : {0, 1}2n → {0, 1}n} is CR.
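A sketch of the MD iteration, with a truncated SHA-256 standing in for the compression function hs; the zero padding and the final length block are illustrative choices (the length block is what blocks trivial padding collisions):

```python
import hashlib

N = 16  # chaining value size in bytes; compression maps 2N bytes -> N bytes

def compress(chain: bytes, block: bytes) -> bytes:
    # stand-in compression function h_s : {0,1}^2n -> {0,1}^n
    return hashlib.sha256(chain + block).digest()[:N]

def md_hash(msg: bytes, iv: bytes = b"\x00" * N) -> bytes:
    # pad with zeros to a block boundary, then append a block encoding |msg|
    padded = msg + b"\x00" * (-len(msg) % N)
    padded += len(msg).to_bytes(N, "big")
    state = iv
    for i in range(0, len(padded), N):      # absorb one block per step
        state = compress(state, padded[i:i + N])
    return state

assert md_hash(b"abc") != md_hash(b"abc\x00")  # zero padding alone would collide
assert len(md_hash(b"x" * 100)) == N
```

A collision on the full hash walks back through the chain to a collision on some single call of the compression function, which is the content of Theorem 22.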
For this construction to be CR, it should be hard to find (x, s) ≠ (x′, s′) such that Es(x) ⊕ x = Es′(x′) ⊕ x′.
There's something strange going on: a block cypher is a PRP, and OWFs give us PRPs, but we know that it's impossible to get CRHs from OWFs (in a black-box way). Furthermore, this is actually insecure for concrete PRPs. Let E be a PRP, and define E′ such that E′(0n, 0n) = 0n and E′(1n, 1n) = 1n, and otherwise E′(x) = E(x). This is still a PRP, but E′(0n, 0n) ⊕ 0n = E′(1n, 1n) ⊕ 1n.
We must assume that we have no bad keys. This is called the Ideal Cypher
Model (ICM).
In the ICM, one has a truly random permutation E(·)(·) for every key. When the adversary asks for (s1, x1), choose a random y1 ← ${0, 1}n and set Es1(x1) = y1. On a later query (s1, xi), pick yi ← ${0, 1}n \ ⋃ⱼ₌₁ⁱ⁻¹ {yj}, and set Es1(xi) = yi.
Theorem 24. Davies-Meyer is CR in ICM.
Proof of theorem 24. As usual, we consider a series of experiments.
H0(λ) ≡ G^CR_{DM,A}(λ).
A makes two types of queries: forward queries, in the form (s, x), to get Es(x), and backward queries, in the form (s, y), to get Es⁻¹(y). Imagine keeping track of the value z = x ⊕ y for each query.
Now, we define H1 (λ): after picking y for some (s, x), we don’t remove y
from the range (and the same for backward queries).
The first claim is that, assuming A asks q queries,
|Pr[H0(λ) = 1] − Pr[H1(λ) = 1]| ≤ (q²/2) · 2⁻ⁿ.
To verify this, note that H0 and H1 are distinguishable if and only if a collision happens in Es(·) or in Es⁻¹(·).
H2 (λ): check for collisions in z = x ⊕ y. If not all z values are distinct,
abort.
The second claim is that
|Pr[H1(λ) = 1] − Pr[H2(λ) = 1]| ≤ (q²/2) · 2⁻ⁿ.
Verification is as usual.
H3(λ): when A outputs a collision (x, s), (x′, s′), check that (x, s, y, z) and (x′, s′, y′, z′) are in the table. If not, define the entries.
Third claim:
|Pr[H2(λ) = 1] − Pr[H3(λ) = 1]| ≤ 4q/2ⁿ.
y can collide with q values, with probability q/2ⁿ. The same goes for z, y′, z′, so we get 4q/2ⁿ.
H4(λ): there are no collisions on z. Pr[H4(λ) = 1] = 0, since distinct table entries (x, s, y, z), (x′, s′, y′, z′) imply z ≠ z′.
H3 ≈c H4, and we're done.
3.7 Claw-Free Permutations
We introduce a new assumption: Claw-Free Permutations (CFPs). A claw is a pair of functions (f1, f0) and two values (x1, x0) such that f1(x1) = f0(x0), with f1 ≠ f0.
1. pk ← $Gen(1λ );
2. (x0 , x1 ) ← A(pk);
3. output 1 ⇐⇒ f0 (pk, x0 ) = f1 (pk, x1 ).
Theorem 25. If Π is a CFP, Hpk (Construction 13) is collision resistant.
Proof of theorem 25. Assume H is not CR: we can then find a claw for Π, i.e., from a collision we find (x, m) ≠ (x′, m′) such that Hpk(x||m) = Hpk(x′||m′).
m = ml . . . mi . . . m1,
m′ = ml . . . m′i . . . m′1.
Assume, without loss of generality, that mi = 0 and m′i = 1. Since from position i + 1 onwards we apply the same sequence of permutations, the collision must be at position i. Consider
y′ = f⁻¹_{m_{i+1}}(. . . f⁻¹_{m_l}(y) . . . ),
x0 = f_{m_{i−1}}(. . . f_{m_1}(x) . . . ),
x1 = f_{m′_{i−1}}(. . . f_{m′_1}(x) . . . ).
4 Number Theory
We use modular arithmetic, with modulus some n, in (Zn , +, ·).
(Zn , +) is a group, i.e., it has an identity (or null) element, it’s closed,
commutative, associative, and has an inverse for each element. (Zn , ·) is not a
group, as you don’t have an inverse for everyone.
For the first implication, a common divisor of a and b divides a − q · b = a
mod b.
For the second implication, a common divisor of b and a mod b divides
b · q + a mod b = a.
Theorem 26. Given a, b we can compute gcd(a, b) in time polynomial in max{‖a‖, ‖b‖}. Also, we can find u, v such that a · u + b · v = gcd(a, b).
Proof of theorem 26. We can use lemma 5 recursively:
a = b · q1 + r1 (0 ≤ r1 < b)
gcd(a, b) = gcd(b, r1 ) (r1 = a mod b)
b = r1 · q2 + r2 (0 ≤ r2 < r1 )
gcd(b, r1 ) = gcd(r1 , r2 ) (r2 = b mod r1 )
...
ri = ri−1 · qi+1 + ri+1 .
Repeat until rt+1 = 0, we then have that gcd(a, b) = rt .
We prove that ri+2 ≤ ri/2 for all 0 ≤ i ≤ t − 2. Clearly, ri+1 < ri. If ri+1 ≤ ri/2, then it's trivial. If ri+1 > ri/2, then
ri+2 = ri mod ri+1 = ri − qi+2 · ri+1 ≤ ri − ri+1 < ri − ri/2 = ri/2.
Take a ∈ Zn such that gcd(a, n) = 1, then there are u, v such that a · u +
n · v = gcd(a, n) = 1. But then a · u ≡ 1 mod n, so u is the inverse of a.
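The recursion above, tracking the Bézout coefficients u, v alongside the remainders, is the extended Euclidean algorithm:

```python
def egcd(a, b):
    # returns (g, u, v) with a*u + b*v = g = gcd(a, b)
    if b == 0:
        return a, 1, 0
    g, u, v = egcd(b, a % b)
    # unwind: if b*u + (a mod b)*v = g, then a*v + b*(u - (a//b)*v) = g
    return g, v, u - (a // b) * v

def inverse(a, n):
    # the inverse of a mod n exists iff gcd(a, n) = 1: a*u + n*v = 1
    g, u, _ = egcd(a, n)
    assert g == 1, "a must be coprime with n"
    return u % n

g, u, v = egcd(240, 46)
assert g == 2 and 240 * u + 46 * v == 2
assert (7 * inverse(7, 26)) % 26 == 1
```

This is exactly how modular inverses are computed in the RSA key generation below.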
Another operation in Zn is modular exponentiation, i.e., ab mod n. The square and multiply algorithm computes it in polynomial time.
Write b in binary, as bt bt−1 . . . b0, so b ∈ {0, 1}t+1. Since b = Σᵢ₌₀ᵗ bi · 2ⁱ, we can write
a^b = a^(Σᵢ₌₀ᵗ bi·2ⁱ) = ∏ᵢ₌₀ᵗ a^(2ⁱ·bi) = ∏ᵢ₌₀ᵗ (a^(2ⁱ))^bi = a^b0 · (a²)^b1 · (a⁴)^b2 · · · (a^(2ᵗ))^bt.
Theorem 28 (Miller-Rabin '80s, Agrawal-Kayal-Saxena '02). You can test in polynomial time whether x is prime.
Assumption. The Discrete Log (DL) problem, that is finding x given g^x ∈ G, is hard, i.e., for all Probabilistic Polynomial Time (PPT) A,
y² = x³ + ax² + bx + c mod p.
Proof of Observation 1. Assume DL is easy: if we can compute y = logg(g^y), then it's easy to find g^xy = (g^x)^y, breaking CDH. So CDH hardness implies DL hardness.
(g, g^x, g^y, g^xy) ≈c (g, g^x, g^y, g^z)
(the left is a DDH tuple, the right a non-DDH tuple)
Observation 2. DDH =⇒ CDH is true. CDH =⇒ DDH may not be true.
Construction 14 (PRG from DDH). We can construct directly from DDH a PRG with polynomial stretch
Gg,q : Zq^(l+1) → G^(2l+1)
defined as
Gg,q(x, y1, . . . , yl) = (g^x, g^y1, g^(x·y1), . . . , g^yl, g^(x·yl)).
Theorem 30. If DDH holds, Construction 14 is a PRG.
Proof of theorem 30. Using the standard hybrid argument, since DDH is ε-
hard, we would use l hybrids, obtaining that the above PRG is (ε · l)-secure.
This proof is not tight, and we want to make a tight one.
We make a direct reduction to DDH. Let ADDH be an adversary who is
given (g, g x , g y , g z ) with z = β + xy, where β is either 0 (so that is a DDH
tuple) or β ← $Zq (non-DDH tuple).
We want to prove that
(g^x, g^y1, g^(x·y1), . . . , g^yl, g^(x·yl)) ≈c (g^x, g^y, g^z, . . . )
by generating exactly the first if the tuple is DDH, and exactly the other one if the tuple is not DDH.
For j ∈ [l], pick uj, vj ← $Zq, and define g^yj = (g^y)^uj · g^vj and g^zj = (g^z)^uj · (g^x)^vj. ADDH returns the same as
APRG(g^x, g^y1, g^z1, . . . , g^yl, g^zl).
This trick generates many tuples in the correct way: just look at the expo-
nents.
yj = uj · y + vj
zj = uj · z + vj · x = uj · β + uj · x · y + vj · x = uj · β + x · (uj · y + vj) = uj · β + x · yj
so zj is uniformly random in Zq if β ← $Zq, and equals x · yj if β = 0, with yj ← $Zq in both cases. So breaking the PRG is equivalent to breaking DDH, since ADDH can perfectly simulate the PRG, query APRG, and break DDH.
a is a vector of exponents:
Fq,g,a(x1, . . . , xn) = (g^a0)^(∏ᵢ₌₁ⁿ aᵢ^xᵢ).
DDH =⇒ FNR is a PRF. The proof can be sketched in a similar way as
the proof for Goldreich-Goldwasser-Micali (GGM). Note that this is like an
optimised version of GGM.
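A sketch of the Naor-Reingold PRF on toy parameters (q = 11, p = 23, g = 4 give a tiny order-q subgroup of Z*23; real instantiations need cryptographically large groups):

```python
import random

q, p, g = 11, 23, 4   # toy order-q subgroup of Z_23* (illustrative only)
random.seed(1)
a = [random.randrange(1, q) for _ in range(5)]   # key: a_0 .. a_4

def nr_prf(x_bits):
    # F(x) = (g^{a0})^{prod_i a_i^{x_i}}; the exponent is built in Z_q
    e = a[0]
    for a_i, x_i in zip(a[1:], x_bits):
        if x_i:
            e = e * a_i % q
    return pow(g, e, p)

assert nr_prf([0, 0, 0, 0]) == pow(g, a[0], p)
assert pow(nr_prf([1, 0, 1, 1]), q, p) == 1   # outputs land in the order-q subgroup
```

One multiplication in Z_q per input bit plus a single exponentiation makes this far cheaper than one PRG call per bit as in GGM.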
4.5 Number Theoretic Claw-Free Permutation
Construction 16 (Number Theoretic CFP).
1. params = (G, g, q) ← GroupGen(1λ);
2. y ← $G, pk = (params, y);
3. define f = (f0, f1), with:
f0(pk, x0) = g^x0,
f1(pk, x1) = y · g^x1.
The associated hash function is
Hpk(x1, x2) = y^x1 · g^x2,   Hpk : Zq² → G.
3. Dec(sk, c) = m.
We require correctness from a PKE scheme:
Now we define games for Chosen Plaintext Attack (CPA), Chosen Cyphertext Attack (CCA1), and CCA2 for PKE.
Definition 24 (CPA for PKE). G^CPA_{A,Π}(λ, b):
ElGamal PKE
Construction 17 (ElGamal). Consider Π = (KGen, Enc, Dec), where:
Note that
c2 / c1^x = (h^r · m) / (g^r)^x = m.
Theorem 32. Under Decisional Diffie-Hellman (DDH), ElGamal (Construc-
tion 17) is CPA-secure.
Proof of theorem 32. Let H0(λ, b) = G^CPA_{Π,A}(λ, b). Then, let H1(λ, b) be:
r, z ← $Zq; c = (g^r, g^z · mb); b′ ← A(h, (c1, c2)).
1. homomorphic: given pk, (c1, c2), (c′1, c′2), with (c1, c2) = Enc(pk, m) and (c′1, c′2) = Enc(pk, m′), note that
(c1 · c′1, c2 · c′2) = (g^(r+r′), h^(r+r′) · (m · m′)) = Enc(pk, m · m′; r + r′);
Property 2 implies that ElGamal is not CCA2-secure: given the challenge (c1, c2), ask for Dec(sk, (c1, c2 · m′)), and check whether the result is m0 · m′ or m1 · m′.
Something cool comes from property 1: the idea of fully homomorphic encryption. Here we can already compute the encryption of the product of two messages.
What if we could do it for every function? Assume we have c = Enc(pk, m), and some function Eval(pk, c, f) which outputs c′ = Enc(pk, f(m)). If you work over a ring, it suffices to support addition and multiplication. You could build a client/server architecture, where the client C wants to compute f(x) without sharing x. Then C computes c = Enc(pk, x), sends c and f to the server S, and obtains c′ = Enc(pk, f(x)).
1. correctness:
∀x ∈ Xpk. f⁻¹(sk, f(pk, x)) = x;
Construction 18 (PKE scheme from TDP). Take (Gen, f, f −1 ), and let h(·)
be hardcore for f . Then do the following:
Theorem 33. If (Gen, f, f^{-1}) is a TDP and h(·) is hardcore for f(·), then the PKE in Construction 18 is CPA-secure.
Since (f(pk, r), h(r)) ≈_c (f(pk, r), U), this works. How many bits can we encrypt?
We know from theorem 9 that every TDP has at least one hardcore bit. This can be extended to O(log(n)) bits for any TDP. For specific TDPs we can get Ω(λ) bits, e.g., for factoring, RSA, and Discrete Log (DL). From a Random Oracle (RO) one can get anything in {0, 1}^*.
Let’s now look at operations modulo n (not necessarily prime). For example, take n = p · q with p, q primes.
Theorem 34 (Chinese Remainder Theorem). Let n_1, ..., n_k be pairwise coprime numbers, and take any a_1, ..., a_k such that 1 ≤ a_i ≤ n_i for all i ∈ [k]. Then ∃! x such that 0 ≤ x < n = ∏_{i=1}^k n_i and x ≡ a_i mod n_i for all i ∈ [k].
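Theorem 34 is constructive; a direct sketch of the reconstruction, using the standard formula x = Σ a_i · m_i · (m_i^{-1} mod n_i) where m_i is the product of the other moduli:

```python
# Reconstruct the unique x mod n = n1*...*nk from residues a_i = x mod n_i,
# for pairwise coprime moduli n_i.
from math import prod

def crt(residues, moduli):
    n = prod(moduli)
    x = 0
    for a_i, n_i in zip(residues, moduli):
        m_i = n // n_i                       # product of the other moduli
        x += a_i * m_i * pow(m_i, -1, n_i)   # term is a_i mod n_i, 0 mod others
    return x % n

# x ≡ 2 mod 3, x ≡ 3 mod 5, x ≡ 2 mod 7  →  x = 23
assert crt([2, 3, 2], [3, 5, 7]) == 23
```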
We can indeed invert it. Since gcd(e, ϕ(n)) = 1, ∃d such that d · e ≡ 1 mod ϕ(n). Consider f_d^{-1}(x) = x^d mod n. Then
    f_d^{-1}(f_e(m)) = (f_e(m))^d mod n = (m^e)^d mod n = m^{e·d} mod n = m.
Assumption: (GenRSA, f_RSA, f_RSA^{-1}) is a TDP, where:
• (n, d, e) ←$ GenRSA(1^λ) with n = p · q and e · d ≡ 1 mod ϕ(n); pk = (n, e), sk = (n, d);
• f_RSA(e, x) = x^e mod n;
• f_RSA^{-1}(d, y) = y^d mod n.
Figure 8: Optimal Asymmetric Encryption Padding.
We have seen that this is correct. The RSA assumption is that, for all PPT A,
    Pr[x' = x : (n, d, e) ←$ GenRSA(1^λ); x ←$ Z_n^*; y = x^e mod n; x' ← A(e, n, y)] ≤ negl(λ).
Under this assumption, we have a 1-bit hardcore predicate, so a 1-bit PKE.
This is a different assumption than factoring. If you can factor n you can
compute ϕ(n) and thus d. RSA implies factoring, but the other way around is
not known.
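The TDP algebra above can be sketched directly. Hypothetical toy primes only; real keys need λ-bit primes, and this is precisely the deterministic "textbook" mode that the next paragraph warns against using as encryption.

```python
# Textbook RSA as a TDP: forward direction easy from pk, inversion easy
# only with the trapdoor d.
p, q = 101, 113
n = p * q                  # modulus
phi = (p - 1) * (q - 1)    # phi(n)
e = 3                      # public exponent, gcd(e, phi(n)) = 1
d = pow(e, -1, phi)        # trapdoor: d*e ≡ 1 mod phi(n)

def f_rsa(e, n, x):
    return pow(x, e, n)

def f_rsa_inv(d, n, y):
    return pow(y, d, n)

x = 4242
assert f_rsa_inv(d, n, f_rsa(e, n, x)) == x   # (x^e)^d = x^{ed} = x mod n
```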
Textbook RSA, i.e., using (GenRSA, f_RSA, f_RSA^{-1}) as is, does not work: it's deterministic! To do RSA encryption in practice, pick r ←$ {0, 1}^t and define Enc(e, m) = m̂^e mod n, where m̂ = r||m.
• This is insecure if t is O(log(λ)).
• If m ∈ {0, 1}, this is secure under RSA.
• If t is between log(λ) and |m|, we don't actually know.
Construction 19 (OAEP - PKCS#1 V.2). Optimal Asymmetric Encryption Padding (OAEP), shown in fig. 8, is basically a 2-round Feistel network with round functions G, H, both hash functions. It's proven to be CCA-secure in the RO model.
• λ0, λ1 ∈ N;
• m' = m||0^{λ1}, r ←$ {0, 1}^{λ0};
• s = m' ⊕ G(r), t = r ⊕ H(s);
• m̂ = s||t;
• Enc(pk, m) = m̂^e mod n.
Theorem 35. OAEP (Construction 19) is CCA-2 secure under RSA in the
RO model.
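The padding layer of Construction 19 can be sketched on its own. Instantiating G and H with truncated SHA-256 is an assumption for illustration (PKCS#1 actually specifies MGF1), as are the byte lengths chosen for λ0, λ1; the result m̂ is what would then go through the RSA permutation.

```python
# OAEP padding sketch: a 2-round Feistel with round functions G, H.
import hashlib
import os

lam0 = lam1 = 16              # byte lengths of r and of the zero padding

def G(r, out_len):            # expand r (out_len <= 32 in this sketch)
    return hashlib.sha256(b"G" + r).digest()[:out_len]

def H(s):
    return hashlib.sha256(b"H" + s).digest()[:lam0]

def oaep_pad(m):
    mp = m + b"\x00" * lam1                               # m' = m || 0^{lam1}
    r = os.urandom(lam0)                                  # r <-$ {0,1}^{lam0}
    s = bytes(a ^ b for a, b in zip(mp, G(r, len(mp))))   # s = m' xor G(r)
    t = bytes(a ^ b for a, b in zip(r, H(s)))             # t = r xor H(s)
    return s + t                                          # m_hat = s || t

def oaep_unpad(m_hat, m_len):
    s, t = m_hat[:m_len + lam1], m_hat[m_len + lam1:]
    r = bytes(a ^ b for a, b in zip(t, H(s)))
    mp = bytes(a ^ b for a, b in zip(s, G(r, len(s))))
    assert mp.endswith(b"\x00" * lam1)    # redundancy check: reject otherwise
    return mp[:m_len]

msg = b"attack at dawn!!"
assert oaep_unpad(oaep_pad(msg), len(msg)) == msg
```

The trailing zero block is the redundancy that lets decryption reject mauled ciphertexts, which is where the CCA security in the RO model comes from.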
    QR_p = {g^0, g^2, ..., g^{p-3}}.
QR_p contains only the even powers of the generator, hence |QR_p| = (p-1)/2. Note also that, since g^{p-1} ≡ g^0 ≡ 1 mod p, we have that g^{(p-1)/2} ≡ -1 mod p.
There is an easy way to check if a number is a square modulo p: Euler's criterion, y ∈ QR_p ⟺ y^{(p-1)/2} ≡ 1 mod p.
Fix p, q ≡ 3 mod 4, i.e., p = 4t + 3 for some t ∈ N (and similarly for q). Consider n = p · q, also called a Blum integer. If this condition is met, squaring is a TDP modulo n.
Let y = x^2 mod p; we claim x = ±y^{t+1} mod p. Indeed (y^{t+1})^2 = y^{2t+2}, and, since p = 4t + 3,
    2t + 2 = (4t + 3 - 3)/2 + 2 = (p-3)/2 + 2 = (p+1)/2 = (p-1)/2 + 1.
Now,
    y^{2t+2} = y^{(p-1)/2 + 1} = y^{(p-1)/2} · y = 1 · y = y.
So x = ±y^{t+1} mod p. Note that -1 = g^{(p-1)/2} ∉ QR_p, because p = 4t + 3 ⟹ (p-1)/2 = (4t+2)/2 = 2t + 1, which is odd, and thus -1 is not a square.
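The derivation above gives a two-line square-root algorithm for p ≡ 3 mod 4; a sketch with a hypothetical toy prime:

```python
# Square roots modulo p = 4t + 3: if y in QR_p, then x = ±y^{(p+1)/4} mod p.
p = 23                      # 23 = 4*5 + 3

def is_qr(y, p):            # Euler's criterion
    return pow(y, (p - 1) // 2, p) == 1

def sqrt_mod(y, p):
    assert p % 4 == 3 and is_qr(y, p)
    x = pow(y, (p + 1) // 4, p)     # = y^{t+1}
    return x, p - x                 # the two roots ±x

x1, x2 = sqrt_mod(13, p)            # 13 = 6^2 mod 23
assert x1 * x1 % p == 13 and x2 * x2 % p == 13
```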
Now, let’s look at f_RABIN(x) = x^2 mod n. Via the CRT, x ↦ (x_p, x_q) and
    x^2 ↦ (x_p^2, x_q^2).
Note that f_RABIN^{-1}(y) is one of the following: (x_p, x_q), (-x_p, x_q), (x_p, -x_q), (-x_p, -x_q).
But doing this without knowing p and q is not easy. One can show that y ∈ QR_n ⟺ (y mod p) ∈ QR_p and (y mod q) ∈ QR_q. Then it's easy to see that just one of the above pre-images is a square modulo n, and this is because only one between x_p and -x_p is a square (and the same goes for x_q, -x_q). Hence
    |QR_n| = ϕ(n)/4.
With p, q you can invert f_RABIN(x) = x^2 = y mod n; without them it's as hard as factoring.
Proof of lemma 6. Recall that f^{-1}(y) ∈ {(±x_p, ±x_q)}. Thus, if x = (x_p, x_q) and z ≠ ±x, then z ∈ {(x_p, -x_q), (-x_p, x_q)}, and as such x + z ∈ {(2x_p, 0), (0, 2x_q)}.
Assume x + z = (2x_p, 0). Then x + z ≡ 0 mod q, but x + z ≢ 0 mod p. This means that gcd(x + z, n) = q: x + z is a multiple of q, which is a divisor of n. After finding q, we can find p = n/q. Done.
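Lemma 6 in action, on a hypothetical toy Blum integer: two square roots of the same value that are not ± of each other immediately reveal a factor via a gcd.

```python
# Factoring n from two "different" square roots of the same y mod n.
from math import gcd

p, q = 7, 11            # both ≡ 3 mod 4, so n = 77 is a Blum integer
n = p * q

x, z = 9, 2             # 9^2 = 81 ≡ 4 mod 77 and 2^2 = 4 mod 77
assert x * x % n == z * z % n
assert z not in (x % n, (-x) % n)           # z ≠ ±x mod n

factor = gcd(x + z, n)  # x + z ≡ 0 mod q but ≢ 0 mod p, so gcd = q
assert factor in (p, q) and n // factor in (p, q)
```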
Proof of theorem 36. Assume not, then ∃A and some p(λ) ∈ poly(λ) such that
• Dec(sk, (c1, c2, c3, c4)): if c4 ≠ c1^{x2} · c2^{y2}, return ⊥; else, return
    c3 / (c1^{x1} · c2^{y1}).
Theorem 37. Under DDH, CS-lite is CCA-1 secure.
Proof of theorem 37. Start with H_0(λ, b) = G^{CCA1}_{A,Π}(λ, b), where H_0(λ, b):
3. b' ← A(pk, c*).
Now consider H_1(λ, b), which changes just the way the cyphertext is computed. Pick r ←$ Z_q, and let g3 = g1^r, g4 = g3^α = g2^r. Then
    c* = (g1^r, g2^r, (g1^r)^{x1} · (g2^r)^{y1} · m_b, (g1^r)^{x2} · (g2^r)^{y2}),
where (g1^r)^{x1} · (g2^r)^{y1} = h1^r and (g1^r)^{x2} · (g2^r)^{y2} = h2^r.
Now, we define H_2(λ, b), where instead of using g4 = g2^r we use g4 = g2^{r'} for an independent random r'. The next claim is that H_1(λ, b) ≈_c H_2(λ, b), by reduction to the DDH.
To finish the proof, we introduce a lemma and two technical claims, that
imply the theorem.
Lemma 7. b is information theoretically hidden in H2 (λ, b) for all A asking
t = poly(λ) Dec queries.
Proof of lemma 7. Remember that g3 = g1^r and g4 = g2^{r'}, and that g2 = g1^α. For the proof we assume that α ≠ 0 and that r ≠ r'.
From h1 = g1^{x1} · g2^{y1}, A knows that
    log_{g1}(h1) = x1 + α · y1.    (2)
There are q possible pairs (x1, y1) satisfying this equation.
We say that query c = (c1, c2, c3, c4) is legal if c1 = g1^{r''} and c2 = g2^{r''} for the same r'', or equivalently if log_{g1}(c1) = log_{g2}(c2).
By Claim 2 and Claim 3, before getting c*, A knows only that (x1, y1) satisfy eq. (2). Then
    c* = (g3, g4, g3^{x1} · g4^{y1} · m_b, g3^{x2} · g4^{y2}).
We want to show that g3^{x1} · g4^{y1} is uniform from A's point of view. Fix an arbitrary h in G. If h = g3^{x1} · g4^{y1}, then
    log_{g1}(h) = r · x1 + α · r' · y1.    (3)
Equation (2) and eq. (3) are independent: consider the system of equations
    [ 1    α   ] [ x1 ]   [ log_{g1}(h1) ]
    [ r    αr' ] [ y1 ] = [ log_{g1}(h)  ]
and call B the coefficient matrix; det(B) = αr' - αr = α(r' - r) ≠ 0.
which is -αr'' + αr'' = 0. This does not tell us anything, so the only way to learn something is a query that is not answered ⊥ and that is illegal.
Claim 3.
These, too, are independent, since det([1, α; r1, αr2]) = α(r2 - r1) ≠ 0. Each illegal query is thus rejected except with probability 1/q, which is negligible since λ ≈ log(q).
Since Claim 2 and Claim 3 imply lemma 7, and that in turn implies the thesis, we are done.
Let’s see where the proof breaks for CCA-2. After receiving c* = (c1*, c2*, c3*, c4*), A knows that
    log_{g1}(c4*) = x2 · log_{g1}(g3) + y2 · log_{g1}(g4).
This is independent of eq. (4), and A can recover x2, y2. A constructs
    c = (g1^{r1}, g2^{r2}, c3, (g1^{r1})^{x2} · (g2^{r2})^{y2}).
A learns m = c3 / (c1^{x1} · c2^{y1}), i.e.
    log_{g1}(c3 / m) = x1 · r1 + α · y1 · r2,
and can recover x1, y1 and decrypt c* to find b.
Real Cramer-Shoup
Construction 21 (Cramer-Shoup). Consider the CS PKE scheme Π = (KGen, Enc, Dec),
where
• KGen(1^λ):
1. (G, g, q) ← GroupGen(1^λ);
2. let g1 = g, take α ←$ Z_q, and let g2 = g1^α;
3. let H : {0, 1}^* → Z_q be a Collision Resistant Hash Function (CRH);
4. pick x1, x2, y1, y2, x3, y3 ←$ Z_q;
5. let h_i = g1^{x_i} · g2^{y_i}, for i ∈ {1, 2, 3};
6. pk = (g1, g2, h1, h2, h3), sk = (x1, x2, x3, y1, y2, y3);
• Enc(pk, m): pick r ←$ Z_q, and output
    c = (g1^r, g2^r, h1^r · m, (h2 · h3^β)^r) = (c1, c2, c3, c4),
where β = H(c1, c2, c3).
• Dec(sk, (c1, c2, c3, c4)): compute β = H(c1, c2, c3) and check that c4 = c1^{x2 + βx3} · c2^{y2 + βy3}; if the check fails, return ⊥, else return c3 / (c1^{x1} · c2^{y1}).
Correctness:
    c1^{x2 + βx3} · c2^{y2 + βy3} = (g1^r)^{x2} · (g1^r)^{βx3} · (g2^r)^{y2} · (g2^r)^{βy3}
                                  = (g1^{x2} · g2^{y2})^r · (g1^{x3} · g2^{y3})^{βr}
                                  = h2^r · h3^{βr} = (h2 · h3^β)^r.
The proof of CCA-2 security looks very similar to the proof of CCA-1 security for CS-lite (theorem 37).
Theorem 38. The CS PKE scheme (Construction 21) is CCA-2 under DDH.
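A toy run of the mechanics of Construction 21. The parameters are hypothetical tiny values and the stand-in "hash" H is not collision resistant, so this only illustrates how encryption, the c4 consistency check, and rejection of mauled ciphertexts fit together.

```python
# Toy Cramer-Shoup over the order-83 subgroup of Z_167^*.
import random

p, q = 167, 83                 # q | p - 1; G = subgroup of order q
g1 = 4                         # generator of G

def H(c1, c2, c3):             # stand-in for the CRH into Z_q (assumption)
    return (1000003 * c1 + 1009 * c2 + c3) % q

# KGen
alpha = random.randrange(1, q)
g2 = pow(g1, alpha, p)
xs = [random.randrange(q) for _ in range(3)]   # x1, x2, x3
ys = [random.randrange(q) for _ in range(3)]   # y1, y2, y3
hs = [pow(g1, xs[i], p) * pow(g2, ys[i], p) % p for i in range(3)]

def enc(m):
    r = random.randrange(1, q)
    c1, c2 = pow(g1, r, p), pow(g2, r, p)
    c3 = pow(hs[0], r, p) * m % p
    beta = H(c1, c2, c3)
    c4 = pow(hs[1] * pow(hs[2], beta, p) % p, r, p)   # (h2 * h3^beta)^r
    return c1, c2, c3, c4

def dec(c):
    c1, c2, c3, c4 = c
    beta = H(c1, c2, c3)
    check = pow(c1, (xs[1] + beta * xs[2]) % q, p) * \
            pow(c2, (ys[1] + beta * ys[2]) % q, p) % p
    if check != c4:
        return None                                   # reject: bottom
    inv = pow(pow(c1, xs[0], p) * pow(c2, ys[0], p) % p, p - 2, p)
    return c3 * inv % p                               # c3 / (c1^x1 * c2^y1)

m = 36
c = enc(m)
assert dec(c) == m
c_bad = (c[0], c[1], c[2], (c[3] + 1) % p)            # mauled ciphertext
assert dec(c_bad) is None
```

Note how the mauling attack that broke ElGamal's CCA-2 security now fails: changing any component invalidates the c4 check.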
These have the same distribution, so the first tuple is indistinguishable from (g, g^α, g^r, g^{αr'}). The first hybrid uses the DDH tuple, the second uses the non-DDH tuple.
From the pk, A knows that log_{g1}(h_i) = x_i + α · y_i for i ∈ {1, 2, 3}.
Let's analyse the probability of rejecting illegal queries. Let g3 = g1^r, and let g4 = g2^{r'}, with r ≠ r'. Given c*, A's queries fall in one of these cases:
1. (c1, c2, c3) = (c1*, c2*, c3*), but c4 ≠ c4*. Then β = β* and the valid c4 is unique, so the query is always rejected;
2. (c1, c2, c3) ≠ (c1*, c2*, c3*), but β = β*: this is a collision for H, which contradicts collision resistance;
6 Signature Schemes
We are still inside Public Key Cryptography, but we stop with encryption,
and instead look at a new primitive: Signature Schemes (SSs). Basically, it’s a
primitive just like Message Authentication Code (MAC), but this time without
the two parties sharing a secret.
A big difference between MAC and SS is that in SS the signature is publicly
verifiable. What we want from a SS is that it is hard to forge a message together
with a valid signature.
As usual, we want correctness, i.e., for all (pk, sk) output of KGen, for all m ∈ M_pk,
    Pr[Vrfy(pk, (m, Sign(sk, m))) = 1] ≥ 1 - negl(λ).
Sometimes "bad things" happen, and the signature does not verify: hence the probability.
Definition 29 (UFCMA for SSs). Consider the game G^{ufcma}_{Π,A}(λ):
If we hash the message before signing it, we get this to work in the Random
Oracle (RO) model.
Construction 22 (SS from TDP + RO). We have a Trapdoor Permutation (TDP) (Gen, f, f^{-1}), with X_pk being the domain of the TDP. We assume a function H : {0, 1}^* → X_pk, i.e., the message space is M_pk = {0, 1}^*.
1. KGen(1^λ) = (pk, sk) ← Gen(1^λ);
2. Sign(sk, m) = f^{-1}(sk, H(m)) = σ;
3. Vrfy(pk, (m, σ)) = 1 ⟺ f(pk, σ) = H(m).
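Construction 22 instantiated with textbook RSA as the TDP, sketched below. Using truncated SHA-256 as the oracle H and tiny hypothetical primes are both illustrative assumptions; the structure "sign by inverting the permutation on the hash" is the point.

```python
# Full-domain-hash-style signatures from the RSA TDP.
import hashlib

p, q = 1009, 1013
n, phi = p * q, (p - 1) * (q - 1)
e = 5                              # gcd(5, phi(n)) = 1
d = pow(e, -1, phi)                # sk: the trapdoor

def H(m: bytes) -> int:            # "random oracle" into the domain Z_n
    return int.from_bytes(hashlib.sha256(m).digest(), "big") % n

def sign(m):                       # sigma = f^{-1}(sk, H(m)) = H(m)^d mod n
    return pow(H(m), d, n)

def vrfy(m, sigma):                # accept iff f(pk, sigma) = H(m)
    return pow(sigma, e, n) == H(m)

sigma = sign(b"hello")
assert vrfy(b"hello", sigma)
assert not vrfy(b"hello", (sigma + 1) % n)   # tampered signature rejected
```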
Proof of theorem 39. Assume there exists a PPT adversary A capable of forging, i.e., such that
    Pr[G^{ufcma}_{Π,A}(λ) = 1] ≥ 1/poly(λ).
We use A to break the TDP.
Without loss of generality, we make the following assumptions on A:
Without loss of generality, we make the following assumptions on A:
1. run A(1^λ, pk), pick j ←$ [q_h] (q_h is the number of hash queries of A). With probability ≈ 1/q_h (one over a polynomial) we guess j, the hash query corresponding to the forged message;
6.1 Waters Signature Scheme
Computational Diffie-Hellman (CDH) in bilinear groups.
Definition 30 (Bilinear Group). (G, GT , q, g, ê) ← Bilin(1λ ).
Proof of theorem 41. Let A be a PPT adversary capable of breaking WSS with probability greater than 1/p(λ), with p(λ) ∈ poly(λ). Consider the adversary B, given params = (G, G_T, q, g, ê), g1 = g^a, g2 = g^b, which wants to compute g^{ab}. B(params, g1, g2):
1. set l = k·q_s, with q_s being the number of queries (≪ q);
2. pick integers x0 ←$ [-kl, 0] and x1, ..., xk ←$ [0, l], and y0, ..., yk ←$ Z_q;
4. run A(params, g1, g2, μ0, ..., μk);
5. upon a query m ∈ {0, 1}^k from A:
   • if β = β(m) ≡ 0 mod q, abort;
   • else let γ = γ(m), pick r ←$ Z_q, and output
       σ = (σ1, σ2) = (g2^{βr} · g^{γr} · g1^{-γβ^{-1}}, g^r · g1^{-β^{-1}});
Writing r̄ = r - aβ^{-1}, this is distributed as a real signature:
    σ1 = g2^a · α(m)^{r̄} = g2^a · α(m)^{r - aβ^{-1}}
       = g2^a · g2^{βr} · g^{γr} · g2^{-a} · g^{-aγβ^{-1}} = g2^{βr} · g^{γr} · g1^{-γβ^{-1}},
    σ2 = g^{r̄} = g^r · g^{-aβ^{-1}} = g^r · g1^{-β^{-1}}.
Since r̄ = r - aβ^{-1} is random, we get the same distribution.
To verify point 2: let (σ1*, σ2*) be the forgery, with β(m*) ≡ 0 mod q; it satisfies
    ê(g, σ1*) / ê(σ2*, α(m*)) = ê(g1, g2).
For the abort probability:
    Pr[β(m*) ≠ 0] = kl/(kl + 1);
    (β(m*) = 0 ∧ β(m̄) = 0)  ⟺  { x0 + xj = -∑_{i≠j} xi · m*[i]  and  x0 = -∑_{i≠j} xi · m̄[i] },
a system with a unique (∃!) solution, so
    Pr[β(m*) = 0 ∧ β(mi) = 0] = 1/(kl + 1) · 1/(l + 1).
So the probability of aborting is:
    Pr[E] ≤ kl/(kl + 1) + q_s · 1/(kl + 1) · 1/(l + 1),
which gives us the probability of not aborting:
    Pr[Ē] ≥ 1 - kl/(kl + 1) - q_s · 1/(kl + 1) · 1/(l + 1)
          = 1/(kl + 1) · (1 - q_s/(l + 1))        (l = k·q_s)
          = 1/(kl + 1) · (1 - l/((l + 1)·k))
          ≥ 1/(4·k·q_s + 2) = 1/p'(λ).
Π = (Gen, P, V) is passively secure if for all PPT A
    Pr[G^{id}_{Π,A}(λ) = 1] ≤ negl(λ).
It’s natural to build IDSs using SSs. The idea is to sign random messages
from V.
1. V samples m ← $M, and sends it to P;
2. P computes σ = Sign(sk, m) and sends it to V;
3. V outputs Vrfy(pk, (m, σ)).
Active security is when the IDS resists an adversary that replaces V, and thus chooses what to send to P. The proposed scheme is both passively and actively secure.
Encrypting and decrypting random messages also works as an IDS.
Definition 33 (Canonical IDS). In a canonical IDS we have that P = (P1 , P2 )
and that V = (V1 , V2 ). Just three messages are exchanged:
1. α ← $P1 (pk, sk) is sent to V;
2. β ←$ B_{λ,pk} (sampled by V1), the challenge, is sent to P;
3. γ ← $P2 (pk, sk, α, β) is sent to V;
4. V outputs V2 (pk, α, β, γ) ∈ {0, 1}.
The verifier is randomised: this is called a (three-move) “public coins” ver-
ifier. Sometimes P1 and P2 share a state, i.e., (α, a) ← $P1 (pk, sk) and
γ ← $P2 (pk, sk, α, β, a).
From a canonical IDS we want non-degeneracy, i.e., it’s hard to predict the
first message of the prover:
• Vrfy(pk, (m, σ)) computes β = H(α, m), then returns V2 (pk, α, β, γ).
Theorem 42. If Π is passively secure, Π0 as defined in Construction 24 is
UFCMA.
Let Ei be the event that we abort in the i-th signature query. By the
non-degeneracy property of Π we have that Pr [Ei ] ∈ negl (λ), and thus
    Pr[abort] ≤ ∑_{i=1}^{q_s} Pr[E_i] ≤ q_s · ε(λ).
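The transform of Construction 24 can be sketched concretely by applying it to the Schnorr protocol (Construction 25, later in these notes) as the canonical IDS: the only change from the interactive run is that the challenge β comes from H(α, m) instead of from V.

```python
# Fiat-Shamir transform of the Schnorr IDS (toy group, illustrative only).
import hashlib
import random

p, q, g = 167, 83, 4            # g has order q in Z_167^*
x = random.randrange(1, q)      # sk
y = pow(g, x, p)                # pk

def H(alpha, m):                # RO mapping (alpha, m) into the challenge space
    h = hashlib.sha256(f"{alpha}|{m}".encode()).digest()
    return int.from_bytes(h, "big") % q

def sign(m):
    a = random.randrange(1, q)
    alpha = pow(g, a, p)        # P1's first message
    beta = H(alpha, m)          # replaces V1's random challenge
    gamma = (beta * x + a) % q  # P2's answer
    return alpha, gamma

def vrfy(m, sig):
    alpha, gamma = sig
    beta = H(alpha, m)          # recompute beta, then run V2's check
    return pow(g, gamma, p) * pow(y, (-beta) % q, p) % p == alpha

sig = sign("hello")
assert vrfy("hello", sig)
assert not vrfy("hello", (sig[0], (sig[1] + 1) % q))   # tampered gamma fails
```

This is essentially the (toy-sized) Schnorr signature scheme.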
Two properties are sufficient for an IDS to have passive security.
Definition 34 (Honest Verifier Zero Knowledge). An IDS is Honest Verifier Zero Knowledge (HVZK) if there exists a PPT simulator S such that
HVZK says you don't learn anything from a transcript. It's called the simulation paradigm: if a protocol is HVZK, then there exists a simulator capable of producing transcripts with the right distribution.
Definition 35 (Special Soundness). Consider the game G^{SP}_{Π,A}, defined as:
Special Soundness (SP) is about canonical IDSs: it's hard to produce two accepting transcripts (α, β, γ) and (α, β', γ') with β ≠ β'.
A canonical IDS Π satisfies special soundness if for all PPT A there exists a negligible ε such that
    Pr[G^{SP}_{Π,A}(λ) = 1] ≤ ε(λ).
Theorem 43. Let Π be a canonical IDS; if Π satisfies SP and HVZK, and the size of B_pk is super-polynomial (so that |B_pk|^{-1} is negligible), then Π is passively secure.
Proof of theorem 43. We want a reduction from Passive Security (PS) to SP.
We do one step first.
Let H_0 = G^{id}_{Π,A}, and consider hybrid H_1 where the oracle Trans(pk, sk) is replaced by S(pk). By HVZK, H_0 ≈_c H_1. The proof of this is left as an exercise.
Second step: we claim that Pr[H_1(λ) = 1] is negligible. Assume we have an adversary for PS; we want to break SP. Assume A = (A_1, A_2) breaks H_1 with probability ε(λ) ≥ 1/poly(λ), and consider A' breaking SP:
1. recover pk from G^{SP}_{Π,A'}(λ);
Denote by z the state of A_2 after it sent α, and call p_z = Pr[Z = z], where Z is the Random Variable (RV) corresponding to the state. Write δ_z = Pr[H_1(λ) = 1 | Z = z]. By definition,
    ε(λ) = ∑_{z∈Z} p_z · δ_z = E[δ_Z].
    Pr[G^{SP}_{Π,A'}(λ) = 1] ≥ Pr[G^{SP}_{Π,A'}(λ) = 1 | Good] - Pr[¬Good]     (Pr[¬Good] = Pr[β = β'])
                             = Pr[G^{SP}_{Π,A'}(λ) = 1 | Good] - |B_{pk,λ}|^{-1}
                             = ∑_{z∈Z} p_z · δ_z^2 - |B_{pk,λ}|^{-1} = E[δ_Z^2] - |B_{pk,λ}|^{-1}
                             ≥ (E[δ_Z])^2 - |B_{pk,λ}|^{-1} = ε(λ)^2 - |B_{pk,λ}|^{-1}
                             ≥ 1/poly(λ) - negl(λ) ≈ 1/poly(λ),
where ε(λ)^2 is non-negligible and |B_{pk,λ}|^{-1} is negligible.
So we break SP with non negligible probability.
Construction 25 (Schnorr Protocol). The Schnorr protocol is defined as follows:
• KGen(1^λ): (G, g, q) ← GroupGen(1^λ), sk = x ←$ Z_q, pk = y = g^x;
• P samples a ←$ Z_q and computes α = g^a, and sends α to V;
• V samples β ←$ Z_q = B_{pk,λ}, and sends β to P;
• P computes γ = βx + a mod q, and sends γ to V;
• V checks that g^γ · y^{-β} = α.
Completeness:
    g^γ · y^{-β} = g^{βx+a} · g^{-βx} = g^{βx - βx + a} = g^a = α.
The Schnorr protocol has a simulator: pick β, γ at random, let α = g^γ · y^{-β}, and output (α, β, γ). In a real execution, γ = βx + a is uniform regardless of β, and α is the unique value such that α = g^γ · y^{-β}.
SP holds under the Discrete Log (DL) assumption. Assume A breaks SP with non-negligible probability. A' gets y = g^x for random x ←$ Z_q, and sets pk = y in a game with A. A outputs (α, β, γ, β', γ'), with β ≠ β' and g^γ · y^{-β} = α = g^{γ'} · y^{-β'}. We get that
    g^γ · y^{-β} = g^{γ'} · y^{-β'} ⟹ g^{γ-γ'} = y^{β-β'} ⟹ y = g^{(γ-γ')·(β-β')^{-1}},
so x = (γ - γ')·(β - β')^{-1} mod q.
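The whole of Construction 25 fits in a few lines: an honest run, the HVZK simulator, and the special-soundness extractor recovering x from two accepting transcripts that share α. Toy group parameters, purely illustrative.

```python
# Schnorr protocol end to end over the order-83 subgroup of Z_167^*.
import random

p, q, g = 167, 83, 4

x = random.randrange(1, q)      # sk
y = pow(g, x, p)                # pk

def accepts(alpha, beta, gamma):            # V's check: g^gamma * y^{-beta} = alpha
    return pow(g, gamma, p) * pow(y, (-beta) % q, p) % p == alpha

# Honest execution
a = random.randrange(1, q)
alpha = pow(g, a, p)                        # P: alpha = g^a
beta = random.randrange(q)                  # V: random challenge
gamma = (beta * x + a) % q                  # P: gamma = beta*x + a
assert accepts(alpha, beta, gamma)

# HVZK simulator: sample beta, gamma first, then solve for alpha
beta_s, gamma_s = random.randrange(q), random.randrange(q)
alpha_s = pow(g, gamma_s, p) * pow(y, (-beta_s) % q, p) % p
assert accepts(alpha_s, beta_s, gamma_s)    # accepting, built without x

# Special soundness: same alpha, two challenges -> extract x
beta2 = (beta + 1) % q
gamma2 = (beta2 * x + a) % q                # honest answer to second challenge
x_ext = (gamma - gamma2) * pow(beta - beta2, -1, q) % q
assert x_ext == x                           # x = (gamma-gamma')*(beta-beta')^{-1}
```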
List of Definitions
Perfect secrecy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
Information theoretic MAC . . . . . . . . . . . . . . . . . . . . . . . 5
Pairwise independence . . . . . . . . . . . . . . . . . . . . . . . . . . 5
Computational Diffie-Hellman . . . . . . . . . . . . . . . . . . . . . . 34
Decisional Diffie-Hellman . . . . . . . . . . . . . . . . . . . . . . . . 34
Signature Scheme . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
UFCMA for SSs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
Bilinear Group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
Identification Scheme . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
Passively secure IDS . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
Canonical IDS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
Honest Verifier Zero Knowledge . . . . . . . . . . . . . . . . . . . . . 58
Special Soundness . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
List of Constructions
ElGamal . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
PKE scheme from TDP . . . . . . . . . . . . . . . . . . . . . . . . . 40
OAEP - PKCS#1 V.2 . . . . . . . . . . . . . . . . . . . . . . . . . . 42
Cramer-Shoup lite . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
Cramer-Shoup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
SS from TDP + RO . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
Waters Signature Scheme . . . . . . . . . . . . . . . . . . . . . . . . 52
Fiat-Shamir Transform . . . . . . . . . . . . . . . . . . . . . . . . . . 56
Schnorr Protocol . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
Acronyms
AES Advanced Encryption Standard
AU Almost Universal
DL Discrete Log
CBC Cypher Block Chain
CCA Chosen Cyphertext Attack
CDH Computational Diffie-Hellman
CFB Cypher Feed Back
CFP Claw-Free Permutation
CPA Chosen Plaintext Attack
CR Collision Resistant
CRH Collision Resistant Hash Function
CRT Chinese Remainder Theorem
CS Cramer-Shoup
CTR Counter
DDH Decisional Diffie-Hellman
DH Diffie-Hellman
ECB Electronic Code Book
FIL Fixed Input Length
GGM Goldreich-Goldwasser-Micali
GL Goldreich-Levin
HCP Hard Core Predicate
HVZK Honest Verifier Zero Knowledge
IBE Identity Based Encryption
ICM Ideal Cypher Model
IDS Identification Scheme
INT Integrity (of cyphertext)
MAC Message Authentication Code
MD Merkle-Damgård
NIST National Institute of Standards and Technology
OAEP Optimal Asymmetric Encryption Padding
OTP One Time Pad
OWF One Way Function
OWP One Way Permutation
PKC Public Key Cryptography
PKE Public Key Encryption
PPT Probabilistic Polynomial Time
PRF Pseudo Random Function
PRG Pseudo Random Generator
PRP Pseudo Random Permutation
PS Passive Security
PU Perfect Universal
RO Random Oracle
RSA Rivest-Shamir-Adleman
RV Random Variable
SKE Symmetric Key Encryption
SP Special Soundness
SS Signature Scheme
SSH Secure Shell
TDP Trapdoor Permutation
TLS Transport Layer Security
UFCMA Unforgeable Chosen Message Attack
UHF Universal Hash Function
VIL Variable Input Length
WSS Waters Signature Scheme