
Contents

Lesson 1 4
1.1 Problems in Cryptography . . . . . . . . . . . . . . . . . . . . . . 4
1.2 One Time Pad (OTP) . . . . . . . . . . . . . . . . . . . . . . . . 5

Lesson 2 7
2.1 Message Authentication . . . . . . . . . . . . . . . . . . . . . . . 7
2.1.1 Message Authentication Code (MAC or Tag) . . . . . . . 7

Lesson 3 9
3.1 Randomness Extraction . . . . . . . . . . . . . . . . . . . . . . . 9
3.1.1 Von Neumann Extractor . . . . . . . . . . . . . . . . . . . 9

Lesson 4 12
4.1 Negligible function . . . . . . . . . . . . . . . . . . . . . . . . . . 12
4.2 One-Way Functions . . . . . . . . . . . . . . . . . . . . . . . . . . 13
4.2.1 Impagliazzo’s Worlds . . . . . . . . . . . . . . . . . . . . . 14
4.3 Computational Indistinguishability . . . . . . . . . . . . . . . . . 15
4.4 Pseudorandom Generators . . . . . . . . . . . . . . . . . . . . . . 17

Lesson 5 18
5.1 Stretching a prg . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
5.2 Hardcore predicate . . . . . . . . . . . . . . . . . . . . . . . . . . 21
5.3 One Way Permutation . . . . . . . . . . . . . . . . . . . . . . . . 22

Lesson 6 23
6.1 Computationally secure encryption . . . . . . . . . . . . . . . . . 23
6.2 Pseudorandom functions . . . . . . . . . . . . . . . . . . . . . . . 26
6.2.1 Ggm-tree . . . . . . . . . . . . . . . . . . . . . . . . . . . 28

Lesson 7 29
7.1 Cpa-security . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32

Lesson 8 36
8.1 Domain extension . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
8.1.1 Electronic Codebook mode . . . . . . . . . . . . . . . . . 36
8.1.2 Cipher block chaining mode (cbc) . . . . . . . . . . . . . 37
8.1.3 Counter mode . . . . . . . . . . . . . . . . . . . . . . . . 37

Lesson 9 40
9.1 Message Authentication Codes and unforgeability . . . . . . . . . 40
9.2 Domain extension for mac schemes . . . . . . . . . . . . . . . . . 42
9.2.1 Universal hash functions . . . . . . . . . . . . . . . . . . . 43
9.2.2 Hash function families from finite fields . . . . . . . . . . 46

Lesson 10 48
10.1 Domain extension for prf-based mac schemes . . . . . . . . . . . 48
10.1.1 Hash function families from prfs . . . . . . . . . . . . . . 48
10.1.2 xor-mode . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
10.1.3 cbc-mode mac scheme . . . . . . . . . . . . . . . . . . . 49
10.1.4 XOR MAC . . . . . . . . . . . . . . . . . . . . . . . . . . 49
10.2 Cca-security . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
10.3 Authenticated encryption . . . . . . . . . . . . . . . . . . . . . . 52
10.3.1 Combining ske & mac schemes . . . . . . . . . . . . . . . 53

Lesson 11 56
11.1 Authenticated encryption (continued) . . . . . . . . . . . . . . . 56
11.2 Pseudorandom permutations . . . . . . . . . . . . . . . . . . . . 57
11.2.1 Feistel network . . . . . . . . . . . . . . . . . . . . . . . . 57

Lesson 12 60
12.1 Hashing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
12.1.1 Merkle-Damgård construction . . . . . . . . . . . . . . . . 60
12.1.2 Domain extension . . . . . . . . . . . . . . . . . . . . . . 60
12.1.3 Compression functions . . . . . . . . . . . . . . . . . . . . 60

Lesson 13 61
13.1 Number theory . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
13.2 Standard model assumptions . . . . . . . . . . . . . . . . . . . . 61

Lesson 14 62
14.0.1 Decisional Diffie-Hellman assumption . . . . . . . . . . . 62
14.0.2 Naor-Reingold encryption scheme . . . . . . . . . . . . . . 62
14.1 Public key encryption schemes . . . . . . . . . . . . . . . . . . . 62

Lesson 15 63
15.1 Public key encryption recap . . . . . . . . . . . . . . . . . . . . . 63
15.1.1 Trapdoor permutation . . . . . . . . . . . . . . . . . . . . 64
15.1.2 TDP examples . . . . . . . . . . . . . . . . . . . . . . . . 64
15.2 Textbook RSA . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
15.2.1 Trapdoor Permutation from Factoring . . . . . . . . . . . 66
15.2.2 Rabin’s Trapdoor permutation . . . . . . . . . . . . . . . 67

Lesson 16 70
16.1 Pke schemes over ddh assumption . . . . . . . . . . . . . . . . . 70
16.1.1 El Gamal scheme . . . . . . . . . . . . . . . . . . . . . . . 70
16.1.2 Cramer-Shoup pke scheme . . . . . . . . . . . . . . . . . 73

Lesson 17 76
17.1 Construction of a cca-secure pke . . . . . . . . . . . . . . . . . 76
17.1.1 Instantiation of U-HPS (Universal Hash Proof System) . 80

Lesson 18 83
18.1 Digital signatures . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
18.1.1 Public Key Infrastructure . . . . . . . . . . . . . . . . . . 84

Lesson 19 86
19.1 Bilinear Map . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
19.2 Waters signatures . . . . . . . . . . . . . . . . . . . . . . . . . . . 87

Lesson 20 89
20.1 Random Oracle Model (ROM) . . . . . . . . . . . . . . . . . . . 89
20.2 ID Scheme . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
20.3 Honest Verifier Zero Knowledge (HVZK) and Special Soundness
(SS) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89

Lesson 21 90
21.1 Full domain hashing . . . . . . . . . . . . . . . . . . . . . . . . . 90

Lesson 22 91
22.1 Examples of ID schemes . . . . . . . . . . . . . . . . . . . . . . . 91

Lesson 23 92
23.1 Bilinear DDH assumption . . . . . . . . . . . . . . . . . . . . . . 92

Lesson 24 93
24.1 CCA proof for ??? . . . . . . . . . . . . . . . . . . . . . . . . . . 93

Lesson 1

1.1 Problems in Cryptography


- Confidential communication

TODO 1: Image of Alice, Bob, Eve in SKE

Kerckhoffs’s principle: No security by obscurity.


Security should only rely on the secrecy of the key. However sharing the key
between two parties is a costly operation.
Syntax:

• K = Key Space
• M = Message Space
• C = Ciphertext Space

• Enc : K × M → C
• Dec : K × C → M
Correctness:
∀m ∈ M, ∀k ∈ K : Dec(k, Enc(k, m)) = m

Definition 1 (Perfect Secrecy, Shannon). Let M be any distribution over M and let K be uniform over K. We say that (Enc, Dec) is perfectly secret if ∀M, ∀m ∈ M, ∀c ∈ C:

Pr[M = m] = Pr[M = m | C = c]

Meaning that the ciphertext does not reveal anything about the message.

Theorem 1. Let M, K, C be as above. The following are equivalent:

I. Pr[M = m] = Pr[M = m | C = c]

II. M and C are INDEPENDENT

III. ∀m, m′ ∈ M, ∀c ∈ C: Pr[Enc(K, m) = c] = Pr[Enc(K, m′) = c]

Proof. (I) ⇒ (II):

Pr[M = m ∧ C = c] = Pr[C = c] Pr[M = m | C = c] = Pr[C = c] Pr[M = m]

hence M and C are independent.

(II) ⇒ (III): Fix M, any m ∈ M, c ∈ C:

Pr[Enc(K, m) = c] = Pr[Enc(K, M) = c | M = m] = Pr[C = c | M = m] = Pr[C = c]

where the first equality uses that K is independent of M, and the last uses (II). Since the right-hand side does not depend on m, the same value is obtained for any m′, which is (III).

(III) ⇒ (I):

Pr[C = c] = Σ_{m′} Pr[C = c ∧ M = m′]
          = Σ_{m′} Pr[Enc(K, M) = c ∧ M = m′]
          = Σ_{m′} Pr[Enc(K, M) = c | M = m′] Pr[M = m′]
          = Σ_{m′} Pr[Enc(K, m′) = c] Pr[M = m′]
          = Pr[Enc(K, m) = c] Σ_{m′} Pr[M = m′]        (using III)
          = Pr[Enc(K, m) = c]
          = Pr[Enc(K, M) = c | M = m] = Pr[C = c | M = m]

Finally, by Bayes' rule,

Pr[M = m | C = c] = Pr[M = m] Pr[C = c | M = m] / Pr[C = c] = Pr[M = m]

since, as just shown, Pr[C = c | M = m] = Pr[C = c]. This is (I).

1.2 One Time Pad (OTP)


K = M = C = {0,1}^l, with K ←$ {0,1}^l

• Enc(k, m) = k ⊕ m = c
• Dec(k, c) = k ⊕ c = m

Theorem 2. OTP is perfectly secret.

Proof. Fix any m, c ∈ {0,1}^l:

Pr[Enc(K, m) = c] = Pr[K ⊕ m = c] = Pr[K = c ⊕ m] = 2^{-l}        (because K is uniform)

The same computation holds for any m′, so Pr[Enc(K, m) = c] = Pr[Enc(K, m′) = c], which is condition (III) of Theorem 1.
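The scheme is a one-liner in practice. Below is a minimal Python sketch (names `otp_enc`/`otp_dec` are ours; XOR is done byte-wise rather than bit-wise, which is equivalent):

```python
import os

def otp_enc(k, m):
    # Enc(k, m) = k XOR m (byte-wise); requires |k| = |m|
    assert len(k) == len(m)
    return bytes(ki ^ mi for ki, mi in zip(k, m))

def otp_dec(k, c):
    # Dec(k, c) = k XOR c; XOR is its own inverse
    return otp_enc(k, c)

k = os.urandom(16)          # K uniform over {0,1}^128
m = b"attack at dawn!!"     # a 16-byte message
c = otp_enc(k, m)
assert otp_dec(k, c) == m   # correctness: Dec(k, Enc(k, m)) = m
```

Since k is uniform and used once, c is itself uniform regardless of m, which is exactly condition (III) above.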

Problems:
1. |k| = |m| (the key must be as long as the message)
2. There must be a different key for each message (One Time). Otherwise:
   c1 = k ⊕ m1; c2 = k ⊕ m2 ⇒ c1 ⊕ c2 = m1 ⊕ m2
   The ciphertexts are not independent of the messages! (OTP is malleable¹)
Theorem 3. In any perfectly secret SKE we must have |K| ≥ |M|.

Proof. Take M to be uniform over M and take c s.t. Pr[C = c] > 0. Consider M′ = {Dec(k, c) : k ∈ K} (the decryptions of c under every key) and assume, towards a contradiction, that |K| < |M|. Then:

|M′| ≤ |K| < |M| ⇒ ∃m ∈ M \ M′

Now Pr[M = m] = 1/|M| > 0, but, since every message that c can decrypt to lies in M′ and m ∉ M′,

Pr[M = m | C = c] = 0

i.e., c is not an encryption of m. The two probabilities differ, contradicting perfect secrecy.

¹An encryption algorithm is "malleable" if it is possible to transform a ciphertext into another ciphertext which decrypts to a related plaintext. That is, given an encryption c of a plaintext m, it is possible to generate another valid ciphertext c′, for a known (Enc, Dec), without necessarily knowing or learning m.

Lesson 2

2.1 Message Authentication

TODO 2: Msg auth image (Bob, Eve, tag)

2.1.1 Message Authentication Code (MAC or Tag)


Syntax:

• K = Key space, M = Message space, Φ = Tag space

• K is uniform over K
• Tag: K × M → Φ
• Verify: K × M × Φ → {0, 1}

Correctness:

∀m ∈ M, ∀k ∈ K
Verify(k, m, Tag(k, m)) = 1

Assume Tag is deterministic; then Verify(k, m, φ) = 1 ⇔ Tag(k, m) = φ.


Security: Unforgeability → the attacker is not able to forge a tag φ.
Definition (ε-statistical one-time security): we say that a MAC has ε-statistical one-time security if ∀m, m′ ∈ M s.t. m ≠ m′, ∀φ, φ′ ∈ Φ:

Pr[Mac(K, m′) = φ′ | Mac(K, m) = φ] ≤ ε

TODO 3: Not really sure of the correctness of the following things

Think of an "unbounded" adversary choosing m ∈ M and obtaining φ = Mac(k, m). His goal is to forge, for some m′ ≠ m, a tag φ′ such that Mac(k, m′) = φ′ (with no knowledge of k).

Tool: Pairwise Independent Hashing

Definition 2. A family of (hash) functions H = {h_s : M → Φ}_{s∈S} is Pairwise Independent if ∀m, m′ ∈ M s.t. m ≠ m′ we have that (h_s(m), h_s(m′)) is uniform over Φ^2 for a random choice of the seed s ∈ S. Meaning that ∀m, m′ s.t. m ≠ m′, ∀φ, φ′ ∈ Φ:

Pr[h_s(m) = φ ∧ h_s(m′) = φ′] = 1/|Φ|^2

Construction: Let p be a prime number. Define h_{(a,b)}(x) = ax + b mod p, with S = Z_p^2 and M = Φ = Z_p = {0, 1, ..., p − 1}, where Z_p is the additive group (Z_p, +).
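For a small prime, pairwise independence can be checked exhaustively. The sketch below (helper names are ours) counts, for a fixed m ≠ m′, how often each output pair (h_s(m), h_s(m′)) arises as the seed s = (a, b) ranges over Z_p^2:

```python
from collections import Counter

p = 7  # a small prime, so the definition can be checked exhaustively

def h(a, b, x):
    # h_{(a,b)}(x) = a*x + b mod p, with seed s = (a, b) in Z_p x Z_p
    return (a * x + b) % p

# For fixed m != m', the pair (h_s(m), h_s(m')) must be uniform over
# Z_p^2 when s is uniform: each of the p^2 pairs arises from exactly
# one seed, because the system am + b = phi, am' + b = phi' has a
# unique solution (a, b) when m - m' is invertible mod p.
m, m2 = 2, 5
counts = Counter((h(a, b, m), h(a, b, m2)) for a in range(p) for b in range(p))
assert len(counts) == p * p
assert all(c == 1 for c in counts.values())
```

The exhaustive count is exactly the "one seed per output pair" bijection that the proof of Theorem 4 establishes via the invertible 2×2 matrix.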
Theorem 4. The above H is Pairwise Independent.

Proof. Fix m, m′ ∈ Z_p s.t. m ≠ m′, and φ, φ′ ∈ Z_p:

Pr_{(a,b)}[h_{a,b}(m) = φ ∧ h_{a,b}(m′) = φ′]
  = Pr_{(a,b)}[am + b = φ ∧ am′ + b = φ′]
  = Pr_{(a,b)}[ (m 1; m′ 1) · (a; b) = (φ; φ′) ]
  = Pr_{(a,b)}[ (a; b) = (m 1; m′ 1)^{-1} · (φ; φ′) ]
  = 1/|Z_p|^2 = 1/p^2

where Pr_{(a,b)} denotes the probability over (a, b) uniform in Z_p^2, and (m 1; m′ 1) denotes the 2×2 matrix with rows (m, 1) and (m′, 1). This matrix is invertible because its determinant m − m′ is nonzero mod p (m ≠ m′ and p is prime), so the system has exactly one solution (a, b), which is hit with probability 1/p^2.
Theorem 5. Let Mac(K, m) = h_K(m) for a pairwise independent family H = {h_k : M → Φ}_k. Then the MAC is 1/|Φ|-statistical one-time secure.

Proof. ∀m ∈ M, φ ∈ Φ:

Pr[Mac(K, m) = φ] = Pr_k[h_k(m) = φ] = 1/|Φ|

(obtained by summing the pairwise-independence condition over the |Φ| possible values of φ′). For any m ≠ m′ and φ, φ′ ∈ Φ:

Pr[Mac(K, m) = φ ∧ Mac(K, m′) = φ′] = 1/|Φ|^2

⇒ Pr[Mac(K, m′) = φ′ | Mac(K, m) = φ]
  = Pr[Mac(K, m) = φ ∧ Mac(K, m′) = φ′] / Pr[Mac(K, m) = φ]
  = (1/|Φ|^2) / (1/|Φ|) = 1/|Φ|

Theorem 6. Any t-time (ε = 2^{-λ})-statistically secure MAC has a key of size at least (t + 1)λ, for any λ > 0.
(No proof.)

Lesson 3

3.1 Randomness Extraction


Goal: extract "real" randomness from "imperfect" randomness.

Suppose we have a biased coin B s.t. Pr[B = 0] = p ≠ 1/2 and Pr[B = 1] = 1 − p. Can we make it uniform?

3.1.1 Von Neumann Extractor


1. Sample b1 ←$ B; b2 ←$ B
2. If b1 = b2 go to 1.
3. Else:
• If b1 = 0; b2 = 1 output 1
• If b1 = 1; b2 = 0 output 0

Considerations:
• The probability of each of the two cases in step 3 is p(1 − p)
• The probability of reaching step 3 (i.e., of not looping) is 2p(1 − p)
• Pr[fail n times] = (1 − 2p(1 − p))^n, which vanishes quickly as n grows
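The three steps above can be sketched as a Python generator (a sketch of ours; the biased source is simulated with the `random` module):

```python
import random

def von_neumann(bit_source):
    # Consume bits in pairs: (0,1) -> output 1, (1,0) -> output 0,
    # and equal pairs (0,0), (1,1) are discarded (steps 1-3 above).
    it = iter(bit_source)
    for b1, b2 in zip(it, it):
        if b1 != b2:
            yield b2  # b2 = 1 exactly in the (0,1) case

random.seed(0)
biased = [1 if random.random() < 0.8 else 0 for _ in range(100_000)]
out = list(von_neumann(biased))
freq = sum(out) / len(out)  # close to 1/2 despite the 80/20 bias
```

The trick works because, for independent flips, Pr[(0,1)] = Pr[(1,0)] = p(1 − p) regardless of p.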
Goal: Let X be a random variable; we want to design an extractor such that Ext(X) is uniform. PROBLEM: impossible as stated. What if X is constant? ⇒ Add the requirement that X has uncertainty (ENTROPY).

Definition 3. The MIN-ENTROPY of X (written H∞) is:

H∞(X) = −log_2 max_x Pr[X = x]

where the maximum is taken over all x in the support of X.
Example 1. Say X is a random variable over {0,1}^n.

1. X constant (Pr[X = x̄] = 1, and Pr[X = x] = 0 for x ≠ x̄) ⇒ H∞(X) = 0

2. X ≡ U_n ⇒ H∞(X) = n (since X is uniform, Pr[X = x] = 2^{-n})
Question: we want Ext : {0,1}^n → {0,1} working ∀X s.t. H∞(X) ≥ k.
Claim: impossible, even for k = n − 1.

Proof. Let Ext be any candidate extractor, and let b be the binary value whose preimage set Ext^{-1}(b) ⊆ {0,1}^n is the larger of the two, so that

|Ext^{-1}(b)| ≥ 2^n / 2 = 2^{n-1}

Define X to be uniform over Ext^{-1}(b); then H∞(X) = n − 1, but Ext(X) = b is CONSTANT.
TODO 4: Help with the proof

Theorem 7 (Way out: seeded extractors). Allow Ext to take a SMALL, uniformly random SEED (a catalyst):

Ext : {0,1}^d × {0,1}^n → {0,1}^m

where the first argument is the seed (public), the second is the input, and the result is the output.

Definition 4. We say that Ext : {0,1}^d × {0,1}^n → {0,1}^l is a (k, ε)-extractor ("good") if ∀X s.t. H∞(X) ≥ k we have:

SD((S, Ext(S, X)), (S, U_l)) ≤ ε

where S ≡ U_d and SD denotes the statistical distance.

Idea: even though the seed S is public (it appears in both tuples), the extractor's output must be ε-close, in statistical distance, to a uniform string. In this way the output of the extractor can't be distinguished from a uniform string, even given the seed.

Definition 5 (Statistical Distance). A distance between two random variables. Say that X, X′ are random variables over X; then

SD(X, X′) = (1/2) Σ_{x∈X} |Pr[X = x] − Pr[X′ = x]|
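Both quantities are straightforward to compute for explicitly given distributions. A small sketch (helper names are ours), representing a distribution as a dict from outcomes to probabilities:

```python
import math

def min_entropy(dist):
    # H_inf(X) = -log2(max_x Pr[X = x])
    return -math.log2(max(dist.values()))

def stat_dist(d1, d2):
    # SD(X, X') = 1/2 * sum over the union of supports of |Pr - Pr'|
    support = set(d1) | set(d2)
    return 0.5 * sum(abs(d1.get(x, 0.0) - d2.get(x, 0.0)) for x in support)

uniform2 = {"00": 0.25, "01": 0.25, "10": 0.25, "11": 0.25}  # U_2
constant = {"00": 1.0}                                        # a constant RV
assert min_entropy(uniform2) == 2.0   # uniform over {0,1}^2 has H_inf = n
assert min_entropy(constant) == 0.0   # a constant RV has H_inf = 0
assert stat_dist(uniform2, uniform2) == 0.0
assert stat_dist(uniform2, constant) == 0.75
```

These reproduce the two extremes of Example 1 and show SD is 0 exactly for identical distributions.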

TODO 5: Image of the statistical distance

TODO 6: Leftover hash lemma

Lesson 4

4.1 Negligible function


What exactly is a negligible function? Below is a possible interpretation of this notion:
“In real life, we can just consider adversaries with limited com-
putational power; even if every non-perfectly secure authentication
scheme can resist to unbounded computational power, the true un-
bounded computational power doesn’t exist at all. So, it’s reasonable
to consider just bounded adversaries.
So consider a scheme Π where the only attack against it is brute-
force attack. We consider Π to be secure if it cannot be broken by
a brute-force attack in polynomial time.
The idea of negligible probability encompasses this exact no-
tion. In Π, let’s say that we have a polynomial-bounded adversary.
Brute force attack is not an option.
But instead of brute force, the adversary can try (a polynomial
number of) random values and hope to guess the right one. In this
case, we define security using negligible functions: The probability
of success has to be smaller than the reciprocal of any polynomial
function.
And this makes a lot of sense: if the success probability for an
individual guess is the reciprocal of a polynomial function, then the
adversary can try a polynomial number of guesses and succeed with
high probability. If the overall success rate is 1/poly(λ), then we consider
this attempt a feasible attack on the scheme, which makes the latter
insecure.
So, we require that the success probability must be less than
the reciprocal of every polynomial function. This way, even if the
adversary tries poly(λ) guesses, it will not be significant, since it will
only have tried a poly(λ)/superpoly(λ) fraction of the key space².
As λ grows, the denominator grows far faster than the numerator
and the success probability will not be significant³.”
²If we design a function hard for superpoly(λ) possible attempts and the attacker completes poly(λ) attempts, he has only probability poly(λ)/superpoly(λ) of finding the key and breaking the scheme.
³"What exactly is a negligible (and non-negligible) function?" — Cryptography Stack Exchange

Definition
Let ν : N → [0, 1] be a function. Then it is deemed negligible iff:

∀p(λ) ∈ poly(λ) : ν(λ) ∈ o(1/p(λ))
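As a numeric sanity check of the definition, take ν(λ) = 2^{-λ}, the canonical negligible function: multiplying it by any fixed polynomial λ^c still drives the product to 0 as λ grows (the function names below are ours):

```python
# nu(lam) = 2^-lam is negligible: for any fixed exponent c, the product
# lam^c * 2^-lam still tends to 0 as lam grows, so nu is eventually
# smaller than the reciprocal of every polynomial.
def ratio(lam, c):
    return lam ** c * 2.0 ** (-lam)

assert ratio(128, 5) < 1e-20                            # already tiny at lam = 128
assert ratio(256, 10) < ratio(128, 10) < ratio(64, 10)  # and strictly shrinking
```

This is exactly the poly(λ)/superpoly(λ) picture from the quote above: the exponential denominator wins against any polynomial numerator.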

Exercise 8. Let p(λ), p′(λ) ∈ poly(λ) and ν(λ), ν′(λ) ∈ negl(λ). Then prove the following:

1. p(λ) · p′(λ) ∈ poly(λ)

2. ν(λ) + ν′(λ) ∈ negl(λ)

Solution 1 (1.1). TODO 7: This solution uses weak inequalities; to be negligible, a function must be strictly smaller than any inverse polynomial. To be investigated further.

Writing f(n) = ν(n), g(n) = ν′(n) and h(n) = f(n) + g(n), we need to show that for any c ∈ N there is n_0 such that ∀n > n_0 ⇒ h(n) ≤ n^{-c}.
So, consider an arbitrary c ∈ N. Then, since c + 1 ∈ N, and both f and g are negligible, there exist n_f and n_g such that:

∀n ≥ n_f ⇒ f(n) ≤ n^{-(c+1)}
∀n ≥ n_g ⇒ g(n) ≤ n^{-(c+1)}

Fix n_0 = max(n_f, n_g, 2). Then ∀n ≥ n_0 we have:

h(n) = f(n) + g(n)
     ≤ n^{-(c+1)} + n^{-(c+1)}
     = 2n^{-(c+1)}
     ≤ n^{-c}        (since n ≥ 2)

Thus we conclude h(n) is negligible.

4.2 One-Way Functions


A One-Way Function (owf) is a function that is “easy to compute”, but “hard
to invert”.
Definition 6. Let f : {0,1}^{n(λ)} → {0,1}^{n(λ)} be a function. Then f is a owf iff:

∀ ppt A ∃ν(λ) ∈ negl(λ) : Pr[Game^owf_{f,A}(λ) = 1] ≤ ν(λ)        (4.1)

Do note that the game does not look for the equality x′ = x, but rather for the equality of the images under f.

A Cowf

y x ←$ {0, 1}n
y = f (x)

x0
Output 1 iff f (x0 ) = y

Figure 4.1: One-Way Function hardness

Exercise 9.
1. Show that there exists an inefficient adversary that wins Game^owf with probability 1.

2. Show that there exists an efficient adversary that wins Game^owf with probability 2^{-n}.
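For item 1, a brute-force sketch: the toy function `f` below is a placeholder of ours (and is not actually one-way); the point is only that exhaustive search always finds a valid preimage, just not in polynomial time:

```python
def f(x, n):
    # A stand-in function on n-bit inputs (our own toy example, not
    # actually one-way); the game only needs some f to invert.
    return (x * x + x) % (2 ** n)

def brute_force_invert(y, n):
    # "Unbounded" adversary: tries all 2^n candidates, so it always
    # finds a preimage -- winning Game^owf with probability 1, at an
    # exponential (non-ppt) cost.
    for cand in range(2 ** n):
        if f(cand, n) == y:
            return cand
    raise ValueError("y has no preimage under f")

n = 10
y = f(0b1011001110, n)
x2 = brute_force_invert(y, n)
assert f(x2, n) == y  # the challenger checks f(x') = y, not x' = x
```

Note the final assertion mirrors the remark after Definition 6: the game is won by any preimage of y, not necessarily the original x.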

One-Way puzzles
A one-way function can be thought of as a function which is very efficient at generating puzzles that are very hard to solve. Furthermore, the person generating the puzzle can efficiently verify the validity of a solution when one is given.
For a given couple (Pgen , Pver ) of a puzzle generator and a puzzle verifier,
we have:

A Cpuzzle

y
(x, y) ←$ Pgen
x0
Output Pver (x0 , y)

Figure 4.2: The puzzle game

So, we can say that a one-way puzzle is a problem in np, because the verifier enables efficient solution verification, but not in p, because finding a solution from scratch is infeasible by definition.

4.2.1 Impagliazzo’s Worlds


Suppose we have Gauss, a child genius, and his professor. The professor gives Gauss some mathematical problems, and Gauss wants to solve them all. Now imagine that, in one-way-function terms, a problem is f(x) and its solution is x. According to Impagliazzo, we live in one of these possible worlds:

• Algorithmica: P = NP, meaning all efficiently verifiable problems are also


efficiently solvable.

The professor can try as hard as possible to create a hard scheme, but
he won’t succeed because Gauss will always be able to efficiently break it
using the verification procedure to compute the solution

• Heuristica: NP problems are hard to solve in the worst case but easy on
average.
The professor, with some effort, can create a game that is hard in the worst case, but Gauss will solve it anyway, because the instances the professor actually produces are easy on average; hard instances exist, but the professor cannot reliably find them.

• Pessiland : NP problems are hard on average but no one-way functions


exist
• Minicrypt: One-way functions exist but public-key cryptography is im-
practical

• Cryptomania: Public-key cryptography is possible: two parties can ex-


change secret messages over open channels

4.3 Computational Indistinguishability


Distribution ensembles X = {X_λ}_{λ∈N} and Y = {Y_λ}_{λ∈N} are sequences of distributions, indexed by the security parameter.
Definition 7. X and Y are computationally indistinguishable (X ≈_c Y) if ∀ ppt D, ∃ν(λ) ∈ negl(λ) such that:

|Pr[D(X_λ) = 1] − Pr[D(Y_λ) = 1]| ≤ ν(λ)

TODO 8: AP181129-2344: There may be room for improvement, but I like how it's worded: it puts some unusual perspective into the cryptographic game, and it could be a good thing since it closely precedes our first reduction, and the whole hybrid argument mish-mash.

Suppose we have this mental game: a distinguisher D receives the value z. This value has been chosen by me, the challenger, from either X_λ or Y_λ, and the distinguisher has to tell which was the source of z. What does this formula mean?
This formula means that, having fixed "1" as D's answer for one of the sources, the probability that D says "1!" when I pick z from X_λ is not far from the probability that D says "1!" when I pick z from Y_λ.
So, this means that, when this property is verified by two random variables, there isn't much difference between the two variables in terms of exposed information (reachable by D); otherwise, the distance between the two probabilities would be much more than a negligible quantity.
What's the deep meaning of this formula? This is something still to be worked out.
Lemma 1. ∀ ppt f : X ≈_c Y ⇒ f(X) ≈_c f(Y).

Proof. We want to show that f(X) ≈_c f(Y). So, let's suppose this property is not true: assume there exist a ppt A and some p′(λ) ∈ poly(λ) such that

|Pr[A(f(X_λ)) = 1] − Pr[A(f(Y_λ)) = 1]| ≥ 1/p′(λ)        (4.2)

A Cf -dist

f (zb ) z0 ←$ X
z1 ←$ Y
b ←$ {0, 1}
b0
Output 1 iff b0 = b

Figure 4.3: A distinguisher for f

In the game depicted by figure 4.3, the adversary can make use of f, because f is ppt. Thus, if such a distinguisher exists, then it can be used to distinguish X from Y, as shown in figure 4.4. So, if A could efficiently distinguish between f(X) and f(Y), then D can efficiently distinguish between X and Y. This contradicts the hypothesis X ≈_c Y, so it must be that f(X) ≈_c f(Y).

D A Cdist

zb z0 ←$ X
z1 ←$ Y
f (zb ) b ←$ {0, 1}
b0
b0
Output 1 iff b0 = b

Figure 4.4: Distinguisher reduction

4.4 Pseudorandom Generators
A deterministic function G : {0,1}^λ → {0,1}^{λ+l(λ)} is called a PseudoRandom Generator, or prg, iff:
• G runs in polynomial time

• |G(s)| = λ + l(λ)
• G(U_λ) ≈_c U_{λ+l(λ)}

A Cprg

x0 ←$ G(Uλ )
xb
x1 ←$ Uλ+l(λ)
b ←$ {0, 1}
b0
Output 1 iff b = b0

Figure 4.5: The pseudorandom game

So, if we take s ←$ U_λ, the output of G will be indistinguishable from a random draw from U_{λ+l(λ)}.

Lesson 5

5.1 Stretching a prg


Consider this algorithm that uses G to construct Gl , as depicted in figure 5.6:
1. Let s0 ←$ {0, 1}λ
2. ∀i ∈ [l(λ)] , let G(si−1 ) = (si , bi ), where bi is the extra bit generated by G
3. Compose (b1 , b2 , ..., bl(λ) , sl(λ) ) and output the result; the outputted string
will be λ + l(λ) bits long

b1 b2 bi+1 bl(λ)

s0 s1 s2 si si+1 sl(λ)−1 sl(λ)


G G ... G ... G

Figure 5.6: Constructing G_{l(λ)} from G
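The construction above can be sketched directly in Python. The one-bit generator G is simulated here with SHA-256 purely to make the sketch runnable (an assumption of ours, not a secure prg instantiation; names `G`, `G_l`, `LAM` are ours too):

```python
import hashlib

LAM = 128  # seed length, playing the role of λ

def G(s):
    # Toy one-bit-stretch generator G : {0,1}^λ -> {0,1}^{λ+1}, simulated
    # with SHA-256 so the sketch runs; NOT a secure prg instantiation.
    h = hashlib.sha256(s.encode()).digest()
    bits = format(int.from_bytes(h, "big"), "0256b")
    return bits[:LAM], bits[LAM]  # (next state s_i, extra bit b_i)

def G_l(s0, l):
    # Step 2: iterate G, keeping the extra bit b_i at each step.
    # Step 3: output (b_1, ..., b_l, s_l), a string of length λ + l.
    s, out = s0, []
    for _ in range(l):
        s, b = G(s)
        out.append(b)
    return "".join(out) + s

s0 = format(0xDEADBEEF, f"0{LAM}b")  # a fixed λ-bit seed for the demo
assert len(G_l(s0, 64)) == LAM + 64
```

Each loop iteration is one box of figure 5.6: the state s_{i-1} goes in, the new state s_i and the published bit b_i come out.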

To prove that this construct is a valid prg, we will make use of a known
technique for proving many other results, which relies heavily on reductions,
and is called the hybrid argument.
Lemma 2. (Hybrid Argument) Let X = {Xn }, Y = {Yn }, Z = {Zn } : X ≈c
Y ∧ Y ≈c Z. Then X ≈c Z. 
Proof. ∀ ppt D, by using the triangle inequality:
| Pr[D(Xn ) = 1] − Pr[D(Zn ) = 1]|
= |(Pr[D(Xn ) = 1] − Pr[D(Yn ) = 1]) + (Pr[D(Yn ) = 1] − Pr[D(Zn ) = 1])|
≤ | Pr[D(Xn ) = 1] − P r[D(Yn ) = 1]| + | Pr[D(Yn ) = 1] − Pr[D(Zn ) = 1]|
≤ negl(n) + negl(n)
= negl(n)
In essence, the hybrid argument proves that computational indistinguisha-
bility is transitive across a “hybrid” game, or by extension more than one. This
property will be very useful in all future proofs.
Theorem 10. If there exists a prg G with one bit of stretch, then there exists a prg G_l with polynomial stretch relative to its input length:

G : {0,1}^λ → {0,1}^{λ+1} ⇒ ∀l(λ) ∈ poly(λ) ∃G_l : {0,1}^λ → {0,1}^{λ+l(λ)}

Proof. First off, do observe that, since both G and l are polynomial in λ, G_l, which applies G l-many times, is polynomial in λ too. To prove that G_l is indeed a prg, we will apply the hybrid argument. The hybrids are defined as:

• H^0_λ := G_{l(λ)}(U_λ), which is the original construct

• H^i_λ := sample b_1, ..., b_i ←$ {0,1} and s_i ←$ {0,1}^λ (i.e., i + λ uniform bits), compute (b_{i+1}, ..., b_{l(λ)}, s_{l(λ)}) := G_{l(λ)-i}(s_i), and output (b_1, ..., b_{l(λ)}, s_{l(λ)})

• H^{l(λ)}_λ := U_{λ+l(λ)}

Focusing on two subsequent generic hybrids, as shown in figures 5.7 and 5.8, it can be observed that the only difference between the two resides in how b_{i+1} is generated: in H^i it comes from an instance of G, whereas in H^{i+1} it is chosen at random. H^0_λ is the starting point where all bits are pseudorandom, which coincides with G_{l(λ)}(U_λ), while H^{l(λ)}_λ is a totally uniform string.

b1 ... bi bi+1 bi+2 bl(λ)

si si+1 si+2 sl(λ)−1 sl(λ)


Uλ+1 G G ... G

Figure 5.7: Hλi

b1 ... bi bi+1 bi+2 bl(λ)

si+1 si+2 sl(λ)−1 sl(λ)


U1 Uλ+1 G ... G

Figure 5.8: Hλi+1

So let's fix a step i in the gradual substitution, and consider the following function f_i:

f_i(s_{i+1}, b_{i+1}) = (b_1, ..., b_i, b_{i+1}, b_{i+2}, ..., b_{l(λ)}, s_{l(λ)})

where the first i bits are chosen uniformly at random, and the remaining ones are obtained by subsequent applications of G (∀j ∈ {i+2, ..., l(λ)} : G(s_{j-1}) = (s_j, b_j)). It can be observed that:

• f_i(G(U_λ)) has the same distribution as H^i_λ

• f_i(U_{λ+1}) has the same distribution as H^{i+1}_λ

Since by the prg definition G(U_λ) ≈_c U_{λ+1}, by lemma 1 we can deduce that f_i(G(U_λ)) ≈_c f_i(U_{λ+1}), which in turn, by how f_i is defined, implies H^i ≈_c H^{i+1}. This holds for an arbitrary choice of i, so by extension:

G_{l(λ)}(U_λ) = H^0_λ ≈_c H^1_λ ≈_c ... ≈_c H^{l(λ)}_λ = U_{λ+l(λ)}

which proves that G_l is indeed a prg.

Proof (Contradiction). This is an alternate proof that, instead of looking for a function f to model hybrid transitioning, aims for a contradiction.
Suppose G_l is not a prg; then there must be a point in the hybrid chain H^0_λ ≈_c ... ≈_c H^{l(λ)}_λ where H^i_λ and H^{i+1}_λ are NOT computationally indistinguishable. Thus there exists a distinguisher D′ able to tell apart H^i_λ from H^{i+1}_λ, as shown in figure 5.9:

∃i ∈ [0, l], ∃ ppt D′ : |Pr[D′(H^i_λ) = 1] − Pr[D′(H^{i+1}_λ) = 1]| ≥ 1/poly(λ)

A Ci-dist

z = (b1 , . . . , bl(λ) , sl(λ) ) Z ←$ {H i , H i+1 } ' {0, 1}


λ λ
z ←$ Z
a
Output 1 iff a ' Z

Figure 5.9: Distinguisher for Hλi and Hλi+1

If such a distinguisher exists, it can also be used to distinguish an output of G from a uniform (λ+1)-bit string by "crafting" a suitable bit sequence, which will be distributed exactly as the hybrids in question, as shown in the reduction in figure 5.10. This contradicts the hypothesis of G being a prg, whose output is by definition indistinguishable from a truly random distribution. Therefore, G_l is indeed a prg.

Di-dist A Cprg
s ←$ Uλ
za = (b, s) z0 = G(s)
z1 ←$ Uλ+1
a ←$ {0, 1}
(β, b, σ) β ←$ Ui
σ = Gl(λ)−i+1 (s)

a0
a0
Output 1 iff a0 = a

Figure 5.10: Reducing to a distinguisher for G, where β = (b_1, ..., b_{i-1}) and σ = (b_{i+1}, ..., b_{l(λ)}, s_{l(λ)})

5.2 Hardcore predicate
We've seen how to reuse a prg in order to obtain an arbitrary number of pseudorandom bits starting from a prg with one bit of stretch. But how do we construct a 1-bit-stretch prg in the first place?
Consider a typical one-way function f , s.t. f (x) = y.
Question 1. Which bits of the input x are hard to compute given y = f(x)? Is it always true that, for any owf f, the first bit of x is hard to compute from f(x), ∀x?
Example 2. Given a owf f, the function f′(x) = x_0‖f(x) is still a owf, even though it reveals the first bit of its input; so that bit is not hard to compute from f′(x).
Definition 1. A polynomial time function hc : {0,1}^n → {0,1} is hardcore for a given function f : {0,1}^n → {0,1}^n if, ∀ ppt A, there is ν(λ) ∈ negl(λ) such that:

Pr[A(f(x)) = hc(x) : x ←$ {0,1}^n] ≤ 1/2 + ν(λ)

(the 1/2 accounts for the trivial strategy of guessing the bit at random).

A Chc(1)

f (x)
x ←$ {0, 1}n
b
Output 1 iff b = hc(x)

Figure 5.11: The hardcore game, f and hc are known

There is also an alternative definition:


Definition 2. A polynomial time function hc : {0,1}^n → {0,1} is hardcore for a function f iff:

(f(x), hc(x)) ≈_c (f(x), b)

where x ←$ {0,1}^n and b ←$ {0,1}.

A Chc(2)

x ←$ {0, 1}n
(f (x), zb ) z0 = hc(x)
z1 ←$ {0, 1}
b ←$ {0, 1}

b0
Output 1 iff b0 = b

Figure 5.12: Another hardcore game, f and hc are known

Having made this definition, some observations are in order:


Claim 1. There is no universal hardcore predicate HC.

Proof. Suppose there exists such a predicate HC. Let f′(x) = HC(x)‖f(x) for a given owf f. Then HC cannot be a hardcore predicate of f′, because any image under f′ reveals HC(x) as its first bit. This contradicts the universality assumption.
Nevertheless, it has been proven always possible to construct a hardcore
predicate for a owf, given another owf:
Theorem 11 (Goldreich-Levin, '89). Let f be a owf and consider g(x, r) = (f(x), r) for r ∈ {0,1}^n. Then g is a owf and:

h(x, r) = <x, r> = Σ_{i=1}^{n} x_i r_i mod 2 = ⊕_{i=1}^{n} x_i r_i

is hardcore for g.

Proof. TODO 9: TO BE COMPLETED (...did we actually do this? Proving it is quite a slog.)

Exercise 12. Prove that g is a owf if f is a owf (Hint: do a reduction).
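The Goldreich-Levin predicate itself is just the parity of the bits of x selected by r. A one-line sketch over integers viewed as bit strings (the function name is ours):

```python
def gl_predicate(x, r):
    # h(x, r) = <x, r> mod 2: XOR (parity) of the bits of x selected by r
    return bin(x & r).count("1") % 2

# x = 1101 and r = 1011 share set bits at positions 0 and 3,
# so <x, r> = 1 XOR 1 = 0:
assert gl_predicate(0b1101, 0b1011) == 0
assert gl_predicate(0b1101, 0b0100) == 1
```

Computing h is trivial; the content of the theorem is that predicting it from g(x, r) = (f(x), r) is as hard as inverting f.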

5.3 One Way Permutation


A function f : {0,1}^n → {0,1}^n is a One Way Permutation (owp) if it is a owf and a permutation, i.e.

∀x : |x| = |f(x)|, and x ≠ x′ ⇒ f(x) ≠ f(x′)

Corollary 1. If f : {0, 1}n → {0, 1}n is a OWP, then for g(), h() as in the GL
theorem,
G(s) = (g(s), h(s))
is a PRG.

Proof. By GL, if f is a owp, so is g; this means that inverting g amounts to inverting a owp, since g depends on f. Moreover, h is hardcore for g. Hence

G(U_{2n}) ≡ (g(U_{2n}), h(U_{2n})) ≡ (f(U_n), U_n, h(U_{2n})) ≈_c (f(U_n), U_n, U_1) ≡ U_{2n+1}

where the ≈_c step holds because h is hardcore for g (definition 2 of hardcore predicate), and the final equivalence uses that f is a permutation, so f(U_n) ≡ U_n.
We are stretching just 1 bit, but we know we can stretch more than one.

Lesson 6

6.1 Computationally secure encryption


Question 2. How do we define the concept of computationally secure encryption?
Find a task/scheme that is computationally hard for an attacker to break: supposing the attacker is poly(λ)-bounded, we want a scheme which requires an amount of time as close as possible to superpoly(λ) to be broken.
This scheme should have these properties:
1. one-wayness w.r.t. the key (given c = Enc(k, m), it should be hard to recover k)
2. one-wayness w.r.t. the message (given c = Enc(k, m), it should be hard to obtain the message)
3. no information leakage about the message
Consider the experiment depicted in figure 6.13 for the encryption scheme
Π = (Enc, Dec), where the adversary wins the game when the challenger out-
puts 1.

A Cind

m0 , m1 ∈ M
k ←$ K
c b ←$ {0, 1}
c ←$ Enc(k, mb )
b0
Output 1 iff b = b0

Figure 6.13: Game^ind_{Π,A}(λ, b)

Definition 8. Π is said to be computationally one time secure iff:

Game^ind_{Π,A}(λ, 0) ≈_c Game^ind_{Π,A}(λ, 1)⁴

or, alternatively, ∀ ppt A:

|Pr[Game^ind_{Π,A}(λ, 0) = 1] − Pr[Game^ind_{Π,A}(λ, 1) = 1]| ∈ negl(λ)

⁴Game^ind_{Π,A} refers to the indistinguishability of the messages sent by A during the game


This last definition is compliant with the three properties exposed before-
hand. In particular, if a scheme is one time secure, then it has each one of
these 3 properties:

TODO 10: TO BE REVIEWED

• compliance with point 1: suppose point 1 is not valid, and k is not hard to recover for A. Then A can decrypt c and perfectly distinguish m0 from m1 every time, therefore the scheme cannot be one time secure;
• compliance with point 2: suppose point 2 is not valid, and the encrypted message can be easily recovered by A. Then, as before, A wins every game with probability 1, so the scheme couldn't be one time secure;
• compliance with point 3: suppose point 3 is not valid, and some information about m is leaked in c, for example the first bit of c equals the first bit of m. A could then choose m0 and m1 that differ only in the first bit. When A obtains c, he can look at the first bit and tell which message was encrypted. Thus, the scheme wouldn't be one time secure.

What is not two time secure? Here is an apparently safe scheme Π⊕, where G : {0,1}^λ → {0,1}^l is a prg:

• k ←$ {0,1}^λ = K
• Enc(k, m) = G(k) ⊕ m, for m ∈ {0,1}^l
• Dec(k, c) = c ⊕ G(k) = m

To prove that Π⊕ is not two time secure, assume an adversary A knows the pair (m̄, c̄ = G(k) ⊕ m̄). Then, given any other ciphertext c = G(k) ⊕ m, where m is unknown, A can efficiently compute:

c ⊕ c̄ = (G(k) ⊕ m) ⊕ (G(k) ⊕ m̄) = m ⊕ m̄

and thus, by XORing the result with m̄, obtain the entire unknown message⁵.
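This key-reuse attack is easy to demonstrate. In the sketch below (names are ours), G(k) is modelled as a fixed random pad, since only its reuse matters for the attack:

```python
import os

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

# For the attack only the reuse of the pad matters, so G(k) is modelled
# here as a fixed random string.
pad = os.urandom(16)                 # stands for G(k)
m_known = b"known plaintext!"        # m̄, known to the adversary
m_secret = b"the secret text!"       # m, unknown
c_known = xor(pad, m_known)          # c̄ = G(k) XOR m̄, also known
c_secret = xor(pad, m_secret)        # c = G(k) XOR m, observed

# c XOR c̄ = m XOR m̄ (the pad cancels); XORing with m̄ then cancels m̄:
recovered = xor(xor(c_secret, c_known), m_known)
assert recovered == m_secret
```

The adversary never touches k or the prg: two encryptions under the same pad are enough.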
Nevertheless, this scheme is still one time secure:
Theorem 13. If G is a prg, then Π⊕ is computationally one time secure 
Proof. We need to show that, ∀ ppt A:

Gameind ind
Π⊕ ,A (λ, 0) ≈c GameΠ⊕ ,A (λ, 1)

Consider the hybrid game in figure 6.14, where the original encryption rou-
tine is changed to use a random value instead of G(k)6 . Compare with the

A Cind

m0 , m1
r ←$ {0, 1}l
c
b ←$ {0, 1}
c = r ⊕ mb
b0
Output 1 iff b0 = b

Figure 6.14: HybΠ⊕ ,A (λ, b)

original one time secure definition in figure 6.13 to observe that it perfectly
matches.
Lemma 3. Hyb_{Π⊕,A}(λ, 0) ≡ Hyb_{Π⊕,A}(λ, 1)

Proof. Since r is uniform and independent of m_b, the ciphertext c = r ⊕ m_b is itself uniform over {0,1}^l for either value of b (this is exactly the One-Time Pad argument), so the distribution of c does not depend on b ∈ {0,1}.

Lemma 4. ∀b ∈ {0,1} : Hyb_{Π⊕,A}(λ, b) ≈_c Game^ind_{Π⊕,A}(λ, b)
Proof. The proof proceeds by reduction as depicted in figure 6.15, by assuming
there exists a distinguisher Dind for c = G(k) ⊕ mb and c = r ⊕ mb , and using
it to break the prg itself.

Dind A Cprg

xb x0 ←$ G(Uλ )
x1 ←$ Ul
m0 , m1 b ←$ {0, 1}
c
c = xb ⊕ m0
b0

(
00 0 iff b0 = 0 b00
b = Output 1 iff b00 = b
1 else

Figure 6.15: Reducing to breaking a prg


5 This example models a technique called “Chosen Plaintext Attack”, which will be dis-

cussed in depth later


6 The observant student may recognize that this modification yields exactly the One-Time

Pad encryption scheme discussed in lesson 1

25
Do observe that the adversary always encrypts m0, but this game can be modified for m1 by switching b″'s assignments, without loss of generality; the crucial point is to see whether the distinguisher guesses which message the adversary has encrypted:
• if D^{ind} is wrong, then A can deduce with high probability that the value given by C^{prg} is in fact truly random;
• if D^{ind} is right, then A has a noticeably greater probability of having received a pseudorandom value from C^{prg}.
Either way, by the existence of D^{ind}, A gains an edge in efficiently breaking the prg, which is absurd.
Finally, by the two above lemmas, the proof can be concluded by transitivity:

Game^{ind}_{Π⊕,A}(λ, 0) ≈c Hyb_{Π⊕,A}(λ, 0) ≡ Hyb_{Π⊕,A}(λ, 1) ≈c Game^{ind}_{Π⊕,A}(λ, 1)

6.2 Pseudorandom functions


Prgs are practically used as a stepping stone for building PseudoRandom Func-
tions (prf), which in turn form the basis for most, if not all cryptographic
schemes. Before introducing what a prf formally is, a more intuitive definition
of a truly random function is given.
Definition 9. A random function R : {0, 1}n → {0, 1}l is a function that,
depending on what is known about its previous applications:

• if x is “fresh” (in formal terms, R has never been applied to x beforehand),


then a value y is chosen uar from R’s codomain, and it is permanently
associated as the image of x in R7 ;
• if x is not fresh, then R(x) is directly returned instead.
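Lazy sampling is easy to sketch in code — a table that fills itself on demand (Python, with a 16-byte output standing in for {0,1}^l):

```python
import os

class RandomFunction:
    """Lazy sampling of R : {0,1}^n -> {0,1}^l (here l = 16 bytes)."""
    def __init__(self, out_len: int = 16):
        self.out_len = out_len
        self.table = {}          # only the queried points are ever stored

    def __call__(self, x: bytes) -> bytes:
        if x not in self.table:  # fresh input: sample y uniformly at random
            self.table[x] = os.urandom(self.out_len)
        return self.table[x]     # repeated input: return the stored image

R = RandomFunction()
assert R(b"x") == R(b"x")        # consistent on repeated queries
```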


It should be noted that such a function takes too much space to store explicitly: supposing all the possible outputs of R have been generated and stored as an array in memory, its size will be 2^n · l bits.

(a table of 2^n entries, indexed 1 . . . 2^n, each holding an l-bit string)

So we look for a kind of function which is “random”, but does not require
to be memorized as a whole map, while still being efficiently computable. This
is where pseudorandomness comes in: a function is deemed pseudorandom (a
prf) when it is (computationally) indistinguishable from a truly random one.
7 this property is also called lazy sampling

Usually, prfs come as function families Fk , where k is a key that identifies
a single function inside the family itself8 .
With this in mind, let F = {Fk : {0, 1}n(λ) → {0, 1}l(λ) }k∈{0,1}λ be the
keyed function family that defines a prf. Consider the two games in figure 6.16,
each one involving respectively F and a random function R, where R = {R :
{0, 1}n → {0, 1}l }, also written as R(λ, n, l) is the set of all random functions
from n-bit strings to l-bit strings.

The two games proceed identically from A's point of view: in the left game, C samples k ←$ {0,1}^λ and answers each query x with z = F_k(x); in the right game, C samples R ←$ R(λ, n, l) and answers each query x with z = R(x). In both, A finally outputs a bit b′.

Figure 6.16: Real_{F,A}(λ) vs Rand_{R,A}(λ)

The bit b′ ∈ {0,1} is, by convention, the adversary's guess of which game he is playing, with 1 assigned to either Real or Rand; so in this game the adversary tries to recognize which machine he is talking with.

Definition 10. Fk is a prf iff:

RealF ,A (λ) ≈c RandR,A (λ)


Exercise 14. Show that no prg is secure against unbounded attackers.

Exercise 15. Show the same as above, but for any prf instead.

8 Those unfamiliar with the “currying” technique may see the key as an additional argument to pass to the “main” function F.
6.2.1 Ggm-tree
In this section, it is described how to construct a prf starting from a given prg, using a technique devised by Goldreich, Goldwasser and Micali, called the ggm-tree.
Construction 1. Let G : {0,1}^λ → {0,1}^{2λ} be a prg that doubles the length of its input, and split its image into halves, so that it effectively returns a pair of strings:
G(k) = (G_0(k), G_1(k))
Consider the ggm-tree depicted in figure 6.17, which describes how G will be used to craft a prf. To lighten the notation, the composition obtained by applying G_a, then G_b, then G_c to k is contracted to G_{abc}(k), with subscripts read in order of application — this matches the paths in the figure. So let F = {F_k : {0,1}^n → {0,1}^λ} be a family of functions such that:

F_k(r) = G_{r_n}(G_{r_{n−1}}(. . . G_{r_2}(G_{r_1}(k)) . . . )) = G_{r_1 r_2 ... r_n}(k)

G0 (k) G1 (k)

G00 (k) G01 (k) G10 (k) G11 (k)

G000 (k) G001 (k) G010 (k) G011 (k) G100 (k) G101 (k) G110 (k) G111 (k)

.. .. .. .. .. .. .. ..
. . . . . . . .

Figure 6.17: The ggm-tree for G


For example, the formula evaluates F_k(001) as G_1(G_0(G_0(k))): the first input bit r_1 = 0 is applied first (innermost), and the last bit r_3 = 1 is applied last (outermost).
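A minimal sketch of the construction (Python; SHA-256 plays the role of the length-doubling prg purely for illustration — it carries no proof of pseudorandomness):

```python
import hashlib

LAMBDA = 16  # seed length in bytes, standing in for λ bits

def G(k: bytes) -> tuple[bytes, bytes]:
    # Stand-in length-doubling PRG: G(k) = (G0(k), G1(k)).
    # SHA-256 is used here only for illustration, not as a proven prg.
    out = hashlib.sha256(b"prg" + k).digest()
    return out[:LAMBDA], out[LAMBDA:2 * LAMBDA]

def F(k: bytes, r: str) -> bytes:
    # GGM tree walk: the i-th input bit selects which half of G
    # to apply at level i, starting from the root.
    for bit in r:
        k = G(k)[int(bit)]
    return k

k = b"\x00" * LAMBDA
# F_k(001) applies G0, then G0, then G1 (first bit innermost):
assert F(k, "001") == G(G(G(k)[0])[0])[1]
```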

Lesson 7

Recall the ggm-tree construct used to define a function family F := {F_k} from a prg G. From now on we will refer to this function family as F_k, implicitly assuming that the key k is sampled uniformly. Our wish is that F_k is indeed a prf.
Theorem 16. If G is a prg, then Fk is a prf. 
Proof. Concerning Fk ’s efficiency, one can observe that it involves computing G
a polynomial number of times, and G itself is efficient, thus Fk is indeed easy
to compute.
The rest of the proof, regarding Fk ’s hardness, will proceed by induction
over Fk ’s domain length n, which indirectly defines its ggm-tree’s height.
Base case (n = 1): Fk ’s domain is restricted to {0, 1}, meaning that its
images will be respectively the two halves on a single iteration of G(k), which
are pseudorandom by G’s definition:

(Fk (0), Fk (1)) = (G0 (k), G1 (k)) ≈c U2λ

therefore, in this case, Fk is pseudorandom.


Inductive step: Let F′_k : {0,1}^{n−1} → {0,1}^λ be a prf. Define F_k(x, y) as G_x(F′_k(y)), where F_k : {0,1}^n → {0,1}^λ and x ∈ {0,1}. It must be proven that if F′_k is a prf, then so is F_k. We will proceed by hybrid games, depicted in figures 7.18, 7.19 and 7.20:

A sends r = (r_0, r_{1...n}); C samples k ←$ {0,1}^λ and replies with z = F_k(r) = G_{r_0}(F′_k(r_{1...n})).

Figure 7.18: Hyb^0_{F,A}(λ) = Game^{prf}_{F,A}(λ)

Lemma 5. Hyb^0_{F,A}(λ) ≈c Hyb^1_{R,A}(λ).

Proof. Assume ∃ ppt D that can distinguish F_k from H; then an adversary A can use D as in figure 7.21 to break the induction hypothesis (i.e. to distinguish F′_k from R).

A sends r = (r_0, r_{1...n}); C samples R ←$ R(λ, n − 1, λ) and replies with z = H(r) = G_{r_0}(R(r_{1...n})).

Figure 7.19: Hyb^1_{R,A}(λ)

A sends r = (r_0, r_{1...n}); C samples R ←$ R(λ, n, λ) and replies with z = R(r).

Figure 7.20: Hyb^2_{R,A}(λ)

The reduction works as follows: C samples k ←$ U_λ, R ←$ R(λ, n − 1, λ) and b ←$ {0,1}. When D sends (r_0, r_{1...n}), A forwards r_{1...n} to C, which replies with z_b, where z_0 = F′_k(r_{1...n}) and z_1 = R(r_{1...n}); A returns G_{r_0}(z_b) to D. Finally D outputs b′, which A forwards; the game outputs 1 iff b′ = b.

Figure 7.21

Before tackling the reduction from the last two hybrids, it is best to introduce
another lemma:
Lemma 6. If G : {0,1}^λ → {0,1}^{2λ} is a prg, then for any t(λ) ∈ poly(λ):

(G(k_1), . . . , G(k_t)) ≈c (U_{2λ}, . . . , U_{2λ}), where the k_i ←$ U_λ are independent

Proof. Proceed by a hybrid argument over the t samples: in the i-th hybrid, the first i components are replaced by independent uniform values. Two adjacent hybrids differ in a single component, and distinguishing them amounts to distinguishing G(k_i) from U_{2λ} for a fresh, independent seed k_i, which contradicts G being a prg. Since t(λ) is polynomial, the t negligible distinguishing gaps sum to a negligible quantity.

Now for the final lemma:

Lemma 7. Hyb^1_{R,A}(λ) ≈c Hyb^2_{R,A}(λ).

Proof. The idea is to reduce to a distinguishing game for the previous lemma. Consider the distinguishing game for H and R′ in figure 7.22. The two random functions are entirely independent, and G transforms randomness into pseudorandomness, so this game boils down to distinguishing pseudorandom values from random ones, which by lemma 6 is possible only with negligible probability.

C samples R′ ←$ R(λ, n, λ), R ←$ R(λ, n − 1, λ) and b ←$ {0,1}. When A sends (r_0, r_{1...n}), C replies with z_b, where z_0 = G_{r_0}(R(r_{1...n})) and z_1 = R′(r); A outputs b′, and the game outputs 1 iff b′ = b.

Figure 7.22: The distinguishing game between H in Hyb^1_{Π,A} and R′ in Hyb^2_{Π,A}

In the end the hybrids are proven to be mutually indistinguishable; therefore the inductive step holds, proving the theorem.

7.1 Cpa-security
Now it’s time to define a stronger notion of security, which is widely used in cryp-
tology for first assessments on cryptographic schemes. Let Π := (Enc, Dec) be
a ske scheme, and consider the game depicted in figure 7.23. Observe that this
time, the adversary can “query” the challenger for the ciphertexts of any messages of his choice, with the only reasonable restriction that the number of queries must be polynomially bounded in λ. This kind of game/attack is called the Chosen
Plaintext Attack, because of the adversary’s capability of obtaining ciphertexts
from messages. The usual victory conditions found in n-time security games,
which are based on ciphertext distinguishability, apply.

The game proceeds as follows: C^{cpa} samples k ←$ {0,1}^λ. The adversary may repeatedly query a message m, receiving c ←$ Enc(k, m). At some point A sends the challenge pair (m∗_0, m∗_1); C samples b ←$ {0,1} and returns c∗ ←$ Enc(k, m∗_b). Further encryption queries are allowed, after which A outputs b′; the game outputs 1 iff b′ = b.

Figure 7.23: The cpa-security game: Game^{cpa}_{Π,A}(λ, b)

Definition 11. A scheme Π is cpa-secure iff Game^{cpa}_{Π,A}(λ, 0) ≈c Game^{cpa}_{Π,A}(λ, 1).
Having given this definition of security, recall the Π⊕ scheme defined in the
previous lesson. It is easy to see that Π⊕ is not cpa-secure for the same reasons
that it is not computationally 2-time secure; however this example sheds some
new light about a deeper problem:
Observation 1. No deterministic scheme can achieve cpa-security. 
This is true, because nothing prevents the adversary from asking the chal-
lenger to encrypt either m0 or m1 , or even both, before starting the actual
challenge; just as in the 2-time case for Π⊕ , he will know the messages’ ci-
phertexts in advance, so he will be able to tell which message the challenger
has encrypted every time. The solution for obtaining a cpa-secure encryption
scheme consists of returning different ciphertexts for the same message, even
better if they look random. This can be achieved by using prfs.

Consider the following ske scheme ΠF , with F = {Fk : {0, 1}n → {0, 1}l }
being a prf:

• Enc(k, m) = (c1 , c2 ) = (r, Fk (r) ⊕ m), where k ←$ {0, 1}λ and r ←$ {0, 1}n
• Dec(k, (c1 , c2 )) = Fk (c1 ) ⊕ c2

Observe that the random value r is part of the ciphertext, making it n + l bits long; more importantly, the adversary can and will always see r. The key k though, which selects the concrete function from the prf family, is still secret.
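A sketch of ΠF (Python; HMAC-SHA256 stands in for the prf F_k — an assumption made only for illustration, not part of the notes' construction):

```python
import hmac, hashlib, os

N = 16  # nonce length in bytes (stands in for n bits)

def Fk(k: bytes, r: bytes) -> bytes:
    # HMAC-SHA256 as a stand-in prf; any prf F_k works here
    return hmac.new(k, r, hashlib.sha256).digest()

def enc(k: bytes, m: bytes) -> tuple[bytes, bytes]:
    assert len(m) <= 32
    r = os.urandom(N)                    # fresh randomness per encryption
    pad = Fk(k, r)[:len(m)]
    return r, bytes(a ^ b for a, b in zip(pad, m))   # c = (r, F_k(r) XOR m)

def dec(k: bytes, c: tuple[bytes, bytes]) -> bytes:
    r, c2 = c
    pad = Fk(k, r)[:len(c2)]
    return bytes(a ^ b for a, b in zip(pad, c2))     # m = F_k(c1) XOR c2

k = os.urandom(16)
assert dec(k, enc(k, b"attack at dawn")) == b"attack at dawn"
# Encrypting the same message twice yields different ciphertexts:
assert enc(k, b"hi") != enc(k, b"hi")
```

The fresh nonce r is exactly what breaks the determinism that doomed Π⊕.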

Theorem 17. If F is a prf, then ΠF is cpa-secure.

Proof. We have to prove that Game^{cpa}_{ΠF,A}(λ, 0) ≈c Game^{cpa}_{ΠF,A}(λ, 1); to this end, the hybrid argument will be used. Let the first hybrid Hyb^0_{Π,A} be the original game; the second hybrid Hyb^1_{Π,A} will have a different encryption routine:

• r ←$ {0, 1}n

• R ←$ R(λ, n, l)
• c = (r, R(r) ⊕ m), where m is the plaintext to be encrypted

and then the last hybrid Hyb2Π,A will simply output (r1 , r2 ) ←$ Un+l .
Lemma 8. ∀b ∈ {0,1} : Hyb^0_{Π,A}(λ, b) ≈c Hyb^1_{Π,A}(λ, b).
Proof. As usual, the proof is by reduction: suppose there exists a distinguisher D capable of telling the two hybrids apart; then D can be used to break F's property of being a prf. The way to use D is to make it play a cpa-like game, as shown in figure 7.24^9, where the adversary attempting to break F decides beforehand which message to encrypt between m∗_0 and m∗_1, and checks whether D guesses which message has been encrypted. Either way, the adversary gets a noticeable advantage in guessing whether the values received from the challenger were random or generated by F. Thus, assuming such a D exists, A can efficiently break F, which is absurd.

9 An observant student may notice a striking similarity with a previously exposed reduction in figure 6.15.
The reduction works as follows, for the fixed message choice m∗_0:
• C^{prf} samples k ←$ {0,1}^λ, R ←$ R(λ, n, l) and b ←$ {0,1}, and answers each query with z_b, where z_0 is computed with F_k and z_1 with R;
• whenever D queries the encryption of a message m, A samples r ←$ {0,1}^n, queries C^{prf} on r to obtain z_b (z_0 = F_k(r), z_1 = R(r)), and returns c = (r, z_b ⊕ m);
• on the challenge (m∗_0, m∗_1), A samples r∗ ←$ {0,1}^n, obtains z∗_b from C^{prf} on r∗ (z∗_0 = F_k(r∗), z∗_1 = R(r∗)), and returns c∗ = (r∗, z∗_b ⊕ m∗_0);
• further queries are answered as before; finally D outputs b′, and A outputs b″ = 0 iff b′ = 0, and 1 otherwise. The game outputs 1 iff b″ = b.

Figure 7.24: Breaking a prf, for the fixed message choice m∗_0
Lemma 9. ∀b ∈ {0,1} : Hyb^1_{Π,A}(λ, b) ≈c Hyb^2_{Π,A}(λ, b).
Proof. Firstly, it can be safely assumed that any ciphertext (r_i, R(r_i) ⊕ m_b) distributes exactly like its own sub-value R(r_i), because of R's true randomness and independence from m_b.
Having said that, the two hybrids apparently distribute uniformly, making them perfectly equivalent; however there is a caveat: if some value r is queried twice during a run, then on the second query the adversary will receive the same image in Hyb^1_{Π,A}, but almost certainly a different one in Hyb^2_{Π,A}. This is because the first hybrid uses a function, which is deterministic by nature, whereas the image in the second hybrid is picked completely at random from the codomain. Nevertheless, this sneaky issue about “collisions” can be proven to happen with negligible probability.
Call Repeat the event that some nonce r repeats among the q encryptions performed in a game run. Then:

Pr[Repeat] = Pr[∃ i ≠ j ∈ [q] : r_i = r_j]
           ≤ Σ_{i≠j} Pr[r_i = r_j]
           = Σ_{i≠j} Col(U_n)
           = Σ_{i≠j} Σ_{e∈{0,1}^n} Pr[r_1 = r_2 = e]
           = Σ_{i≠j} Σ_{e∈{0,1}^n} Pr[r = e]^2
           = (q choose 2) · 2^n · (1/2^n)^2
           = (q choose 2) · 2^{−n}
           ≤ q^2 · 2^{−n} ∈ negl(λ)

which proves that Repeat influences the two hybrids' equivalence only negligibly. Thus Hyb^1_{Π,A}(λ, b) ≈c Hyb^2_{Π,A}(λ, b)^{10}.
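The bound can be sanity-checked numerically, for illustrative parameter values:

```python
from math import comb

n, q = 128, 2**20                     # illustrative: 128-bit nonces, ~10^6 queries
pr_repeat_bound = comb(q, 2) / 2**n   # (q choose 2) * 2^(-n)
assert pr_repeat_bound <= q**2 / 2**n # the looser q^2 * 2^(-n) bound
assert pr_repeat_bound < 2**-80       # far below any practical threshold
```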

With the above lemmas, and observing that Hyb2Π,A (λ, 0) ≡ Hyb2Π,A (λ, 1),
we can reach the conclusion that Hyb0Π,A (λ, 0) ≈c Hyb0Π,A (λ, 1), which is what
we wanted to demonstrate.

10 Do note that the hybrids lose their originally supposed perfect equivalence (Hyb1Π,A (λ, b) ≡
Hyb2Π,A (λ, b)) because of the Repeat event. The lemma, though, is still proven.

Lesson 8

8.1 Domain extension


Up until now, encryption has dealt with messages of fixed size, polynomial in λ. How to deal with messages of arbitrary size? Setting a maximum bound on message length seems impractical, both for waste reasons when messages are too short, and for practicality when messages eventually get too long. The solution takes the form of a “block-cipher”, where a message of a given size is split into equally-sized blocks, which are then encrypted using a fixed-size encryption scheme. Various instances of this technique, called modes, have been devised.

8.1.1 Electronic Codebook mode


The operation of ecb-mode is straightforward: given a message split into blocks (m_1, . . . , m_t), apply the scheme's encryption routine to each block, as shown in figure 8.25:
c_i = F_k(r) ⊕ m_i   ∀i ∈ {1, . . . , t}
Decryption is trivially implemented by xor-ing the ciphered blocks with F_k(r).

r r ... r

Fk Fk ... Fk

m1 ⊕ m2 ⊕ ... mt ⊕

c1 c2 ... ct

Figure 8.25: ecb-mode block-cipher in action, using a prf as the encryption


routine

This approach has the advantage of being completely parallelizable, as each block can clearly be encrypted separately; however it has a dangerous flaw: it is not cpa-secure, even when using a prf-based encryption scheme. To understand why, observe that the random nonce used for ciphertext randomization is chosen per-message; this means the encryption of message blocks becomes deterministic within the scope of a single message, enabling an adversary to attack the scheme

within a single plaintext. It is sufficient to choose an all-0 or all-1 message to
realize that all its blocks would encrypt to the same ciphered block.
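This leak is easy to observe concretely (Python sketch; HMAC-SHA256 truncated to one block stands in for F_k, an assumption for illustration only):

```python
import hmac, hashlib, os

def Fk(k: bytes, r: bytes) -> bytes:
    # stand-in prf with a 4-byte toy block size
    return hmac.new(k, r, hashlib.sha256).digest()[:4]

def ecb_enc(k: bytes, blocks: list[bytes]) -> list[bytes]:
    r = os.urandom(4)                    # ONE nonce for the whole message
    pad = Fk(k, r)
    return [bytes(a ^ b for a, b in zip(pad, m)) for m in blocks]

k = os.urandom(16)
c = ecb_enc(k, [b"\x00" * 4, b"\x00" * 4, b"\xff" * 4])
# Equal plaintext blocks encrypt to equal ciphertext blocks:
assert c[0] == c[1] and c[0] != c[2]
```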

8.1.2 Cipher block chaining mode ( cbc)


This mode serializes block encryption by chaining each ciphered block into the next encryption:
c_i = P_k(m_i ⊕ c_{i−1})   ∀i ∈ {1, . . . , t},   with c_0 = r
This time, a pseudorandom permutation (prp) is used instead of a prf; they will be discussed later on. The diagram in figure 8.26 shows a general view of cbc-mode's operation. The decryption process is analogous but reversed, by computing the preimage of a ciphered block and xor-ing it with the preceding ciphered block:

m_i = P_k^{−1}(c_i) ⊕ c_{i−1}

m1 m2 ... mt

r ⊕ ⊕ ... ⊕

Pk Pk ... Pk

c0 c1 c2 ... ct

Figure 8.26: cbc-mode block-cipher in action, using a prp as the encryption


routine

8.1.3 Counter mode


This mode closely resembles ecb-mode but uses a “rolling” nonce instead of a static one, as shown in figure 8.27. At each successive block, the nonce is incremented by 1 and then used in a single block encryption. Since the nonce is in {0,1}^n, the increment is done modulo 2^n, so that the value wraps around to 0 if it ever overflows. Decryption is analogous.
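A sketch of counter mode (Python; HMAC-SHA256 over the counter stands in for the prf, and a 4-byte toy block size is assumed):

```python
import hmac, hashlib, os

BLOCK = 4   # block size in bytes (toy value)

def Fk(k: bytes, x: int) -> bytes:
    # stand-in prf over the counter value, truncated to one block
    return hmac.new(k, x.to_bytes(BLOCK, "big"), hashlib.sha256).digest()[:BLOCK]

def ctr_enc(k: bytes, blocks: list[bytes]) -> list[bytes]:
    r = int.from_bytes(os.urandom(BLOCK), "big")
    out = [r.to_bytes(BLOCK, "big")]                 # c0 = the nonce r
    for i, m in enumerate(blocks):
        pad = Fk(k, (r + i) % 2**(8 * BLOCK))        # rolling nonce, mod 2^n
        out.append(bytes(a ^ b for a, b in zip(pad, m)))
    return out

def ctr_dec(k: bytes, cblocks: list[bytes]) -> list[bytes]:
    r = int.from_bytes(cblocks[0], "big")
    return [bytes(a ^ b for a, b in zip(Fk(k, (r + i) % 2**(8 * BLOCK)), c))
            for i, c in enumerate(cblocks[1:])]

k = os.urandom(16)
msg = [b"abcd", b"abcd", b"wxyz"]
ct = ctr_enc(k, msg)
assert ctr_dec(k, ct) == msg
assert ct[1] != ct[2]   # equal plaintext blocks now encrypt differently
```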

r r+1 ... r+t−1

Fk Fk ... Fk

m1 ⊕ m2 ⊕ ... mt ⊕

c0 c1 c2 ... ct

Figure 8.27: Counter-mode block-cipher in action, using a prf as the encryption


routine

This apparently innocuous change to ecb is enough to ensure cpa-security,
at the cost of perfect parallelization.

Theorem 18. Assume F_k is a prf; then the counter-mode block cipher (ctr) is cpa-secure for variable length messages^{11}.

Proof. Figure 8.28 models a cpa attack to a counter-mode block-cipher. The proof will proceed by hybrid argument starting from this game, therefore the statement to verify will be Game^{cpa}_{ctr,A}(λ, 0) ≈c Game^{cpa}_{ctr,A}(λ, 1).

The game proceeds as follows: for each encryption query m = (m_1, . . . , m_t), C^{cpa} samples r = c_0 ←$ {0,1}^n and returns c = (c_0, c_1, . . . , c_t) with c_i = F_k(r + i − 1) ⊕ m_i ∀i. For the challenge (μ_0, μ_1), with |μ_0| = |μ_1|, C samples ρ = γ_0 ←$ {0,1}^n and b ←$ {0,1}, and returns γ = (γ_0, γ_1, . . . , γ_t) with γ_i = F_k(ρ + i − 1) ⊕ (μ_b)_i ∀i. Further queries are answered as before; finally A outputs b′, and the game outputs 1 iff b′ = b.

Figure 8.28: A chosen plaintext attack to counter-mode block-cipher

Define the two hybrid games from the original cpa game as follows:

• Hyb1ctr,A (λ, b): A random function R is chosen uar from R(λ, n, n) at the
beginning of the game, and is used in place of Fk in all block encryptions;
• Hyb2ctr,A (λ, b): The challenger will pick random values from {0, 1}n as
ciphered blocks, disregarding any encryption routine.

Lemma 10. Game^{cpa}_{ctr,A}(λ, b) ≈c Hyb^1_{ctr,A}(λ, b) ∀b ∈ {0,1}.
Proof. The proof is left as exercise.
Hint: Since the original game and the first hybrid are very similar, we can use a distinguisher which plays the cpa-game; since this is a lemma, our goal in the reduction is to break the precondition contained in the theorem statement.
Lemma 11. Hyb^1_{ctr,A}(λ, b) ≈c Hyb^2_{ctr,A}(λ, b) ∀b ∈ {0,1}.
11 Variable length messages exactly means every message m = (m_1, . . . , m_t) is made of t blocks, and t can change from one message to another.
Proof. Since m_i doesn't affect the distribution of the result at all, for any i, if R behaves like a truly random function, then the two hybrids are indistinguishable in the general case (R(r + i) ⊕ m_i ≈ R(r + i)). However, there is a sneaky issue: if a given nonce is used both in a query encryption and in the challenge encryption at any step, the corresponding encrypted blocks will be completely random in the second hybrid, whereas in the first hybrid the function's images, albeit random, become predictable, enabling a cpa.
Nevertheless, it can be proved that these “collisions” happen with negligible probability within Hyb^1_{ctr,A}. Let:
• q = number of encryption queries in a game run
• t_i = number of blocks for the i-th query
• τ = number of blocks for the challenge ciphertext
• Overlap event: ∃ i, j, ι : r_i + j = ρ + ι
The Overlap event exactly models our problematic scenario. Now it suffices to show that it occurs negligibly. For simplicity, assume the involved messages are of the same length, that is t_i = τ =: t. Denote by Overlap_i the event that the i-th query overlaps the challenge sequence as specified above.
Fix some ρ. One can see that Overlap_i happens iff:
ρ − t + 1 ≤ r_i ≤ ρ + t − 1
The two extremes correspond to the boundary cases:
• the sequence ρ, . . . , ρ + t − 1 comes before the sequence r_i, . . . , r_i + t − 1, and they overlap just in the last element, ρ + t − 1 = r_i; or
• the sequence r_i, . . . , r_i + t − 1 comes before the sequence ρ, . . . , ρ + t − 1, and they overlap just in the last element, r_i + t − 1 = ρ.
Then:

Pr[Overlap_i] = ((ρ + t − 1) − (ρ − t + 1) + 1) / 2^n = (2t − 1) / 2^n

Pr[Overlap] ≤ Σ_{i=1}^{q} Pr[Overlap_i] = q(2t − 1) / 2^n ≤ 2qt · 2^{−n} ∈ negl(λ)

which proves that our collision scenario happens with negligible probability; thus the two hybrids are indistinguishable.

Having proven the indistinguishability between the hybrids, the conclusion is reached:

Game^{cpa}_{ctr,A}(λ, 0) ≈c Game^{cpa}_{ctr,A}(λ, 1)
Lesson 9

9.1 Message Authentication Codes and unforgeability
After having explored the security concerns and challenges of the ske realm, it is time to turn the attention to symmetric mac schemes. Recall that a mac scheme is a couple (Tag, Verify), with the purpose of authenticating the message's source. In this chapter, the tagging function will be denoted as Tag_k, akin to a prf.
The desirable property that a mac scheme should hold is to prevent any attacker from generating a valid couple (m∗, φ∗), even after querying a tagging oracle polynomially many times^{12}. The act of generating a valid couple from scratch is called forging, and the aforementioned property is defined as unforgeability against chosen-message attacks (or ufcma, in short); its game diagram is shown in figure 9.29. Do note that m∗ is required to be outside the query set M, expressing the “freshness” of the forged couple^{13}. In formal terms:
Definition 12. A mac scheme Π is ufcma-secure iff:

∀ ppt A : Pr[Game^{ufcma}_{Π,A}(λ) = 1] ∈ negl(λ)

The game proceeds as follows: C^{ufcma} samples k ←$ K; the adversary queries messages m_i ∈ M, receiving φ_i = Tag_k(m_i); finally, A outputs a couple (m∗, φ∗) with m∗ ∉ M, and the game outputs 1 iff Tag_k(m∗) = φ∗.

Figure 9.29: Game^{ufcma}_{Π,A}(λ)

12 Do note how this property resembles cpa-security in the encryption setting


13 Observe how this setup resembles the original owf game

Having defined a good notion of security in the mac scheme domain, we
turn our attention to a somewhat trivial scheme, and find out that it is indeed
secure:

Theorem 19. Let Π be a mac scheme such that Tagk = Verifyk = Fk , where
Fk is a prf. Then Π is ufcma-secure. 
Proof. The usual proof by hybridization follows. The original game is identical to the ufcma game, where the tagging function is the prf, whereas the hybrid game will have it replaced with a truly random function, as shown in figure 9.30.

C samples R ←$ R(λ, n, l); the adversary queries messages m_i ∈ M, receiving φ_i = R(m_i); finally, A outputs (m∗, φ∗) with m∗ ∉ M, and the game outputs 1 iff R(m∗) = φ∗.

Figure 9.30: Hyb^1_{Π,A}(λ)

Lemma 12. Game^{ufcma}_{Π,A}(λ) ≈c Hyb^1_{Π,A}(λ).

Proof. By assuming there is a distinguisher D^{ufcma} capable of disproving the lemma, it can be used to distinguish the prf itself, as depicted by the reduction in figure 9.31.

Lemma 13. ∀ ppt A : Pr[Hyb^1_{Π,A}(λ) = 1] ≤ 2^{−l}.

Proof. This is true because the attacker has to predict the output R(m∗) on a fresh input m∗ to win the game, which can happen with probability at most 2^{−l}.

Thus, the conclusion is that Π is ufcma-secure.
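The scheme itself is tiny (Python sketch; HMAC-SHA256 stands in for the prf F_k):

```python
import hmac, hashlib, os

def tag(k: bytes, m: bytes) -> bytes:
    # Tag_k = F_k; HMAC-SHA256 as a stand-in prf
    return hmac.new(k, m, hashlib.sha256).digest()

def verify(k: bytes, m: bytes, phi: bytes) -> bool:
    # Verify recomputes the tag: accept iff Tag_k(m) = phi
    return hmac.compare_digest(tag(k, m), phi)

k = os.urandom(16)
phi = tag(k, b"pay 10 EUR")
assert verify(k, b"pay 10 EUR", phi)
assert not verify(k, b"pay 99 EUR", phi)   # tampering is caught
```

The constant-time comparison hmac.compare_digest avoids leaking the tag through timing; an implementation concern, outside the ufcma model itself.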

The reduction works as follows: C^{prf} samples k ←$ {0,1}^λ, R ←$ R(λ, n, l) and b ←$ {0,1}. Whenever D^{ufcma} queries a message m ∈ M, A forwards it to C^{prf} and returns φ_b, where φ_0 = F_k(m) and φ_1 = R(m). When D^{ufcma} outputs its forgery (m∗, φ∗) with m∗ ∉ M, A queries m∗ to obtain ϕ_b (ϕ_0 = F_k(m∗), ϕ_1 = R(m∗)) and outputs b′ = 0 iff ϕ_b = φ∗, and 1 otherwise. The game outputs 1 iff b′ = b.

Figure 9.31: Distinguishing a prf by using D^{ufcma}

9.2 Domain extension for mac schemes


The previous scheme works on fixed length messages; as in the encryption domain, there are techniques for tagging variable length messages which are ufcma-secure. However, before showing them, some other apparently secure modes are described here to give some possible insights on how to tackle the problem.
Assume the message m = (m_1, . . . , m_t) ∈ {0,1}^{n·t} for some t ≥ 1. Given the tagging function Tag_k : {0,1}^n → {0,1}^l, an attempt to tag the whole message may be to:
• xor all the message blocks, and then tag: φ = Tag_k(⊕_{i=1}^{t} m_i). But then, given an authenticated message (m, φ), an adversary can always forge a valid couple (m′, φ), where m′ is the original message with two flipped bits in two distinct blocks at the same offset; the resulting xor would be the same.
• define the tag to be a t-sequence of tags, one for each message block. However, an adversary can just swap the positions of two arbitrary distinct message blocks and their relative tags, and would successfully forge a distinct authenticated message.
• attempt a variant of the above approach, by adding the block number to the block itself to avoid the previous forging. Again, this is not ufcma-secure: the adversary may just make two queries on two distinct messages, obtain the two tag sequences, and then forge an authenticated message by choosing at each position i whether to pick the message-tag blocks from the first or the second query.
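The first attempt's forgery can be demonstrated directly (Python sketch; HMAC-SHA256 stands in for Tag_k):

```python
import hmac, hashlib, os
from functools import reduce

def tag_xor(k: bytes, blocks: list[bytes]) -> bytes:
    # Insecure attempt: tag only the XOR of all message blocks
    x = reduce(lambda a, b: bytes(u ^ v for u, v in zip(a, b)), blocks)
    return hmac.new(k, x, hashlib.sha256).digest()

k = os.urandom(16)
m = [b"aaaa", b"bbbb"]
phi = tag_xor(k, m)

# Forge: flip the first bit of BOTH blocks; the XOR is unchanged,
# so the old tag remains valid for a different message.
forged = [bytes([m[0][0] ^ 0x80]) + m[0][1:],
          bytes([m[1][0] ^ 0x80]) + m[1][1:]]
assert forged != m and tag_xor(k, forged) == phi
```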

9.2.1 Universal hash functions
A devised solution which has been proven to be secure relies on the following definition: a function family H which can be used to “shrink” variable length messages, and is then composed with a prf:

H = {h_s : {0,1}^{n·t} → {0,1}^n}_{s∈{0,1}^λ}

Tag_{k,s}(m) = F_k(h_s(m))

So what are the properties of the induced family F(H) = {F_k(h_s(·))}? The main problem is collisions, since for each m ∈ {0,1}^{n·t} it should be hard to find m′ ≠ m such that h_s(m) = h_s(m′). But collisions do exist for the functions h_s, because they map elements from {0,1}^{n·t} to {0,1}^n, and since the codomain is smaller than the domain, the functions cannot be injective in any way.
To overcome this problem, we can consider two options:

• assume collisions are hard to find given s ∈ {0, 1}λ publicly, and we have
a collision resistant hashing;
• let s be secret, and assume collisions are hard to find because it is hard to
know how hs works.
Definition 13. A function family H is deemed ε-universal iff:

∀x ≠ x′ ∈ {0,1}^{n·t} : Pr_{s←${0,1}^λ}[h_s(x) = h_s(x′)] ≤ ε

If ε = 2^{−n}, meaning the collision probability is minimized, then the family is also called perfectly universal; in the case where ε ∈ negl(λ) instead, it is defined as almost universal (au). Take care about telling the difference between universality and pairwise independence, which states:

(h_s(x), h_s(x′)) ≡ U_{2n}

Lemma 14. Any pairwise independent hash function family is perfectly universal.
Proof. By pairwise independence, for any x ≠ x′ the pair (h_s(x), h_s(x′)) is distributed exactly as two independent uniform n-bit values; hence

Pr_s[h_s(x) = h_s(x′)] = Σ_{y∈{0,1}^n} Pr[h_s(x) = y] · Pr[h_s(x′) = y] = 2^n · 2^{−2n} = 2^{−n}

which is exactly the perfect universality bound.
Theorem 20. Assuming F is a prf with n-bit domain and H is au, then F′ = F(H) is a prf on an (n·t)-bit domain, for t ≥ 1.

The three games are the following:
• Real_{F,A}(λ) (figure 9.32): C samples k ←$ {0,1}^λ and s ←$ {0,1}^λ; on each query x it returns y = F_k(h_s(x)).
• HYB_{R,A}(λ) (figure 9.33): C samples R ←$ R(λ, n, l) and s ←$ {0,1}^λ; on each query x it returns y = R(h_s(x)).
• Rand_{R′,A}(λ) (figure 9.34): C samples R′ ←$ R(λ, n·t, l); on each query x it returns y = R′(x).
Proof. This proof too will proceed by hybridizing the original game up to the ideal random one. Consider the three games depicted in figures 9.32, 9.33 and 9.34:
Lemma 15.
RealF ,A (λ) ≈c HYBR,A (λ)

Proof. The proof is left as exercise.
Lemma 16.
HYBR,A (λ) ≈c RandR0 ,A (λ)

Proof. Again, collisions come into play here, but in a much sneakier way. Given two queries with arguments x_1, x_2 returning the same image y, the random game can model two scenarios:
• the arguments are equal (a repeated query);
• the arguments are distinct, and the images collide by chance, which happens with negligible probability;
while the hybrid can model three of them:
• the arguments are equal (again, a repeated query);
• the arguments are distinct, and so are their hashes;
• the arguments are distinct, but their hashes are not.
We want to show that collisions at the hash level happen negligibly: as long as they don't happen, the random function R is run over a sequence of distinct points, and behaves just as the function in the random game does. So let Bad be the event:

∃ i ≠ j ∈ [q] : h_s(x_i) = h_s(x_j)

where q denotes the adversary's query count. It suffices to show that Pr[Bad] ∈ negl(λ).
Since we don't care what happens after a collision, we can alternatively consider a mental experiment where we answer all queries at random, and only at the end sample s ←$ {0,1}^λ and check for collisions: this does not change the value of Pr[Bad]. Now the queries are independent of s, and this eases our proof:

Pr_s[Bad] = Pr_s[∃ x_i ≠ x_j : h_s(x_i) = h_s(x_j)]
          ≤ Σ_{i≠j} Pr_s[h_s(x_i) = h_s(x_j)]        (union bound)
          ≤ (q choose 2) · negl(λ) ∈ negl(λ)          (h_s is au by definition)

By ruling out this event, the lemma is proven.
So now we have Real ≈c HYB ≈c Rand.
Corollary 1. Let Π = (Tag, Verify) be a variable length message mac scheme where, given a prf F_k and an au hash function family h_s, the tagging function is defined as F_k(h_s(·)). Then Π is ufcma-secure.

9.2.2 Hash function families from finite fields
A generic finite field of order 2^n has very useful properties: adding two of its elements amounts to xor-ing their binary representations, while multiplying them is polynomial multiplication modulo an irreducible polynomial of degree n. It is possible to define a hash function family that makes good use of these properties, and is suitable for a ufcma-secure mac scheme.
Construction 2. Let F = GF(2^n) be a finite field (or “Galois field”) of 2^n elements, and let m = (m_1, . . . , m_t) ∈ F^t and s = (s_1, . . . , s_t) ∈ F^t. The desired hash function family will have this form:

h_s(m) = Σ_{i=1}^{t} s_i m_i = ⟨s, m⟩

Lemma 17. The above function family h_s is almost universal.
Proof. In order for h_s to be almost universal, collisions must happen with negligible probability. Suppose we have a collision between two distinct messages m and m′:

Σ_{i=1}^{t} m_i s_i = Σ_{i=1}^{t} m′_i s_i

Let δ_i = m_i − m′_i. Then, by the previous equation, when a collision happens:

0 = Σ_{i=1}^{t} m_i s_i − Σ_{i=1}^{t} m′_i s_i = Σ_{i=1}^{t} δ_i s_i

Since the messages are different from each other, there is at least one index i with δ_i ≠ 0. Assume, without loss of generality, that δ_1 ≠ 0; the sum can then be split between the first term δ_1 s_1 and the rest:

δ_1 s_1 + Σ_{i=2}^{t} δ_i s_i = 0

s_1 = −δ_1^{−1} Σ_{i=2}^{t} δ_i s_i

which means that, when a collision happens, s_1 must equal one specific element of F determined by the remaining seed components. But since every seed component is chosen at random from F, even conditioned on s_2, . . . , s_t, the probability of picking that particular s_1 is just |F|^{−1} = 2^{−n} ∈ negl(λ); therefore the hash function family h_s is almost universal.
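The construction can be sketched over the small field GF(2^8) (Python; the reduction polynomial 0x11b is the AES one, chosen only for concreteness):

```python
import os

def gf_mul(a: int, b: int) -> int:
    # Multiplication in GF(2^8) modulo x^8 + x^4 + x^3 + x + 1 (0x11b)
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x100:
            a ^= 0x11b
        b >>= 1
    return r

def h(s: list[int], m: list[int]) -> int:
    # h_s(m) = <s, m>: addition in GF(2^n) is XOR
    out = 0
    for si, mi in zip(s, m):
        out ^= gf_mul(si, mi)
    return out

m, m2 = [1, 2, 3, 4], [1, 2, 3, 5]
s = list(os.urandom(4))                      # random seed in F^t
delta = [a ^ b for a, b in zip(m, m2)]       # m - m' (subtraction is XOR)
# Linearity over GF(2^n): h_s(m) = h_s(m') iff h_s(m - m') = 0
assert h(s, m) ^ h(s, m2) == h(s, delta)
```

The final assertion is exactly the linearity used in the proof: a collision h_s(m) = h_s(m′) is the same event as h_s(m ⊕ m′) = 0.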

H with Galois field elements and polynomials
Construction 3. Take F = GF(2^n), a Galois field of 2^n elements. Let m = (m_1, ..., m_t) ∈ F^t and s ←$ F. We state that

h_s(m) = Σ_{i=1}^{t} s^{i−1} m_i = q_m(s)
Exercise 21. Prove that this construction is almost universal.
Proof sketch: to be almost universal, looking at the definition, collisions with m ≠ m′ must happen with negligible probability. So consider such a collision: it must be true that

Σ_{i=1}^{t} m_i s^{i−1} = Σ_{i=1}^{t} m′_i s^{i−1} ⇔ Σ_{i=1}^{t} (m_i − m′_i) s^{i−1} = 0 ⇔ q_{m−m′}(s) = 0

When does a non-zero polynomial evaluate to 0? Exactly when s is one of its roots, and a polynomial has at most as many roots as its degree. The degree of q_{m−m′} is at most t − 1, so the probability of picking one of its roots from F as the seed of h_s(·) is

Pr[s is a root] ≤ (t − 1)/2^n ∈ negl(λ)
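Construction 3 can be sketched over GF(2^8) (Python; the AES reduction polynomial 0x11b is chosen only for concreteness), evaluating q_m(s) with Horner's rule:

```python
import os

def gf_mul(a: int, b: int) -> int:
    # GF(2^8) multiplication modulo x^8 + x^4 + x^3 + x + 1 (0x11b)
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x100:
            a ^= 0x11b
        b >>= 1
    return r

def h(s: int, m: list[int]) -> int:
    # h_s(m) = sum_i s^(i-1) * m_i = q_m(s), via Horner's rule;
    # addition in GF(2^n) is XOR
    out = 0
    for mi in reversed(m):
        out = gf_mul(out, s) ^ mi
    return out

s = os.urandom(1)[0]
# Linearity: a collision h_s(m) = h_s(m') means q_{m-m'}(s) = 0
assert h(s, [1, 2, 3]) ^ h(s, [1, 2, 7]) == h(s, [0, 0, 3 ^ 7])
assert h(0, [7, 9]) == 7   # s = 0 kills every term except m_1
```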

Lesson 10

10.1 Domain extension for prf-based mac schemes

10.1.1 Hash function families from prfs
Another way to obtain domain extension for a mac scheme, using the F_k(h_s(·)) approach, is to construct the hash function family from another prf. We expect to have:
• Pr[h_s(m) = h_s(m′) : s ←$ {0,1}^λ, (m, m′) ←$ A(1^λ)] ∈ negl(λ);
• two prfs are needed: one is F_k, and the other is F_s.

10.1.2 xor-mode
Assume that we have this function

h_s(m) = F_s(m_1||1) ⊕ ... ⊕ F_s(m_t||t)

so that the input to the prf F_s(·) is n + log_2 t bits long.

Lemma 18. The above H is computationally au if F_s is a prf.
Proof. The proof is left as exercise.
(Hint: the two pseudorandom functions can be obtained from a single one, e.g. F_s = F′_k(0, · ) and F_k = F′_k(1, · ).)
Possible proof sketch: we have to show that, for m ≠ m′,

Pr[h_s(m) = h_s(m′)] = Pr[⊕_{i=1}^{t} F_s(m_i||i) = ⊕_{i=1}^{t} F_s(m′_i||i)] ∈ negl(λ)

Since m ≠ m′, there is at least one index i with m_i ≠ m′_i, so the input m_i||i is queried on one side of the equation and nowhere else (the index suffix rules out matches at other positions). Replacing F_s with a truly random function, F_s(m_i||i) is then a fresh uniform n-bit value, independent of everything else, and the two xors coincide only if it hits one specific value α out of 2^n candidates; hence the collision probability is negligible.

10.1.3 cbc-mode mac scheme
This is part of the standard, used in TLS. It is used with a prf Fs, setting the
starting vector iv = 0^n = c0 and running the prf as in cbc. The last block
obtained by the whole process is the message's tag:

    hs(m) = Fs(mt ⊕ Fs(mt−1 ⊕ Fs(. . . Fs(m2 ⊕ Fs(m1 ⊕ iv)) . . . )))

Lemma 19. CBC-MAC defines a computational AU family.


Proof. (not proven)
We can use this function to create an encrypted CBC, or E-CBC:

    E-CBC_{k,s}(m) = Fk(h^CBC_s(m))
Theorem 22. Actually, if F is a prf, CBC-MAC itself is already a MAC with
domain {0, 1}^{n·t}, for arbitrarily fixed t ∈ N.
Proof. (not proven)

10.1.4 XOR MAC


Instead of Fk(hs(·)), now the Tag() function outputs φ = (η, Fk(η) ⊕ hs(m)),
where η ←$ {0, 1}^n is random and is called a nonce.
Authentication is done as:
(m, (η, Fk (η) ⊕ hs (m)))
When I want to verify a message and I receive the pair (m, (η, v)), I just check
that v = Fk(η) ⊕ hs(m). It should be hard to find a value a such that, given
m ≠ m′,

    hs(m) ⊕ a = hs(m′)
In fact, since an adversary who wants to break this scheme has to send a
valid couple (m∗ , φ∗ ) after some queries, he could:
• ask for message m and store the tag (η, Fk (η) ⊕ hs (m))
• try to find a = hs(m) ⊕ hs(m′) and replace the second component of the
stored tag with v ⊕ a,
so now he could send the authenticated message
(m0 , (η, Fk (η) ⊕ hs (m0 )))
which is a valid message.
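A sketch of the randomized XOR-MAC; here both Fk and hs are stood in by HMAC-SHA256 (assumptions — any prf and any computational AXU family would do):

```python
# XOR-MAC: Tag outputs phi = (eta, F_k(eta) XOR h_s(m)) for a fresh nonce eta.
import hmac, hashlib, os

def F(key: bytes, x: bytes) -> bytes:
    return hmac.new(key, x, hashlib.sha256).digest()

def tag(k: bytes, s: bytes, m: bytes):
    eta = os.urandom(32)                               # the nonce
    mask = F(k, eta)
    hs = F(s, m)                                       # stand-in for h_s(m)
    return eta, bytes(a ^ b for a, b in zip(mask, hs))

def verify(k: bytes, s: bytes, m: bytes, phi) -> bool:
    eta, v = phi
    expected = bytes(a ^ b for a, b in zip(F(k, eta), F(s, m)))
    return hmac.compare_digest(v, expected)
```

The forgery strategy above corresponds to XORing a = hs(m) ⊕ hs(m′) into the second component of a recorded tag, which is why the family must be almost *xor* universal.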

T0D0 14: AXU property definition is missing

Lemma 20. XOR mode gives computational AXU (Almost Xor Universal) 
Proof. (not proven)
Theorem 23. If F is a PRF and H is computational AXU, then XOR-MAC
is a MAC. 
Proof. (not proven)

Summary

T0D0 15: not sure what to do with this bullet list...

With variable input length:

• AXU based XOR mode is secure;


• F(H) is insecure with polynomial construction hs (m) = qm (s), but can
be fixed;
• CBC-MAC is not secure, left as exercise;

• E-CBC is secure.

10.2 Cca-security
Going back to the encryption realm, a new attack notion for a ske scheme
will be introduced. Now the adversary can query a decryption oracle, along
with the cpa-related encryption oracle, for polynomially many queries. This
attack is called the Chosen Ciphertext Attack14, and schemes that are proven
to be cca-secure are also said to be non-malleable, on the reasoning that an
attacker cannot craft fresh valid ciphertexts from other valid ones.

A Ccca

k ←$ K
m
c r chosen uar
c = Enck (m, r)

c
m
m = Deck (c)

m∗0 , m∗1
c∗b r∗ chosen uar
b ←$ {0, 1}
c∗ = Enck (m∗b , r∗ )
m
c r chosen uar
c = Enck (m, r)

c
m
m = Deck (c)

b0
Output 1 iff b0 = b

Figure 10.35: The chosen ciphertext attack, on top of cpa

Exercise 24. Show that the scheme ΠF defined in theorem 17, while cpa-
secure, is not cca-secure.
Proof. Let m0 and m1 be the messages the adversary sends to the challenger as
the challenge plaintexts; on receiving the ciphertext cb = (r, Fk(r) ⊕ mb),
with b ←$ {0, 1}, the adversary crafts another ciphertext with an arbitrary
value α:

    ĉb = (r, Fk(r) ⊕ mb ⊕ α)


14 Different versions of the cca notion exist. The one defined here is also called cca2, or

adaptive Chosen Ciphertext Attack

and queries the decryption oracle on it. The latter will decrypt the new cipher-
text and return a plaintext, which can be easily manipulated by the adversary
to reveal exactly which message was encrypted during the challenge:

    Deck(ĉb) = Deck(r, Fk(r) ⊕ mb ⊕ α)
             = Fk(r) ⊕ Fk(r) ⊕ mb ⊕ α
             = mb ⊕ α

Therefore, the adversary certainly wins after just one decryption query, prov-
ing the scheme’s vulnerability to cca attacks.
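The attack can be run end to end; HMAC-SHA256 is a stand-in for the prf Fk, and the plaintext and α values are arbitrary choices for the demo:

```python
# Malleability attack on Pi_F with c = (r, F_k(r) XOR m): XORing any alpha
# into the second component yields a fresh valid ciphertext decrypting to
# m XOR alpha, so one decryption query reveals the challenge plaintext.
import hmac, hashlib, os

def F(k, r):
    return hmac.new(k, r, hashlib.sha256).digest()

def enc(k, m: bytes):
    r = os.urandom(32)
    return r, bytes(a ^ b for a, b in zip(F(k, r), m))

def dec(k, c):
    r, body = c
    return bytes(a ^ b for a, b in zip(F(k, r), body))

k = os.urandom(32)
m_b = b"attack at dawn..... pad to 32 b!"   # 32-byte challenge plaintext
r, body = enc(k, m_b)
alpha = bytes(31) + b"\x01"                 # arbitrary alpha != 0
mauled = (r, bytes(a ^ b for a, b in zip(body, alpha)))
# the decryption oracle answers, since mauled != (r, body):
recovered = bytes(a ^ b for a, b in zip(dec(k, mauled), alpha))
assert recovered == m_b
```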

10.3 Authenticated encryption


Instead of tackling the cca-security problem upfront, it might be useful to
consider a construction that achieves both secrecy and authentication: that is,
a scheme that encrypts messages and authenticates their respective senders at
the same time. In the encryption setting, such a scheme is defined as being
cpa-secure with an additional auth property denoting the scheme’s resistance
to forgeries, much like its mac cousins. The game shown in figure 10.36 models
this auth property of a scheme Π = (Enc, Dec), with an additional quirk to
the decryption routine:

Dec : K × C → M ∪ {⊥}

where the ⊥ value is returned whenever the decryption algorithm is supplied an
invalid or malformed ciphertext.

A Cauth

k ←$ K
m
c r chosen uar
c∈C c = Enck (m, r)

c0
c0 ∈
/C Output 1 iff Deck (c0 ) 6=⊥

Figure 10.36: Game^auth_{Π,A}(λ)

Theorem 25. Let Π be a ske scheme. If it is cpa-secure, and has the auth
property, then it is also cca-secure. 
Proof. The proof is left as an exercise.
Hint: consider the experiment where Dec(k, c) behaves as follows:

• if c is not fresh (i.e. it is the output of a previous encryption query for
  some m), output that m;

• else output ⊥.

The approach would be to reduce cca to cpa; given Dcca, we can build Dcpa.
Dcca will ask decryption queries, but Dcpa can answer them using just the two
rules shown above, i.e. it can reply only for ciphertexts c whose plaintext m
it has already asked to its challenger C.

10.3.1 Combining ske & mac schemes


Let Π1 = (Enc, Dec) be a ske scheme, and Π2 = (Tag, Verify) be a mac scheme;
there are 3 ways to combine them into an authenticated encryption scheme:
• Encrypt-and-Tag:
1. c ←$ Enc(k1 , m)
2. φ ←$ T ag(k2 , m)
3. c∗ = (c, φ)
• Tag-then-encrypt:
1. φ ←$ T ag(k2 , m)
2. c ←$ Enc(k1 , (φ, m))
3. c∗ = c
• Encrypt-then-Tag:
1. c ←$ Enc(k1 , m)
2. φ ←$ T ag(k2 , c)
3. c∗ = (c, φ)

Of the three options, only the last one is proven to be cca-secure for arbitrary
scheme choices; the other approaches are not secure “a priori”, although some
specific pairs of schemes are proven secure by themselves. Notable examples are
the Transport Layer Security (tls) protocol, which employs the second strategy,
and has been proven secure because of the chosen encryption scheme; Secure SHell
(ssh) instead uses the first strategy.
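A minimal Encrypt-then-Tag sketch, combining a Π_F-style stream encryption with HMAC-SHA256 as the mac (both are stand-ins chosen just for the demo):

```python
# Encrypt-then-Tag: encrypt with k1, then mac the *ciphertext* with k2.
import hmac, hashlib, os

def prf(k, x):
    return hmac.new(k, x, hashlib.sha256).digest()

def ae_enc(k1, k2, m: bytes):
    # messages up to 32 bytes in this sketch
    r = os.urandom(32)
    pad = prf(k1, r)
    c = r + bytes(a ^ b for a, b in zip(pad, m))
    phi = prf(k2, c)                  # the tag covers the whole ciphertext
    return c, phi

def ae_dec(k1, k2, c, phi):
    if not hmac.compare_digest(prf(k2, c), phi):
        return None                   # reject (the "bottom" output)
    r, body = c[:32], c[32:]
    return bytes(a ^ b for a, b in zip(prf(k1, r), body))
```

Because the tag is checked before any decryption happens, a mauled ciphertext is rejected outright, which is the intuition behind Theorem 26 below.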
Theorem 26. If an authenticated encryption scheme Π is made by combining
a cpa-secure ske scheme Π1 with a strongly unforgeable mac scheme Π2 via
the Encrypt-then-Tag method, then Π is cpa-secure and auth-secure.
Proof. Assume that Π is not cpa-secure; then an adversary can use the resulting
distinguisher Dcpa to mount a successful cpa against Π1, as shown in figure
10.37: the point is to run the two components of Π separately.

T0D0 16: Professor says that we have to show that Gamecpa (λ, 0) ≈c
Gamecpa (λ, 1) , but why??? Isn’t this proof enough?

53
Dcpa A Ccpa1

k1 ←$ K1
k2 ←$ K2
m
m
c
c ←$ Enck (m)
(c, φ)
φ ←$ T agk (c)

m∗0 , m∗1
m∗0 , m∗1
c∗ b ←$ {0, 1}
(c, φ ) ∗ c∗ ←$ Enck (mb )
φ∗ ←$ T agk (c∗ )

m
m
c
c ←$ Enck (m)
(c, φ)
φ ←$ T agk (c)

b0
b0
Output 1 iff b0 = b

Figure 10.37

Having proved the cpa-security property, we now have to prove, in a similar
way, that the auth property must hold for Π if Π2 is an auth-secure scheme.

Exercise 27. Prove it!


Similar to the cpa-security proof.

Lesson 11

11.1 Authenticated encryption (continued)


Having proven cpa-security of an authenticated encryption scheme in an Encrypt-
then-Tag mode, it remains to prove that it has the auth property. Before this,
a new unforgeability definition is needed:
Definition 14. Let Π = (Tag, Verify) be a mac scheme. Then Π is eufcma-
secure iff it is ufcma-secure, that is:
    Pr[Game^ufcma_{Π,A}(λ) = 1] ∈ negl(λ)

with the additional restriction that the tag φ∗ of the forged message must be
“fresh” itself. 
Note the small difference between ufcma and eufcma: the latter also rules out
forgeries that reuse an old message with a new tag (strong unforgeability).
Theorem 28. Let Π = (Enc, Dec, T ag, V erif y) be an authenticated encryption
scheme, composed by a ske scheme Π1 and a mac scheme Π2 . If Π2 is eufcma,
then Π has the auth property. 
Proof. The proof is analogous to the previous proof regarding the scheme’s cpa
security. Suppose that Π has not the auth property; then an adversary can use
the distinguisher Dauth to successfully forge authenticated messages with fresh
signatures against Π2 , as depicted in figure 11.38.
From Aauth perspective, all the couples (ci , φi ) received are made with the
following schema:

    ci ←$ Enc(k1, m ∈ M) ∧ φi ←$ Tag(k2, ci)
Since A_auth wins Game^auth, the challenge pair (c∗, φ∗) which breaks
Game^auth will be produced so that it decrypts and verifies correctly:

    Dec(k, (c∗, φ∗)) → Dec(k1, c∗) ∈ M ∧ Verify(k2, c∗, φ∗) = 1


But if this happens, then A can use the same challenge pair of A_auth to
win Game^eufcma, which is impossible.
It could happen that, for c∗ = c previously seen, φ∗ is a fresh tag, never
seen before. In this case the auth game would still be won, because the pair
(c∗, φ∗) was never seen before, but the plain ufcma game would not, because
c∗ was previously sent to the challenger.
This is why we want an eufcma-secure scheme: one able to resist message-tag
forgeries where the tag is fresh but the message has already been requested
to the challenger.

Dauth A Ceufcma2

m
c
c ←$ Enc(k1 , m)
φ
φ ←$ T ag(k2 , c)
c∈C (c, φ)
φ∈Φ

c∗ ∈
/C (c∗ , φ∗ )
φ∗ ∈
/Φ (c∗ , φ∗ ) Output 1 iff
Verifyk (c∗ , φ∗ ) = 1

Figure 11.38: Breaking authenticity of Π2

11.2 Pseudorandom permutations

T0D0 17: Luby-Rackoff is cited here, but it’s related to both prfs-prps and
the Feistel network analysis

Nothing prevents a prf Fk from being bijective; in this case, it is referred
to as a pseudorandom permutation, or prp in short. Their definition is
analogous to a generic prf: as shown in figures 11.39 and 11.40, prps are
computationally indistinguishable from a random permutation:

    Real_{F,A}(λ) ≈c Ideal_{F,A}(λ)

An important difference is that Fk is efficiently invertible, although
knowledge of k is required in order to do so.

A C

k ←$ {0, 1}λ
x
y
y = Fk (x)
b0

Figure 11.39: RealF ,A (λ)

11.2.1 Feistel network


Prps have been successfully constructed by plugging existing prfs into what is
called a Feistel network. As a starting point, let F : {0, 1}^n → {0, 1}^n be a prf,

A C

P ←$ P(λ, n)
x
y
y = P (x)
b0

Figure 11.40: IdealP,A (λ)

and define the function ψF as follows:

    ψF(x, y) = (y, x ⊕ F(y)) = (x′, y′)

    ψF⁻¹(x′, y′) = (F(x′) ⊕ y′, x′) = (F(y) ⊕ x ⊕ F(y), y) = (x, y)

Figure 11.41: A single-round Feistel network
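A single round and its inverse can be sketched in code (HMAC-SHA256 truncated to 4 bytes is a hypothetical stand-in for the prf F); note how the first half of the output is just y:

```python
# One Feistel round psi_F, its inverse, and chaining into a 3-round network.
import hmac, hashlib

def F(k: bytes, x: bytes) -> bytes:
    return hmac.new(k, x, hashlib.sha256).digest()[:4]   # toy 4-byte prf

def xor(a, b):
    return bytes(p ^ q for p, q in zip(a, b))

def psi(k, x, y):
    return y, xor(x, F(k, y))          # psi_F(x, y) = (y, x XOR F(y))

def psi_inv(k, xp, yp):
    return xor(F(k, xp), yp), xp       # recovers (x, y)

def feistel3(keys, x, y):
    # three rounds with independent seeds, as in Theorem 29
    for k in keys:
        x, y = psi(k, x, y)
    return x, y
```

Invertibility holds round by round regardless of F, which is why the construction turns any prf into a permutation; pseudorandomness is what requires the extra rounds.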

While this construct is invertible and uses a prf, it is not pseudorandom
itself, because the first n bits of ψF's image are always equal to y, and thus
visible to any adversary. A first attempt at fixing this vulnerability would
be to apply the construct twice with two different prfs, ψ²_{F,F′}, in an
attempt to “hide” y. Yet, this approach still leaks valuable information:

    ψ²_{F,F′}(x, z) ⊕ ψ²_{F,F′}(y, z) = (x ⊕ y, . . . )

However, this example with additional restrictions will be useful very soon,
so it is reworded as the following lemma:
Lemma 21. For any unbounded adversary making q ∈ poly(λ) queries, the
following games in figure 11.42 are statistically close as long as y1 , . . . , yq are
mutually distinct. 

Proof. T0D0 18: Idea: Hybridize over the queries before the challenge, from
PR to random; prove that the stat distance between i and i+1 is negligible

A C
x, y
ψ²_{F,F′}(x, y)
F, F 0 ←$ R(λ, n, n)

A C
x, y
R(x, y)
R ←$ R(λ, 2n, 2n)

Figure 11.42

Going back to Feistel networks in general, it should be easy to see that
they can be made of an arbitrary number of rounds, by simply chaining output
with input. The l-th iteration is denoted as:

    ψ^l_Φ(x, y) = ψ_{F′}(ψ_{F′′}(. . . ψ_{F^(l)}(x, y) . . .))

where Φ is the sequence of prfs used at each single step. It can be shown
that the number of rounds needed to obtain a network that is indeed
pseudorandom is just 3; also, the same prf can be used, as it is sufficient to
change the seed on each iteration15:

Theorem 29. Let F_i, F_j, F_k be a prf under three seeds. Then ψ³_{i,j,k} is a prp.

Proof. T0D0 19: Idea: Four total games: original, swap prfs with random
functions, swap the three functions with a single one (use previous lemma),
swap random function with random permutation (avoid bad events generated
by injection property)

15 That is actually the purpose of using a prf

Lesson 12

12.1 Hashing
12.1.1 Merkle-Damgård construction
12.1.2 Domain extension
12.1.3 Compression functions

Lesson 13

13.1 Number theory


13.2 Standard model assumptions

Lesson 14

14.0.1 Decisional Diffie-Hellman assumption


14.0.2 Naor-Reingold encryption scheme

14.1 Public key encryption schemes

Lesson 15

15.1 Public key encryption recap


Gamepke−cca
Π,A :

A C pke-cca

pk
(pk, sk) ←$ KGen(1λ )

c0
m0

m0 , m1
c b ←$ {0, 1}
c ←$ Enc(pk, mb )

c0
m0

b0
Output 1 iff b0 = b

Figure 15.43: CCA on a PKE scheme

(Reminder): encryptions are made like this: Enc(k, m) = (r, Fk(r) ⊕ m), with
r ←$ Unif({0, 1}^λ).
Every time an encryption is made, a fresh value r is picked uar.

15.1.1 Trapdoor permutation
A TDP (TrapDoor Permutation) is a OWP family that has these features:

• A key pair is chosen UAR by a key generator algorithm: (pk, sk) ←$


KGen(1λ )
• There is a function family fpk ⊆ (Vpk → Vpk ) such that:
– Computing fpk is efficient
– Sampling from the domain (x ←$ Vpk ) is efficient
• There is an efficient function gsk that “inverts” fpk (sk is the “trapdoor”):

g(sk, f (pk, x)) = x

• No efficient adversary is able to invert fpk without knowing sk

Note: since pk is public, any adversary gets the capability of encrypting
messages for free, without requiring an external challenger/oracle!
Therefore, if left deterministic, a TDP-based scheme is not CPA-secure.
Here, in this scheme, we combine randomness and the notion of hardcore
predicate hc:

• (pk, sk) ←$ KGen(1λ )


• r ←$ Vpk
• c := Enc(pk, m) = (fpk (r), hc(r) ⊕ m)
• Correctness: Dec(sk, c) = hc(gsk (c1 )) ⊕ c2

Theorem 30. If (KGen, f, g) is a TDP, and hc is hardcore for f , then the


above scheme is CPA-secure. 
Proof: (Exercise)

T0D0 20: Apparently, the reduction here is not easy at all. Some hints are
needed to solve this exercise.

15.1.2 TDP examples


One example stems from the factoring problem: let's look again at Z*_n, where
n = pq, with p, q ∈ P.

Theorem 31. (Chinese remainder, or CRT): The following isomorphisms hold:

• Zn ≅ Zp × Zq

• Z*_n ≅ Z*_p × Z*_q

Note that the theorem is more general, and holds for any p, q that are
coprime.

How to use this theorem for constructing a PKE scheme:
Reminder (Euler's theorem):

    ∀x ∈ Z*_n : x^ϕ(n) ≡ 1 (mod n)

Reminder: ∀p, q ∈ P =⇒ ϕ(pq) = (p − 1)(q − 1)


So let a be the public key such that gcd(a, ϕ(n)) = 1; then ∃!b : ab ≡ 1
(mod ϕ(n)), and b will be our private key.
Define encryption as f(a, m) = m^a mod n, and then decryption as
g(b, c) = c^b mod n.
Observe that

    g(b, f(a, m)) = (m^a)^b = m^{ab} = m^{kϕ(n)+1} = (m^{ϕ(n)})^k · m = m (mod n)

because ab ≡ 1 (mod ϕ(n)).
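A numeric check of this correctness chain, with the classic toy primes p = 61 and q = 53 (real deployments use primes thousands of bits long):

```python
# Toy check of g(b, f(a, m)) = m with small numbers.
from math import gcd

p, q = 61, 53
n, phi = p * q, (p - 1) * (q - 1)      # n = 3233, phi(n) = 3120
a = 17                                 # public key, gcd(a, phi(n)) = 1
assert gcd(a, phi) == 1
b = pow(a, -1, phi)                    # private key: a*b = 1 (mod phi(n))

m = 65
c = pow(m, a, n)                       # f(a, m) = m^a mod n
assert pow(c, b, n) == m               # g(b, f(a, m)) = m
```

The modular inverse via `pow(a, -1, phi)` requires Python 3.8+.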


So we conjecture that the above is a valid TDP-based PKE scheme. This is
actually called the “RSA assumption”:

A                                  C_rsa

                                   m ←$ Z*_n
              n, pk, c
                                   c = Enc(pk, m)

              m′
                                   Output 1 iff m′ = m

Figure 15.44: Depiction of the RSA assumption

Relation to the factoring problem: rsa ⇒ fact.

Proof: given p, q, an adversary can compute ϕ(n) = (p − 1)(q − 1), and then
find the inverse of the public key modulo ϕ(n); thus an algorithm that breaks
factoring also breaks rsa, so the rsa assumption implies the factoring
assumption.
It hasn't been proven that fact ⇒ rsa.

15.2 Textbook RSA


This is an insecure toy example of the more complex RSA (Rivest Shamir
Adleman) algorithm. The key generation algorithm KGen = GenRSA(1^λ) outputs
pk = (n, e) and sk = d; then we have

    Enc(pk, m) = m^e (mod n)
    Dec(sk, c) = c^d (mod n)

Since the output of Enc is deterministic, this is not CPA-secure! However
it can be used with a hard-core predicate.
Preprocess the message to add randomness:

    m̂ = r‖m where r ←$ {0, 1}^l

now Enc is not deterministic.


Facts:

1. We need l ∈ ω(log(λ)), otherwise it is possible to bruteforce r in PPT.

2. If m ∈ {0, 1} (a single bit), then the scheme can be proven CPA-secure
   under RSA (just use the standard TDP construction with a hardcore
   predicate).

3. If m is “in the middle” (longer than one bit, but short enough that r
   stays superlogarithmic), RSA with this padding is believed to be secure
   and is standardized (PKCS#1 v1.5).

4. Still not CCA-secure! (counterexample?)

15.2.1 Trapdoor Permutation from Factoring


Let's look at f(x) = x² (mod n) where f : Z*_n → QR_n (⊂ Z*_n); this is not a
permutation in general.
Now let's consider the Chinese Remainder Theorem (CRT) representation:

    x = (xp, xq), with xp ≡ x (mod p), xq ≡ x (mod q)

and study squaring modulo a single prime first:

    f(x) = x² (mod p), x ←$ Z*_p

Since Z*_p is cyclic, for a generator g I can always write:

    Z*_p = {g⁰, g¹, g², . . . , g^{(p−1)/2 − 1}, g^{(p−1)/2}, . . . , g^{p−2}}

    QR_p = {g⁰, g², g⁴, . . . , g^{p−3}}

since squaring wraps the exponents around mod p − 1 and hits each square
twice, so

    |QR_p| = (p − 1)/2

Moreover, since (g^{(p−1)/2})² ≡ 1 (mod p) and g^{(p−1)/2} cannot be 1
(because g⁰ ≠ g^{(p−1)/2} ≠ g^{p−1} among the p − 1 distinct elements of
Z*_p), it must be that g^{(p−1)/2} ≡ −1 (mod p).

Now it's possible to show that f : QR_p → QR_p is a permutation, and we are
going to show a method to invert it, i.e. to compute f⁻¹.

Assume p ≡ 3 (mod 4), so p = 4t + 3 for t = (p − 3)/4. Then squaring mod p is
a permutation over QR_p because, given y = x² (mod p), if I compute:

    (y^{t+1})² = y^{2t+2} = y^{(p−1)/2 + 1} = (x²)^{(p−1)/2} · x² = 1 · x² = x²

(using 2t + 2 = (p − 3)/2 + 2 = (p + 1)/2 = (p − 1)/2 + 1, and Fermat's
little theorem for (x²)^{(p−1)/2} = x^{p−1} = 1), then

    x = ±y^{t+1}

But only 1 among ±y^{t+1} is itself a square.
Since we have that

    p = 4t + 3 ⇒ (p − 1)/2 = (4t + 2)/2 = 2t + 1

(p − 1)/2 is odd.

Now, since we are considering just QR_p = {y ∈ Z*_p : ∃x ∈ Z*_p, x² = y}, and
we can write each x ∈ Z*_p as g^z for some z ∈ Z_{p−1},

    y = x² ⇔ y = (g^z)² = g^{2z}

So y = g^{z′} ∈ QR_p ⇔ z′ is even; if z′ is odd, then y ∉ QR_p.
Since (p − 1)/2 is odd, g^{(p−1)/2} ∉ QR_p, and every odd power of g can be
written as

    g^{odd} = g^{(p−1)/2 ± even} = g^{(p−1)/2} · g^{±even} = −1 · g^{±even}

so g raised to an odd exponent is −1 times a square.

From that, it's possible to state the following:

Lemma 22. ∀z, z ∈ QR_p ⇒ −z ∉ QR_p

15.2.2 Rabin’s Trapdoor permutation


Now we study a one way function built on the previous deductions about number
theory and modular arithmetic.
The Rabin trapdoor permutation is defined as

    f(x) = x² mod n

where n = p · q for primes p ≡ q ≡ 3 (mod 4).

We can observe that the image of this function is QR_n, a subset of Z*_n.
By the Chinese remainder theorem it is possible to state that f maps as
follows:

    x = (xp, xq) ↦ (xp², xq²)
since each element of Zn has two coordinates, one in Zp and one in Zq.

So

    y ∈ QR_n ⇔ yp ∈ QR_p ∧ yq ∈ QR_q

As before, the image of f is exactly

    QR_n = {y : ∃x : y = x² mod n}

If we try to invert the function f, even without applying the previous
inversion algorithm, we easily note that among the 4 possible values

    f⁻¹(y) = {(xp, xq), (−xp, xq), (xp, −xq), (−xp, −xq)}

only 1 is a quadratic residue, since we said, in the last lemma, that only
one out of −xk, xk is a quadratic residue, for k = p, q.

Therefore, we have that Rabin's TDP is a permutation over QR_n, and that the
cardinality of QR_n is |Z*_n|/4.

Furthermore, with the following claim we can state that the Rabin
cryptosystem is a OWF thanks to the hardness of factoring.

Claim 1. Given x, z such that x² ≡ z² ≡ y (mod n),

    x ≠ ±z ⇒ we can factor n

Proof. Since x ≠ ±z, if z ∈ {(xp, xq), (−xp, −xq)} then
x ∈ {(xp, −xq), (−xp, xq)}, and (in CRT coordinates)

    x + z ∈ {(0, 2xq), (2xp, 0)}

Now assume x + z = (2xp, 0) without loss of generality, since the proof for
the other case is the same.
We have that x + z ≡ 0 (mod q) and x + z ≢ 0 (mod p).
But then gcd(x + z, n) = q, and we obtain q.
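A numeric illustration of Claim 1 with hypothetical tiny primes: two square roots x ≠ ±z of the same y reveal a factor of n via gcd(x + z, n).

```python
# Factoring n from two essentially different square roots of the same y.
from math import gcd

p, q = 7, 11                # both = 3 (mod 4), so n is a Blum integer
n = p * q                   # 77
x, z = 4, 18                # 4^2 = 16 and 18^2 = 324 = 16 + 4*77: same square
assert pow(x, 2, n) == pow(z, 2, n)
assert z % n not in (x % n, (-x) % n)   # z != +-x mod n
f = gcd(x + z, n)
assert f in (p, q)          # gcd(22, 77) = 11, a nontrivial factor
```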
Theorem 32. Squaring mod n (where n is a Blum integer16) is a trapdoor
permutation under factoring.

Since we have already shown that Rabin's function is a permutation (it is
invertible over QR_n), we have to show that Rabin's function is also a OWF.
In other words:

    Factoring is hard ⇒ f(x) is a OWF, i.e. inverting it is hard

The following proof is by contradiction.

Proof. Assume that there exists a PPT adversary who, given y = x² mod n, can
find a z ∈ Zn such that z² mod n = y but z ≠ ±x.

ARabin Af act C(p,q,n)←$Genbloom∧p,q≡3 mod 4

n = pq

n, y x ← $Z∗n
y = x2 mod n

z
f (z) = f (x)

We can build the reduction above: A_fact forwards the challenge y to A_Rabin,
who outputs some square root z of y.

Once we obtain z with z² = y and z ≠ ±x (which happens with probability 1/2,
since x is information-theoretically hidden from A_Rabin given y), we can use
Claim 1 (just summing x and z and analyzing the result) to factor n in
polytime.

But factoring n in polytime is assumed to be impossible.

16 A Blum integer is n = pq for primes p ≡ q ≡ 3 (mod 4), as in the definition of Rabin's TDP

Lesson 16

16.1 Pke schemes over ddh assumption


16.1.1 El Gamal scheme
Let’s define a new Π = (KGen, Enc, Dec). Generate the needed (public)
parameters (G, g, q) ←$ GroupGen(1λ )17 .
The KeyGen algorithm is defined as follows:
• Pick x ←$ Zq
• Output the key pair (pk, sk) as (g x , x)

The encryption routine Enc(pk, m) will:


• Pick r ←$ Zq
• Output c = (c1 , c2 ) = (g r , pk r · m)18
The decryption routine Dec(sk, c) will:

• Compute m̂ = c1^{−sk} · c2
Correctness of this scheme follows from a few algebraic steps:

    m̂ = Dec(sk, Enc(pk, m))
       = Dec(x, Enc(g^x, m))
       = Dec(x, (g^r, (g^x)^r · m))
       = (g^r)^{−x} · (g^x)^r · m
       = m
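The whole scheme fits in a few lines over a hypothetical tiny group (QR_23, of prime order q = 11); real instantiations use groups of cryptographic size.

```python
# Toy El Gamal over QR_23 (order 11); messages must be group elements.
import random

p, q = 23, 11               # 23 = 2*11 + 1, so QR_23 has prime order 11
g = 4                       # 4 = 2^2 generates QR_23

def keygen():
    x = random.randrange(1, q)
    return pow(g, x, p), x                          # (pk, sk) = (g^x, x)

def enc(pk, m):
    r = random.randrange(1, q)
    return pow(g, r, p), (pow(pk, r, p) * m) % p    # (g^r, pk^r * m)

def dec(sk, c):
    c1, c2 = c
    # c1^(-sk) * c2, using c1^(q - sk) = c1^(-sk) since c1 has order | q
    return (pow(c1, (q - sk) % q, p) * c2) % p
```

Fresh randomness r on every call to `enc` is what makes the scheme probabilistic, matching footnote 18.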

Theorem 33. Assuming ddh, the El Gamal scheme is cpa-secure. 


Proof. Consider the two following games H0 (λ, b) and H1 (λ, b) defined as fol-
lows. Observe that b can be fixed without loss of generality.

17 G could be any ”valid” group such as QRp or an Elliptic Curve


18 We need r because we want to re-randomize c

H0 (λ, b) :

A C pke

pk x ←$ Zq
pk = g x
(m0 , m1 ) ∈ G
c = (c1 , c2 ) r ←$ Zq
b ←$ {0, 1}
(c1 , c2 ) = (g r , (g x )r mb )
b0
Output 1 iff b = b0

H1 (λ, b) :

A C pke

pk x ←$ Zq
pk = g x
(m0 , m1 ) ∈ G
c = (c1 , c2 ) r ←$ Zq
z ←$ Zq
(c1 , c2 ) = (g r , g z mb )
b0
Output 1 iff b = b0

Note: it is important to note that we can measure the advantage of A over its
fixed output as

    Adv_A(λ) = | Pr[A → 1 | b = 0] − Pr[A → 1 | b = 1] |

where the first event means “A loses” and the second means “A wins”. Since b
is fixed, for a secure scheme the above difference is a value in negl(λ);
equivalently, the success probability of an adversary is 1/2 + negl(λ)
(random guessing plus a negligible factor).

Proof technique: H0(λ, 0) ≈c H1(λ, 0) ≡ H1(λ, 1) ≈c H0(λ, 1)

    ⇒ H0(λ, 0) ≈c H0(λ, 1)

Lemma 23. ∀b ∈ {0, 1}, H0(λ, b) ≈c H1(λ, b)

Fix b. (Reduction to DDH.)
Assume for contradiction that ∃ a PPT D which is able to distinguish H0(λ, b)
from H1(λ, b) with non-negligible probability.
Proof. Consider the following game: if D distinguished the two hybrids, the
reduction below would decide ddh, a contradiction.
D                  D_DDH              C_DDH

(X, Y, Z)          X = g^x, Y = g^y
                   Z = g^{xy} or Z = g^z

h=X

(m0 , m1 )

(Y, Zmb )

b0

b0

Lemma 24. H1(λ, 0) ≡ H1(λ, 1)

Proof. This follows from the fact that:

    (g^x, (g^r, g^z · m0)) ≡ (g^x, (g^r, U_λ)) ≡ (g^x, (g^r, g^z · m1))

Lemma 25. H1(λ, 1) ≈c H0(λ, 1)

This is proved in the exact same way as Lemma 23; as a matter of fact it is
the second part of the proof technique (where b is fixed to 1).

Properties of the El Gamal pke scheme


Some useful observations can be made about this scheme:
• It is homomorphic: given two ciphertexts (c1, c2) and (c′1, c′2),
  their componentwise product is another valid ciphertext:

      (c1 · c′1, c2 · c′2) = (g^{r+r′}, h^{r+r′} · (m · m′))

  thus, decrypting c · c′ gives m · m′.

• It is re-randomizable: given a ciphertext (c1, c2) and r′ ←$ Zq,
  computing (g^{r′} · c1, h^{r′} · c2) results in a “fresh” encryption of the
  same message: the random value used at the encryption step changes from
  the original r to r + r′.
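Both properties can be checked in code over a hypothetical tiny group (QR_23 of order 11; parameters chosen just for the demo):

```python
# Homomorphism and re-randomization of El Gamal ciphertexts.
import random

p, q, g = 23, 11, 4
x = random.randrange(1, q)
pk = pow(g, x, p)

def enc(m, r):
    return pow(g, r, p), (pow(pk, r, p) * m) % p

def dec(c1, c2):
    return (pow(c1, (q - x) % q, p) * c2) % p

m1, m2 = 3, 13              # both elements of QR_23
c = enc(m1, random.randrange(1, q))
c_ = enc(m2, random.randrange(1, q))

# componentwise product encrypts m1 * m2
prod = ((c[0] * c_[0]) % p, (c[1] * c_[1]) % p)
assert dec(*prod) == (m1 * m2) % p

# multiplying by a fresh "encryption of 1" re-randomizes c
r2 = random.randrange(1, q)
fresh = ((c[0] * pow(g, r2, p)) % p, (c[1] * pow(pk, r2, p)) % p)
assert dec(*fresh) == m1
```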

So this is not CCA-Secure!
These properties of the El Gamal scheme can be desirable in some use cases,
where a message must be kept secret from the second party. In fact, there are
some pke schemes which are designed to be fully homomorphic, i.e. they are
homomorphic for any kind of function.
Consider the following use case: a client C has an object x and wants to
apply a function f over it, but it lacks the computational power to execute
f. There is another subject S, which is able to efficiently compute f, so the
goal is to let S compute f(x) while the client keeps x secret from S. This
can be achieved using a fh-pke scheme as follows:

C S

Object: x
Function: f f, c
(pk, sk) ←$ KGen
c ←$ Enc(pk, x)

f (c)
f (x) = Dec(sk, f (c))

Figure 16.45: Delegated secret computation

However one important consideration must be made: all these useful
characteristics expose an inherent malleability of any fully homomorphic
scheme: any attacker can manipulate ciphertexts efficiently, and with some
predictable results. This rules out cca security of such schemes.

16.1.2 Cramer-Shoup pke scheme


This scheme is based on the standard ddh assumption, and has the advan-
tage of being cca secure. A powerful tool, called Designated Verifier Non-
Interactive Zero-Knowledge (DVNIZK), or alternatively Hash-Proof Sys-
tem, is used here.

Proof systems
Let L be a Turing-recognizable language in NP, with a predicate
R : X × Y → {0, 1} such that:

    L := {y ∈ Y : ∃x ∈ X, R(x, y) = 1}
where x is called a “witness” of y.
In our instance, let y = pq, x = (p, q)
Π = (Setup, Prove, Verif y)
(ω, τ ) ←$ Setup(1λ ), where ω is the Common Reference String, and τ is
the trapdoor.
Additional notes:
• ω is public (= pk)

• τ is part of the secret key
• τ = (x, y) : R(x, y) = 1

• There is presumably a common third-party, which samples from the setup


and publishes ω, while giving τ to only B.

A B

ω π = Prove(ω, y, x) τ, y 0 ∈ L
(x, y) s.t. R(x, y) = 1 V er(τ, y) = π̃ ∈ P
check if π = π̃

Figure 16.46: Overview of Cramer-Shoup operation

Proof system — purpose: a way to convince B that A knows something.

The proof can be computed in two different ways (via Prove with the witness,
or via Verify with the trapdoor); this is the core notion behind ZK here:
without τ, the proof reveals nothing, i.e. no τ ⇒ ZK.
Properties
• (implicit, against a malicious Bob) Zero-knowledge: the proof for x can be
  simulated without knowing x itself;

• (stronger, against a malicious Alice) Soundness: it is hard to produce a
  valid proof for any y ∉ L;

• (for honest parties) Completeness: ∀y ∈ L, ∀(ω, τ) ←$ Setup(1^λ):
  Prove(ω, x, y) = Verify(τ, y)

T0D0 21: to review and understand/better

t-universality
Definition 15. Let Π be a DV-NIZK19. We say it is t-universal if for any
distinct

    y1, . . . , yt s.t. yi ∉ L (∀i ∈ [t])

the tuple

    (ω, Ver(τ, y1), . . . , Ver(τ, yt)) is distributed as (ω, v1, . . . , vt)

where (ω, τ) ←$ Setup(1^λ) and v1, . . . , vt ←$ P, with P the space of proofs.
19 Designated verifier non-interactive zero-knowledge

Enriching DV-NIZK
Can we enrich a DV-NIZK with labels l ∈ {0, 1}*?

Suppose we have the following:

    L′ = L‖{0, 1}* = {(y, l) : y ∈ L ∧ l ∈ {0, 1}*}

Then our scheme changes: Prove(ω, (y, l), x) = π; Ver(τ, (y, l)); and, for
t-universality, we now consider t distinct pairs (yi, li).

Membership Hard Language (MH)


Definition 16. A language L is MH if ∃ L̄ such that:

1. L ∩ L̄ = ∅

2. ∃ a PPT Samp outputting y ←$ L together with a witness x ∈ X such that
   R(y, x) = 1 (we write (y, x) ←$ Samp(1^λ))

3. ∃ a PPT Samp̄ outputting y ←$ L̄

4. {y : (y, x) ←$ Samp(1^λ)} ≈c {y : y ←$ Samp̄(1^λ)}


Lesson 17

17.1 Construction of a cca-secure pke


This section exposes a construction of a cca-secure pke scheme, using hash-
proof systems, membership-hardness, and the n-universality property.
Let Π1, Π2 be two distinct hash-proof systems for some np language L, where
Π2 additionally supports labels (L′ = L‖{0, 1}^ℓ).
Construct the cca scheme as follows: Π := (KGen, Enc, Dec)
• ((ω1, ω2), (τ1, τ2)) ←$ KGen(1^λ), where pk = (ω1, ω2) and sk = (τ1, τ2),
  with (ω1, τ1) ←$ Setup1(1^λ), (ω2, τ2) ←$ Setup2(1^λ)

• Encryption routine: Enc((ω1, ω2), m)
  – (y, x) ←$ Sample1(1^λ)
  – π1 ←$ Prove1(ω1, y, x)
  – l := π1 · m
  – π2 ←$ Prove2(ω2, (y, l), x)
  – c := (c1, c2) = ((y, l), π2)

• Decryption routine: Dec((τ1, τ2), (c1, c2))
  – π̂2 = Verify2(τ2, c1)
  – if π̂2 ≠ c2 then output ⊥
  – recall: c1 = (y, l)
  – π̂1 = Verify1(τ1, y)
  – output l · π̂1^{−1}
Correctness (assume π̂i = πi ∀i):

m̂ = Dec(sk, Enc(pk, m))


= Dec((τ1 , τ2 ), Enc((ω1 , ω2 ), m))
= Dec((τ1 , τ2 ), ((y, l), π2 ))
= l · π̂1−1
= π1 · m · π̂1−1
=m

Some additional notes (may be incorrect):

• The message space of the second prover is the range of the first prover.
• The message space of the first prover is a multiplicative group
• The message space of the second prover is a polylogarithmic language in
(λ)
Theorem 34. Assuming π1 is 1-universal, π2 is 2-universal and L is Membership-
hard, then the above scheme is cca-secure. 
Proof. Five different games will be defined, from Game0Π,A up to Game4Π,A ;
the first game will be an analogous formalization of how the above pke scheme
works. It shall be proven that, for either fixed b in {0, 1}:
Game0Π,A (λ, b) = Game1Π,A (λ, b) ≈C Game2Π,A (λ, b) ≈S Game3Π,A (λ, b) = Game4Π,A (λ, b)

and finally that Game4Π,A (λ, 0) = Game4Π,A (λ, 1), therefore concluding that
Game0Π,A (λ, 0) ≈C Game0Π,A (λ, 1), and proving this scheme is cca-secure.

T0D0 22: I am not at all sure where x and y come from, nor where their
sampler is defined

The games are defined as follows:

A Cpke-cca

(ω1 , ω2 )
((ω1 , ω2 ), (τ1 , τ2 )) ←$ KGen(1λ )

m0 , m1
(x, y) ←$ Sample1
c = ((y, (π1 · mb )), π2 ) π1 ←$ P rove1 (ω1 , y, x)
b ←$ {0, 1}
π2 ←$ P rove2 (ω2 , (y, (π1 · mb )), x)

b0
Output 1 iff b = b0

Figure 17.47: Original cca game

T0D0 23: The origin of x and y was not made clear; moreover, the decryption
queries still have to be written down

Lemma 26.
∀b, G0 (λ, b) ≡ G1 (λ, b)


A C1

m0 , m1
(x, y) ←$ Sample1
c = ((y, (π1 · mb )), π2 ) π1 ←$ V erif y1 (τ1 , y)
b ←$ {0, 1}
π2 ←$ V erif y2 (τ2 , (y, (π1 · mb )))

b0
Output 1 iff b = b0

Figure 17.48: Use verifiers

A C2

m0 , m1
(x, y) ←$ Sample1
c = ((y, (π1 · mb )), π2 ) π1 ←$ V erif y1 (τ1 , y)
b ←$ {0, 1}
π2 ←$ V erif y2 (τ2 , (y, (π1 · mb )))

b0
Output 1 iff b = b0

Figure 17.49: Sample statements outside the language

A C3

m0 , m1
(x, y) ←$ Sample1
c = ((y, (π1 · mb )), π2 ) π1 ←$ V erif y1 (τ1 , y)
b ←$ {0, 1}
π2 ←$ V erif y2 (τ2 , (y, (π1 · mb )))

b0
Output 1 iff b = b0

Figure 17.50: Modify decryption queries

A C4

m0 , m1
c = ((y, (π1 · mb )), π2 ) π1 ←$ Im(P rove1 )
b ←$ {0, 1}
π2 ←$ P rove2 (ω2 , (y, (π1 · mb )), x)

b0
Output 1 iff b = b0

Figure 17.51: π1 is chosen uar

Proof. This follows by the correctness of Π1 and Π2:

    π1 = π̃1 = Ver1(τ1, y)
    π2 = π̃2 = Ver2(τ2, (y, l))

with probability 1 over the choices (ω1, τ1) ←$ Setup1(1^λ) and
(ω2, τ2) ←$ Setup2(1^λ), where

    π1 ← Prove1(ω1, y, x),  π2 ← Prove2(ω2, (y, l), x),  ∀y ∈ L, l ∈ P1

Lemma 27.
∀b, G1 (λ, b) ≈c G2 (λ, b)

Proof. Straightforward reduction from membership hardness.
Lemma 28.
∀b, G2 (λ, b) ≈c G3 (λ, b)


T0D0 24: I’m not completely sure the next proof is complete

Proof. Recall that the difference between G2 and G3 is that decryption
queries (y(i), l(i), π2(i)) such that y(i) ∉ L are answered ⊥ in G3, while in
G2 the answer is ⊥ ⇔ π̃2(i) = Ver2(τ2, (y(i), l(i))) ≠ π2(i).
It's possible to distinguish two cases, looking at the challenge
c = (y, l, π2):

1. if (y(i), l(i)) = (y, l) and π̃2(i) = π2(i), the query is the challenge
   ciphertext itself, and both games treat it consistently;

2. otherwise20 (y(i), l(i)) ≠ (y, l): if y(i) ∉ L, we want that π2(i) does
   not hit exactly Ver2(τ2, (y(i), l(i))), so that the answer is ⊥.

BAD EVENT: if we look at the view of A, the only information he knows about
τ2 is

    (ω2, π̃2 = Ver2(τ2, (y, l)))

for the challenge y ∉ L. By 2-universality, the value

    Ver2(τ2, (y(i), l(i)))

for y(i) ∉ L and (y(i), l(i)) ≠ (y, l) is uniformly random given that view.
So,

    Pr[BAD] = 2^{−|P2|}
Lemma 29.
∀b, G3 (λ, b) ≡ G4 (λ, b)

Proof. If we look at the view of A, the only information known about τ1 is
ω1, since the decryption oracle only computes, for y(i) ∈ L,

    Ver1(τ1, y(i)) = Prove1(ω1, y(i), x(i))

By 1-universality, π1 = Ver1(τ1, y) for the challenge y ∉ L is uniformly
random given ω1.


Lemma 30.
G4 (λ, 0) ≡ G4 (λ, 1)

Proof. The challenge ciphertext is independent of b.

REVIEW: referencing something from another part of lesson 17

17.1.1 Instantiation of U-HPS (Universal Hash Proof Sys-


tem)
MHL (Membership Hard Language) from DDH
Consider the DDH language LDDH = {(c1, c2) : ∃r, c1 = g1^r ∧ c2 = g2^r}.
Given a group G of order q with generators (g1, g2), a statement has the form
(g1, g2, c1, c2); imposing g1 = g and g2 = g^a, the previous construction
becomes (g, g^a, c1, c2). By definition c1 = g1^r and c2 = g2^r, so a member
of the language can be written as (g, g^a, g^r, g^{ar}) — exactly a DDH tuple.
Now we can define our U-HPS Π := (Setup, Prove, Verify):

• Setup(1^λ): Pick x1, x2 ←$ Zq and define:
  ω = h1 = g1^x1 · g2^x2,   τ = (x1, x2)

• Prove(ω, (c1, c2), r): output Π = ω^r
  (here r is the witness for y = (c1, c2))

• Verify(τ, (c1, c2)): output Π̃ = c1^x1 · c2^x2

Correctness: Π = ω^r = (g1^x1 g2^x2)^r = g1^{r x1} g2^{r x2} = c1^x1 c2^x2 = Π̃
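As a sanity check, the construction above can be sketched in a few lines of Python. The group below (a subgroup of prime order q = 1019 inside Z_p^* with p = 2q + 1, generators 4 and 9) is a toy assumption chosen only so the correctness equation Π = ω^r = c1^x1 c2^x2 = Π̃ actually executes; the sizes are far too small to be secure.

```python
import random

# Toy parameters (illustration only -- insecure sizes).
p, q = 2039, 1019          # p = 2q + 1, both prime
g1, g2 = 4, 9              # generators of the order-q subgroup of Z_p^*

def setup():
    # tau = (x1, x2) is the secret verification key, omega = h1 the public one
    x1, x2 = random.randrange(q), random.randrange(q)
    h1 = (pow(g1, x1, p) * pow(g2, x2, p)) % p
    return h1, (x1, x2)

def prove(h1, r):
    # prover knows the witness r for (c1, c2) = (g1^r, g2^r)
    return pow(h1, r, p)                             # Pi = omega^r

def verify(tau, c1, c2):
    x1, x2 = tau
    return (pow(c1, x1, p) * pow(c2, x2, p)) % p     # Pi~ = c1^x1 * c2^x2

h1, tau = setup()
r = random.randrange(q)
c1, c2 = pow(g1, r, p), pow(g2, r, p)                # (c1, c2) in L_DDH
assert prove(h1, r) == verify(tau, c1, c2)           # correctness
```

Note that `verify` never sees the witness r: it recomputes the proof from the secret exponents alone, which is exactly what makes the designated-verifier setting work.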

Theorem 35. The above construction defines a 1-universal DVNIZK for L_DDH.


Proof. We want to prove that if we take any (c1, c2) ∉ L_DDH, then in the
distribution (ω = h1, Π̃ = Verify(τ, (c1, c2))) the value Π̃ is uniformly
distributed (given ω).
Define a map µ(x1, x2) = (ω, Π̃) = (g1^x1 g2^x2, c1^x1 c2^x2); it suffices to
prove that µ is injective. This can easily be done by taking logarithms:

µ′(x1, x2) = log_g1(µ(x1, x2)) = (log_g1(ω), log_g1(Π̃))

For r1 ≠ r2 we have c1 = g1^r1 and c2 = g2^r2 = g1^{α r2}, where α = log_g1(g2).
Then Π̃ = c1^x1 c2^x2 = g1^{r1 x1 + α r2 x2}, so

z1 = x1 + α x2
z2 = r1 x1 + α r2 x2

i.e. µ′(x1, x2) = M · (x1, x2)^T with M = [[1, α], [r1, α r2]].
Since Det(M) = α(r2 − r1) ≠ 0, the map is injective.

• Setup(1^λ):
  – Pick x3, x4, x5, x6 ←$ Zq and define:
    ∗ ω = (h2, h3, s) = (g1^x3 · g2^x4, g1^x5 · g2^x6, s), where s is a seed
      for a CRH H = {Hs}
    ∗ τ = (x3, x4, x5, x6)

• Prove(ω, (c1, c2, l), r):
  – Compute β = Hs(c1, c2, l) ∈ Zq
  – Output Π = h2^r · h3^{rβ}

• Verify(τ, (c1, c2, l)):
  – Compute β = Hs(c1, c2, l) ∈ Zq
  – Output Π̃ = c1^{x3 + βx5} · c2^{x4 + βx6}

Correctness:
Π = h2^r h3^{rβ} = (g1^x3 g2^x4)^r (g1^x5 g2^x6)^{rβ} = c1^x3 c2^x4 c1^{βx5} c2^{βx6}
  = c1^{x3 + βx5} c2^{x4 + βx6} = Π̃
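The 2-universal variant differs only in the extra exponents and the hash β. The sketch below reuses the same toy group as before; SHA-256 reduced mod q is a hypothetical stand-in for the seeded CRH Hs, not the scheme's prescribed hash.

```python
import hashlib
import random

p, q = 2039, 1019          # toy group: p = 2q + 1 (insecure sizes, illustration only)
g1, g2 = 4, 9

def Hs(c1, c2, l):
    # stand-in for the seeded CRH: hash the ciphertext and label into Z_q
    d = hashlib.sha256(f"{c1}|{c2}|{l}".encode()).digest()
    return int.from_bytes(d, "big") % q

def setup():
    x3, x4, x5, x6 = (random.randrange(q) for _ in range(4))
    h2 = (pow(g1, x3, p) * pow(g2, x4, p)) % p
    h3 = (pow(g1, x5, p) * pow(g2, x6, p)) % p
    return (h2, h3), (x3, x4, x5, x6)

def prove(omega, c1, c2, l, r):
    h2, h3 = omega
    beta = Hs(c1, c2, l)
    return (pow(h2, r, p) * pow(h3, r * beta % q, p)) % p   # Pi = h2^r * h3^(r*beta)

def verify(tau, c1, c2, l):
    x3, x4, x5, x6 = tau
    beta = Hs(c1, c2, l)
    return (pow(c1, (x3 + beta * x5) % q, p) *
            pow(c2, (x4 + beta * x6) % q, p)) % p           # Pi~ = c1^(x3+b*x5) c2^(x4+b*x6)

omega, tau = setup()
r = random.randrange(q)
c1, c2 = pow(g1, r, p), pow(g2, r, p)
assert prove(omega, c1, c2, "label", r) == verify(tau, c1, c2, "label")
```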

Theorem 36. The above construction defines a 2-universal DVNIZK for L_DDH.


Proof. Same goal and procedure as before:
- Take any (c1, c2) ∉ L_DDH.
- Fix (c1, c2, l) ≠ (c1′, c2′, l′) s.t. (c1, c2), (c1′, c2′) ∉ L_DDH, which means:

• (c1, c2) = (g1^r1, g2^r2)
• (c1′, c2′) = (g1^{r1′}, g2^{r2′})
  with r1 ≠ r2 and r1′ ≠ r2′
• β = Hs(c1, c2, l)
• β′ = Hs(c1′, c2′, l′)

- Let's define a map

µ′(x3, x4, x5, x6) = (ω, Π̃ = Ver(τ, (c1, c2, l)), Π̃′ = Ver(τ, (c1′, c2′, l′))) =

= ((h2, h3), c1^{x3+βx5} c2^{x4+βx6}, c1′^{x3+β′x5} c2′^{x4+β′x6}) =

= ((g1^x3 g2^x4, g1^x5 g2^x6), g1^{r1 x3 + β r1 x5} g2^{r2 x4 + β r2 x6},
   g1^{r1′ x3 + β′ r1′ x5} g2^{r2′ x4 + β′ r2′ x6})

But I can rewrite g2 as g2 = g1^α since they are generators. So:

= ((g1^{x3+αx4}, g1^{x5+αx6}), g1^{r1 x3 + α r2 x4 + β r1 x5 + αβ r2 x6},
   g1^{r1′ x3 + α r2′ x4 + β′ r1′ x5 + αβ′ r2′ x6})

In matrix form, the exponents are

(z1)   (1    α     0      0     ) (x3)
(z2) = (0    0     1      α     ) (x4)
(z3)   (r1   αr2   βr1    αβr2  ) (x5)
(z4)   (r1′  αr2′  β′r1′  αβ′r2′) (x6)

and Det = α²(r2 − r1)(r2′ − r1′)(β − β′) ≠ 0 iff r2 ≠ r1, r2′ ≠ r1′ and β ≠ β′.
The first two conditions hold by construction; the last holds because we picked
Hs collision resistant: otherwise Hs computed on the different elements
(c1, c2, l) and (c1′, c2′, l′) would have to output the same β (= β′).
So the map is injective.

Lesson 18

Cramer-Shoup scheme construction


From the above two proof systems we can construct a pke scheme, which is
attributed to Cramer and Shoup:

TODO 25: split definition from correctness

• (pk, sk) ←$ KGen(1^λ), where:

  – pk := (h1, h2, h3, s) = (g1^x1 g2^x2, g1^x3 g2^x4, g1^x5 g2^x6, s)
  – sk := (x1, x2, x3, x4, x5, x6)

• Encryption procedure Enc(pk, m):
  – r ←$ Zq
  – (c1, c2, c3) = (g1^r, g2^r, h1^r · m)
  – β = Hs(c1, c2, c3)
  – Output (c1, c2, c3, c4), where c4 = (h2 · h3^β)^r

• Decryption procedure for (c1, c2, c3, c4):
  – Compute β = Hs(c1, c2, c3) and check that c1^{x3+βx5} · c2^{x4+βx6} = c4.
    If not, output ⊥.
  – Else, output m̂ = c3 · c1^{−x1} · c2^{−x2}

18.1 Digital signatures


In this section we explore solutions to the problem of authentication with
the use of an asymmetric key scheme. Some observations are in order:
• In a symmetric setting, a verifier routine could be trivially implemented by
  recomputing the signature using the shared secret key and the message.
  Here, Bob cannot recompute σ as he's missing Alice's secret key (and for
  good reasons too...). Thus, the verifying routine must be defined otherwise.

sk pk

m (m, σ) b
Alice Bob

Figure 18.52: Asymmetric authentication

83
• In a vaguely similar manner to how, in the asymmetric scenario, an attacker
  can encrypt messages by itself because the public key is known to everyone,
  here any attacker can verify any signed message, for free.
Nevertheless, the security of a DS scheme is largely defined in the same way
as in the symmetric scenario, with the uf-cma property:

A Cuf-cma

pk
(pk, sk) ←$ KGen(1λ )
m
m∈M
σ
σ ←$ Sign(sk, m)
(m∗ , σ ∗ )

m∗ ∉ M                  Output Verify(pk, m∗ , σ ∗ )

Figure 18.53: Unforgeable digital signatures
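The game in Figure 18.53 can be made concrete in code. Since no digital-signature scheme has been instantiated yet, the sketch below plugs in HMAC as a hypothetical stand-in signer just to exercise the experiment's bookkeeping: the query set M and the freshness condition m∗ ∉ M.

```python
import hashlib
import hmac
import os

def uf_cma_experiment(adversary):
    """Run the uf-cma game against a keyed signer (HMAC as a stand-in)."""
    key = os.urandom(16)
    queried = set()                      # the set M of signed messages

    def sign_oracle(m):
        queried.add(m)
        return hmac.new(key, m, hashlib.sha256).digest()

    m_star, sig_star = adversary(sign_oracle)
    # the adversary wins iff it forges a valid tag on a *fresh* message
    return m_star not in queried and hmac.compare_digest(
        sig_star, hmac.new(key, m_star, hashlib.sha256).digest())

def trivial_adversary(sign_oracle):
    # reusing a tag for a different message: should lose
    return b"fresh", sign_oracle(b"other")

assert uf_cma_experiment(trivial_adversary) is False
```

In the asymmetric version of the game, `sign_oracle` would use sk while the winning condition would run Verify under pk, but the shape of the experiment is identical.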

18.1.1 Public Key Infrastructure


The problem now is that Alice has a public key, but she wants "the blue check"
over it, so that Bob can be sure that the public key comes from the true Alice.
To obtain the blue check, Alice needs a universally-trusted third party,
called a Certification Authority (CA), which will provide a special signature
to Alice for proving her identity to Bob.
The scheme works as follows:
The CA message pk′ is also called certA, the signature of the CA for Alice.
Now, when Bob wants to check the validity of Alice's public key:
How can Bob recognize a valid certificate from an expired/invalid one?
The infrastructure provides some servers which contain the lists of the cur-
rently valid certificates.
Theorem 37. Signatures are in Minicrypt. 
This is a counterintuitive result, not proven during the lesson, but very
interesting because it implies that we can build valid signatures from hash
functions alone, without relying on public key encryption at all.

Cert. Auth. pkCA , skCA Alice

”Alice”, pkA pkA , skA


pkCA

certA where certA is


σ ←$ Sign(skCA , ”Alice”kpkA )

CA Alice Bob

”Alice”, pkA pkA , skA


pkCA

certA where certA is


σ ←$ Sign(skCA , ”Alice”kpkA )

pkBOB , pkCA
Vrfy(pkCA , ”Alice”‖pkA , certA )

In the next episodes:


• Digital Signatures from TDP*

• Digital Signatures from ID Scheme*


• Digital Signatures from CDH
Where * appears, something called the Random Oracle Model is used in the
proof. Briefly, this model assumes the existence of an ideal hash function
which behaves like a truly random function: it outputs a random y as long as
x was never queried, otherwise it gives back the y already assigned to x.
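The behaviour just described is usually implemented by "lazy sampling": keep a table, and draw a fresh random value only on new queries. A minimal sketch (the class name and range Z_q are illustrative choices):

```python
import random

class RandomOracle:
    """Lazily-sampled random function H: {0,1}* -> Z_q, as in the ROM."""

    def __init__(self, q):
        self.q = q
        self.table = {}

    def query(self, x):
        if x not in self.table:                  # fresh x: sample a random y
            self.table[x] = random.randrange(self.q)
        return self.table[x]                     # repeated x: same y as before

H = RandomOracle(q=1019)
assert H.query("m") == H.query("m")              # consistent on repeated queries
```

Security proofs in the ROM exploit exactly this table: the reduction answers the adversary's hash queries itself and may "program" the entries.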

Lesson 19

TODO 26: Warning: this lesson turned out to be a mess; I tried to reorder
things.

19.1 Bilinear Map


Let's define a Bilinear Group as (G, Gt, q, g, ê) ← BilGen(1^λ) where:
• G, Gt are prime order groups (order q).
• g is a random generator of G.
• (G, ·) is a multiplicative group and Gt is called the target group.
• ê is an efficiently computable (bilinear) map ê : G × G → Gt such that:

  ∀g, h ∈ G, a, b ∈ Zq:  ê(g^a, h^b) = ê(g, h)^{ab} = ê(g^{ab}, h)

  and ê(g, g) ≠ 1 (non-degeneracy).

"I can move the exponents."
Venturi said something here related to Weil pairing over an elliptic curve. I
found this. Interesting but not useful.
Assumption: CDH is HARD in G.
Observation: DDH is EASY in G!
Proof. Given (g, g^a, g^b, g^c), check whether ê(g^a, g^b) = ê(g, g^c): by
moving the exponents, the first term equals ê(g, g)^{ab} and the second
ê(g, g)^c, so the test succeeds iff c = ab, which decides DDH.
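A real pairing (e.g. the Weil or Tate pairing over an elliptic curve) computes ê(g^a, g^b) = ê(g, g)^{ab} without knowing a or b. In a toy group small enough to brute-force discrete logs we can *simulate* that behaviour and run the DDH test above; this is purely pedagogical, since the brute-force step is exactly what is infeasible at real sizes.

```python
import random

p, q, g = 2039, 1019, 4                  # order-q subgroup of Z_p^* (toy sizes)

def dlog(h):
    # brute-force discrete log -- feasible only in this toy group
    x, acc = 0, 1
    while acc != h:
        acc = (acc * g) % p
        x += 1
    return x

def e(u, v):
    # simulated pairing: e(g^a, g^b) = g^(a*b), i.e. "move the exponents"
    return pow(g, dlog(u) * dlog(v) % q, p)

def ddh_decide(ga, gb, gc):
    # DDH is easy given a pairing: accept iff e(g^a, g^b) == e(g, g^c)
    return e(ga, gb) == e(g, gc)

a, b = random.randrange(1, q), random.randrange(1, q)
assert ddh_decide(pow(g, a, p), pow(g, b, p), pow(g, a * b % q, p))
assert not ddh_decide(pow(g, a, p), pow(g, b, p), pow(g, (a * b + 1) % q, p))
```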
Now KGen(1^λ) will:

• Generate params = (G, Gt, g, q, ê) ← BilGen(1^λ)

• Pick a ←$ Zq and set g1 = g^a

• Pick g2, u0, u1, ..., uk ←$ G (writing g2 = g^b for some unknown b)

• Then output:
  – Pk = (params, g1, g2, u0, ..., uk)
  – Sk = g2^a = g^{ab}
Sign(Sk, m):
• Write the message m of length k bits as m = (m[1], ..., m[k])
• Define α(m) = u0 · ∏_{i=1}^{k} u_i^{m[i]}
• Pick r ←$ Zq and output the signature σ = (Sk · α(m)^r, g^r) = (σ1, σ2)

Vrf(Pk, m, (σ1, σ2)):
• Check ê(g, σ1) = ê(g1, g2) · ê(σ2, α(m))

Correctness: "Just move the exponents." By bilinearity I can separate the
pairing of a product:

ê(g, σ1) = ê(g, g2^a · α(m)^r) = ê(g, g2^a) · ê(g, α(m)^r)
         = ê(g^a, g2) · ê(g^r, α(m)) = ê(g1, g2) · ê(σ2, α(m))

We can say that we are moving the exponents from the "private domain" to the
"public domain".
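The sign/verify routines above can be exercised with the same simulated pairing as before (brute-force discrete logs in a toy group; the sizes and the choice k = 8 are illustrative assumptions, nothing here is secure).

```python
import random

p, q, g = 2039, 1019, 4                  # toy order-q subgroup (illustration only)
k = 8                                    # message length in bits

def dlog(h):
    # brute-force discrete log -- only viable in this toy group
    x, acc = 0, 1
    while acc != h:
        acc, x = (acc * g) % p, x + 1
    return x

def e(u, v):
    # simulated pairing e(g^a, g^b) = g^(a*b)
    return pow(g, dlog(u) * dlog(v) % q, p)

def kgen():
    a = random.randrange(1, q)
    g1, g2 = pow(g, a, p), pow(g, random.randrange(1, q), p)
    u = [pow(g, random.randrange(1, q), p) for _ in range(k + 1)]
    return (g1, g2, u), pow(g2, a, p)    # Pk, Sk = g2^a

def alpha(u, m):
    # alpha(m) = u0 * prod_{i=1..k} u_i^{m[i]}
    out = u[0]
    for i in range(1, k + 1):
        if m[i - 1]:
            out = (out * u[i]) % p
    return out

def sign(pk, sk, m):
    u = pk[2]
    r = random.randrange(1, q)
    return (sk * pow(alpha(u, m), r, p)) % p, pow(g, r, p)   # (sigma1, sigma2)

def vrfy(pk, m, sig):
    g1, g2, u = pk
    s1, s2 = sig
    return e(g, s1) == (e(g1, g2) * e(s2, alpha(u, m))) % p

pk, sk = kgen()
m = [1, 0, 1, 1, 0, 0, 1, 0]
assert vrfy(pk, m, sign(pk, sk, m))
```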

19.2 Waters signatures


Theorem 38. The Waters signature scheme is uf-cma secure.
Proof. The trick is to “program the ’u’s ” (Venturi)

TODO 27: Sequence is incomplete/incorrect, have to study it more...

TODO 28: The following explanation is roundabout, will rectify later

The following describes how the Waters challenger constructs the u string.
The main idea is, given k as the message length, to choose the exponents
defining each u_i, for i from 0 up to k, such that:

α(m) = g2^{β(m)} · g^{γ(m)},
β(m) = x0 + Σ_{i=1}^{k} m_i x_i,
γ(m) = y0 + Σ_{i=1}^{k} m_i y_i

where x0 ←$ {−kl, ..., 0}, x_1, ..., x_k ←$ {0, ..., l}, y_0, ..., y_k ←$ Zq.

In particular l = 2·qs, where qs is the number of sign queries made by the
adversary.
Therefore, let u_i = g2^{x_i} g^{y_i} ∀i ∈ [0, k]. Then:

α(m) = g2^{x0} g^{y0} ∏_{i=1}^{k} (g2^{x_i} g^{y_i})^{m_i}
     = g2^{x0 + Σ_{i=1}^{k} m_i x_i} · g^{y0 + Σ_{i=1}^{k} m_i y_i}
     = g2^{β(m)} g^{γ(m)}

A                      C_wds                              C_cdh

                                 ("params", g^a, g^b)     "params" = (G, Gt, g, q, ê)
                                                          ←$ BilGen(1^λ), a, b ←$ Zq
       ("params", g^a, g^b, u)
                       Construct u string
m ∈ M         m →
              ← σ      σ = Sign()
(m∗ , σ ∗ ), m∗ ∉ M

Figure 19.54: Reducing Waters' scheme to cdh

TODO 29: Partitions, double ifs... I understood nothing here

Step 1: σ = (σ1, σ2) = (g2^a · α(m)^r̄, g^r̄), for r̄ = r − a·β(m)^{−1} with
r ←$ Zq:

σ1 = g2^a · α(m)^r̄
   = g2^a · α(m)^{r − aβ(m)^{−1}}
   = g2^a · (g2^{β(m)} g^{γ(m)})^{r − aβ(m)^{−1}}
   = g2^a · g2^{β(m)r − a} · g^{γ(m)r − γ(m)aβ(m)^{−1}}
   = g2^{β(m)r} · g^{γ(m)r} · g^{−aγ(m)β(m)^{−1}}

Lesson 20

20.1 Random Oracle Model (ROM)


The Random Oracle Model treats a hash function H as a truly random function.

20.2 ID Scheme
20.3 Honest Verifier Zero Knowledge (HVZK)
and Special Soundness (SS)

Lesson 21

21.1 Full domain hashing

Lesson 22

22.1 Examples of ID schemes

Lesson 23

23.1 Bilinear DDH assumption

Lesson 24

24.1 CCA proof for ???

