Lecture 03
Lecture Outline
• Last lecture:
– classical ciphers
• This lecture:
– perfect secrecy
– entropy
– language redundancy
– computational security
– unconditional security
Probability Theory
– Σ_{x∈X} Pr[X = x] = 1
– Let D1 denote the result of throwing the first die, D2 the result of
throwing the second die, and S their sum
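The dice example can be tabulated directly; the sketch below (plain Python, the name `p_sum` is our own) builds the distribution of S = D1 + D2 from the uniform joint distribution of the two dice.

```python
from fractions import Fraction
from collections import defaultdict

# Each outcome (d1, d2) of two fair dice has probability 1/36; accumulate
# the probability of each possible sum S = D1 + D2.
p_sum = defaultdict(Fraction)
for d1 in range(1, 7):
    for d2 in range(1, 7):
        p_sum[d1 + d2] += Fraction(1, 36)

# The probabilities sum to 1, as required: Σ_{x} Pr[S = x] = 1.
assert sum(p_sum.values()) == 1
print(p_sum[7])  # Pr[S = 7] = 6/36 = 1/6
```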
Pr[X = x, Y = y] = Pr[X = x | Y = y] · Pr[Y = y]    (1)
and
Pr[X = x, Y = y] = Pr[Y = y | X = x] · Pr[X = x]    (2)
• From these two expressions we obtain Bayes’ Theorem:
– if Pr [Y = y] > 0, then
Pr[X = x | Y = y] = (Pr[X = x] · Pr[Y = y | X = x]) / Pr[Y = y]    (3)
• How is it useful to us?
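One concrete use: given an observed sum, Bayes' Theorem recovers the posterior distribution of the first die. A sketch for the dice example (function names are our own):

```python
from fractions import Fraction

# Pr[D1 = x] for a fair die, and Pr[S = s | D1 = x] from the second throw.
def pr_d1(x):
    return Fraction(1, 6) if 1 <= x <= 6 else Fraction(0)

def pr_s_given_d1(s, x):
    return Fraction(1, 6) if 1 <= s - x <= 6 else Fraction(0)

# Pr[S = s] by the law of total probability.
def pr_s(s):
    return sum(pr_d1(x) * pr_s_given_d1(s, x) for x in range(1, 7))

# Bayes: Pr[D1 = x | S = s] = Pr[D1 = x] * Pr[S = s | D1 = x] / Pr[S = s]
posterior = pr_d1(6) * pr_s_given_d1(8, 6) / pr_s(8)
print(posterior)  # 1/5: of the five pairs summing to 8, exactly one has D1 = 6
```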
Probability Theory
• Random variables X and Y are independent if
Pr[X = x | Y = y] = Pr[X = x]
for all x ∈ X and y ∈ Y
Perfect Secrecy
• An encryption scheme is perfectly secret if, for every m ∈ M and every c ∈ C with Pr[C = c] > 0:
Pr[M = m | C = c] = Pr[M = m]
– by Bayes' Theorem, this is equivalent to
Pr[C = c | M = m] = Pr[C = c]
– This means that the probability distribution of the ciphertext does not
depend on the plaintext
– In other words, an encryption scheme (Gen, Enc, Dec) is perfectly
secret if and only if for every distribution over M, every m1, m2 ∈ M,
and every c ∈ C:
Pr[C = c | M = m1] = Pr[C = c | M = m2]
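This condition can be checked exhaustively for a small scheme. A sketch for a one-letter shift cipher with a uniform key (our own toy example, not a scheme from the lecture):

```python
from fractions import Fraction

# Toy scheme: messages and keys are integers mod 26; Enc(k, m) = (m + k) % 26,
# with the key chosen uniformly at random.
def pr_c_given_m(c, m):
    # Pr[C = c | M = m] = Pr over uniform k of (m + k) % 26 == c
    return sum(Fraction(1, 26) for k in range(26) if (m + k) % 26 == c)

# The condition Pr[C = c | M = m1] = Pr[C = c | M = m2] holds for every
# c, m1, m2, so this toy scheme is perfectly secret (for one letter).
assert all(pr_c_given_m(c, m1) == pr_c_given_m(c, m2)
           for c in range(26) for m1 in range(26) for m2 in range(26))
```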
Perfect Indistinguishability
• Experiment PrivK^eav_{A,Π}
• An encryption scheme Π is perfectly indistinguishable if, for every adversary A:
Pr[PrivK^eav_{A,Π} = 1] = 1/2
– notice that it must work for every A
• This definition is equivalent to our original definition of perfect secrecy
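A Monte Carlo sketch of the experiment for a one-bit XOR scheme (the scheme, trial count, and adversary strategy are our own illustrative choices):

```python
import random

# Eavesdropping experiment: A picks m0 = 0 and m1 = 1; a uniform bit b is
# chosen; m_b is encrypted with a fresh uniform one-bit key; A guesses b
# from the ciphertext alone.
def experiment(trials=100_000):
    wins = 0
    for _ in range(trials):
        b = random.randrange(2)
        m = [0, 1][b]                # the challenge message m_b
        c = m ^ random.randrange(2)  # encrypt with a uniform one-bit key
        guess = c                    # one (arbitrary) strategy: output c
        wins += (guess == b)
    return wins / trials

print(experiment())  # ≈ 0.5 for this strategy; perfect secrecy implies no
                     # strategy does better
```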
One-Time Pad
• Proof
– for any message m and ciphertext c, exactly one key produces c from m,
namely k = m ⊕ c, so
Pr[C = c | M = m] = Pr[K = m ⊕ c] = 1/2^n
– this works for all distributions and all m, so for all distributions over
{0, 1}^n, all m1, m2 ∈ {0, 1}^n, and all c ∈ C:
Pr[C = c | M = m1] = Pr[C = c | M = m2] = 1/2^n
– by definition of perfect secrecy, this encryption is perfectly secret
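The scheme itself fits in a few lines. A minimal byte-level sketch (function names are our own); the key must be uniformly random, as long as the message, and never reused:

```python
import secrets

def otp_encrypt(key: bytes, msg: bytes) -> bytes:
    # XOR each message byte with the corresponding key byte.
    assert len(key) == len(msg)
    return bytes(k ^ m for k, m in zip(key, msg))

otp_decrypt = otp_encrypt  # XOR is its own inverse

msg = b"attack at dawn"
key = secrets.token_bytes(len(msg))  # fresh uniform key of the same length
ct = otp_encrypt(key, msg)
assert otp_decrypt(key, ct) == msg
```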
More on One-Time Pad
• One-time pad can be defined on units larger than bits (e.g., letters)
– Since the key must be as long as the message, what if we use text from a book as our key?
• Examples
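A letter-based variant can be sketched as addition mod 26 (names are our own): encryption adds the key letter, decryption subtracts it.

```python
import secrets
import string

A = string.ascii_uppercase  # the 26-letter alphabet

def letter_otp(msg: str, key: str, sign: int) -> str:
    # sign = +1 encrypts (add key letters mod 26), sign = -1 decrypts.
    assert len(key) == len(msg)
    return "".join(A[(A.index(m) + sign * A.index(k)) % 26]
                   for m, k in zip(msg, key))

msg = "HELLO"
key = "".join(secrets.choice(A) for _ in msg)  # uniform key, not book text
ct = letter_otp(msg, key, +1)
assert letter_otp(ct, key, -1) == msg
```

Note that text from a book is not uniformly distributed over the alphabet, so the perfect secrecy proof no longer applies to that choice of key.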
Entropy
• If there are n messages and they are all equally probable, then
H(X) = − Σ_{i=1}^{n} (1/n) · log2(1/n) = − log2(1/n) = log2 n
• Entropy is commonly used in security to measure information leakage
H(X|Y) = − Σ_{y∈Y} Σ_{x∈X} Pr[Y = y] · Pr[X = x | Y = y] · log2 Pr[X = x | Y = y]
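Both formulas can be evaluated directly for the dice example from earlier; a sketch (the names `h_s` and `h_d1_given_s` are our own) computes H(S) and the conditional entropy H(D1 | S):

```python
from math import log2
from collections import defaultdict

# Joint distribution Pr[D1 = x, S = s] for two fair dice.
joint = defaultdict(float)
for d1 in range(1, 7):
    for d2 in range(1, 7):
        joint[(d1, d1 + d2)] += 1 / 36

# Marginal Pr[S = s].
p_s = defaultdict(float)
for (x, s), p in joint.items():
    p_s[s] += p

# H(S) = -Σ Pr[S = s] log2 Pr[S = s]
h_s = -sum(p * log2(p) for p in p_s.values())
# H(D1 | S) = -Σ Pr[D1 = x, S = s] log2 Pr[D1 = x | S = s]
h_d1_given_s = -sum(p * log2(p / p_s[s]) for (x, s), p in joint.items())
print(round(h_s, 3), round(h_d1_given_s, 3))
```

Learning the sum leaks information about D1: the conditional entropy H(D1 | S) is well below the prior log2 6 ≈ 2.585 bits.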
Language Redundancy
• In an alphabet of l letters, each letter can encode at most log2 l bits (log2 26 ≈ 4.7 bits for English)
• Now compare that rate with the amount of information each English letter
actually encodes
H_L = lim_{n→∞} H(M_n) / n
– it measures the amount of entropy per letter and represents the average
number of bits of information per character
• Redundancy of English
R_L = 1 − H_L / log2 26 = 1 − 1.25/4.7 ≈ 0.75
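The arithmetic checks out directly, using the estimate H_L ≈ 1.25 bits/letter from above:

```python
from math import log2

# Redundancy of English: one minus the ratio of the actual entropy per
# letter (about 1.25 bits) to the maximum possible, log2(26) ≈ 4.7 bits.
r_l = 1 - 1.25 / log2(26)
print(round(r_l, 2))  # ≈ 0.73, i.e. roughly three quarters of English
                      # text is redundant
```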
Summary
• Next time:
– private-key encryption
– computational security