
Information Theory

Dr. Eng. Sattar B. Sadkhan
MSc. Sarah Abd UL_Rizah
OUTLINE
• Definitions of:
  1. Computational Security
  2. Provable Security
  3. Unconditional Security
• Shannon Theory
• Elementary Probability Theory
• Discrete Random Variables
• Conditional Probability
• Joint Probability
• Bayes' Theorem
• Perfect Secrecy
• Entropy
• Huffman Encoding
• Spurious Keys and Unicity Distance
Shannon Theory

We discuss several of Shannon's ideas. First, however, we consider some of the various approaches to evaluating the security of a cryptosystem.
1- Computational Security
This measure relates to the computational effort required to break a cryptosystem. We might define a cryptosystem to be computationally secure if the best algorithm for breaking it requires at least N operations, where N is some specified, very large number, so that breaking it demands an infeasible amount of computer time.
2- Provable Security
Another approach is to provide evidence of computational security by reducing the security of the cryptosystem to some well-studied problem that is thought to be difficult.
3- Unconditional Security
A cryptosystem is defined to be unconditionally secure if it cannot be broken, even with infinite computational resources.
Fig. 1: Schematic of a general secrecy system.
Shannon's Characteristics of "Good" Ciphers
• The amount of secrecy needed should determine the amount of labour appropriate for encryption and decryption.
• The set of keys and the enciphering algorithm should be free from complexity.
• The implementation of the system should be as simple as possible.
• Errors in ciphering should not propagate and corrupt further information in the message.
• The size of the enciphered text should be no larger than the text of the original message.
Elementary Probability Theory
A discrete random variable, say X, consists of a finite set X and a probability distribution defined on X. The probability that the random variable X takes on the value x is denoted Pr[X = x]; sometimes we abbreviate this to Pr[x] if the random variable X is fixed. It must be the case that Pr[x] ≥ 0 for all x ∈ X, and

    Σ_{x ∈ X} Pr[x] = 1.
Example
We could consider a "coin toss" to be a random variable defined on the set {heads, tails}. The associated probability distribution would be
Pr[heads] = Pr[tails] = 1/2.

(Dr. Eng. Sattar B. Sadkhan; supervised by Fadhil Mohammad Salman, PhD)
Example
Suppose we consider a random throw of a pair of dice. This can be modeled by a random variable Z defined on the set
Z = {1,2,3,4,5,6} × {1,2,3,4,5,6}, where Pr[(i, j)] = 1/36 for all (i, j) ∈ Z.
Let's consider the sum of the two dice. Each possible sum defines an event, and the probabilities of these events can be computed using the equation

    Pr[x ∈ E] = Σ_{x ∈ E} Pr[x].
For example, suppose that we want to compute the probability that the sum is 4. This corresponds to the event

S4 = {(1,3), (2,2), (3,1)},

and therefore Pr[S4] = 3/36 = 1/12.
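This computation can be verified by enumerating the sample space; a minimal Python sketch:

```python
from fractions import Fraction
from itertools import product

# Sample space for a pair of fair dice: 36 equally likely outcomes.
omega = list(product(range(1, 7), repeat=2))

def pr_sum(s):
    """Probability that the two dice sum to s."""
    favorable = [(i, j) for (i, j) in omega if i + j == s]
    return Fraction(len(favorable), len(omega))

print(pr_sum(4))  # the event S4 = {(1,3),(2,2),(3,1)} -> 1/12
```

Summing `pr_sum(s)` over all possible sums 2..12 gives exactly 1, as required of a probability distribution.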
DEFINITION (Joint and Conditional Probabilities)
Suppose X and Y are random variables defined on finite sets X and Y, respectively.

The joint probability Pr[x, y] is the probability that X takes on the value x and Y takes on the value y.
The conditional probability Pr[x|y] is the probability that X takes on the value x given that Y takes on the value y.
We say that X and Y are independent random variables if either of the following equivalent conditions holds for all x, y:
Pr[x, y] = Pr[x] Pr[y]
Pr[x|y] = Pr[x]
Joint probability can be related to conditional probability
by the formula

Pr[x, y] = Pr[x|y] Pr[y].
Interchanging x and y, we have
Pr[x, y] = Pr[y|x] Pr[x].
From these two expressions, we immediately obtain the following result, which is known as Bayes' theorem.
THEOREM (Bayes' Theorem)
If Pr[y] > 0, then

    Pr[x|y] = Pr[x] Pr[y|x] / Pr[y].
• COROLLARY
X and Y are independent random variables if and only if Pr[x|y] = Pr[x] for all x, y.
It is now possible to compute the conditional probability Pr[x|y] using Bayes' theorem. Expanding Pr[y] over all values of X, the following formula is obtained:

    Pr[x|y] = Pr[x] Pr[y|x] / Σ_{x' ∈ X} Pr[x'] Pr[y|x'].
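Bayes' theorem translates directly into code. The sketch below computes a posterior distribution from a prior and a table of likelihoods; the two-value prior and likelihood are hypothetical illustration data, not taken from the slides:

```python
from fractions import Fraction

def bayes(prior, likelihood, y):
    """Posterior Pr[x|y] = Pr[x] * Pr[y|x] / Pr[y], where
    Pr[y] = sum over x of Pr[x] * Pr[y|x] (requires Pr[y] > 0)."""
    pr_y = sum(prior[x] * likelihood[x][y] for x in prior)
    if pr_y == 0:
        raise ValueError("Pr[y] must be positive")
    return {x: prior[x] * likelihood[x][y] / pr_y for x in prior}

# Hypothetical example: prior over {a, b}, likelihoods Pr[y|x].
prior = {"a": Fraction(1, 4), "b": Fraction(3, 4)}
likelihood = {"a": {0: Fraction(1, 2), 1: Fraction(1, 2)},
              "b": {0: Fraction(1, 3), 1: Fraction(2, 3)}}
post = bayes(prior, likelihood, 1)  # {'a': 1/5, 'b': 4/5}
```

The posterior always sums to 1, because the denominator Pr[y] normalizes the numerators.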
Example
Let P = {a, b} with Pr[a] = 1/4, Pr[b] = 3/4. Let K = {K1, K2, K3} with Pr[K1] = 1/2, Pr[K2] = 1/4, Pr[K3] = 1/4. Let C = {1, 2, 3, 4}, and suppose the encryption functions are defined by the following encryption matrix:

Key | a | b
----|---|---
K1  | 1 | 2
K2  | 2 | 3
K3  | 3 | 4

This cryptosystem can be represented as a five-tuple (P, C, K, E, D), where
P: finite set of possible plaintexts
C: finite set of possible ciphertexts
K: key space, a finite set of possible keys
E: set of encryption functions ek: P → C
D: set of decryption functions dk: C → P, such that dk(ek(m)) = m for every plaintext m and key k.
• We now compute the probability distribution PC. We obtain
Pr[y = 1] = 1/8, Pr[y = 2] = 7/16, Pr[y = 3] = 1/4, Pr[y = 4] = 3/16.
Now we can compute the conditional probability distributions on the plaintext, given that a certain ciphertext has been observed. We have:
Pr[a|1] = 1, Pr[b|1] = 0,
Pr[a|2] = 1/7, Pr[b|2] = 6/7,
Pr[a|3] = 1/4, Pr[b|3] = 3/4,
Pr[a|4] = 0, Pr[b|4] = 1.
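These distributions can be verified mechanically from the encryption matrix; a minimal Python sketch:

```python
from fractions import Fraction

# The example cryptosystem: plaintext and key distributions,
# plus the encryption matrix e_k(x).
pr_p = {"a": Fraction(1, 4), "b": Fraction(3, 4)}
pr_k = {"K1": Fraction(1, 2), "K2": Fraction(1, 4), "K3": Fraction(1, 4)}
enc = {"K1": {"a": 1, "b": 2},
       "K2": {"a": 2, "b": 3},
       "K3": {"a": 3, "b": 4}}

# Pr[y] = sum over pairs (k, x) with e_k(x) = y of Pr[k] * Pr[x].
pr_c = {}
for k, table in enc.items():
    for x, y in table.items():
        pr_c[y] = pr_c.get(y, Fraction(0)) + pr_k[k] * pr_p[x]

def posterior(x, y):
    """Pr[x|y] via Bayes: Pr[x] * Pr[y|x] / Pr[y], where
    Pr[y|x] = sum of Pr[k] over keys k with e_k(x) = y."""
    pr_y_given_x = sum(pr_k[k] for k in enc if enc[k][x] == y)
    return pr_p[x] * pr_y_given_x / pr_c[y]

print(pr_c[2])            # 7/16
print(posterior("a", 2))  # 1/7
# Note: posterior("a", 1) = 1, which differs from Pr[a] = 1/4,
# so this cryptosystem does not have perfect secrecy.
```

Because every probability is an exact `Fraction`, the results match the hand computation exactly rather than approximately.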
Definition (Perfect Secrecy)
A cryptosystem has perfect secrecy if
Pr[x|y] = Pr[x] for all x ∈ P, y ∈ C.

That is, the probability that the attacker finds the plaintext after observing the ciphertext is the same as the probability of guessing the plaintext before observing the ciphertext.
Entropy

• The entropy of a string provides the minimum average number of bits required to encode a random source.
• DEFINITION: Suppose X is a random variable that takes on a finite set of values according to a probability distribution p(X). If the possible values of X are x_i, 1 ≤ i ≤ n, with probabilities p_i = Pr[X = x_i], then the entropy of this probability distribution is defined to be the quantity

    H(X) = -Σ_{i=1}^{n} p_i log2 p_i.
Entropy Example

• For the example cryptosystem above, we compute as follows:
H(P) = -(1/4) log2(1/4) - (3/4) log2(3/4) ≈ 0.81.
• Similar calculations yield H(K) = 1.5 and H(C) ≈ 1.85.
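These entropy values can be checked with a few lines of Python, using the distributions computed for the example cryptosystem:

```python
from math import log2

def entropy(dist):
    """Shannon entropy H = -sum p * log2 p over the distribution
    (terms with p = 0 contribute 0 by convention)."""
    return -sum(p * log2(p) for p in dist if p > 0)

h_p = entropy([1/4, 3/4])               # H(P) ≈ 0.81
h_k = entropy([1/2, 1/4, 1/4])          # H(K) = 1.5
h_c = entropy([1/8, 7/16, 1/4, 3/16])   # H(C) ≈ 1.85
```

Note that H(K) comes out exactly 1.5 because every key probability is a power of 2, while H(P) and H(C) are irrational and only quoted to two decimals.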


Spurious Keys and Unicity Distance
• The attacker guesses a key from the ciphertext and tries to decrypt, checking whether the plaintext obtained is "meaningful"; if not, he rules out the key. Due to the redundancy of natural languages, however, more than one key may pass this test. Those keys that pass the test but are incorrect are called spurious keys.
• Let (P, C, K, E, D) be a cryptosystem. Then
H(K|C) = H(K) + H(P) - H(C).
Example
• We have already computed H(P) ≈ 0.81, H(K) = 1.5, and H(C) ≈ 1.85. The previous theorem tells us that H(K|C) ≈ 0.81 + 1.5 - 1.85 = 0.46.
• Definition
• The unicity distance of a cryptosystem is defined to be the value of n, denoted n0, at which the expected number of spurious keys becomes zero; i.e., the average amount of ciphertext required for an opponent to be able to compute the unique key, given enough computing time.

Example
consider the Substitution Cipher .In this cryptosystem ,
|K|=26!,|P|=26, If we take RL = 0.75, then we get an estimate for the
unicity distance of n0≈88.4/(0.75*4.7)≈25
This suggests that, given a ciphertext string of length at least 25,a
unique decryption is possible.
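The estimate above can be reproduced numerically; a minimal sketch:

```python
from math import factorial, log2

# Unicity distance estimate n0 ≈ log2|K| / (RL * log2|P|)
# for the Substitution Cipher: |K| = 26!, |P| = 26, redundancy RL = 0.75.
key_bits = log2(factorial(26))      # log2(26!) ≈ 88.4
rl = 0.75
n0 = key_bits / (rl * log2(26))
print(round(key_bits, 1), round(n0))
```

The same formula applies to other ciphers by substituting the appropriate key-space size and plaintext alphabet.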
The End
