Shannon-Fano Coding


Communications Systems II - Lec. 7
Fourth stage, 2020-2021

7. Shannon–Fano Coding

Shannon–Fano coding, named after Claude Shannon and Robert Fano, is a name given to two different but related techniques for constructing a prefix code based on a set of symbols and their probabilities (estimated or measured). In this lecture we focus on the Shannon method.

7.1 Shannon Method

1. Arrange the symbol probabilities in descending order.
2. Find the number of bits for coding the i-th symbol, b_i, where log2(1/p_i) ≤ b_i < 1 + log2(1/p_i).
3. Evaluate the accumulated probability of the symbols above symbol i, F_i = Σ_{k=1}^{i-1} P_k.
4. Compute the codeword (designation bits) of the i-th symbol, C_i = (F_i)_2 to b_i bits, where 2 stands for binary and "to b_i bits" means finding the binary equivalent of F_i up to b_i bits.
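The four steps above can be sketched in Python (a minimal sketch; the function name shannon_code is my own):

```python
import math

def shannon_code(probs):
    """Shannon coding: return a binary codeword for each probability."""
    # Step 1: arrange the probabilities in descending order.
    probs = sorted(probs, reverse=True)
    codes = []
    F = 0.0  # accumulated probability of the symbols above symbol i
    for p in probs:
        # Step 2: b_i is the integer in the range log2(1/p) <= b_i < 1 + log2(1/p).
        b = math.ceil(math.log2(1 / p))
        # Steps 3-4: expand F to b binary digits by repeated doubling.
        bits, frac = "", F
        for _ in range(b):
            frac *= 2
            if frac >= 1:      # product >= 1: emit 1 and reduce by 1
                bits += "1"
                frac -= 1
            else:              # product < 1: emit 0
                bits += "0"
        codes.append(bits)
        F += p                 # accumulate for the next symbol
    return codes

print(shannon_code([0.3, 0.2, 0.15, 0.12, 0.1, 0.08, 0.05]))
# → ['00', '010', '100', '1010', '1100', '1101', '11110']
```

Running it on the probabilities of Example 6.4 reproduces the codewords derived step by step below.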

Example 7.1

Design a source code using the Shannon method for the same communication source and data as in Example 6.4 (symbol probabilities 0.3, 0.2, 0.15, 0.12, 0.1, 0.08, 0.05).

To find b_i, we find the integer lying in the range log2(1/p_i) ≤ b_i < 1 + log2(1/p_i):

- For i = 1: log2(1/0.3) ≤ b1 < 1 + log2(1/0.3), 1.74 ≤ b1 < 2.74, then b1 = 2.
- For i = 2: log2(1/0.2) ≤ b2 < 1 + log2(1/0.2), 2.32 ≤ b2 < 3.32, then b2 = 3.
- For i = 3: log2(1/0.15) ≤ b3 < 1 + log2(1/0.15), 2.74 ≤ b3 < 3.74, then b3 = 3.
- For i = 4: log2(1/0.12) ≤ b4 < 1 + log2(1/0.12), 3.06 ≤ b4 < 4.06, then b4 = 4.
- For i = 5: log2(1/0.1) ≤ b5 < 1 + log2(1/0.1), 3.32 ≤ b5 < 4.32, then b5 = 4.
- For i = 6: log2(1/0.08) ≤ b6 < 1 + log2(1/0.08), 3.64 ≤ b6 < 4.64, then b6 = 4.
- For i = 7: log2(1/0.05) ≤ b7 < 1 + log2(1/0.05), 4.32 ≤ b7 < 5.32, then b7 = 5.
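These bit lengths can be checked quickly (a sketch using Python's math module; b_i is the smallest integer in the range, i.e. the ceiling of log2(1/p_i)):

```python
import math

probs = [0.3, 0.2, 0.15, 0.12, 0.1, 0.08, 0.05]
for i, p in enumerate(probs, start=1):
    lo = math.log2(1 / p)   # lower bound of the range for b_i
    b = math.ceil(lo)       # smallest integer with lo <= b < lo + 1
    print(f"i={i}: {lo:.2f} <= b{i} < {lo + 1:.2f}, b{i} = {b}")
    # e.g. for i=1: "i=1: 1.74 <= b1 < 2.74, b1 = 2"
```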

To find F_i, we start with ZERO for F1; then F2 = the probability of the first symbol, F3 = the accumulated probability of the first and second symbols, and so forth.

- F1 = 0
- F2 = 0.3
- F3 = 0.5
- F4 = 0.65
- F5 = 0.77
- F6 = 0.87
- F7 = 0.95
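Equivalently, F_i is a running sum of the probabilities shifted down by one position (a sketch using itertools.accumulate):

```python
from itertools import accumulate

probs = [0.3, 0.2, 0.15, 0.12, 0.1, 0.08, 0.05]
# F_i = sum of the probabilities above symbol i; F_1 = 0,
# so prepend 0 and drop the final total.
F = [0.0] + list(accumulate(probs))[:-1]
print([round(f, 2) for f in F])
# → [0.0, 0.3, 0.5, 0.65, 0.77, 0.87, 0.95]
```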

To compute C_i, the number of bits is fixed by b_i: we multiply F_i by 2, take the result, and carry its fractional part into the next multiplication, repeating b_i times. If a product is greater than or equal to 1, we reduce it by 1 before the next multiplication. Each multiplication yields one code bit: we write (0) when the product is less than ONE and (1) for cases ≥ 1.
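This repeated-doubling procedure can be sketched as a small helper (frac_to_bin is a hypothetical name):

```python
def frac_to_bin(f, bits):
    """Expand a fraction 0 <= f < 1 into the given number of binary digits."""
    out = ""
    for _ in range(bits):
        f *= 2
        if f >= 1:          # product >= 1: the bit is 1, reduce by 1
            out += "1"
            f -= 1
        else:               # product < 1: the bit is 0
            out += "0"
    return out

print(frac_to_bin(0.3, 3))   # F2 with b2 = 3 → '010'
print(frac_to_bin(0.77, 4))  # F5 with b5 = 4 → '1100'
```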

- C1, F1 = 0
  0 * 2 = 0 → 0
  0 * 2 = 0 → 0          Then C1 = 00

- C2, F2 = 0.3
  0.3 * 2 = 0.6 → 0
  0.6 * 2 = 1.2 → 1
  0.2 * 2 = 0.4 → 0      C2 = 010

- C3, F3 = 0.5
  0.5 * 2 = 1.0 → 1
  0 * 2 = 0 → 0
  0 * 2 = 0 → 0          C3 = 100

- C4, F4 = 0.65
  0.65 * 2 = 1.3 → 1
  0.3 * 2 = 0.6 → 0
  0.6 * 2 = 1.2 → 1
  0.2 * 2 = 0.4 → 0      C4 = 1010

- C5, F5 = 0.77
  0.77 * 2 = 1.54 → 1
  0.54 * 2 = 1.08 → 1
  0.08 * 2 = 0.16 → 0
  0.16 * 2 = 0.32 → 0    C5 = 1100

- C6, F6 = 0.87
  0.87 * 2 = 1.74 → 1
  0.74 * 2 = 1.48 → 1
  0.48 * 2 = 0.96 → 0
  0.96 * 2 = 1.92 → 1    C6 = 1101

- C7, F7 = 0.95
  0.95 * 2 = 1.9 → 1
  0.9 * 2 = 1.8 → 1
  0.8 * 2 = 1.6 → 1
  0.6 * 2 = 1.2 → 1
  0.2 * 2 = 0.4 → 0      C7 = 11110

• You can sketch a table that lists, for each symbol, its probability together with the values b_i, F_i, and C_i.

To evaluate the efficiency of this code, the entropy H is the same as that found in Example 6.4, which is 2.602 bits/symbol.

N = Σ_{i=1}^{Q} P_i b_i, where Q is the number of symbols (in this example Q = 7).


N = 0.3*2 + 0.2*3 + 0.15*3 + 0.12*4 + 0.1*4 + 0.08*4 + 0.05*5 = 3.1 bits/symbol.

Then, η = (H/N) * 100 = (2.602/3.1) * 100 = 83.93 %.
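These figures can be checked numerically (a minimal sketch; probs and bits are taken from the example above):

```python
import math

probs = [0.3, 0.2, 0.15, 0.12, 0.1, 0.08, 0.05]
bits  = [2, 3, 3, 4, 4, 4, 5]

H = sum(p * math.log2(1 / p) for p in probs)   # source entropy
N = sum(p * b for p, b in zip(probs, bits))    # average codeword length
eta = H / N * 100                              # coding efficiency in %

print(f"H = {H:.3f} bits/symbol")
print(f"N = {N:.1f} bits/symbol")
print(f"efficiency = {eta:.2f} %")
```

At full precision this gives H ≈ 2.603 bits/symbol, N = 3.1 bits/symbol, and η ≈ 84 %, agreeing with the figures above up to the rounding of H.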

Dr. Hasanain A. Hasan Department of Electrical Engineering, University of Misan
