Entropy
Jayro Santiago-Paz
Let $X$ be a random variable that takes values in the set $\{x_1, x_2, \ldots, x_M\}$, $p_i := P(X = x_i)$ the probability of occurrence, and $M$ the cardinality of the set of values. The Shannon entropy of $X$ is defined as
$$ H(X) = -\sum_{i=1}^{M} p_i \log p_i. \qquad (1) $$
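As a quick numerical check of (1), here is a minimal sketch; the function name and the example distribution are illustrative, not from the source, and natural logarithms are assumed:

    import math

    def shannon_entropy(probs):
        # H(X) = -sum_i p_i log p_i, skipping zero-probability outcomes
        # (the 0 log(0) = 0 convention discussed below).
        return -sum(p * math.log(p) for p in probs if p > 0)

    # A uniform distribution over M = 4 values attains the maximum log M.
    print(shannon_entropy([0.25, 0.25, 0.25, 0.25]))  # 1.3862... = log 4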
For a binary random variable $X$, which takes on one of a set of possible values $A_X = \{a, b\}$, having probabilities $P_X = \{p, (1-p)\}$ with $p \in [0, 1]$, equation (1) reduces to
$$ H(p) = p \log \frac{1}{p} + (1-p) \log \frac{1}{1-p}. \qquad (2) $$
For a r.v. with possible values $A = \{a, b, c\}$, $P(X) = \{r, s, t\}$, and $\sum_{i=1}^{3} P_{X_i} = 1$, the Shannon entropy is
$$ H(P) = H(r, s, t) = r \log \frac{1}{r} + s \log \frac{1}{s} + t \log \frac{1}{t}, $$
see Figure 1(b) for a graphical representation. The convention that $0 \log(0) = 0$ is followed, which can be justified by the fact that $\lim_{x \to 0} x \log(x) = 0$.
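A sketch of the binary case (2) with the $0 \log(0) = 0$ convention made explicit (illustrative code, natural logarithm assumed):

    import math

    def binary_entropy(p):
        # H(p) = p log(1/p) + (1-p) log(1/(1-p)), equation (2),
        # with 0 log(0) = 0 so the endpoints p = 0 and p = 1 give H = 0.
        return sum(q * math.log(1.0 / q) for q in (p, 1.0 - p) if q > 0)

    print(binary_entropy(0.5))  # log 2 = 0.6931..., the maximum
    print(binary_entropy(0.0))  # 0.0, by the 0 log(0) = 0 convention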
For this binary r.v., the Rényi entropy is
$$ H^R(P, q) = \frac{1}{1-q} \log \left( p^q + (1-p)^q \right), \qquad (3) $$
and the Tsallis entropy is
$$ H^T(P, q) = \frac{1}{q-1} \left[ 1 - \left( p^q + (1-p)^q \right) \right]. \qquad (4) $$
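Both generalized entropies recover the Shannon entropy in the limit $q \to 1$; the sketch below (illustrative names, valid only for $q \neq 1$) checks this numerically for $p = 0.3$, where $H(0.3) \approx 0.6109$:

    import math

    def renyi_binary(p, q):
        # Renyi entropy (3) of a binary r.v.: (1/(1-q)) log(p^q + (1-p)^q).
        return math.log(p**q + (1.0 - p)**q) / (1.0 - q)

    def tsallis_binary(p, q):
        # Tsallis entropy (4) of a binary r.v.: (1/(q-1)) [1 - (p^q + (1-p)^q)].
        return (1.0 - (p**q + (1.0 - p)**q)) / (q - 1.0)

    # Both values approach the Shannon entropy H(0.3) ~ 0.6109 as q -> 1.
    for q in (0.9, 0.99, 0.999):
        print(q, renyi_binary(0.3, q), tsallis_binary(0.3, q))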
Figure 2 shows the plot of the Shannon entropy (2) compared to the Rényi entropy (3) and the Tsallis entropy (4) for various values of the parameter $q$.
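A comparison plot in the spirit of Figure 2 can be reproduced with the sketch below (not the author's code; numpy and matplotlib are assumed, natural logarithms are used, and $q = 0.5$ is just one illustrative value):

    import numpy as np
    import matplotlib.pyplot as plt

    p = np.linspace(1e-6, 1 - 1e-6, 500)
    shannon = -(p * np.log(p) + (1 - p) * np.log(1 - p))

    q = 0.5  # one illustrative value of the parameter q
    s = p**q + (1 - p)**q
    renyi = np.log(s) / (1 - q)      # equation (3)
    tsallis = (1 - s) / (q - 1)      # equation (4)

    plt.plot(p, shannon, label="Shannon (2)")
    plt.plot(p, renyi, label="Renyi (3), q = 0.5")
    plt.plot(p, tsallis, label="Tsallis (4), q = 0.5")
    plt.xlabel("p")
    plt.ylabel("entropy")
    plt.legend()
    plt.show()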
Figure 3: Rényi entropy for the r.v. $X$ with probabilities $P_X = \{r, s, t\}$ and some values of $q$ ($q = 0.1$, $q = 0.5$, $q = 1.5$, $q = 5.0$).
Figure 4: Tsallis entropy for the r.v. $X$ with probabilities $P_X = \{r, s, t\}$ and some values of $q$ ($q = 0.1$, $q = 0.5$, $q = 1.5$, $q = 5.0$).