Probability, Random Variables and Stochastic Processes, by Athanasios Papoulis and S. Unnikrishna Pillai
FIGURE 5-18   FIGURE 5-19  (the transformation y = sin x)
THE INVERSE PROBLEM. In the preceding discussion, we were given a random variable x with known distribution Fx(x) and a function g(x) and we determined the distribution Fy(y) of the random variable y = g(x). We consider now the inverse problem: We are given the distribution of x and we wish to find a function g(x) such that the distribution of the random variable y = g(x) equals a specified function Fy(y). This topic is developed further in Sec. 7-5. We start with two special cases.

CHAPTER 5  FUNCTIONS OF ONE RANDOM VARIABLE  139
From Fx(x) to a uniform distribution. Given a random variable x with distribution Fx(x), we wish to find a function g(x) such that the random variable u = g(x) is uniformly distributed in the interval (0, 1). We maintain that g(x) = Fx(x), that is, if

u = Fx(x)   then   Fu(u) = u   for 0 ≤ u ≤ 1   (5-41)

Proof. Suppose that x is an arbitrary number and u = Fx(x). From the monotonicity of Fx(x) it follows that u ≤ u iff x ≤ x. Hence

P{u ≤ u} = P{x ≤ x} = Fx(x) = u

and (5-41) results.

From a uniform distribution to Fy(y). Given a random variable u uniform in the interval (0, 1), we wish to find a function g(u) such that the distribution of the random variable y = g(u) equals a specified function Fy(y). We maintain that g(u) is the inverse of the function u = Fy(y):

y = Fy^(-1)(u)   (5-42)

Proof. The random variable u in (5-41) is uniform and the function Fx(x) is arbitrary. Replacing Fx(x) by Fy(y), we obtain (5-42) (see also Fig. 5-20).
From Fx(x) to Fy(y). We consider, finally, the general case: Given Fx(x) and Fy(y), find g(x) such that the distribution of y = g(x) equals Fy(y). To solve this problem, we form the random variable u = Fx(x) as in (5-41) and the random variable y = Fy^(-1)(u) as in (5-42). Combining the two, we conclude that

y = Fy^(-1)(Fx(x)) = g(x)

has the specified distribution Fy(y).
FIGURE 5-20
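In computational terms, (5-41) and (5-42) are the basis of inverse transform sampling: draw u uniform in (0, 1) and set y = Fy^(-1)(u). A minimal Python sketch (the exponential target with rate lam and its closed-form inverse are illustrative choices, not from the text):

```python
import math
import random

def sample_via_inverse(F_inv, n, rng=random.Random(0)):
    """Draw n samples y = F_inv(u) with u uniform in (0, 1);
    by (5-41)-(5-42), y then has distribution F."""
    return [F_inv(rng.random()) for _ in range(n)]

# Illustrative target: exponential with rate lam, F(y) = 1 - exp(-lam*y),
# so F^(-1)(u) = -ln(1 - u)/lam.
lam = 2.0
F_inv = lambda u: -math.log(1.0 - u) / lam

samples = sample_via_inverse(F_inv, 200_000)
mean = sum(samples) / len(samples)   # should be close to 1/lam = 0.5
```

The same recipe works for any target whose distribution function can be inverted, numerically if necessary.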
140  PROBABILITY AND RANDOM VARIABLES
5-3 MEAN AND VARIANCE

The expected value or mean of a random variable x is by definition the integral

E{x} = ∫_{-∞}^{∞} x f(x) dx   (5-44)
EXAMPLE 5-16 ▶ If x is uniform in the interval (x1, x2), then f(x) = 1/(x2 - x1) in this interval. Hence

E{x} = (1/(x2 - x1)) ∫_{x1}^{x2} x dx = (x1 + x2)/2   ◀
If x is of discrete type with f(x) = Σ_i p_i δ(x - x_i) as in (5-45), then inserting into (5-44) and using the identity

∫_{-∞}^{∞} x δ(x - x_i) dx = x_i

we obtain

E{x} = Σ_i p_i x_i   p_i = P{x = x_i}   (5-46)
EXAMPLE 5-17 ▶ If x takes the values 1, 2, ..., 6 with probability 1/6, then

E{x} = (1/6)(1 + 2 + ··· + 6) = 3.5   ◀
Conditional mean. The conditional mean of a random variable x assuming an event M is given by the integral in (5-44) if f(x) is replaced by the conditional density f(x | M):

E{x | M} = ∫_{-∞}^{∞} x f(x | M) dx
Frequency interpretation. The defining integral can be approximated by a sum:

∫_{-∞}^{∞} x f(x) dx ≈ Σ_{k=-∞}^{∞} x_k f(x_k) Δx   (5-49)

And since f(x_k) Δx ≈ P{x_k < x < x_k + Δx}, we conclude that

E{x} ≈ Σ_k x_k P{x_k < x < x_k + Δx}
Here, the sets {x_k < x < x_k + Δx} are differential events specified in terms of the random variable x, and their union is the space S (Fig. 5-21b). Hence, to find E{x}, we multiply the probability of each differential event by the corresponding value of x and sum over all k. The resulting limit as Δx → 0 is written in the form
E{x} = ∫_S x dP   (5-50)
Proof. We denote by Δn_k the number of x_i's that are between x_k and x_k + Δx = x_{k+1}. From this it follows that

x_1 + ··· + x_n ≈ Σ_k x_k Δn_k

and since f(x_k) Δx ≈ Δn_k/n [see (4-21)] we conclude that

(x_1 + ··· + x_n)/n ≈ Σ_k x_k f(x_k) Δx ≈ ∫_{-∞}^{∞} x f(x) dx
FIGURE 5-21
FIGURE 5-22
We shall use the above frequency interpretation to express the mean of x in terms of its distribution. From the construction of Fig. 5-22a it readily follows that x̄ equals the area under the empirical percentile curve of x. Thus

x̄ = (BCD) - (OAB)

where (BCD) and (OAB) are the shaded areas above and below the u axis, respectively. These areas equal the corresponding areas of Fig. 5-22b; hence
x̄ = ∫_0^∞ [1 - Fx(x)] dx - ∫_{-∞}^0 Fx(x) dx

This leads to the conclusion that

E{x} = ∫_0^∞ R(x) dx - ∫_{-∞}^0 F(x) dx   R(x) = 1 - F(x) = P{x > x}   (5-52)
In particular, for a random variable that takes only nonnegative values, we also obtain

E{x} = ∫_0^∞ [1 - F(x)] dx   (5-53)
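Formula (5-52) is easy to check numerically. The sketch below (an exponential distribution, a truncation point, and a step size are all illustrative choices) integrates R(x) = 1 - F(x) by a Riemann sum and recovers the mean 1/λ:

```python
import math

# Check (5-52)/(5-53) for an exponential random variable:
# F(x) = 1 - exp(-lam*x) for x >= 0, whose mean is 1/lam.
lam = 0.5
R = lambda x: math.exp(-lam * x)       # R(x) = 1 - F(x) = P{x > x}

# Riemann sum of integral_0^inf R(x) dx, truncated where R is negligible.
dx = 1e-4
upper = 60.0                            # exp(-0.5*60) ~ 1e-13, safe cutoff
mean = sum(R(k * dx) for k in range(int(upper / dx))) * dx
# mean should approximate 1/lam = 2
```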
Mean of g(x). Given a random variable x and a function g(x), we form the random variable y = g(x). As we see from (5-44), the mean of this random variable is given by

E{y} = ∫_{-∞}^{∞} y fy(y) dy   (5-54)

It appears, therefore, that to determine the mean of y, we must find its density fy(y). This, however, is not necessary. As the next basic theorem shows, E{y} can be expressed directly in terms of the function g(x) and the density fx(x) of x:

E{y} = E{g(x)} = ∫_{-∞}^{∞} g(x) fx(x) dx   (5-55)
Proof. We shall sketch a proof using the curve g(x) of Fig. 5-23. With y = g(x1) = g(x2) = g(x3) as in the figure, we see that

fy(y) dy = fx(x1) dx1 + fx(x2) dx2 + fx(x3) dx3
FIGURE 5-23   FIGURE 5-24
Multiplying by y, we obtain

y fy(y) dy = g(x1) fx(x1) dx1 + g(x2) fx(x2) dx2 + g(x3) fx(x3) dx3

Thus to each differential in (5-54) there correspond one or more differentials in (5-55). As dy covers the y axis, the corresponding dx's are nonoverlapping and they cover the entire x axis. Hence the integrals in (5-54) and (5-55) are equal.
If x is of discrete type as in (5-45), then (5-55) yields

E{g(x)} = Σ_i g(x_i) P{x = x_i}   (5-56)
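Equation (5-56) says the mean of y = g(x) can be computed without ever forming the distribution of y. A small sketch with a fair die (the choice g(x) = (x - 3)^2 is arbitrary and purely illustrative):

```python
from fractions import Fraction

# E{g(x)} computed two ways for a fair die: directly from (5-56),
# and the long way, by first building the distribution of y = g(x).
pmf_x = {k: Fraction(1, 6) for k in range(1, 7)}
g = lambda x: (x - 3) ** 2

# (5-56): sum of g(x_i) P{x = x_i} -- no density of y needed.
lhs = sum(g(x) * p for x, p in pmf_x.items())

# Long way: derive the pmf of y = g(x), then sum y P{y = y}.
pmf_y = {}
for x, p in pmf_x.items():
    pmf_y[g(x)] = pmf_y.get(g(x), Fraction(0)) + p
rhs = sum(y * p for y, p in pmf_y.items())

# Both equal (4 + 1 + 0 + 1 + 4 + 9)/6 = 19/6.
```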
EXAMPLE 5-19 ▶ With x0 an arbitrary number and g(x) as in Fig. 5-24, (5-55) yields
EXAMPLE 5-20 ▶ In this example, we show that the probability of any event A can be expressed as an expected value. For this purpose we form the zero-one random variable xA associated with the event A:

xA(ζ) = 1 if ζ ∈ A   xA(ζ) = 0 if ζ ∉ A

Since this random variable takes the values 1 and 0 with respective probabilities P(A) and P(Ā) = 1 - P(A), (5-46) yields

E{xA} = 1 × P(A) + 0 × P(Ā) = P(A)   ◀
Linearity: From (5-55) it follows that

E{a1 g1(x) + ··· + an gn(x)} = a1 E{g1(x)} + ··· + an E{gn(x)}   (5-57)

In particular, E{ax + b} = a E{x} + b.
Complex random variables: If z = x + jy is a complex random variable, then its expected value is by definition

E{z} = E{x} + j E{y}

From (5-55) it follows that if g(x) is a complex function of the real random variable x, then

E{g(x)} = ∫_{-∞}^{∞} g(x) f(x) dx   (5-58)
Variance

The mean alone cannot adequately represent the p.d.f. of a random variable. To illustrate this, consider two Gaussian random variables x1 ~ N(0, 1) and x2 ~ N(0, 3). Both of them have the same mean μ = 0. However, as Fig. 5-25 shows, their p.d.f.s are quite different. Here x1 is more concentrated around the mean, whereas x2 has a wider spread. Clearly, we need at least one additional parameter to measure this spread around the mean.

For a random variable x with mean μ, x - μ represents the deviation of the random variable from its mean. Since this deviation can be either positive or negative, consider the quantity (x - μ)²; its average value E{(x - μ)²} represents the average square deviation of x around its mean. Define

σx² ≜ E{(x - μ)²} > 0   (5-59)

With g(x) = (x - μ)² and using (5-55), we get

σx² = ∫_{-∞}^{∞} (x - μ)² f(x) dx   (5-60)
FIGURE 5-25  (a) σ² = 1  (b) σ² = 3
Hence

σ² = E{x²} - η² = E{x²} - (E{x})²   (5-61)

or, for any random variable with zero mean, σ² = E{x²}. For example, if x is uniform in the interval (-c, c), then η = 0 and

σ² = E{x²} = (1/2c) ∫_{-c}^{c} x² dx = c²/3
If x is normal,

f(x) = (1/(σ√(2π))) e^{-(x-η)²/2σ²}

where up to now η and σ² were two arbitrary constants. We show next that η is indeed the mean of x and σ² its variance.

Proof. Clearly, f(x) is symmetrical about the line x = η; hence E{x} = η. Furthermore,

∫_{-∞}^{∞} e^{-(x-η)²/2σ²} dx = σ√(2π)

because the area of f(x) equals 1. Differentiating with respect to σ, we obtain

∫_{-∞}^{∞} ((x - η)²/σ³) e^{-(x-η)²/2σ²} dx = √(2π)

Multiplying both sides by σ²/√(2π), we conclude that E{(x - η)²} = σ² and the proof is complete. ◀
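The proof can be corroborated numerically by integrating the N(η, σ) density on a fine grid (η = 1.5, σ = 2, the grid step, and the truncation width are arbitrary illustrative values):

```python
import math

# Numerical check that eta and sigma^2 in the normal density are
# indeed the mean and variance, per the differentiation proof above.
eta, sigma = 1.5, 2.0
f = lambda x: math.exp(-(x - eta) ** 2 / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

dx = 1e-3
xs = [eta + (k - 20000) * dx for k in range(40001)]   # eta +/- 20 (10 sigma)
area = sum(f(x) for x in xs) * dx                      # ~ 1
mean = sum(x * f(x) for x in xs) * dx                  # ~ eta
var  = sum((x - eta) ** 2 * f(x) for x in xs) * dx     # ~ sigma^2
```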
EXAMPLE 5-23 ▶ The random variable x takes the values 1 and 0 with probabilities p and q = 1 - p respectively. In this case

E{x} = 1 × p + 0 × q = p
E{x²} = 1² × p + 0² × q = p

Hence

σ² = E{x²} - (E{x})² = p - p² = pq   ◀
EXAMPLE 5-24 ▶ A Poisson distributed random variable with parameter λ takes the values 0, 1, ... with probabilities

P{x = k} = e^{-λ} λ^k/k!

Its mean is given by

E{x} = e^{-λ} Σ_{k=0}^{∞} k λ^k/k! = λ e^{-λ} Σ_{k=1}^{∞} λ^{k-1}/(k-1)! = λ e^{-λ} e^{λ} = λ

Hence E{x} = λ; a similar computation shows that the variance also equals λ. ◀
Notes 1. The variance σ² of a random variable x is a measure of the concentration of x near its mean η. Its relative frequency interpretation (empirical estimate) is the average of (x_i - η)²:

σ² ≈ (1/n) Σ_{i=1}^{n} (x_i - η)²   (5-65)

where x_i are the observed values of x. This average can be used as the estimate of σ² only if η is known. If it is unknown, we replace it by its estimate x̄ and we change n to n - 1. This yields the estimate

s² = (1/(n - 1)) Σ_{i=1}^{n} (x_i - x̄)²

known as the sample variance of x [see (7-65)]. The reason for changing n to n - 1 is explained later.

2. A simpler measure of the concentration of x near η is the first absolute central moment M = E{|x - η|}. Its empirical estimate is the average of |x_i - η|:

M ≈ (1/n) Σ_{i=1}^{n} |x_i - η|

If η is unknown, it is replaced by x̄. This estimate avoids the computation of squares.
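The n - 1 correction in Note 1 can be seen empirically. In this hedged sketch (sample size, repetition count, seed, and the N(0, 2) source are all arbitrary choices), the n-divided average underestimates σ² by roughly the factor (n - 1)/n, while the sample variance is close to σ²:

```python
import random
import statistics

# Empirical comparison: dividing by n - 1 (sample variance) is unbiased
# when the mean is estimated from the same data; dividing by n is not.
rng = random.Random(1)
true_var = 4.0            # x ~ N(0, 2), sigma^2 = 4
n = 5
biased, unbiased = [], []
for _ in range(20000):
    xs = [rng.gauss(0.0, 2.0) for _ in range(n)]
    m = sum(xs) / n
    ss = sum((x - m) ** 2 for x in xs)
    biased.append(ss / n)
    unbiased.append(ss / (n - 1))

avg_biased = statistics.mean(biased)      # ~ (n-1)/n * 4 = 3.2
avg_unbiased = statistics.mean(unbiased)  # ~ 4.0
```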
5-4 MOMENTS

The following quantities are of interest in the study of random variables:

Moments

m_n = E{x^n} = ∫_{-∞}^{∞} x^n f(x) dx   (5-67)

Central moments

μ_n = E{(x - η)^n} = ∫_{-∞}^{∞} (x - η)^n f(x) dx   (5-68)

Absolute moments

E{|x|^n}   (5-69)

Generalized moments

E{|x - a|^n}   (5-70)
We note that

μ_n = E{(x - η)^n} = Σ_{k=0}^{n} C(n, k) m_k (-η)^{n-k}   (5-71)

Similarly,

m_n = E{[(x - η) + η]^n} = Σ_{k=0}^{n} C(n, k) μ_k η^{n-k}   (5-72)

In particular,

μ_0 = m_0 = 1   μ_1 = 0

and

m_1 = η   μ_2 = m_2 - η² = σ²
Notes 1. If the function f(x) is interpreted as mass density on the x axis, then E{x} equals its center of gravity, E{x²} equals the moment of inertia with respect to the origin, and σ² equals the central moment of inertia. The standard deviation σ is the radius of gyration.

2. The constants η and σ give only a limited characterization of f(x). Knowledge of other moments provides additional information that can be used, for example, to distinguish between two densities with the same η and σ. In fact, if m_n is known for every n, then, under certain conditions, f(x) is determined uniquely [see also (5-105)]. The underlying theory is known in mathematics as the moment problem.
3. The moments of a random variable are not arbitrary numbers but must satisfy various inequalities [see (5-92)]. For example [see (5-61)], m_2 ≥ m_1².

Normal random variables. If x is N(0; σ²), then

E{x^n} = 0 for n odd;   E{x^n} = 1·3···(n - 1) σ^n for n even   (5-73)

E{|x|^n} = 1·3···(n - 1) σ^n for n = 2k
E{|x|^n} = 2^k k! σ^{2k+1} √(2/π) for n = 2k + 1   (5-74)
The odd moments of x are 0 because f(-x) = f(x). To prove the lower part of (5-73), we differentiate k times the identity

∫_{-∞}^{∞} e^{-ax²} dx = √(π/a)

This yields

∫_{-∞}^{∞} x^{2k} e^{-ax²} dx = (1·3···(2k - 1)/2^k) √(π/a^{2k+1})

and with a = 1/2σ², (5-73) results.
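A numerical spot-check of the even-moment formula in (5-73) (σ = 1.5 and k = 2 are illustrative choices; the integral is truncated far in the tail where the density is negligible):

```python
import math

# Check (5-73): for x ~ N(0, sigma^2), E{x^(2k)} = 1*3*...*(2k-1) * sigma^(2k).
sigma = 1.5

def double_factorial_odd(k):
    """1 * 3 * ... * (2k - 1)."""
    out = 1
    for j in range(1, 2 * k, 2):
        out *= j
    return out

def even_moment(k, dx=1e-3, half_width=20.0):
    """2k-th moment by Riemann sum over the positive axis, doubled (f is even)."""
    f = lambda x: math.exp(-x * x / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))
    n = int(half_width / dx)
    return sum(((i * dx) ** (2 * k)) * f(i * dx) * 2 for i in range(1, n)) * dx

# k = 2: E{x^4} = 3 * sigma^4 = 15.1875
m4 = even_moment(2)
expected = double_factorial_odd(2) * sigma ** 4
```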
Since f(-x) = f(x), we have

E{|x|^{2k+1}} = 2 ∫_0^∞ x^{2k+1} f(x) dx = (2/(σ√(2π))) ∫_0^∞ x^{2k+1} e^{-x²/2σ²} dx

With y = x²/2σ², the above yields

E{|x|^{2k+1}} = √(2/π) ((2σ²)^{k+1}/2σ) ∫_0^∞ y^k e^{-y} dy = 2^k k! σ^{2k+1} √(2/π)   (5-76)
In particular, for the Rayleigh density f(x) = (x/σ²) e^{-x²/2σ²} U(x), similar computations give

E{x} = σ√(π/2)   Var{x} = (2 - π/2)σ²   (5-77)
◀

For the Maxwell density

f(x) = (√2/(a³√π)) x² e^{-x²/2a²} U(x)

the moments follow from (5-73) and (5-76); in particular, E{x} = 2a√(2/π) and E{x²} = 3a².
Poisson random variables. The moments of a Poisson distributed random variable are functions of the parameter λ:

m_n(λ) = E{x^n} = e^{-λ} Σ_{k=0}^{∞} k^n λ^k/k!   (5-80)

μ_n(λ) = E{(x - λ)^n} = e^{-λ} Σ_{k=0}^{∞} (k - λ)^n λ^k/k!   (5-81)

We shall show that they satisfy the recursion equations

m_{n+1}(λ) = λ[m_n(λ) + m_n′(λ)]   (5-82)

μ_{n+1}(λ) = λ[n μ_{n-1}(λ) + μ_n′(λ)]   (5-83)
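Recursion (5-82) can be spot-checked numerically: compute m_n directly from the pmf as in (5-80) and approximate the derivative by a central difference (λ = 3, n = 2, the truncation point, and the step h are arbitrary illustrative values):

```python
import math

# Check (5-82): m_{n+1}(lam) = lam * (m_n(lam) + m_n'(lam)),
# with moments m_n computed from the Poisson pmf and the derivative
# taken by a central difference.
def m(n, lam, kmax=60):
    """n-th moment of Poisson(lam), truncated sum as in (5-80)."""
    return sum(math.exp(-lam) * k ** n * lam ** k / math.factorial(k)
               for k in range(kmax))

lam, n, h = 3.0, 2, 1e-5
m_prime = (m(n, lam + h) - m(n, lam - h)) / (2 * h)
lhs = m(n + 1, lam)                  # m_3(3) = 3 + 3*9 + 27 = 57
rhs = lam * (m(n, lam) + m_prime)    # 3 * (m_2(3) + m_2'(3))
```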
ESTIMATE OF THE MEAN OF g(x). The mean of the random variable y = g(x) is given by

E{y} = ∫_{-∞}^{∞} g(x) f(x) dx   (5-85)

If g(x) is smooth near x = η, then expanding it into a Taylor series about η and retaining the first three terms, we obtain the first-order estimate

η_y ≈ g(η) + g″(η) σ²/2   (5-86)

And if g(x) is approximated by a straight line, then η_y ≈ g(η). This shows that the slope of g(x) has no effect on η_y; however, as we show next, it affects the variance of y.
Variance. We maintain that the first-order estimate of σ_y² is given by

σ_y² ≈ |g′(η)|² σ²   (5-87)

Proof. We apply (5-86) to the function g²(x). Since its second derivative equals 2(g′)² + 2gg″, we conclude that

E{y²} ≈ g²(η) + [(g′(η))² + g(η)g″(η)] σ²

Inserting the approximation (5-86) for η_y into the above and neglecting the σ⁴ term, we obtain (5-87).
EXAMPLE 5-27 ▶ A voltage E = 120 V is connected across a resistor whose resistance is a random variable r uniform between 900 and 1100 Ω. Using (5-85) and (5-86), we shall estimate the mean and variance of the resulting current

i = E/r
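The estimates of Example 5-27 can be compared with a brute-force simulation. This sketch applies (5-86) and (5-87) to g(r) = E/r and checks the mean estimate against a Monte Carlo average (the sample count and seed are arbitrary choices):

```python
import random

# Example 5-27 numerically: i = E/r with r uniform in (900, 1100) ohms.
E = 120.0
eta_r, var_r = 1000.0, 200.0 ** 2 / 12      # mean and variance of uniform(900, 1100)

g  = lambda r: E / r
g1 = lambda r: -E / r ** 2                   # g'
g2 = lambda r: 2 * E / r ** 3                # g''

eta_i_est = g(eta_r) + g2(eta_r) * var_r / 2   # (5-86): ~0.1204 A
var_i_est = g1(eta_r) ** 2 * var_r             # (5-87): ~4.8e-5 A^2

rng = random.Random(7)
sims = [g(rng.uniform(900.0, 1100.0)) for _ in range(200000)]
eta_i_mc = sum(sims) / len(sims)               # close to eta_i_est
```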
CHEBYSHEV INEQUALITY ▶ A measure of the concentration of a random variable near its mean η is its variance σ². In fact, as the following theorem shows, the probability that x is outside an arbitrary interval (η - ε, η + ε) is negligible if the ratio σ/ε is sufficiently small. This result, known as the Chebyshev inequality, is fundamental:

P{|x - η| ≥ ε} ≤ σ²/ε²   (5-88)

Proof. Indeed,

σ² = ∫_{-∞}^{∞} (x - η)² f(x) dx ≥ ε² [∫_{-∞}^{η-ε} f(x) dx + ∫_{η+ε}^{∞} f(x) dx] = ε² ∫_{|x-η|≥ε} f(x) dx

and (5-88) results because the last integral equals P{|x - η| ≥ ε}. ◀
Notes 1. From (5-88) it follows that, if σ = 0, then the probability that x is outside the interval (η - ε, η + ε) equals 0 for any ε; hence x = η with probability 1. Similarly, if

E{x²} = η² + σ² = 0   then   η = 0   σ = 0

hence x = 0 with probability 1.

2. For specific densities, the bound in (5-88) is too high. Suppose, for example, that x is normal. In this case, P{|x - η| ≥ 3σ} = 2 - 2G(3) = 0.0027. Inequality (5-88), however, yields P{|x - η| ≥ 3σ} ≤ 1/9.

The significance of Chebyshev's inequality is the fact that it holds for any f(x) and can, therefore, be used even if f(x) is not known.

3. The bound in (5-88) can be reduced if various assumptions are made about f(x) [see Chernoff bound (Prob. 5-35)].
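Note 2 in code: the exact normal tail probability at 3σ versus the Chebyshev bound (the erf-based formula for the standard normal distribution function is a standard identity, used here for illustration):

```python
import math

def G(x):
    """Standard normal distribution function, via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

p_exact = 2.0 - 2.0 * G(3.0)   # P{|x - eta| >= 3 sigma} for a normal x, ~0.0027
bound = 1.0 / 9.0               # Chebyshev bound (5-88) with eps = 3 sigma
```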
MARKOV INEQUALITY. If f(x) = 0 for x < 0, then, for any a > 0, P{x ≥ a} ≤ η/a (5-89). Proof. Indeed,

E{x} = ∫_0^∞ x f(x) dx ≥ ∫_a^∞ x f(x) dx ≥ a ∫_a^∞ f(x) dx

and (5-89) results because the last integral equals P{x ≥ a}. ◀
BIENAYMÉ INEQUALITY ▶ Suppose that x is an arbitrary random variable and a and n are two arbitrary numbers. Clearly, the random variable |x - a|^n takes only positive values. Applying (5-89) with a = ε^n, we conclude that

P{|x - a|^n ≥ ε^n} ≤ E{|x - a|^n}/ε^n   (5-90)

Hence

P{|x - a| ≥ ε} ≤ E{|x - a|^n}/ε^n   (5-91)

This result is known as the inequality of Bienaymé. Chebyshev's inequality is a special case obtained with a = η and n = 2. ◀
LYAPUNOV INEQUALITY ▶ Let β_k = E{|x|^k} < ∞ represent the absolute moments of the random variable x. Then for any k

β_{k-1}^{1/(k-1)} ≤ β_k^{1/k}   k ≥ 1   (5-92)

Thus β_1² ≤ β_2, β_2³ ≤ β_3², ..., β_{k-1}^k ≤ β_k^{k-1}, so that

β_1 ≤ β_2^{1/2} ≤ β_3^{1/3} ≤ ··· ≤ β_k^{1/k}
The characteristic function of a random variable x is by definition the integral

Φx(ω) = E{e^{jωx}} = ∫_{-∞}^{∞} f(x) e^{jωx} dx   (5-94)

If jω is changed to s, the resulting integral

Φ(s) = E{e^{sx}} = ∫_{-∞}^{∞} f(x) e^{sx} dx   (5-96)

is the moment function of x; its logarithm Ψ(s) = ln Φ(s) is the second moment function of x.
▶ We shall show that the characteristic function of an N(η, σ) random variable x equals (see Table 5-2)

Φx(ω) = e^{jηω - σ²ω²/2}   (5-100)

Proof. The random variable z = (x - η)/σ is N(0, 1) and its moment function equals

Φz(s) = (1/√(2π)) ∫_{-∞}^{∞} e^{sz} e^{-z²/2} dz

With

sz - z²/2 = -(z - s)²/2 + s²/2

we conclude that

Φz(s) = e^{s²/2} ∫_{-∞}^{∞} (1/√(2π)) e^{-(z-s)²/2} dz = e^{s²/2}   (5-101)

And since x = σz + η, (5-100) follows from (5-99) and (5-101) with s = jω. ◀
Inversion formula. As we see from (5-94), Φx(ω) is the Fourier transform of f(x). Hence the properties of characteristic functions are essentially the same as the properties of Fourier transforms. We note, in particular, that f(x) can be expressed in terms of Φx(ω):

f(x) = (1/2π) ∫_{-∞}^{∞} Φx(ω) e^{-jωx} dω   (5-102)
Moment theorem. Differentiating (5-96) n times, we obtain Φ^(n)(s) = E{x^n e^{sx}}. Hence

Φ^(n)(0) = E{x^n} = m_n   (5-103)

Thus the derivatives of Φ(s) at the origin equal the moments of x. This justifies the name "moment function" given to Φ(s).

In particular,

Φ′(0) = m_1 = η   Φ″(0) = m_2   (5-104)
Note. Expanding Φ(s) into a series near the origin and using (5-103), we obtain

Φ(s) = Σ_{n=0}^{∞} m_n s^n/n!   (5-105)

This is valid only if all moments are finite and the series converges absolutely near s = 0. Since f(x) can be determined in terms of Φ(s), (5-105) shows that, under the stated conditions, the density of a random variable is uniquely determined if all its moments are known.
▶ We shall determine the moment function and the moments of a random variable x with gamma distribution (see also Table 5-2):

f(x) = γ x^{b-1} e^{-cx} U(x)   γ = c^b/Γ(b)

From (4-35) it follows that

Φ(s) = γ ∫_0^∞ x^{b-1} e^{-(c-s)x} dx = γ Γ(b)/(c - s)^b = c^b/(c - s)^b   (5-106)
Chi-square: Setting b = m/2 and c = 1/2 in (5-106), we obtain the moment function of the chi-square density χ²(m):

Φ(s) = 1/√((1 - 2s)^m)

Hence

E{x} = m   σ² = 2m   (5-109)
Cumulants. The cumulants λ_n of a random variable x are by definition the derivatives

d^n Ψ(0)/ds^n = λ_n   (5-110)

of its second moment function Ψ(s). Clearly [see (5-97)] Ψ(0) = λ_0 = 0; hence

Ψ(s) = λ_1 s + (1/2) λ_2 s² + ··· + (1/n!) λ_n s^n + ···

We maintain that

λ_1 = m_1 = η   λ_2 = μ_2 = σ²   (5-111)
Discrete Type

Suppose that x is a discrete-type random variable taking the values x_i with probability p_i. In this case, (5-94) yields

Φx(ω) = Σ_i p_i e^{jωx_i}   (5-112)

Thus Φx(ω) is a sum of exponentials. The moment function of x can be defined as in (5-96). However, if x takes only integer values, then a definition in terms of z transforms is preferable.
MOMENT GENERATING FUNCTIONS. If x is a lattice-type random variable taking integer values, then its moment generating function is by definition the sum

Γ(z) = E{z^x} = Σ_{n=-∞}^{∞} P{x = n} z^n = Σ_n p_n z^n   (5-113)

Thus Γ(1/z) is the ordinary z transform of the sequence p_n = P{x = n}. With Φx(ω) as in (5-112), this yields

Φx(ω) = Γ(e^{jω}) = Σ_n p_n e^{jnω}

Thus Φx(ω) is the discrete Fourier transform (DFT) of the sequence {p_n}, and

Ψ(s) = ln Γ(e^s)   (5-114)
EXAMPLE 5-30 ▶ (a) If x takes the values 0 and 1 with P{x = 1} = p and P{x = 0} = q, then

Γ(z) = pz + q
Γ′(1) = E{x} = p   Γ″(1) = E{x²} - E{x} = 0

(b) If x has the binomial distribution B(m, p) given by

p_n = P{x = n} = C(m, n) p^n q^{m-n}

then

Γ(z) = (pz + q)^m   (5-117)

and

Γ′(1) = mp   Γ″(1) = m(m - 1)p²

Hence

E{x} = mp   σ² = mpq   (5-118)
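The derivative relations behind (5-118) can be checked numerically from Γ(z) = (pz + q)^m alone (m = 10, p = 0.3, and the step h are illustrative choices; the derivatives are approximated by finite differences):

```python
# (5-117)-(5-118) via the moment generating (z-transform) function:
# for B(m, p), Gamma(z) = (p z + q)^m, Gamma'(1) = mp, and
# Gamma''(1) = m(m-1)p^2, so var = Gamma'' + Gamma' - Gamma'^2 = mpq.
m, p = 10, 0.3
q = 1.0 - p

h = 1e-5                                # finite-difference step at z = 1
Gamma = lambda z: (p * z + q) ** m
d1 = (Gamma(1 + h) - Gamma(1 - h)) / (2 * h)
d2 = (Gamma(1 + h) - 2 * Gamma(1) + Gamma(1 - h)) / h ** 2

mean = d1                               # ~ mp = 3
var = d2 + d1 - d1 ** 2                 # ~ mpq = 2.1
```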
DEMOIVRE-LAPLACE THEOREM ▶ Let x ~ B(n, p). Then from (5-117), we obtain the characteristic function of the binomial random variable to be

Φx(ω) = (p e^{jω} + q)^n

and define

y = (x - np)/√(npq)   (5-120)

This gives

Φy(ω) = e^{-jωnp/√(npq)} (p e^{jω/√(npq)} + q)^n → e^{-ω²/2}   (5-121)

since the terms of order k ≥ 3 in the power series expansion of ln Φy(ω) are proportional to (1/√n)^{k-2} and tend to 0 as n → ∞.

On comparing (5-121) with (5-100), we conclude that as n → ∞, the random variable y tends to the standard normal distribution, or from (5-120), x tends to N(np, npq). ◀
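The theorem in numbers: for B(n, p) with moderate n, the pmf already hugs the N(np, npq) density (n = 100, p = 0.5, and the evaluation point at the mean are illustrative choices):

```python
import math

# DeMoivre-Laplace numerically: compare the B(n, p) probability at k
# with the N(np, npq) density, which should be close for large n.
n, p = 100, 0.5
q = 1.0 - p
eta, var = n * p, n * p * q

def binom_pmf(k):
    return math.comb(n, k) * p ** k * q ** (n - k)

def normal_pdf(x):
    return math.exp(-(x - eta) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

k = 50                                   # at the mean np
ratio = binom_pmf(k) / normal_pdf(k)     # close to 1
```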
In Examples 5-32 and 5-33 we shall exhibit the usefulness of the moment generating function in solving problems. The next example is of historical interest, as it was first proposed and solved by DeMoivre.
EXAMPLE 5-32 ▶ An event A occurs in a series of independent trials with constant probability p. If A occurs at least r times in succession, we refer to it as a run of length r. Find the probability of obtaining a run of length r for A in n trials.

SOLUTION

Let p_n denote the probability of the event X_n that represents a run of length r for A in n trials. A run of length r in n + 1 trials can happen in only two mutually exclusive ways: either there is a run of length r in the first n trials, or a run of length r is obtained only in the last r trials of the n + 1 trials and not before that. Thus

p_{n+1} = p_n + P(B_{n+1})   (5-122)

where

B_{n+1} = {No run of length r for A in the first n - r trials}
        ∩ {A does not occur in the (n - r + 1)th trial}
        ∩ {Run of length r for A in the last r trials}
        = X̄_{n-r} ∩ Ā ∩ A ∩ ··· ∩ A   (r factors of A)
Using the power series expansion

[1 - z(1 - q p^r z^r)]^{-1} = Σ_{n=0}^{∞} Σ_{k=0}^{⌊n/(r+1)⌋} C(n - kr, k) (-1)^k (q p^r)^k z^n = Σ_{n=0}^{∞} a_{n,r} z^n

and the upper limit on k corresponds to the condition n - kr ≥ k so that C(n - kr, k) is well defined. Thus

a_{n,r} = Σ_{k=0}^{⌊n/(r+1)⌋} C(n - kr, k) (-1)^k (q p^r)^k   (5-133)

With a_{n,r} so obtained, finally the probability of a run of length r for A in n trials is given by

p_n = 1 - q_n = 1 - a_{n,r} + p^r a_{n-r,r}   (5-134)
The following problem has many variants and its solution goes back to Montmort (1708). It has been further generalized by Laplace and many others.
TABLE 5-1
Probability p_n in (5-134)

        n = 5                 n = 10
r     p = 1/5   p = 1/3     p = 1/5   p = 1/3
1     0.6723    0.8683      0.8926    0.9827
2     0.1347    0.3251      0.2733    0.5773
3     0.0208    0.0864      0.0523    0.2026
4     0.0029    0.0206      0.0093    0.0615
5     0.0003    0.0041      0.0016    0.0178
6                           0.0003    0.0050
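Formulas (5-133) and (5-134) reproduce Table 5-1 directly; a short sketch:

```python
import math

# Probability p_n of a run of length r for A in n trials,
# per (5-133)-(5-134).
def a(n, r, p):
    """Coefficient a_{n,r} of (5-133)."""
    q = 1.0 - p
    return sum(math.comb(n - k * r, k) * (-1) ** k * (q * p ** r) ** k
               for k in range(n // (r + 1) + 1))

def run_prob(n, r, p):
    """p_n = 1 - a_{n,r} + p^r a_{n-r,r}, per (5-134)."""
    return 1.0 - a(n, r, p) + p ** r * a(n - r, r, p)

# e.g. run_prob(5, 1, 1/5) ~ 0.6723 and run_prob(10, 3, 1/3) ~ 0.2026,
# matching Table 5-1.
```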
EXAMPLE 5-33 ▶ THE PAIRING PROBLEM  A person writes n letters and addresses n envelopes. Then one letter is randomly placed into each envelope. What is the probability that at least one letter will reach its correct destination? What if n → ∞?

SOLUTION

When a letter is placed into the envelope addressed to the intended person, let us refer to it as a coincidence. Let X_k represent the event that there are exactly k coincidences among the n envelopes. The events X_0, X_1, ..., X_n form a partition since they are mutually exclusive and one of these events is bound to happen. Hence, by the theorem of total probability,

p_n(0) + p_n(1) + p_n(2) + ··· + p_n(n) = 1   (5-135)
where

p_n(k) = P{X_k}   (5-136)
To determine p_n(k), let us examine the event X_k. There are C(n, k) ways of drawing k letters from a group of n, and to generate k coincidences, each such sequence should go into their intended envelopes with probability

(1/n) (1/(n - 1)) ··· (1/(n - k + 1))

while the remaining n - k letters present no coincidences at all, with probability p_{n-k}(0). By the independence of these events, we get the probability of k coincidences for each sequence of k letters in a group of n to be

(1/(n(n - 1) ··· (n - k + 1))) p_{n-k}(0)

But there are C(n, k) such mutually exclusive sequences, and using (2-20) we get

p_n(k) = P{X_k} = C(n, k) (1/(n(n - 1) ··· (n - k + 1))) p_{n-k}(0) = p_{n-k}(0)/k!   (5-137)
Since p_n(n) = 1/n!, equation (5-137) gives p_0(0) = 1. Substituting (5-137) into (5-135) term by term, we get

p_n(0) + p_{n-1}(0)/1! + p_{n-2}(0)/2! + ··· + p_1(0)/(n - 1)! + 1/n! = 1   (5-138)

which gives successively p_1(0) = 0, p_2(0) = 1/2, p_3(0) = 1/3, ...
and to obtain an explicit expression for p_n(0), define the moment generating function

φ(z) = Σ_{n=0}^{∞} p_n(0) z^n   (5-139)

From (5-138) it follows that

e^{-z}/(1 - z) = Σ_{n=0}^{∞} (Σ_{k=0}^{n} (-1)^k/k!) z^n   (5-140)

and on comparing with (5-139), we get

p_n(0) = Σ_{k=0}^{n} (-1)^k/k!   (5-141)

so that as n → ∞

p_n(0) → e^{-1} = 0.367879   (5-142)
Thus

P{At least one letter reaches the correct destination}
= 1 - p_n(0) = 1 - Σ_{k=0}^{n} (-1)^k/k! → 1 - e^{-1} ≈ 0.63212056   (5-143)
Even for moderate n, this probability is close to 0.6321. Thus even if a mail delivery distributes letters in the most casual manner without undertaking any kind of sorting at all, there is still a 63% chance that at least one family will receive some mail addressed to it.

On a more serious note, by the same token, a couple trying to conceive has about a 63% chance of succeeding in their efforts under normal conditions. The abundance of living organisms in Nature is a good testimony to the fact that the odds are indeed tilted in favor of this process. ◀
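The closed form (5-141) and the limit (5-143) agree with a direct shuffle simulation (n = 7, the trial count, and the seed are arbitrary choices):

```python
import math
import random

# Exact P{no coincidence} from (5-141) versus a Monte Carlo shuffle.
def p0(n):
    """p_n(0) = sum_{k=0}^{n} (-1)^k / k!, per (5-141)."""
    return sum((-1) ** k / math.factorial(k) for k in range(n + 1))

rng = random.Random(3)
n, trials = 7, 100000
hits = 0
for _ in range(trials):
    perm = list(range(n))
    rng.shuffle(perm)
    if all(perm[i] != i for i in range(n)):   # no letter in its own envelope
        hits += 1

mc = hits / trials              # ~ p0(7) = 1854/5040 = 0.367857...
at_least_one = 1.0 - p0(7)      # ~ 1 - 1/e = 0.63212...
```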
Determination of the density of g(x). We show next that characteristic functions can be used to determine the density fy(y) of the random variable y = g(x) in terms of the density fx(x) of x.

From (5-58) it follows that the characteristic function

Φy(ω) = ∫_{-∞}^{∞} e^{jωy} fy(y) dy

of y equals

Φy(ω) = E{e^{jωg(x)}} = ∫_{-∞}^{∞} e^{jωg(x)} fx(x) dx
TABLE 5-2

Random variable | Probability density f(x) | Mean | Variance | Characteristic function Φ(ω)

Log-normal:
  f(x) = (1/(x√(2πσ²))) e^{-(ln x - μ)²/2σ²},  x ≥ 0
  Mean: e^{μ+σ²/2}   Variance: e^{2μ+σ²}(e^{σ²} - 1)

Gamma G(α, β):
  x ≥ 0,  α > 0,  β > 0
  Mean: αβ   Variance: αβ²   Φ(ω) = (1 - jβω)^{-α}

Erlang-k:
  f(x) = ((kλ)^k x^{k-1} e^{-kλx})/(k - 1)!,  x ≥ 0
  Mean: 1/λ   Variance: 1/(kλ²)   Φ(ω) = (1 - jω/kλ)^{-k}

Chi-square χ²(n):
  f(x) = (x^{n/2-1} e^{-x/2})/(2^{n/2} Γ(n/2)),  x ≥ 0
  Mean: n   Variance: 2n   Φ(ω) = (1 - j2ω)^{-n/2}

Weibull:
  f(x) = αβ x^{β-1} e^{-αx^β},  x ≥ 0,  α > 0,  β > 0
  Mean: α^{-1/β} Γ(1 + 1/β)   Variance: α^{-2/β} [Γ(1 + 2/β) - (Γ(1 + 1/β))²]

Rayleigh:
  f(x) = (x/σ²) e^{-x²/2σ²},  x ≥ 0
  Mean: σ√(π/2)   Variance: (2 - π/2)σ²

Uniform U(a, b):
  f(x) = 1/(b - a),  a < x < b
  Mean: (a + b)/2   Variance: (b - a)²/12   Φ(ω) = (e^{jbω} - e^{jaω})/(jω(b - a))

Rician:
  f(x) = (x/σ²) e^{-(x²+a²)/2σ²} I0(ax/σ²),  x ≥ 0,  a > 0
  Mean: σ√(π/2) e^{-γ/2} [(1 + γ) I0(γ/2) + γ I1(γ/2)],  γ = a²/2σ²
TABLE 5-2 (Continued)

Random variable | Probability density f(x) | Mean | Variance | Characteristic function Φ(ω)

F-distribution F(m, n):
  f(x) = (Γ((m + n)/2)/(Γ(m/2)Γ(n/2))) (m/n)^{m/2} x^{m/2-1} (1 + mx/n)^{-(m+n)/2},  x > 0
  Mean: n/(n - 2),  n > 2   Variance: n²(2m + 2n - 4)/(m(n - 2)²(n - 4)),  n > 4

Hypergeometric:
  P{x = k} = C(M, k) C(N - M, n - k)/C(N, n)
  Mean: nM/N   Variance: n (M/N)(1 - M/N)(N - n)/(N - 1)

Negative binomial:
  P{x = k} = C(r + k - 1, k) p^r q^k
  Mean: rq/p   Variance: rq/p²
EXAMPLE 5-35 ▶ We assume finally that x is uniform in the interval (-π/2, π/2) and y = sin x. In this case

Φy(ω) = ∫_{-∞}^{∞} e^{jω sin x} f(x) dx = (1/π) ∫_{-π/2}^{π/2} e^{jω sin x} dx

As x increases from -π/2 to π/2, the function y = sin x increases from -1 to 1 and

dy = cos x dx = √(1 - y²) dx

Hence

Φy(ω) = (1/π) ∫_{-1}^{1} (e^{jωy}/√(1 - y²)) dy

so that y has the arcsine density fy(y) = 1/(π√(1 - y²)) for |y| < 1. ◀
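Example 5-35 implies the arcsine law: the sine of a uniform angle has density 1/(π√(1 - y²)) on (-1, 1). A simulation sketch (sample count and seed are arbitrary; the probe interval (-0.5, 0.5) is an illustrative choice):

```python
import math
import random

# y = sin x with x uniform in (-pi/2, pi/2) follows the arcsine density
# f_y(y) = 1/(pi*sqrt(1 - y^2)), |y| < 1.
rng = random.Random(5)
samples = [math.sin(rng.uniform(-math.pi / 2, math.pi / 2)) for _ in range(200000)]

# Compare the empirical P{-0.5 < y < 0.5} with the integral of the
# arcsine density: (arcsin(0.5) - arcsin(-0.5))/pi = 1/3.
frac = sum(-0.5 < y < 0.5 for y in samples) / len(samples)
exact = (math.asin(0.5) - math.asin(-0.5)) / math.pi   # = 1/3
```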
PROBLEMS
5-1 The random variable x is N(5, 2) and y = 2x + 4. Find η_y, σ_y, and fy(y).
5-2 Find Fy(y) and fy(y) if y = -4x + 3 and fx(x) = 2e^{-2x}U(x).
5-3 If the random variable x is N(0, c²) and g(x) is the function in Fig. 5-4, find and sketch the distribution and the density of the random variable y = g(x).
5-4 The random variable x is uniform in the interval (-2c, 2c). Find and sketch fy(y) and Fy(y) if y = g(x) and g(x) is the function in Fig. 5-3.
5-5 The random variable x is N(0, b²) and g(x) is the function in Fig. 5-5. Find and sketch fy(y) and Fy(y).
5-6 The random variable x is uniform in the interval (0, 1). Find the density of the random variable y = -ln x.
5-7 We place at random 200 points in the interval (0, 100). The distance from 0 to the first random point is a random variable z. Find Fz(z) (a) exactly and (b) using the Poisson approximation.
5-8 If y = √x, and x is an exponential random variable, show that y represents a Rayleigh random variable.
5-9 Express the density fy(y) of the random variable y = g(x) in terms of fx(x) if (a) g(x) = |x|; (b) g(x) = e^{-x}U(x).
5-10 Find Fy(y) and fy(y) if Fx(x) = (1 - e^{-2x})U(x) and (a) y = (x - 1)U(x - 1); (b) y = x².
5-11 Show that, if the random variable x has a Cauchy density with α = 1 and y = arctan x, then y is uniform in the interval (-π/2, π/2).
5-12 The random variable x is uniform in the interval (-2π, 2π). Find fy(y) if (a) y = x³, (b) y = x⁴, and (c) y = 2 sin(3x + 40°).
5-13 The random variable x is uniform in the interval (-1, 1). Find g(x) such that if y = g(x) then fy(y) = 2e^{-2y}U(y).
5-14 Given that random variable x is of continuous type, we form the random variable y = g(x). (a) Find fy(y) if g(x) = 2Fx(x) + 4. (b) Find g(x) such that y is uniform in the interval (8, 10).
5-15 A fair coin is tossed 10 times and x equals the number of heads. (a) Find Fx(x). (b) Find Fy(y) if y = (x - 3)².
5-16 If x represents a beta random variable with parameters α and β, show that 1 - x also represents a beta random variable with parameters β and α.
5-17 Let x represent a chi-square random variable with n degrees of freedom. Then y = √x is known as the chi-distribution with n degrees of freedom. Determine the p.d.f. of y.
5-18 Let x ~ U(0, 1). Show that y = -2 log x is χ²(2).
5-19 If x is an exponential random variable with parameter λ, show that y = x^{1/β} has a Weibull distribution.
5-20 If t is a random variable of continuous type and y = a sin ωt, show that ...
5-22 (a) Show that if y = ax + b, then σ_y = |a|σ_x. (b) Find η_y and σ_y if y = (x - η_x)/σ_x.
5-23 Show that if x has a Rayleigh density with parameter α and y = b + cx², then σ_y² = 4c²α⁴.
5-24 If x is N(0, 4) and y = 3x², find η_y, σ_y, and fy(y).
5-25 Let x represent a binomial random variable with parameters n and p. Show that (a) E{x} = np; (b) E{x(x - 1)} = n(n - 1)p²; (c) E{x(x - 1)(x - 2)} = n(n - 1)(n - 2)p³; (d) Compute E{x²} and E{x³}.
5-26 For a Poisson random variable x with parameter λ, show that (a) P{0 < x < 2λ} > (λ - 1)/λ; (b) E{x(x - 1)} = λ², E{x(x - 1)(x - 2)} = λ³.
5-27 Show that if U = [A1, ..., An] is a partition of S, then
E{x} = E{x | A1}P(A1) + ··· + E{x | An}P(An).
5-28 Show that if x ≥ 0 and E{x} = η, then P{x ≥ √η} ≤ √η.
5-29 Using (5-86), find E{x³} if η_x = 10 and σ_x = 2.
5-30 If x is uniform in the interval (10, 12) and y = x³, (a) find fy(y); (b) find E{y}: (i) exactly, (ii) using (5-86).
5-31 The random variable x is N(100, 9). Find approximately the mean of the random variable y = 1/x using (5-86).
5-32 (a) Show that if m is the median of x, then ...
5-41 Let x be a negative binomial random variable with parameters r and p. Show that as p → 1 and r → ∞ such that r(1 - p) → λ, a constant, then

P{x = n + r} → e^{-λ} λ^n/n!   n = 0, 1, 2, ...

5-42 Show that if E{x} = η, then ...
5-43 Show that if Φx(ω1) = 1 for some ω1 ≠ 0, then the random variable x is of lattice type taking the values x_n = 2πn/ω1.
Hint:
5-44 The random variable x has zero mean, central moments μ_n, and cumulants λ_n. Show that λ_3 = μ_3, λ_4 = μ_4 - 3μ_2²; if y is N(0, σ²) with σ = σ_x, then E{x⁴} = E{y⁴} + λ_4.
5-45 The random variable x takes the values 0, 1, ... with P{x = k} = p_k. Show that if y = (x - 1)U(x - 1) then

Γy(z) = p_0 + z^{-1}[Γx(z) - p_0]
η_y = η_x - 1 + p_0   E{y²} = E{x²} - 2η_x + 1 - p_0

5-46 Show that, if Φ(ω) = E{e^{jωx}}, then for any a_i,

Σ_{i=1}^{n} Σ_{j=1}^{n} Φ(ω_i - ω_j) a_i a_j* ≥ 0
Hint:
5-47 We are given an even convex function g(x) and a random variable x whose density f(x) is symmetrical as in Fig. P5-47 with a single maximum at x = η. Show that the mean E{g(x - a)} of the random variable g(x - a) is minimum if a = η.

FIGURE P5-47
5-48 The random variable x is N(0; σ²). (a) Using characteristic functions, show that if g(x) is a function such that g(x) e^{-x²/2σ²} → 0 as |x| → ∞, then (Price's theorem) ...