N.G. SHEPHARD
A unified framework is established for the study of the computation of the distribution function from the characteristic function. A new approach to the proof of Gurland's and Gil-Pelaez's univariate inversion theorems is suggested. A multivariate inversion theorem is then derived using this technique.
1. INTRODUCTION
It is often easier to manipulate characteristic functions than distribution functions. If the characteristic function is known, then we can compute the distribution function by using an inversion theorem. This paper reviews the theoretical basis of inverting characteristic functions, presenting the work within a unified framework based on the well-known results of Fourier analysis.

Inverting the characteristic function to find the distribution function has a long history (cf. Lukacs [16, chapter 2]). Lévy's [15] result is the most famous of these theorems, although in this context its practical use is limited to some special cases unless the random variable of interest is always strictly positive (see Bohman [2] and Knott [14]). Gurland's [10] paper gave a more useful inversion theorem, but it is the paper of Gil-Pelaez [9] which has provided the basis of most of the distributional work completed in this field (cf. Davies [4,5] and Imhof [12]). Gurland's and Gil-Pelaez's results are almost identical. Gurland's is based on the principal value of a Lebesgue integral, while Gil-Pelaez removes the need for principal values by using a Riemann integral. The univariate inversion has been used extensively in econometrics; a short review is given in Phillips [17].
Recently Shively [23] has generalized Gil-Pelaez's work on Riemann integrals to provide a bivariate inversion theorem, while Shively [24] used this expression to tabulate critical values of a statistic proposed by Watson and Engle [25] for testing the stability of the parameters in a regression model.
Only Gurland [10] has attempted to provide a multivariate inversion theorem. Our results are slightly different from those obtained by Gurland. In this paper we develop a framework for the analysis of univariate inversion theorems which offers an easy multivariate generalization. The result, which is given in Theorem 5, is an expression that involves terms that are straightforward to compute.

(Footnote: I thank Dr. M. Knott and Dr. R.W. Farebrother. The comments of Professor P.C.B. Phillips and the referees were also of considerable assistance.)
2. THE UNIVARIATE INVERSION THEOREM
Bohman [1] studied inversion theorems using the results of Fourier analysis. His work, which relies on the properties of convolutions, is in keeping with the discussion of characteristic functions given by Feller [8, chapter XV]. Although the subject matter of this paper is rather different from that considered by Bohman, our general approach will be consistent with his.
To establish our notation we introduce some definitions. Let F denote the distribution function of interest. Suppose its corresponding density, f, is integrable in the Lebesgue sense (written f ∈ L; see, for example, Rudin [19, chapter 1] for an introduction to Lebesgue integrals) and that its characteristic function is defined as

\varphi(t) = \int_{-\infty}^{\infty} e^{itx} f(x)\,dx.

We suppose that φ is known and we wish to compute F directly from it. The basic result we will use to perform this calculation is the Fourier inversion theorem.
THEOREM 1 (Fourier inversion theorem). Suppose g, φ ∈ L, and

\varphi(t) = \int_{-\infty}^{\infty} e^{itx} g(x)\,dx,   (1)

then

g(x) = \frac{1}{2\pi} \int_{-\infty}^{\infty} e^{-itx} \varphi(t)\,dt,   (2)

everywhere.
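As a concrete illustration of (2), the inversion integral can be evaluated numerically. The sketch below is mine, not from the paper; the function name, the truncation point T, and the step size are illustrative choices. It recovers the standard normal density from its characteristic function φ(t) = e^{-t²/2}, using the symmetry φ(-t) = conj(φ(t)) to fold the integral onto (0, ∞):

```python
import numpy as np

def density_from_cf(x, cf, T=10.0, h=0.001):
    # g(x) = (1/(2*pi)) * integral over R of exp(-i t x) cf(t) dt.
    # Folding t and -t together leaves twice the real part on (0, inf),
    # so g(x) = (1/pi) * integral_0^T Re{exp(-i t x) cf(t)} dt (midpoint rule).
    t = np.arange(h / 2, T, h)
    vals = np.real(np.exp(-1j * t * x) * cf(t))
    return vals.sum() * h / np.pi

cf_normal = lambda t: np.exp(-t**2 / 2)      # N(0,1) characteristic function
approx = density_from_cf(1.0, cf_normal)
exact = np.exp(-0.5) / np.sqrt(2 * np.pi)    # N(0,1) density at x = 1
```

Truncating at T = 10 is harmless here because e^{-t²/2} is negligible beyond that point; a characteristic function with heavier tails would need a larger T.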
COROLLARY 1. If f, φ ∈ L, then

f(x) = \frac{1}{2\pi} \int_{-\infty}^{\infty} e^{-itx} \varphi(t)\,dt.
Equally, following, for example, Feller, we can convolve F with the uniform distribution on [-h, h] and then use Corollary 1 to produce Lévy's important theorem.
THEOREM 3. If f, φ ∈ L, then

F(x) = \frac{1}{2} - \frac{1}{2\pi} \int_{0}^{\infty} \Delta\left[\frac{\varphi(t)e^{-itx}}{it}\right] dt,   (3)

where \Delta\eta(t) = \eta(t) + \eta(-t).

Proof. Given in the appendix.
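For a single variable, Δ[φ(t)e^{-itx}/(it)] = 2 Im{φ(t)e^{-itx}}/t, so the theorem reduces to the familiar Gil-Pelaez formula F(x) = 1/2 - (1/π)∫_0^∞ Im{φ(t)e^{-itx}}/t dt. A minimal numerical sketch (mine, not from the paper; the function name and the truncation point T are illustrative assumptions), checked against the standard normal:

```python
import numpy as np
from math import erf, sqrt

def gil_pelaez_cdf(x, cf, T=10.0, h=0.001):
    # F(x) = 1/2 - (1/pi) * integral_0^inf Im{cf(t) exp(-i t x)} / t dt,
    # evaluated with a midpoint rule on (0, T]; the integrand is finite
    # at t = 0 (it tends to -x there).
    t = np.arange(h / 2, T, h)
    integrand = np.imag(cf(t) * np.exp(-1j * t * x)) / t
    return 0.5 - integrand.sum() * h / np.pi

cf_normal = lambda t: np.exp(-t**2 / 2)
approx = gil_pelaez_cdf(0.7, cf_normal)
exact = 0.5 * (1 + erf(0.7 / sqrt(2)))       # Phi(0.7)
```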
THEOREM 4. If f, φ ∈ L and the mean of F exists, then

F(x) = \frac{1}{2} - \frac{1}{2\pi} \lim_{n\to\infty} \int_{0}^{n} \left(1 - \frac{t}{n}\right) \Delta\left[\frac{\varphi(t)e^{-itx}}{it}\right] dt.   (4)

3. THE MULTIVARIATE INVERSION THEOREM

The multivariate characteristic function is defined as

\varphi(t) = \int_{-\infty}^{\infty} \cdots \int_{-\infty}^{\infty} e^{it'x} f(x)\,dx,   (5)
where x = (x_1, \ldots, x_p)' and t = (t_1, \ldots, t_p)', where p is some positive integer. It is well known that the Fourier inversion theorem and convolution theory go through to the multivariate case, so allowing us trivial proofs of the following well-known corollaries.
COROLLARY 2. If f, φ ∈ L^p, then

f(x) = \frac{1}{(2\pi)^p} \int_{-\infty}^{\infty} \cdots \int_{-\infty}^{\infty} e^{-it'x} \varphi(t)\,dt.   (6)

Equally, convolving F with the uniform distribution on [-h, h]^p gives a multivariate analogue of Corollary 1:

\frac{\Pr(R)}{(2h)^p} = \frac{1}{(2\pi)^p} \int_{-\infty}^{\infty} \cdots \int_{-\infty}^{\infty} \prod_{j=1}^{p} \frac{\sin h t_j}{h t_j}\, e^{-it'x} \varphi(t)\,dt,   (7)

where R denotes the rectangle [x_1 - h, x_1 + h] \times \cdots \times [x_p - h, x_p + h].
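For p = 1, this says that the uniform-smoothed probability Pr(X ∈ [x - h, x + h])/(2h) is recovered by weighting the characteristic function with sin(ht)/(ht). A small numerical check (mine, not from the paper; names, truncation, and step are illustrative) for the standard normal:

```python
import numpy as np
from math import erf, sqrt

def smoothed_prob(x, half_width, cf, T=10.0, step=0.001):
    # p = 1 case: Pr(X in [x-h, x+h]) / (2h)
    #   = (1/(2*pi)) * integral over R of [sin(ht)/(ht)] exp(-i t x) cf(t) dt.
    # Folding onto (0, T] leaves twice the real part (midpoint rule).
    h = half_width
    t = np.arange(step / 2, T, step)
    # np.sinc(z) = sin(pi z)/(pi z), so np.sinc(h*t/pi) = sin(h*t)/(h*t)
    vals = np.real(np.sinc(h * t / np.pi) * np.exp(-1j * t * x) * cf(t))
    return vals.sum() * step / np.pi

Phi = lambda z: 0.5 * (1 + erf(z / sqrt(2)))
cf_normal = lambda t: np.exp(-t**2 / 2)
approx = smoothed_prob(0.2, 1.0, cf_normal)
exact = (Phi(1.2) - Phi(-0.8)) / 2.0         # Pr([x-h, x+h]) / (2h), h = 1
```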
THEOREM 5. If f, φ ∈ L^p, then under the assumption of the existence of a mean, the following equality holds:

\frac{(-2)^p}{(2\pi)^p} \int_{0}^{\infty} \cdots \int_{0}^{\infty} \Delta_{t_1}\Delta_{t_2}\cdots\Delta_{t_p}\left[\frac{\varphi(t)e^{-ix't}}{it_1 it_2 \cdots it_p}\right] dt = \bar{u}(x),   (8)

where \bar{u}(x) = 2^p F(x_1,\ldots,x_p) + \cdots + (-1)^p is the alternating sum over the marginal distribution functions displayed in the examples of Section 4, and

\Delta_{t_1}\Delta_{t_2}\cdots\Delta_{t_p}\left[\frac{\varphi(t)e^{-ix't}}{it_1 it_2 \cdots it_p}\right]
= \begin{cases}
2i^{p-1}\,\Delta_{t_2}\cdots\Delta_{t_p}\left[\dfrac{\operatorname{Im}\{\varphi(t)e^{-ix't}\}}{t_1 t_2 \cdots t_p}\right], & \text{if } p \text{ is odd,} \\[1ex]
2i^{p}\,\Delta_{t_2}\cdots\Delta_{t_p}\left[\dfrac{\operatorname{Re}\{\varphi(t)e^{-ix't}\}}{t_1 t_2 \cdots t_p}\right], & \text{if } p \text{ is even.}
\end{cases}   (9)
A similar result holds under weaker conditions.

THEOREM 6. If φ ∈ L^p and the mean exists, then

\frac{(-2)^p}{(2\pi)^p} \lim_{n\to\infty} \int_{0}^{n} \cdots \int_{0}^{n} \prod_{j=1}^{p}\left(1 - \frac{t_j}{n}\right) \Delta_{t_1}\Delta_{t_2}\cdots\Delta_{t_p}\left[\frac{\varphi(t)e^{-ix't}}{it_1 it_2 \cdots it_p}\right] dt = \bar{u}(x),   (10)

where \bar{u}(x) = 2^p F(x_1,\ldots,x_p) + \cdots + (-1)^p.
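The alternating sum ū(x) = 2^p F(x_1, …, x_p) + ⋯ + (-1)^p runs over all subsets of the coordinates, the k-dimensional marginals entering with weight (-1)^{p-k} 2^k. A quick sanity check (my own sketch, not from the paper): for independent components the joint distribution function factorizes, and the alternating sum collapses to the product ∏_j (2F_j(x_j) - 1):

```python
from itertools import combinations

def u_bar_independent(marginals):
    # u-bar(x) = sum over subsets S of {1,...,p} of (-1)^(p-|S|) 2^|S| F(x_S),
    # with F(empty set) = 1.  Under independence F(x_S) is the product of
    # the marginal values F_j(x_j) for j in S.
    p = len(marginals)
    total = 0.0
    for k in range(p + 1):
        for S in combinations(range(p), k):
            prod = 1.0
            for j in S:
                prod *= marginals[j]
            total += (-1) ** (p - k) * 2**k * prod
    return total

def collapsed(marginals):
    # under independence the alternating sum factorizes term by term
    out = 1.0
    for F in marginals:
        out *= 2 * F - 1
    return out
```

For p = 2 this reproduces the form 4F(x_1,x_2) - 2[F(x_1) + F(x_2)] + 1 that appears in the examples of Section 4.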
4. EXAMPLES
Theorem 3 and the following examples of Theorem 6 may be used to successively generate the distribution functions F(x_1), F(x_1,x_2), F(x_1,x_2,x_3), and so forth.
p = 2:

\frac{2^2}{(2\pi)^2} \int_{0}^{\infty}\int_{0}^{\infty} \Delta_{t_1}\Delta_{t_2}\left[\frac{\varphi(t)e^{-ix't}}{it_1 it_2}\right] dt_1\,dt_2
= -\frac{2^3}{(2\pi)^2} \int_{0}^{\infty}\int_{0}^{\infty} \Delta_{t_2}\left[\frac{\operatorname{Re}\{\varphi(t)e^{-ix't}\}}{t_1 t_2}\right] dt_1\,dt_2
= 4F(x_1,x_2) - 2[F(x_1) + F(x_2)] + 1.   (11)

p = 3:

-\frac{2^3}{(2\pi)^3} \int_{0}^{\infty}\int_{0}^{\infty}\int_{0}^{\infty} \Delta_{t_1}\Delta_{t_2}\Delta_{t_3}\left[\frac{\varphi(t)e^{-ix't}}{it_1 it_2 it_3}\right] dt_1\,dt_2\,dt_3
= \frac{2^4}{(2\pi)^3} \int_{0}^{\infty}\int_{0}^{\infty}\int_{0}^{\infty} \Delta_{t_2}\Delta_{t_3}\left[\frac{\operatorname{Im}\{\varphi(t)e^{-ix't}\}}{t_1 t_2 t_3}\right] dt_1\,dt_2\,dt_3
= 8F(x_1,x_2,x_3) - 4[F(x_1,x_2) + F(x_1,x_3) + F(x_2,x_3)] + 2[F(x_1) + F(x_2) + F(x_3)] - 1.   (12)

p = 4:

\frac{2^4}{(2\pi)^4} \int_{0}^{\infty}\cdots\int_{0}^{\infty} \Delta_{t_1}\Delta_{t_2}\Delta_{t_3}\Delta_{t_4}\left[\frac{\varphi(t)e^{-ix't}}{it_1 it_2 it_3 it_4}\right] dt_1\,dt_2\,dt_3\,dt_4
= \frac{2^5}{(2\pi)^4} \int_{0}^{\infty}\cdots\int_{0}^{\infty} \Delta_{t_2}\Delta_{t_3}\Delta_{t_4}\left[\frac{\operatorname{Re}\{\varphi(t)e^{-ix't}\}}{t_1 t_2 t_3 t_4}\right] dt_1\,dt_2\,dt_3\,dt_4
= 16F(x_1,x_2,x_3,x_4) - 8[F(x_1,x_2,x_3) + F(x_1,x_2,x_4) + F(x_1,x_3,x_4) + F(x_2,x_3,x_4)]
+ 4[F(x_1,x_2) + F(x_1,x_3) + F(x_1,x_4) + F(x_2,x_3) + F(x_2,x_4) + F(x_3,x_4)]
- 2[F(x_1) + F(x_2) + F(x_3) + F(x_4)] + 1.   (13)
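Equation (11) can be verified numerically. For independent standard normals, φ(t) = exp(-(t_1² + t_2²)/2) is even in each t_j, and the Δ_{t_2} integrand reduces to -2φ(t) sin(x_1 t_1) sin(x_2 t_2)/(t_1 t_2), so the double integral separates. The sketch below is mine, not from the paper; names, truncation, and step size are illustrative:

```python
import numpy as np
from math import erf, sqrt, pi

def eq11_lhs_indep_normal(x1, x2, T=8.0, h=0.01):
    # -(2^3/(2 pi)^2) * double integral of Delta_{t2}[Re{phi e^{-ix't}}/(t1 t2)]
    # for phi(t) = exp(-(t1^2 + t2^2)/2) equals
    # (4/pi^2) * int_0^inf e^{-t1^2/2} sin(x1 t1)/t1 dt1
    #          * int_0^inf e^{-t2^2/2} sin(x2 t2)/t2 dt2   (midpoint rule).
    t = np.arange(h / 2, T, h)
    g1 = np.exp(-t**2 / 2) * np.sin(x1 * t) / t
    g2 = np.exp(-t**2 / 2) * np.sin(x2 * t) / t
    return 4.0 / pi**2 * (g1.sum() * h) * (g2.sum() * h)

Phi = lambda z: 0.5 * (1 + erf(z / sqrt(2)))
x1, x2 = 0.5, -0.3
lhs = eq11_lhs_indep_normal(x1, x2)
rhs = 4 * Phi(x1) * Phi(x2) - 2 * (Phi(x1) + Phi(x2)) + 1  # right side of (11)
```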
1. Bohman, H. Approximate Fourier analysis of distribution functions. Arkiv för Matematik 4 (1961): 99-157.
2. Bohman, H. A method to calculate the distribution function when the characteristic function is known. Nordisk Tidskr. Informationsbehandling (BIT) 10 (1970): 237-242.
3. Cramér, H. Mathematical Methods of Statistics. Princeton: Princeton University Press, 1946.
4. Davies, R.B. Numerical inversion of a characteristic function. Biometrika 60 (1973): 415-417.
5. Davies, R.B. AS 155: The distribution of a linear combination of χ² random variables. Applied Statistics 29 (1980): 323-333.
6. Farebrother, R.W. Testing linear restrictions with unequal variances, a problem. Econometric
Theory 4 (1988): 349.
7. Farebrother, R.W. Testing linear restrictions with unequal variances, a solution. Econometric Theory 5 (1989): 324-326.
8. Feller, W. Introduction to Probability Theory and Its Applications, Volume Two. 2nd ed.
New York: Wiley, 1971.
9. Gil-Pelaez, J. Note on the inversion theorem. Biometrika 37 (1951): 481-482.
10. Gurland, J. Inversion formulae for the distribution of ratios. Annals of Mathematical Statistics 19 (1948): 228-237.
11. Hewitt, E. & K.R. Stromberg. Real and Abstract Analysis. New York: Springer-Verlag, 1965.
12. Imhof, J.P. Computing the distribution of quadratic forms in normal variables. Biometrika
48 (1961): 419-426.
13. Kendall, M.G., A. Stuart & J.K. Ord. Advanced Theory of Statistics, Vol. 1. 5th ed.
London: Griffin, 1987.
14. Knott, M. The distribution of the Cramér-von Mises statistic for small sample sizes. Journal of the Royal Statistical Society, Series B 36 (1974): 430-438.
15. Lévy, P. Calcul des probabilités. Paris: Gauthier-Villars, 1925.
16. Lukacs, E. Characteristic Functions. London: Griffin, 1970.
17. Phillips, P.C.B. Exact small sample theory in the simultaneous equations model. In Handbook of Econometrics, Volume I, Z. Griliches & M.D. Intriligator (eds.), Amsterdam: North-Holland Publishing Company, 1983.
18. Phillips, P.C.B. The distribution of matrix quotients. Journal of Multivariate Analysis 16
(1985): 157-161.
19. Rudin, W. Real and Complex Analysis. New York: McGraw-Hill, 1970.
20. Shephard, N.G. Numerical integration rules for multivariate inversions. Journal of Statistical
Computation and Simulation 39 (1991): 37-46.
21. Shephard, N.G. Evaluating the distribution function of the maximum likelihood estimator of a first order moving average process and a local level model. Working paper, London
School of Economics, 1990.
22. Shephard, N.G. Tabulation of Farebrother's test for linear restrictions in linear regression
models under heteroscedasticity. Working paper, London School of Economics, 1990.
23. Shively, T.S. Numerical inversion of a bivariate characteristic function. Working paper,
University of Texas at Austin, 1988.
24. Shively, T.S. An analysis of tests for regression coefficient stability. Journal of Econometrics
39 (1988): 367-386.
25. Watson, M.W. & R.F. Engle. Testing for regression coefficient stability with a stationary
AR(1) alternative. Review of Economics and Statistics 67 (1985): 341-346.
APPENDIX
Proof of Theorem 3. The function g(y) = sign(y), y ∈ [-h, h], has the transform

\varphi_h(t) = \int_{-h}^{h} e^{ity} \operatorname{sign}(y)\,dy = \frac{2(\cos ht - 1)}{it}.   (A.1)
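(A.1) is easy to confirm by direct quadrature. The quick check below is my own, with an illustrative midpoint rule; it compares the numerical transform of sign(y) on [-h, h] with the closed form 2(cos ht - 1)/(it):

```python
import numpy as np

def sign_transform_numeric(t, h, m=200000):
    # midpoint evaluation of int_{-h}^{h} exp(i t y) sign(y) dy;
    # grid midpoints never land on the discontinuity at y = 0
    dy = 2 * h / m
    y = -h + dy * (np.arange(m) + 0.5)
    return (np.exp(1j * t * y) * np.sign(y)).sum() * dy

t, h = 1.3, 2.0
approx = sign_transform_numeric(t, h)
exact = 2 * (np.cos(h * t) - 1) / (1j * t)   # right side of (A.1)
```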
The convolution, written U_h(x), of g(y) with the continuous density f(x) is 2F(x) - F(x + h) - F(x - h), which, although bounded, is not integrable as h → ∞. The convolution has the transform 2φ(t)(cos ht - 1)/(it), which is integrable as φ ∈ L. Hence, for fixed h we can use the inversion theorem to give the equality
\frac{1}{2\pi} \int_{-\infty}^{\infty} \frac{2(\cos ht - 1)}{it}\, \varphi(t)e^{-ixt}\,dt = \frac{2}{2\pi} \int_{0}^{\infty} (\cos ht - 1)\, \Delta\left[\frac{\varphi(t)e^{-ixt}}{it}\right] dt = U_h(x).   (A.2)
As φ ∈ L, the integral of this function will exist. As a result, in the limit as h → ∞ the left-hand side of (A.2) can be reduced, using the Riemann-Lebesgue theorem (cf. Feller [8, p. 513]), to

-\frac{2}{2\pi} \int_{0}^{\infty} \Delta\left[\frac{\varphi(t)e^{-itx}}{it}\right] dt = 2F(x) - 1,

which rearranges to give Theorem 3.
Proof of Theorem 4. Let k_n denote the density

k_n(x) = \frac{n}{2\pi}\left(\frac{\sin(nx/2)}{nx/2}\right)^2,   (A.3)

which satisfies

\int_{-\infty}^{\infty} k_n(x)\,dx = 1,   (A.4)

and has the characteristic function

\varphi_n(t) = \left(1 - \frac{|t|}{n}\right) I(|t| < n).   (A.5)
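The pair (A.3)-(A.5) is the Fejér kernel and its triangular characteristic function; both are easy to check numerically. The sketch below is mine, not from the paper; the truncation X and step are illustrative, and since the k_n tail only decays like x^{-2}, X must be taken large:

```python
import numpy as np

def fejer_density(x, n):
    # k_n(x) = (n/(2*pi)) * (sin(n x / 2) / (n x / 2))^2; np.sinc(z) is
    # sin(pi z)/(pi z), so sin(nx/2)/(nx/2) = np.sinc(n x / (2 pi)).
    return n / (2 * np.pi) * np.sinc(n * x / (2 * np.pi)) ** 2

def fejer_cf(t, n, X=2000.0, h=0.002):
    # characteristic function by direct quadrature; k_n is even, so the
    # integral over R of k_n(x) e^{itx} dx = 2 * int_0^X k_n(x) cos(t x) dx
    x = np.arange(h / 2, X, h)
    return 2 * (fejer_density(x, n) * np.cos(t * x)).sum() * h

approx = fejer_cf(1.0, 4)   # (A.5) predicts 1 - |t|/n = 1 - 1/4
```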
Convolving U_h with k_n multiplies its transform by φ_n(t), so

\frac{1}{2\pi} \int_{-\infty}^{\infty} \left(1 - \frac{|t|}{n}\right) I(|t| < n)\, \frac{2(\cos ht - 1)}{it}\, \varphi(t)e^{-ixt}\,dt
= \frac{2}{2\pi} \int_{0}^{n} \left(1 - \frac{t}{n}\right)(\cos ht - 1)\, \Delta\left[\frac{\varphi(t)e^{-ixt}}{it}\right] dt = U_h * k_n(x).   (A.6)
When h → ∞ the left-hand side of (A.6) can be manipulated using the Riemann-Lebesgue theorem (cf. Feller [8, p. 513]) because, for fixed n,

\left(1 - \frac{|t|}{n}\right) I(|t| < n)   (A.7)

is bounded with compact support. In the limit the left-hand side of (A.6) becomes

-\frac{2}{2\pi} \int_{0}^{n} \left(1 - \frac{t}{n}\right) \Delta\left[\frac{\varphi(t)e^{-ixt}}{it}\right] dt.   (A.8)

Remember it is also true that, as h → ∞,

U_h(x) \to \bar{u}(x) = 2F(x) - 1   (A.9)

boundedly, so that

U_h * k_n(x) \to \bar{u} * k_n(x).   (A.10)

Thus,

-\frac{2}{2\pi} \int_{0}^{n} \left(1 - \frac{t}{n}\right) \Delta\left[\frac{\varphi(t)e^{-ixt}}{it}\right] dt = \bar{u} * k_n(x).   (A.11)

Finally, since \bar{u} * k_n(x) \to \bar{u}(x) as n → ∞ at continuity points of F,

-\frac{2}{2\pi} \lim_{n\to\infty} \int_{0}^{n} \left(1 - \frac{t}{n}\right) \Delta\left[\frac{\varphi(t)e^{-itx}}{it}\right] dt = 2F(x) - 1,

which is Theorem 4.
Proof of Theorem 5. The function

g(y_1,\ldots,y_p) = \prod_{j=1}^{p} \operatorname{sign}(y_j), \qquad y_j \in [-h, h],   (A.12)

has the transform

\varphi_h(t) = \prod_{j=1}^{p} \frac{2(\cos t_j h - 1)}{it_j}.   (A.13)

Its convolution with f is

U_h(x) = \int_{-h}^{h} \cdots \int_{-h}^{h} f(x_1 - y_1, \ldots, x_p - y_p)\, g(y)\,dy   (A.14)

= \sum a_1 a_2 \cdots a_p\, V_{ha_1, ha_2, \ldots, ha_p},   (A.15)

where the summation is taken over all the values (a_j = ±1), with

V_{h_1, \ldots, h_p} = \sum (-1)^{b_1 + \cdots + b_p} F(x_1 - h_1 b_1, \ldots, x_p - h_p b_p),   (A.16)

where the summation is taken over the binary numbers (b_j = 0, 1) and F denotes a generic distribution function. By the (multivariate) Fourier inversion theorem,
\frac{1}{(2\pi)^p} \int_{-\infty}^{\infty} \cdots \int_{-\infty}^{\infty} \varphi(t)e^{-it'x} \prod_{j=1}^{p} \frac{2(\cos t_j h - 1)}{it_j}\,dt
= \frac{2^p}{(2\pi)^p} \int_{0}^{\infty} \cdots \int_{0}^{\infty} \Delta_{t_1}\Delta_{t_2}\cdots\Delta_{t_p}\left[\frac{\varphi(t)e^{-ix't}}{it_1 it_2 \cdots it_p}\right] \prod_{j=1}^{p} (\cos t_j h - 1)\,dt = U_h(x).   (A.17)

Allowing h → ∞ and applying the Riemann-Lebesgue theorem to each factor (cos t_j h - 1) gives

\frac{(-2)^p}{(2\pi)^p} \int_{0}^{\infty} \cdots \int_{0}^{\infty} \Delta_{t_1}\Delta_{t_2}\cdots\Delta_{t_p}\left[\frac{\varphi(t)e^{-ix't}}{it_1 it_2 \cdots it_p}\right] dt = \bar{u}(x).   (A.18)
LEMMA. If ū is bounded and continuous at x, then ū * k_n(x) → ū(x) as n → ∞.

Proof.

\bar{u} * k_n(x) - \bar{u}(x) = \int_{-\infty}^{\infty} \left(\bar{u}(x - t) - \bar{u}(x)\right) k_n(t)\,dt.   (A.19)

Write β(t) = ū(x - t) - ū(x). Splitting the integral at |t| = ε, continuity bounds the contribution from |t| < ε by some small ε_1, while the bound k_n(t) ≤ 2/(πnt²) controls the tails, so

\left|\int_{-\infty}^{\infty} k_n(t)\beta(t)\,dt\right| \le \varepsilon_1 + 2 \sup_t |\beta(t)| \int_{\varepsilon}^{\infty} \frac{2}{\pi n t^2}\,dt = \varepsilon_1 + \frac{4 \sup_t |\beta(t)|}{\pi n \varepsilon},   (A.20)

which, for fixed ε, can be made arbitrarily small by letting n → ∞.