
Computers ind. Engng Vol. 22, No. 3, pp. 313-321, 1992          0360-8352/92 $5.00 + 0.00
Printed in Great Britain. All rights reserved          Copyright © 1992 Pergamon Press Ltd
SOME ADVANCEMENTS IN MONTE CARLO
INTEGRATION METHODS WITH APPLICATIONS TO
PROXIMITY FUZE DETECTION PROBABILITIES

DARRELL G. LINTON¹ and MARCUS J. BENDICKSON²

¹Computer Engineering Department, Engineering Bldg, Rm 207, University of Central Florida, Orlando,
FL 32816, U.S.A. and ²Dynetics, Inc., P.O. Drawer B, Huntsville, AL 35814-5050, U.S.A.

(Received for publication 22 October 1991)

Abstract—A simulation model is developed for estimating any quantity defined as a multiple integral with
constant, variable or infinite limits of integration. The model evaluates multiple integrals by sampling
uniformly over the multidimensional volume defined by the original region of integration, and employing
the sample variance (associated with Monte Carlo methods) to obtain a probabilistic representation for
the error bound. Uniform sampling over any region of integration is accomplished by determining the
appropriate conditional probability density functions and integrating—an approach which is not shown
in the simulation literature. The calculation of detection probabilities for a proximity fuze is used to
illustrate the results (and to show how such problems arise), and comparisons with alternative solution
procedures (e.g. Gaussian quadrature) are discussed.

1. INTRODUCTION

A proximity fuze is a device which detects objects in space that pass through some encounter
volume. In designing an area defense system one approach is to equip each defensive missile
(interceptor) with homing terminal guidance and a fuze subsystem. When the guidance mechanism
determines that the interceptor is sufficiently close to the target, the fuze is activated. If the
encounter volume associated with the fuze detects an object, the interceptor warhead is
automatically detonated.
For example, consider a proximity fuze with detection radius r that is activated from time t = 0
to time t = h (Fig. 1).
As indicated in Fig. 1, a cylindrical volume is generated by the proximity fuze on board the
interceptor. Thus, given the density of target objects, it is of interest to determine the probability
that the fuze will detect a target (object).
In particular, suppose the spatial position of an object possesses a trivariate normal
distribution with probability density function (pdf) f(x, y, z) and assume that the volume of
space which the fuze encounters is V. Then the probability P that the fuze will detect the
object is:

P = \iiint_V f(x, y, z) \, dx \, dy \, dz.

If, for example, V is a cylinder with radius r and length h as above, then

P = \int_{x=0}^{h} \int_{y=-r}^{+r} \int_{z=-(r^2-y^2)^{1/2}}^{+(r^2-y^2)^{1/2}} f(x, y, z) \, dz \, dy \, dx.    (1)

When the trivariate normal pdf has a non-zero mean vector or the covariance matrix is
not diagonal, P from (1) does not yield a closed-form analytical solution. Consequently, the
analyst must use some numerical evaluation procedure which can be implemented on a high-speed
computer. The technique described in this report is used in conjunction with what is commonly
called sampling or Monte Carlo methods [3, 4]. After some preliminaries, a modification of the
above example will be used to illustrate the method. A list of frequently used symbols is given in
Appendix A.
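For small N, equation (1) can in principle be evaluated by repeated one-dimensional quadrature. The sketch below is a point of reference only (Python with SciPy, which is not used in the original study; the mean vector and covariance matrix are assumed illustrative values); the sampling approach developed in the following sections is aimed at the higher-dimensional cases where such nested quadrature becomes impractical.

```python
import numpy as np
from scipy import integrate, stats

# Assumed illustrative parameters: cylinder length h and radius r, and a trivariate
# normal object-position pdf with non-zero mean and non-diagonal covariance.
h, r = 1.0, 1.0
mean = np.array([0.5, 0.2, -0.1])
cov = np.array([[1.0, 0.2, 0.0],
                [0.2, 1.0, 0.1],
                [0.0, 0.1, 1.0]])

f = lambda x, y, z: stats.multivariate_normal.pdf([x, y, z], mean=mean, cov=cov)

# tplquad integrates func(z, y, x): x is outermost, z innermost, as in equation (1).
P, abserr = integrate.tplquad(
    lambda z, y, x: f(x, y, z),
    0.0, h,                                # x from 0 to h
    lambda x: -r, lambda x: r,             # y from -r to +r
    lambda x, y: -np.sqrt(r**2 - y**2),    # z from -(r^2 - y^2)^{1/2}
    lambda x, y: +np.sqrt(r**2 - y**2))    #   to  +(r^2 - y^2)^{1/2}
print(P, abserr)
```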

[Figure 1 depicts the interceptor moving along its velocity vector, with the fuze active from t = 0 to t = h and sweeping out a cylinder of radius r.]

Fig. 1. Cylindrical encounter volume associated with a proximity fuze on board an interceptor.

2. REVIEW OF MULTIPLE INTEGRATION BY SAMPLING

A general statement of the problem is to evaluate the N-dimensional multiple integral I,

I = \int \cdots \int_R f(x_1, x_2, \ldots, x_N) \, dx_1 \, dx_2 \cdots dx_N,    (2)

where f(·) is a multivariate function which is integrable over the region R defined by

R = \{(x_1, x_2, \ldots, x_N): B_1 \le x_1 \le T_1, \; B_2(x_1) \le x_2 \le T_2(x_1),
B_3(x_1, x_2) \le x_3 \le T_3(x_1, x_2), \ldots, B_N(x_1, x_2, \ldots, x_{N-1}) \le x_N \le T_N(x_1, x_2, \ldots, x_{N-1})\}

and T_1, B_1 are constants.

In order to evaluate I, let x = (x_1, \ldots, x_N), dx = dx_1 \, dx_2 \cdots dx_N and compute a quantity V where

V = \int \cdots \int_R dx.    (3)
Note that

p_X(x) = \begin{cases} 1/V, & \text{if } x \in R \\ 0, & \text{otherwise} \end{cases}    (4)

is a valid pdf for a random vector X defined over the region R. Now, consider

\theta = \int \cdots \int_R [f(x)/V] \, dx    (5)

which differs from I only by the factor (1/V) in the integrand; i.e. V\theta = I. Using the properties
of the expected value operator with respect to X, say E{·}, it follows from equations (4) and (5)
that

\theta = E\{f(X)\}.    (6)

Thus, our original problem of finding I is equivalent to that of finding \theta and, although the
expressions for I and \theta differ only by a constant, (6) will allow an unbiased estimator of \theta, say
\hat{\theta}, to be defined such that E\{\hat{\theta}\} = \theta.
To find the function \hat{\theta}, let x_1, x_2, \ldots, x_n be n independent random vectors taken from the
distribution of X with pdf p_X(·) and put

f_i = f(x_i), \quad i = 1, 2, \ldots, n.

Although the distribution of f_i is unknown, from equation (6) and properties of E{·},

E\{f_i\} = E\{f(x_i)\} = \theta, \quad i = 1, 2, \ldots, n.    (7)

Using (7) it is easy to show that

\hat{\theta} = \sum_{i=1}^{n} f_i / n

is an unbiased estimator of \theta and

\hat{I} = V \sum_{i=1}^{n} f_i / n    (8)

is an unbiased estimator of I.
Since \hat{\theta}, as well as \hat{I}, is a random variable, an estimate can be found for the error (variance)
incurred by using \hat{I} to compute I; i.e. from (8),

\sigma_{\hat{I}}^2 = \mathrm{var}(V\hat{\theta}) = V^2 \, \mathrm{var}(\hat{\theta})    (9)

where

\mathrm{var}(\hat{\theta}) \approx s^2/n    (10a)

and

s^2 = \sum_{i=1}^{n} \Bigl( f_i - \sum_{j=1}^{n} f_j/n \Bigr)^2 \Big/ (n-1).    (10b)

Applying the Central Limit Theorem to (8) when n is "large," \hat{I} is approximately normally
distributed with mean I and variance \sigma_{\hat{I}}^2. Hence, it is 99.8% certain that I satisfies the inequality
\hat{I} - 3\sigma_{\hat{I}} < I < \hat{I} + 3\sigma_{\hat{I}}.
Now, since

\lim_{n \to \infty} \mathrm{Prob}\{|\hat{I} - I| < 3\sigma_{\hat{I}}\} \approx 99.8\%

and \sigma_{\hat{I}} \approx V s/\sqrt{n}, the quantity 3\sigma_{\hat{I}} = 3Vs/\sqrt{n} can be used to determine the number of samples required to achieve a desired accuracy.
Although the sampling approach described appears straightforward, it is the generation of the
x_i's which may present the user with computational difficulties. The following section discusses this
problem in detail.
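The estimator (8) and its error estimate (9)-(10) translate almost line for line into code. The following is a minimal sketch in Python with NumPy (neither appears in the original paper), restricted to the simplest case in which R is a box with constant limits; the function name mc_integrate and the test integrand are assumptions made for illustration. Regions with variable limits require the sampling technique of Section 3.

```python
import numpy as np

def mc_integrate(f, lower, upper, n, rng=None):
    """Estimate I over the box [lower, upper] via equations (8)-(10).

    Returns (I_hat, sigma_I_hat): the unbiased estimate and its standard error.
    """
    rng = np.random.default_rng() if rng is None else rng
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    V = np.prod(upper - lower)                  # volume of R, equation (3)
    u = rng.random((n, lower.size))             # uniform vectors, equation (11)
    x = lower + (upper - lower) * u             # samples with pdf 1/V, equation (4)
    fi = np.array([f(*row) for row in x])       # f_i = f(x_i)
    s2 = fi.var(ddof=1)                         # sample variance s^2, equation (10b)
    I_hat = V * fi.mean()                       # equation (8)
    sigma_I = V * np.sqrt(s2 / n)               # square root of equation (9)
    return I_hat, sigma_I

# Estimate the integral of x*y over the unit square (true value 0.25).
I_hat, sigma_I = mc_integrate(lambda x, y: x * y, [0, 0], [1, 1], n=20_000)
print(f"{I_hat:.4f} +/- {3 * sigma_I:.4f}")     # 3-sigma bound, roughly 99.8% confidence
```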

3. RESOLVING COMPUTATIONAL DIFFICULTIES

In order to generate x = (x_1, \ldots, x_N) from (4), it is sufficient to determine the N functions, say
x_i = g_i(u_1, \ldots, u_N), 1 \le i \le N, which map a random vector u = (u_1, \ldots, u_N) from the uniform pdf

p_U(u) = \begin{cases} 1, & \text{if } 0 \le u_i \le 1, \; 1 \le i \le N \\ 0, & \text{otherwise} \end{cases}    (11)

into a random vector x from (4). Since most machines have a means for generating pseudo-random
numbers u_i from (11), the x_i's may then be obtained directly from the g_i(·)'s. The problem, then,
becomes one of finding the g functions.
In the spirit of Bendickson [2], the following approach was used. First, find the marginal density
for X_1, given by

p_{X_1}(x_1) = \int \cdots \int_{(x_2, \ldots, x_N) \in R} p_X(x) \, dx_2 \cdots dx_N

and assume x_1 = g_1(u_1) where g_1(·) is a monotonic function of u_1. Then, by probability theory [7],

p_{X_1}(x_1) \, \frac{dx_1}{du_1} = \pm 1

or, integrating both sides,

\int^{x_1} p_{X_1}(y) \, dy = \pm u_1 + \text{constant}    (12)

where the choice of sign is arbitrary; i.e. the constant of integration is zero or one depending upon
whether the plus or minus sign is chosen. Defining F_{X_1}(·) as the cumulative distribution function
(cdf) for X_1 and arbitrarily choosing the plus sign, equation (12) can be written as

F_{X_1}(x_1) = u_1.    (13)

Solving for x_1 from (13) in terms of u_1, the function g_1(·) is obtained.
To obtain x_2 = g_2(u_1, u_2) the correct correlation between X_1 and X_2 must be maintained. This
is accomplished by first finding the conditional pdf

p_{X_2|X_1}(x_2 \mid x_1) = \frac{p_{X_1, X_2}(x_1, x_2)}{p_{X_1}(x_1)}

where

p_{X_1, X_2}(x_1, x_2) = \int \cdots \int_{(x_3, \ldots, x_N) \in R} p_X(x) \, dx_3 \cdots dx_N.

Again, from probability theory,

p_{X_2|X_1}(x_2 \mid x_1) \, \frac{dx_2}{du_2} = 1    (14)

and letting F_{X_2|X_1}(·) be the conditional cdf of X_2 given X_1, (14) may be integrated to yield

F_{X_2|X_1}(x_2 \mid x_1) = \int^{x_2} p_{X_2|X_1}(y \mid x_1) \, dy = u_2;    (15)

solving (15) for x_2 in terms of u_1 and u_2, the function g_2(·) is produced. The procedure is continued
until the relationship

p_{X_N|X_1, \ldots, X_{N-1}}(x_N \mid x_1, \ldots, x_{N-1}) \, \frac{dx_N}{du_N} = 1

is integrated and the resultant used to obtain x_N = g_N(u), the final mapping required.
Potential pitfalls of this technique and the necessary remedies are discussed in [2] and Appendix
B. In the following material, the method described above will be applied to the fuzing problem of
Section 1.
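As a small self-contained illustration of this conditional-cdf construction (the region below is a hypothetical example, not one from the paper), consider sampling uniformly over the triangle R = {0 \le x_1 \le 1, 0 \le x_2 \le x_1}. The marginal of X_1 is p_{X_1}(x_1) = 2x_1, so (13) gives x_1 = \sqrt{u_1}, and X_2 given X_1 is uniform on (0, x_1), so (15) gives x_2 = x_1 u_2. A minimal Python sketch:

```python
import numpy as np

def sample_triangle(n, rng=None):
    """Uniform samples over R = {0 <= x1 <= 1, 0 <= x2 <= x1} via Section 3."""
    rng = np.random.default_rng() if rng is None else rng
    u1, u2 = rng.random(n), rng.random(n)
    x1 = np.sqrt(u1)    # invert the marginal cdf F_X1(x1) = x1**2, equation (13)
    x2 = x1 * u2        # invert the conditional cdf F_X2|X1(x2|x1) = x2/x1, equation (15)
    return x1, x2

x1, x2 = sample_triangle(100_000)
assert np.all(x2 <= x1)               # every point lies in R
print(x1.mean(), x2.mean())           # approximately 2/3 and 1/3 for the uniform triangle
```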

4. A NUMERICAL EXAMPLE*

Consider an optical fuze with detection radius r km, which is activated from t = 0 until t = h s
(see Fig. 1), where h and r are independently distributed uniform random variables over (a_1, b_1)
and (a_2, b_2), respectively. Furthermore, assume the coordinates (x, y, z) of an object in space are
distributed with pdf

f(x, y, z) = \lambda e^{-\lambda x} \cdot \exp\{-(y^2 + z^2)/2\sigma^2\}/(2\pi\sigma^2), \quad 0 < x < \infty, \; -\infty < y, z < +\infty;

in other words, the pdf associated with the spatial position of an object is exponential, with mean
1/\lambda, over x and bivariate normal, with mean vector (0, 0) and covariance matrix

\begin{bmatrix} \sigma^2 & 0 \\ 0 & \sigma^2 \end{bmatrix},
*Since the actual data used in the fuzing study is classified, the example will be tutorial in nature.

over (y, z). Hence, the probability P that the fuze will detect an object is

P = \int_{x_1=a_1}^{b_1} \int_{x_2=a_2}^{b_2} \int_{x_3=0}^{x_1} \int_{x_4=-x_2}^{+x_2} \int_{x_5=-(x_2^2-x_4^2)^{1/2}}^{+(x_2^2-x_4^2)^{1/2}}
    \bigl\{ [(b_1-a_1)(b_2-a_2)]^{-1} \, \lambda e^{-\lambda x_3} \exp[-(x_4^2+x_5^2)/2\sigma^2]/(2\pi\sigma^2) \bigr\} \, dx    (16a)

  = [(b_1-a_1)(b_2-a_2)]^{-1} \int_{x_1=a_1}^{b_1} \int_{x_2=a_2}^{b_2} (1 - e^{-\lambda x_1})(1 - e^{-x_2^2/2\sigma^2}) \, dx_1 \, dx_2

  = [(b_1-a_1)(b_2-a_2)]^{-1} \Bigl\{ (b_1-a_1) + \tfrac{1}{\lambda}(e^{-\lambda b_1} - e^{-\lambda a_1}) \Bigr\}
    \cdot \Bigl\{ (b_2-a_2) - \sigma\sqrt{2\pi} \, [Z(b_2/\sigma) - Z(a_2/\sigma)] \Bigr\}    (16b)

where

Z(t) = \int_{-\infty}^{t} \exp(-u^2/2) \, du / \sqrt{2\pi}.

In order to apply the method of Section 3, V from (3) is computed using the integration limits
of (16a); it follows from (4) that

p_X(x) = \begin{cases} 6/[\pi(b_1^2-a_1^2)(b_2^3-a_2^3)], & \text{if } x \in R \\ 0, & \text{otherwise} \end{cases}    (17)

where x = (x_1, \ldots, x_5) and

R = \{x: a_1 \le x_1 \le b_1, \; a_2 \le x_2 \le b_2, \; 0 \le x_3 \le x_1,
-x_2 \le x_4 \le x_2, \; -(x_2^2-x_4^2)^{1/2} \le x_5 \le +(x_2^2-x_4^2)^{1/2}\}.

It will be less cumbersome to work in cylindrical coordinates, say (y_1, \ldots, y_5), and transform
back to the Cartesian system, (x_1, \ldots, x_5), as the final step; to this end, apply the transformations

y_1 = x_1, \quad y_2 = x_2, \quad y_3 = x_3, \quad y_4 = (x_4^2 + x_5^2)^{1/2}, \quad y_5 = \arctan(x_5/x_4)    (18a)

with inverses,

x_1 = y_1, \quad x_2 = y_2, \quad x_3 = y_3, \quad x_4 = y_4 \cos y_5, \quad x_5 = y_4 \sin y_5,    (18b)

to (17) and obtain

p_Y(y) = \begin{cases} 6y_4/[\pi(b_1^2-a_1^2)(b_2^3-a_2^3)], & \text{if } y \in R' \\ 0, & \text{elsewhere} \end{cases}    (19)

where y = (y_1, \ldots, y_5) and

R' = \{y: a_1 \le y_1 \le b_1, \; a_2 \le y_2 \le b_2, \; 0 \le y_3 \le y_1, \; 0 \le y_4 \le y_2, \; 0 \le y_5 \le 2\pi\}.

Now, using R', p_Y(·), p_{Y_1}(·), etc., in place of R, p_X(·), p_{X_1}(·), etc., combine (12) and (19) with
(13) to obtain

(y_1^2 - a_1^2)/(b_1^2 - a_1^2) = u_1

or

y_1 = [(b_1^2 - a_1^2)u_1 + a_1^2]^{1/2}.    (20a)

Similarly, from (15) and (19),

(y_2^3 - a_2^3)/(b_2^3 - a_2^3) = u_2

or

y_2 = [(b_2^3 - a_2^3)u_2 + a_2^3]^{1/3}.    (20b)

Continuing to apply the method outlined in Section 3 yields

y_3 = y_1 u_3    (20c)

y_4 = y_2 \sqrt{u_4}    (20d)

y_5 = 2\pi u_5.    (20e)

From (18b) and (20a-e), the functions g_i(u), 1 \le i \le 5, are

x_1 = g_1(u) = [(b_1^2 - a_1^2)u_1 + a_1^2]^{1/2}    (21a)
x_2 = g_2(u) = [(b_2^3 - a_2^3)u_2 + a_2^3]^{1/3}    (21b)
x_3 = g_3(u) = x_1 u_3    (21c)
x_4 = g_4(u) = x_2 \sqrt{u_4} \cos 2\pi u_5    (21d)
x_5 = g_5(u) = x_2 \sqrt{u_4} \sin 2\pi u_5    (21e)

A flow-chart for using (21a-e) to compute \hat{P} and \sigma_{\hat{P}} from (8), (9) and (16a) is shown in Fig. 2, where
U(·) is a function of a dummy argument which returns independent uniform random numbers over
(0, 1) [most machines have a system-supplied subprogram with the properties of U(·)].

     READ a_1, b_1, a_2, b_2, \lambda, \sigma, n
     S = 0, S1 = 0, k = 0

  1. IF k = n, GO TO 2; OTHERWISE COMPUTE
         k = k + 1
         u_i = U(·), i = 1, 2, ..., 5, AND
         x_i = g_i(·), i = 1, 2, ..., 5, FROM (21a)-(21e)
         F = [(b_1 - a_1)(b_2 - a_2)]^{-1} \lambda exp(-\lambda x_3) exp[-(x_4^2 + x_5^2)/2\sigma^2]/(2\pi\sigma^2)
         S = S + F
         S1 = S1 + F^2
     GO TO 1

  2. \hat{P} = V·S/k
     S2 = S1 - (S^2)/k
     S3 = V·(S2/(k - 1))^{1/2}
     \sigma_{\hat{P}} = S3/\sqrt{k}

     PRINT \hat{P}, \sigma_{\hat{P}}
     STOP

Fig. 2. Flow-chart for computing \hat{P}, \sigma_{\hat{P}} from equation (16a) with n samples.
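The flow-chart maps directly onto a short program. A minimal Python/NumPy rendering is sketched below (an assumed re-implementation, not the authors' original code; it uses NumPy's default generator in place of RANDU, and the function name fuze_detection_mc is introduced here only for illustration):

```python
import numpy as np

def fuze_detection_mc(a1, b1, a2, b2, lam, sigma, n, rng=None):
    """Estimate P and sigma_P from (16a), following the flow-chart of Fig. 2."""
    rng = np.random.default_rng() if rng is None else rng
    # V from (3) for the region R of (17): V = pi*(b1^2 - a1^2)*(b2^3 - a2^3)/6.
    V = np.pi * (b1**2 - a1**2) * (b2**3 - a2**3) / 6.0
    u = rng.random((n, 5))
    # Transformations (21a)-(21e).
    x1 = np.sqrt((b1**2 - a1**2) * u[:, 0] + a1**2)
    x2 = ((b2**3 - a2**3) * u[:, 1] + a2**3) ** (1.0 / 3.0)
    x3 = x1 * u[:, 2]
    x4 = x2 * np.sqrt(u[:, 3]) * np.cos(2 * np.pi * u[:, 4])
    x5 = x2 * np.sqrt(u[:, 3]) * np.sin(2 * np.pi * u[:, 4])
    # Integrand F of (16a).
    F = (lam * np.exp(-lam * x3)
         * np.exp(-(x4**2 + x5**2) / (2 * sigma**2)) / (2 * np.pi * sigma**2)
         / ((b1 - a1) * (b2 - a2)))
    P_hat = V * F.mean()                         # equation (8)
    sigma_P = V * F.std(ddof=1) / np.sqrt(n)     # equations (9)-(10)
    return P_hat, sigma_P

# Parameters of the numerical example; P_hat should land near the exact value 0.41107.
print(fuze_detection_mc(3, 4, 2, 4, 0.28, 2.0, 20_000))
```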



When a_1 = 3, b_1 = 4, a_2 = 2, b_2 = 4, \lambda = 0.28, \sigma = 2, the flow-chart of Fig. 2 was programmed
for an IBM 370/165 II computer using subroutine RANDU [5] to generate uniform random
numbers. From tables for the normal distribution, the true solution may be determined from (16b)
as

P = 0.41107.

Results for n, \hat{P}, \sigma_{\hat{P}} and an estimate for the execution time required, say T, are displayed in Fig. 3
for several seed values. As indicated by Fig. 3, answers correct to one or two decimal places can
be obtained in less than 2 s. If more accurate values are sought, however, significantly more
computer time may be required. Fortunately, one- or two-place accuracy (depending on the analysis
desired) is usually sufficient when N, the number of integrations, is 5 or more.

5. CONCLUDING REMARKS

It should be noted that there are other techniques for estimating (2) and the method of Section
3 may not always be the best approach. Specifically, McNamee and Stenger [6] produce a set of
transformations which map the region R from (2) into an N-dimensional hypercube with limits
-1 to +1. In general, the new integral is more cumbersome than the original but the transformed
integral is amenable to evaluation by Gaussian quadrature [3] or by sampling with V = 2^N
and

g_i(u) = 2u_i - 1, \quad 1 \le i \le N.    (22)

At this point, the reader may wonder how sampling compares with other methods like Gaussian
quadrature, the trapezoidal rule, etc. As indicated by Davis and Rabinowitz [3], the state of theory and
hardware prior to 1975 divides the evaluation of N-dimensional integrals into three ranges:

Range I: 2 \le N \le 5 or 6
Range II: 6 or 7 \le N \le 12
Range III: N > 12

Although sampling techniques are not as effective in Range I, they are at least competitive with
other methods in Range II (a borderline area) and are usually the only means available in
Range III. However, whereas sampling provides a reliable error estimate [equation (9)], no such
error formula is readily available when using Gaussian quadrature, etc. (but see [1, 8]).
The difference between sampling directly (Section 3) and sampling with a transformed integral
[via (22)] is more subtle. For a fixed amount of execution time, the variance associated with \hat{P} from
(16a) was smaller (for all test cases) when direct sampling was employed (these results are not shown).

Multiplier = 11111
     n        \hat{P}        \sigma_{\hat{P}}       T (sec)
    500     0.40629    0.87989E-2     0.17
   5000     0.41077    0.29551E-2     1.71
 10,000     0.40929    0.20922E-2     3.38
 20,000     0.40877    0.14671E-2     6.68

Multiplier = 55555
    500     0.42481    0.90623E-2     0.16
   5000     0.40944    0.29275E-2     1.68
 10,000     0.41148    0.20841E-2     3.40
 20,000     0.41074    0.14681E-2     6.74

Multiplier = 77777
    500     0.39732    0.88442E-2     0.16
   5000     0.41256    0.29054E-2     1.66
 10,000     0.41292    0.20890E-2     3.89
 20,000     0.41121    0.14767E-2     6.64

Fig. 3. Computational results from the flow-chart of Fig. 2 when a_1 = 3, b_1 = 4, a_2 = 2, b_2 = 4, \lambda = 0.28, \sigma = 2.

However, a proof that sampling from the original region directly is always better (i.e. has
a smaller variance) than sampling from the transformed region (hypercube) could not be shown.
Although this paper emphasizes the estimation of detection probabilities defined in terms of
multiple integrals, the method for generating random vectors from (4) may be used in conjunction
with combined simulation modeling (i.e. discrete and continuous). For instance, using the
simulation languages SLAM II [9] and SIMAN [10], the detection probability P from (1) associated
with a cylindrical fuze can be approximated via simulation as follows. First, the dynamics of the
fuze and the target object(s) are modeled in terms of differential equations (i.e. continuous
simulation). During the period of time that the fuze is activated, a series of three-dimensional
points, randomly distributed over the volume of a cylinder, are generated. Applying the procedure
described in Section 3 (or [2]), random points (X, Y, Z) distributed uniformly over the volume
of a cylinder of height h and radius r may be determined from the relationships

R = r\sqrt{U}, \quad \Theta = 2\pi V, \quad X = R\cos\Theta, \quad Y = R\sin\Theta, \quad Z = hW

where U, V, W are uniform random numbers over (0, 1). If any of these random points lies within
a target object, then the fuze has detected the target (i.e. a discrete event). Repeated simulations
may be used to estimate P from the number of detections observed. Of course, any encounter
volume (not necessarily cylindrical) may be accommodated by this approach.
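A minimal sketch of this point-generation step is given below (Python/NumPy; the spherical target used in the detection test is a hypothetical stand-in for whatever target geometry the combined simulation would supply):

```python
import numpy as np

def cylinder_points(r, h, n, rng=None):
    """Points (X, Y, Z) distributed uniformly over a cylinder of radius r and height h."""
    rng = np.random.default_rng() if rng is None else rng
    U, V, W = rng.random(n), rng.random(n), rng.random(n)
    R = r * np.sqrt(U)        # sqrt keeps the points uniform over the circular cross-section
    theta = 2 * np.pi * V
    return R * np.cos(theta), R * np.sin(theta), h * W

def detected(r, h, centre, rho, n_points, rng=None):
    """Discrete detection event: does any generated point fall inside a spherical target?"""
    X, Y, Z = cylinder_points(r, h, n_points, rng)
    cx, cy, cz = centre
    return bool(np.any((X - cx)**2 + (Y - cy)**2 + (Z - cz)**2 <= rho**2))

# Repeated encounters give a crude estimate of a detection probability.
rng = np.random.default_rng(1)
hits = sum(detected(1.0, 2.0, (0.2, 0.1, 1.0), 0.3, 200, rng) for _ in range(1000))
print(hits / 1000)
```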

APPENDIX A

Notation — Definition

N — the dimensionality (number of integrations) of a quantity expressed as a multiple integral.
I — a general N-dimensional multiple integral.
P — a detection probability expressed as a multiple integral.
^ — the circumflex over a quantity implies unbiased estimator; e.g. \hat{P} is an unbiased estimator for P.
p_X(·), p_{X_1}(·), p_{X_2|X_1}(·), etc. — a probability density function (pdf) associated with the random variable (or vector) represented by the subscript.
F_{X_1}(·), F_{X_2|X_1}(·), etc. — a cumulative distribution function (cdf) associated with the subscripted random variable.
X = (X_1, ..., X_N) — a random vector with pdf p_X(·).
n — the number of samples used to define an unbiased estimator.
x_i, 1 \le i \le n — a set of n independent and identically distributed random vectors from the distribution with pdf p_X(·).
R, R' — regions of integration associated with multiple integrals; also, regions associated with non-zero values of a pdf.
V — the N-dimensional volume represented by R.
\lambda, \mu, \sigma — parameters associated with pdf's.
\sigma^2_{\hat{I}}, \sigma^2_{\hat{P}} — the variance of the random variable appearing as a subscript.
u_i, u_i^{(j)}, 1 \le i \le N, 1 \le j \le 2 — independent uniform random variables over (0, 1).
x_i = g_i(u), 1 \le i \le N — a set of N functions which map a uniform random vector u = (u_1, ..., u_N) into a random vector x = (x_1, ..., x_N) with pdf p_X(·).

APPENDIX B
Modifications to the procedure outlined in Sections 2 and 3 are necessary when the limits of integration may be infinite.
For example, consider again equation (1) where h and r are assumed to be independently and exponentially distributed
random variables with means 1/\lambda and 1/\mu, respectively. Then

P = \int \cdots \int_{x \in R} \lambda e^{-\lambda x_1} \cdot \mu e^{-\mu x_2} \cdot f(x_3, x_4, x_5) \, dx    (A.1)

where

x = (x_1, \ldots, x_5), \quad dx = dx_1 \cdots dx_5

and

R = \{x: 0 < x_1, x_2 < \infty, \; 0 < x_3 < x_1, \; -x_2 < x_4 < +x_2, \; -(x_2^2 - x_4^2)^{1/2} < x_5 < +(x_2^2 - x_4^2)^{1/2}\}.
Since

\int \cdots \int_{x \in R} e^{-\lambda x_1} e^{-\mu x_2} \, dx = \frac{2\pi}{\lambda^2 \mu^3},

it follows that

p_X(x) = \begin{cases} \lambda^2 e^{-\lambda x_1} \cdot \mu^3 e^{-\mu x_2}/(2\pi), & \text{if } x \in R \\ 0, & \text{otherwise} \end{cases}    (A.2)

is a valid pdf over the region of integration in (A.1). Hence, multiplying and dividing the integrand by the constant \lambda\mu^2/2\pi,
P from (A.1) may be rewritten as

P = \int \cdots \int_{x \in R} p_X(x) \cdot h(x) \, dx    (A.3)

where

h(x) = (2\pi/\lambda\mu^2) \cdot f(x_3, \ldots, x_5).    (A.4)

Now, since the integrand of (A.3) already contains a pdf over the region R, it is no longer necessary (or possible) to calculate
V from (3). Instead, an unbiased estimator for P, say \hat{P}, is obtained from

\hat{P} = (1/n) \sum_{i=1}^{n} h(x_i),    (A.5)

where the x_i, 1 \le i \le n, are generated from the distribution p_X(·).
In order to obtain a random vector x = (x_1, \ldots, x_5) from p_X(·), the results of Section 3 are applied to (A.2); thus, the
marginal density for X_1 becomes

p_{X_1}(x_1) = \lambda^2 x_1 e^{-\lambda x_1}, \quad x_1 > 0,    (A.6)

and appealing to (13) using (A.6),

u_1 = F_{X_1}(x_1) = \int_0^{x_1} p_{X_1}(y) \, dy = 1 - (1 + \lambda x_1) e^{-\lambda x_1}.    (A.7)

Equation (A.7), however, is transcendental in nature and cannot be solved for x_1 in terms of u_1. But from (A.6), X_1 is a
2-Erlang random variable and can be written as the sum of two independent exponential random variables, each with mean
1/\lambda. Thus, using the standard inverse transform approach for the exponential distribution [4, (3.41)],

x_1 = -(\ln u_1 + \ln u_1^{(1)})/\lambda.    (A.8a)

Continuing to apply the results of Section 3, the conditional pdf of X_2 given X_1 is

p_{X_2|X_1}(x_2 \mid x_1) = \mu(\mu x_2)^2 e^{-\mu x_2}/2, \quad x_2 > 0;

as before, (15) turns out to be transcendental but p_{X_2|X_1}(·) is 3-Erlang. Hence, X_2 can be generated from the sum of three
exponential random variables, each with mean 1/\mu:

x_2 = -(\ln u_2 + \ln u_2^{(1)} + \ln u_2^{(2)})/\mu.    (A.8b)

The remaining transformations via Section 3 are easily shown to be:

x_3 = x_1 u_3    (A.8c)
x_4 = x_2 \sqrt{u_4} \cos 2\pi u_5    (A.8d)
x_5 = x_2 \sqrt{u_4} \sin 2\pi u_5.    (A.8e)

Hence, (A.8a)-(A.8e) may be combined with (A.4) and (A.5) to estimate P from (A.1).
Note that when the limits of integration are infinite, the analyst must manufacture a pdf over the region of integration.
While this modification is easy to implement, the number of uniform variates per sample may be larger than N; i.e. 8 per
sample vs only 5 per sample for the case of finite integration limits (Section 4).
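A minimal Python/NumPy sketch of this infinite-limit case follows (assumed code, not from the paper; the object-position pdf f is supplied by the caller, and the standard trivariate normal used in the example call is only an illustrative choice):

```python
import numpy as np

def appendix_b_estimate(f, lam, mu, n, rng=None):
    """Estimate P from (A.1) via (A.5), sampling from the pdf (A.2) using (A.8a)-(A.8e)."""
    rng = np.random.default_rng() if rng is None else rng
    u = rng.random((n, 8))                                              # 8 uniform variates per sample
    x1 = -(np.log(u[:, 0]) + np.log(u[:, 1])) / lam                     # 2-Erlang, (A.8a)
    x2 = -(np.log(u[:, 2]) + np.log(u[:, 3]) + np.log(u[:, 4])) / mu    # 3-Erlang, (A.8b)
    x3 = x1 * u[:, 5]                                                   # (A.8c)
    x4 = x2 * np.sqrt(u[:, 6]) * np.cos(2 * np.pi * u[:, 7])            # (A.8d)
    x5 = x2 * np.sqrt(u[:, 6]) * np.sin(2 * np.pi * u[:, 7])            # (A.8e)
    h = (2 * np.pi / (lam * mu**2)) * f(x3, x4, x5)                     # h(x) from (A.4)
    return h.mean(), h.std(ddof=1) / np.sqrt(n)                         # (A.5) and its standard error

# Example: standard trivariate normal object-position pdf (an assumed test case).
f = lambda x, y, z: np.exp(-(x**2 + y**2 + z**2) / 2) / (2 * np.pi) ** 1.5
print(appendix_b_estimate(f, lam=0.5, mu=0.5, n=50_000))
```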

REFERENCES

1. A. C. Ahlin. On error bounds for Gaussian cubature. SIAM Rev. 4, 25-39 (1962).
2. M. J. Bendickson. Generating samples of a given density function using computer derived samples of a uniform
   distribution. Internal Memorandum MB75-0031, Teledyne Brown Engineering, Huntsville, AL 35807, Sept. (1975).
3. P. J. Davis and P. Rabinowitz. Methods of Numerical Integration. Academic Press, NY (1975).
4. J. M. Hammersley and D. C. Handscomb. Monte Carlo Methods. Methuen, London (1966).
5. IBM. System/360 Scientific Subroutine Package. IBM Corporation, White Plains, New York (1968).
6. J. McNamee and F. Stenger. Construction of fully symmetric numerical integration formulas. Num. Math. 10, 327-344 (1967).
7. A. Papoulis. Probability, Random Variables and Stochastic Processes. McGraw-Hill, NY (1965).
8. F. Stenger. Error bounds for the evaluation of integrals by repeated Gauss-type formulae. Num. Math. 9, 200-213 (1966).
9. A. A. B. Pritsker. Introduction to Simulation and SLAM II, 3rd Edn. Wiley, NY (1986).
10. C. D. Pegden. Introduction to SIMAN. Systems Modeling Corp. (1985).
