
Nagarjuna College of Engineering & Technology
Department of Computer Science & Engineering

System Modeling and Simulation (16CST741)

By Dr. Shantakumar B Patil
Properties of Random Numbers
• Two important statistical properties:
– Uniformity
– Independence.
• Random Number, Ri, must be independently drawn from a
uniform distribution with pdf:

1, 0  x  1
f ( x) = 
0, otherwise
1
1 2


x 1
E ( R) = xdx = =
0 2 2
0
Figure: pdf for
random numbers
2
Generation of Pseudo-Random
Numbers
• “Pseudo”, because generating numbers using a known method
removes the potential for true randomness.
• Goal: To produce a sequence of numbers in [0,1] that simulates, or
imitates, the ideal properties of random numbers (RN).
• Important considerations in RN routines:
– Fast
– Portable to different computers
– Have sufficiently long cycle
– Replicable
– Closely approximate the ideal statistical properties of uniformity and
independence.

Techniques for Generating Random
Numbers
• Linear Congruential Method (LCM).
• Combined Linear Congruential Generators (CLCG).
• Random-Number Streams.

Linear Congruential Method
[Techniques]
• To produce a sequence of integers, X1, X2, … between 0 and m-1
by following a recursive relationship:

X_{i+1} = (a X_i + c) mod m,  i = 0, 1, 2, …

where a is the multiplier, c is the increment, and m is the modulus.
• The selection of the values for a, c, m, and X0 drastically affects
the statistical properties and the cycle length.
• The random integers are generated on [0, m−1]; to convert them to random numbers:
R_i = X_i / m,  i = 1, 2, …
Example [LCM]

• Use X0 = 27, a = 17, c = 43, and m = 100.


• The Xi and Ri values are:
X1 = (17·27 + 43) mod 100 = 502 mod 100 = 2, R1 = 0.02;
X2 = (17·2 + 43) mod 100 = 77, R2 = 0.77;
X3 = (17·77 + 43) mod 100 = 52, R3 = 0.52;
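The recursion is small enough to sketch directly. A minimal Python version, assuming the example's parameters (any seed, multiplier, increment, and modulus could be substituted):

```python
# A sketch of the linear congruential method with the example's
# parameters: X0 = 27, a = 17, c = 43, m = 100.
def lcg(seed, a=17, c=43, m=100):
    """Yield R_i = X_i / m for i = 1, 2, ..."""
    x = seed
    while True:
        x = (a * x + c) % m   # X_{i+1} = (a X_i + c) mod m
        yield x / m

gen = lcg(27)
print([next(gen) for _ in range(3)])   # [0.02, 0.77, 0.52]
```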

Characteristics of a Good Generator
[LCM]
• Maximum Density
– Such that the values assumed by Ri, i = 1, 2, …, leave no large gaps on [0,1]
– Problem: each Ri is discrete rather than continuous
– Solution: use a very large integer for the modulus m
• The approximation appears to be of little consequence
• Maximum Period
– To achieve maximum density and avoid cycling.
– Achieved by proper choice of a, c, m, and X0.
• Most digital computers use a binary representation of numbers
– Speed and efficiency are aided by a modulus, m, to be (or close to) a
power of 2.

Combined Linear Congruential
Generators
[Techniques]
• Reason: A longer-period generator is needed because of the increasing complexity of simulated systems.
• Approach: Combine two or more multiplicative congruential
generators.
• Let Xi,1, Xi,2, …, Xi,k, be the ith output from k different multiplicative
congruential generators.
– The jth generator:
• Has prime modulus m_j and multiplier a_j, and period m_j − 1
• Produces integers X_{i,j} that are approximately uniform on the integers in [1, m_j − 1]
• W_{i,j} = X_{i,j} − 1 is approximately uniform on the integers in [0, m_j − 2]
Combined Linear Congruential
Generators
[Techniques]
– Suggested form:

X_i = ( Σ_{j=1}^{k} (−1)^{j−1} X_{i,j} ) mod (m_1 − 1)

The coefficient (−1)^{j−1} implicitly performs the subtraction X_{i,1} − 1.

Hence:
R_i = X_i / m_1,          if X_i > 0
R_i = (m_1 − 1) / m_1,    if X_i = 0

• The maximum possible period is:

P = (m_1 − 1)(m_2 − 1)…(m_k − 1) / 2^{k−1}
Combined Linear Congruential
Generators
[Techniques]
• Example: For 32-bit computers, L’Ecuyer [1988] suggests combining k = 2 generators with m_1 = 2,147,483,563, a_1 = 40,014, m_2 = 2,147,483,399, and a_2 = 40,692. The algorithm becomes:
Step 1: Select seeds
• X_{1,0} in the range [1, 2,147,483,562] for the 1st generator
• X_{2,0} in the range [1, 2,147,483,398] for the 2nd generator.
Step 2: For each individual generator,
X_{1,j+1} = 40,014 X_{1,j} mod 2,147,483,563
X_{2,j+1} = 40,692 X_{2,j} mod 2,147,483,399
Step 3: X_{j+1} = (X_{1,j+1} − X_{2,j+1}) mod 2,147,483,562
Step 4: Return
R_{j+1} = X_{j+1} / 2,147,483,563,          if X_{j+1} > 0
R_{j+1} = 2,147,483,562 / 2,147,483,563,    if X_{j+1} = 0
Step 5: Set j = j + 1 and go back to Step 2.
– The combined generator has period (m_1 − 1)(m_2 − 1)/2 ≈ 2 × 10^18
Random-Numbers Streams
[Techniques]
• The seed for a linear congruential random-number generator:
– Is the integer value X0 that initializes the random-number sequence.
– Any value in the sequence can be used to “seed” the generator.
• A random-number stream:
– Refers to a starting seed taken from the sequence X_0, X_1, …, X_P.
– If the streams are b values apart, then stream i could be defined by the starting seed:
S_i = X_{b(i−1)}
– Older generators: b = 10^5; newer generators: b = 10^37.
• A single random-number generator with k streams can act like k distinct virtual random-number generators
• Useful when comparing two or more alternative systems:
– It is advantageous to dedicate portions of the pseudo-random number sequence to the same purpose in each of the simulated systems.

Tests for Random Numbers
• Two categories:
– Testing for uniformity:
H0: R_i ~ U[0,1]
H1: R_i ≁ U[0,1]
• Failure to reject the null hypothesis, H0, means that evidence of non-uniformity has not been detected.
– Testing for independence:
H0: the R_i are independent
H1: the R_i are not independent
• Failure to reject the null hypothesis, H0, means that evidence of dependence has not been detected.
• Level of significance α, the probability of rejecting H0 when it is true: α = P(reject H0 | H0 is true)

Tests for Random Numbers
• When to use these tests:
– If a well-known simulation language or random-number generator is used, it is probably unnecessary to test
– If the generator is not explicitly known or documented (e.g., spreadsheet programs, symbolic/numerical calculators), tests should be applied to many sample numbers.
• Types of tests:
– Theoretical tests: evaluate the choices of m, a, and c without actually
generating any numbers
– Empirical tests: applied to actual sequences of numbers produced.
Our emphasis.

Frequency Tests [Tests for RN]

• Test of uniformity
• Two different methods:
– Kolmogorov-Smirnov test
– Chi-square test

Kolmogorov-Smirnov Test [Frequency Test]

• Compares the continuous cdf, F(x), of the uniform distribution with the empirical cdf, S_N(x), of the N sample observations.
– We know: F(x) = x, 0 ≤ x ≤ 1
– If the sample from the RN generator is R_1, R_2, …, R_N, then the empirical cdf is:

S_N(x) = (number of R_1, R_2, …, R_N which are ≤ x) / N
• Based on the statistic: D = max| F(x) - SN(x)|
– Sampling distribution of D is known (a function of N, tabulated in Table
A.8.)
• A more powerful test, recommended.

Kolmogorov-Smirnov Test
[Frequency Test]
• Example: Suppose 5 generated numbers are 0.44, 0.81, 0.14, 0.05, 0.93.

Step 1: Arrange R_(i) from smallest to largest and compute i/N:

R_(i)             0.05   0.14   0.44   0.81   0.93
i/N               0.20   0.40   0.60   0.80   1.00

Step 2: Compute the deviations (negative entries shown as -):

i/N − R_(i)       0.15   0.26   0.16    -     0.07    D+ = max{i/N − R_(i)} = 0.26
R_(i) − (i−1)/N   0.05    -     0.04   0.21   0.13    D− = max{R_(i) − (i−1)/N} = 0.21

Step 3: D = max(D+, D−) = 0.26
Step 4: For α = 0.05, the critical value D_α = 0.565 > D.

Hence, H0 is not rejected.
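A sketch of the same computation in Python, reproducing the table above:

```python
data = [0.44, 0.81, 0.14, 0.05, 0.93]

r = sorted(data)                                        # Step 1: order the sample
n = len(r)
d_plus = max((i + 1) / n - x for i, x in enumerate(r))  # D+
d_minus = max(x - i / n for i, x in enumerate(r))       # D-
d = max(d_plus, d_minus)                                # Step 3
print(d_plus, d_minus, d)   # 0.26, 0.21, 0.26 (up to float rounding)
# Step 4: compare D with the Table A.8 critical value
# (0.565 for N = 5, alpha = 0.05); 0.26 < 0.565, so H0 is not rejected.
```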

Chi-square test [Frequency Test]

• The chi-square test uses the sample statistic:

χ₀² = Σ_{i=1}^{n} (O_i − E_i)² / E_i

where n is the number of classes, O_i is the observed number in the ith class, and E_i is the expected number in the ith class.
– χ₀² approximately follows the chi-square distribution with n − 1 degrees of freedom (critical values are tabulated in Table A.6)
– For the uniform distribution, the expected number in each class is:
E_i = N/n, where N is the total number of observations

• Valid only for large samples, e.g., N ≥ 50
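A minimal sketch of the uniformity version of this test, assuming n equal-width classes on [0, 1):

```python
def chi_square_uniform(samples, n_classes=10):
    """Return the chi-square statistic for H0: samples ~ U[0,1]."""
    N = len(samples)
    observed = [0] * n_classes
    for r in samples:
        observed[min(int(r * n_classes), n_classes - 1)] += 1
    expected = N / n_classes                 # E_i = N/n for uniformity
    return sum((o - expected) ** 2 / expected for o in observed)

# Usage: compare the statistic with the Table A.6 critical value
# for n_classes - 1 degrees of freedom.
import random
stat = chi_square_uniform([random.random() for _ in range(100)])
```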

Tests for Autocorrelation [Tests
for RN]

• Tests the autocorrelation between every m numbers (m is a.k.a. the lag), starting with the ith number
– The autocorrelation ρ_{im} between the numbers R_i, R_{i+m}, R_{i+2m}, …, R_{i+(M+1)m}
– M is the largest integer such that i + (M + 1)m ≤ N
• Hypothesis:

H0: ρ_{im} = 0, if the numbers are independent
H1: ρ_{im} ≠ 0, if the numbers are dependent

• If the values are uncorrelated:
– For large values of M, the distribution of the estimator of ρ_{im}, denoted ρ̂_{im},
is approximately normal.

Tests for Autocorrelation [Tests
for RN]

• The test statistic is:

Z₀ = ρ̂_{im} / σ̂_{ρ̂_im}

– Z₀ is distributed normally with mean 0 and variance 1, and:

ρ̂_{im} = (1/(M + 1)) Σ_{k=0}^{M} [ R_{i+km} R_{i+(k+1)m} ] − 0.25

σ̂_{ρ̂_im} = √(13M + 7) / (12(M + 1))

• If ρ_{im} > 0, the subsequence has positive autocorrelation
– High random numbers tend to be followed by high ones, and vice versa.
• If ρ_{im} < 0, the subsequence has negative autocorrelation
– Low random numbers tend to be followed by high ones, and vice versa.

Example [Test for Autocorrelation]

• Example: Test whether the 3rd, 8th, 13th, and so on, numbers of the output on p. 265 are autocorrelated.
– Here, α = 0.05, i = 3, m = 5, N = 30, and M = 4

ρ̂₃₅ = (1/(4 + 1)) [(0.23)(0.28) + (0.25)(0.33) + (0.33)(0.27) + (0.28)(0.05) + (0.05)(0.36)] − 0.25
    = −0.1945

σ̂_{ρ̂₃₅} = √(13(4) + 7) / (12(4 + 1)) = 0.128

Z₀ = −0.1945 / 0.1280 = −1.516

– From Table A.3, z₀.₀₂₅ = 1.96. Since |Z₀| ≤ 1.96, the hypothesis of independence is not rejected.
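A sketch of the test statistic in Python, following the definitions above (i and m are 1-based, as on the slides):

```python
import math

def autocorrelation_z(r, i, m):
    """Return Z0 for the lag-m autocorrelation test starting at the ith number."""
    N = len(r)
    M = (N - i) // m - 1          # largest M with i + (M+1)m <= N
    s = sum(r[i - 1 + k * m] * r[i - 1 + (k + 1) * m] for k in range(M + 1))
    rho_hat = s / (M + 1) - 0.25
    sigma_hat = math.sqrt(13 * M + 7) / (12 * (M + 1))
    return rho_hat / sigma_hat    # compare |Z0| with z_{alpha/2}
```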

Shortcomings [Test for Autocorrelation]

• The test is not very sensitive for small values of M, particularly when the numbers being tested are on the low side.
• Problem when “fishing” for autocorrelation by performing numerous tests:
– If α = 0.05, there is a probability of 0.05 of rejecting a true hypothesis.
– If 10 independent sequences are examined:
• The probability of finding no significant autocorrelation, by chance alone, is 0.95¹⁰ = 0.60.
• Hence, the probability of detecting significant autocorrelation when it does not exist is 40%.

Summary
• In this chapter, we described:
– Generation of random numbers
– Testing for uniformity and independence

• Caution:
– Even generators that have been used for years, some of which are still in use, have been found to be inadequate.
– This chapter provides only the basics.
– Also, even if generated numbers pass all the tests, some underlying
pattern might have gone undetected.

Inverse-transform Technique
• The concept:
– For the cdf: r = F(x)
– Generate r from Uniform(0,1)
– Find x: x = F⁻¹(r)
(Figure: the cdf curve r = F(x), mapping a sampled r₁ on the vertical axis to x₁ on the horizontal axis.)

Exponential Distribution [Inverse-
transform]

• Exponential Distribution:
– Exponential cdf:

r = F(x) = 1 − e^{−λx}, for x ≥ 0

– To generate X₁, X₂, X₃, …:

X_i = F⁻¹(R_i) = −(1/λ) ln(1 − R_i)   [Eq’n 8.3]

(Figure: inverse-transform technique for exp(λ = 1))

Exponential Distribution [Inverse-
transform]

• Example: Generate 200 variates X_i with distribution exp(λ = 1)
– Generate 200 R’s from U(0,1) and apply Eq’n 8.3; the histogram of the resulting X’s approximates the exponential pdf.
– Check: Does the random variable X₁ have the desired distribution?

P(X₁ ≤ x₀) = P(R₁ ≤ F(x₀)) = F(x₀)
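A sketch of Eq’n 8.3 in Python:

```python
import math, random

def exponential_variates(lam, n):
    """Inverse transform: X = -(1/lambda) * ln(1 - R), R ~ U(0,1)."""
    return [-math.log(1.0 - random.random()) / lam for _ in range(n)]

xs = exponential_variates(lam=1.0, n=200)
print(sum(xs) / len(xs))   # the sample mean should be near 1/lambda = 1
```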

Other Distributions [Inverse-
transform]

• Examples of other distributions for which


inverse cdf works are:
– Uniform distribution
– Weibull distribution
– Triangular distribution

Empirical Continuous Dist’n [Inverse-
transform]
• Used when a theoretical distribution is not applicable
• Two ways to use the collected empirical data:
– Resample the observed data
– Interpolate between observed data points to fill in the gaps
• For a small sample set (size n):
– Arrange the data from smallest to largest:

x_(1) ≤ x_(2) ≤ … ≤ x_(n)

– Assign the probability 1/n to each interval x_(i−1) < x ≤ x_(i)

X = F̂⁻¹(R) = x_(i−1) + a_i (R − (i − 1)/n)

where a_i = (x_(i) − x_(i−1)) / (i/n − (i − 1)/n) = (x_(i) − x_(i−1)) / (1/n)

Empirical Continuous Dist’n [Inverse-
transform]
• Example: Suppose the data collected for 100 broken-widget repair times are:

i   Interval (hours)   Frequency   Relative    Cumulative        Slope,
                                   frequency   frequency, c_i    a_i
1   0.25 ≤ x ≤ 0.5     31          0.31        0.31              0.81
2   0.5 ≤ x ≤ 1.0      10          0.10        0.41              5.0
3   1.0 ≤ x ≤ 1.5      25          0.25        0.66              2.0
4   1.5 ≤ x ≤ 2.0      34          0.34        1.00              1.47

Consider R₁ = 0.83:

c₃ = 0.66 < R₁ < c₄ = 1.00

X₁ = x_(3) + a₄ (R₁ − c₃)
   = 1.5 + 1.47(0.83 − 0.66)
   = 1.75
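A sketch of the table lookup for this example; the lists hard-code the cumulative frequencies c_i, interval endpoints, and slopes a_i from the table above:

```python
import bisect

cum = [0.00, 0.31, 0.41, 0.66, 1.00]      # c_0 .. c_4
ends = [0.25, 0.50, 1.00, 1.50, 2.00]     # x_(0) .. x_(4)
slopes = [0.81, 5.0, 2.0, 1.47]           # a_1 .. a_4

def empirical_inverse(r):
    """Return X = x_(i-1) + a_i (R - c_{i-1}) for c_{i-1} < R <= c_i."""
    i = bisect.bisect_left(cum, r)        # locate the interval containing R
    return ends[i - 1] + slopes[i - 1] * (r - cum[i - 1])

print(empirical_inverse(0.83))            # 1.5 + 1.47*(0.83 - 0.66) = 1.75
```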

Discrete Distribution [Inverse-
transform]

• All discrete distributions can be generated via


inverse-transform technique
• Method: numerically, table-lookup procedure,
algebraically, or a formula
• Examples of application:
– Empirical
– Discrete uniform
– Geometric

Discrete Distribution [Inverse-
transform]

• Example: Suppose the number of shipments, x, on the loading


dock of IHW company is either 0, 1, or 2
– Data - Probability distribution:
x p(x) F(x)
0 0.50 0.50
1 0.30 0.80
2 0.20 1.00

– Method - Given R, the generation scheme becomes:

x = 0, if R ≤ 0.5
x = 1, if 0.5 < R ≤ 0.8
x = 2, if 0.8 < R ≤ 1.0

Consider R₁ = 0.73: find the smallest x_i with F(x_{i−1}) < R ≤ F(x_i):
F(x₀) < 0.73 ≤ F(x₁)
Hence, x₁ = 1
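A sketch of the table-lookup procedure for this example:

```python
values = [0, 1, 2]
cdf = [0.50, 0.80, 1.00]   # F(x) for x = 0, 1, 2

def discrete_inverse(r):
    """Return the smallest x_i with R <= F(x_i)."""
    for x, f in zip(values, cdf):
        if r <= f:
            return x

print(discrete_inverse(0.73))   # 1, since F(0) < 0.73 <= F(1)
```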
Acceptance-Rejection technique
• Useful particularly when the inverse cdf does not exist in closed form; applied to arrival processes, the same idea is known as thinning
• Illustration: To generate random variates, X ~ U(1/4, 1)

Procedure:
Step 1. Generate R ~ U[0,1].
Step 2a. If R ≥ 1/4, accept X = R.
Step 2b. If R < 1/4, reject R and return to Step 1.
(Flowchart: generate R → condition R ≥ 1/4 → yes: output R′; no: generate a new R.)
• R does not have the desired distribution, but R conditioned (R′) on the event {R ≥ 1/4} does.
• Efficiency: Depends heavily on the ability to minimize the number
of rejections.
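A sketch of the illustration above in Python:

```python
import random

def uniform_quarter_to_one():
    """Acceptance-rejection for X ~ U(1/4, 1)."""
    while True:
        r = random.random()    # Step 1: R ~ U[0,1]
        if r >= 0.25:          # Step 2a: accept
            return r
        # Step 2b: reject and try again
```

On average 1/(3/4) ≈ 1.33 candidates are generated per accepted variate, which is exactly the efficiency concern noted above.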
NSPP
[Acceptance-Rejection]
• Non-stationary Poisson Process (NSPP): a Poisson arrival process with an arrival rate that varies with time
• Idea behind thinning:
– Generate a stationary Poisson arrival process at the fastest rate, λ* = max λ(t)
– But “accept” only a portion of the arrivals, thinning out just enough to get the desired time-varying rate
(Flowchart: generate E ~ Exp(λ*); set t = t + E; if R ≤ λ(t)/λ*, output an arrival at time t; otherwise generate the next E.)
NSPP [Acceptance-Rejection]

• Example: Generate a random variate for a NSPP


Data: arrival rates

t (min)   Mean time between arrivals (min)   Arrival rate λ(t) (#/min)
0         15                                 1/15
60        12                                 1/12
120       7                                  1/7
180       5                                  1/5
240       8                                  1/8
300       10                                 1/10
360       15                                 1/15
420       20                                 1/20
480       20                                 1/20

Procedure:
Step 1. λ* = max λ(t) = 1/5, t = 0 and i = 1.
Step 2. For random number R = 0.2130, E = −5 ln(0.213) = 13.13, so t = 13.13.
Step 3. Generate R = 0.8830. λ(13.13)/λ* = (1/15)/(1/5) = 1/3. Since R > 1/3, do not generate the arrival.
Step 2. For random number R = 0.5530, E = −5 ln(0.553) = 2.96, so t = 13.13 + 2.96 = 16.09.
Step 3. Generate R = 0.0240. λ(16.09)/λ* = (1/15)/(1/5) = 1/3. Since R < 1/3, set T1 = t = 16.09 and i = i + 1 = 2.
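A sketch of thinning for this example; the rate function hard-codes the table above (λ changes every 60 minutes) and would be adapted for other data:

```python
import math, random

def nspp_arrivals(rate, lam_star, horizon):
    """Thinning: generate arrival times on [0, horizon) for rate lambda(t)."""
    t, arrivals = 0.0, []
    while True:
        t += -math.log(1.0 - random.random()) / lam_star  # E ~ Exp(lambda*)
        if t >= horizon:
            return arrivals
        if random.random() <= rate(t) / lam_star:   # accept w.p. lambda(t)/lambda*
            arrivals.append(t)

rates = [1/15, 1/12, 1/7, 1/5, 1/8, 1/10, 1/15, 1/20, 1/20]
times = nspp_arrivals(lambda t: rates[min(int(t // 60), 8)], lam_star=1/5,
                      horizon=540)
```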
Special Properties
• Based on features of particular family of
probability distributions
• For example:
– Direct Transformation for normal and lognormal
distributions
– Convolution
– Beta distribution (from gamma distribution)

Direct Transformation [Special Properties]

• Approach for normal(0,1):


– Consider two standard normal random variables, Z1 and Z2, plotted as a
point in the plane:

In polar coordinates:

Z₁ = B cos φ
Z₂ = B sin φ

– B² = Z₁² + Z₂² has a chi-square distribution with 2 degrees of freedom, which is an exponential distribution with mean 2. Hence, B = (−2 ln R)^{1/2}
– The radius B and angle φ are mutually independent.

Z₁ = (−2 ln R₁)^{1/2} cos(2πR₂)
Z₂ = (−2 ln R₁)^{1/2} sin(2πR₂)
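A sketch of the transformation (often called the Box-Muller method):

```python
import math, random

def standard_normal_pair():
    """Return two independent N(0,1) variates from two U(0,1) variates."""
    r1 = 1.0 - random.random()           # in (0,1], avoids log(0)
    r2 = random.random()
    b = math.sqrt(-2.0 * math.log(r1))   # radius B
    return (b * math.cos(2.0 * math.pi * r2),
            b * math.sin(2.0 * math.pi * r2))
```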
Direct Transformation [Special
Properties]
• Approach for Normal(μ, σ²):
– Generate Z_i ~ N(0,1), then set X_i = μ + σZ_i

• Approach for Lognormal(μ, σ²):
– Generate X_i ~ N(μ, σ²), then set Y_i = e^{X_i}

Data Collection
• One of the biggest tasks in solving a real problem. GIGO – garbage-in-
garbage-out
• Suggestions that may enhance and facilitate data collection:
– Plan ahead: begin by a practice or pre-observing session, watch for
unusual circumstances
– Analyze the data as it is being collected: check adequacy
– Combine homogeneous data sets, e.g. successive time periods, during
the same time period on successive days
– Be aware of data censoring: the quantity is not observed in its entirety,
danger of leaving out long process times
– Check for relationship between variables, e.g. build scatter diagram
– Check for autocorrelation
– Collect input data, not performance data

Identifying the Distribution (1): Histograms (1)
• A frequency distribution or histogram is useful in determining
the shape of a distribution
• The number of class intervals depends on:
– The number of observations
– The dispersion of the data
– Suggested: a number of intervals approximately equal to the square root of the sample size works well in practice
• If the intervals are too wide, the histogram will be coarse or blocky, and its shape and other details will not show well
• If the intervals are too narrow, the histogram will be ragged and will not smooth the data

Identifying the Distribution (1): Histograms (2)
• For continuous data:
– Corresponds to the probability density
function of a theoretical distribution
– A line drawn through the center of each class interval’s frequency should result in a shape like that of the pdf
• For discrete data:
– Corresponds to the probability mass
function
• If few data points are available: combine
adjacent cells to eliminate the ragged
appearance of the histogram

(Figure: the same data shown with different interval sizes)
Identifying the Distribution (1): Histograms (3)
• Vehicle Arrival Example: # of vehicles arriving at an intersection between 7 am and
7:05 am was monitored for 100 random workdays.
Arrivals per Period   Frequency
0                     12
1                     10
2                     19
3                     17
4                     10
5                     8
6                     7
7                     5
8                     5
9                     3
10                    3
11                    1

• There are ample data, so the histogram may have a cell for each possible value in the data range.

Identifying the Distribution (2): Selecting the
Family of Distributions (1)
• A family of distributions is selected based on:
– The context of the input variable
– Shape of the histogram
• The purpose of preparing a histogram is to infer a known pdf or
pmf
• Frequently encountered distributions:
– Easier to analyze: exponential, normal and Poisson
– Harder to analyze: beta, gamma and Weibull

Identifying the Distribution (2): Selecting the
Family of Distributions (2)
• Use the physical basis of the distribution as a guide, for example:
– Binomial: # of successes in n trials
– Poisson: # of independent events that occur in a fixed amount of time
or space
– Normal: dist’n of a process that is the sum of a number of component
processes
– Exponential: time between independent events, or a process time that
is memoryless
– Weibull: time to failure for components
– Discrete or continuous uniform: models complete uncertainty. All
outcomes are equally likely.
– Triangular: a process for which only the minimum, most likely, and
maximum values are known. Improvement over uniform.
– Empirical: resamples from the actual data collected

Identifying the Distribution (2): Selecting the
Family of Distributions (3)
• Do not ignore the physical characteristics of the process
– Is the process naturally discrete or continuous valued?
– Is it bounded or is there no natural bound?
• No “true” distribution for any stochastic input process
• Goal: obtain a good approximation that yields useful results from the
simulation experiment.

Identifying the Distribution (3): Quantile-
Quantile Plots (1)
• Q-Q plot is a useful tool for evaluating distribution fit
• If X is a random variable with cdf F, then the q-quantile of X is the γ such that

F(γ) = P(X ≤ γ) = q, for 0 < q < 1

– When F has an inverse, γ = F⁻¹(q)
– By a quantile, we mean the fraction (or percent) of points below the given value (percentiles are 100-quantiles, deciles 10-quantiles, quintiles 5-quantiles, quartiles 4-quantiles)
• Let {x_i, i = 1, 2, …, n} be a sample of data from X and {y_j, j = 1, 2, …, n} be the observations in ascending order. The Q-Q plot is based on the fact that y_j is an estimate of the (j − 0.5)/n quantile of X:

y_j ≈ F⁻¹( (j − 0.5)/n ), where j is the ranking or order number

Identifying the Distribution (3): Quantile-
Quantile Plots (2)
• The plot of yj versus F-1( (j-0.5)/n) is
– Approximately a straight line if F is a member of an appropriate family
of distributions
– The line has slope 1 if F is a member of an appropriate family of
distributions with appropriate parameter values
– If the assumed distribution is inappropriate, the points will deviate
from a straight line
– The decision about whether to reject some hypothesized model is
subjective!!

Identifying the Distribution (3): Quantile-
Quantile Plots (3)
• Example: Check whether the door installation times given below follows a
normal distribution.
– The observations are now ordered from smallest to largest:

j Value j Value j Value j Value


1 99.55 6 99.82 11 99.98 16 100.26
2 99.56 7 99.83 12 100.02 17 100.27
3 99.62 8 99.85 13 100.06 18 100.33
4 99.65 9 99.9 14 100.17 19 100.41
5 99.79 10 99.96 15 100.23 20 100.47

– y_j are plotted versus F⁻¹((j − 0.5)/n), where F is the normal distribution with the sample mean (99.99 sec) and sample variance (0.2832² sec²)
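A sketch of the computation behind the plot, using Python's statistics.NormalDist for F⁻¹:

```python
from statistics import NormalDist

y = [99.55, 99.56, 99.62, 99.65, 99.79, 99.82, 99.83, 99.85, 99.90,
     99.96, 99.98, 100.02, 100.06, 100.17, 100.23, 100.26, 100.27,
     100.33, 100.41, 100.47]                 # ordered observations y_j
fit = NormalDist(mu=99.99, sigma=0.2832)     # fitted normal distribution
n = len(y)
# (F^{-1}((j - 0.5)/n), y_j) pairs; a straight line of slope 1
# through these points supports the normal hypothesis
points = [(fit.inv_cdf((j - 0.5) / n), y[j - 1]) for j in range(1, n + 1)]
```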

Identifying the Distribution (3): Quantile-
Quantile Plots (4)
• Example (continued): Check whether the door installation times follow a
normal distribution.

(Figure: the points fall along a straight line, supporting the hypothesis of a normal distribution; a second panel shows the histogram with the superimposed density function of the normal distribution.)

Identifying the Distribution (3): Quantile-
Quantile Plots (5)
• Consider the following while evaluating the linearity of a Q-Q plot:
– The observed values never fall exactly on a straight line
– The ordered values are ranked and hence not independent, unlikely for
the points to be scattered about the line
– Variance of the extremes is higher than the middle. Linearity of the
points in the middle of the plot is more important.
• Q-Q plot can also be used to check homogeneity
– Check whether a single distribution can represent two sample sets
– Plot the ordered values of the two data samples against each other; a straight line shows that both sample sets are represented by the same distribution

Parameter Estimation (1)
• Next step after selecting a family of distributions
• If observations in a sample of size n are X1, X2, …, Xn (discrete or
continuous), the sample mean and variance are defined as:

X̄ = ( Σ_{i=1}^{n} X_i ) / n        S² = ( Σ_{i=1}^{n} X_i² − nX̄² ) / (n − 1)

• If the data are discrete and have been grouped in a frequency distribution:

X̄ = ( Σ_{j=1}^{k} f_j X_j ) / n    S² = ( Σ_{j=1}^{k} f_j X_j² − nX̄² ) / (n − 1)

where f_j is the observed frequency of value X_j and k is the number of distinct values

Parameter Estimation (2)
• When raw data are unavailable (data are grouped into class intervals), the
approximate sample mean and variance are:

X̄ = ( Σ_{j=1}^{c} f_j m_j ) / n    S² = ( Σ_{j=1}^{c} f_j m_j² − nX̄² ) / (n − 1)

• where f_j is the observed frequency in the jth class interval, m_j is the midpoint of the jth interval, and c is the number of class intervals

• A parameter is an unknown constant, but an estimator is a statistic.

Parameter Estimation (3) Suggested Estimators
Distribution    Parameter(s)    Suggested estimator
Poisson         α               α̂ = X̄
Exponential     λ               λ̂ = 1/X̄
Normal          μ, σ²           μ̂ = X̄, σ̂² = S² (unbiased)
Parameter Estimation (4)
• Vehicle Arrival Example (continued): Table in the histogram example on slide 7 (Table 9.1 in
book) can be analyzed to obtain:
n = 100, f₁ = 12, X₁ = 0; f₂ = 10, X₂ = 1; …
and Σ_{j=1}^{k} f_j X_j = 364, and Σ_{j=1}^{k} f_j X_j² = 2080

– The sample mean and variance are:

X̄ = 364/100 = 3.64
S² = [2080 − 100(3.64)²] / 99 = 7.63

– The histogram suggests X to have a Poisson distribution
• However, note that the sample mean is not equal to the sample variance.
• Reason: each estimator is a random variable and is not perfect.

Goodness-of-Fit Tests (1)
• Conduct hypothesis testing on input data distribution using:
– Kolmogorov-Smirnov test
– Chi-square test
• Goodness-of-fit tests provide helpful guidance for evaluating the suitability
of a potential input model
• No single correct distribution in a real application exists.
– If very little data are available, it is unlikely to reject any candidate
distributions
– If a lot of data are available, it is likely to reject all candidate
distributions

Goodness-of-Fit Tests (2):
Chi-Square test (1)
• Intuition: comparing the histogram of the data to the shape of the
candidate density or mass function
• Valid for large sample sizes when parameters are estimated by maximum
likelihood
• By arranging the n observations into a set of k class intervals or cells, the
test statistic is:

χ₀² = Σ_{i=1}^{k} (O_i − E_i)² / E_i

where O_i is the observed frequency in the ith class and E_i = n·p_i is the expected frequency, with p_i the theoretical probability of the ith interval (suggested minimum E_i = 5),
which approximately follows the chi-square distribution with k-s-1 degrees of
freedom, where s = # of parameters of the hypothesized distribution estimated
by the sample statistics.

Chi-Square test (2)
• The hypothesis of a chi-square test is:
• H0: The random variable, X, conforms to the distributional
assumption with the parameter(s) given by the estimate(s).
• H1: The random variable X does not conform.

• If the distribution tested is discrete and combining adjacent cells is not required (so that E_i exceeds the minimum requirement):
– Each value of the random variable should be a class interval, unless
combining is necessary, and

pi = p(xi ) = P(X = xi )

Chi-Square test (3)
• If the distribution tested is continuous:
p_i = ∫_{a_{i−1}}^{a_i} f(x) dx = F(a_i) − F(a_{i−1})
where ai-1 and ai are the endpoints of the ith class interval
and f(x) is the assumed pdf, F(x) is the assumed cdf.

– Recommended number of class intervals (k):

Sample size, n    Number of class intervals, k
20                Do not use the chi-square test
50                5 to 10
100               10 to 20
> 100             n^{1/2} to n/5

– Caution: Different grouping of the data (i.e., different k) can affect the hypothesis-testing result.

Chi-Square test (4)
• Vehicle Arrival Example (continued) (See Slides 7 and 19):
• The histogram on slide 7 appears to be Poisson
• From Slide 19, we find the estimated mean to be 3.64
• Using Poisson pmf:

p(x) = e^{−α} α^x / x!, for x = 0, 1, 2, …; and p(x) = 0 otherwise

• For α = 3.64, the probabilities are:
p(0)=0.026 p(6)=0.085
p(1)=0.096 p(7)=0.044
p(2)=0.174 p(8)=0.020
p(3)=0.211 p(9)=0.008
p(4)=0.192 p(10)=0.003
p(5)=0.140 p(11)=0.001

Chi-Square test (5)
• Vehicle Arrival Example (continued):
• H0: the random variable is Poisson distributed.
• H1: the random variable is not Poisson distributed.

x_i    Observed O_i    Expected E_i    (O_i − E_i)²/E_i
0      12              2.6             } 7.87  (0 and 1 combined)
1      10              9.6
2      19              17.4            0.15
3      17              21.1            0.80
4      10              19.2            4.41
5      8               14.0            2.57
6      7               8.5             0.26
7      5               4.4             } 11.62 (7 through 11 combined
8      5               2.0               because of the minimum E_i)
9      3               0.8
10     3               0.3
>= 11  1               0.1
Total  100             100.0           27.68

where E_i = n·p(x) = n e^{−α} α^x / x!

– The degrees of freedom are k − s − 1 = 7 − 1 − 1 = 5. Since

χ₀² = 27.68 > χ²₀.₀₅,₅ = 11.1

the hypothesis is rejected at the 0.05 level of significance.


Chi-Square test (5)

• Chi-square test can accommodate estimation of


parameters
• Chi-square test requires data be placed in
intervals
• Changing the number of classes and the interval width affects the value of the calculated and tabulated chi-square
• A hypothesis could be accepted if the data were grouped one way and rejected if grouped another way
• The distribution of the chi-square test statistic is known only approximately, so we need other tests
Kolmogorov-Smirnov Test
• Intuition: formalize the idea behind examining a q-q plot
• Recall from Chapter 7.4.1:
– The test compares the continuous cdf, F(x), of the hypothesized
distribution with the empirical cdf, SN(x), of the N sample observations.
– Based on the maximum difference statistics (Tabulated in A.8):
D = max| F(x) - SN(x)|
• A more powerful test, particularly useful when:
– Sample sizes are small,
– No parameters have been estimated from the data.
• When parameter estimates have been made:
– The critical values in Table A.8 are biased (too large), so the test is more conservative.

p-Values and “Best Fits” (1)
• p-value for the test statistics
– The significance level at which one would just reject H0 for the given test
statistic value.
– A measure of fit, the larger the better
– Large p-value: good fit
– Small p-value: poor fit

• Vehicle Arrival Example (cont.):
– H0: the data are Poisson
– Test statistic: χ₀² = 27.68, with 5 degrees of freedom
– p-value = 0.00004, meaning we would reject H0 at the 0.00004
significance level; hence, Poisson is a poor fit.

p-Values and “Best Fits” (2)
• Many software use p-value as the ranking measure to automatically
determine the “best fit”.
– Software could fit every distribution at our disposal, compute the test
statistic for each fit and choose the distribution that yields largest p-
value.
• Things to be cautious about:
– Software may not know about the physical basis of the data, distribution
families it suggests may be inappropriate.
– Close conformance to the data does not always lead to the most
appropriate input model.
– p-value does not say much about where the lack of fit occurs
• Recommended: always inspect the automatic selection using graphical
methods.

Multivariate and Time-Series Input Models
(1)
• Multivariate:
– For example, lead time and annual demand for an inventory model,
increase in demand results in lead time increase, hence variables are
dependent.
• Time-series:
– For example, time between arrivals of orders to buy and sell stocks, buy
and sell orders tend to arrive in bursts, hence, times between arrivals
are dependent.

Co-variance and Correlation are measures of the


linear dependence of random variables

Multivariate and Time-Series Input Models (2):
Covariance and Correlation (1)
• Consider the model that describes the relationship between X₁ and X₂:

(X₁ − μ₁) = β(X₂ − μ₂) + ε

where ε is a random variable with mean 0 that is independent of X₂
– β = 0: X₁ and X₂ are statistically independent
– β > 0: X₁ and X₂ tend to be above or below their means together
– β < 0: X₁ and X₂ tend to be on opposite sides of their means

• Covariance between X₁ and X₂:

cov(X₁, X₂) = E[(X₁ − μ₁)(X₂ − μ₂)] = E(X₁X₂) − μ₁μ₂

– cov(X₁, X₂) = 0 ⇔ β = 0; cov < 0 ⇔ β < 0; cov > 0 ⇔ β > 0
• Covariance can take any value between −∞ and ∞

Multivariate and Time-Series Input Models (2):
Covariance and Correlation (2)
• Correlation normalizes the co-variance to -1 and 1.
• Correlation between X1 and X2 (values between -1 and 1):

ρ = corr(X₁, X₂) = cov(X₁, X₂) / (σ₁σ₂)

– corr(X₁, X₂) = 0 ⇔ β = 0; corr < 0 ⇔ β < 0; corr > 0 ⇔ β > 0
– The closer r is to -1 or 1, the stronger the linear relationship is between
X1 and X2.

Multivariate and Time-Series Input Models (3): Auto
Covariance and Correlation

• A “time series” is a sequence of random variables X₁, X₂, X₃, … that are identically distributed (same mean and variance) but possibly dependent.
– Consider the random variables Xt, Xt+h
– cov(Xt, Xt+h) is called the lag-h autocovariance
– corr(Xt, Xt+h) is called the lag-h autocorrelation
– If the autocovariance value depends only on h and not on t, the time
series is covariance stationary

Multivariate and Time-Series Input Models (4):
Multivariate Input Models (1)
• If X1 and X2 are normally distributed, dependence between them can be
modeled by the bi-variate normal distribution with m1, m2, 12, 22 and
correlation r
– To Estimate m1, m2, 12, 22, see “Parameter Estimation” (Section 9.3.2 in
book)
– To Estimate r, suppose we have n independent and identically
distributed pairs (X11, X21), (X12, X22), … (X1n, X2n), then:

1 n
côv( X 1 , X 2 ) = 
n − 1 j =1
( X 1 j − Xˆ 1 )( X 2 j − Xˆ 2 )

1  n 
= 
 
n − 1  j =1
ˆ ˆ
X 1 j X 2 j − nX 1 X 2 

côv( X 1 , X 2 )
rˆ =
ˆ1ˆ 2 Sample deviation
Multivariate and Time-Series Input Models (4):
Multivariate Input Models (2)
• Algorithm to generate bi-variate normal random variables:
Step 1. Generate Z₁ and Z₂, two independent standard normal random variables (see Slides 38 and 39 of Chapter 8).
Step 2. Set X₁ = μ₁ + σ₁Z₁
Step 3. Set X₂ = μ₂ + σ₂(ρZ₁ + √(1 − ρ²) Z₂)
• Bi-variate is not appropriate for all multivariate-input modeling
problems
• It can be generalized to the k-variate normal distribution to model the
dependence among more than two random variables

Multivariate and Time-Series Input Models (4):
Multivariate Input Models (3)
• Example: X1 is the average lead time to deliver in months and X2 is the
annual demand for industrial robots.
• Data for this in the last 10 years is shown:

Lead time Demand


6.5 103
4.3 83
6.9 116
6.0 97
6.9 112
6.9 104
5.8 106
7.3 109
4.5 92
6.3 96
Multivariate and Time-Series Input Models (4):
Multivariate Input Models (4)
• From this data we can calculate:

X̄₁ = 6.14, σ̂₁ = 1.02; X̄₂ = 101.8, σ̂₂ = 9.93

• The correlation is estimated as:

Σ_{j=1}^{10} X_{1j} X_{2j} = 6328.5

côv = [6328.5 − (10)(6.14)(101.80)] / (10 − 1) = 8.66

ρ̂ = 8.66 / ((1.02)(9.93)) = 0.86

Multivariate and Time-Series Input Models (5): Time-
Series Input Models (1)
• If X1, X2, X3,… is a sequence of identically distributed, but dependent and
covariance-stationary random variables, then we can represent the process
as follows:
– Autoregressive order-1 model, AR(1)
– Exponential autoregressive order-1 model, EAR(1)
• Both have the characteristics that:

r h = corr ( X t , X t + h ) = r h , for h = 1,2,...


• Lag-h autocorrelation decreases geometrically as the lag increases,
hence, observations far apart in time are nearly independent

Multivariate and Time-Series Input Models (5): Time-Series
Input Models (2):AR(1) Time-Series Input Models (1)
• Consider the time-series model:
X_t = μ + φ(X_{t−1} − μ) + ε_t, for t = 2, 3, …
where ε₂, ε₃, … are i.i.d. normally distributed with mean 0 and variance σ_ε²

• If X₁ is chosen appropriately, then
– X₁, X₂, … are normally distributed with mean μ and variance σ_ε²/(1 − φ²)
– Autocorrelation ρ_h = φ^h
• To estimate φ, μ, σ_ε²:

μ̂ = X̄,  σ̂_ε² = σ̂²(1 − φ̂²),  φ̂ = côv(X_t, X_{t+1}) / σ̂²

where côv(X_t, X_{t+1}) is the lag-1 autocovariance

Multivariate and Time-Series Input Models (5): Time-Series
Input Models (2):AR(1) Time-Series Input Models (2)
• Algorithm to generate AR(1) time series:
Step 1. Generate X₁ from the Normal distribution with mean μ and variance σ_ε²/(1 − φ²). Set t = 2.
Step 2. Generate ε_t from the Normal distribution with mean 0 and variance σ_ε².
Step 3. Set X_t = μ + φ(X_{t−1} − μ) + ε_t.
Step 4. Set t = t + 1 and go to Step 2.
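A sketch of the algorithm; the parameter values in the usage line are hypothetical:

```python
import math, random

def ar1(mu, phi, sigma_eps, n):
    """Generate X_1, ..., X_n from the AR(1) model above."""
    x = random.gauss(mu, sigma_eps / math.sqrt(1 - phi ** 2))  # Step 1
    out = [x]
    for _ in range(n - 1):
        eps = random.gauss(0, sigma_eps)      # Step 2
        x = mu + phi * (x - mu) + eps         # Step 3
        out.append(x)
    return out

series = ar1(mu=0.0, phi=0.7, sigma_eps=1.0, n=100)  # hypothetical parameters
```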

Multivariate and Time-Series Input Models (5): Time-Series
Input Models (3):EAR(1) Time-Series Input Models (1)

• Consider the time-series model:


X_t = φX_{t−1}            with probability φ
X_t = φX_{t−1} + ε_t      with probability 1 − φ,  for t = 2, 3, …

where ε₂, ε₃, … are i.i.d. exponentially distributed with mean 1/λ, and 0 ≤ φ < 1
• If X₁ is chosen appropriately, then
– X₁, X₂, … are exponentially distributed with mean 1/λ
– Autocorrelation ρ_h = φ^h; only positive correlation is allowed.
• To estimate φ, λ:

λ̂ = 1/X̄,  φ̂ = ρ̂ = côv(X_t, X_{t+1}) / σ̂²

where côv(X_t, X_{t+1}) is the lag-1 autocovariance

Multivariate and Time-Series Input Models (5): Time-Series
Input Models (3):EAR(1) Time-Series Input Models (2)
• Algorithm to generate EAR(1) time series:
Step 1. Generate X₁ from the exponential distribution with mean 1/λ. Set t = 2.
Step 2. Generate U from the Uniform distribution on [0,1].
Step 3. If U ≤ φ, set X_t = φX_{t−1}. Otherwise, generate ε_t from the exponential distribution with mean 1/λ and set X_t = φX_{t−1} + ε_t.
Step 4. Set t = t + 1 and go to Step 2.
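A sketch of the algorithm; the usage line anticipates the stock-order estimates in the example that follows:

```python
import random

def ear1(lam, phi, n):
    """Generate X_1, ..., X_n from the EAR(1) model above."""
    x = random.expovariate(lam)      # Step 1: X_1 ~ Exp(mean 1/lambda)
    out = [x]
    for _ in range(n - 1):
        if random.random() <= phi:   # with probability phi
            x = phi * x
        else:                        # with probability 1 - phi
            x = phi * x + random.expovariate(lam)
        out.append(x)
    return out

gaps = ear1(lam=1 / 5.2, phi=0.8, n=20)
```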

Multivariate and Time-Series Input Models (5): Time-Series
Input Models (3):EAR(1) Time-Series Input Models (3)
• Example: The stock broker would typically have a large sample of data, but
suppose that the following twenty time gaps between customer buy and sell
orders had been recorded (in seconds): 1.95, 1.75, 1.58, 1.42, 1.28, 1.15, 1.04,
0.93, 0.84, 0.75, 0.68, 0.61, 11.98, 10.79, 9.71, 14.02, 12.62, 11.36, 10.22, 9.20.
Standard calculations give X̄ = 5.2 and σ̂² = 26.7.

• To estimate the lag-1 autocorrelation we need

Σ_{j=1}^{19} X_j X_{j+1} = 924.1

• Thus, côv = [924.1 − (20 − 1)(5.2)²] / (20 − 1) = 21.6 and

ρ̂ = 21.6 / 26.7 = 0.8

• Inter-arrivals are modeled as an EAR(1) process with λ̂ = 1/X̄ = 1/5.2 = 0.192 and φ̂ = 0.8, provided that the exponential distribution is a good model for the individual gaps

Model Building, Verification, and Validation

• Verification: concerned with building the model right. It is utilized in the comparison of the conceptual model to the computer representation that implements that conception. It asks the questions: Is the model implemented correctly in the computer? Are the input parameters and logical structure of the model correctly represented?

Verification and Validation
of Simulation Models (cont.)
• Validation: concerned with building the right model. It is utilized to
determine that a model is an accurate representation of the real system.
Validation is usually achieved through the calibration of the model, an
iterative process of comparing the model to actual system behavior and
using the discrepancies between the two, and the insights gained, to
improve the model. This process is repeated until model accuracy is
judged to be acceptable.

Verification of Simulation Models

Many commonsense suggestions can be given for use in the verification


process.
1. Have the code checked by someone other than the programmer.
2. Make a flow diagram which includes each logically possible action a system
can take when an event occurs, and follow the model logic for each action
for each event type.

Verification of Simulation Models

3. Closely examine the model output for reasonableness under a variety of


settings of the input parameters. Have the code print out a wide variety of
output statistics.
4. Have the computerized model print the input parameters at the end of the
simulation, to be sure that these parameter values have not been changed
inadvertently.

What is “calibration”?

Reducing model error is a process: verification, then calibration, then validation.

• Verification: Has the model been built correctly? Do we know what the model is, such that it can be verified? Is the model statistically significant? Can we even determine significance?
• Calibration: the process by which the analyst selects model parameters that cause the model to best reproduce real-world conditions for a specific application.
• Validation: the process to determine that a model is an accurate representation of the real world.

(Figure: complex reality versus the simplified abstract model; the error between them is reduced through verification, calibration, and validation.)

• Calibration and validation are iterative processes!
What is calibration?
Calibration activities vary by the model used and the user’s tolerance for error:
➢ selection and confirmation of field data
➢ application of a numerical constant
➢ statistical comparison of model to field data
➢ visual inspection
