Department of Computer Science &
Engineering
f(x) = 1, 0 ≤ x ≤ 1
f(x) = 0, otherwise

E(R) = ∫_0^1 x dx = [x²/2]_0^1 = 1/2

Figure: pdf for random numbers
Generation of Pseudo-Random
Numbers
• “Pseudo”, because generating numbers using a known method
removes the potential for true randomness.
• Goal: To produce a sequence of numbers in [0,1] that simulates, or
imitates, the ideal properties of random numbers (RN).
• Important considerations in RN routines:
– Fast
– Portable to different computers
– Have sufficiently long cycle
– Replicable
– Closely approximate the ideal statistical properties of uniformity and
independence.
Techniques for Generating Random
Numbers
• Linear Congruential Method (LCM).
• Combined Linear Congruential Generators (CLCG).
• Random-Number Streams.
Linear Congruential Method
[Techniques]
• To produce a sequence of integers, X1, X2, … between 0 and m − 1,
by following the recursive relationship:
X_{i+1} = (a X_i + c) mod m,  i = 0, 1, 2, …
where X0 is the seed, a is the constant multiplier, c is the increment, and m is the modulus.
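The recursion translates directly into code. A minimal sketch; the constants a, c, and m below are illustrative choices, not values prescribed by the slides:

```python
# Minimal linear congruential method (LCM) sketch.  The constants
# a, c, m are illustrative choices; X0 is the seed.
def lcm_sequence(seed, a=1103515245, c=12345, m=2**31, n=5):
    """Return n pseudo-random numbers R_i = X_i / m in [0, 1)."""
    x, out = seed, []
    for _ in range(n):
        x = (a * x + c) % m      # X_{i+1} = (a X_i + c) mod m
        out.append(x / m)        # scale the integer to [0, 1)
    return out
```

Replicability follows from the fixed seed: the same seed always reproduces the same sequence.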
Characteristics of a Good Generator
[LCM]
• Maximum Density
– Such that the values assumed by R_i, i = 1, 2, …, leave no large gaps on
[0,1]
– Problem: Instead of continuous, each Ri is discrete
– Solution: a very large integer for modulus m
• Approximation appears to be of little consequence
• Maximum Period
– To achieve maximum density and avoid cycling.
– Achieve by: proper choice of a, c, m, and X0.
• Most digital computers use a binary representation of numbers
– Speed and efficiency are aided by a modulus, m, to be (or close to) a
power of 2.
Combined Linear Congruential
Generators
[Techniques]
• Reason: a longer-period generator is needed because of the
increasing complexity of simulated systems.
• Approach: Combine two or more multiplicative congruential
generators.
• Let X_{i,1}, X_{i,2}, …, X_{i,k} be the ith output from k different multiplicative
congruential generators.
– The jth generator:
• Has prime modulus m_j and multiplier a_j, and period m_j − 1
• Produces integers X_{i,j} approximately uniform on the integers in [1, m_j − 1]
• W_{i,j} = X_{i,j} − 1 is approximately uniform on the integers in [0, m_j − 2]
Combined Linear Congruential
Generators
[Techniques]
– Suggested form:
X_i = ( Σ_{j=1}^{k} (−1)^{j−1} X_{i,j} ) mod (m_1 − 1)
Hence,
R_i = X_i / m_1,          if X_i > 0
R_i = (m_1 − 1) / m_1,    if X_i = 0
The coefficient (−1)^{j−1} implicitly performs the subtraction X_{i,1} − 1.
Combined Linear Congruential
Generators
[Techniques]
• Example: For 32-bit computers, L’Ecuyer [1988] suggests combining k = 2
generators with m1 = 2,147,483,563, a1 = 40,014, m2 = 2,147,483,399 and
a2 = 40,692. The algorithm becomes:
Step 1: Select seeds
• X1,0 in the range [1, 2,147,483,562] for the 1st generator
• X2,0 in the range [1, 2,147,483,398] for the 2nd generator.
Step 2: For each individual generator,
X1,j+1 = 40,014 X1,j mod 2,147,483,563
X2,j+1 = 40,692 X2,j mod 2,147,483,399.
Step 3: Xj+1 = (X1,j+1 − X2,j+1) mod 2,147,483,562.
Step 4: Return
R_{j+1} = X_{j+1} / 2,147,483,563,          if X_{j+1} > 0
R_{j+1} = 2,147,483,562 / 2,147,483,563,    if X_{j+1} = 0
Step 5: Set j = j + 1, go back to Step 2.
– The combined generator has period (m1 − 1)(m2 − 1)/2 ≈ 2 × 10^18
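Steps 1 through 5 above can be sketched directly; the constants are the ones stated on the slide, and seed handling is simplified:

```python
# L'Ecuyer's combined generator, following Steps 1-5 above.
def lecuyer(x1, x2, n=1):
    """Return n uniforms in (0, 1); x1, x2 are the two seeds."""
    m1, a1 = 2147483563, 40014
    m2, a2 = 2147483399, 40692
    out = []
    for _ in range(n):
        x1 = (a1 * x1) % m1                 # Step 2, generator 1
        x2 = (a2 * x2) % m2                 # Step 2, generator 2
        x = (x1 - x2) % (m1 - 1)            # Step 3
        out.append(x / m1 if x > 0 else (m1 - 1) / m1)  # Step 4
    return out
```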
Random-Numbers Streams
[Techniques]
• The seed for a linear congruential random-number generator:
– Is the integer value X0 that initializes the random-number sequence.
– Any value in the sequence can be used to “seed” the generator.
• A random-number stream:
– Refers to a starting seed taken from the sequence X0, X1, …, XP.
– If the streams are b values apart, then stream i could be defined by the starting seed:
S_i = X_{b(i−1)}
– Older generators: b = 10^5; newer generators: b = 10^37.
• A single random-number generator with k streams can act like k distinct
virtual random-number generators
• To compare two or more alternative systems.
– Advantageous to dedicate portions of the pseudo-random number sequence to
the same purpose in each of the simulated systems.
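One way to picture stream seeds spaced b values apart is to step a multiplicative generator forward. A sketch only: the constants are borrowed from the L'Ecuyer example for illustration, and real packages jump ahead analytically rather than looping b times:

```python
# Sketch of stream seeds S_i = X_{b(i-1)}: step a multiplicative
# generator forward b times per stream.  Constants are illustrative
# (first L'Ecuyer generator); production libraries jump ahead
# analytically instead of looping.
def stream_seeds(x0, k, b, a=40014, m=2147483563):
    """Return k starting seeds spaced b values apart."""
    seeds, x = [], x0
    for _ in range(k):
        seeds.append(x)
        for _ in range(b):
            x = (a * x) % m
    return seeds
```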
Tests for Random Numbers
• Two categories:
– Testing for uniformity:
H0: R_i ~ U[0,1]
H1: R_i ≁ U[0,1]
• Failure to reject the null hypothesis, H0, means that evidence of non-
uniformity has not been detected.
– Testing for independence:
H0: the R_i are independent
H1: the R_i are not independent
• Failure to reject the null hypothesis, H0, means that evidence of
dependence has not been detected.
• Level of significance α, the probability of rejecting H0 when it is
true: α = P(reject H0 | H0 is true)
Tests for Random Numbers
• When to use these tests:
– If a well-known simulation language or random-number generator is
used, it is probably unnecessary to apply the tests
– If the generator is not explicitly known or documented (e.g.,
spreadsheet programs, symbolic/numerical calculators), the tests should
be applied to many sample numbers.
• Types of tests:
– Theoretical tests: evaluate the choices of m, a, and c without actually
generating any numbers
– Empirical tests: applied to actual sequences of numbers produced.
Our emphasis.
Frequency Tests [Tests for RN]
• Test of uniformity
• Two different methods:
– Kolmogorov-Smirnov test
– Chi-square test
Kolmogorov-Smirnov Test
[Frequency Test]
• Example: Suppose 5 generated numbers are 0.44, 0.81, 0.14, 0.05, 0.93.
Step 1: Arrange R(i) from smallest to largest:
R(i)             0.05  0.14  0.44  0.81  0.93
i/N              0.20  0.40  0.60  0.80  1.00
Step 2:
i/N − R(i)       0.15  0.26  0.16   –    0.07     D+ = max{i/N − R(i)} = 0.26
R(i) − (i−1)/N   0.05   –    0.04  0.21  0.13     D− = max{R(i) − (i−1)/N} = 0.21
Step 3: D = max(D+, D−) = 0.26
Step 4: For α = 0.05, the critical value D_0.05 = 0.565 > D, so the hypothesis of uniformity is not rejected.
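Steps 1 through 3 can be checked in a few lines; the function below recomputes D for the slide's five numbers:

```python
def ks_statistic(sample):
    """K-S statistic D for H0: R ~ U[0, 1] (Steps 1-3 above)."""
    r = sorted(sample)                                   # Step 1
    n = len(r)
    d_plus = max((i + 1) / n - r[i] for i in range(n))   # Step 2
    d_minus = max(r[i] - i / n for i in range(n))
    return max(d_plus, d_minus)                          # Step 3

# D = 0.26 < D_0.05 = 0.565, so uniformity is not rejected (Step 4)
print(round(ks_statistic([0.44, 0.81, 0.14, 0.05, 0.93]), 2))  # 0.26
```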
Chi-square test [Frequency Test]
Tests for Autocorrelation [Tests
for RN]
Example [Test for Autocorrelation]
• Test whether the 3rd, 8th, 13th, and so on, numbers in the sequence are
autocorrelated, for the output on p. 265.
– Hence, α = 0.05, i = 3, m = 5, N = 30, and M = 4
– From Table A.3, z_0.025 = 1.96. Hence, the hypothesis is not rejected.
Shortcomings [Test for Autocorrelation]
Summary
• In this chapter, we described:
– Generation of random numbers
– Testing for uniformity and independence
• Caution:
– Even generators that have been used for years, some of which are
still in use, have been found to be inadequate.
– This chapter provides only the basics.
– Also, even if generated numbers pass all the tests, some underlying
pattern might have gone undetected.
Inverse-transform Technique
• The concept:
– For the cdf: r = F(x)
– Generate r from Uniform(0,1)
– Find x: x = F⁻¹(r)
Figure: the inverse-transform idea, mapping r1 on the vertical axis to x1 on the horizontal axis.
Exponential Distribution [Inverse-
transform]
• Exponential Distribution:
– Exponential cdf:
r = F(x) = 1 − e^(−λx), for x ≥ 0
– Inverse:
X_i = F⁻¹(R_i) = −(1/λ) ln(1 − R_i)   [Eq’n 8.3]
Figure: Inverse-transform
technique for exp(λ = 1)
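Eq'n 8.3 translates directly into code; the function name below is ours, not the book's:

```python
import math, random

def exp_variate(lam, rng=random):
    """X = -(1/lambda) ln(1 - R), with R ~ U(0, 1)   [Eq'n 8.3]."""
    r = rng.random()
    return -math.log(1.0 - r) / lam
```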
Exponential Distribution [Inverse-
transform]
P(X1 ≤ x0) = P(R1 ≤ F(x0)) = F(x0)
Other Distributions [Inverse-
transform]
Empirical Continuous Dist’n [Inverse-
transform]
• When a theoretical distribution is not applicable
• To collect empirical data:
– Resample the observed data
– Interpolate between observed data points to fill in the gaps
• For a small sample set (size n):
– Arrange the data from smallest to largest
x_(1) ≤ x_(2) ≤ … ≤ x_(n)
X = F̂⁻¹(R) = x_(i−1) + a_i ( R − (i − 1)/n )
where
a_i = ( x_(i) − x_(i−1) ) / ( i/n − (i − 1)/n ) = ( x_(i) − x_(i−1) ) / (1/n)
Empirical Continuous Dist’n [Inverse-
transform]
• Example: Suppose the data collected for 100 broken-widget
repair times are:

i   Interval (Hours)   Frequency   Relative Frequency   Cumulative Frequency, c_i   Slope, a_i
1   0.25 ≤ x ≤ 0.5     31          0.31                 0.31                        0.81
2   0.5 ≤ x ≤ 1.0      10          0.10                 0.41                        5.0
3   1.0 ≤ x ≤ 1.5      25          0.25                 0.66                        2.0
4   1.5 ≤ x ≤ 2.0      34          0.34                 1.00                        1.47

Consider R1 = 0.83: since c3 = 0.66 < 0.83 ≤ c4 = 1.00,
X1 = 1.5 + 1.47(0.83 − 0.66) ≈ 1.75 hours.
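A sketch of the table lookup. The endpoint, cumulative-frequency, and slope lists are copied from the table; since the slopes are the table's rounded values, results agree with hand computation only to about two decimals:

```python
# Inverse-cdf lookup for the repair-time table above.
ENDPOINTS = [0.25, 0.5, 1.0, 1.5, 2.0]    # interval boundaries (hours)
CUM_FREQ  = [0.0, 0.31, 0.41, 0.66, 1.00] # cumulative frequencies c_i
SLOPES    = [0.81, 5.0, 2.0, 1.47]        # slopes a_i (rounded, as tabled)

def empirical_inverse(r):
    """X = x_{i-1} + a_i (R - c_{i-1}) for the interval containing R."""
    for i in range(1, len(CUM_FREQ)):
        if r <= CUM_FREQ[i]:
            return ENDPOINTS[i - 1] + SLOPES[i - 1] * (r - CUM_FREQ[i - 1])
    raise ValueError("R must lie in (0, 1]")

print(round(empirical_inverse(0.83), 2))  # 1.75, matching the example
```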
Discrete Distribution [Inverse-
transform]
Discrete Distribution [Inverse-
transform]
Procedure (generating a variate uniform on [1/4, 1]):
Step 1. Generate R ~ U[0,1].
Step 2a. If R ≥ 1/4, accept X = R.
Step 2b. If R < 1/4, reject R and return to Step 1.
• R does not have the desired distribution, but R conditioned on
the event {R ≥ 1/4} (call it R′) does.
• Efficiency: depends heavily on the ability to minimize the number
of rejections.
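The procedure can be sketched as a loop:

```python
import random

def uniform_quarter_to_one(rng=random):
    """Accept R ~ U[0,1] only when R >= 1/4; the result is U[1/4, 1]."""
    while True:
        r = rng.random()          # Step 1
        if r >= 0.25:             # Step 2a: accept
            return r
        # Step 2b: reject and loop back to Step 1
```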
NSPP
[Acceptance-Rejection]
• Non-stationary Poisson Process (NSPP): a Poisson arrival process
with an arrival rate that varies with time
• Idea behind thinning:
– Generate a stationary Poisson arrival process at the fastest rate, λ* =
max λ(t)
– But “accept” only a portion of the arrivals, thinning out just enough to get
the desired time-varying rate:
Step 1. Generate E ~ Exp(λ*) and set t = t + E.
Step 2. Generate R ~ U[0,1]. If R ≤ λ(t)/λ*, accept t as an arrival time; otherwise reject it.
Step 3. Return to Step 1.
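A sketch of thinning, assuming `rate` is a user-supplied function of t that `rate_max` bounds from above:

```python
import math, random

def nspp_thinning(rate, rate_max, horizon, rng=random):
    """Thinning: candidate arrivals at rate lambda* = rate_max,
    each accepted with probability rate(t) / rate_max."""
    t, arrivals = 0.0, []
    while True:
        t += -math.log(1.0 - rng.random()) / rate_max   # E ~ Exp(lambda*)
        if t > horizon:
            return arrivals
        if rng.random() <= rate(t) / rate_max:          # accept ("keep")
            arrivals.append(t)
```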
Direct Transformation [Special Properties]
In polar coordinates:
Z1 = B cos φ
Z2 = B sin φ
where B = (−2 ln R1)^(1/2) and φ = 2π R2 yield two independent standard normal variates (the Box-Muller method).
Data Collection
• One of the biggest tasks in solving a real problem. GIGO – garbage-in-
garbage-out
• Suggestions that may enhance and facilitate data collection:
– Plan ahead: begin by a practice or pre-observing session, watch for
unusual circumstances
– Analyze the data as it is being collected: check adequacy
– Combine homogeneous data sets, e.g. successive time periods, during
the same time period on successive days
– Be aware of data censoring: the quantity is not observed in its entirety,
danger of leaving out long process times
– Check for relationship between variables, e.g. build scatter diagram
– Check for autocorrelation
– Collect input data, not performance data
Identifying the Distribution (1): Histograms (1)
• A frequency distribution or histogram is useful in determining
the shape of a distribution
• The number of class intervals depends on:
– The number of observations
– The dispersion of the data
– Suggested: a number of intervals equal to the square root of the sample
size works well in practice
• If the intervals are too wide, the histogram will be coarse or blocky and
its shape and other details will not show well
• If the intervals are too narrow, the histogram will be ragged and will
not smooth the data
Identifying the Distribution (1): Histograms (2)
• For continuous data:
– Corresponds to the probability density
function of a theoretical distribution
– A line drawn through the center of each
class interval’s frequency should result in
a shape like that of the pdf
• For discrete data:
– Corresponds to the probability mass
function
• If few data points are available: combine
adjacent cells to eliminate the ragged
appearance of the histogram
Identifying the Distribution (2): Selecting the
Family of Distributions (1)
• A family of distributions is selected based on:
– The context of the input variable
– Shape of the histogram
• The purpose of preparing a histogram is to infer a known pdf or
pmf
• Frequently encountered distributions:
– Easier to analyze: exponential, normal and Poisson
– Harder to analyze: beta, gamma and Weibull
Identifying the Distribution (2): Selecting the
Family of Distributions (2)
• Use the physical basis of the distribution as a guide, for example:
– Binomial: # of successes in n trials
– Poisson: # of independent events that occur in a fixed amount of time
or space
– Normal: dist’n of a process that is the sum of a number of component
processes
– Exponential: time between independent events, or a process time that
is memoryless
– Weibull: time to failure for components
– Discrete or continuous uniform: models complete uncertainty. All
outcomes are equally likely.
– Triangular: a process for which only the minimum, most likely, and
maximum values are known. Improvement over uniform.
– Empirical: resamples from the actual data collected
Identifying the Distribution (2): Selecting the
Family of Distributions (3)
• Do not ignore the physical characteristics of the process
– Is the process naturally discrete or continuous valued?
– Is it bounded or is there no natural bound?
• No “true” distribution for any stochastic input process
• Goal: obtain a good approximation that yields useful results from the
simulation experiment.
Identifying the Distribution (3): Quantile-
Quantile Plots (1)
• Q-Q plot is a useful tool for evaluating distribution fit
• If X is a random variable with cdf F, then the q-quantile of X is the γ such that
F(γ) = P(X ≤ γ) = q, for 0 < q < 1. When F has an inverse, γ = F⁻¹(q).
(By a quantile we mean the fraction, or percent, of points below the given value:
percentiles are 100-quantiles, deciles 10-quantiles, quintiles 5-quantiles, quartiles 4-quantiles.)
• Let {x_i, i = 1, 2, …, n} be a sample of data from X and {y_j, j = 1, 2, …, n} be the
observations in ascending order. The Q-Q plot is based on the fact that y_j is an
estimate of the (j − 0.5)/n quantile of X; that is,
y_j is approximately F⁻¹( (j − 0.5)/n )
where j is the ranking or order number.
Identifying the Distribution (3): Quantile-
Quantile Plots (2)
• The plot of yj versus F-1( (j-0.5)/n) is
– Approximately a straight line if F is a member of an appropriate family
of distributions
– The line has slope 1 if F is a member of an appropriate family of
distributions with appropriate parameter values
– If the assumed distribution is inappropriate, the points will deviate
from a straight line
– The decision about whether to reject some hypothesized model is
subjective!!
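Computing the plotting pairs against a standard normal can be sketched with the standard library; the actual plotting is left out:

```python
from statistics import NormalDist

def qq_points(sample):
    """Pairs (y_j, F^{-1}((j - 0.5)/n)) against the standard normal;
    near-linearity of these points supports the normal hypothesis."""
    y = sorted(sample)               # order the observations
    n = len(y)
    inv = NormalDist().inv_cdf       # F^{-1} for N(0, 1)
    return [(y[j], inv((j + 0.5) / n)) for j in range(n)]
```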
Identifying the Distribution (3): Quantile-
Quantile Plots (3)
• Example: Check whether the door installation times given below follows a
normal distribution.
– The observations are now ordered from smallest to largest:
Identifying the Distribution (3): Quantile-
Quantile Plots (4)
• Example (continued): Check whether the door installation times follow a
normal distribution.
Figure: the plotted points fall close to a straight line, supporting the
hypothesis of a normal distribution; the density function of the fitted
normal distribution is superimposed.
Identifying the Distribution (3): Quantile-
Quantile Plots (5)
• Consider the following while evaluating the linearity of a Q-Q plot:
– The observed values never fall exactly on a straight line
– The ordered values are ranked and hence not independent, unlikely for
the points to be scattered about the line
– Variance of the extremes is higher than the middle. Linearity of the
points in the middle of the plot is more important.
• Q-Q plot can also be used to check homogeneity
– Check whether a single distribution can represent two sample sets
– Plot the ordered values of the two data samples against each other; an
approximately straight line shows that both sample sets are represented
by the same distribution
Parameter Estimation (1)
• Next step after selecting a family of distributions
• If observations in a sample of size n are X1, X2, …, Xn (discrete or
continuous), the sample mean and variance are defined as:
X̄ = ( Σ_{i=1}^{n} X_i ) / n        S² = ( Σ_{i=1}^{n} X_i² − n X̄² ) / (n − 1)
• If the data are discrete and have been grouped in a frequency distribution:
X̄ = ( Σ_j f_j X_j ) / n        S² = ( Σ_j f_j X_j² − n X̄² ) / (n − 1)
where f_j is the observed frequency of value X_j
Parameter Estimation (2)
• When raw data are unavailable (data are grouped into class intervals), the
approximate sample mean and variance are:
X̄ = ( Σ_{j=1}^{c} f_j m_j ) / n        S² = ( Σ_{j=1}^{c} f_j m_j² − n X̄² ) / (n − 1)
• where f_j is the observed frequency in the jth class interval, m_j is the midpoint of the
jth interval, and c is the number of class intervals
Parameter Estimation (3) Suggested Estimators
Distribution    Parameter(s)    Suggested Estimator
Poisson         α               α̂ = X̄
Exponential     λ               λ̂ = 1 / X̄
Normal          μ, σ²           μ̂ = X̄,  σ̂² = S²  (unbiased)
Parameter Estimation (4)
• Vehicle Arrival Example (continued): the table in the histogram example on slide 7 (Table 9.1 in
book) can be analyzed to obtain:
n = 100, f1 = 12, X1 = 0, f2 = 10, X2 = 1, …,
and Σ_{j=1}^{k} f_j X_j = 364, Σ_{j=1}^{k} f_j X_j² = 2080
X̄ = 364 / 100 = 3.64
S² = ( 2080 − 100 (3.64)² ) / 99 = 7.63
– The histogram suggests X to have a Poisson distribution
• However, note that the sample mean is not equal to the sample variance.
• Reason: each estimator is a random variable and is not perfect.
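Plugging the slide's sums into the grouped-data formulas (the individual frequencies are in Table 9.1 and not reproduced here):

```python
# Vehicle-arrival sums from the slide, plugged into the grouped formulas.
n, sum_fx, sum_fx2 = 100, 364, 2080
mean = sum_fx / n                          # X-bar = 364/100
var = (sum_fx2 - n * mean ** 2) / (n - 1)  # (2080 - 100*3.64^2)/99
print(round(mean, 2), round(var, 2))  # 3.64 7.63
```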
Goodness-of-Fit Tests (1)
• Conduct hypothesis testing on input data distribution using:
– Kolmogorov-Smirnov test
– Chi-square test
• Goodness-of-fit tests provide helpful guidance for evaluating the suitability
of a potential input model
• No single correct distribution in a real application exists.
– If very little data are available, it is unlikely to reject any candidate
distributions
– If a lot of data are available, it is likely to reject all candidate
distributions
Goodness-of-Fit Tests (2):
Chi-Square test (1)
• Intuition: comparing the histogram of the data to the shape of the
candidate density or mass function
• Valid for large sample sizes when parameters are estimated by maximum
likelihood
• By arranging the n observations into a set of k class intervals or cells, the
test statistic is:
χ0² = Σ_{i=1}^{k} (O_i − E_i)² / E_i
where O_i is the observed frequency in the ith class interval and E_i = n·p_i is the
expected frequency, with p_i the theoretical probability of the ith interval
(suggested minimum expected frequency: 5).
The statistic approximately follows the chi-square distribution with k − s − 1 degrees of
freedom, where s = # of parameters of the hypothesized distribution estimated
by the sample statistics.
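The statistic itself is a one-liner; the observed and expected counts below are made-up toy numbers, not the vehicle example:

```python
def chi_square_stat(observed, expected):
    """chi0^2 = sum over the k cells of (O_i - E_i)^2 / E_i."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# toy counts, purely for illustration
print(round(chi_square_stat([50, 30, 20], [40, 35, 25]), 2))  # 4.21
```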
Chi-Square test (2)
• The hypothesis of a chi-square test is:
• H0: The random variable, X, conforms to the distributional
assumption with the parameter(s) given by the estimate(s).
• H1: The random variable X does not conform.
– If the distribution being tested is discrete: p_i = p(x_i) = P(X = x_i)
Chi-Square test (3)
• If the distribution tested is continuous:
p_i = ∫_{a_{i−1}}^{a_i} f(x) dx = F(a_i) − F(a_{i−1})
where a_{i−1} and a_i are the endpoints of the ith class interval,
f(x) is the assumed pdf, and F(x) is the assumed cdf.
– Caution: a different grouping of the data (i.e., a different k) can affect the
hypothesis-testing result. A common recommendation: for n = 100, use 10 to
20 class intervals; for n > 100, use between n^(1/2) and n/5.
Chi-Square test (4)
• Vehicle Arrival Example (continued) (See Slides 7 and 19):
• The histogram on slide 7 appears to be Poisson
• From Slide 19, we find the estimated mean to be 3.64
• Using the Poisson pmf:
p(x) = e^(−α) α^x / x!,  x = 0, 1, 2, …;  p(x) = 0 otherwise
• For α = 3.64, the probabilities are:
p(0) = 0.026   p(6) = 0.085
p(1) = 0.096   p(7) = 0.044
p(2) = 0.174   p(8) = 0.020
p(3) = 0.211   p(9) = 0.008
p(4) = 0.192   p(10) = 0.003
p(5) = 0.140   p(11) = 0.001
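The tabled values can be reproduced directly from the pmf:

```python
import math

def poisson_pmf(x, alpha):
    """p(x) = e^(-alpha) * alpha^x / x!"""
    return math.exp(-alpha) * alpha ** x / math.factorial(x)

print([round(poisson_pmf(x, 3.64), 3) for x in range(4)])
# [0.026, 0.096, 0.174, 0.211], matching the slide's p(0)..p(3)
```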
Chi-Square test (5)
• Vehicle Arrival Example (continued):
• H0: the random variable is Poisson distributed.
• H1: the random variable is not Poisson distributed.
p-Values and “Best Fits” (1)
• p-value for the test statistics
– The significance level at which one would just reject H0 for the given test
statistic value.
– A measure of fit, the larger the better
– Large p-value: good fit
– Small p-value: poor fit
p-Values and “Best Fits” (2)
• Many software packages use the p-value as the ranking measure to
automatically determine the “best fit”.
– Software could fit every distribution at our disposal, compute the test
statistic for each fit and choose the distribution that yields largest p-
value.
• Things to be cautious about:
– Software may not know about the physical basis of the data, distribution
families it suggests may be inappropriate.
– Close conformance to the data does not always lead to the most
appropriate input model.
– p-value does not say much about where the lack of fit occurs
• Recommended: always inspect the automatic selection using graphical
methods.
Multivariate and Time-Series Input Models
(1)
• Multivariate:
– For example, lead time and annual demand for an inventory model,
increase in demand results in lead time increase, hence variables are
dependent.
• Time-series:
– For example, time between arrivals of orders to buy and sell stocks, buy
and sell orders tend to arrive in bursts, hence, times between arrivals
are dependent.
Multivariate and Time-Series Input Models (2):
Covariance and Correlation (1)
• Consider the model that describes the relationship between X1 and X2:
(X1 − μ1) = β (X2 − μ2) + ε, where ε is a random variable with mean 0 that is independent of X2
– β = 0: X1 and X2 are statistically independent
– β > 0: X1 and X2 tend to be above or below their means together
– β < 0: X1 and X2 tend to be on opposite sides of their means
• cov(X1, X2) = E[(X1 − μ1)(X2 − μ2)] = E(X1 X2) − μ1 μ2
– cov(X1, X2) = 0 means β = 0; cov < 0 means β < 0; cov > 0 means β > 0
• Covariance can take any value between −∞ and ∞
Multivariate and Time-Series Input Models (2):
Covariance and Correlation (2)
• Correlation normalizes the covariance to the range [−1, 1]:
ρ = corr(X1, X2) = cov(X1, X2) / (σ1 σ2)
– corr(X1, X2) = 0 means β = 0; corr < 0 means β < 0; corr > 0 means β > 0
– The closer ρ is to −1 or 1, the stronger the linear relationship between
X1 and X2.
Multivariate and Time-Series Input Models (3): Auto
Covariance and Correlation
Multivariate and Time-Series Input Models (4):
Multivariate Input Models (1)
• If X1 and X2 are normally distributed, dependence between them can be
modeled by the bivariate normal distribution with μ1, μ2, σ1², σ2², and
correlation ρ
– To estimate μ1, μ2, σ1², σ2², see “Parameter Estimation” (Section 9.3.2 in
book)
– To estimate ρ, suppose we have n independent and identically
distributed pairs (X11, X21), (X12, X22), …, (X1n, X2n); then:
ĉov(X1, X2) = (1/(n−1)) Σ_{j=1}^{n} (X1j − X̄1)(X2j − X̄2)
            = (1/(n−1)) ( Σ_{j=1}^{n} X1j X2j − n X̄1 X̄2 )
ρ̂ = ĉov(X1, X2) / (σ̂1 σ̂2), where σ̂1 and σ̂2 are the sample standard deviations.
Multivariate and Time-Series Input Models (4):
Multivariate Input Models (2)
• Algorithm to generate bivariate normal random variables:
1. Generate Z1 and Z2, two independent standard normal random variables
(see Slides 38 and 39 of Chapter 8)
2. Set X1 = μ1 + σ1 Z1
3. Set X2 = μ2 + σ2 ( ρ Z1 + √(1 − ρ²) Z2 )
• The bivariate normal is not appropriate for all multivariate-input modeling
problems
• It can be generalized to the k-variate normal distribution to model the
dependence among more than two random variables
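The three steps translate directly; function and parameter names are ours:

```python
import random

def bivariate_normal(mu1, mu2, s1, s2, rho, rng=random):
    """X1 = mu1 + s1*Z1;  X2 = mu2 + s2*(rho*Z1 + sqrt(1-rho^2)*Z2)."""
    z1, z2 = rng.gauss(0, 1), rng.gauss(0, 1)   # independent N(0, 1)
    x1 = mu1 + s1 * z1
    x2 = mu2 + s2 * (rho * z1 + (1 - rho ** 2) ** 0.5 * z2)
    return x1, x2
```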
Multivariate and Time-Series Input Models (4):
Multivariate Input Models (3)
• Example: X1 is the average lead time to deliver (in months) and X2 is the
annual demand for industrial robots. Data for the last 10 years give, among
other sums,
Σ_{j=1}^{10} X1j X2j = 6328.5
Multivariate and Time-Series Input Models (5): Time-
Series Input Models (1)
• If X1, X2, X3,… is a sequence of identically distributed, but dependent and
covariance-stationary random variables, then we can represent the process
as follows:
– Autoregressive order-1 model, AR(1)
– Exponential autoregressive order-1 model, EAR(1)
• Both have the characteristics that:
Multivariate and Time-Series Input Models (5): Time-Series
Input Models (2):AR(1) Time-Series Input Models (1)
• Consider the time-series model:
X_t = μ + φ ( X_{t−1} − μ ) + ε_t, for t = 2, 3, …
where ε_2, ε_3, … are i.i.d. normally distributed with mean 0 and variance σ_ε²
• Parameter estimates:
μ̂ = X̄,  σ̂_ε² = σ̂² (1 − φ̂²),  φ̂ = ĉov(X_t, X_{t+1}) / σ̂²
where ĉov(X_t, X_{t+1}) is the lag-1 autocovariance
Multivariate and Time-Series Input Models (5): Time-Series
Input Models (2):AR(1) Time-Series Input Models (2)
• Algorithm to generate AR(1) time series:
1. Generate X1 from a normal distribution with mean μ and variance
σ_ε² / (1 − φ²). Set t = 2.
2. Generate ε_t from a normal distribution with mean 0 and variance σ_ε².
3. Set X_t = μ + φ (X_{t−1} − μ) + ε_t.
4. Set t = t + 1 and go to Step 2.
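The four steps can be sketched as a generator of the whole series:

```python
import random

def ar1_series(mu, phi, sigma_eps, n, rng=random):
    """AR(1): X_t = mu + phi*(X_{t-1} - mu) + eps_t."""
    # Step 1: stationary start, variance sigma_eps^2 / (1 - phi^2)
    x = rng.gauss(mu, sigma_eps / (1.0 - phi ** 2) ** 0.5)
    series = [x]
    for _ in range(n - 1):
        eps = rng.gauss(0.0, sigma_eps)   # Step 2
        x = mu + phi * (x - mu) + eps     # Step 3
        series.append(x)                  # Step 4: advance t
    return series
```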
Multivariate and Time-Series Input Models (5): Time-Series
Input Models (3):EAR(1) Time-Series Input Models (1)
• Parameter estimates:
λ̂ = 1 / X̄,  φ̂ = ρ̂ = ĉov(X_t, X_{t+1}) / σ̂²
where ĉov(X_t, X_{t+1}) is the lag-1 autocovariance
Multivariate and Time-Series Input Models (5): Time-Series
Input Models (3):EAR(1) Time-Series Input Models (2)
• Algorithm to generate an EAR(1) time series:
1. Generate X1 from an exponential distribution with mean 1/λ. Set t = 2.
2. Generate U from a uniform distribution on [0,1].
3. If U ≤ φ, set X_t = φ X_{t−1}.
Otherwise, generate ε_t from the exponential distribution with mean 1/λ
and set X_t = φ X_{t−1} + ε_t.
4. Set t = t + 1 and go to Step 2.
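A sketch of the procedure: with probability φ the new value is just φX_{t-1}, otherwise an exponential increment is added:

```python
import math, random

def ear1_series(lam, phi, n, rng=random):
    """EAR(1): X_t = phi*X_{t-1} with prob. phi, otherwise
    X_t = phi*X_{t-1} + eps_t with eps_t ~ Exp(mean 1/lam)."""
    exp_var = lambda: -math.log(1.0 - rng.random()) / lam
    x = exp_var()                 # Step 1: X_1 ~ Exp(mean 1/lam)
    series = [x]
    for _ in range(n - 1):
        x = phi * x               # both branches start from phi*X_{t-1}
        if rng.random() > phi:    # with prob. 1 - phi, add the increment
            x += exp_var()
        series.append(x)
    return series
```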
Multivariate and Time-Series Input Models (5): Time-Series
Input Models (3):EAR(1) Time-Series Input Models (3)
• Example: A stock broker would typically have a large sample of data, but
suppose that the following twenty time gaps between customer buy and sell
orders had been recorded (in seconds): 1.95, 1.75, 1.58, 1.42, 1.28, 1.15, 1.04,
0.93, 0.84, 0.75, 0.68, 0.61, 11.98, 10.79, 9.71, 14.02, 12.62, 11.36, 10.22, 9.20.
• Standard calculations give X̄ = 5.2 and σ̂² = 26.7. To estimate the lag-1
autocorrelation we need the lag-1 autocovariance over the 19 consecutive pairs,
which works out to 21.6; hence ρ̂ = 21.6 / 26.7 = 0.8.
• The inter-arrival times are modeled as an EAR(1) process with λ̂ = 1/5.2 = 0.192
and φ̂ = 0.8, provided that the exponential distribution is a good model for the
individual gaps.
Model Building
Verification and Validation
of Simulation Models (cont.)
• Validation: concerned with building the right model. It is utilized to
determine that a model is an accurate representation of the real system.
Validation is usually achieved through the calibration of the model, an
iterative process of comparing the model to actual system behavior and
using the discrepancies between the two, and the insights gained, to
improve the model. This process is repeated until model accuracy is
judged to be acceptable.
Verification of Simulation Models
What is “calibration”?
• Reducing model error is a process involving verification, calibration, and
validation.
• Calibration: the process by which the analyst selects model parameters that
cause the model to best reproduce real-world conditions for a specific
application.
• Validation: the process to determine that a model is an accurate
representation of the real world.
Figure: complex reality versus the simplified abstract model, with the error
between them reduced through verification, calibration, and validation.