Week 9 and 10


ICS 810 MODELLING AND SIMULATION NOTES

Mainly based on Stewart Robinson (2004). Simulation: The Practice of
Model Development and Use

Experiments, Implementation
Week 9 and Week 10
The aim of experimentation

To obtain a better understanding of the real world system


that is being modeled and look for ways to improve that
system.
➔The experimentation should be carried out correctly;
➔The results that are obtained from the model should be

accurate;
➔The search for a better understanding and improvements

should be performed efficiently and effectively.


Elisha Opiyo, SCI, UON, May-August 2011

The Nature of Simulation Models


Terminating and non-terminating simulations
Terminating simulation
One in which there is a natural end point that
determines the length of a run.

The end point can be where the model reaches an empty
condition, e.g.:
●A bank that closes at the end of a day;
●The end of the busy lunch period in a supermarket;

●The completion of a trace of input data, e.g. the completion

of a production schedule.

The Nature of Simulation Models


Terminating and non-terminating simulations

Non-terminating simulation
One that does not have a natural end point.

Example
A model of a production facility that aims
to determine its throughput capability.

The Nature of Simulation Output

Transient output
Is one for which the distribution of the output
constantly changes.
Transient outputs arise from terminating
simulations.

Example
The number of customers served each hour in a bank. This is
different day by day.

The Nature of Simulation Outputs

Steady-state output
This is one in which, once a steady state is reached, the output
varies according to some fixed distribution (the steady-state
distribution).

Example
A production facility;
Throughput varies daily due to breakdowns, changeovers

and other interruptions;


In the long run, the throughput capability (the mean
throughput level) remains constant;
In a steady state, the variability about the mean also
remains constant.
The Nature of Simulation Outputs
Steady-state output
Initial transient
➢This is a period in simulation when the model starts at a low
level and gradually builds up to its steady-state level.

➢It occurs at the start of a run (hence "initial") while the
distribution of the output is still changing (hence "transient").

Initialization bias
This is the bias on the output caused by including the
unrealistic initial transient data.

Initial data should be ignored until the steady-state is


reached.
The Nature of Simulation Outputs
Steady-state cycle output
These are cycles in the output that arise from interactions
between system components, for example where a facility
operates two shifts.

Handle it by lengthening the observation interval in


the time-series to the length of the longest cycle.

Example
Instead of recording hourly throughput or
throughput by shift in the production the data can be
recorded daily.
Issues in Obtaining Accurate Simulation Results
Inaccurate data → inaccurate prediction of the real
system performance.

The main aim of simulation output analysis


Obtain an accurate estimate of the average performance.

Two options for ensuring the accuracy of the


estimates
➔Remove any initialization bias;

➔Generate enough output data from the simulation

so that an accurate estimate of performance is made.



Obtaining sufficient output data: long runs

Get enough output data from the


simulation by: -
✔ Performing a single long run with the
model, the only option for a non-
terminating simulation;

✔ Performing multiple replications, the


only option for terminating simulations.



Obtaining sufficient output data: long runs

Replication
●This is a run of a simulation model that uses specified sets of
random numbers, which in turn cause a specific sequence of
random events;

To get a new replication: use a different random number set;


●What to do: perform multiple replications and take mean


value of the results.



Dealing with Initialization Bias: Warm-up and Initial
Conditions
Determining the warm-up period
The warm up period should be long enough to ensure
that the model is in a realistic condition.

For a non-terminating simulation go past the initial


transient period so that the model output is in a
steady state.

●There is need to have the model in a realistic


condition.



Dealing with Initialization Bias: Warm-up and Initial
Conditions
Determining the warm-up period
The methods include:
Graphical methods: involve the visual inspection of

time-series of the output data;


Heuristics approaches: apply simple rules with few

underlying assumptions;
Statistical methods: rely upon the principles of

statistics for determining the warm up period;


Initialization bias tests: identify whether there is any

initialization bias in the data;


Hybrid methods: these involve a combination of

graphical or heuristic methods with an initialization


bias test.
Dealing with Initialization Bias: Warm-up and Initial
Conditions
Determining the warm-up period
➔Each of the methods above has limitations;

➔There is no single one method that can be


recommended for all circumstances;

➔They either overestimate or underestimate the


length of the initial transient;

➔They may also rely on very restrictive assumptions
or use highly complex statistical
procedures.
Determining the warm-up period
Time-series inspection
This is a simple method for identifying the warm-up period;

Limitation:- for a single run with very noisy data it can be
hard to spot any initialization bias;

Alternatively:- run a series of replications and use the mean
values of the replications to plot a time-series;

At least five replications should be performed, more may be


required for very noisy data;

More replications smooth the time-series, as outliers are
subsumed into the calculation of the mean for each period.
Welch’s method of determining the warm up period
With Welch’s method, the moving averages are calculated
and plotted. The following steps are used:
1)Perform a series of replications (at least five)
to obtain time-series of the output data.
2)Calculate the mean of the output data across the
replications for each period (Yi).
3)Calculate a moving average based on a window
size w (start with w = 5).
4)Plot the moving average on a time-series.
5)Are the data smooth? If not, increase the size of
the window (w) and return to the previous two steps.
6)Identify the warm-up period as the point
where the time-series becomes flat.
The moving averages are calculated using the
following formula:

Yi(w) = [ Σ Y(i+s) ] / (2w + 1),  s = −w, ..., +w,  for i = w+1, ..., m−w
Yi(w) = [ Σ Y(i+s) ] / (2i − 1),  s = −(i−1), ..., i−1,  for i = 1, ..., w

where: Yi(w) = moving average of window size w;
Yi = time-series of output data (mean of the replications);
i = period number; m = number of periods in the simulation run
Welch’s moving average with w=5 [Robinson, 2004, p. 148]
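The procedure above can be sketched in a few lines of Python. This is a minimal illustration; the function name and the data are made up for the example, not taken from the notes.

```python
# Minimal sketch of Welch's moving average: average the replications
# per period first, then smooth with a window of size w. At the start
# of the series the window shrinks (2i - 1 values instead of 2w + 1).

def welch_moving_average(series, w):
    """Welch's moving average of window size w over the per-period
    means Y1..Ym (series is 0-indexed; periods are 1-based here)."""
    m = len(series)
    smoothed = []
    for i in range(1, m - w + 1):           # periods 1 .. m - w
        if i <= w:                           # shrinking window at the start
            window = series[:2 * i - 1]      # Y1 .. Y(2i-1)
        else:                                # full window of 2w + 1 values
            window = series[i - w - 1:i + w]
        smoothed.append(sum(window) / len(window))
    return smoothed

# Example: per-period means taken across (say) five replications
y = [2, 4, 5, 7, 8, 9, 9, 10, 10, 10]
print(welch_moving_average(y, w=2))
```

The warm-up period would then be read off the plot of these smoothed values as the point where the line becomes flat.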
Further issues in determining the warm-up
period

➢Make the length of the simulation run much


greater than the anticipated warm-up period;
➢Ensure that the output data have settled into a
steady state beyond the warm-up period;
➢For a model with more than one key response, the initial
transient should be investigated for each one.

➢The warm-up period should be selected based on the


response that takes the longest time to settle.



The warm-up period and simulation
software

●Most simulation packages provide a


facility for specifying the warm-up
period;

●At the point specified, the results that


are automatically collected by the
software are reset and the collection of
output data starts again.
Setting the initial conditions
Instead of using a warm-up period one can set the

initial conditions of the model.

Example:- values of work-in-progress, parts or


customers can be set in the model at the start of a run.

The initial values come from observing the real


system and data for the current state. This approach
works only if the real system exists.

The initial conditions can also be set by running the
simulation model for a warm-up period and recording
the status of the model; this can then be used for future runs.
Mixing initial conditions and warm-up
period
 This may sometimes be necessary;

This enables the reduction of the length of the


warm-up period if it is required;

This situation arises where some build up is


inevitable but can be slow eg.:
✗ A model of a warehouse (or any large inventory)
where it would take a long time for the warehouse
inventory to grow to a realistic level if no initial
condition is set.
Mixing initial conditions and warm-up
period
The common practice
 Use the initial conditions for terminating
simulations

 Use the warm-up period for non-terminating


simulations.

 However, the decision on which to use depends


very much on the context.



Mixing initial conditions and warm-up
period
Alternative to having a warm-up period or initial conditions

 Employ a very long run-length;


 This makes the initialization bias
of little consequence to the results;

 This approach can be time-consuming.



Selecting the Number of Replications and Run-Length
The number of replications can be
determined by:-
➢ A rule of thumb

➢ Graphical method

➢ Confidence interval method



Selecting the Number of Replications and Run-
Length
A rule of thumb
As a rule use at least three to five replications.

●This rule is limited in that it does not consider the


characteristics of a model’s output.

●Models with output data that is very varied normally


require more replications than models with a more
stable output.



Selecting the Number of Replications and Run-Length
Graphical method
A simple graphical approach is to plot the cumulative
mean of the output data from a series of replications.

Begin with at least 10 replications.


As more replications are performed the graph should


become a reasonably flat line.


The number of replications required is defined by the

point at which the line becomes flat.

If the line does not become flat, then more


replications are needed.
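The cumulative means used in this graphical method can be computed as follows. This is a sketch with illustrative numbers; the function name is made up for the example.

```python
# Sketch of the graphical method for choosing the number of
# replications: plot the cumulative mean over successive replications
# and look for the point where the line flattens out.

def cumulative_means(results):
    means, total = [], 0.0
    for i, x in enumerate(results, start=1):
        total += x                    # running sum of replication results
        means.append(total / i)       # cumulative mean after i replications
    return means

# Illustrative daily-throughput results from 10 replications
r = [1040, 1060, 1050, 1055, 1048, 1052, 1049, 1051, 1050, 1050]
print(cumulative_means(r))
```

In practice these values would be plotted; here the line settles near 1050 after a handful of replications.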
Cumulative mean method for
determining replications



Selecting the Number of Replications and Run-Length

Confidence interval method


●This is a statistical way of showing how accurately the mean
of a value is being estimated.

The narrower the interval, the more accurate the estimate;

A large sample (many replications) → a narrow interval;

●For a narrow interval, perform more replications (samples),
normally until the accuracy satisfies the client and/or the modeler.



Selecting the Number of Replications and Run-Length
Confidence interval method
A confidence interval is calculated as follows:

CI = Xm ± (tn−1,α/2)S/√n
where:
Xm = mean of the output data from the replications;
S = standard deviation of the output data from the
replications;
n = number of replications;
tn–1,α/2 = value from Student’s t-distribution with
n–1 degrees of freedom and a significance level of
α/2, see appendix for the table of t-values.
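The calculation above can be sketched directly. This is an illustration, not the notes' own code: the function name and data are made up, and the t-value is read from a t-table rather than computed (for n = 10 replications and a 5% significance level, t(9, 0.025) ≈ 2.262).

```python
import math

# Sketch of the confidence interval CI = Xm ± t * S / sqrt(n),
# with the t-value supplied from a table (an assumption here).

def confidence_interval(results, t_value):
    n = len(results)
    mean = sum(results) / n
    # sample standard deviation, S = sqrt(sum((Xi - Xm)^2) / (n - 1))
    s = math.sqrt(sum((x - mean) ** 2 for x in results) / (n - 1))
    half_width = t_value * s / math.sqrt(n)
    return mean - half_width, mean + half_width

# Illustrative daily-throughput results from 10 replications
reps = [1045, 1052, 1038, 1060, 1049, 1055, 1041, 1047, 1058, 1050]
low, high = confidence_interval(reps, t_value=2.262)
print(low, high)
```

The interval is centred on the sample mean; more replications narrow it.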
Selecting the Number of Replications and Run-Length
Confidence interval method
The standard deviation formula is:-
S = √( Σ(Xi − Xm)² / (n − 1) )
where: Xi = the result from replication i.
A significance level (α) of 5% is often selected.
This implies that there is a 5% likelihood that the mean does

not lie in the interval.


Because the confidence interval provides both an upper and a
lower limit, the significance level is divided by two (α/2).


So for a 5% significance level, values at 2.5% significance are

selected from the Student’s t-distribution.



Confidence interval method:
95% confidence intervals
[Robinson, 2004, p155]



Performing a single long run and the length of a run
✔Instead of using multiple replications, a single long

run can be performed;

✔However, an appropriate length of run must be


determined. This can be done graphically;

✔Initially, three replications are performed with the


model. These should be run for longer than the
anticipated run-length;



Performing a single long run and the length of a run

✔An initial estimate could be made using a rule of


thumb that the run-length should be at least 10
times the length of the warm-up period;

✔Time-series data are generated for the key output


data and then cumulative means are calculated for
each of the replications.

The cumulative means are plotted on a graph.




Performing a single long run and the length of a run
➔As the run-length increases, it is expected that the
cumulative means of the three replications will converge.
➔If the replications were run for an infinite period, they

would produce exactly the same result! The level of


convergence is calculated as follows:

Ci =[Max(Yi1, Yi2, Yi3)−Min(Yi1, Yi2, Yi3)]/Min(Yi1,Yi2,Yi3)

where: Ci = convergence at period i; Yij = cumulative mean


of output data at period i for replication j

➔The run-length is selected as the point where the


convergence is seen as acceptable.
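The convergence measure Ci can be computed as follows; the function name and values are illustrative only.

```python
# Sketch of the convergence measure for three replications'
# cumulative means at some period i:
# Ci = (max - min) / min over the three cumulative means.

def convergence(y1, y2, y3):
    lo, hi = min(y1, y2, y3), max(y1, y2, y3)
    return (hi - lo) / lo

# Cumulative means at period i for replications 1..3
c = convergence(101.0, 99.0, 100.0)   # (101 - 99) / 99, about 2%
print(c)
```

The run-length would be chosen as the period where this value drops to, and stays at, an acceptably small level.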



Determining the run
length using convergence
[Robinson, 2004, p.160]



Multiple replications versus long runs
➔For terminating simulations perform multiple replications.

➔For non-terminating simulations, such as the user help


desk model, there is an option:-
perform multiple replications;
perform long runs.

➔The advantage of performing multiple replications is that


confidence intervals can easily be calculated, and they are
an important measure of accuracy for simulation results.

➔ The disadvantage of multiple replications is that if there is


a warm-up period, it needs to be run for every replication
that is performed. This wastes valuable experimentation
time.
SEARCHING THE SOLUTION SPACE
The immediate previous focus
How to get accurate results when running a
single experimental scenario.

Now
We consider how to select and compare
alternative scenarios in experimentation.

We consider how to search for a solution to the


real world problem being addressed by the
simulation study.
SEARCHING THE SOLUTION SPACE
Solution to the real world problem is sought from
the solution space.

The solution space is the total range of conditions


under which the model might be run.

The solution space is the region that represents all


possible combinations of values of the experimental
factors.

This space can be very large, in which case looking
for a good or optimal scenario is a real challenge.
Solution space for 5 experimental factors using minimum and maximum
values
Procedures for searching the solutions space
The three main areas in relation to searching
the solution space that are examined are:
➔The comparison of results from two or more
different scenarios.
➔Searching the solution space:-
➔Informal methods;

➔Experimental design;

➔Metamodeling;

➔Optimization.

➔Sensitivity analysis.
Terminology

Level of an experimental factor


Is the value of the factor.

For qualitative factors (e.g. rules) the level is


interpreted as one of the available options eg
hobby may have a level of 'football'.



Terminology
Scenario
Is a run of the simulation under a specific set
of conditions, that is, levels set for
experimental factors.

A scenario can be thought of as a specific


factor/level combination, for all the factors.

By changing the level of one or more


experimental factors, the scenario is changed.



The Nature of Simulation Experimentation
Simulation experiments can be of the forms:-
➔Interactive experimentation;

➔Batch experimentation;

➔Comparing alternatives;

➔Search experimentation.

The first two describe the means by which the


simulation runs are performed.

The last two describe the means by which the


scenarios for experimentation are determined.
The Nature of Simulation Experimentation
Interactive and batch experimentation
Interactive experimentation
Watch the simulation run, make changes to the
model and note the effect;

E.g. on noticing a bottleneck in one area of the model,
increase the capacity of that area (e.g. faster cycle, more
machines) and continue the run to see the effect of
such a change.
 Objectives:- develop an understanding of the model, the key
problem areas and identify potential solutions.

Useful for facilitating group decision-making.


Usually consists of short run lengths, so the results must not
be given much weight.


The Nature of Simulation Experimentation
Interactive and batch experimentation
Batch experiments
Are performed by setting the experimental factors

and leaving the model to run:-


 for a predefined run-length (or to a specific event)

 for a set number of replications.

This requires no interaction from the model user and


so the display is normally switched off.

Objective:- run the simulation for sufficient time to


obtain statistically significant results.



The Nature of Simulation Experimentation
Comparing alternatives and search experimentation
A limited number of scenarios may have to be
compared;

These scenarios can be known at the start of the


simulation study, eg. there may be three alternative
factory layouts;

Sometimes the scenarios emerge as the simulation


study progresses;

The number of scenarios can be small or large.




The Nature of Simulation Experimentation
Search experimentation
➔Here there are no predefined scenarios;

➔One or more experimental factors are varied until a target


or optimum level is reached;

➔For instance, the aim might be to reach a target throughput


or to achieve an optimum level of customer service by
balancing the cost of resources with the cost of lost custom;

➔Requirements- either:-
➔A clearly defined target, normally expressed in the objectives
of the project; OR
➔A well defined function (e.g. cost or profit) to be optimized.



The Nature of Simulation Experimentation
The problem of combinations
Simulation experiments can involve a very large number of
scenarios to be considered;
Example- A simulation study of a manufacturing plant
Assume 4 experimental factors, where each factor can
be given a number of different levels as follows:-
●Factor 1 cycle times: −20%, −10%, as is, +10%, +20% (5 levels);
●Factor 2 buffer sizes: −50%, −25%, as is, +25%, +50% (5 levels);
●Factor 3 machine efficiency: 85%, 90%, 95% (3 levels);
●Factor 4 number of maintenance operators: 4, 5, 6, 7 (4 levels);



The Nature of Simulation Experimentation
The problem of combinations
Simulation experiments can involve a very large number of
scenarios to be considered;
Example- A simulation study of a manufacturing plant
➢ In total there are 300 scenarios (5 × 5 × 3 × 4);

➢If 5 replications are performed with each scenario and


each replication takes 30 minutes to run, then full
experimentation would require 750 hours;

That is about four and a half weeks!
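The arithmetic in this example is easy to verify in a few lines (the factor levels and run times are those stated above):

```python
# Quick check of the combinatorial explosion in the example:
# scenarios = product of the levels of each experimental factor.
from math import prod

levels = [5, 5, 3, 4]                 # levels per experimental factor
scenarios = prod(levels)              # 5 * 5 * 3 * 4 = 300 scenarios

replications = 5
minutes_per_run = 30
total_hours = scenarios * replications * minutes_per_run / 60
print(scenarios, total_hours)         # 300 scenarios, 750 hours
```

At 750 hours of continuous running, full experimentation would indeed take roughly four and a half weeks.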




Analysis of Results from a
Single Scenario
Simulation experiments are performed in
order to determine the performance of the
model, which is measured by the values of the
responses.

For each response two measures are generally


of interest:
 The average (or point estimate)

 The variability.



Analysis of Results from a Single Scenario
Point estimates
The average level of a response is most commonly measured
by its mean;
An infinite run time would give an exact value of the
mean for each response, but this is not possible;
Simulation runs provide a sample of results instead;

Because only samples of output data are available, it is
important that a confidence interval (CI) for each mean is reported.
A CI provides information on the range within which the
population mean (obtained from an infinite run-length) is
expected to lie.
CI is the main method for reporting the mean in simulation
studies.
Analysis of Results from a Single Scenario
Confidence interval
A confidence interval can be calculated when

multiple replications are being performed;

This approach cannot be used with output


data from a single long run;


This is because it is likely that the data are
correlated;
Confidence intervals rely on the assumption
that the samples are independent.
Analysis of Results from a Single Scenario
Confidence interval
➔Since single long runs have a number of advantages
over performing multiple replications, most notably
time saving, it would be useful to be able to construct
a confidence interval for the output data;

➔A number of methods have been proposed for


achieving this such as:-
➔ Batch means method;
➔ Overlapping batch means method;

➔ Regenerative method;

➔ Standardized time-series method;

➔ Spectral estimation method;

➔ Autoregressive method.

➔We examine, only the batch means method.


Analysis of Results from a Single Scenario
Confidence interval
Batch means methods for constructing confidence
intervals from a single run
In the batch means method the time-series of output
data (Y1, Y2, . . . , Yn) is divided into k batches of
length b, such that the mean of each batch is
calculated as follows:-

Yi(b) = (1/b) Σ Y((i−1)b+j),  summed over j = 1, ..., b

where: Yi(b) = mean of batch i, of length b


Batch means methods for constructing confidence intervals
from a single run



Batch means methods for constructing confidence
intervals from a single run
For large batch size, it can be assumed that the batches are
independent of one another so CI can be computed in the
normal way:
CI = Xm ± (tn−1,α/2)S/√n where:

Xm = mean of the output data from the replications (batches)


S = standard deviation of the output data from the
batches; n = number of batches;
tn–1,α/2 = value from Student's t-distribution with n–1 degrees
of freedom and a significance level of α/2
Issue:- what batch size (b) should be used?
Suggestion: choose b so that there are between 10 and 30 batches.
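Splitting a single run's output into batch means can be sketched as follows; the function name and data are illustrative, not from the notes.

```python
# Sketch of the batch means method: split a single run's post-warm-up
# output into k batches of length b and compute each batch mean.
# With large enough batches, the batch means are treated as
# (approximately) independent observations for the CI formula above.

def batch_means(data, b):
    k = len(data) // b                  # number of complete batches
    return [sum(data[i * b:(i + 1) * b]) / b for i in range(k)]

# 12 post-warm-up observations split into k = 3 batches of length b = 4
y = [5, 7, 6, 6, 8, 9, 7, 8, 6, 6, 7, 5]
print(batch_means(y, b=4))
```

The resulting batch means would then be fed into the usual confidence interval calculation in place of replication results.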
Analysis of Results from a Single Scenario
Median and quantile estimation
✔The average performance can also be measured
using the median or the middle value.

✔Also we may want to estimate quantiles, that is,
the level of performance that can be achieved with a
given probability.

The median is simply the 0.5 quantile.

✔The lower and upper quartiles are the
0.25 and 0.75 quantiles respectively.



Analysis of Results from a Single Scenario
Measures of variability
➔An average alone does not provide a complete
picture of model performance;

➔It is also important to measure the variability.

➔For operations systems a lower level of variability


may be preferred since it is easier to match
resources to the levels of demand.

➔Indeed, a worse average with low variability may


be selected in preference to a better average with
high variability.
Analysis of Results from a Single Scenario
Measures of variability
●Be aware of outliers when stating the
minimum and maximum values, otherwise
these measures may be misleading;

●When using a median, the quartiles (and more generally
quantiles) provide a measure of variability.

●Time-series plots are also important, as they


show the pattern of variability over time, for
instance, whether there are cycles in the data.
Comparing Alternatives
Determine whether one alternative is better than another.

This may involve more than comparing the mean values of key
responses to see which is best.
Example:- two scenarios (A and B) are being compared, the
key response being daily throughput.
 Scenario A gives a mean result of 1050 units per day, while

the result for scenario B is 1080.


If the aim is to increase throughput, then it initially appears
that scenario B is better;

This alone, however, does not show that scenario B really is
better than scenario A!



Comparing Alternatives
Apart from the difference in the means, two other factors

need to be considered:-
 What is the standard deviation of the mean daily
throughput for the two scenarios?
 How many replications (or batches) were used to

generate the results?

If only a few replications were made and there is a lot of


variation in the results, then there is little confidence that the
difference is significant.

If many replications were performed and the standard deviation
is low, then there is more confidence that the difference is real.



Comparing Alternatives
A practical approach is to consider all three factors:-

● The size of the difference;

● The standard deviation;

● The number of replications.

●Then judge whether the difference in the results is


significant.

●A more rigorous approach relies on forming confidence


intervals for the difference between the results.



Comparing Alternatives
Comparison of two scenarios
Assuming that common random numbers are being used in the model, a
confidence interval for the difference between the results from two
scenarios can be calculated using the paired-t approach.
The confidence interval is calculated as follows:

CI = D ± (tn−1,α/2)SD/√n

D = Σ (X1j − X2j) / n,  summed over j = 1, ..., n

SD = √( Σ (X1j − X2j − D)² / (n − 1) ),  summed over j = 1, ..., n

where: D = mean difference between scenario 1 (X1) and scenario
2 (X2); X1j = result from scenario 1 and replication j;
X2j = result from scenario 2 and replication j; SD = standard deviation
of the differences; n = number of replications performed (same for both
scenarios); tn−1,α/2 = value from Student's t-distribution with n–1 degrees of
freedom and a significance level of α/2
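The paired-t interval can be sketched as follows. The function name and data are illustrative, and the t-value is taken from a table rather than computed (t(9, 0.025) ≈ 2.262 for n = 10 replications).

```python
import math

# Sketch of the paired-t confidence interval for the difference
# between two scenarios run with common random numbers.

def paired_t_ci(scenario1, scenario2, t_value):
    n = len(scenario1)
    diffs = [x1 - x2 for x1, x2 in zip(scenario1, scenario2)]
    d_bar = sum(diffs) / n                 # mean difference D
    # SD = standard deviation of the paired differences
    sd = math.sqrt(sum((d - d_bar) ** 2 for d in diffs) / (n - 1))
    half_width = t_value * sd / math.sqrt(n)
    return d_bar - half_width, d_bar + half_width

# Illustrative throughput results, replication by replication
a = [1050, 1042, 1061, 1048, 1055, 1039, 1047, 1053, 1044, 1051]
b = [1078, 1075, 1090, 1079, 1084, 1070, 1077, 1083, 1074, 1080]
low, high = paired_t_ci(a, b, t_value=2.262)
print(low, high)
```

Here the whole interval lies below zero (outcome (a) below): scenario 1's result is significantly lower than scenario 2's at the 95% level.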
Comparing Alternatives
Comparison of two scenarios
The resulting confidence interval can lead to one of
three outcomes as follows:-

Outcome (a)

The confidence interval is completely to the left of


zero.

It can be concluded, with the specified level of


confidence (normally 95%), that the result for
scenario 1 is less than the result for scenario 2.



Comparing Alternatives
Comparison of two scenarios
The resulting confidence interval can lead to one of three outcomes as
follows:-

Outcome (b)

➢The confidence interval includes zero.

➢It can be concluded, with the specified level of


confidence (normally 95%), that the result for
scenario 1 is not significantly different from
the result for scenario 2.



Comparing Alternatives
Comparison of two scenarios
The resulting confidence interval can lead to one of three outcomes as
follows:-

Outcome (c)

The confidence interval is completely to the


right of zero;

It can be concluded, with the specified level of


confidence (normally 95%), that the result for
scenario 1 is greater than the result for
scenario 2.
Comparing Alternatives
Comparison of two scenarios
NB. the statistical significance of a difference in the results has a
very different meaning from the practical significance of a
difference;

➔Statistical significance answers the question:-is the difference in


the results real?

➔Practical significance asks the question: is the difference
sufficiently large to affect the decision?

➔Eg. although scenario 2 may give, say, a 10% improvement


over scenario 1, if the cost of scenario 2 is much greater, then the
client may opt for scenario 1.



Choosing the best scenario(s)
To identify the best out of a group of scenarios:-

➢A naïve way:- inspect the mean results for each scenario;
this ignores the standard deviation of the results and the
number of replications performed;

➢A better way:- use confidence intervals; even then, with
their limitations, the best scenario is not always easily
identifiable.

➢Another statistical method:- ranking and selection; details


are left as an exercise.



Search Experimentation
Very many scenarios (factor/level combinations) can arise in
search experimentation;

It is often not possible to simulate every single scenario in the


time available in order to determine which meets the target
required or provides the optimum result.

Ways of doing this efficiently are given later.



Search Experimentation
Approaches to improving the efficiency of the
experimentation process

Experimental Design
Identify the experimental factors that are most
likely to lead to significant improvements;

This reduces the total factor/level combinations


to be analyzed;



Search Experimentation
Approaches to improving the efficiency of
the experimentation process

Metamodels
Fit a model to the simulation output (a
model of a model);

The fitted model runs much faster than the


simulation, many more factor/level
combinations can be investigated;
Search Experimentation
Approaches to improving the efficiency of
the experimentation process

Optimization

Perform an efficient search of the


factor/level combinations, trying to identify
the optimum combination.



Search Experimentation
Informal approaches to search experimentation
Because users often lack the expertise needed for the formal
methods, many informal methods are used in practice; these can be
classified under three headings:-

1.Identifying important experimental factors (similar to


experimental design)

2.Developing an understanding of the solution space


(similar to metamodelling)

3.Searching factor/level combinations efficiently


(similar to optimization)



Search Experimentation

Informal approaches to search experimentation


Identifying important experimental factors
These experimental factors have the greatest impact;
They give the greatest improvement towards meeting the
objectives of the simulation study.
Three ways to identify the importance of an experimental
factor:-

Data Analysis
Use the data in a model to find out which experimental factor
has the most impact;

Limited by the lack of a complete picture; the data alone do not incorporate the randomness and interconnections in the model.
Search Experimentation

Informal approaches to search experimentation


Identifying important experimental factors
Expert Knowledge
Find out from subject matter experts, such as operations
staff as they often have a good understanding of the system
and the factors that are likely to have greatest impact.

Limited by the fact that the subject experts sometimes lack a complete understanding of the system.



Search Experimentation

Informal approaches to search experimentation


Identifying important experimental factors
Preliminary Experimentation
Change the levels of experimental factors and run the model to see the effect.

Interactive experimentation, if used with caution, may be beneficial in this respect, although it is important to perform batch experiments to test fully the effect of a change to an experimental factor.



Search Experimentation
Developing an understanding of the solution space
Perform simulation for a limited number of scenarios (factor/level
combinations) and form an opinion as to the likely outcome of
other scenarios. Consider the table below:



Search Experimentation
Developing an understanding of the solution space

•The model user could run only a limited set;


•Run the model for the scenarios at the four extremes (corners) and the centre point for confirmation.

•Only consider: X11, X13, X22, X31 and X33. Others can be predicted by simple linear interpolation.
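As a rough sketch of this interpolation idea (the corner responses below are hypothetical, and the two factors are assumed to be scaled onto the range [0, 1]):

```python
# Predict unsimulated scenarios from the four simulated corner runs by
# bilinear interpolation. The corner responses here are hypothetical.
def bilinear(x, y, corners):
    """corners maps (0,0), (0,1), (1,0), (1,1) to the corner responses;
    x, y are the two factor settings scaled onto [0, 1]."""
    r00, r01 = corners[(0, 0)], corners[(0, 1)]
    r10, r11 = corners[(1, 0)], corners[(1, 1)]
    return (r00 * (1 - x) * (1 - y) + r10 * x * (1 - y)
            + r01 * (1 - x) * y + r11 * x * y)

corners = {(0, 0): 120.0, (0, 1): 150.0, (1, 0): 180.0, (1, 1): 200.0}

# Predicted centre point, to be compared with the simulated centre run:
centre_pred = bilinear(0.5, 0.5, corners)
print(centre_pred)
```

If the predicted centre point disagrees badly with the simulated one, the linearity assumption does not hold and more scenarios must be simulated.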



Search Experimentation
Searching factor/level combinations efficiently

Identify factor changes that:

 Have the greatest impact in improving the simulation result;

 Move the simulation result towards the desired target.

This is done via experimental design.



Search Experimentation
Searching factor/level combinations efficiently

Experimental design
This is a formal method for carrying out the preliminary
experimentation.



Search Experimentation
Searching factor/level combinations efficiently - Experimental design

2k factorial designs
Is one approach to experimental design, where k is the number of experimental factors.

In a 2k factorial design each factor is set at two levels, denoted by a − and a + sign; 2k (2 raised to the power k) is the total number of scenarios (responses).

For a quantitative factor, the minimum and maximum values might be chosen for − and + respectively.

For a qualitative factor, two extreme options might be selected, such as bad and excellent.
Search Experimentation
Searching factor/level combinations efficiently - Experimental design
2k factorial designs
Example
Consider 3 factors (variables), each set at either its − or its + level;
There are a total of eight (2³) scenarios as below;
All eight scenarios are then simulated and the results (responses R1 to R8) recorded.



Search Experimentation
Searching factor/level combinations efficiently - Experimental design
2k factorial designs - Example

Effect of factor 1 (e1) can be calculated as follows:

e1 = [(R2 − R1) + (R4 − R3) + (R6 − R5) + (R8 − R7)]/4

Factor 1 changes from its − to its + level on four occasions, each time holding the other factors constant, hence the division by four for averaging.

The formula above can be formed more easily as follows:

e1 = [−R1 + R2 − R3 + R4 − R5 + R6 − R7 + R8]/4
Search Experimentation
Searching factor/level combinations efficiently - Experimental design
2k factorial designs - Example

The formula for factor 2 can be formed in the same way, with reference to the signs in the design table:

e2 = [−R1 − R2 + R3 + R4 − R5 − R6 + R7 + R8]/4

Similarly for factor 3 we have:

e3 = [−R1 − R2 − R3 − R4 + R5 + R6 + R7 + R8]/4
Search Experimentation
Searching factor/level combinations efficiently - Experimental design
2k factorial designs - Example

●If the main effect of a factor is positive this indicates


that, on average, changing the factor from its − to its
+ level increases the response by the value of the main
effect;

●Similarly a negative main effect identifies a decrease in the response when changing the factor from its − to its + level;
Search Experimentation
Searching factor/level combinations efficiently - Experimental design
2k factorial designs - Example

If the main effect is close to zero this shows that a


change to the factor has a negligible effect.

It is possible to tell the direction in which each factor


should be moved to achieve the desired effect
(increasing or decreasing the response value).
Search Experimentation
Searching factor/level combinations efficiently - Experimental design
2k factorial designs - Example

Interaction: the effect of changing more than one factor at a time;

E.g. the interaction effect between factor 1 and factor 2 (the two-factor interaction effect) is calculated as follows:

e12 = [R1 − R2 − R3 + R4 + R5 − R6 − R7 + R8]/4

The signs are obtained by multiplying the signs for the two factors under investigation.
Search Experimentation
Searching factor/level combinations efficiently - Experimental design
2k factorial designs - Help desk example

The main effects are as follows:


e1 = (−2333.59 + 2010.46 − 2330.86 + 2017.03)/ 2 = −318.48
e2 = (−2333.59 − 2010.46 + 2330.86 + 2017.03)/2 = 1.92
and the interaction effect between the two factors is:
e12 = (2333.59 − 2010.46 − 2330.86 + 2017.03)/2 = 4.65
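The help desk calculation above can be reproduced with a short script. A sketch in Python, using the four responses quoted above arranged in standard (−,−), (+,−), (−,+), (+,+) order:

```python
from itertools import product

# Responses from the help desk example, in standard order:
# R1=(-,-), R2=(+,-), R3=(-,+), R4=(+,+)
R = [2333.59, 2010.46, 2330.86, 2017.03]
k = 2

# Design matrix of -/+ signs. product() varies its last element fastest,
# so reverse each tuple to make factor 1 vary fastest, as in the text.
rows = [tuple(reversed(s)) for s in product((-1, +1), repeat=k)]

def effect(signs):
    """Average change in response for a given column of -/+ signs."""
    return sum(s * r for s, r in zip(signs, R)) / 2 ** (k - 1)

e1 = effect([row[0] for row in rows])            # main effect, factor 1
e2 = effect([row[1] for row in rows])            # main effect, factor 2
e12 = effect([row[0] * row[1] for row in rows])  # interaction effect
print(round(e1, 2), round(e2, 2), round(e12, 2))
```

The same pattern extends to any 2k design: each effect column is a product of the sign columns of the factors involved, and the divisor is half the number of scenarios.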
Search Experimentation
Searching factor/level combinations efficiently - Experimental design
2k factorial designs - Help desk example

e1 = −318.48; e2 = 1.92; e12 = 4.65


The interaction effect and the effect of the number of help
desk staff both appear to be negligible.
Increasing the technical team, on average, reduces the mean
time inquiries spend in the system by about 318 minutes.
This has a significant effect.
Search Experimentation
Other approaches to experimental design
Fractional factorial designs
Applied when there are too many factors to
enable full experimentation with every
factor/level combination.

A limited set of factor/level combinations is


chosen and conclusions are drawn from an
analysis of their results.



Search Experimentation
Other approaches to experimental design
Analysis of variance (ANOVA)
Provides a more rigorous means for identifying the
effect of changes to factors;

A series of hypothesis tests to determine


whether changes to the experimental factors
have an effect on the response are done;

The effect of changing factors singly and in combination is tested;

The details of ANOVA are left as an exercise.


Search Experimentation

Other approaches to experimental design


Metamodelling: metamodel is a model of a model, in
our case a model of the simulation output.

The metamodel is normally an analytical model


which runs much faster than the simulation.

Investigations use the metamodel instead;


Disadvantages:-

 Metamodel is an approximation of the simulation


output hence it is less accurate;
 Creating the metamodel is an overhead.
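A minimal metamodel sketch in Python, assuming hypothetical (factor level, mean response) pairs and a simple straight-line fit; a real study might fit a more elaborate regression or a neural network:

```python
# Fit a straight-line metamodel y = a + b*x to a few simulated points,
# then query the line instead of re-running the simulation.
# The data points below are hypothetical.
def fit_line(xs, ys):
    """Ordinary least-squares fit of y = a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = my - b * mx
    return a, b

levels = [1, 2, 3, 4]                  # e.g. number of staff simulated
responses = [40.0, 31.0, 24.0, 15.0]   # mean waiting time from each run
a, b = fit_line(levels, responses)

def metamodel(x):
    # Cheap prediction for levels that were never simulated.
    return a + b * x

print(metamodel(2.5))
```

Because the metamodel is only an approximation of the simulation output, predictions for important scenarios should still be checked with full simulation runs.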
Search Experimentation
Other approaches to experimental design
Optimization (‘‘searchization’’)
➔Aim:- to find the factor/level combination that gives the best
value for a response that is the maximum or minimum value;
➔Similar to standard mathematical optimization methods;
➔Have an objective function to be optimized eg. cost, profit,
throughput or customer service;
➔Have a set of decision variables that can be changed ie. the
experimental factors;
➔Have a series of constraints within which the decision
variables can be changed; eg. the range within which the
experimental factors can be altered.
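A sketch of such a search in Python; the simulate function here is a toy stand-in for a real simulation run, and the bounds play the role of the constraints. Everything in it (objective, ranges) is hypothetical:

```python
# Simple neighbourhood (hill-climbing) search over integer factor levels.
def simulate(levels):
    # Toy stand-in for a simulation run: a cost minimised at (5, 3).
    x, y = levels
    return (x - 5) ** 2 + (y - 3) ** 2

def neighbours(levels, bounds):
    # All scenarios one level away in a single factor, within constraints.
    for i in range(len(levels)):
        for step in (-1, +1):
            cand = list(levels)
            cand[i] += step
            lo, hi = bounds[i]
            if lo <= cand[i] <= hi:
                yield tuple(cand)

def hill_climb(start, bounds):
    best, best_val = start, simulate(start)
    improved = True
    while improved:
        improved = False
        for cand in neighbours(best, bounds):
            val = simulate(cand)
            if val < best_val:          # keep any improving move
                best, best_val = cand, val
                improved = True
    return best, best_val

best, val = hill_climb((1, 1), bounds=[(0, 10), (0, 10)])
print(best, val)
```

Note that this kind of greedy search illustrates the first limitation listed for optimization: with a less well-behaved objective it can stop at a local rather than a global optimum.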
Search Experimentation

Other approaches to experimental design


Optimization (‘‘searchization’’)
Limitations
➔It can easily converge to a local and not a global optimum;

➔Deals with a single objective function, not the multiple objectives which may be present;

➔The practical significance of results can be lost in the


search for statistically better outcomes;

➔Group debate may focus too much on the optimization process and on providing ‘‘the result’’, and not on the model itself.
Search Experimentation
Sensitivity Analysis
✔In sensitivity analysis the consequences of changes in model
inputs are assessed.
✔In this context model inputs are interpreted more generally

than just experimental factors and include all model data.


✔Sensitivity analysis is useful in three main areas:

✔ Assessing the effect of uncertainties in the data, particularly


category C data
✔ Understanding how changes to the experimental factors
affect the responses.
✔ Assessing the robustness of the solution.
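A one-at-a-time sensitivity sketch in Python; run_model is a hypothetical analytic stand-in for a simulation run, and the base input values are invented for illustration:

```python
# One-at-a-time sensitivity analysis: vary each model input a few
# percent around its base value and record the change in the response.
def run_model(arrival_rate, service_time):
    # Toy analytic stand-in for a simulation (utilisation-style measure).
    rho = arrival_rate * service_time
    return rho / (1 - rho)

base = {"arrival_rate": 0.8, "service_time": 1.0}

def sensitivity(name, pct=0.05):
    """Change in response when input `name` moves from -pct to +pct."""
    lo, hi = dict(base), dict(base)
    lo[name] *= 1 - pct
    hi[name] *= 1 + pct
    return run_model(**hi) - run_model(**lo)

for name in base:
    print(name, round(sensitivity(name), 3))
```

Inputs producing large swings in the response deserve the most careful data collection; inputs with negligible effect confirm the robustness of the solution to uncertainty in that data.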



Implementation
Introduction
Is the fourth and final process in a simulation project;

It is where the modeling effort affects the real world by:-


➢ Generating tangible changes (improvement);

➢ Giving the clients an improved understanding so


they can better manage the real world.
We Focus on
The meaning of implementation ;

The factors that lead to the success of the simulation study;


Ways of measuring success.




Implementation
What is Implementation?
General meaning
➔It is putting something into effect;

➔carrying something out such as implementing a

program to restructure an organization;


➔implementing a plan eg in the military, a battle

plan;
Specific meaning to simulation studies
It is any one of:
➔ Implementing the findings from the simulation study;
➔ Implementing the model;
➔ Learning from the modeling and simulation.
Implementation
Implementing the findings
➢Some ideas come to light about how to tackle

the problem situation being addressed as the


study proceeds or by the time it is complete;

➢These ideas come from the experimentation and


the process of inquiry involved in the simulation
study;

➢These ideas should be clearly documented in a


final report;

➢An implementation process should be put in place;
Implementation
Implementing the findings
The contents of the final report:-

➢ The problem situation and the objectives of the project;

➢ A summary of the model;

➢ A description of the experiments performed;

➢ An outline of the key results;

➢ A list of the conclusions;

➢ Recommendations and suggestions for further simulation work;

➢The clients determine which of the recommendations from the simulation study will be put into practice;

➢The clients' decision rests on wider issues in the real world situation, such as the organizational culture and the finance available;
Implementation
Implementing the findings
➢The process should be monitored throughout to ensure
that the recommendations are implemented properly;

➢One may refer back to the simulation from time to time,


especially if the real world starts to diverge from the
model;

➢The real world is not static and new ideas may come to
light that may not have been tested in the simulation
study;

➢The modeler can be partially involved in the


implementation.



Implementation
Implementation as learning
●Those who learn:-
● Modeler;

● Model user;

● Clients;

●Nature of learning:- all gain an improved understanding

of the real world;


●Learning occurs via:-

● The results of the simulation experiments;

● The process of developing the model;

● Using the simulation model.

●This learning is often much wider than the direct focus


of the simulation study.



Implementation
Implementation as learning
●Learning, however, is normally intangible rather than explicit; it cannot be identified directly.

●There is therefore no formal process for the


implementation of learning.

Learning leads to a change in management:-


● Attitudes;

● Beliefs;

● Behaviors.



Implementation
Implementation and Simulation Project Success

There is satisfaction in seeing the findings


from a simulation study being implemented.

We consider three aspects:-


 Defining simulation project success;

 Achieving simulation project success;


Measuring simulation project success;



Implementation
Implementation and Simulation Project Success
What is simulation project success?

Balci [1985]- a simulation study can be considered


successful if:
➔ The results are credible;

➔ The results are accepted by the decision makers;

➔ The results are used by the decision makers.

Ulgen [1991]- a set of criteria for success:-


 Completion on time and to budget;

 Implementation;

 Measurable financial savings;



Implementation
Implementation and Simulation Project Success
What is simulation project success?
Pidd [1998] proposes the four-stage model of success that is described below.

Stage 1: It must be agreed that the simulation study has


achieved its objectives;
If this is not the case, it should at least be agreed that some benefit has been derived from the work;

Stage 2: The results should be accepted;


This requires more than simply getting the right result
and is very much related to the clients’ reaction to the
findings;
Organizational politics can play a significant role in the
acceptability of the results;
Implementation
Implementation and Simulation Project Success
What is simulation project success?
Pidd [1998] proposes the four-stage model of success that is described below.

Stage 3: The results should be implemented;


Even if the results are accepted, they may not be
implemented because, for instance, the finance is
not available;

Stage 4: Check whether the results of the study


are correct once the recommendations have been
implemented.



Implementation
How is success achieved?
We consider how success can be achieved.

Churchman and Schainblatt [1965]


Implementation is a matter of developing
the right understanding between the
modeler and the clients;

Their Proposal
The modeler and clients should develop a
mutual understanding of one another.



Implementation
How is success achieved?
We consider how success can be achieved.

The other extreme view


The modeler’s role: - simply to provide the clients with
‘‘the answer’’.

The modeler ensures that:-
Stage 1: The study achieves its objectives and/or shows
a benefit;
Stage 2: The results of the study are accepted;
Stage 3: The results of the study are implemented;
Stage 4: Implementation proves the results of the study
to be correct.



Implementation
How is success achieved?
The quality issues should touch on:-
 The quality of the content;

 The quality of the process;

 The quality of the outcome;

Quality of the Content


The extent to which the technical work within the
modeling process conforms to the requirements of the
study.



Implementation
How is success achieved?
The quality issues should touch on:-
The quality of the content;
The quality of the process;
The quality of the outcome;

Quality of the Process


The extent to which the process of the delivery of
the work conforms to the clients’ expectations.



Implementation
How is success achieved?
The quality issues should touch on:-
The quality of the content;
The quality of the process;
The quality of the outcome.

Quality of the Outcome


The extent to which the simulation study is useful
within the wider context for which it is intended.





Implementation
How is success achieved?

Quality Factors in a Simulation Project


The Model
 Speed;
 Aesthetics;

 Ease of use;

Confidence in the Model


➔Trustworthiness;

➔Believability of the model;

● The results;



Implementation
How is success achieved?

Quality Factors in a Simulation Project


The Data
● Availability;

● Accuracy;

Software
 Proprietary/ownership of the simulation software;
 Ease of use; suitability;

 Flexibility;

 Links to third party software;

 Confidence;
Implementation
How is success achieved?

Quality Factors in a Simulation Project


Credibility of the Modeler
 Trustworthiness;

 Believability and honesty of the modeler;

 His/her organization;

Competence of the Modeler


 Possession of the necessary skills and

knowledge by the modeler;


 Ability of his/her organization to perform the

simulation project;



Implementation
How is success achieved?
Quality Factors in a Simulation Project
Professionalism
The modeler’s commitment (to the project, to the clients, and
to quality);
Interpersonal skills;
Appearance;

Reliability of the Modeler


Consistency of performance;
Dependability;
Communication and Interaction
Frequency;
Clarity;
Appropriateness of communication;
Interaction with those involved in the simulation project
Implementation
How is success achieved?
Quality Factors in a Simulation Project

Involvement
Involving everybody (especially the clients) at all stages
of the simulation project

Interpersonal
The relationship between the clients and the modeler

Education
The clients learn about simulation and the model as the
project progresses



Implementation
How is success achieved?
Quality Factors in a Simulation Project

Understanding the Clients


The modeler makes every effort to understand the
clients’ needs and expectations

Responsiveness
The modeler gives a timely and appropriate response
to the clients’ needs and expectations



Implementation
How is success achieved?
Quality Factors in a Simulation Project
Recovery
Recovering from problem situations

Access
● Approachability and ease of contact of the modeler;
● Accessibility of the model

Fees
Correctly charging for the simulation project

The Clients’ Organization:


The commitment of the clients’ organization to the simulation
project



Implementation

How is success measured?


It is good practice to perform a project review at the end of a
simulation study.
 The review should discuss:-
➔What went well;

➔What could have been done better;

➔What could be improved next time.

 A questionnaire can be used for this process;

 A post-project review meeting can be held.


End of Week 9 and 10 Exercises

1.Define the following terms: experimentation; terminating simulation;


transient output; steady state output; initialization bias; warm up period;
replication; solution space; level of experimental factor; scenario; interactive
experiment; batch experiment.
2.Discuss the importance of accurate data during experimentation.
3.Discuss the main approaches that are used to ensure that the
experimentation data is correct.
4.Discuss how you can ensure that enough data is obtained from the
experiment.
5.Discuss how to deal with initialization bias.
6.Describe the time series inspection of the mean.
7.Discuss Welch’s method and its importance or use.
8.Discuss how you would determine the number of replications to use for
your experiments.
9.Show how you would determine the appropriate length of a single long run.
10.What is the nature of simulation experimentation?
11.What is the difference between ‘comparing alternatives’ and ‘search
experimentation’?
12.Show how you would determine problem combinations.
13.Discuss the importance of point estimates.
14.Show how confidence interval can be obtained for a single run.
End of Week 9 and 10 Exercises
15.Discuss the interpretations of ‘statistical significance’ and ‘practical
significance’.
16.How can paired-t confidence test be used to compare more than 2
scenarios?
17.Discuss how you would identify experimental factors that have the
greatest impact on the outcomes.
18.Discuss a faster way of understanding the solution space.
19.Discuss 2k factorial experimental designs and their use.
20.Discuss how ANOVA can be used to determine experimental factors
with the greatest impact.
21.Discuss metamodeling.
22.Discuss optimization.
23.Discuss sensitivity analysis.
24.Discuss the concept of implementation in simulation project undertaking.
25.Discuss the success factors in a simulation study.
26.Show how you would ensure that you are successful in your
simulation project undertaking.
27.Discuss the quality issues and factors in a simulation project.
28.Discuss how you would measure success of your simulation project.



Upper critical values of Student's t distribution with (n-1) degrees of freedom
Probability of exceeding the critical value

(n-1) 0.10 0.05 0.025 0.01 0.005 0.001

1. 3.078 6.314 12.706 31.821 63.657 318.313


2. 1.886 2.920 4.303 6.965 9.925 22.327
3. 1.638 2.353 3.182 4.541 5.841 10.215
4. 1.533 2.132 2.776 3.747 4.604 7.173
5. 1.476 2.015 2.571 3.365 4.032 5.893
6. 1.440 1.943 2.447 3.143 3.707 5.208
7. 1.415 1.895 2.365 2.998 3.499 4.782
8. 1.397 1.860 2.306 2.896 3.355 4.499
9. 1.383 1.833 2.262 2.821 3.250 4.296
10. 1.372 1.812 2.228 2.764 3.169 4.143
11. 1.363 1.796 2.201 2.718 3.106 4.024
12. 1.356 1.782 2.179 2.681 3.055 3.929
13. 1.350 1.771 2.160 2.650 3.012 3.852
14. 1.345 1.761 2.145 2.624 2.977 3.787
15. 1.341 1.753 2.131 2.602 2.947 3.733
16. 1.337 1.746 2.120 2.583 2.921 3.686
17. 1.333 1.740 2.110 2.567 2.898 3.646
18. 1.330 1.734 2.101 2.552 2.878 3.610



Upper critical values of Student's t distribution with (n-1) degrees of freedom
Probability of exceeding the critical value

(n-1) 0.10 0.05 0.025 0.01 0.005 0.001

19. 1.328 1.729 2.093 2.539 2.861 3.579


20. 1.325 1.725 2.086 2.528 2.845 3.552
21. 1.323 1.721 2.080 2.518 2.831 3.527
22. 1.321 1.717 2.074 2.508 2.819 3.505
23. 1.319 1.714 2.069 2.500 2.807 3.485
24. 1.318 1.711 2.064 2.492 2.797 3.467
25. 1.316 1.708 2.060 2.485 2.787 3.450
26. 1.315 1.706 2.056 2.479 2.779 3.435
27. 1.314 1.703 2.052 2.473 2.771 3.421
28. 1.313 1.701 2.048 2.467 2.763 3.408
29. 1.311 1.699 2.045 2.462 2.756 3.396
30. 1.310 1.697 2.042 2.457 2.750 3.385
31. 1.309 1.696 2.040 2.453 2.744 3.375
32. 1.309 1.694 2.037 2.449 2.738 3.365
33. 1.308 1.692 2.035 2.445 2.733 3.356
34. 1.307 1.691 2.032 2.441 2.728 3.348
35. 1.306 1.690 2.030 2.438 2.724 3.340
36. 1.306 1.688 2.028 2.434 2.719 3.333
37. 1.305 1.687 2.026 2.431 2.715 3.326
Upper critical values of Student's t distribution with (n-1) degrees of freedom
Probability of exceeding the critical value

(n-1) 0.10 0.05 0.025 0.01 0.005 0.001

38. 1.304 1.686 2.024 2.429 2.712 3.319


39. 1.304 1.685 2.023 2.426 2.708 3.313
40. 1.303 1.684 2.021 2.423 2.704 3.307
41. 1.303 1.683 2.020 2.421 2.701 3.301
42. 1.302 1.682 2.018 2.418 2.698 3.296
43. 1.302 1.681 2.017 2.416 2.695 3.291
44. 1.301 1.680 2.015 2.414 2.692 3.286
45. 1.301 1.679 2.014 2.412 2.690 3.281
46. 1.300 1.679 2.013 2.410 2.687 3.277
47. 1.300 1.678 2.012 2.408 2.685 3.273
48. 1.299 1.677 2.011 2.407 2.682 3.269
49. 1.299 1.677 2.010 2.405 2.680 3.265
50. 1.299 1.676 2.009 2.403 2.678 3.261
51. 1.298 1.675 2.008 2.402 2.676 3.258
52. 1.298 1.675 2.007 2.400 2.674 3.255
53. 1.298 1.674 2.006 2.399 2.672 3.251
54. 1.297 1.674 2.005 2.397 2.670 3.248
55. 1.297 1.673 2.004 2.396 2.668 3.245
56. 1.297 1.673 2.003 2.395 2.667 3.242
Upper critical values of Student's t distribution with (n-1) degrees of freedom
Probability of exceeding the critical value

(n-1) 0.10 0.05 0.025 0.01 0.005 0.001

57. 1.297 1.672 2.002 2.394 2.665 3.239


58. 1.296 1.672 2.002 2.392 2.663 3.237
59. 1.296 1.671 2.001 2.391 2.662 3.234
60. 1.296 1.671 2.000 2.390 2.660 3.232
61. 1.296 1.670 2.000 2.389 2.659 3.229
62. 1.295 1.670 1.999 2.388 2.657 3.227
63. 1.295 1.669 1.998 2.387 2.656 3.225
64. 1.295 1.669 1.998 2.386 2.655 3.223
65. 1.295 1.669 1.997 2.385 2.654 3.220
66. 1.295 1.668 1.997 2.384 2.652 3.218
67. 1.294 1.668 1.996 2.383 2.651 3.216
68. 1.294 1.668 1.995 2.382 2.650 3.214
69. 1.294 1.667 1.995 2.382 2.649 3.213
70. 1.294 1.667 1.994 2.381 2.648 3.211
71. 1.294 1.667 1.994 2.380 2.647 3.209
72. 1.293 1.666 1.993 2.379 2.646 3.207
73. 1.293 1.666 1.993 2.379 2.645 3.206
74. 1.293 1.666 1.993 2.378 2.644 3.204
75. 1.293 1.665 1.992 2.377 2.643 3.202
Upper critical values of Student's t distribution with (n-1) degrees of freedom
Probability of exceeding the critical value

(n-1) 0.10 0.05 0.025 0.01 0.005 0.001

76. 1.293 1.665 1.992 2.376 2.642 3.201


77. 1.293 1.665 1.991 2.376 2.641 3.199
78. 1.292 1.665 1.991 2.375 2.640 3.198
79. 1.292 1.664 1.990 2.374 2.640 3.197
80. 1.292 1.664 1.990 2.374 2.639 3.195
81. 1.292 1.664 1.990 2.373 2.638 3.194
82. 1.292 1.664 1.989 2.373 2.637 3.193
83. 1.292 1.663 1.989 2.372 2.636 3.191
84. 1.292 1.663 1.989 2.372 2.636 3.190
85. 1.292 1.663 1.988 2.371 2.635 3.189
86. 1.291 1.663 1.988 2.370 2.634 3.188
87. 1.291 1.663 1.988 2.370 2.634 3.187
88. 1.291 1.662 1.987 2.369 2.633 3.185
89. 1.291 1.662 1.987 2.369 2.632 3.184



Upper critical values of Student's t distribution with (n-1) degrees of freedom
Probability of exceeding the critical value

(n-1) 0.10 0.05 0.025 0.01 0.005 0.001

90. 1.291 1.662 1.987 2.368 2.632 3.183


91. 1.291 1.662 1.986 2.368 2.631 3.182
92. 1.291 1.662 1.986 2.368 2.630 3.181
93. 1.291 1.661 1.986 2.367 2.630 3.180
94. 1.291 1.661 1.986 2.367 2.629 3.179
95. 1.291 1.661 1.985 2.366 2.629 3.178
96. 1.290 1.661 1.985 2.366 2.628 3.177
97. 1.290 1.661 1.985 2.365 2.627 3.176
98. 1.290 1.661 1.984 2.365 2.627 3.175
99. 1.290 1.660 1.984 2.365 2.626 3.175
100. 1.290 1.660 1.984 2.364 2.626 3.174
∞ 1.282 1.645 1.960 2.326 2.576 3.090
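As an illustration of how the table is used: for n = 10 replications a 95% two-sided confidence interval takes t from the 0.025 column at n − 1 = 9 degrees of freedom, i.e. t = 2.262. The replication results below are hypothetical:

```python
import math

# 95% confidence interval for the mean of n = 10 replication results,
# using the t value read from the table above (0.025 column, 9 d.f.).
results = [23.1, 24.7, 22.8, 25.0, 23.9, 24.2, 23.5, 24.8, 23.0, 24.4]
n = len(results)
mean = sum(results) / n
sd = math.sqrt(sum((x - mean) ** 2 for x in results) / (n - 1))
t = 2.262                               # from the table above
half_width = t * sd / math.sqrt(n)      # CI is mean +/- half_width
print(round(mean, 3), "+/-", round(half_width, 3))
```

The 0.025 column is used for a 95% two-sided interval because 2.5% of the probability is placed in each tail.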

