
IASSC Lean

Six Sigma Green Belt


Improve Phase: Course Structure
IMPROVE PHASE: COURSE STRUCTURE IMPROVE PHASE OVERVIEW

Improve phase overview (course structure)

• Simple Linear Regression
• Multiple Regression Analysis
• Designed Experiments
• Full and Fractional Factorial Experiments
• Concept Generation
• Concept selection
• Implementation plan
• Implementation - risk mitigation
• Change Management

Improve phase roadmap

Start
→ Concept generation, mistake proofing
→ Concept selection
→ Implementation plan
→ Implementation risk analysis and mitigation
→ Implementation
Stop

2
IASSC Lean
Six Sigma Green Belt
Improve Phase: Regression Analysis
REGRESSION ANALYSIS REGRESSION ANALYSIS

Overview

• Regression is used to understand the relationship between a continuous output measure (Y) and one or more continuous inputs (X). With one X and one Y this is known as bivariate data.

• Regression analysis can determine whether or not there is a statistically significant relationship between our inputs and output.

• Regression analysis can quantify the significance of the impact of each input, so that we can choose to control only the most significant inputs.

• Regression can generate a formula so that we can predict the output for a given level of each input – i.e. it quantifies Y=f(X).

Y data       No. of Xs   X data       Tool
Continuous   1           Continuous   Correlation, simple regression
Continuous   2 or more   Continuous   Multiple regression

Examples

• The level of overtime in the department should be directly related to the volume of work. Is this the case?

• Does the marketing spend have a quantifiable impact on sales?

Measures

Regression analysis gives us 3 measures of the significance of the relationship between the input(s) and the output – or the independent variable(s) and the dependent variable:

1. P-value

• As with other statistical tests, the P-value determines the level of statistical significance of the relationship between the input(s) and output.

• Usually set at a threshold of 0.05, i.e. we want to be 95% confident that an input is having a significant effect before we infer that it has.

2
REGRESSION ANALYSIS REGRESSION ANALYSIS

Measures (cont.)
2. Correlation coefficient (r)

• A measure of the strength and direction of the linear relationship


between the input(s) and output.

• Calculated for only one input at a time.

• Varies between -1 and 1:


Value of r:
-1 to -0.8 → significant negative correlation
-0.8 to +0.8 → no significant correlation
+0.8 to +1 → significant positive correlation

3. Coefficient of determination (R2)


• R2 is the proportion of the variation in the dependent variable
/ output that's driven by the independent variable(s) / input(s).

• Can be calculated for the combined impact of multiple inputs.

• Varies between 0 and 1:

Value of R²:
0 to 0.64 → variation in output not adequately accounted for by variation in input(s)
0.64 to 1 → variation in output explained by input(s)

Note: regression analysis includes several variations, such as linear, multiple linear, and nonlinear.
• For simple regression, R2 equals the square of the correlation
coefficient (r).
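The three measures can also be reproduced outside Minitab. Below is a minimal Python sketch, assuming the numpy and scipy packages and an invented illustrative data set (not the course data):

import numpy as np
from scipy import stats

# Illustrative bivariate data (hypothetical): one continuous input X, one output Y
x = np.array([2.0, 3.1, 4.2, 5.0, 6.3, 7.1, 8.4, 9.0, 10.2, 11.5])
y = np.array([5.1, 6.9, 8.2, 10.5, 12.1, 14.2, 16.8, 17.9, 20.3, 22.8])

# Correlation coefficient (r) and its P-value
r, p_value = stats.pearsonr(x, y)

# Coefficient of determination; for simple regression R^2 = r^2
r_squared = r ** 2

print(f"r = {r:.3f}, P-value = {p_value:.4f}, R-squared = {r_squared:.3f}")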

3
REGRESSION ANALYSIS - CORRELATION

[Six example scatterplots of Output vs Input:
Strong positive correlation (r = +0.99), medium positive correlation (r = +0.80), weak positive correlation (r = +0.60);
strong negative correlation (r = -0.99), medium negative correlation (r = -0.80), weak negative correlation (r = -0.60).]

4
SIMPLE REGRESSION SIMPLE REGRESSION

Roadmap

Start
→ Plan the data collection, collect the data
→ Run graphical analysis – scatterplot, fitted line plot
→ Determine statistical significance (P-value)
→ Is there a statistically significant relationship?
   No → look for other input variables
   Yes ↓
→ Determine level of explained variation (R²)
→ Does the input explain at least 64% of the variation?
   No → look for other input variables
   Yes ↓
→ Use regression formula to control significant inputs and optimize output
Stop

Worked example - overtime

Background
A manager is concerned about the levels of overtime being worked in her department. Overtime should only be used to accommodate busy periods, so she wants to check whether or not the overtime is related to the volume of work being handled by the team.

Data

Try it out
Regression.MPJ
Worksheet: Overtime

5
SIMPLE REGRESSION SIMPLE REGRESSION

Worked example - overtime Worked example - overtime


Graphical
Graphs – Scatterplot – With Regression

Graphical results
[Scatterplot with fitted line: Overtime hours (y-axis) vs Number of items processed (x-axis, 60–140); the fitted line is flat]

Graphical conclusions

• The points on the scatterplot are scattered, and the best fit line is flat
– this indicates that there is no significant relationship between the
number of items processed and the overtime hours worked by the
team.

• There are 3 points that are unusually low, indicating potential special
cause variation – i.e. there is a variable driving the variation in
overtime hours, but it is not workload.

6
SIMPLE REGRESSION SIMPLE REGRESSION

Worked example - overtime Worked example - overtime


Analytical – correlation
Stat – Basic Stats – Correlation

Analysis – correlation result
Session window:
r ≈ 0 → no linear relationship between overtime and workload

Analytical – regression – hypotheses
H0: There is no significant relationship between the number of items processed and the overtime hours worked.
Ha: There is a significant relationship between the number of items processed and the overtime hours worked.

Analytical – regression – run the test
Stat – Regression – Fitted Line Plot
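For readers without Minitab, the same fitted-line analysis can be sketched in Python with statsmodels. The overtime figures below are invented placeholders for illustration only, not the course data set:

import numpy as np
import statsmodels.api as sm

# Hypothetical overtime data: items processed (X) and overtime hours (Y)
items = np.array([70, 85, 90, 100, 110, 120, 125, 130, 95, 105, 115, 80])
overtime = np.array([14, 15, 13, 16, 15, 14, 16, 15, 5, 15, 6, 5])

X = sm.add_constant(items)          # add the intercept term
model = sm.OLS(overtime, X).fit()   # ordinary least squares fit

print(model.pvalues[1])             # P-value for the slope (workload effect)
print(model.rsquared)               # R-squared: variation explained by workload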

7
SIMPLE REGRESSION SIMPLE REGRESSION

Worked example - overtime Worked example - overtime


Analytical – results
Session window:
P > 0.05 → there is no statistically significant relationship between overtime and workload
R-sq = 0% → none of the variation in overtime is explained by the workload

Conclusion and next steps

• Having concluded that overtime in the department is not being driven by workload, the manager investigated the three unusually low overtime days.

• She tried running a time series plot to see where these days occurred:

[Time series plot: Overtime hours (y-axis) by Day (x-axis, days 1–15)]

• She noticed a pattern, that the low overtime days occurred every
5th working day, on a Friday. She concluded that the overtime
working was habitual rather than workload-driven.

8
REGRESSION ANALYSIS – WATCH OUTS

Cause and effect

• Regression looks for a relationship between two variables. It cannot determine which is the cause (i.e. independent variable or input) and which is the effect (i.e. dependent variable or output). This can only be determined by the team.

• A relationship between two variables doesn’t necessarily mean one causes the other. For example, as a city expands, both the stork population and human population grow in a linear relationship – they are correlated. However, that doesn’t prove that storks bring babies!

Sample size

• Sometimes in regression analysis, we will see a statistically significant relationship between the input and output variables (i.e. the P-value is low, < 0.05) but only a weak correlation (i.e. the correlation coefficient (r) is close to 0). This typically happens when the sample size is large, because small effects become statistically significant with enough data. At the other extreme, aim for at least 10 data points to obtain a meaningful result from regression analysis.

REGRESSION ANALYSIS - CURVATURE

Overview

• In most cases, our output (Y) will go up linearly with, or in proportion to, our input(s), the X(s).

• In some cases, the output goes up with the square of the input, or even the cube of the input. The regression line, or “line of best fit”, will then be curved.

[An example of curvature]

9
REGRESSION ANALYSIS - CURVATURE REGRESSION ANALYSIS - CURVATURE

Examples

[Scatterplot: Braking distance vs Speed (20–70). Braking distance goes up with the square of the speed of travel.]

[Scatterplot: Power (kW) vs Wind speed (5–30 mph). The energy from a wind turbine goes up with the cube of the wind speed.]

Working with curvature

• Curvature, if present, can generally be determined from a scatterplot.

• If there is curvature, we can’t use the correlation coefficient. We can still get a P-value and an R-square value by fitting a curved best-fit line to our data.

• Before fitting a curved line to a data set, we need to be confident that there is an underlying practical reason why the relationship would not be linear.

• The relationship between the variables may vary across the range of input levels. E.g. the response time in a call centre goes up linearly with utilisation from 0% up to around 50%. Above 50% the response time grows exponentially.

10
REGRESSION ANALYSIS – PREDICTION REGRESSION ANALYSIS – PREDICTION

Aim

• Having found and proven a linear (or curved) relationship between our input variable and output, we can generate a formula that quantifies the impact of the input variable on the output.

• We can use the formula to predict the output, given the settings of the input variables.

• Our predicted value for our output from our formula is called Ŷ.

Try it out
Regression.MPJ
Worksheet: Braking

Worked example – braking distance

Background

• We’ve collected data on braking distances at different speeds.

• We want to use this data to predict the braking distance at other speeds that we haven’t tested for.

Graphical

[Scatterplot: Braking distance vs Speed (20–70 mph)]

11
REGRESSION ANALYSIS – PREDICTION REGRESSION ANALYSIS – PREDICTION

Worked example – braking distance Worked example – braking distance


Analytical

Stat – Regression – Fitted line plot

From earlier, braking distance goes up with the square of the speed of travel.

Session window:
R-sq = 100% – all of the variation in stopping distance is explained by speed. The linear term corresponds to thinking distance (proportional to speed); the squared term to braking distance (proportional to speed squared).

• Using the regression formula, derived from our sample data, we can predict the braking distance for any speed within the range of speeds in our sample.

• E.g. the predicted braking distance at 35 mph would be:

0.6 + (0.2629 x 35) + (0.01571 x 35²) = 29 metres

Notes

• The regression formula is only valid for input values within the range of our sample – in this example between 20 and 70 mph.

• Outside this range, we don’t know what the relationship is, and we may get strange results. For example, at 0 mph, the regression formula would give us a braking distance of 0.6 metres, which makes no sense!
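The prediction step is just an evaluation of the fitted equation. A quick check of the 35 mph example, sketched in Python using the coefficients quoted above:

def braking_distance(speed_mph):
    # Fitted quadratic model from the worked example:
    # distance = 0.6 + 0.2629*speed + 0.01571*speed^2
    return 0.6 + 0.2629 * speed_mph + 0.01571 * speed_mph ** 2

print(round(braking_distance(35)))   # ~29 metres, matching the worked example
# Only valid for speeds inside the sampled range (20-70 mph)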

12
RESIDUALS ANALYSIS

The difference between the observed value of the dependent variable (y) and the predicted value (ŷ) is called the residual (e). Each data point has one residual.

Residual = Observed value - Predicted value

e = y - ŷ

Both the sum and the mean of the residuals are equal to zero. That is, Σe = 0 and ē = 0.

A residual plot is a graph that shows the residuals on the vertical axis and the independent variable on the horizontal axis. If the points in a residual plot are randomly dispersed around the horizontal axis, a linear regression model is appropriate for the data; otherwise, a non-linear model is more appropriate.

The table below shows inputs and outputs from a simple linear regression analysis:

x    y    ŷ        e
60   70   65.411   4.589
70   65   71.849   -6.849
80   70   78.288   -8.288
85   95   81.507   13.493
95   85   87.945   -2.945

RESIDUALS ANALYSIS - PLOTS

The residual plot for this data shows a fairly random pattern - the first residual is positive, the next two are negative, the fourth is positive, and the last residual is negative. This random pattern indicates that a linear model provides a decent fit to the data.

Below, the residual plots show three typical patterns. The first plot shows a random pattern, indicating a good fit for a linear model.

[Three residual plots: Random Pattern; Non-Random: U Shaped; Non-Random: Inverted U]

Note: Using residual plots, you can assess whether the observed error (residuals) is consistent with stochastic error. Stochastic is a fancy word that means random and unpredictable. Error is the difference between the expected value and the observed value.
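A minimal sketch of the residual calculation in Python, using the five observations from the table above:

y_observed  = [70, 65, 70, 95, 85]
y_predicted = [65.411, 71.849, 78.288, 81.507, 87.945]

# Residual e = observed - predicted, one per data point
residuals = [y - y_hat for y, y_hat in zip(y_observed, y_predicted)]

print(residuals)                 # matches the e column above (to rounding)
print(round(sum(residuals), 6))  # the sum (and mean) of the residuals is ~0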

13
IASSC Lean
Six Sigma Green Belt
Improve Phase: Multiple Regression
MULTIPLE REGRESSION MULTIPLE REGRESSION

Aim

• To analyse the impact of multiple inputs (Xs) on the primary process metric (Y) concurrently.

• To find a limited combination (the “best subset”) of inputs (Xs) that adequately explain the variation in our output (Y).

• To generate a formula that expresses the impact of the input variables on the output.

• To use the formula to predict the output, given the settings of the input variables.

Overview

• Used when both the inputs and output have continuous data.

• Regression analysis in MINITAB will work with a mix of discrete and continuous inputs, but this leads to a set of regression formulae rather than one.

Roadmap

Start
→ Plan the data collection, collect the data
→ Run graphical analysis for each input – scatterplot, fitted line plot
→ Run multiple regression
→ Determine statistical significance (P-value) for each input
→ Reduce the model by removing non-significant inputs
→ Determine level of explained variation (R²)
→ Does the input explain at least 80% of the variation?
   No → look for other input variables
   Yes ↓
→ Find the optimal combination (best subset) of inputs that drive the variation in the output
→ Evaluate residuals
→ Are the residuals normal and stable?
   No → look for other input variables
   Yes ↓
→ Use regression formula to control significant inputs and optimize output
Stop

2
MULTIPLE REGRESSION MULTIPLE REGRESSION

Worked example – system response time Worked example – system response time
Background

• Users are complaining about slow system speed. Coming out of our measure phase are 5 inputs that could be having an impact on the system response:

          Variable                   Data type
Output    System response speed      Continuous (ping time)
Input 1   Number of users            100-300, treat as continuous
Input 2   Storage allocation (%)     Continuous
Input 3   Processor allocation (%)   Continuous
Input 4   Download speed (Mb/s)      Continuous
Input 5   Upload speed (Mb/s)        Continuous

We ran 50 tests over a week, ensuring we had as wide a range as possible for the inputs across the tests.

Graphical analysis

Graph – Scatterplot – Simple
Multiple Graphs…

3
MULTIPLE REGRESSION
Worked example – system response time
Graphical analysis - result

[Scatterplots of Ping time (ms) against each input: No. of users (weak positive correlation?), Storage %, Processor %, Room temp, Down speed (medium negative correlation), Up speed. The remaining panels show no correlation.]
4
MULTIPLE REGRESSION MULTIPLE REGRESSION

Worked example – system response time Worked example – system response time
Analytical – statistical significance

Aim
Reduce the number of inputs by checking for statistical significance.

Hypotheses
H0: The input has no significant effect on the output.
Ha: The input does have a significant impact on the output.

Run test
Stat – Regression – Regression – Fit Regression Model.

Conclusion
All inputs have a statistically significant effect, except Storage %.

Analytical – level of impact

Aim
Reduce the number of inputs by checking for level of impact.

Run test
Assistant – Regression – Multiple Regression. Up to 5 variables in Assistant – storage % omitted based on previous work.
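The per-input p-value check above can also be reproduced with an ordinary multiple regression fit in Python. A hedged sketch with statsmodels; the file name and column names are placeholders invented for illustration, since the 50-test data set is not reproduced here:

import pandas as pd
import statsmodels.formula.api as smf

# Placeholder: load the 50 test results (hypothetical file and column names)
df = pd.read_csv("system_response_tests.csv")

model = smf.ols(
    "ping_time ~ users + storage_pct + processor_pct + down_speed + up_speed",
    data=df,
).fit()

# P-value per input: keep inputs with p < 0.05, drop the rest (e.g. storage_pct)
print(model.pvalues)
print(model.rsquared)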

5
MULTIPLE REGRESSION MULTIPLE REGRESSION

Worked example – system response time

Analytical – level of impact

Result
Summary report

Conclusions

• 96% of the variation in the system response time is explained by the five input variables.

• Controlling these variables will allow us to closely control the system response time.

• If we know (or set) the levels of these five inputs, we can confidently predict the system response time.

Best subsets - overview

• It can be costly - in terms of time, money and effort – to control multiple inputs.

• We can use “best subsets” analysis to determine which vital few inputs we can select to still maintain a high level of impact on our output measure.

Worked example – print quality

Model building report:

6
MULTIPLE REGRESSION

Best subsets - conclusions

• If we could only control one variable to improve the system response time, we should control the upload speed. This accounts for over 50% of the variation we see in our output.

• We can account for (and therefore control) over 80% of the variation in response time through controlling just two variables – upload speed and number of users.

• The remaining three variables should only be actively controlled if the last 20% of control is important. The best we can achieve is 94%; the remaining 6% is due to other variables / noise.

Application
Based on the conclusions above, we could significantly improve system response time with the following actions:

Input             Aim        Improvement ideas
Upload speed      Maximise   Take out new contract with network provider to give higher upload speeds.
Number of users   Minimise   Automatic logout of users after 20 minutes of inactivity.
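Minitab’s best subsets routine can be approximated by exhaustively fitting every combination of inputs and comparing the explained variation – workable here because there are only five candidate Xs. A sketch under the same assumed (hypothetical) file and column names as the earlier example:

from itertools import combinations
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("system_response_tests.csv")            # hypothetical file
inputs = ["users", "storage_pct", "processor_pct", "down_speed", "up_speed"]

results = []
for k in range(1, len(inputs) + 1):
    for subset in combinations(inputs, k):
        X = sm.add_constant(df[list(subset)])
        fit = sm.OLS(df["ping_time"], X).fit()
        results.append((fit.rsquared_adj, subset))

# Rank subsets: how much explained variation do we keep with fewer inputs?
for r2_adj, subset in sorted(results, reverse=True)[:5]:
    print(f"{r2_adj:.3f}  {subset}")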
MULTIPLE REGRESSION - PREDICTION

Overview

• As with simple regression, multiple regression generates a formula which can be used to predict the value of the process output based on the values of the inputs.

• The same watch-outs apply – around causality, and use of the formula only within the scope of the study.

• We also need to check the validity of the regression model we are using – by reviewing the “residuals”.

• Residuals are the difference between what we expected to see in our results according to the formula and what we actually saw. We can use them to check that nothing strange is going on.

To view residuals, use either:

• Stat – Regression – Regression – Fit Regression Model – Graphs – Four in one, or;

• Review the Diagnostic Report when using the Assistant.

7
MULTIPLE REGRESSION - RESIDUALS

Diagnostic Report – Multiple Regression for Ping time (ms)

Residuals vs Fitted Values
Look for nonrandom patterns and large residuals (large residuals, clusters, unusual X values, unequal variation).

• We want to see a “spotty dog” distribution of points on this graph.
• Anything else indicates:
  - curvature (the X-Y relationship isn’t linear)
  - the presence of another significant input
  - changes to the X-Y relationship
• Red points are “odd” and should be investigated.

Residuals vs Observation Order
Look for nonrandom patterns and large residuals (large residuals, cyclical patterns, trends, shifts).

• We want to see a random pattern of points on this graph.
• Anything else indicates a change in the X-Y relationship over time.

8
MULTIPLE REGRESSION - PREDICTION MULTIPLE REGRESSION - PREDICTION

Worked example – system response time

Background

• Following the analysis, the project team decide to improve the upload speed to a guaranteed 10 Mb/s and reduce the average number of live users to 120. These changes will cost £2,500 per year.

• All other inputs will stay as was.

• Working with the Financial Controller for the project, they estimate that each millisecond reduction in ping time will equate to an annualised saving of £100 for the company in reduced wait times.

• The Project Manager wants to know what benefit is likely to be seen from the improvements, and whether they are worth it.

Analysis

• Because we are now only considering two inputs, we need to refit the regression model in MINITAB.

Stat – Regression – Regression – Fit Regression Model

9
MULTIPLE REGRESSION - PREDICTION MULTIPLE REGRESSION - PREDICTION

Worked example – system response time

Analysis

Results – the regression equation
Session window

Get Minitab to predict the ping time with our inputs set to the new values:

Stat – Regression – Predict

Results – the prediction
[Output shows the predicted ping time and the likely range of ping time, given the uncertainty in our model.]
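The prediction can be reproduced from the refitted two-input model; statsmodels returns both the point prediction and an interval reflecting the model uncertainty. This is a sketch only, using the same hypothetical file and column names as before, with the new settings taken from the improvement plan:

import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("system_response_tests.csv")              # hypothetical file
model = smf.ols("ping_time ~ users + up_speed", data=df).fit()

new_settings = pd.DataFrame({"users": [120], "up_speed": [10]})
prediction = model.get_prediction(new_settings)

# Point estimate plus a 95% prediction interval (the "likely range")
print(prediction.summary_frame(alpha=0.05)[["mean", "obs_ci_lower", "obs_ci_upper"]])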

10
MULTIPLE REGRESSION - PREDICTION

Worked example – system response time

Analysis

Cost-benefit analysis

Cost: implementation cost £2,500 p.a.

Benefit (ping time): ping time improvement = (time before) – (time after) = 150ms – 100ms = 50ms

Benefit (£): 50ms improvement @ £100 / ms = £5,000

Net benefit = saving – cost = £5,000 – £2,500 = £2,500 p.a.

11
IASSC Lean
Six Sigma Green Belt
Improve Phase: Design of Experiments
DESIGN OF EXPERIMENTS

Design of experiments (DOE) is a systematic method to determine the relationship between factors affecting a process and the output of that process. In other words, it is used to find cause-and-effect relationships. This information is needed to manage process inputs in order to optimize the output.

Key concepts in creating a designed experiment include blocking, randomization and replication:

Blocking: When randomizing a factor is impossible or too costly, blocking lets you restrict randomization by carrying out all of the trials with one setting of the factor and then all the trials with the other setting.

Randomization: Refers to the order in which the trials of an experiment are performed. A randomized sequence helps eliminate effects of unknown or uncontrolled variables.

Replication: Repetition of a complete experimental treatment, including the setup.

Note: Design of Experiments (DOE) is a Six Sigma tool that helps project teams determine the effects that the inputs of a process have on the final product. DOE helps uncover the critical relationships between variables in a process that are often hidden under all of the data and identifies the most critical inputs that must be modified to ensure optimal process performance.

CONSIDERATIONS

There are many factors to consider before actually doing the experiment. Some include:

• How many factors does the design have?

• Are the levels of these factors fixed or random?

• Are control conditions needed, and what should they be?

• What are the background variables? Any hidden variables?

• What is the sample size? How many units must be collected for the experiment to be generalisable and have enough power?

• What is the relevance of interactions between factors?

• How do response shifts affect self-report measures?

• Should the client/patient, researcher or even the analyst of the data be blind to conditions?

• What is the feasibility of subsequent application of different conditions to the same units?

• How many of each control and noise factors should be considered?

2
DESIGN OF EXPERIMENTS DESIGN OF EXPERIMENTS

Aim

• To find the “sweet spot” for our process – the optimal settings for our inputs to give the best possible performance for our process.

Classical design of experiments – process

Select potential key Xs for analysis
→ Set up screening runs
→ Filter out insignificant Xs
→ Run full DOE
→ Optimise input settings
→ Verify optimised settings

3
DESIGN OF EXPERIMENTS - CONTEXT

[Tool selection matrix. Columns: basic statistical tests (e.g. T-test), ANOVA, multi-vari, regression, multiple regression, full factorial and fractional factorial designed experiments. Rows (tasks):]

• Does a single input (X) have an effect on the process output (Y)?
• Formula for the effect of a single X on Y
• Formula for the effect of multiple Xs on Y
• Find interactions between inputs
• Pro-actively set inputs
• Efficiently screen out insignificant variables
• Find optimum settings for multiple variables
• Find optimum settings for multiple variables with curvature
• Set noise variables to reduce variation
4
DESIGN OF EXPERIMENTS (DOE)

What’s different about DOE?

By proactively choosing the experiments that we run (rather than relying on historical or every-day data), we can derive a lot of conclusions from a small amount of data. This minimises the time and expense required to obtain the results we need.

Benefits

A highly-efficient way of finding the ideal combination and settings of inputs to optimise process performance.
Can identify and deal with complex interactions between inputs.

Optimisation of the process output can take several different forms:

• Minimise or maximise the output.

• Hit a target output.

• Reduce variation in the output.

• Make the process output robust against variation in noise variables.

DOE – WORKED EXAMPLE

Background

A freight company wants to minimise the fuel consumption of its fleet of delivery trucks.

The Black Belt in the company has therefore decided to run a design of experiments exercise to understand which factors are significant (the critical Xs), and what they should be set to in order to maximise fuel efficiency.

Y measure        Fuel efficiency (miles per gallon)
Inputs           Tyre pressure (psi)
                 Average speed (mph)
                 Driving style (smooth / aggressive - points score)
                 Tyre compound (A/B)
Constraining Y   All deliveries by 10am (maximum journey time 1 hour)

Noise: record; load – held constant; route – held constant; time of day – randomisation; ambient conditions (wind, air temperature) – randomisation.

5
DOE – TWO-LEVEL DESIGNS

The power of DOE comes from its use of (usually) just two levels for each input, which minimises the number of tests we need to run in order to draw valid conclusions.

Setting the levels

Tyre compound
Some factors will be attribute and will only have two levels – e.g. compound A and B in this example. We can arbitrarily choose one to be our “low” and one to be our “high” level.

Tyre pressure
For a variable with continuous data, we need to decide what pressures we will specify as “low” and “high”. These values need to be:

• As far apart as possible to give us the best chance of seeing a difference in output. Too small a range may mean that any change in fuel economy (the output) due to the change in tyre pressure is indistinguishable from the noise in the experiment.

• Within a reasonable range – i.e. not so low or high as to be dangerous or unviable. In this case, we need to keep these within the manufacturer’s guidelines.

• Ideally in a region of the X-Y graph where the relationship is linear. From earlier analysis work, we may have determined the following:

[Plot: Fuel economy vs Tyre pressure (20–38 psi), with a peak in fuel economy between 25 and 35 psi]

If we picked values of 25 and 35 psi as the “low” and “high” for our experiments, on average we would determine that tyre pressure had no effect on fuel economy, where in reality it does – we have missed a peak.

We can avoid this by either choosing our levels carefully, or adding in “centre points” to our experiment. In this case, we run the main tests at 25 and 35 psi, but also a small number of tests at the midpoint, 30 psi.

6
DOE – TWO-LEVEL DESIGNS DOE – TWO-LEVEL DESIGNS

Setting the levels (cont.)

Average speed
As for tyre pressure, the high and low values we take need to:

• Be as far apart as possible.

• Be within a reasonable range. If we are conducting tests on trucks on a motorway, then less than 50mph becomes dangerous to other road users, and above 60mph is illegal for HGVs.

• Avoid a local peak – e.g. there may be a peak fuel economy at 55mph which we would miss if we just looked at 50 and 60mph.

Driving style
We will use a GPS tracker to rate the harshness of the accelerating and braking during each experiment. We will instruct our driver to adopt a smooth style as a “low” level and an aggressive style as “high”.

Noise

As with all statistical testing, we want to minimise the effect of the noise variables, so that we can determine (“see”) the effect of the controllable input variables that we are changing.

In DOE, we can deal with noise in two ways:

1. Randomise the order in which we do the experiments – so that the effects of noise are randomised throughout our trial. This should be standard practice, unless it is difficult and/or expensive to keep changing one or more of our variables.

2. Try and measure the effect of noise – by running the whole experiment twice, in two “blocks”. We can compare the results from the two blocks, when the only change will be in the time-related noise variables, to see their combined impact. If the noise variables are having a significant effect (compared to our other variables), then we will need to either try and control them or redesign our process to make it less sensitive to these variables.

7
DOE – SAMPLE SIZE DOE – SAMPLE SIZE

Each combination of factors is called a treatment combination, e.g.

Treatment    Tyre        Average    Driving        Tyre
condition    pressure    speed      style          compound
1            35 (High)   60 (High)  Smooth (Low)   A (Low)

The simplest DOE runs every treatment combination, and does so just once. There are therefore 2^k runs, where k is the number of inputs (factors) being considered. So, a basic DOE with 4 factors would require 2^4 = 16 runs.

However, we may need to repeat each experiment in order to:

1. Detect the desired level of change in our output.

2. Understand noise variables (using blocks).

3. Analyse what is driving variation (rather than mean shift) in our output.

To establish how many repeats are required, we can use the sample size calculator in Minitab:

Stat – Power and Sample Size – 2-level Factorial Design

8
DOE – SAMPLE SIZE DOE – SAMPLE SIZE

The dialogue box fields:

1. The number of inputs we are considering (k).

2. The number of corner points (2^k).

3. How many repeats to do (what we want to find out).

4. The minimum change in our output that we want to be able to detect. As a shortcut, set this to 2, and standard deviation to 1, in order to detect a 2σ shift in the output.

5. The power – typically set to 0.9 to reduce the risk of drawing the wrong conclusions.

6. If we suspect or know that there is a non-linear (curved) relationship between one or more of our inputs and our output, we can use centre points (see later) – typically 3. Otherwise, set this to 0. In this case, we will put in 3 centre points to find any local peaks in tyre pressure or speed.

7. The standard deviation (variability) we have historically seen in our process output when none of the inputs have changed. We often don’t have this – in which case set this to 1. The effect (see 4 above) will then be in standard deviations rather than an absolute value.

8. If we want to add blocks – to understand the impact of noise – these can be done in the “Design” dialogue box.

Sample size (results)

From the session output window, we can see that, in this case, we will need two replications.

We have 16 treatment combinations, plus 3 centre points (equals 19 runs). We will replicate each of these, giving us 38 runs in total.

For the blocks, Minitab will tell us which of the two days to do each test – it has woven the blocks into the design without having to add additional runs.
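The run count itself is simple arithmetic. A small Python sketch under the assumptions used in this example (2 levels, 4 factors, 2 replicates, 3 centre points per replicate):

def doe_runs(factors, replicates=1, centre_points=0):
    # Two-level full factorial: 2^k corner runs, plus centre points,
    # all repeated for each replicate (each replicate run as one block here)
    corner_runs = 2 ** factors
    return (corner_runs + centre_points) * replicates

print(doe_runs(factors=4, replicates=2, centre_points=3))   # 38 runs, as above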

9
DOE – GENERATING THE MODEL DOE – GENERATING THE MODEL

Once we have specified our inputs and determined the number of replicates, we can generate a list of experiments or runs – the “model”.

Stat > DOE > Factorial > Create Factorial Design

There are 4 factors in our case. We then need to specify the design parameters of our experiment. First of all, we bring up the Designs dialogue box. We select:

• “Full factorial” (fractional factorials are covered later).

• 3 centre points.

• 2 replicates (from our sample size calculation).

• 2 blocks – to understand the effects of noise.
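Outside Minitab, an equivalent full-factorial run list can be generated in a few lines of Python. This is a sketch only, using coded -1/+1 units and simple randomisation within each block (replicate); it is not the exact design Minitab produces:

import random
from itertools import product

factors = ["Tyre pressure", "Average speed", "Driving style", "Tyre compound"]
corner_runs = list(product([-1, +1], repeat=len(factors)))   # 2^4 = 16 corners
centre_points = [(0, 0, 0, 0)] * 3    # simplified: a real design puts centre
                                      # points only on the numeric factors

design = []
for block in (1, 2):                  # 2 replicates, run as 2 blocks
    runs = corner_runs + centre_points
    random.shuffle(runs)              # randomise run order within the block
    design += [(block, run) for run in runs]

for block, run in design:
    print(block, dict(zip(factors, run)))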

10
DOE – GENERATING THE MODEL DOE – GENERATING THE MODEL

Next comes the Factors dialogue box:

We can name our factors (for clarity), then tell Minitab™ whether each factor is numeric (continuous) or text (attribute, nominal or ordinal).

Minitab™, and other software, use “coded units” to run the mathematical analysis. “Low” is signified by -1; “High” is signified by +1. We can replace these coded units with the actual “Low” and “High” values that we will use for our factors.

Lastly, we tick “Randomise runs” in the Options dialogue box:

This ensures that we don’t have a noise variable changing in the same manner as one of our factors - for example having all the high tyre pressure runs in the cold in the morning, and the low pressure runs in the warm in the afternoon – at which point we wouldn’t be able to determine which of the two was having an effect on our output.

11
DOE – GENERATING THE MODEL

Once we have set up our inputs, and hit “OK”, Minitab generates a list of experiments for us in a worksheet, with one run on each row.

This is the “model”, and it’s important not to edit or move the data in these columns.

In the first empty column, we will be entering our results for each run.

Try it out – Minitab file with results to analyse:
Design of experiments.MPJ

DOE – ANALYSIS

Main effects

MINITAB™ will have set up the DOE to be balanced. This means that, for each factor, there are the same number of runs at both the “high” and “low” levels.

For each factor, we can therefore take an average of all the process output results at the “low” level, and compare it to the average process output result at the “high” level.
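Because the design is balanced, a main effect is just the difference between the mean result at the high level and the mean result at the low level. A sketch in Python with pandas, assuming (hypothetically) that the worksheet has been exported to a CSV with coded -1/+1 factor columns and a Results column:

import pandas as pd

df = pd.read_csv("doe_results.csv")        # hypothetical export of the worksheet
factors = ["Tyre pressure", "Average speed", "Driving style", "Tyre compound"]

for factor in factors:
    corner = df[df[factor] != 0]           # ignore centre points
    low_mean = corner.loc[corner[factor] == -1, "Results"].mean()
    high_mean = corner.loc[corner[factor] == +1, "Results"].mean()
    print(f"{factor}: main effect = {high_mean - low_mean:.2f}")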

12
DOE – ANALYSIS DOE – ANALYSIS

Let’s get MINITAB to run this analysis:

Stat – DOE – Factorial – Analyse Factorial Design…

We can then create a main effects plot – as we did in Multi-vari:

Stat – DOE – Factorial – Factorial Plots

[Main effects plot: mean of results (fuel economy) vs Tyre pressure (25, 30, 35), Average speed (50, 55, 60), Driving style (1.0, 5.5, 10.0) and Tyre compound (A, B), with corner and centre points shown.]

Conclusions
Tyre pressure and driving style show no significant change in fuel
economy when going from “Low” level to “High” level. However, the
results for the centre points lie off-line. This indicates that there may be a
local peak that we have missed.

Fuel economy fell as average speed increased, although the off-line result
for the centre point indicates that it may not be a linear relationship. The
tyre compound appears to have the biggest effect.

13
DOE – ANALYSIS

Interactions

MINITAB™ creates designs that are orthogonal. This means that the patterns of changes to the factors are independent of each other, and that the pattern of changes in the interactions are independent of each other and the individual factors. There is no confounding, also known as aliasing.

This means that we can easily separate out the main effects of each factor on the output, and the impact of the interactions.

First let’s see if there are any interactions in our design:

Stat – DOE – Factorial – Factorial Plots

[Interaction plot matrix for Tyre pressure, Average speed, Driving style and Tyre compound. Lines not parallel → probable interaction.]

The effect of tyre pressure on fuel economy varies depending on the tyre compound.

DOE – REDUCING THE MODEL

Reducing the model (funnelling)

We can now reduce the factors we are considering, eliminating those that don’t have a statistically-significant effect on our output (fuel economy).

Let’s go back to the Pareto graph that was generated when we got MINITAB to analyse the design:

[Pareto chart of the standardized effects, with a reference line at 2.06. Factors: A = Tyre pressure, B = Average speed, C = Driving style, D = Tyre compound.]

This shows that the factors having the biggest impact on fuel economy are tyre compound (D), average speed (B) and the interaction between tyre pressure and tyre compound (AD).

Table abbreviations (not relevant to IASSC exam or Six Sigma): Pressur = Pressure, Styl = Style, Compoun = Compound, Spee = Speed.

14
DOE – REDUCING THE MODEL DOE – REDUCING THE MODEL

We can get further information from the table in the Session output window:

Source                                                     P-Value
Model                                                      0.000
Blocks                                                     0.520
Linear                                                     0.000
  Tyre pressure                                            0.332
  Average speed                                            0.000
  Driving style                                            0.050
  Tyre compound                                            0.000
2-Way Interactions                                         0.000
  Tyre pressure*Average speed                              0.289
  Tyre pressure*Driving style                              0.299
  Tyre pressure*Tyre compound                              0.000
  Average speed*Driving style                              0.508
  Average speed*Tyre compound                              0.271
  Driving style*Tyre compound                              0.101
3-Way Interactions                                         0.184
  Tyre pressure*Average speed*Driving style                0.087
  Tyre pressure*Average speed*Tyre compound                0.255
  Tyre pressure*Driving style*Tyre compound                0.281
  Average speed*Driving style*Tyre compound                0.326
4-Way Interactions                                         0.891
  Tyre pressure*Average speed*Driving style*Tyre compound  0.891
Curvature                                                  0.000
Lack-of-Fit                                                0.846

Overall, the model has a low p-value, which means that, between them, the factors are having a significant effect on fuel economy. The effect of the noise variables is not significant, as shown by the high p-value for the blocks. This, combined with the high p-value for “lack of fit” (the error in the model), indicates that the factors explain the variation in fuel economy well.

Of the main linear factors, average speed and tyre compound have a low p-value and are therefore statistically significant. Also significant is the 2-way interaction between the tyre compound and tyre pressure. Lastly, the low p-value for curvature indicates that at least some of the relationships between the factors and the output are not linear. These numerical results line up with the visual conclusions from the Pareto chart, main effects plots and interaction plots.

15
DOE – REDUCING THE MODEL DOE – REDUCING THE MODEL

Working up from the bottom of the Pareto chart, we can eliminate any terms (factors or combinations of factors) to the left of the red line. HOWEVER, if a factor appears to be insignificant (e.g. factor A), but appears in a significant interaction (AD in this case), then we need to keep the factor in the model.

Let’s re-run the analysis, keeping only the significant terms:

Stat – DOE – Factorial – Analyse Factorial Design… - Terms…

In the “Graphs…” dialogue box, we’ll also request a four-in-one plot of the residuals – as we had in regression – to test the quality of the model.

16
DOE – REDUCING THE MODEL

We now only have three factors to control:

1. Tyre pressure
2. Average speed
3. Tyre compound

What impact do these have? The R-sq (adj) value in the session window shows that these three factors account for over 98% of the variation in fuel economy, which will give us a very high degree of control.

DOE – OPTIMISATION

So, what values should we set our inputs at in order to maximise the fuel economy of the fleet?

This is where DOE comes into its own:

Stat – DOE – Factorial – Response Optimizer…

Our aim is to maximise our output (fuel economy).

17
DOE – OPTIMISATION

When we press OK, MINITAB™ will find the optimum settings for our inputs, in this case to maximise the fuel economy:

[Response optimizer output: optimal settings Tyre pressure = 25.0, Average speed = 50.0, Tyre compound = B; predicted maximum fuel economy y = 38.17 mpg; desirability d = 0.946.]

Setting tyre pressure to 25, average speed to 50 and using compound B should give us a fuel economy of 38mpg.

DOE – MODEL VALIDATION

As with regression, we need to check the residuals to ensure that the model is valid.

Looking back at the residuals plot from when MINITAB™ analysed the reduced model, this is what we saw:

[Residual plots for Results: normal probability plot, residuals versus fits, histogram, residuals versus observation order. The residuals are normally distributed and show no patterns…]

…so the model is valid. We can use it to predict future performance.

18
DOE – MINIMISING VARIABILITY DOE – MINIMISING VARIABILITY

Beyond optimising the mean of our process output, design of experiments can be used to minimise variation in our output, to make our process more robust against noise.

We can do this by setting our remaining inputs to the value where we see least variation.

In our fuel economy example, we can look at the variation in output that we see at each level of driving style:

Graph – Boxplot… – One Y, with groups

[Boxplot of Results (fuel economy) by Driving style (1.0, 5.5, 10.0)]

We can see that a moderate driving style – 5.5 on our assessment scale –
gives the least amount of variation in our results. Training our drivers to
adopt this style would give us the most robustness to noise in our
delivery process, and therefore the greatest stability and predictability.

19
IASSC Lean
Six Sigma Green Belt
Improve Phase: Full and Fractional Factorials
FULL FACTORIALS

Overview

In a full factorial experiment, at least one trial is included for all possible combinations of factors and levels. This exhaustive approach eliminates any missed interactions. However, the thoroughness makes it expensive and time-consuming for experiments with multiple factors.

A common experimental design is one with all input factors set at two levels each. These levels are called ‘high’ and ‘low’ or ‘+1’ and ‘-1’. A design with all possible high/low combinations of all the input factors is called a full factorial design in two levels.

2 = no. of levels, k = no. of factors.
Full factorial = 2^k runs
Fractional factorial (half-fraction) = 2^(k-1) runs

Note: Even if the number of factors, k, in a design is small, the 2^k runs specified for a full factorial can quickly become very large. For example, 2^6 = 64 runs for a two-level, full factorial design with six factors.

LINEAR AND QUADRATIC

Combined Effects or Interactions

These factors require careful thought before planning an experiment. For example, consider an experiment to grow plants with two inputs: fertilizer and water. Increased water is linked to increased plant growth, but there is a tipping point where too much water drowns the plant. Increased fertilizer is linked to increased plant growth, but too much can burn the roots. Considering these two effects alone is necessary before compounding the two.

Two Levels = Linear
Three Levels = Quadratic
Four Levels = Cubic

2
LINEAR AND QUADRATIC

Stat – DOE – Response Surface – Analyse Response Surface Design
Analyse Response Surface Design - Terms

Note - If your preliminary screening experiments indicated curvature, then you should use a quadratic equation. If there was no significant curvature, then try fitting a linear model.

BALANCED AND ORTHOGONAL

In DOE, a Balanced Design (Balanced Experiment) is a factorial design in which each factor is run the same number of times at the high and low levels.

The two-factor design shown below is balanced because both factors ('A' and 'B') are run an equal number of times (twice) at both levels:

Note: Most factorial designs are balanced; unbalanced designs are only used in extreme circumstances.

An experimental design is orthogonal if each factor can be evaluated independently of all the other factors.

In a two-level factorial design, this is achieved by matching each level of each factor with an equal number of each level of the other factors.

Note - a complete factorial design is both orthogonal and balanced if in fact the model that includes all possible interactions is correct. Fractional factorial and Plackett-Burman designs are normally constructed to have both orthogonality and balance; however, they may have more rows than are required for estimating parameters and error. Central composite designs are orthogonal in that all the parameters for the central composite model may be estimated, but the design itself is unbalanced. A greater or lesser number of centre points is used to achieve an estimating criterion and an error estimate.

3
FRACTIONAL FACTORIALS FRACTIONAL FACTORIALS

Overview

A variation of design of experiments allows us to obtain most of the same benefits, but with fewer runs, using fractional factorials.

This is used when we still have a large number of factors (key Xs) to consider, and we don’t have the time or money to run a large number of experiments. Running a full-factorial DOE as above would require 2^k runs, so if, for example, we had 6 factors to consider, we would require 64 runs. With replicates and centre points this grows to over 100 experiments.

A fractional factorial is used to screen out the insignificant factors, so that we can then run a full factorial on the remaining few critical Xs and study interactions, derive a prediction equation and fully optimise the process.

We can run fewer runs if we assume that the higher order interactions are unlikely to be significant.

Aliasing / confounding

Let’s look at a 4-factor DOE. In a full-factorial design, we would be analysing every interaction, up to and including the 4-way interaction ABCD, with 2^4 = 16 runs.

A 4-way interaction is when the impact of factor A on the output changes based on the specific combination of settings of factors B, C and D. In practice, interactions that are any more complex than two factors are rarely significant. This is known as the “sparsity of effects” principle.

So, instead, we could run a 3-factor DOE (with 2^3 = 8 runs), and substitute our fourth factor (D) for the ABC interaction. This is known as a half-fraction factorial, because it only requires half the number of runs. The sacrifice is that we won’t be able to tell the difference between the effect of factor D on our output and the (unlikely) ABC interaction. This is known as aliasing or confounding.

In this case, the DOE design will also confound the two-way interactions. We won’t be able to differentiate between the effects of AB and CD, AC and BD, or AD and BC. For the purposes of screening out our factors, this may well be acceptable, and worth the reduction in experiments.
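The half-fraction construction can be made concrete with a short Python sketch: generate the 2^3 runs for A, B and C in coded units, then set D equal to the ABC interaction (the generator D = ABC), which is exactly where the aliasing comes from:

from itertools import product

# 2^3 = 8 runs for factors A, B, C in coded units
runs = []
for a, b, c in product([-1, +1], repeat=3):
    d = a * b * c                 # generator D = ABC: D is aliased with ABC
    runs.append((a, b, c, d))

for run in runs:
    print(run)
# 8 runs instead of 16; the price is that D and ABC (and AB/CD, AC/BD, AD/BC)
# cannot be separated - a resolution IV half-fraction, 2^(4-1).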

4
FRACTIONAL FACTORIALS FRACTIONAL FACTORIALS

Resolution

The degree of confounding of a particular design in DOE is called the resolution. This is given by the sum of the levels of the interactions that are confounded. In the above example, with a half-fraction, 4-factor design, we had a resolution IV design. This meant that we confounded 2-way interactions with other 2-way interactions, and also main (single factor) effects with 3-way interactions. The resolution is therefore:

2 + 2 = 3 + 1 = resolution 4
(usually written as resolution IV)

Resolution III designs are available for e.g. 3- and 5-factor DOEs, where we assume there are no interactions, and therefore allow confounding between main effects and 2-way interactions. This minimises the number of runs, but is far from ideal and is not recommended.

Resolution IV designs are acceptable. Resolution V designs and above are better.

MINITAB™ can show us the available fractional factorial designs for a given number of factors, along with the associated resolution and number of runs:

Stat – DOE - Factorial – Create Factorial Design…

5
FRACTIONAL FACTORIALS FRACTIONAL FACTORIALS

Resolution

This shows the available designs (full and fractional) for each number of factors.

Fractional factorials are given the notation:

2_R^(k-p)   (i.e. 2 to the power k-p, with the resolution R written as a subscript)

Where:

• 2 = the number of levels for each factor.

• k = the number of factors.

• p = the fraction of the design.
  (1 means half-fraction, 2 means quarter-fraction, etc).

• R = the resolution (III, IV, V, etc).

We can then select our design based on the balance between minimising
confounding and the time, effort and expense of larger numbers of runs.

6
IASSC Lean
Six Sigma Green Belt
Improve Phase: Concept Generation
CONCEPT GENERATION CONCEPT GENERATION

Aim

• To find effective and sustainable solutions or process improvements that control our key inputs (CTQs).

• To find solutions that add value and/or reduce cost for all stakeholders.

Tools

• Brainstorming
• Benchmarking
• Blue-sky thinking
• Ideal final result
• Available resources

Value added example – increasing sales

Potential solution: Offer sales discounts
Impact on customer value: Same benefit (product), reduced cost (lower price)
Impact on business value: Reduced benefit (lower revenue / profit), same cost

Potential solution: Pay suppliers less – reduce selling price
Impact on customer value: Same benefit (product), reduced cost (lower price)
Impact on business value: Business retains margin; suppliers see reduced benefit (lower revenue / profit) for same cost

Potential solution: Reduce product cost through product or process redesign
Impact on customer value: Same benefit (product), reduced cost (lower price)
Impact on business value: Increased benefit (higher revenue), reduced cost

Potential solution: Sell product bundles (product + extended warranty / service contract)
Impact on customer value: Increased benefit (greater offering), less cost (more convenient to buy package)
Impact on business value: Increased benefit (higher revenue), increased cost (outweighed by revenue)

2
CONCEPT GENERATION – LEAN CONCEPT GENERATION – LEAN

Areas of focus Product / info focus

Focus on flow

•Everything – people, equipment, measures – is there to serve the


flow of the product or service to the customer.

•Concentrate on streamlining the way the product or information


moves not the way machines or people move.

Aim for one-piece flow

•Reduce batch sizes.

Focus on reducing overall leadtime

•This allows for much greater responsiveness to changes in customer


demand.

•Mainly achieved through reducing inventory or queues between processes → this reduces waiting time in the system.

3
CONCEPT GENERATION – LEAN CONCEPT GENERATION – LEAN

Avoid overburden: line balance

[Bar chart: cycle time (mins) for each process step, compared with the takt time and a 90% of takt line]

• Ensure the process steps are balanced to takt time, and the customer required rate.

• Ensure there is “give” in the system. Aim to have cycle times below 90% of the takt time, or even below 80% if there are significant issues in the process (quality, breakdowns, uneven demand, etc).

• Protect and closely manage bottlenecks: the processes with the longest cycle times.

Avoid overburden: pull systems

• Embed the line balance through “pull” systems, where work is triggered only when required.

• Use pull systems to avoid the bullwhip effect (amplifying of demand variations down the supply chain).

• Pull systems minimise (but don’t eliminate) inventory in the whole value stream (supply chain).

• Pull systems enable rapid response to changes in demand profile.

4
KANBANS KANBANS

What are Kanbans?

Kanbans are simple local controls that introduce “pull” throughout the
supply chain.

There are two types of Kanban:

• Withdrawal (or “Transport” or “Incoming Material”) Kanban – a signal (often a card) or instruction from a downstream process to say that it is ready for parts (or documents, orders, invoices, etc.) to be moved on to the next process. In a production environment, they are also used to trigger the issue of parts from stores when (and only when) required. This minimises inventory, and keeps parts at the lowest level of build until required.

• Production (or “Signal” or “Production Instruction”) Kanban – a signal to a process to start processing parts (or information, documents, etc.). This usually comes from a downstream inventory point that has dropped below a threshold and needs replenishing. This system avoids the waste of overproduction.

Kanban control systems need:

• Fairly limited variety of product (see simplicity slide later).

• Stable, predictable overall demand.

• “No faults forward” - defect-free parts sent to subsequent processes.

5
CONCEPT GENERATION – LEAN CONCEPT GENERATION – LEAN

Avoid overburden: evenness

Demand management
•Put in demand management systems and incentives to provide regular stable demand.
•Combats one of the main sources of “mura” or unevenness.

Level the schedule
•Level the schedule with a regular mix – using, for example, a heijunka board.

Level the buy
•Regular, smaller deliveries from suppliers.
•May need to have more local suppliers.
•Use pull systems to avoid the bullwhip effect.

Make all operations visible

Expose the hidden factory (or office)
• Make it obvious what is going on, when things are not as they should be.
• E.g. colour-coding to clearly indicate good/bad levels on a gauge.

6
CONCEPT GENERATION – LEAN CONCEPT GENERATION – LEAN

Make all operations visible Make all operations visible


Put in visual management

• Tasks, production and/or people scheduling are highly visible, rather than held on a server or in a manager’s office.

• Standard work for processes is obvious and reinforced by the set-up of the area so that deviation from the standard stands out.

• Tasks and performance for quality and maintenance are immediately apparent at each work station or office area.

• Issues are visibly flagged – e.g. through Andon lights.

Put in Go-see-act systems of management

• Managers manage by walking around. Go to the Gemba, where the action is happening.

• Production control and scheduling are based on the shopfloor.

7
CONCEPT GENERATION – LEAN CONCEPT GENERATION – LEAN

Expose and fix waste

• Inventory hides waste.

• Reducing inventory is like lowering the water level in a river - the rocks are exposed, and boats start having problems. In an industrial environment, reducing inventory means that the wastes – the rocks that are restricting flow and costing us money - become visible and painful:

[Diagram: river of inventory concealing rocks labelled Rework, OEE, Queues, NVA]

When waste is exposed, we then need to deal with the issues.

The best approach is to gradually reduce the inventory, with a process and resources in place to fix the wastes as they are exposed.

Further down the journey, some companies have implemented stop-and-fix, where the production line or the process stops as soon as an issue is found, and is not restarted until it is fixed.

Participation and partnership

A large part of the waste in a process occurs where there are hand-offs, or where one department is working in its best interests, but inflicts pain on other groups.

A key pillar of Lean philosophy is the concept of participation and partnership. Major elements of this are:

• Hoshin kanri – the cascade of business strategy and objectives through the company, and the communication of results up the organisation.

• Working with suppliers and customers to eliminate waste through the whole value chain.

• Finding win-win solutions between departments and stakeholders – solutions where value is added for both parties, not just a compromise.

• Recruitment and performance appraisal based on teamworking ability.

• Empowering employees to make improvements, with improvement projects run locally.

• Accounting rules that drive waste removal – e.g. financial value added to a product is not realised or registered until the customer requests and receives the item. This avoids encouraging the factory to make as much stuff as possible, whether it’s needed or not, because it improves our traditional financial measures.

8
CONCEPT GENERATION – LEAN 5S

Simplicity

Aim
To simplify designs, production systems, information systems, transactional processes and supply chains without compromising quality and customer service.

Tools

• Part count reduction – e.g. through commonality, late configuration or simplified design.

• Working with a few trusted suppliers.

• Multiple simple production lines – with one product on each, and the smallest, simplest possible machines – rather than complex lines with complex automation.

• Simplifying computer systems – e.g. replacing MRP systems with visual pull systems (where appropriate).

5S – Aim

We need to remember that Lean is a philosophy, with tools that support it, not a method or checklist.

One of the key underlying principles of Lean is that we need an organised workplace as a pre-requisite, a foundation, before any improvement work can be done.

An organised workplace:

• Makes issues (specifically waste) visible, and therefore makes sure that we are improving the right things in the right areas.

• Removes many different wastes (movement, conveyance, time spent searching for items, lost items, etc.) – and therefore improves quality and productivity.

• Is a pre-requisite before other solutions can be implemented – e.g. Kanban.

• Ensures that improvements are sustainable, and that the process doesn’t revert to its former state.

• Improves motivation – an organised environment is much more conducive to good employee morale.

• Reduces health and safety risks.

The tool or process for implementing and sustaining an organised workplace was developed in Japan. It is called 5S, named after the five steps in the process, which all begin with S in Japanese.

9
5S 5S – SORT

Overview

Sort (Seiri)
• Identify and remove unnecessary items from the workspace

Straighten (Seiton)
• Set the workspace out in a logical fashion

Shine (Seiso)
• Make the workplace visually appealing

Standardise (Seiketsu)
• Set the best layout and way of working as the standard
• Maintain and adhere to that standard

Sustain (Shitsuke)
• Make 5S habitual through monitoring and prioritizing
• Make issues visual and drive resolution

Sort

Key points:

• A team effort, closely involving those in the area.

• Categorise items and sort them accordingly:
  - Frequently used items → hold close by
  - Infrequently used items → storage / archive
  - Records required by policy / law → storage / archive
  - Usage unknown → store for 6 months; if not used, dispose
  - Unnecessary items → sell, recycle, dispose

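The sort decision above can be expressed as a simple lookup. Below is a minimal Python sketch of that rule; the category labels follow the diagram, while the function name and the wording of the actions are illustrative assumptions rather than part of any standard 5S toolkit.

# Minimal sketch of the 5S "Sort" dispositions shown above.
# Category labels follow the diagram; the function name and wording are illustrative.

DISPOSITIONS = {
    "frequently used": "hold close to the point of use",
    "infrequently used": "move to storage / archive",
    "required by policy or law": "move to storage / archive",
    "usage unknown": "store for 6 months; if still unused, dispose",
    "unnecessary": "sell, recycle or dispose",
}

def sort_disposition(category):
    """Return the recommended action for an item category."""
    key = category.strip().lower()
    if key not in DISPOSITIONS:
        raise ValueError("Unknown category: " + category)
    return DISPOSITIONS[key]

# Example usage:
# sort_disposition("usage unknown") -> "store for 6 months; if still unused, dispose"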
10
5S – STRAIGHTEN

Straighten

“A place for everything, and everything in its place.”

Key points:

• A designated, labelled place for all significant items.
• A logical layout which makes workplace items easy to find, by anyone.

5S – SHINE

Shine

Key points:

• Cleaning the workplace can identify wastes – leaks, broken items, etc.
• A clean, bright, painted workplace raises morale, improves quality and turns the workplace into a marketing tool.
• Part of this step is adapting the equipment, processes and systems to ensure that the work environment stays clean and bright.
11
5S – STANDARDISE

Standardise

Key points:

• Sort, Straighten and Shine have introduced best practice to our workplace. This step embeds this best practice by introducing visual controls so that any divergence from the standard process stands out.

• Typical controls include painted “footprints” on the ground for floor-standing items, signage, colour-coding of items, shadow-boards and painted areas to differentiate gangways, walkways, inventory areas.

Standardised work

Beyond standardising the layout of the workplace, Lean philosophy promotes the standardisation of best practice in all work processes.

This has multiple benefits:

• It reduces (or eliminates) variation in our work methods, giving consistent process outputs.

• It ensures everyone is working to the best-known process – maximising efficiency, quality and productivity.

There is still room for creativity, albeit not lone-wolf creativity. Regular, local, team-based kaizen events are used to improve the current process. The standard work is then updated to embed the new enhanced process.
12
5S – SUSTAIN

Sustain

Key points:

• Requires local ownership and accountability.
• Techniques for sustaining include:
  o Dedicated 5S time each shift.
  o Checklists.
  o Internal audits and competitions – made visual on noticeboards.

MISTAKE PROOFING

Overview

• In a process, mistake proofing is the use of any automatic device or method that either makes it impossible for an error to occur or makes the error immediately obvious once it has occurred.

• Also known by its Japanese name – poka-yoke.

• Ideally, integrated into the product design rather than the process.
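The checklists and internal audits listed under Sustain above are often scored so that results can be posted on noticeboards. The following Python sketch shows one possible way to summarise such an audit; the 1-5 scale, the threshold and the function name are assumptions for illustration, not an IASSC-prescribed format.

# Illustrative 5S audit scorer: each S is rated 1-5 by the auditor.
# The 1-5 scale and the "needs attention" threshold are assumptions.

FIVE_S = ["Sort", "Straighten", "Shine", "Standardise", "Sustain"]

def audit_summary(scores, threshold=3):
    """Summarise a 5S audit: total score plus any S scoring below threshold."""
    missing = [s for s in FIVE_S if s not in scores]
    if missing:
        raise ValueError("Missing scores for: " + ", ".join(missing))
    total = sum(scores[s] for s in FIVE_S)
    needs_attention = [s for s in FIVE_S if scores[s] < threshold]
    return {"total": total, "maximum": 5 * len(FIVE_S),
            "needs_attention": needs_attention}

# Example usage (results would typically be posted on the area noticeboard):
# audit_summary({"Sort": 4, "Straighten": 3, "Shine": 2,
#                "Standardise": 4, "Sustain": 3})
# -> {'total': 16, 'maximum': 25, 'needs_attention': ['Shine']}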
13
MISTAKE PROOFING

Product Design Example – 3-pin plug and socket

Potential Failure Mode → Preventative Design Feature

• Insertion the wrong way around → 1. 3-pin layout ensures correct insertion into the socket.
• Wires connected incorrectly → 2. Wires are colour-coded.
• Fuse omitted → 3. Fuse connects the live pin to the live cable – the system will not go live without the fuse in place.
• Live connection made before earth → 4. Earth pin is longer than the live pin – so it makes contact first during insertion.

Operations Process Design Example – pick control

Potential Failure Mode → Preventative Design Features

• Incorrect screws given to customer → Pick-by-lights system; barcode verification of screws; confirmation button pressed by operator.
• Material loaded into incorrect location → Barcode verification of screws to location; load by screw/part number.
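The pick-control example relies on barcode verification to make a wrong pick immediately obvious. Below is a minimal Python sketch of that check; the part numbers, function name and messages are hypothetical, and a real check would sit inside the warehouse or pick-by-lights software.

# Minimal sketch of a barcode-verification poka-yoke for picking.
# Part numbers, names and messages are illustrative only.

def verify_pick(expected_part, scanned_barcode, operator_confirmed):
    """Release a pick only if the scanned barcode matches and the operator confirms."""
    if scanned_barcode != expected_part:
        # Error made immediately obvious: the wrong item cannot be issued.
        return "REJECT: scanned part does not match the order line"
    if not operator_confirmed:
        return "HOLD: waiting for operator confirmation button"
    return "OK: pick released"

# Example usage (hypothetical part numbers):
# verify_pick("SCR-M4-12", "SCR-M4-12", operator_confirmed=True)  -> "OK: pick released"
# verify_pick("SCR-M4-12", "SCR-M5-16", operator_confirmed=True)  -> "REJECT: ..."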
14
IASSC Lean
Six Sigma Green Belt
Improve Phase: Implementation
CONCEPT SELECTION

Roadmap

Start

1. Solutions from concept generation.
2. Filter out solutions based on practicality and business constraints.
3. Prioritise solutions based on ease-of-implementation and impact on CTCs (e.g. using an X-Y matrix).
4. Combine solutions to achieve greatest impact on CTCs.
5. Ensure consensus on final solutions.

Stop

IMPLEMENTATION PLAN

Aim

• As with the overall project plan, the implementation plan is used to lay out and agree the schedule and ownership for implementation of the agreed solutions.

• Typical tools include who-what-when (most basic) through to Gantt charts and RACI diagrams.

Example

[Figure: a combined RACI / Gantt chart for an equipment purchase. Tasks (Specify new equipment, Get quotes, Purchase new equipment, Commission new equipment) are scheduled across Wk1–Wk6, with R/A/C/I roles assigned to the functions involved (Mfg Eng'g, Indirect Purchasing, Finance, HS&E).]
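A common sanity check on a RACI chart is that every task has exactly one Accountable and at least one Responsible. The Python sketch below applies that check to a notional version of the example above; the role assignments are illustrative reconstructions, not the exact cells from the original chart.

# Illustrative RACI consistency check: one A and at least one R per task.
# Task and function names follow the example; assignments are notional.

raci = {
    "Specify new equipment": {"Mfg Eng'g": "A/R", "Indirect Purchasing": "C", "Finance": "C"},
    "Get quotes": {"Indirect Purchasing": "A/R", "Mfg Eng'g": "C", "Finance": "C"},
    "Purchase new equipment": {"Indirect Purchasing": "A/R", "Mfg Eng'g": "I", "Finance": "I", "HS&E": "I"},
    "Commission new equipment": {"Mfg Eng'g": "A/R", "Indirect Purchasing": "R", "HS&E": "I"},
}

def check_raci(plan):
    """Return a list of problems found in the RACI assignments."""
    problems = []
    for task, roles in plan.items():
        codes = [code for cell in roles.values() for code in cell.split("/")]
        if codes.count("A") != 1:
            problems.append(task + ": needs exactly one Accountable")
        if "R" not in codes:
            problems.append(task + ": needs at least one Responsible")
    return problems

# Example usage:
# check_raci(raci) -> []  (the notional plan above passes both checks)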
IMPLEMENTATION - RISK MITIGATION

Aim

• To identify the likely risks to the successful and timely implementation of the solutions.

• To prioritise which risks need mitigating actions. This can be done using a 9-box (likelihood of occurrence against impact on plan), in effect a simplified FMEA, as shown below.

[Figure: 9-box grid plotting impact on plan against likelihood of occurrence. Example implementation risks include: project budget cut during the September review; equipment supplier prioritises work for other customers; lack of budget for IT changes; regulatory approval takes longer than planned; change of personnel during implementation; delays to equipment manufacture due to summer holidays; equipment modifications required during commissioning.]

• Risks in the red areas – mitigation and rescoring is required.
• Risks in the yellow areas – requirement for mitigation to be agreed by the team and sponsor / process owner.
• Risks in the green areas – no mitigation required.

SUSTAINABILITY – RISK ASSESSMENT

Aim

• To identify the likely risks to the sustainability of the solutions.

• To prioritise which risks need mitigating actions. This can be done using a 9-box, in effect a simplified FMEA.

[Figure: 9-box grid plotting impact on plan against likelihood of occurrence. Example sustainability risks include: future reduction in headcount removes key roles from the new process; return to the old process when things get busy; change of personnel running the process; monitoring of inputs/outputs lapses; reaction plan not followed.]

• Risks in the red areas – mitigation and rescoring is required.
• Risks in the yellow areas – requirement for mitigation to be agreed by the team and sponsor / process owner.
• Risks in the green areas – no mitigation required.
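The 9-box logic can be reduced to a simple classification of likelihood and impact. The Python sketch below shows one possible implementation; the low/medium/high scale and the zone boundaries are assumptions that a team would agree for itself rather than a prescribed rule.

# Illustrative 9-box classifier: likelihood and impact are each rated
# "low", "medium" or "high"; the zone boundaries below are assumptions.

LEVELS = {"low": 1, "medium": 2, "high": 3}

def nine_box_action(likelihood, impact):
    """Map a risk to the action implied by its red / yellow / green zone."""
    score = LEVELS[likelihood.lower()] * LEVELS[impact.lower()]
    if score >= 6:      # e.g. high likelihood with medium or high impact
        return "RED: mitigation and rescoring required"
    if score >= 3:      # middle band of the grid
        return "YELLOW: team and sponsor / process owner agree whether to mitigate"
    return "GREEN: no mitigation required"

# Example usage:
# nine_box_action("high", "medium") -> "RED: mitigation and rescoring required"
# nine_box_action("low", "high")    -> "YELLOW: ..."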
ACTIONS

Typical sustainability risk mitigation actions

1. Identify the likely risks to the successful and timely implementation of the solutions.

2. Prioritise which risks need mitigating actions.

3. Embed the improvements into software, forms, layout, etc.

4. Include the improvements in the training curriculum for new starters.

5. Make input controls highly visible.

6. Procedurise the process.

7. Assign responsibility for new process elements, and write it into job descriptions.

8. Carry out regular control plan audits.

CHANGE MANAGEMENT

Overview

• Mitigating risk and moving into the Control phase requires a change management strategy.

• Use a methodology like the Kotter Change Management Model to help manage the “people” side of your suggested changes.
CHANGE MANAGEMENT TOOLS

Below is a list of tools which can help you manage change as you implement your plans:

• Action Plan
• Bulls Eye Chart
• Business Case / SWOT Analysis / Team Charters
• Change Readiness Assessment
• Communication Strategy
• Control the Controllables
• Elevator Pitch / 15 Words
• Force Field Analysis
• Future State (Vision)
• Gap Analysis
• GRPI – Goals, Roles & Responsibilities, Process, Interpersonal Survey
• In / Out of Scope
• Skills Inventory
• Stakeholder List & Strategy
• Success Factors
• Leadership Change Readiness Assessment
• More of… / Less of…
• RACI Chart
• Risk Control