
BA5107 TOTAL QUALITY MANAGEMENT

UNIT 3

STATISTICAL PROCESS CONTROL (SPC)

Statistical Process Control (SPC) is a methodology for monitoring


a process to identify special causes (or assignable causes or man-made
causes) of variation and signalling the need to take corrective action when it
is appropriate. When special causes are present, the process is deemed to be
out of control. If the variation in the process is due to natural causes (or chance
causes or common causes) alone, the process is said to be in statistical control.

The seven old statistical process control tools/ 7QC tools are: (1)
Check sheet, (2) Histogram, (3) Scatter diagram, (4) Control chart, (5)
Pareto chart, (6) Cause-and-effect diagram, and (7) Process flow chart.
The term ‘statistical’ is a misnomer since some of these tools have nothing to
do with statistics.

(1) Check Sheet:

* Check sheets are a systematic way of collecting and recording data. They
are also called ‘Tally sheets’. They are used to indicate the frequency of a
certain occurrence. They can be easily used even by shop floor personnel.
They facilitate quick decisions from the data collected.

* The format is tailored to suit each situation/application. The counts are


marked as |, ||, |||, ||||, with a fifth stroke drawn diagonally across each group of four, so that counts are grouped in fives. Totals for each
category can be quickly calculated. Check sheets can be used to analyze types
of defects, causes of defects, nature of complaints, etc.

Example: Check sheet for customer complaints.

Nature of complaint    Tally            Frequency

Delayed delivery       |||| ||          07

Missing items          |||              03

Damaged items          |||| |||| ||     12

Invoice errors         ||||             04

TOTAL                                   26
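Since a check sheet is simply a frequency count, the same tally can be produced programmatically. A minimal Python sketch (the complaint records below are hypothetical, chosen to match the example table):

```python
from collections import Counter

# Raw complaint records as they might arrive, one entry per complaint
# (hypothetical data matching the table above).
complaints = (
    ["Delayed delivery"] * 7 + ["Missing items"] * 3 +
    ["Damaged items"] * 12 + ["Invoice errors"] * 4
)

tally = Counter(complaints)                 # category -> frequency
for nature, freq in tally.most_common():
    print(f"{nature:<20} {'|' * freq:<15} {freq:02d}")
print(f"{'TOTAL':<20} {'':<15} {sum(tally.values()):02d}")
```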

(2) Histogram:

* It is a type of bar graph showing frequency distribution. It shows the


variation in a process. It consists of a set of rectangles that represent the
frequency of observed values in each category.



* The range of values (of a variable) is divided into a number of groups, called
class intervals or cells. These are shown on the x-axis while their frequencies
of occurrence are shown on the y-axis. The widths of all cells are equal.

Example: Histogram of students classified based on age.


* The shape of the histogram can indicate whether the frequency distribution
is normal, skewed, peaked, flat, etc. Frequency tables, check sheets can be
converted into histograms.
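The grouping into class intervals can be illustrated with a short sketch; the ages below are hypothetical and the cell width is an arbitrary choice:

```python
# Group raw values into equal-width class intervals (cells) and
# count the frequency in each cell: the data behind a histogram.
ages = [21, 22, 22, 23, 23, 23, 24, 24, 25, 25, 26, 27, 27, 28, 30]

low, width, cells = 20, 2, 6        # cells: 20-21, 22-23, ..., 30-31
counts = [0] * cells
for age in ages:
    counts[min((age - low) // width, cells - 1)] += 1

for i, count in enumerate(counts):
    start = low + i * width
    print(f"{start}-{start + width - 1}: {'#' * count} ({count})")
```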

(3) Scatter Diagram:


* It helps to analyze cause-and-effect relationship between two variables. The
independent variable is plotted on the x-axis and the dependent, response
variable on the y-axis. The data is plotted as a cloud of points. The density
and the direction of the cloud indicate how the two variables influence each
other – the various possibilities being: positive correlation, negative
correlation, no correlation, etc.

* The advantage of scatter diagram is that once the exercise is carried out, it
is possible to extrapolate the results for any given situation.
* Examples of situations where scatter diagrams can be used for analysis are:
level of training vs. number of errors, equipment age vs. number of
breakdowns, work experience vs. number of accidents, etc.

(4) Control Chart:


* A ‘Run chart’ is a chart that plots data pertaining to a variable or
characteristic over time. The control chart is a type of run chart that is used
in Statistical Quality Control (SQC) to monitor the quality of a process
continuously.
* It was developed by Walter A. Shewhart in 1924 to identify common cause
and special cause variations in a process.
* It is a graph consisting of three horizontal lines: central line (CL), upper
control limit (UCL), and lower control limit (LCL). Random samples from a
process are drawn at specific intervals, the sample serial number is shown on
the x-axis while the quality characteristic being measured is plotted on the y-
axis.

* The central line is set at the mean value while the UCL and LCL are set at
±3 sigma limits above and below the mean. If all readings fall between the
UCL and LCL, the process is inferred to be in control and if the readings fall
beyond the limits, the process is deemed to be out of control.

* The various types of control charts are:

x̄-chart and R-chart (for variables) and

p-chart, np-chart, c-chart, and u-chart (for attributes).



A control chart is built from the following elements:

* Points representing a statistic (e.g., a mean, range, or proportion) of measurements of a quality characteristic in samples taken from the process at different times [the data];
* The mean of this statistic, calculated using all the samples (e.g., the mean of the means, mean of the ranges, mean of the proportions);
* A centre line drawn at the value of the mean of the statistic;
* The standard error of the statistic (e.g., standard deviation/√n for the mean), also calculated using all the samples;
* Upper and lower control limits (sometimes called "natural process limits"), typically drawn at 3 standard errors from the centre line, that indicate the threshold at which the process output is considered statistically 'unlikely'.

The chart may have other optional features, including:


* Upper and lower warning or control limits, drawn as separate lines, typically two standard errors above and below the centre line;
* Division into zones, with the addition of rules governing frequencies of observations in each zone;
* Annotation with events of interest, as determined by the Quality Engineer in charge of the process's quality.
Applications

If the process is in control (and the process statistic is normal),


99.7300% of all the points will fall between the control limits. Any
observations outside the limits, or systematic patterns within, suggest the
introduction of a new (and likely unanticipated) source of variation, known as
a special-cause variation. Since increased variation means increased quality
costs, a control chart "signalling" the presence of a special cause requires
immediate investigation.
This makes the control limits very important decision aids. The
control limits provide information about the process behavior and have no
intrinsic relationship to any specification targets or engineering tolerance. In
practice, the process mean (and hence the centre line) may not coincide with
the specified value (or target) of the quality characteristic because the process'
design simply cannot deliver the process characteristic at the desired level.
Control charts omit specification limits or targets because of the
tendency of those involved with the process (e.g., machine operators) to focus
on performing to specification when in fact the least-cost course of action is
to keep process variation as low as possible. Attempting to make a process
whose natural centre is not the same as the target perform to target
specification increases process variability and increases costs significantly
and is the cause of much inefficiency in operations. Process capability studies
do examine the relationship between the natural process limits (the control
limits) and specifications, however.
The purpose of control charts is to allow simple detection of events
that are indicative of actual process change. This simple decision can be
difficult where the process characteristic is continuously varying; the control
chart provides statistically objective criteria of change. When change is
detected and considered good its cause should be identified and possibly



become the new way of working, where the change is bad then its cause
should be identified and eliminated.
The purpose in adding warning limits or subdividing the control chart
into zones is to provide early notification if something is amiss. Instead of
immediately launching a process improvement effort to determine whether
special causes are present, the Quality Engineer may temporarily increase the
rate at which samples are taken from the process output until it's clear that
the process is truly in control. Note that with three-sigma limits, common-
cause variations result in signals less than once out of every twenty-two
points for skewed processes and about once out of every three hundred
seventy (1/370.4) points for normally distributed processes.[7] The two-sigma
warning levels will be reached about once for every twenty-two (1/21.98)
plotted points in normally distributed data. (For example, the means of
sufficiently large samples drawn from practically any underlying distribution
whose variance exists are normally distributed, according to the Central Limit
Theorem.)

(5) Pareto Chart:

* Vilfredo Pareto (1848 – 1923), an Italian economist, outlined the Pareto


Principle – variously known as “Vital few, Trivial many” and “80:20 Principle”
[80% of effects are produced by 20% of causes]. This is the basis for the Pareto
Chart which helps to prioritize problems, issues, costs, etc. in order of
importance. The technique also goes by the name of ‘ABC Analysis’.

* The 80:20 principle is found to operate in diverse fields: 80% of problems


are caused by 20% of workers/causes, 80% of the wealth is concentrated in
20% of the population, 80% of inventory costs are accounted for by 20% of
inventory, and so on.

* A Pareto Chart/Diagram is a graph where the data are classified in


descending order from left to right. The data may pertain to complaints,
breakdowns, costs, etc. The vital few are grouped on the left and the trivial
many are grouped on the right.

* The x-axis shows the item classification while the y-axis shows the numbers
or percentages of occurrence. A variation of the Pareto Chart shows
cumulative percentages on the y-axis.

* ABC classification arbitrarily separates the items/problems/breakdowns,


etc. into 3 classes – A class, B class, and C class in order of importance.

(6) Cause-and-Effect Diagram:


* It is a picture made up of lines and symbols designed to show the
relationship between an effect and its causes. It was developed by Dr.



Ishikawa in 1943 and hence is also known as Ishikawa diagram. Since the
diagram looks like the skeleton of a fish, it is also called Fishbone diagram.

* For every effect, there are likely to be numerous causes. The causes can be
grouped under a number of main causes, with each main cause having level
one, level two causes, and so on. Analysis of causes is normally done through
brainstorming.

* Once the Cause-and-Effect Diagram is complete, solutions are developed to


correct the causes and improve the process. The diagram has wide application
in research, manufacturing, marketing, office operations, services, and so on.

Example: Cause-and-Effect Diagram for formation of excess scrap.

[Fishbone diagram omitted: the effect "Excess scrap" at the head, with main-cause branches for Materials, People, Process, and Machines.]

* Cause-and-Effect Diagrams can be used to investigate either a “bad” effect


and take corrective action, or a “good” effect and to learn from those causes.

(7) Process Flow Chart:


* This diagram shows the flow of the product or service as it moves through
the various processing operations. The diagram makes it easy to visualize the
entire system, identify potential trouble spots, and locate control activities.

* Improvements can be achieved by changing, reducing, combining, or


eliminating steps.

Example: Process Flow Chart for job order manufacturing.



Receive order → Procure materials → Schedule production → Produce → Inspect → Pack → Dispatch → Collect payment

CONTROL CHARTS :

* A control chart is a graph in which the results of inspection of samples are


plotted from time to time. Time is measured on the horizontal axis and the
value of a variable on the vertical axis. The chart consists of three horizontal
lines: a central line (CL) indicating the mean value, a line above it and the
other below it. They are called upper control limit (UCL) and lower control
limit (LCL) respectively. UCL and LCL are called statistical limits.

* If the process is under control, it means that the variation is only due to
common causes. In such a case the measured value will lie between UCL and
LCL. Any point lying outside the limits is due to a special cause variation. In
such cases, the organization should take efforts to find the root cause of the
problem and eliminate it.

* Use of control charts needs understanding of specification limits and


statistical limits. While the specification limits are set by the customer, the
process performance decides the statistical limits.

* Types of control charts:

Control charts for variables: (1) x̄-charts, and (2) R-charts.

Control charts for attributes: (1) p-charts and np-charts (for fraction defectives) and (2) c-charts and u-charts (for number of defects).

* Control charts have three basic applications: (1) to establish a state of


statistical control, (2) to monitor a process and signal when the process goes
out of control, and (3) to determine process capability. Although many
different control charts are in use, they differ only in the type of measurement
for which the chart is used; the basic charting procedure remains the same
for all.
* Control charts for variables: The charts most commonly used for variables
data are the x̄-chart and the R-chart. The x̄-chart is used to monitor the centering of
the process, and the R-chart is used to monitor the variation in the process.
The range is used as a measure of variation simply for convenience. For large



samples and when the data is analyzed on computers, the standard deviation
is a better measure of variability.

* The first step in constructing x̄- and R-charts is to gather data. Usually,


about 25 to 30 samples are collected. Samples between size 3 and 10 are
used, with 5 being the most common. The number of samples is indicated by
k, and n denotes the sample size. For each sample, the mean and the range
are computed and plotted on the respective control charts. Next, the overall
mean and the average range are calculated. These values specify the
center lines for the x̄- and R-charts, respectively. The overall mean (denoted
by x̿) is the average of the sample means. The average range (denoted by R̄) is the
average of the ranges of the samples. The control limits (UCL and LCL) are calculated
using the following formulae:

For x̄-chart: UCLx̄ = x̿ + A2R̄   LCLx̄ = x̿ – A2R̄

For R-chart: UCLR = D4R̄   LCLR = D3R̄

where A2, D3 and D4 are constants whose values depend on the sample size and
can be found from tables.
* Example problem: A pharmaceutical company manufactures a certain
brand of capsules. The company randomly picks samples of 5
capsules from production at regular intervals, and the weights in
grams of 10 such samples are given below. Construct the x̄ and R charts
and comment on whether the process is in control.

Sample no.   X1   X2   X3   X4   X5   Mean (x̄)   Range (R)

1            42   60   65   75   70   62.4       33
2            39   30   36   45   72   44.4       42
3            55   66   45   72   78   63.2       33
4            65   44   50   33   29   44.2       36
5            75   19   24   80   76   54.8       61
6            63   48   54   72   36   54.6       36
7            37   32   48   39   57   42.6       25
8            75   65   40   45   70   59.0       35
9            33   40   25   20   50   33.6       30
10           60   63   60   81   55   63.8       26

Total                                 522.6      357
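A sketch of the computation in Python, using A2 = 0.58, D3 = 0 and D4 = 2.11 for n = 5 (from the factors table given later in these notes):

```python
# x-bar and R chart limits for the capsule-weight data above.
samples = [
    [42, 60, 65, 75, 70], [39, 30, 36, 45, 72], [55, 66, 45, 72, 78],
    [65, 44, 50, 33, 29], [75, 19, 24, 80, 76], [63, 48, 54, 72, 36],
    [37, 32, 48, 39, 57], [75, 65, 40, 45, 70], [33, 40, 25, 20, 50],
    [60, 63, 60, 81, 55],
]
A2, D3, D4 = 0.58, 0, 2.11              # factors for sample size n = 5

means = [sum(s) / len(s) for s in samples]
ranges = [max(s) - min(s) for s in samples]
grand_mean = sum(means) / len(means)    # 522.6 / 10 = 52.26
r_bar = sum(ranges) / len(ranges)       # 357 / 10  = 35.7

print(f"x-chart: CL={grand_mean:.2f} "
      f"UCL={grand_mean + A2 * r_bar:.2f} LCL={grand_mean - A2 * r_bar:.2f}")
print(f"R-chart: CL={r_bar:.2f} UCL={D4 * r_bar:.2f} LCL={D3 * r_bar:.2f}")
# All sample means (33.6 to 63.8) fall within (31.55, 72.97), and all
# ranges (25 to 61) fall below 75.33, so the process is in control.
```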



Interpreting patterns in control charts:

The following patterns indicate that the process is in control:

1. No points are outside the control limits.

2. The number of points above and below the central line is about the same.

3. The points seem to fall randomly above and below the central line.

4. Most points are near the central line, and only a few are close to the control
limits.

Other patterns encountered:

1. One point outside control limits: A single point outside the control limits is
usually produced by a special cause. Possible causes are a sudden power
fluctuation, a broken tool, measurement error, or an incomplete or omitted
operation in the process.

2. Sudden shift in the process average: An unusual number of consecutive


points falling on one side of the central line. Typically, this occurrence is the
result of an external influence that has affected the process, which would be
considered a special cause. Possible causes might be a new operator, a
careless operator, a new machine setting, a change in method, poor
maintenance, etc.

3. Cycles: Cycles are short, repeated patterns in the chart, alternating high
peaks and low valleys. These patterns are due to causes that come and go on
a regular basis. Examples are operator rotation, fatigue at the end of a shift,
seasonal effects such as temperature or humidity, or differences between day
and night shifts.

4. Trends: A trend is the result of some cause that gradually affects the quality
characteristics of the product and causes the points on a control chart to
gradually move up or down from the central line. Trends may occur due to
improvement in operator skills, improvements in maintenance, tool wear,
aging of equipment, etc.

* Control charts for attributes: Attributes usually cannot be measured, but


they can be observed and counted and are useful in many practical situations.
Attributes data are usually easy to collect, often by visual inspection. But one
drawback in using attributes data is that large samples are necessary to
obtain valid statistical results.
* One distinction we should make is between the terms defects and defectives.
A defect is a single nonconforming quality characteristic of an item. An item
may have several defects. The term defective refers to items having one or
more defects. The term nonconforming is often used instead of defective.



* A p-chart (fraction nonconforming or fraction defective chart) monitors
the proportion of nonconforming items produced in a lot. 25 to 30 samples,
each of size around 100, are chosen over a time period and analyzed for
nonconformities. Suppose that k samples, each of size n, are selected. If y is
the number of nonconforming items in a sample, the proportion nonconforming
is y/n. If pi is the fraction nonconforming in the ith sample, the average
fraction nonconforming for the group of k samples is

p = (p1 + p2 + …… + pk) / k

This statistic reflects the average performance of the process. An estimate of
the standard deviation, based on the binomial distribution, is given by

sp = √[ p(1 – p)/n ]

Hence the upper and lower control limits are given by

UCL = p + 3sp    LCL = p – 3sp

If the computed LCL is less than zero, a value of zero is used.

* np-chart: Instead of using a chart for the fraction nonconforming, if a chart


for the number of nonconforming items is used, such a control chart is called
an np-chart. To use the np-chart, the size of each sample must be constant.
Equal sample sizes are not required for p-charts.

If yi be the number of nonconforming items in the ith sample, the average


number of nonconforming items per sample (denoted by np) for k samples is

np = (y1 + y2 + - - - + yk) / k

The estimate of the standard deviation is snp = √[np(1 – p)]. Using 3σ limits
as before,

UCL = np + 3snp and LCL = np – 3snp.

* Example problem: In the mass production of computer chips, 10
samples of size 50 each were inspected and classified as ‘good’ or ‘bad’.
The data are as follows. Construct a p-chart with 3-sigma limits and
comment on the process.

Sample no.               1     2     3     4     5     6     7     8     9     10

No. of defective chips   10    9     4     6     11    8     10    9     12    11

Fraction defective       0.2   0.18  0.08  0.12  0.22  0.16  0.2   0.18  0.24  0.22
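A sketch of the p-chart computation for this data:

```python
from math import sqrt

# p-chart limits for the computer-chip data above.
defectives = [10, 9, 4, 6, 11, 8, 10, 9, 12, 11]
n = 50                                  # constant sample size

fractions = [d / n for d in defectives]
p_bar = sum(fractions) / len(fractions)         # average fraction = 0.18
s_p = sqrt(p_bar * (1 - p_bar) / n)             # binomial standard error

ucl = p_bar + 3 * s_p
lcl = max(p_bar - 3 * s_p, 0)                   # clamp LCL at zero
print(f"CL={p_bar:.3f} UCL={ucl:.3f} LCL={lcl:.3f}")
# Fractions range from 0.08 to 0.24, inside (0.017, 0.343),
# so the process is in control.
```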

* C-chart: In some situations, one may be interested not only in whether an


item is defective but also in how many defects it has. For example, in complex
assemblies such as electronics, the number of defects is just as important as
whether the product is defective. Two charts can be used in such situations.



The c-chart is used to control the total number of defects per unit when
sample size is constant. If sample sizes are variable, a u-chart is used to
control the average number of defects per unit.
* The c-chart is based on the Poisson distribution. To construct a c-chart,
estimate the average number of defects per unit, c, by taking at least 25
samples of equal size, counting the number of defects per sample, and finding
the average. The standard deviation of the Poisson distribution is the square
root of the mean, i.e. sc = √c.

Thus the center line (CL) is c, and the 3σ limits are given by

UCL = c + 3√c and LCL = c – 3√c


* u-chart: As long as the sample size is constant, the c-chart is appropriate.
In some cases, however, the sample size is not constant or the nature of the
production process does not yield discrete, measurable units. Production of
textiles, photographic film, or paper has no convenient set of items to
measure. In such cases, a standard unit of measurement is used, such as
defects per square foot or defects per square inch. The control chart for these
situations is the u-chart.
The variable u represents the average number of defects per unit of
measurement, that is u=c/n, where n is the size of the subgroup (such as
square feet). The center line u for k samples each of size ni is computed as
follows:

u = (c1+c2+ - - - -+ck) / (n1+n2+ - - - - +nk).

The standard deviation of the ith sample is estimated by su = √(u/ni)

The control limits, based on 3 standard deviations, are: UCL = u + 3√(u/ni) and
LCL = u – 3√(u/ni).

Example problem: Newly fabricated trucks were inspected for missing
rivets. The following observations were obtained. Construct a c-chart and
comment on the process.

Truck no.        1    2    3    4    5    6    7    8    9    10

Missing rivets   14   13   26   20   9    25   15   11   14   13
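A sketch of the c-chart computation for this data:

```python
from math import sqrt

# c-chart limits for the missing-rivet data above.
defects = [14, 13, 26, 20, 9, 25, 15, 11, 14, 13]

c_bar = sum(defects) / len(defects)   # 160 / 10 = 16 defects per unit
s_c = sqrt(c_bar)                     # Poisson: std. dev. = sqrt(mean)

ucl, lcl = c_bar + 3 * s_c, max(c_bar - 3 * s_c, 0)
print(f"CL={c_bar:.1f} UCL={ucl:.1f} LCL={lcl:.1f}")   # 16.0, 28.0, 4.0
# All counts (9 to 26) lie between 4 and 28: the process is in control.
```

For a u-chart the same logic applies with u = c/n and limits u ± 3√(u/ni).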

* After a process is determined to be in control, the charts should be used on


a daily basis to monitor performance, identify any special causes that might
arise, and make corrections only as necessary. Unnecessary adjustments to
a process result in nonproductive labor, reduced production, and increased
variability of output.
* Control charts are designed to be used by employees in their work areas
rather than by inspectors or quality control personnel. Under the philosophy
of statistical process control, the burden of quality rests with the employees
themselves. The use of control charts allows them to react quickly to special
causes of variation.



* Although control charts were first developed and used in a manufacturing
context, they can also be applied to service organizations. Examples: Cheque-
processing accuracy in banks; On-time delivery of meals and medicines in
hospitals; Check-out time in hotels, etc.

KEY METRICS OF NORMAL DISTRIBUTION

Areas under the normal curve

Mean μ = 50%

μ + 1σ = 50 + 34.134 = 84.134%

μ – 1σ = 100 – 84.134 = 15.866%

μ + 2σ = 50 + 47.725 = 97.725%

μ – 2σ = 100 – 97.725 = 2.275%

μ + 3σ = 50 + 49.865 = 99.865%

μ – 3σ = 100 – 99.865 = 0.135%

μ + 4σ = 50 + 49.997 = 99.997%

μ – 4σ = 100 – 99.997 = 0.003%

μ + 5σ = 50 + 49.99997 = 99.99997%

μ – 5σ = 100 – 99.99997 = 0.00003%

μ + 6σ = 50 + 49.9999999 = 99.9999999%

μ – 6σ = 100 – 99.9999999 = 0.0000001%

****************************

Areas on either side of the center line



Between +1σ and – 1σ = 68.268%

Between + 2σ and – 2σ = 95.45%

Between + 3σ and – 3σ = 99.73%

Between + 4σ and – 4σ = 99.994%

Between + 5σ and – 5σ = 99.999943%

Between + 6σ and – 6σ = 99.9999998%
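All of these tabulated areas follow from the standard normal CDF; a sketch using only the Python standard library:

```python
from math import erf, sqrt

# The standard normal CDF: Phi(z) = 0.5 * (1 + erf(z / sqrt(2))).
def phi(z: float) -> float:
    return 0.5 * (1 + erf(z / sqrt(2)))

for k in range(1, 7):
    below = phi(k) * 100               # area below mu + k*sigma
    between = (2 * phi(k) - 1) * 100   # area within mu +/- k*sigma
    print(f"mu+{k}sigma: {below:.6f}%   within +/-{k}sigma: {between:.7f}%")
```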

CONTROL CHART FACTORS

Sample size (n)   2      3      4      5      6      7      8      9      10

A2                1.88   1.02   0.73   0.58   0.48   0.42   0.37   0.34   0.31

D3                0      0      0      0      0      0.08   0.14   0.18   0.22

D4                3.27   2.57   2.28   2.11   2.00   1.92   1.86   1.82   1.78

PROCESS CAPABILITY

+ The fundamental requirement of any process is that it should


be stable first. Stability is indicated by consistent performance of the process
within the limits set. The only variations allowed are the common cause
variations. Process capability only makes sense if all special causes of
variation have been eliminated and the process is in a state of statistical
control. (When there are special causes, the process should be stopped to
investigate and eliminate them).

+ Specifications (or tolerance limits) are the permissible variation in a


process parameter, and are established by design engineers to meet a
particular function from the customer standpoint. On the other hand, control
limits exist on the basis of statistical principles where variations occur only
due to natural or common causes.

+ Process capability analyses the relationship between these two aspects of


a process, viz. design specifications and control limits, to judge whether the
process is capable of meeting the design specifications or not. In simple terms,



if the specification limits are greater than the control limits, the process is
capable of meeting the specifications, and if the control limits exceed the
specification limits, the process is not capable of meeting the specifications.
Thus capability indices are used to determine whether a process, given its
natural variation, is capable of meeting established specifications.
+ One of the properties of the normal distribution is that 99.73% of the
observations will fall within 3 standard deviations of the mean. Thus a process
that is in control can be expected to produce a large percentage of output
between μ – 3σ and μ + 3σ , where μ is the process average and σ is the
standard deviation. Therefore, the natural tolerance limits, i.e. control limits of
the process are μ ± 3σ. A 6σ spread is commonly used as a measure of process
capability.
+ Thus if the design specifications are between μ – 3σ and μ + 3σ, the process
will be capable of producing nearly 100% conforming output.

+ Process capability index: Process capability is measured by a process


capability index (CP) which is defined as the ratio of the specification width to
the natural tolerance of the process.
CP = (USL – LSL) / 6σ [where USL – LSL = upper specification – lower
specification, or tolerance and σ = standard deviation].
When CP >1, the process is capable of meeting the specifications, and when
CP <1, the process is incapable of meeting the specifications.
+ Process performance index: CP can be applied only when the process is
centered about the mid-specification. The value of CP does not depend on the
mean of the process. To include information on process centering, another
index called the process performance index (CPK) is used. CPK = Minimum of
[(USL – μ)/3σ and (μ – LSL)/3σ].

+ Relationship between CP and CPK:

i) When the process is centered, CP = CPK. Otherwise, CP ≠ CPK.

ii) CPK is always less than or equal to CP.

iii) When either CP or CPK is less than one, it indicates that the process does
not produce in conformance with specifications.

+ Using the capability index and performance index concepts, we can measure
quality. The larger the indices, the better the quality. This is accomplished by
having realistic specifications and by continually striving to improve process
capability.
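A sketch with hypothetical numbers (specification limits 40 to 60, observed process mean 52 and sigma 2) illustrating both indices:

```python
# Capability indices for a hypothetical process.
USL, LSL = 60, 40        # design specification limits
mu, sigma = 52, 2        # observed process mean and std. deviation

cp = (USL - LSL) / (6 * sigma)              # 20 / 12 = 1.67
cpk = min((USL - mu) / (3 * sigma),
          (mu - LSL) / (3 * sigma))         # min(1.33, 2.00) = 1.33
print(f"Cp={cp:.2f} Cpk={cpk:.2f}")
# Cp > 1: the spec width exceeds the natural 6-sigma spread.
# Cpk < Cp because the process mean is off-center (52 vs. 50).
```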

+ Six Sigma performance corresponds to process variation equal to half the


design tolerance, or a CP value of 2.0. However, because Six Sigma allows a
mean shift of up to 1.5σ from the target, CPK must be held to 1.5.



SIX SIGMA

+ Six Sigma is defined as “a business process that allows organizations to


drastically improve their bottom-line by designing and monitoring everyday
business activities in ways that minimize waste of resources while increasing
customer satisfaction”.

+ Motorola embarked on six sigma in the 1980s in order to give a sharper


focus to its TQM efforts. GE adopted six sigma in 1995. Kodak was also one
of the early users. Today it is adopted worldwide to improve process
performance and is increasingly popular as a way of organizing an entire
organization to become more quality-oriented and customer-focused. Six
sigma has been applied not only in manufacturing, but also in product
development, customer service, accounting, and many other business
functions.

+ In the early days, organizations were happy with a process where variations
were controlled within ±3σ limits, i.e. if the 3σ limits matched the specification
limits. This means they didn’t mind allowing 0.27% defects (area contained in
the tail of the normal curve corresponding to 3σ). 6σ (six sigma) is a rigorous
concept of applying SPC to control the defects to 3.4 parts per million (ppm).
Therefore the application of six sigma concept means controlling variations
and thereby defects closer to the level of zero defects.

+ When the specification limits coincide with 6σ limits on both sides of the
mean (in a normal distribution), one can expect total defects of 0.002 ppm (or
2 parts per billion) , compared with 2,700 ppm when we achieve the
traditional 3σ quality level. During normal operations, there could be a shift
in the mean of the process due to various reasons. If we assume a shift of the
mean by 1.5 sigma on either side, the actual effect will still be 4.5 sigma.
(Recall that CPK = Minimum of [(USL – μ)/3σ and (μ – LSL)/3σ].) This
adjustment of the process mean by 1.5 sigma provides a fairly realistic idea
of what the process capability will be over repeated cycles of operation of the
process.

+ Six sigma represents a quality level of at most 3.4 defects per million
opportunities (DPMO). The allowance of a shift in the distribution is
important, because no process can be maintained in perfect control.

Six sigma levels before and after a 1.5σ shift in the process average:

Sigma level   Without shift                        With 1.5σ shift
(σ)           % conformance   DPMO      CP         % conformance   DPMO      CPK

1             68.27           317,320   0.33       30.23           697,700   –0.167
2             95.45           45,500    0.67       69.13           308,700   0.167
3             99.73           2,700     1.00       93.32           66,810    0.5
4             99.9937         63        1.33       99.379          6,210     0.834
5             99.999943       0.57      1.67       99.9767         233       1.167
6             99.9999998      0.002     2.00       99.99966        3.4       1.5
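The table can be reproduced from the normal CDF. A sketch: with a 1.5σ mean shift, the nonconformance splits into a near tail at k – 1.5 and a far tail at k + 1.5 standard deviations.

```python
from math import erf, sqrt

def phi(z: float) -> float:
    """Standard normal CDF."""
    return 0.5 * (1 + erf(z / sqrt(2)))

for k in range(1, 7):
    # Centered process: area beyond +/- k sigma, scaled to per-million.
    dpmo_centered = 2 * (1 - phi(k)) * 1e6
    # Process shifted by 1.5 sigma: near tail plus (negligible) far tail.
    dpmo_shifted = ((1 - phi(k - 1.5)) + (1 - phi(k + 1.5))) * 1e6
    print(f"{k} sigma: centered {dpmo_centered:>11.3f}  "
          f"shifted {dpmo_shifted:>11.1f}")
```

At k = 6 the shifted figure evaluates to 3.4 DPMO, matching the last row of the table.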

+ Note that a quality level of 3.4 defects per million can be achieved in several
ways, for instance:

• with 0.5-sigma off-centering and 5-sigma quality


• with 1.0-sigma off-centering and 5.5-sigma quality
• with 1.5-sigma off-centering and 6-sigma quality.
+ The difference between a 4- and 6-sigma quality level can be surprising. If
your cellular phone system operated at a 4-sigma level, you would be without
service for more than 4 hours each month, whereas at 6-sigma, it would only
be about 9 seconds a month; a 4-sigma process would result in one
nonconforming package for every 3 truckloads while a 6-sigma process would
have only one nonconforming package in more than 5,000 truckloads. A
change from 3 to 4 sigma represents a 10-fold improvement; from 4 to 5
sigma, a 30-fold improvement; and from 5 to 6 sigma, a 70-fold improvement
– difficult challenges for any organization.
+ However, not all processes should operate at a six sigma level. The
appropriate level should depend on the strategic importance of the process
and the cost of improvement relative to the benefit. It is generally easy to move
from a 2 or 3-sigma level to a 4-sigma level, but moving beyond that requires
much more effort and sophisticated statistical tools.

+ Since stating its goal of six sigma, Motorola has made great strides in
meeting this goal, achieving 6-sigma capability in many processes and 4- or
5-sigma levels in most others. Even in those departments that have reached
the goal, Motorola employees continue their improvement efforts in order to
reach the ultimate goal of zero defects. The company saved $2.2 billion in 4
years as a result of these efforts. In 1988, Motorola received the Malcolm
Baldrige National Quality Award (MBNQA).

+ In 1990, Motorola joined with IBM, Texas Instruments and Kodak to


establish Six Sigma Research Institute (SSRI). It developed the concept of
Black Belt – a person trained to facilitate the carrying out of six sigma projects.

+ In addition to a focus on defects, six sigma seeks to improve all aspects of


operations. Thus, other key metrics include cycle time, process variation, and
yield. Selecting the appropriate metric depends on the scope and objectives of
the project, making six sigma a universal approach for improvement in all
aspects of a business.
+ One of the more difficult challenges in six sigma is the selection of the most
appropriate problems to attack. High costs, excessive defects, excessive cycle



times, delays, a rash of customer complaints, low customer satisfaction, or
lost customers often characterize quality- and performance-related problems
that trigger opportunities for six sigma projects.
+ At the outset of a six sigma initiative, it is beneficial to pick the “low-hanging
fruit” – projects that are easy to accomplish in order to show early successes.
This visible success helps to build momentum and support for future projects.

+ Applying six sigma to services requires examination of four key measures of


performance: Accuracy, Cycle time, Cost, and Customer satisfaction.

* Six sigma implementation models: There are two basic models for six
sigma implementation:

1) DMAIC model (for improving existing processes) and

2) DMADV model (for design of new products to achieve six sigma quality).

(1) DMAIC model: It is a 5-step process improvement model. The steps are:
Define, Measure, Analyze, Improve, and Control.
i) Define: Define the six sigma project to be taken up. Select the team. Identify
the customers (internal and external). Identify the critical to quality (CTQ)
issues, i.e. key performance measures. Document the existing process.
Describe the current level of performance. Benchmark best performance
standards. Calculate the cost/revenue implications of the project. Decide
what needs to be done, by whom, and when.
ii) Measure: Identify appropriate measures for assessing performance. Define
target performance based on customer requirements (through benchmarking,
if necessary). Measure current performance and identify the gaps.

iii) Analyze: Discover the causes for the gaps/shortfalls/defects. Identify key
variables which cause the defects (through a cause-and-effect diagram, if
necessary). Group the influencing factors into the following three categories:

Constants (C): these factors cannot be changed.

Noise factors (N): while efforts should be made to reduce noise, these
can’t be eliminated.
Experimental factors (X): these factors can be modified to improve the
results.

In this way, identify the parameters to be experimented with in order to


improve the process.

iv) Improve: Fix maximum permissible ranges of the key variables. Devise a
system to measure deviations of the variables. Modify the process to ensure
that variations occur within the permissible range. Implement the solution on
a pilot basis. Monitor and measure performance. Standardize the improved
method if performance is successful.



v) Control: Put in place systems and procedures to ensure that key variables
remain within the maximum permissible ranges continuously. These might
include establishing new standards and procedures, training the workforce,
and instituting controls to make sure that improvements do not die over time.

(2) DMADV model: DMADV stands for Define, Measure, Analyze, Design, and
Verify. It is employed for design of new products to achieve 6-sigma quality.

i) Define: This phase is similar to the DMAIC model. The only difference is
that ‘document the existing process’ and ‘describe the current level of
performance’ do not arise.

ii) Measure: Identify customer needs and convert them into technical
requirements through Quality Function Deployment (QFD) technique. Define
measures for each of the technical requirements and define performance
standards for the process.

iii) Analyze: Generate various design options for the concept. Evaluate them
and select the right option.

iv) Design: Detailed design stage involving finer details and identifying all the
required steps in the process. This is followed by system integration. This step
may involve the fabrication of prototypes or establishing a pilot plant.

v) Verify: Verify and validate the functionality of the product or process.


Document the findings and transfer to regular production.
* Design for six sigma (DFSS): Design plays an important role in achieving
six sigma. Design is important for controlling variations and reducing costs.
DFSS is used to design or redesign a product/service. For efficient design of
products/services, the following techniques are useful: Taguchi’s techniques,
Quality Function Deployment [QFD], and Failure Modes and Effects Analysis
[FMEA].

* Six sigma implementation:

In large organizations, six sigma is implemented in a seamless manner at 3


levels: process level, operations level, and business level. The time taken for
implementation normally is 6 to 8 weeks at the process level, 12 to 18 months
at the operations level, and a few years at the business level. Many projects
may be going on simultaneously at the different levels.

* Roles of personnel involved in six sigma (based on Karate terms):

1. Green Belt: Process owners. They should be familiar with basic statistical
tools.

2. Black Belt: Junior level with 5 years or more experience. Thorough with
basic and advanced statistical tools. One Black Belt per 100 employees. They
work on full-time basis, and are responsible for specific six sigma projects.
They undergo four, one-week training programs.



3. Master Black Belt: Senior level persons. One Master Black Belt for every
30 Black Belts. They train Black Belts and Green Belts, and work full time on
six sigma projects.
4. Champion: A senior management person who identifies improvement
projects to be taken up. There is one Champion per business group/site.

RELIABILITY CONCEPTS

* Reliability is the ability of a product to perform as expected over time. It is


one of the principal dimensions of quality. Sophisticated equipment used in
areas such as transportation (airplanes), communications (satellites), and
medicine (pacemakers) require high reliability.
* The results of product or process failure can be disruptive, inconvenient and
expensive. For the manufacturer, low reliability of products/services can lead
to uncompetitive position, customer dissatisfaction, high warranty costs, and
possible product liability costs. For the customer, unreliable products can
result in reduced safety, inconvenience and higher cost.
* While quality control involves prevention of defects or failures during
manufacture, reliability refers to freedom from defects or failures during use.
* High reliability can provide a competitive advantage for many consumer
goods, e.g. Japanese cars in the 1970s which dominated the US markets.

* Definition: Reliability is “the probability that a product, equipment, or system


performs its intended function for a stated period of time under specified
operating conditions”. [Probability implies that the value lies between 0 and
1. Time implies that the longer the life, the higher is the reliability.
Performance refers to the intended use. Operating conditions refer to the
type and amount of usage, and the working environment].
* Types of reliability failures:
(1) Device does not work at all. [E.g. car will not start].
(2) Operation of device is unstable. [E.g. erratic, jerky acceleration].
(3) Performance deteriorates. [E.g. braking becomes progressively less
effective].

* Measures of reliability:
+ Reliability is measured by the number of failures per unit time (called the
failure rate λ), or by its reciprocal, the mean number of time units per failure
[Mean time to failure (MTTF) for non-repairable items or Mean time between
failures (MTBF) for repairable items].
+ Component reliability (CR) - probability that a part will not fail in a given
time period or number of trials under ordinary conditions.
If failure rate is FR, then CR = 1 – FR



FR = Number of failures / Number tested
FRn = Number of failures / Unit-hours of operation
MTBF = Unit-hours of operation / Number of failures = 1 / FRn
Availability = Uptime / (Uptime + Downtime) = MTBF / (MTBF + Mean down time [MDT])
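A sketch with hypothetical test data (5 failures among 50 units over 10,000 unit-hours, mean down time 2 hours per repair):

```python
# Failure rate, MTBF and availability from hypothetical test data.
failures, tested, unit_hours, mdt = 5, 50, 10_000, 2

FR = failures / tested                  # fraction failing = 0.10
CR = 1 - FR                             # component reliability = 0.90
FRn = failures / unit_hours             # failures per unit-hour
MTBF = 1 / FRn                          # 2,000 hours between failures
availability = MTBF / (MTBF + mdt)      # approx. 0.9990
print(f"FR={FR:.2f} CR={CR:.2f} MTBF={MTBF:.0f}h "
      f"Availability={availability:.4f}")
```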

* Predicting system reliability:


+ Many systems are composed of individual components with known
reliabilities. The reliability data of individual components can be used to
predict the reliability of the system at the design stage.
+ Systems of components may be assembled in series, in parallel, or in some
mixed combination. Block diagrams are useful in such situations, where
blocks represent components or sub-systems.
Series system:

[Block diagram: three components in series, e.g. R1 = 0.85, R2 = 0.92, R3 = 0.97.]

It contains ‘n’ components in series. If the reliability of components C1, C2, -


- - Cn are R1, R2, - - - Rn, the reliability of the system (RS) is the product of the
individual reliabilities, i.e.
RS = R1 x R2 x - - - - x Rn
Thus, if a product has 3 components, with reliabilities of 0.997, 0.980 and
0.975, the reliability of the system is given by RS = 0.997 x 0.980 x 0.975 =
0.953.
Note that RS decreases as additional components are added in series. The
more the number of components in series, the more complex is the system,
and the greater is the chance of failure.

Parallel system:
[Block diagram: two components R1 and R2 in parallel.]

In such a system, failure of an individual component is less critical than in


series systems; the system will successfully operate as long as even one
component functions. The additional components are backup or redundant
components. Redundancy is often built into systems to improve their
reliability.
It contains ‘n’ components in parallel. If R1, R2, - - - Rn are the reliabilities of
the individual components, their probabilities of failure are respectively 1 –



R1, 1 – R2, - - - 1 – Rn. Because the system fails only if all components fail
together, the probability of system failure is
(1 – R1)(1 – R2) - - - (1 – Rn). Hence the system reliability is computed as
RS = 1 – (1 – R1)(1 – R2) - - - (1 – Rn)
If all components have identical reliabilities R, then RS = 1 – (1 – R)^n.
Suppose 5 computers are installed in parallel on a space shuttle with built-in
redundancy in case of failure. If the reliability of each computer is 0.99, the
system reliability is
RS = 1 – (1 – 0.99)^5 = 0.9999999999.

Combination system: Most systems are composed of combinations of series


and parallel systems. The reliability of such systems is computed in two or
more stages, reducing each parallel block to a single equivalent reliability and
then multiplying along the resulting series chain.

[Block diagrams omitted. One of the examples used components RA = 0.8, RB = 0.7, RC = 0.91 and RD = 0.75.]

Example 1: RS = 0.99 x 0.999 x 0.96 x 0.98 = 0.93

Example 2: RS = 1 – (1 – 0.92169)(1 – 0.9603) = 0.9969.


By appropriately decomposing complex systems into series and/or parallel
components, the system reliability can be easily computed.
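A minimal sketch of these decomposition rules (the mixed system in the last line is hypothetical):

```python
from functools import reduce

def series(reliabilities):
    """Series system: product of component reliabilities."""
    return reduce(lambda a, b: a * b, reliabilities)

def parallel(reliabilities):
    """Parallel system: 1 minus the product of failure probabilities."""
    return 1 - reduce(lambda a, b: a * b, (1 - r for r in reliabilities))

print(series([0.997, 0.980, 0.975]))   # 0.953 (series example above)
print(parallel([0.99] * 5))            # 0.9999999999 (shuttle example)
# Combination system: collapse the parallel block first, then treat
# the result as one more component in the series chain.
print(series([0.95, parallel([0.90, 0.90])]))   # hypothetical: 0.9405
```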

* Approaches to optimizing reliability:


1. Standardization: Use of components with proven track records of
reliability. It also reduces costs since standardized components are used in
many different products.
2. Redundancy: Providing backup components when failure of any one
component can cause failure of the entire system. Examples are: backup
power supply systems in hospitals and factories, UPS for computers, etc.
Redundancy is crucial to systems in which failures can be costly, but can lead
to increase in cost, weight, and size of the system.
3. Over-design: Use of sophisticated materials or manufacturing processes
to take care of extreme conditions. E.g. Use of stainless steel instead of mild
steel with paint to withstand corrosion.
4. De-rating: Use of a heavy duty or high capacity component for a lower level
application. E.g. Use of a capacitor rated at 300V for a 200V application.
5. Design simplification: Reducing the number of interacting parts in a
product.
6. Understanding the physics of failure: Understanding the physical,
chemical, and other properties of materials (e.g. corrosion, chemical
reactions, humidity effects, etc.) and taking remedial measures helps to
eliminate potential failures and make the product robust to withstand
environmental conditions.
7. Burn-in: For electronic components which have high infant mortalities
(failures at initial stages), burn-in (or component stress testing) involves



exposing them to elevated temperatures in order to force latent defects to
occur. Devices that survive the burn-in tests are likely to have long, trouble-
free operating lives.
8. Failure Modes and Effects Analysis (FMEA): This topic will be dealt with
in detail later.

Product life characteristics curve:


+ All products fail at some point in their life time. In considering the failure
rate of a product, suppose that a large group of items is tested or used until
all fail, and the time of failure is noted for each item. The failure rate curve,
called the product life characteristics curve, shows the failure rates (failures
per unit time) against time. It is also called the bathtub curve because of its
shape.

+ The curve consists of three distinct stages: Early failure (‘infant mortality’
or ‘debug’), useful life (‘normal failure’ or ‘chance’) and wear out (‘old age’)
failure. The curve shows that the failure rates are higher at the early and end
stages of a product’s life and relatively low in between the two extremes.
1. Early failure: ‘Teething troubles’. Problems/weaknesses during
manufacturing, delivery, and initial start-up all come out at this stage. Hence
failure rates are high at this stage.
2. Useful life: Product stabilizes, gives consistent performance. Failure rate is
constant and low.
3. Wear out: Towards the end of the life of the product, failure rate increases
rapidly again as parts become worn out and eventually fail.
+ Knowing the product life characteristics curve for a particular product helps
engineers predict behavior and take suitable decisions. For example, if a
manufacturer knows that the early failure period of a microprocessor is 600
hours, it can test the chip for 600 hours (or more) under actual or simulated
operating conditions before releasing the chip to the market.

TPM – Concepts, Improvements needs:

Definition of TPM: It is the systematic execution of maintenance by all


employees through small group activities.

The dual goals of TPM are Zero breakdowns and Zero defects.
T: Total = All encompassing by maintenance and production individuals
working together.
P: Productive = Production of goods and services that meet or exceed
customer’s expectations.
M: Maintenance = Keeping equipment and plant in as good as or better than
the original condition at all times.
Overall goals of Total Productive Maintenance, which is an extension of
TQM, are:
i. Maintaining and improving equipment capacity
ii. Maintaining equipment for life
iii. Using support from all areas of the operation
iv. Encouraging input from all employees



v. Using teams for continuous improvement

Seven basic steps to get an organization started toward TPM.


a) Management learns the new philosophy
b) Management promotes the new philosophy
c) Training is funded and developed for everyone in the organization
d) Areas of needed improvement are identified
e) Performance goals are formulated
f) An implementation plan is developed
g) Autonomous work groups are established

Benefits of TPM:
❖ Increased equipment productivity
❖ Improved equipment reliability
❖ Reduced equipment downtime
❖ Increased plant capacity
❖ Extended machine life
❖ Lower maintenance and production costs
❖ Enhanced job satisfaction
❖ Improved return on investment
❖ Improved safety
❖ Improved teamwork between operators and maintenance people

TEROTECHNOLOGY

Terotechnology is the maintenance of assets in an optimal manner. It is the


combination of management, financial, engineering, and other practices
applied to physical assets such as plant, machinery, equipment, buildings
and structures in pursuit of economic life cycle costs.
It is concerned with the reliability and maintainability of physical assets and
also takes into account the processes of installation, commissioning,
operation, maintenance, modification and replacement.
Decisions are influenced by feedback on design, performance and costs
information throughout the life cycle of a project.
It can be applied equally to products, as the product of one organization
often becomes the asset of another.

BUSINESS PROCESS IMPROVEMENT (BPI)


BUSINESS PROCESS REENGINEERING (BPR)

* A process is a group of activities that takes input(s), adds value to it, and
provides output(s) to an internal or external customer. A business process is
a set of logically related tasks to achieve a defined business outcome. A
business system comprises a set of business processes.
* Origins of BPR: In 1990, two Americans, Michael Hammer and James
Champy, coined the term BPR in their famous book “Reengineering the
Corporation”.



* Definition of BPR: “the fundamental rethinking and radical redesign of
business processes to improve performance dramatically in terms of
measures like cost, quality, service, and speed.”
The other terms for BPR are: ‘Reengineering’, ‘Process Reengineering’ and
‘Process Redesign’.

* When a goal of 10% improvement is set, managers or engineers can usually


meet it with some minor improvements. However, when the goal is 1,000%
improvement, employees must be creative and think “outside the box”. The
seemingly impossible is often achieved, yielding dramatic improvements and
boosting morale.
* Radical redesigning involves tossing out existing procedures and re-
inventing the process, not just incrementally improving it. The goal is to
achieve quantum leaps in performance. BPR involves basic questions about
business processes: ‘Why do we do it? Why is it done this way?’ This may
reveal obsolete, inappropriate or wrong assumptions.
* BPR is strong medicine and causes pain in the form of layoffs and large
investments in information technology. However, BPR can result in huge
payoffs.
* BPR requires a “clean slate” philosophy, i.e. starting with the way the
customer wants to deal with the company. Reengineers start from the future
and work backward, unconstrained by current approaches. But despite the
clean slate philosophy, a reengineering team must understand the current
process: what it does, how well it performs, and what factors affect it. Such
understanding can reveal areas in which new thinking will yield the biggest
payoff.
* A process selected for reengineering should be a core process, rather than
functional departments such as purchasing or marketing. A team, consisting
of members from each functional area affected by the process change, is
charged with carrying out a reengineering project.
* I.T. is a key input in BPR. Reengineering projects design processes around
information flows.
* The key requirements for success in BPR are: (a) fundamental understanding of processes, (b) creative thinking, and (c) effective use of information technology.
* Kaizen involves incremental improvements, whereas BPR involves
breakthrough improvements. Both are essential for successful
implementation of TQM.

* The seven principles of reengineering:


1. Organize around outcomes, not tasks: Tasks should be combined into
a single job that creates a well-defined outcome. This results in greater speed,
productivity, and responsiveness.
2. Those who use the output of the process must perform the process:
People closest to the process should perform the work. “Work must be carried
out where it is.”
3. Merge information processing work into the real work that produces
the information: People who collect information should also be responsible
for processing it.



4. Treat geographically dispersed resources as though they are
centralized: This is achieved through centralized databases,
telecommunication networks, internet, videoconferencing, etc.
5. Link parallel activities instead of integrating their results: The
concept of only integrating the outcome of parallel activities is the primary
cause of rework, delays and high costs. Such parallel activities should be
linked continuously and coordinated during the process.
6. Put the decision point where the work is performed: Decision-making
should be made part of the work performed. This is made possible through
knowledgeable workforce and decision-aiding technology.
7. Capture information once – at the source: Collect information on-line
only once at the source. This avoids wrong data entries and costly re-entries.

* Steps in BPR implementation:


1. Develop business vision and process objectives.
2. Study the existing procedures.
3. Identify the process for reengineering.
4. Identify customer requirements.
5. Understand the current process.
6. Identify gaps between current process and customer requirements.
7. Evaluate enablers (organizational issues, information technology).
8. Develop improved process.
9. Develop action plan for implementation.
10. Implement the reengineered process.
11. Follow up.
* Some BPR tools: Flow charts, Benchmarking, Simulation, Reengineering
software, etc.

* Success factors in BPR: (i) Critical/core processes, (ii) Strong leadership,


(iii) Cross-functional teams, (iv) Information technology, (v) ‘Clean slate’
philosophy, and (vi) Process analysis.
* Benefits of BPR: 1. Better financial performance, 2. Enhanced customer
satisfaction, 3. Cost reduction, 4. Better product/service quality, 5. Increase
in productivity, 6. Improved flexibility / responsiveness, 7. Reduced process
times, 8. Improved employee participation, 9. Increased competitiveness, 10.
Improved delivery performance.

* Some examples of successful reengineering:


1. Motorola’s Six Sigma thrust was driven by a goal of improving product and
services quality ten times within two years, and at least 100-fold within four
years.
2. IBM Credit Corporation, the financial arm of IBM, cut the process of
financing IBM computers from 7 days to 4 hours by rethinking the process.
It replaced a team of specialists by a single person aided by a user-friendly
computer system.
3. The accounts department of Ford employed 500 people before
reengineering. In a joint venture with Mazda of Japan, it found that Mazda
had only 5 people in the accounts payable department. By copying Mazda,
Ford managed to reduce its staff in accounts payable department by 75%.



4. Bell Atlantic reengineered its telephone business. After 5 years, it cut the
time to connect new customers from 16 days to just a few hours. The company
had to lay off 20,000 employees, but it became more competitive in the
process.
5. In rethinking its purpose as a customer-driven, retail service company,
Taco Bell eliminated the kitchen from its restaurants. Meat and beans are
cooked at central locations and reheated at the restaurants. Other food items
such as tomatoes, onions, and olives are prepared off-site. This innovation
saved about 11 million hours of work and $7 million per year over the entire
chain.

* Limitations of BPR:
A recent survey estimates the percentage of BPR failures to be as high as 70%.
Some companies have made extensive BPR efforts only to achieve marginal or
even negligible benefits. Others have succeeded only in destroying the morale
and momentum built up over the lifetime of an organization. These failures
indicate that reengineering involves a great deal of risk. Some major
limitations of BPR are:
(i) BPR is strong medicine, often resulting in massive layoffs,
(ii) It could cause disruptions in existing jobs, management systems, and organizational structures,
(iii) It often involves large investments, especially in I.T.,
(iv) BPR cannot succeed in organizational cultures which are resistant to change, and
(v) BPR is not simple or easily done, nor is it appropriate for all processes and for all organizations.

****************************ALL THE BEST *************************************

