
DRIVE- FALL 2013

PROGRAM/SEMESTER- MBADS – (SEM 3/SEM 5) / MBAN2 / MBAFLEX – (SEM 3)

PGDTQMN – (SEM 1)

SUBJECT CODE & NAME- QM0012- STATISTICAL PROCESS CONTROL AND PROCESS CAPABILITY

Q1. Explain the concept of Pareto Chart and Scatter Diagram. (Explanation of the concept of Pareto Chart,
Explanation of the concept of Scatter Diagram) 5, 5

Answer: Pareto Chart


Vilfredo Pareto (1848-1923) discovered that:

 80% of the wealth in Italy was held by 20% of the population
 20% of customers accounted for 80% of sales
 20% of parts accounted for 80% of cost, etc.

These observations were confirmed by Juran (1960) and resulted in what is known as the Pareto Principle. The Pareto
Principle states that: "Not all of the causes of a particular phenomenon occur with the same frequency or with the same
impact". Pareto analysis gives recognition to the fact that, invariably, a small number of problem types account for a
large percentage of the total number of problems that occur. Thus it is also called the "80/20" rule. Such characteristics can be highlighted using Pareto charts.
Pareto charts show the most frequently occurring factors. The lengths of the bars represent frequency and are arranged with the longest bars on the left and the shortest on the right. In this way the chart visually depicts which situations are more significant. Analysis of Pareto charts helps to make the best use of limited resources by targeting the most important problems to tackle. A Pareto chart is also called a Pareto diagram or Pareto analysis.

A Pareto chart is used:

 When analyzing data about the frequency of problems or causes in a process
 When there are many problems or causes and you want to focus on the most significant
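The ranking and cumulative-percentage logic behind a Pareto chart can be sketched in a few lines of Python; the defect categories and counts below are invented purely for illustration:

```python
# Illustrative defect counts (invented for this sketch, not from the text).
defects = {"scratches": 45, "dents": 25, "misalignment": 15,
           "discoloration": 10, "other": 5}

# Sort categories by frequency, largest first (longest bar on the left).
ordered = sorted(defects.items(), key=lambda kv: kv[1], reverse=True)
total = sum(defects.values())

# Running cumulative percentage highlights the "vital few" causes.
cumulative = 0
for category, count in ordered:
    cumulative += count
    print(f"{category:15s} {count:3d} {100 * cumulative / total:6.1f}%")
```

With these figures, the first two categories alone account for 70% of all defects, which is exactly the kind of concentration the Pareto Principle predicts.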

Scatter Diagram
A scatter diagram is a tool for analyzing relationships between two variables. One variable is plotted on the horizontal
axis and the other is plotted on the vertical axis. The pattern of their intersecting points can graphically show
relationship patterns. Most often a scatter diagram is used to prove or disprove cause-and-effect relationships. While the diagram shows relationships, it does not by itself prove that one variable causes the other. In addition to suggesting possible cause-and-effect relationships, a scatter diagram can show that two variables result from a common cause that is unknown, or that one variable can be used as a surrogate for the other.
A scatter plot can suggest various kinds of correlation between variables with a certain degree of confidence. Correlations may be positive (rising), negative (falling), or null (uncorrelated). If the pattern of dots slopes from lower left to upper right, it suggests a positive correlation between the variables being studied. If the pattern of dots slopes from upper left to lower right, it suggests a negative correlation.
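The rising and falling patterns described above correspond to the sign of the correlation coefficient. A minimal Python sketch of Pearson's r, using invented illustrative data:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation: positive for rising patterns, negative for falling."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Invented data where y rises with x: the dots slope from lower left to
# upper right, so r comes out close to +1.
x = [1, 2, 3, 4, 5]
y = [2.1, 3.9, 6.2, 8.0, 9.8]
print(round(pearson_r(x, y), 3))
```

Reversing the y values makes the pattern slope downward and drives r close to -1, matching the negative-correlation case in the text.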

Q2. a. Explain Poisson distribution. Give any two examples of Poisson distribution.

Answer: Poisson Distribution

A Poisson process is obtained when a binomial experiment is conducted a large number of times, so the number of trials is large. It is also a discrete probability distribution. If the probability of success 'p' is small and the number of trials 'n' is large, the binomial probabilities are hard to calculate. In such cases, when 'n' is large and 'p' is small, the binomial distribution is approximated by the Poisson distribution.
The probability distribution of a Poisson random variable X is given by:

P(X = x) = (e^-m m^x) / x!,  x = 0, 1, 2, ...

where m = np is the mean, 'p' is the probability of success and 'n' is the number of trials.
It is a unimodal distribution. It is also known as the distribution of 'rare events'. It is the limiting form of the binomial distribution as 'n' tends to infinity.

Assumptions for applying the Poisson distribution

Poisson distribution can be applied under the following assumptions:

 The outcome of the trial/experiment must be of a dichotomous nature
 The probability of success must remain the same for all trials
 The trials should be conducted under identical conditions
 The trials should be statistically independent
 The probability of success should be very small and 'n' should be large such that 'np' is a constant m [Generally, p < 0.1 and n > 10]

Real life examples of Poisson variate

Some of the real life examples of Poisson variate are:

 Number of accidents at any traffic circle
 Number of incoming telephone calls at an exchange per minute
 Number of radioactive particles emitted by substances
 Number of defects in a product
 Number of micro-organisms developed during a period
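The Poisson probability function can be sketched directly in Python; the mean rate below is an invented illustration in the spirit of the telephone-calls example:

```python
import math

def poisson_pmf(x, m):
    """P(X = x) = e^-m * m^x / x!, where m = np is the mean number of events."""
    return math.exp(-m) * m ** x / math.factorial(x)

# Invented illustration: calls arrive at an exchange at a mean rate of
# m = 3 per minute; probability of exactly 2 calls in a given minute.
print(round(poisson_pmf(2, 3), 4))
```

Because it is a probability distribution, the probabilities over all x sum to 1, which is a quick sanity check on the formula.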

b. If 2% of electric bulbs manufactured by a certain company are defective, find the probability that in a sample of 200 bulbs i) less than 2 bulbs, ii) more than 3 bulbs are defective. [e^-4 = 0.0183]

(Explanation of Poisson Distribution, Examples of Poisson Distribution, Calculation/Solution to the problem) 4, 1, 5

Answer: Solution

The probability of a defective bulb, p = 2/100 = 0.02

Given that n = 200. Since p is small and n is large, we use the Poisson distribution.
Mean, m = np = 200 × 0.02 = 4

Now, the Poisson probability function is:

P(X = x) = (e^-4 4^x) / x!,  x = 0, 1, 2, ...

i) Probability that less than 2 bulbs are defective
= P(X < 2)
= P(X = 0) + P(X = 1)
= e^-4 + e^-4 (4)
= e^-4 (1 + 4) = 0.0183 × 5
= 0.0915

ii) Probability of getting more than 3 defective bulbs
P(X > 3) = 1 − P(X ≤ 3)
= 1 − {P(X = 0) + P(X = 1) + P(X = 2) + P(X = 3)}
= 1 − {0.0183 × (1 + 4 + 8 + 10.67)}
= 0.567
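The hand calculation can be checked in Python; using the exact value of e^-4 rather than the rounded 0.0183 gives 0.5665 for part ii), which rounds to the 0.567 quoted:

```python
import math

m = 200 * 0.02  # mean m = np = 4

def p(x):
    # Poisson probability P(X = x) with mean m.
    return math.exp(-m) * m ** x / math.factorial(x)

# i) fewer than 2 defective bulbs: P(X < 2) = P(0) + P(1)
p_less_2 = p(0) + p(1)
# ii) more than 3 defective bulbs: P(X > 3) = 1 - P(X <= 3)
p_more_3 = 1 - (p(0) + p(1) + p(2) + p(3))
print(round(p_less_2, 4), round(p_more_3, 4))
```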

Q3. Explain the procedure generally followed in testing of a hypothesis. Write a detailed note on One- tailed and
Two-tailed tests. (Procedure followed in testing hypothesis, Explanation of One tailed test, Explanation of Two
tailed test) 5, 2.5, 2.5

Answer: A "hypothesis" is a statement about a population parameter subject to verification. Let us illustrate with an example. Suppose the manager of a car company says, "if we drop the price of this car model by Rs. 8000, we will sell 50,000 cars this year"; such a statement is called a "hypothesis". In simple words, we make a claim, and such a claim is called a "hypothesis".
The terms "testing a hypothesis" and "hypothesis testing" are used interchangeably. Hypothesis testing begins with an assumption or statement, called a "hypothesis", that we make about a population parameter. In that statement, we assume a certain value for a population mean. We then develop a procedure to test the validity of such a statement (hypothesis).

Procedure followed in testing hypothesis

Null and alternate hypothesis


The conventional approach to hypothesis testing is not to construct a single hypothesis about the population parameter,
but rather to set up two different hypotheses. These hypotheses must be so constructed that if one hypothesis is
accepted, the other is rejected and vice versa.

Interpreting the level of significance


The purpose of hypothesis testing is not to question the computed value of the sample statistic but to make a judgment
about the difference between that sample statistic and a hypothesized population parameter. The next step after stating
the null and alternative hypotheses is to decide what criterion to be used for deciding whether to accept or reject the
null hypothesis. If we assume the hypothesis is correct, then the significance level will indicate the percentage of
sample means that is outside certain limits (In estimation, the confidence level indicates the percentage of sample
means that falls within the defined confidence limits).

Hypotheses are accepted and not proved


Even if our sample statistic does fall in the non-shaded region (the region shown in figure 1 that makes up 95 percent
of the area under the curve), this does not prove that our null hypothesis (H0) is true; it simply does not provide
statistical evidence to reject it. Why? It is because the only way in which the hypothesis can be accepted with certainty
is for us to know the population parameter; unfortunately, this is not possible. Therefore, whenever we say that we
accept the null hypothesis, we actually mean that there is not sufficient statistical evidence to reject it. Use of the term
accept, instead of do not reject, has become standard. It means that when sample data do not cause us to reject a null
hypothesis, we behave as if that hypothesis is true.

Fig 1.

Two – Tailed Tests and One – Tailed Tests


Two – tailed tests
A two-tailed test of a hypothesis will reject the null hypothesis if the sample mean is significantly higher than
or lower than the hypothesized population mean. Thus, in a two-tailed test, there are two rejection regions.

A two tailed test representation is as shown in the figure 1.

A two-tailed test is appropriate when:

the null hypothesis is H0: μ = μ0 (where μ0 is some specified value)
the alternative hypothesis is H1: μ ≠ μ0

One – tailed tests


There are situations in which a two-tailed test is not appropriate, and we must use a one-tailed test.

Fig 1.1

In general, a left-tailed (lower-tailed) test is used if the hypotheses are H0: μ ≥ μ0 and H1: μ < μ0. In such a situation, it is sample evidence with the sample mean significantly below the hypothesised population mean that leads us to reject the null hypothesis in favour of the alternative hypothesis. Stated differently, the rejection region is in the lower tail (left tail) of the distribution of the sample mean, and that is why we call this a lower-tailed test.

A left-tailed test is one of two kinds of one-tailed tests. As you have probably guessed by now, the other kind of one-tailed test is a right-tailed test (or an upper-tailed test). An upper-tailed test is used when the hypotheses are H0: μ ≤ μ0 and H1: μ > μ0. Only values of the sample mean that are significantly above the hypothesised population mean will cause us to reject the null hypothesis in favour of the alternative hypothesis. This is called an upper-tailed test, as shown in figure 1.2, because the rejection region is in the upper tail of the distribution of the sample mean.

Fig 1.2

This is to remind you again that, in each example of hypothesis testing, when we accept a null hypothesis
on the basis of sample information, we are really saying that there is no statistical evidence to reject it. We
are not saying that the null hypothesis is true. The only way to prove a null hypothesis is to know the
population parameter, and that is not possible with sampling. Thus, we accept the null hypothesis and
behave as if it is true simply because we can find no evidence to reject it.
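The two-tailed and one-tailed decision rules above can be sketched with a simple z statistic; the sample figures below are invented, and 1.96 and 1.645 are the standard 5% critical values:

```python
import math

# Invented sample figures for illustration.
mu0 = 50.0      # hypothesised population mean
x_bar = 51.2    # observed sample mean
sigma = 4.0     # known population standard deviation
n = 64          # sample size

z = (x_bar - mu0) / (sigma / math.sqrt(n))  # z = 2.4 here

# Two-tailed test at alpha = 0.05: two rejection regions, reject if |z| > 1.96.
reject_two_tailed = abs(z) > 1.96
# Upper-tailed test at alpha = 0.05: one rejection region, reject only if z > 1.645.
reject_upper_tailed = z > 1.645

print(round(z, 2), reject_two_tailed, reject_upper_tailed)
```

Note that "reject = False" would mean only that the sample gives no evidence against H0, not that H0 has been proved, exactly as the text cautions.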

Q4. What is analysis of variance? State all the assumptions involved in analysis of variance technique. Explain
the structure for One way analysis of variance or one way classification. (Meaning of analysis of variance,
Assumptions, Structure of One way analysis of variance) 2, 3, 5

Answer: Analysis of Variance (ANOVA)


The analysis of variance is one of the most powerful statistical techniques. It is a statistical test for heterogeneity of means by analysis of group variances. The analysis of variance technique, developed by R.A. Fisher in the 1920s, is capable of fruitful application to a diversity of practical problems. Basically, it consists of classifying and cross-classifying statistical results and testing whether the means of a specified classification differ significantly. In this way it is determined whether the given classification is important in affecting the results. For example, the output of a given process might be cross-classified by machines and operators (each operator having worked on each machine). From this cross-classification, it could be determined whether the mean qualities of the outputs of the various machines differed significantly. It could also independently be determined whether the mean qualities of the outputs of the various operators differed significantly. Such a study would determine, for example, whether uniformity in quality of outputs could be increased by standardizing the procedures of the operators (say, through special training), and whether it could be increased by standardizing the machines (say, through resetting). Analysis of variance, thus, enables us to analyze the total variation of the data into components which may be attributed to various "sources" or "causes" of variation.

Assumptions for study of ANOVA


The underlying assumptions for the study of ANOVA are:

i) Each of the samples is a simple random sample
ii) The populations from which the samples are selected are normally distributed
iii) Each of the samples is independent of the other samples
iv) Each of the populations has the same variance
v) The effects of the various components are additive

ANOVA table in one way analysis of variance


In one-way classification the data are classified according to only one criterion.
The ANOVA table presents the various results obtained while carrying out ANOVA. A specimen one-way ANOVA table has the following structure:

Source of variation   Sum of squares   Degrees of freedom   Mean square         F ratio
Between samples       SSC              k − 1                MSC = SSC/(k − 1)   F = MSC/MSE
Within samples        SSE              N − k                MSE = SSE/(N − k)
Total                 SST              N − 1

Where,

 SST = Total Sum of the Squares


 SSC = Sum of the Squares of the columns
 SSE = Sum of the squares of the Error
 MSC = Variance between samples
 MSE = Variance within the samples

The chi-square distribution depends on degrees of freedom; it has only a single degrees-of-freedom parameter. But the F-distribution has a pair of degrees of freedom: one is the number of degrees of freedom in the numerator of the F ratio, and the other is the degrees of freedom in the denominator. These degrees of freedom determine the shape of the F-distribution; hence, they are the parameters of the F-distribution. Just like the chi-square distribution, the F-distribution is not a single distribution but a family of distributions, one for each pair of degrees of freedom. The degrees of freedom are calculated as:
Degrees of freedom in the numerator = (k − 1), where 'k' is the number of samples taken
Degrees of freedom in the denominator = (N − k), where 'N' is the total number of observations
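The quantities named above (SSC, SSE, MSC, MSE and the F ratio with its pair of degrees of freedom) can be computed from scratch for a one-way classification; the three sample groups below are invented for illustration:

```python
# Three invented sample groups (e.g. outputs of three machines).
samples = [
    [23, 25, 21, 27],
    [30, 32, 29, 31],
    [22, 24, 26, 20],
]

N = sum(len(s) for s in samples)                 # total observations
k = len(samples)                                 # number of samples
grand_mean = sum(sum(s) for s in samples) / N

# SSC: sum of squares between samples (columns); SSE: within samples (error).
ssc = sum(len(s) * (sum(s) / len(s) - grand_mean) ** 2 for s in samples)
sse = sum((x - sum(s) / len(s)) ** 2 for s in samples for x in s)

msc = ssc / (k - 1)   # variance between samples
mse = sse / (N - k)   # variance within samples
f_ratio = msc / mse   # F with (k - 1, N - k) degrees of freedom

print(k - 1, N - k, round(f_ratio, 2))
```

A large F ratio relative to the F critical value for (k − 1, N − k) degrees of freedom indicates that the classification (here, the machine) significantly affects the mean output.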

Q5. Explain the different types of attribute control charts. Distinguish between control charts for variables and
control charts for attributes. (Meaning of attribute control charts, Types of attribute control charts,
Differences) 2, 4, 4

Answer: The Quality of many products and services is dependent upon characteristics which cannot be measured as
variables. These are called attributes and may be counted, having been judged simply as either present or absent,
conforming or non-conforming, acceptable or defective. Such properties as the general appearance of paint surface,
accidents, the particles of contamination in a sample of polymer, clerical errors in an invoice and the number of
telephone calls are all attribute parameters. It is clearly not possible to use the methods of measurement and control
designed for variables when addressing the problem of attributes.

Attribute data fall into two different types, which lead to control charts based on different statistical distributions:

1. In the first type of attribute, each item can be wholly described as failing or not failing (e.g. ball bearings), acceptable or defective (e.g. invoices), present or not present (e.g. workers), etc.

2. In the second type of attribute, counts of events may be used to describe a product or service, e.g. number of sales calls, number of goals scored, number of defects, number of errors or number of faults.

These two types of attribute data lead to the use of two types of attribute control charts:

1. Number of non-conforming units (or defectives) chart


2. Number of non-conformities (or defects) chart
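The two chart families correspond to binomial and Poisson models respectively, so their 3-sigma control limits can be sketched as follows; the sample sizes and average counts are invented illustrations:

```python
import math

# np chart (number of non-conforming units, binomial model):
# n items inspected per sample, p_bar the average fraction defective (invented).
n, p_bar = 100, 0.04
np_bar = n * p_bar
np_ucl = np_bar + 3 * math.sqrt(np_bar * (1 - p_bar))
np_lcl = max(0.0, np_bar - 3 * math.sqrt(np_bar * (1 - p_bar)))

# c chart (number of non-conformities, Poisson model):
# c_bar is the average number of defects per inspection unit (invented).
c_bar = 9.0
c_ucl = c_bar + 3 * math.sqrt(c_bar)
c_lcl = max(0.0, c_bar - 3 * math.sqrt(c_bar))

print(round(np_ucl, 2), np_lcl, c_ucl, c_lcl)
```

The lower limits are clamped at zero because a count of defectives or defects can never be negative.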
Comparison between Attribute Charts and Variable Charts
Having understood the concept of attribute data, let us now see some of the major differences between control charts for variables and control charts for attributes.

The following table shows the differences:

Control charts for variables                               Control charts for attributes
Based on measured (continuous) data                        Based on counted data
Assume the normal distribution                             Based on the binomial or Poisson distribution
Typical charts: mean (X-bar) and range (R) charts          Typical charts: np, p, c and u charts
Small samples (typically 4 or 5) are sufficient            Larger samples are usually needed
More information per unit inspected, costlier to collect   Less information per unit, cheaper to collect

Q6. Explain the methodology for Statistical Process Control implementation (SPC). What are the benefits
derived from SPC? (Methodology of SPC, Benefits) 6, 4

Answer:
Implementation of Statistical Process Control
Successful implementation of SPC depends on the approach to the work being structured. This applies to all
organizations, whatever their size, technology or product-service range. Unsuccessful SPC implementation programs
usually show weakness within either the structure of the project or commitment to it. Any procedure adopted requires
commitment from senior management to the objectives of the work and an in-house coordinator to be made available.

Thus, two major requirements necessary for the successful implementation of SPC are:
1. Real commitment and understanding from senior management
2. Dedicated and well informed quality related managers

Proposed methodology for SPC implementation

The following proposed methodology helps improve product consistency and implementation of SPC.

 Review Quality Management Systems
 Review the requirements and design specifications
 Emphasize the need for process understanding and control
 Plan for education and training
 Tackle one process or problem at a time
 Measure the capability of the process

Review Quality Management Systems
Even if a company does not have any quality problems, it will always benefit from a review of its quality management systems. The first formal step should be a written outline of the objectives, program of work, timing and reporting mechanism.

Review the requirements and design specifications

It is not possible to manufacture a product or carry out the operations to provide a service without a specification. The specification should describe in detail what has to be done, how it has to be done, etc.

Emphasize the need for process understanding and control


For a variety of reasons the control of quality is still, in some organizations, perceived as being closely related to inspection. It is vital that the organization recognizes that the way to control quality is to understand and control the various processes involved.

Plan for education and training


This is always required, whether it is to launch a new management system or to maintain or improve an existing one. Too often, organizations see training as useful and profitable only when it is limited to their technical processes or those of their suppliers and customers.

Tackle one process or problem at a time


In many organizations, people try to solve several problems at the same time. This does not help in any way and only leads to confusion and more problems. Thus it is important to choose one process or problem and work on it until satisfactory progress has been achieved before passing on to a second.

Measure the capability of the process


Process capability must be assessed and not assumed. The capability of processes can be measured. This is true both
when the results are assessed as attributes and when measured as variables.
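Measuring capability for a variable is commonly done with the Cp and Cpk indices; a minimal sketch with invented specification limits and process statistics:

```python
# Invented specification limits and estimated process statistics.
usl, lsl = 10.6, 9.4      # upper / lower specification limits
mean, sigma = 10.1, 0.1   # estimated process mean and standard deviation

cp = (usl - lsl) / (6 * sigma)                   # potential capability
cpk = min(usl - mean, mean - lsl) / (3 * sigma)  # actual capability, penalises off-centre processes

print(round(cp, 2), round(cpk, 2))
```

Here Cp is 2.0 but Cpk is lower because the process mean sits off-centre, which illustrates why capability must be assessed from data and not assumed.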

Benefits derived from SPC


The benefits to be derived from the application of statistical methods of process control are many and varied. Some of
them are as follows:

 Reduction in costs of internal failure – scrap, rework, etc.
 Reduction in wasted time
 Improved efficiency
 Improved process consistency
 Direct reduction in external failure costs – warranty claims, customer complaints and the intractable "loss of goodwill"
 Increased productivity
 Overall reduction in checking, inspection and testing efforts
 Improved or continuing reputation for consistent quality products and services
 A steady or expanding, healthy share of the market, etc.
