
Attribute Control chart

Dr C K Biswas
Prof.
Dept of ME

1 PCAS
Content

Variables vs attributes
p chart
np chart
c chart
u chart
Summary

2 PCAS
Types of control charts

1. Variable control charts


1. X̄ chart (monitoring mean)
2. R chart (monitoring variability)
3. σ chart (monitoring variability)
2. Attribute control chart
1. p chart (fraction defective chart)
2. np chart (number of defective chart)
3. c chart (defect chart)
4. u chart (defect chart with variable sample size)

3 PCAS
Variables vs attributes
Characteristics
  Variables: measurable, continuous
  Attributes: countable (discrete units or occurrences); may derive from counting good/bad

Data
  Variables: length, volume, time
  Attributes: no. of defects, no. of defectives, no. of scrap items

Examples
  Variables: width of door gap, lug nut torque, fan belt tension
  Attributes: audit points lost, paint chips per unit, defective lamps

Data examples
  Variables: 1.7 inches, 32.06 psi, 10.542 s
  Attributes: 10 scratches, 6 rejected parts, 25 paint runs

4 PCAS
p chart - Problem statement
An optical CD has many quality attributes
which are necessary to be up to the mark for
being used by customer. The CD
manufacturer is plagued with lot of non
conforming CDs during the manufacturing
process. It is suggested to use control charts
to statistically control the process. Data for
defective CDs from 20 samples (sample size
n = 100) are shown in the next slide.
Use appropriate control chart and recommend
control limits.

5 PCAS
Data sheet

Sample No. | Defective CDs, Di | Proportion pi = Di/n || Sample No. | Defective CDs, Di | Proportion pi = Di/n
 1 | 4 | .04 || 11 | 6 | .06
 2 | 3 | .03 || 12 | 5 | .05
 3 | 3 | .03 || 13 | 4 | .04
 4 | 5 | .05 || 14 | 5 | .05
 5 | 6 | .06 || 15 | 4 | .04
 6 | 5 | .05 || 16 | 7 | .07
 7 | 2 | .02 || 17 | 6 | .06
 8 | 3 | .03 || 18 | 8 | .08
 9 | 5 | .05 || 19 | 6 | .06
10 | 6 | .06 || 20 | 8 | .08
6 PCAS
UCL & LCL calculation
The fraction non-conforming, pi, follows a Binomial distribution with
mean = p̄ and SD = sqrt( p̄(1 − p̄)/n )

Mean fraction non-conforming, p̄ = (Σ Di) / (m n),
where m = no. of samples (usually 20 to 25)

So, p̄ = 101 / (20 × 100) = 0.051

UCL = p̄ + 3 sqrt( p̄(1 − p̄)/n ) = 0.1169
Center line = p̄ = 0.051
LCL = p̄ − 3 sqrt( p̄(1 − p̄)/n ) = 0.0
(the computed lower control limit is negative, so it is set to zero)
7 PCAS
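The calculation above can be cross-checked with a short script. This is a sketch in Python (not part of the original slides) using the defective counts from the data sheet; note that carrying the unrounded p̄ = 0.0505 gives UCL ≈ 0.1162, while the slide's 0.1169 comes from rounding p̄ to 0.051 first.

```python
import math

# Defective counts for the 20 CD samples (n = 100 each), from the data sheet
defectives = [4, 3, 3, 5, 6, 5, 2, 3, 5, 6, 6, 5, 4, 5, 4, 7, 6, 8, 6, 8]
n = 100                                   # sample size
m = len(defectives)                       # number of samples

p_bar = sum(defectives) / (m * n)         # mean fraction non-conforming
sigma = math.sqrt(p_bar * (1 - p_bar) / n)

ucl = p_bar + 3 * sigma
lcl = max(0.0, p_bar - 3 * sigma)         # negative LCL is clipped to zero

print(f"p-bar = {p_bar:.4f}, UCL = {ucl:.4f}, LCL = {lcl:.4f}")
```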
Plotting & interpreting

The data points show an increasing trend in the proportion defective beyond sample no. 15; the data also show a cyclic pattern.
The process appears to be out of control, and there is strong evidence that the data are not from an independent source.
8 PCAS
Revised control charts

Efforts must be made to find the special causes, and it is advised to recalculate the limits after deleting the affected data.
The revised control limits are recommended for future use.
9 PCAS
np control chart

The data collected from a production unit of ICs are given in the table. If a sample size of n = 42 was used for all subgroups, compute the control limits for the np chart.

Subgroup no. | Defectives, Di || Subgroup no. | Defectives, Di
 1 | 10 || 14 |  9
 2 |  9 || 15 | 11
 3 | 10 || 16 | 17
 4 | 14 || 17 | 13
 5 |  4 || 18 |  5
 6 | 11 || 19 | 10
 7 |  9 || 20 |  9
 8 |  8 || 21 | 12
 9 | 12 || 22 | 10
10 | 12 || 23 |  8
11 |  9 || 24 | 13
12 | 14 || 25 | 11
13 |  8 ||

The number of defectives, np, follows a Binomial distribution with
mean = np̄ and SD = sqrt( np̄(1 − p̄) )

Mean defectives, np̄ = (Σ Di)/m = 258/25 = 10.32
So, p̄ = 10.32/42 = 0.2457

UCL = np̄ + 3 sqrt( np̄(1 − p̄) ) = 18.69
Center line = np̄ = 10.32
LCL = np̄ − 3 sqrt( np̄(1 − p̄) ) = 1.95
(≥ 0)
10 PCAS
The process is stable
11 PCAS
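The np-chart limits above can be verified with a few lines of Python (a sketch, not part of the slides):

```python
import math

# Defective ICs in the 25 subgroups of size n = 42, from the table
d = [10, 9, 10, 14, 4, 11, 9, 8, 12, 12, 9, 14, 8,
     9, 11, 17, 13, 5, 10, 9, 12, 10, 8, 13, 11]
n = 42

np_bar = sum(d) / len(d)                  # mean number of defectives = 258/25
p_bar = np_bar / n                        # mean fraction defective = 0.2457
sigma = math.sqrt(np_bar * (1 - p_bar))   # binomial SD of the count

ucl = np_bar + 3 * sigma                  # 18.69
lcl = max(0.0, np_bar - 3 * sigma)        # 1.95
print(f"np-bar = {np_bar:.2f}, UCL = {ucl:.2f}, LCL = {lcl:.2f}")
```

All 25 counts fall between the limits, which is consistent with the conclusion that the process is stable.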
c control chart
c-chart is used to monitor nonconformities per unit.
It is also occasionally used to monitor the total number
of events occurring in a given unit of time.
Example
Monitoring the number of voids per inspection unit
in injection molding or casting processes
Monitoring the number of discrete components that must be
re-soldered per printed circuit board
Monitoring the number of product returns per day

12 PCAS
Problem statement

A production unit making silicon wafers wants to apply statistical control in the inspection unit. On inspecting 25 wafers, each containing 100 chips, the observed numbers of defects are as given in the table. Construct an appropriate control chart.

Wafer No. | No. of Defects || Wafer No. | No. of Defects
 1 | 16 || 14 | 16
 2 | 14 || 15 | 15
 3 | 28 || 16 | 13
 4 | 16 || 17 | 14
 5 | 12 || 18 | 16
 6 | 20 || 19 | 11
 7 | 10 || 20 | 20
 8 | 12 || 21 | 11
 9 | 10 || 22 | 19
10 | 17 || 23 | 16
11 | 19 || 24 | 31
12 | 17 || 25 | 13
13 | 14 ||
sum | 205 || sum | 195

The distribution of defects is Poisson with mean
c̄ = (total number of defects)/(total number of samples) = 400/25 = 16

UCL = c̄ + 3 sqrt(c̄) = 16 + 3 sqrt(16) = 28
CL = c̄ = 16
LCL = c̄ − 3 sqrt(c̄) = 16 − 3 sqrt(16) = 4
13 PCAS
The 24th point is out of control (OOC) and the 3rd point is very near the UCL.

14 PCAS
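The out-of-control check can be automated; the sketch below (Python, not part of the slides) recomputes the limits from the wafer data and flags exactly point 24. Wafer 3 (28 defects) sits exactly on the UCL, which is why it is "very near" but not beyond the limit.

```python
import math

# Defects observed on the 25 silicon wafers (100 chips each), from the table
defects = [16, 14, 28, 16, 12, 20, 10, 12, 10, 17, 19, 17, 14,
           16, 15, 13, 14, 16, 11, 20, 11, 19, 16, 31, 13]

c_bar = sum(defects) / len(defects)               # 400 / 25 = 16
ucl = c_bar + 3 * math.sqrt(c_bar)                # 16 + 12 = 28
lcl = max(0.0, c_bar - 3 * math.sqrt(c_bar))      # 16 - 12 = 4

out_of_control = [i + 1 for i, c in enumerate(defects) if c > ucl or c < lcl]
print(ucl, lcl, out_of_control)
```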
Revised control limits
It is assumed that all assignable cause(s) were detected and removed.
Dropping the 24th point, revised control limits were calculated; the 3rd point was then found to be OOC.
Revised control limits, calculated after dropping both the 24th and 3rd points, are given below:

UCL = 26.3775
CL = 14.8261
LCL = 3.27469

These limits are recommended for future use.

15 PCAS
u control chart

The u chart is a type of control chart used to monitor the average number of nonconformities per unit.
The u chart differs from the c chart in that it accounts for the possibility that the number or size of inspection units for which nonconformities are to be counted may vary.

Average no. of defects per unit for sample i: ui = ci / ni
The distribution of u is Poisson with mean
ū = (total number of defects)/(total number of units inspected)
The standard deviation varies with each sample: σi = sqrt( ū / ni )

16 PCAS
Problem statement

A paper mill uses a control chart to monitor the imperfections on finished rolls of paper. The results of 25 days of inspection are shown in the table. Comment on the stability of the process.

Center line: ū = (total number of defects)/(total number of units inspected) = 715/1780 = 0.4017

For each sample no. i: ui = ci/ni, UCLi = ū + 3 sqrt(ū/ni), LCLi = ū − 3 sqrt(ū/ni)

sample no. i | sample size ni | defects ci | ui | LCLi | UCLi
 1 |  42 | 10 | 0.238 | 0.108 | 0.695
 2 |  87 | 32 | 0.368 | 0.198 | 0.606
 3 |  53 | 10 | 0.189 | 0.141 | 0.663
 4 |  70 | 14 | 0.200 | 0.174 | 0.629
 5 | 101 | 37 | 0.366 | 0.212 | 0.591
 6 |  91 | 29 | 0.319 | 0.202 | 0.601
 7 |  36 | 11 | 0.306 | 0.085 | 0.719
 8 |  83 | 38 | 0.458 | 0.193 | 0.610
 9 |  64 | 23 | 0.359 | 0.164 | 0.639
10 |  53 | 12 | 0.226 | 0.141 | 0.663
11 |  83 | 46 | 0.554 | 0.193 | 0.610
12 |  93 | 54 | 0.581 | 0.205 | 0.599
13 |  78 | 21 | 0.269 | 0.186 | 0.617
14 |  41 | 15 | 0.366 | 0.105 | 0.699
15 |  84 | 45 | 0.536 | 0.194 | 0.609
16 |  78 | 35 | 0.449 | 0.186 | 0.617
17 |  91 | 51 | 0.560 | 0.202 | 0.601
18 |  69 | 25 | 0.362 | 0.173 | 0.631
19 |  36 | 13 | 0.361 | 0.085 | 0.719
20 |  48 |  9 | 0.188 | 0.127 | 0.676
21 |  98 | 46 | 0.469 | 0.210 | 0.594
22 |  80 | 39 | 0.488 | 0.189 | 0.614
23 |  74 | 44 | 0.595 | 0.181 | 0.623
24 |  60 | 27 | 0.450 | 0.156 | 0.647
25 |  87 | 29 | 0.333 | 0.198 | 0.606
sum | 1780 | 715 | | |
17 PCAS
The process seems to be statistically stable.

18 PCAS
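Because the u-chart limits vary with each sample size, checking stability by hand is tedious; the sketch below (Python, not part of the slides) recomputes every per-sample limit from the table and confirms that no point falls outside its limits.

```python
import math

# (sample size n_i, defects c_i) for the 25 days, from the table
data = [(42, 10), (87, 32), (53, 10), (70, 14), (101, 37), (91, 29), (36, 11),
        (83, 38), (64, 23), (53, 12), (83, 46), (93, 54), (78, 21), (41, 15),
        (84, 45), (78, 35), (91, 51), (69, 25), (36, 13), (48, 9), (98, 46),
        (80, 39), (74, 44), (60, 27), (87, 29)]

u_bar = sum(c for _, c in data) / sum(n for n, _ in data)   # 715 / 1780 = 0.4017

stable = True
for i, (n, c) in enumerate(data, start=1):
    u = c / n
    half_width = 3 * math.sqrt(u_bar / n)      # limits vary with n_i
    ucl, lcl = u_bar + half_width, max(0.0, u_bar - half_width)
    if not (lcl <= u <= ucl):
        stable = False
        print(f"sample {i} out of control: u = {u:.3f}")

print(f"u-bar = {u_bar:.4f}, stable = {stable}")
```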
Choose a control chart

Data type: Continuous or Attribute.

Continuous data:
  Data collected in subgroups? No: I-MR charts.
  Yes: subgroup size n ≤ 8: X̄-R charts; n > 8: X̄-S charts.
Attribute data:
  What are you counting? Defectives: p charts. Defects/unit: u charts.
19 PCAS
X̄ chart - Collecting the data
Collect data in rational subgroups.

A rational subgroup is a small sample of similar items produced over a short period of time that are representative of the output from the process. The items in each subgroup should be collected under the same inputs and conditions, such as personnel, equipment, suppliers, or environment. If any input or condition changes and causes the subgroups to vary more than normal, the estimated control limits may be too wide. Check for additional variability in the subgroups when the control limits are estimated from the data.

Collect subgroups at appropriate time intervals.

Collect data at equally spaced intervals, such as once every hour, once every shift, or once a day. Select a time interval that is short enough to ensure that you will capture changes to the process in a timely manner.

20 • PCAS
X̄ chart - Collecting the data (contd.)
Collect enough data to obtain precise control limits.

When you do not have known values for the control limits, they must
be estimated from the data. To ensure that the control limits are
precise, you need an appropriate amount of data. The appropriate
amount of data depends on the subgroup size.
If the subgroup size n ≤ 2, collect at least 100 observations.
If the subgroup size n = 3, collect at least 80 observations.
If the subgroup size n = 4 or 5, collect at least 70 observations.
If the subgroup size n ≥ 6, collect at least 60 observations.

In addition, sampling the process at more points in time increases the


chances of having a representative estimate of the process variation. If
you have less than the recommended number of data points, you can
still use the control chart, but you should consider the results
preliminary. If you are using the chart to monitor the process or to
assess process control on an ongoing basis, it is recommended that
you re-estimate the control limits after you have collected the
recommended number of data points. Check that the number of data points is large enough.
21 PCAS
X̄ chart - Collecting the data (contd.)
Data do not need to be normal.

There should be no substantial difference in the false alarm rate between normal and non-normal data. The false alarm rate represents the chance that a point is identified as out of control when in fact it is not.

Data within each subgroup should not be correlated.

When you do not have known values for the control limits, they must be estimated from the data. If 2.0% or more of the points fail Test 1 and there are at least two points that fail Test 1 on the X̄ chart, check whether the data are correlated. If consecutive data points within each subgroup are correlated, the control limits will be too narrow and you may see a large number of false alarms.
22 • PCAS
Using the X̄ chart
Check the R chart before you use the X̄ chart.

The control limits on the X̄ chart are based on the estimated process variation. If the process variation is not stable, the control limits on the X̄ chart may not be valid. Check the R chart first to ensure that the process variation is stable.

Tests for special causes.

When you first establish the control limits based on estimates from your data, use Test 1, Test 2, and Test 7 to assess process control (with the X̄ chart). Once the control limits are established, you should use the known values of those limits.

23 PCAS
Using the X̄ chart (contd.)
Estimate new control limits only when the process changes.

Control limits are typically established early in a project, during the baseline analysis. If you change the process during the project, the baseline control limits are no longer valid, and you should establish new control limits by re-estimating them from the data. Once the control limits are established, they should remain fixed to monitor the process or to assess process control. The control limits should be re-estimated only when the process changes, not every time you collect new data.

Tests for special causes.

Test 1 (a point outside the control limits) detects a single out-of-control point. Test 2 (9 points in a row on one side of the center line) detects a possible shift in the process mean. Test 7 (too many points in a row within 1 standard deviation of the center line) detects control limits that are too wide. Overly wide control limits are often caused by stratified data, which occur when you have a systematic source of variation within each subgroup.

24 PCAS
P chart - Collecting the data
Collect data in rational subgroups (samples, lots, etc).

A subgroup is a collection of similar items that are representative of the output from the process. The items in each subgroup should be collected under the same inputs and conditions, such as personnel, equipment, suppliers, or environment.

Collect subgroups at appropriate time intervals.

Collect data at equally spaced intervals, such as once every hour, once every shift, or once a day. Select a time interval that is short enough to ensure that you will capture changes to the process in a timely manner.
25 • PCAS
P chart - Collecting the data (contd.)
Collect enough subgroups.

When you don't have known values for the control limits, they must be estimated from the data. To estimate precise control limits, you must collect enough subgroups. The number of subgroups required depends on the average number of defectives and the subgroup size. Check that the number of subgroups is large enough based on your data.

Subgroups must be large enough.

To ensure that the control limits are accurate, the average proportion of defective items times the subgroup size must be at least 0.5 for all subgroups (n p̄ ≥ 0.5). Check that the subgroup size is large enough based on your data.

Subgroup sizes can be unequal.

Subgroups can vary in size. For example, if a call center tracks 100 incoming calls each hour and counts the number of unsatisfactory wait times, the subgroup sizes are all 100. However, if the call center tracks all of the incoming calls during a randomly selected hour of the day, the number of calls is likely to vary and result in unequal subgroup sizes.
The center line of the chart is not affected by unequal subgroup sizes. However, control limits shift in response to changes in subgroup size. If the subgroup sizes do not vary by more than 20%, you can use the average subgroup size to make the control limits constant.
26 • PCAS
P chart - Collecting the data (contd.)

Count the number of defective items in each subgroup.

A defective item has one or more defects that make it unacceptable. If you can determine only whether an item is defective, use a P chart. If you can also count the number of defects on each item, you may want to use a U chart to plot the number of defects per unit.

P chart - Using the chart

(same as for the X̄ chart)
27 • PCAS
U chart - Collecting the data
Collect data in subgroups (samples, lots).

A subgroup is a collection of similar items that are representative of the output from the process. A subgroup can be either a single unit or a collection of similarly sized units. For example, you could record the number of surface imperfections for one LCD panel (a single unit) or for a collection of LCD panels that are all the same size. If the subgroup is a collection of units, they should be collected under the same inputs and conditions, such as personnel, equipment, suppliers, or environment.

Collect subgroups at appropriate time intervals.

Collect data at equally spaced intervals, such as once every hour, once every shift, or once a day. Select a time interval that is short enough to ensure that you will capture changes to the process in a timely manner.

Collect enough subgroups.

When you don't have known values for the control limits, they must be estimated from the data. To estimate precise control limits, you must collect enough subgroups. The number of subgroups required depends on the average number of defects per unit and the subgroup size. Check that the number of subgroups is large enough based on your data.

28 • PCAS
U chart - Collecting the data (contd.)
Subgroup sizes can be unequal.

A subgroup can be either a single unit or a collection of similarly sized units. In either case the subgroup size can vary and may be defined by a length of time, an area, or a number of items. For example, inspectors for a textile manufacturer sample fabric pieces from rolls of fabric and inspect them for defects:
• If inspectors always sample fabric pieces that are the same length (for example, 2 meters), each subgroup can contain the same number of fabric pieces or a different number of fabric pieces. The subgroup size can be either the number of fabric pieces sampled in each subgroup or the total number of meters of all fabric pieces sampled in each subgroup.
• If inspectors sample fabric pieces that are different lengths, each subgroup can contain the same number of fabric pieces or a different number of fabric pieces. Because the length of fabric pieces differs, the subgroup size should be defined by the total length of the fabric in each subgroup.
The center line of the chart is not affected by unequal subgroup sizes. However, control limits shift in response to changes in subgroup size. If the subgroup sizes do not vary by more than 20%, use the average subgroup size to make the control limits constant.

Subgroups must be large enough.

When you don't have known values for the control limits, they must be estimated from the data. To obtain accurate control limits, the average defect rate per unit times the subgroup size must be at least 0.5 (n ū ≥ 0.5).

Count the total number of defects for each unit in a subgroup.

Use this chart when you can count the number of defects on each unit. Instead of classifying a unit as defective when you observe a defect, continue to count the total number of defects on the unit. If you can only determine whether each unit is defective or nondefective, use a P chart.

29 • PCAS
U chart - Using the chart
Test for special causes.

Use Test 1 and Test 2 to evaluate process stability. Test 1 (a point outside the control limits) detects a single out-of-control point. Test 2 (9 points in a row on one side of the center line) detects a possible shift in the defects per unit.

Estimate new control limits only when the process changes.

Control limits are typically established early in a project, during the baseline analysis. If you change the process during the project, the baseline control limits are no longer valid, and you should establish new control limits by re-estimating them from the data. Once the control limits are established, they should remain fixed to monitor the process or to assess process control. The control limits should be re-estimated only when the process changes, not every time you collect new data.
30 • PCAS
Application of attribute control charts

                      Defectives (non-conforming) | Defects (non-conformities)
Constant sample size: np chart                    | c chart
Variable sample size: p chart                     | u chart
31 PCAS
Summary

p chart (fraction non-conforming): UCL = p̄ + 3 sqrt( p̄(1 − p̄)/n ); CL = p̄; LCL = p̄ − 3 sqrt( p̄(1 − p̄)/n )
np chart (number non-conforming): UCL = np̄ + 3 sqrt( np̄(1 − p̄) ); CL = np̄; LCL = np̄ − 3 sqrt( np̄(1 − p̄) )
c chart (non-conformities): UCL = c̄ + 3 sqrt(c̄); CL = c̄; LCL = c̄ − 3 sqrt(c̄)
u chart (non-conformities per unit): UCL = ū + 3 sqrt(ū/ni); CL = ū; LCL = ū − 3 sqrt(ū/ni)
(All LCLs are set to 0 if the computed value is negative.)
32 PCAS
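The summary can be folded into one helper; this is an illustrative sketch (the function name and keyword arguments are my own, not from the slides):

```python
import math

def attribute_limits(chart, **kw):
    """Return (LCL, CL, UCL) for the four attribute charts.

    p and np charts need p_bar and n; the c chart needs c_bar;
    the u chart needs u_bar and the sample size n of the point.
    """
    if chart == "p":
        cl = kw["p_bar"]
        s = math.sqrt(cl * (1 - cl) / kw["n"])
    elif chart == "np":
        cl = kw["n"] * kw["p_bar"]
        s = math.sqrt(cl * (1 - kw["p_bar"]))
    elif chart == "c":
        cl = kw["c_bar"]
        s = math.sqrt(cl)
    elif chart == "u":
        cl = kw["u_bar"]
        s = math.sqrt(cl / kw["n"])
    else:
        raise ValueError(f"unknown chart type: {chart}")
    return max(0.0, cl - 3 * s), cl, cl + 3 * s

print(attribute_limits("c", c_bar=16))    # the wafer example: (4.0, 16, 28.0)
```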
33 PCAS
Dr C K Biswas
Asso Prof.
Dept of ME

PCAS 1
Content
Introduction
Double Sampling Plan (DSP)
Probability of acceptance, Pa
Operating characteristics (OC) curve
Average Outgoing Quality (AOQ)
Average Outgoing Quality Limit (AOQL)
Average Sample Number (ASN)
Average Total Inspection(ATI)

PCAS 2
Double Sampling Plan (DSP)

Let d1 = no. of defectives in the 1st sample and d2 = no. of defectives in the 2nd sample. The 2nd sample is usually taken twice as large as the 1st (n2 = 2 n1).

Inspect a 1st sample of size n1.
If d1 <= c1: accept the lot.
If d1 > c2: reject the lot and carry out 100% inspection.
Otherwise (c1 < d1 <= c2): inspect a 2nd sample of size n2.
  If d1 + d2 <= c2: accept the lot; otherwise reject the lot and carry out 100% inspection.

3
Probability of acceptance, Pa
Probability of acceptance on the 1st sample,
Pa1 = P(d1 = 0) + P(d1 = 1) + P(d1 = 2) + … + P(d1 = c1)

Use the cumulative Poisson distribution table:
row x = c1, column λt = n1 p
PCAS 4
Probability of acceptance, Pa (contd.)

e.g. DSP: n1 = 50, c1 = 1, n2 = 100, c2 = 3.
For p = 0.05, read the cumulative Poisson table at column λt = n1 p = 2.5 for the 1st sample and at column λt = n2 p = 5.0 for the 2nd sample.

Possibilities | d1 | Prob | d2 | Prob | remark
i   | 0 | 0.2873 (cumulative, d1 ≤ 1) | - | -      | no 2nd sample
ii  | 1 | (included above)            | - | -      | no 2nd sample
iii | 2 | 0.2565 | 0 | 0.0404 (cumulative, d2 ≤ 1) | draw 2nd sample (d1 + d2 = 2)
iv  | 2 | -do-   | 1 | (included above)            | draw 2nd sample (d1 + d2 = 3)
v   | 3 | 0.2138 | 0 | 0.0067                      | draw 2nd sample (d1 + d2 = 3)

Use the table or calculate using Poisson's formula P(d) = e^(−λt) (λt)^d / d!
P(d1 ≤ 1) = e^(−2.5) (1 + 2.5) = 0.2873
PCAS 5
Probability of acceptance, Pa (contd.)

Probability of acceptance after the 2nd sample
= (prob of drawing the 2nd sample) × (prob of the combined no. of defectives staying within c2)

Pa2 = P(d1 = 2) {P(d2 = 0) + P(d2 = 1)} + P(d1 = 3) P(d2 = 0)
    = 0.2565 × 0.0404 + 0.2138 × 0.0067
    = 0.0118

All numerical values are from the table in the previous slide.
PCAS 6
Probability of acceptance, Pa (contd.)

Pa = Pa1 + Pa2 = 0.2873 + 0.0118 = 0.2991

Pa is a function of p.
PCAS 7
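The worked example can be reproduced directly from the Poisson formula instead of the tables; a sketch in Python (not part of the slides):

```python
import math

def poisson_pmf(d, lam):
    # P(D = d) for a Poisson count with mean lam
    return math.exp(-lam) * lam**d / math.factorial(d)

def poisson_cdf(x, lam):
    # P(D <= x), i.e. the cumulative Poisson table entry
    return sum(poisson_pmf(d, lam) for d in range(x + 1))

# DSP: n1 = 50, c1 = 1, n2 = 100, c2 = 3; incoming quality p = 0.05
n1, c1, n2, c2, p = 50, 1, 100, 3, 0.05

pa1 = poisson_cdf(c1, n1 * p)             # accept on the 1st sample: 0.2873
# Accept on the 2nd sample: c1 < d1 <= c2, then d1 + d2 <= c2
pa2 = sum(poisson_pmf(d1, n1 * p) * poisson_cdf(c2 - d1, n2 * p)
          for d1 in range(c1 + 1, c2 + 1))          # 0.0118
print(f"Pa = {pa1:.4f} + {pa2:.4f} = {pa1 + pa2:.4f}")
```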
Operating characteristics (OC) curve
e.g. DSP: n1 = 50, c1 = 1, n2 = 100, c2 = 3

PCAS 8
Average Outgoing Quality (AOQ)

Incoming quality p; no. of defectives in the lot = pN.
If the lot is accepted on the 1st sample, p(N − n1) defectives remain in it; if accepted on the 2nd sample, p(N − n1 − n2) remain. Rejected lots are 100% inspected and so contribute no defectives.

AOQ = [ Pa1 p (N − n1) + Pa2 p (N − n1 − n2) ] / N
9
Average Outgoing Quality Limit (AOQL)

The AOQL is the maximum of the AOQ curve over all incoming quality levels p.
PCAS 10
Average Total Inspection (ATI)

Incoming quality p; no. of defectives in the lot = pN.
Lots accepted on the 1st sample have n1 items inspected; lots accepted on the 2nd sample have n1 + n2 items inspected; rejected lots are 100% inspected (all N items).

ATI = n1 Pa1 + (n1 + n2) Pa2 + N (1 − Pa1 − Pa2)
11
Average Sample Number (ASN)

Let P1 = P{lot is accepted on the 1st sample} + P{lot is rejected on the 1st sample}, i.e. the probability that a decision is reached on the 1st sample.

ASN = n1 P1 + (n1 + n2)(1 − P1) = n1 + n2 (1 − P1)
12
PCAS 13
DSP vs SSP
Advantages of DSP
• Psychological advantage: people may feel more secure with the idea of giving the lot of material a 2nd chance before it is rejected.
• If a lot is very bad or very good, it can be rejected or accepted with a smaller 1st sample.
• Suppose the 1st sample in DSP is smaller than the sample for SSP:
  If the lot is accepted or rejected on the 1st sample, the cost of inspection is lower.
  It is also possible to reject a lot without completing inspection of the 2nd sample.

PCAS 14
DSP vs SSP (contd.)
Disadvantages of DSP
• DSP needs more administrative effort than SSP.
• If the incoming lot quality p is close to the AQL and consistent, SSP is more economical than DSP.

PCAS 15
Problem statement
A vendor ships components in lots of size 5000. A double sampling plan with n1 = 50, c1 = 2, n2 = 100, c2 = 6 has been decided as the acceptance inspection procedure to be adopted. If the incoming quality has fraction defective p = 0.05, calculate the following:
i. the probability of acceptance after the 1st sample is drawn.
ii. the probability of acceptance after the 2nd sample is drawn.
iii. the probability of rejection after the 1st sample is drawn.
iv. the average sample number.

PCAS 16
Solution
From the cumulative Poisson distribution table: 1st sample at column λt = n1 p = 2.5; 2nd sample at column λt = n2 p = 5.0.

Possibilities | d1 | Prob | d2 | Prob | remark
i    | 0 | 0.5438 (cumulative, d1 ≤ 2) | - | -     | no 2nd sample
ii   | 1 | (included above)            | - | -     | no 2nd sample
iii  | 2 | (included above)            | - | -     | no 2nd sample
iv   | 3 | 0.2138 | 0 | 0.2650 (cumulative, d2 ≤ 3) | draw 2nd sample (d1 + d2 = 3)
v    | 3 | -do-   | 1 | (included above)            | draw 2nd sample (4)
vi   | 3 | -do-   | 2 | (included above)            | draw 2nd sample (5)
vii  | 3 | -do-   | 3 | (included above)            | draw 2nd sample (6)
viii | 4 | 0.1336 | 0 | 0.1247 (cumulative, d2 ≤ 2) | draw 2nd sample (4)
ix   | 4 | -do-   | 1 | (included above)            | draw 2nd sample (5)
x    | 4 | -do-   | 2 | (included above)            | draw 2nd sample (6)
xi   | 5 | 0.0668 | 0 | 0.0404 (cumulative, d2 ≤ 1) | draw 2nd sample (5)
xii  | 5 | -do-   | 1 | (included above)            | draw 2nd sample (6)
xiii | 6 | 0.0278 | 0 | 0.0067                      | draw 2nd sample (6)

PCAS 17
Solution (contd.)
i. Probability of acceptance after the 1st sample, Pa1 = P(d1 ≤ 2) = 0.5438

ii. Pa2 = P(d1 = 3){P(d2 = 0) + P(d2 = 1) + P(d2 = 2) + P(d2 = 3)}
        + P(d1 = 4){P(d2 = 0) + P(d2 = 1) + P(d2 = 2)}
        + P(d1 = 5){P(d2 = 0) + P(d2 = 1)}
        + P(d1 = 6){P(d2 = 0)}
      = 0.2138 × 0.2650 + 0.1336 × 0.1247 + 0.0668 × 0.0404 + 0.0278 × 0.0067
      = 0.0762

All numerical values are from the table in the previous slide.

PCAS 18
Solution (contd.)
iii. Probability of rejection after the 1st sample,
Pr1 = P(d1 ≥ 7) = 1 − P(d1 ≤ 6) = 1 − 0.9858 = 0.0142

iv. ASN = n1 + n2 (1 − P1),
where P1 = P(d1 ≤ 2) + P(d1 ≥ 7) = 0.5438 + 0.0142 = 0.558
So, ASN = 50 + 100 (1 − 0.558) = 94.2

PCAS 19
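All four answers can be verified with the same Poisson machinery; a sketch in Python (not part of the slides):

```python
import math

def poisson_pmf(d, lam):
    return math.exp(-lam) * lam**d / math.factorial(d)

def poisson_cdf(x, lam):
    return sum(poisson_pmf(d, lam) for d in range(x + 1))

n1, c1, n2, c2, p = 50, 2, 100, 6, 0.05

pa1 = poisson_cdf(c1, n1 * p)                      # i.   accept on 1st sample
pa2 = sum(poisson_pmf(d1, n1 * p) * poisson_cdf(c2 - d1, n2 * p)
          for d1 in range(c1 + 1, c2 + 1))         # ii.  accept on 2nd sample
pr1 = 1 - poisson_cdf(c2, n1 * p)                  # iii. reject outright: d1 > c2
p_first = pa1 + pr1                                # decision reached on 1st sample
asn = n1 + n2 * (1 - p_first)                      # iv.  average sample number

print(f"Pa1 = {pa1:.4f}, Pa2 = {pa2:.4f}, Pr1 = {pr1:.4f}, ASN = {asn:.1f}")
```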
PCAS 20
Dr C K Biswas
Prof.
Dept of ME

PCAS 1
Content
Introduction
Types of sampling plans
Differences Between Control Charts and Sampling Plans
Single Sampling Plan (SSP)
Probability of acceptance, Pa
Operating characteristics (OC) curve
Average Outgoing Quality (AOQ)
Average Outgoing Quality Limit (AOQL)
Average Total Inspection(ATI)
Problem statement
PCAS 2
Inspect a random sample of n items.
If the no. of defectives in the sample <= c: accept the lot; otherwise reject the lot.

PCAS 3
Acceptance sampling
Acceptance sampling is useful when
the cost of 100% inspection is high compared to the cost of passing a defective item
the testing is destructive
100% inspection is not possible due to time constraints
the inspection error rate is high
the vendor has a good quality record and a high process capability ratio
we wish to continuously monitor the product quality of a vendor with a good reputation

PCAS 4
Advantages of sampling plan
Less expensive due to less inspection
Less material handling
Applicable to destructive testing
Fewer personnel required for inspection
Less inspection error
Rejection of whole lot often provides motivation to
vendor to improve quality

PCAS 5
Disadvantages of sampling plan
Risk of accepting “bad” lots and rejecting “good” lots
Less information is generated about the product or
about the process that manufactures the products
It requires planning and documentation whereas 100%
inspection doesn’t

PCAS 6
Lot formation
1. Lots should be homogeneous
2. Larger lots are preferred over smaller ones
3. Lots should be conformable to the materials-
handling systems used in both the vendor and
consumer facilities.

PCAS 7
Random Sampling
The selected units for inspection from the lot should
be chosen at random,
They should be representative of all the items in the
lot.
Without random sampling, bias will be introduced into the results.

PCAS 8
Types of sampling plans
I. By Attribute
1. Single sampling plan (SSP)
2. Double sampling plan (DSP)
3. Multiple sampling plan
4. Sequential sampling plan
5. Chain sampling plan (ChSP)
6. Continuous sampling plan (CSP)
II. By Variable
1. Variability unknown
– single sided specification limit, k method
– single sided specification limit, M method
– Double sided specification limit, M method
2. Variability known
– single sided specification limit, k method
– single sided specification limit, M method
– Double sided specification limit, M method

PCAS 9
Differences Between Control Charts and Sampling Plans

           Control Chart            Sampling Plan
Act on     Process                  Product or lot
Decision   Adjust or leave alone    Accept or reject
Focus      Future product           Past product

PCAS 10
Single Sampling Plan (SSP)

Incoming quality p; no. of defectives in the lot = pN. Acceptance number c.
Draw a random sample of size n from a lot of size N and inspect it.
If the no. of defectives in the sample d <= c: accept the lot (probability of acceptance Pa).
Otherwise: reject the lot (probability of rejection 1 − Pa) and carry out 100% inspection.

11
Probability of finding defectives

No. of "bad" items in the lot, D = pN; no. of "bad" items in the sample = d.
No. of "good" items in the lot = N − D; no. of "good" items in the sample = n − d.

No. of ways to choose the "bad" items in the sample = C(D, d)
No. of ways to choose the "good" items in the sample = C(N − D, n − d)
Total no. of ways to choose any n items from the lot = C(N, n)

Probability of finding d defectives in the sample (hypergeometric distribution):
P(d) = C(D, d) C(N − D, n − d) / C(N, n)
PCAS 12
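The hypergeometric probability is a one-liner with math.comb; the lot and sample sizes below are my own illustration, not from the slides.

```python
from math import comb

def hypergeom_pmf(d, N, D, n):
    """P(exactly d defectives in a sample of n from a lot of N containing D defectives)."""
    return comb(D, d) * comb(N - D, n - d) / comb(N, n)

# Illustration: lot of N = 50 with D = 5 defectives, sample of n = 10
print(round(hypergeom_pmf(0, 50, 5, 10), 4))   # chance the sample shows no defectives
```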
Hypergeometric distribution
(Plot of the probability of getting d defectives in the sample against the number of defectives in the sample.)
PCAS 13
Approximation of distributions

Hypergeometric to Binomial: n/N ≤ 0.10, p not close to 0 or 1;
  P(d) = C(n, d) p^d (1 − p)^(n−d)
Binomial to Poisson: n is large (≥ 100), p is very small (≤ 0.01), np = λ (≤ 10);
  P(d) = e^(−λ) λ^d / d!
Poisson to Normal: np = λ ≥ 10;
  P(d) ≈ (1/sqrt(2πλ)) e^(−(d−λ)²/(2λ))

The binomial distribution is to be calculated in exams because no tables are available; the Poisson distribution is easy to use in exams because tables are available.
Probability of acceptance, Pa
For a given fraction defective (incoming quality) p,

Pa = P(d = 0) + P(d = 1) + P(d = 2) + … + P(d = c)
   = Σ (d = 0 to c) C(n, d) p^d (1 − p)^(n−d)    (Binomial distribution)

Pa is a function of p.
PCAS 15
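The exact binomial Pa and its Poisson approximation can be compared numerically; a sketch in Python (not part of the slides), using n = 100, c = 2, p = 0.015 from the later worked problem:

```python
import math

def pa_binomial(n, c, p):
    # Exact type-B probability of acceptance for a single sampling plan
    return sum(math.comb(n, d) * p**d * (1 - p)**(n - d) for d in range(c + 1))

def pa_poisson(n, c, p):
    # Poisson approximation with lam = n*p (good when n is large and p is small)
    lam = n * p
    return sum(math.exp(-lam) * lam**d / math.factorial(d) for d in range(c + 1))

n, c, p = 100, 2, 0.015
print(round(pa_binomial(n, c, p), 4), round(pa_poisson(n, c, p), 4))
```

The two values agree to about 0.001, which is why the Poisson tables are adequate here.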
PCAS 16
Definitions
AQL: The Acceptable Quality Level is the poorest level of quality from a supplier's process that would be considered acceptable as a process average (associated with α, the producer's risk).
LTPD: The Lot Tolerance Percent Defective is a designated high defective level that would be unacceptable to the consumer (associated with β, the consumer's risk).
PCAS 17
Operating characteristics (OC) curve

The ideal OC curve drops from Pa = 1 to Pa = 0 at the target quality. On an actual OC curve, Pa = 1 − α at the Acceptable Quality Level (AQL), with producer's risk α = 0.05, and Pa = β = 0.1 at the Lot Tolerance Percent Defective (LTPD), with β the consumer's risk.
PCAS 18
Type A & B curves
Type-A curve: when the hypergeometric distribution is used.
Type-B curve: when the binomial or other distributions are used.

For all practical purposes, the type-B curve will be used.

PCAS 19
Average Outgoing Quality (AOQ)

Incoming quality p; no. of defectives in the lot = pN.
Draw a random sample of size n from a lot of size N and inspect it. If d <= c, the lot is accepted (probability Pa) and pN − pn defectives remain in it; otherwise the lot is rejected (probability 1 − Pa) and 100% inspected, so it contributes no defectives.

AOQ = Pa p (N − n) / N

20
Average Outgoing Quality Limit

The AOQL is the maximum of the AOQ curve (AOQL = 2.35 in the plotted example). When n increases, the AOQL decreases.
PCAS 21
Average Total Inspection (ATI)

Draw a random sample of size n from a lot of size N and inspect it; the n sampled items are always inspected.
If d <= c, the lot is accepted and no further inspection is done; otherwise (probability 1 − Pa) the remaining N − n items are 100% inspected.

ATI = n + (1 − Pa)(N − n)

22
ATI vs p

PCAS 23
Problem statement
A product is shipped in lots of size N = 5000. The receiving inspection procedure uses an SSP with sample size n = 100 and acceptance number c = 2.
i. Draw the OC curve.
ii. Find the AQL and LTPD.
iii. Draw the AOQ curve.
iv. Find the AOQL.
v. Draw the ATI curve vs incoming quality.

PCAS 24
Solution
From the cumulative Poisson distribution table, the following table is calculated for n = 100, c = 2:

np = λt | p = np/n | Pa | AOQ | ATI
0.0 | 0.000 | 1.0000                  | 0.00000 |  100.00
0.8 | 0.008 | 0.9526 (approx > 0.95)  | 0.00747 |  332.26
1.5 | 0.015 | 0.8088 (approx 0.8)     | 0.01189 | 1036.88
2.6 | 0.026 | 0.5184 (approx 0.5)     | 0.01321 | 2459.84
4.0 | 0.040 | 0.2381 (approx 0.2)     | 0.00933 | 3833.31
5.4 | 0.054 | 0.0948 (approx < 0.10)  | 0.00502 | 4535.48
PCAS 25
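The whole table can be regenerated from the Poisson model; a sketch in Python (not part of the slides). The printed values match the tabulated ones to rounding.

```python
import math

N, n, c = 5000, 100, 2

def pa(lam):
    # Cumulative Poisson P(d <= c) with lam = n*p
    return sum(math.exp(-lam) * lam**d / math.factorial(d) for d in range(c + 1))

for lam in [0.0, 0.8, 1.5, 2.6, 4.0, 5.4]:
    p = lam / n
    Pa = pa(lam)
    aoq = Pa * p * (N - n) / N            # Average Outgoing Quality
    ati = n + (1 - Pa) * (N - n)          # Average Total Inspection
    print(f"p = {p:.3f}  Pa = {Pa:.4f}  AOQ = {aoq:.5f}  ATI = {ati:.2f}")
```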
Solution (contd.)
i. OC curve: plot of Pa against p for n = 100, c = 2 (Pa falls from 1.0 toward 0 as p increases from 0 to 0.06).
ii. AQL = 0.0082 (at Pa = 1 − α = 0.95); LTPD = 0.0532 (at Pa = β = 0.10).

PCAS 26
iii. AOQ curve: plot of AOQ against p.
iv. AOQL = 0.0135

PCAS 27
v. ATI curve: plot of ATI against incoming quality p.

PCAS 28
Class test 16-3-2023
A company uses the following acceptance sampling procedure. A batch is accepted if not more than two items are found to be defective in a sample of 50 items taken randomly from the batch; otherwise, the batch is rejected. If 500 batches obtained from a process that manufactures 2% defective items are submitted to this plan,
1. how many batches are accepted? (Ans: Pa = 0.9197, i.e. about 460 batches)
2. what is the probability of getting 2 defectives in the sample? (Ans: 0.1839)
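Both answers follow from the cumulative Poisson with λ = np = 50 × 0.02 = 1.0; a sketch in Python (not part of the slides):

```python
import math

lam = 50 * 0.02                           # n*p = 1.0
pa = sum(math.exp(-lam) * lam**d / math.factorial(d) for d in range(3))   # P(d <= 2)
p2 = math.exp(-lam) * lam**2 / math.factorial(2)                          # P(d = 2)

print(round(pa, 4), round(500 * pa), round(p2, 4))   # acceptance prob, batches, P(d=2)
```

With Pa = 0.9197, about 460 of the 500 batches are expected to be accepted.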

PCAS 29
PCAS 30
MIL-STD-105E

Dr C K Biswas
Prof.
Dept of ME
1 PCAS 4/18/2023
Content

Standard Acceptance sampling plans


Introduction to MIL-STD-105E
Switching Procedures
Procedure to use MIL-STD-105E
Problem statement & solution

2 PCAS 4/18/2023
Standard Acceptance sampling plans
Objective Attribute Variable
Assure quality levels for Select plan for Select plan for
consumer/ producer specific OC specific OC
Maintain quality at a AQL plans: AQL plans:
target MIL-STD-105E MIL-STD-414
Assure out going quality AOQL plans: AOQL plans
level Dodge-Romig plans
Reduced inspection, with Chain sampling Narrow limit gaging
small sample size & good
quality history
Reduced inspection, with Skip lot sampling, Skip lot sampling,
good quality history double sampling double sampling
Assure quality no worse LTPD plans: LTPD plans:
than target Dodge-Romig plans Hypothesis test

3 PCAS 4/18/2023
Standard plans and what they are for

MIL-STD-105E: sampling plans for attribute data
MIL-STD-414: sampling plans for measurement data
MIL-STD-1235C: sampling plans for continuous production
Dodge-Romig Single Sampling AOQL: AOQL-based rectifying plan for attribute data
Dodge-Romig Single Sampling LTPD: LTPD-based rectifying plan for attribute data
MIL-STD-1916 for Attributes: Accept-on-Zero (c = 0) sampling plans for attribute data
MIL-STD-1916 for Variables: Accept-on-Zero (c = 0) sampling plans for measurement data
MIL-STD-1916 for Continuous Sampling: Accept-on-Zero (c = 0) sampling plans for continuous production
Squeglia Zero-Based Plans: AQL-based Accept-on-Zero (c = 0) sampling plans for attribute data

4 PCAS 4/18/2023
Introduction to MIL-STD-105E

• Standard Sampling procedures for inspection by


attributes were developed during World War II.
• MIL-STD-105E is the most widely used
acceptance sampling system for attributes in
the world today.
• The original version of the standard, MIL-STD-
105A, was issued in 1950. Since then, there
have been four revisions; the latest version was
issued in 1989.

MIL STD 105E is a collection of sampling schemes;


therefore, it is an acceptance sampling system.
5 PCAS 4/18/2023
Description of MIL-STD-105E

• Three types of sampling are provided: single sampling, double sampling,
  and multiple sampling.
• For each type of sampling plan, a provision is made for either normal,
  tightened, or reduced inspection.
• Normal inspection is used at the start of the inspection activity.
• Tightened inspection is instituted when the vendor's recent quality
  history has deteriorated. Acceptance requirements for lots under
  tightened inspection are more stringent than under normal inspection.
• Reduced inspection is instituted when the vendor's recent quality
  history has been exceptionally good.
• Special inspection plans are also provided.

6 PCAS
Special inspection levels

• There are also four special inspection levels: S-1, S-2, S-3 and S-4.
• The special inspection levels use very small samples, and should only be
  employed when the small sample sizes are necessary and when large
  sampling risks can or must be tolerated.

Equivalent plans: ANSI/ASQC Z1.4 and ISO 2859-1 are equivalent to
MIL-STD-105E.

7 PCAS
Switching Procedures

Switching procedures between normal, tightened, and reduced inspection:

1. Normal to tightened. When normal inspection is in effect, tightened
   inspection is instituted when two out of five consecutive lots have
   been rejected on original submission.

2. Tightened to normal. When tightened inspection is in effect, normal
   inspection is instituted when five consecutive lots or batches are
   accepted on original inspection.

8 PCAS
Switching Procedures

3. Normal to reduced. When normal inspection is in effect, reduced
   inspection is instituted provided all four of the following conditions
   are satisfied.
   a. The preceding 10 lots have been on normal inspection, and none of
      the lots has been rejected on original inspection.
   b. The total number of defectives in the samples from the preceding 10
      lots is less than or equal to the applicable limit number specified
      in the standard.
   c. Production is at a steady state; that is, no difficulty such as
      machine breakdowns, material shortages, or other problems has
      recently occurred.
   d. Reduced inspection is considered desirable by the authority
      responsible for sampling.

9 PCAS
Switching Procedures

4. Reduced to normal. When reduced inspection is in effect, normal
   inspection is instituted provided any of the following four conditions
   is met.
   a. A lot or batch is rejected.
   b. The sampling procedure terminates with neither the acceptance nor
      the rejection criterion having been met; the lot or batch is
      accepted, but normal inspection is reinstituted starting with the
      next lot.
   c. Production is irregular or delayed.
   d. Other conditions warrant that normal inspection be instituted.

10 PCAS
Switching rules for normal, tightened and reduced inspection
(flowchart summary; start in normal inspection)

• Normal → Tightened: 2 out of 5 consecutive lots rejected.
• Tightened → Normal: 5 consecutive lots accepted.
• Normal → Reduced ("and" conditions): production steady, 10 consecutive
  lots accepted, and approved by the responsible authority.
• Reduced → Normal ("or" conditions): a lot is rejected, production is
  irregular, a lot meets neither the accept nor the reject criteria, or
  other conditions warrant a return to normal inspection.
• Discontinue inspection if 10 consecutive lots remain on tightened
  inspection.

11 PCAS
Procedure to use MIL-STD-105E

Example
Suppose product comes from a vendor in lots of size N = 2000 units. The
acceptable quality level (AQL) is 0.65%. Determine the MIL-STD-105E
acceptance-sampling scheme.

It is decided that a single sampling plan will be adopted for inspection.

12 PCAS
Sample Size Code Letters (MIL-STD-105E, Table 1)

                     Special Inspection Levels   General Inspection Levels
Lot or Batch Size    S-1   S-2   S-3   S-4       I     II    III
2 to 8               A     A     A     A         A     A     B
9 to 15              A     A     A     A         A     B     C
16 to 25             A     A     B     B         B     C     D
26 to 50             A     B     B     C         C     D     E
51 to 90             B     B     C     C         C     E     F
91 to 150            B     B     C     D         D     F     G
151 to 280           B     C     D     E         E     G     H
281 to 500           B     C     D     E         F     H     J
501 to 1200          C     C     E     F         G     J     K
1201 to 3200         C     D     E     G         H     K     L
3201 to 10000        C     D     F     G         J     L     M
10001 to 35000       C     D     F     H         K     M     N
35001 to 150000      D     E     G     J         L     N     P
150001 to 500000     D     E     G     J         M     P     Q
500001 and over      D     E     H     K         N     Q     R

13 PCAS
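The lot-size-to-code-letter lookup is mechanical, so it can be sketched in code. This is an illustration, not part of the standard: only the three general inspection levels are implemented, with the bands transcribed from Table 1 above, and the function name is my own.

```python
# Sample-size code letter lookup for the general inspection levels
# (I, II, III) of MIL-STD-105E, with bands transcribed from Table 1.
# Special levels S-1..S-4 are omitted for brevity.

def code_letter(lot_size: int, level: int = 2) -> str:
    bands = [
        (8, "AAB"), (15, "ABC"), (25, "BCD"), (50, "CDE"),
        (90, "CEF"), (150, "DFG"), (280, "EGH"), (500, "FHJ"),
        (1200, "GJK"), (3200, "HKL"), (10_000, "JLM"),
        (35_000, "KMN"), (150_000, "LNP"), (500_000, "MPQ"),
    ]
    if lot_size < 2:
        raise ValueError("lot size must be at least 2")
    for upper, letters in bands:
        if lot_size <= upper:
            return letters[level - 1]
    return "NQR"[level - 1]  # 500001 and over

# Worked example: N = 2000, general inspection level II
print(code_letter(2000))  # K
```

For the worked example (N = 2000, level II), the lookup returns code letter K, matching the plan used on the following slides.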
Normal Inspection
(master table: AQL as percent nonconforming/defective, or nonconformities/
defects per 100 units)

AQL = 0.65%, Plan K

Normal inspection:
Sample n = 125 units.
Ac = 2: accept the lot if the number of defectives ≤ 2.
Re = 3: reject the entire lot if the number of defectives ≥ 3.

14 PCAS
Tightened Inspection

AQL = 0.65%, Plan K

Tightened inspection:
Sample n = 125 units.
Ac = 1: accept the lot if the number of defectives ≤ 1.
Re = 2: reject the entire lot if the number of defectives ≥ 2.

15 PCAS
Reduced Inspection

AQL = 0.65%, Plan K

Reduced inspection:
Sample n = 50 units.
Ac = 1: accept the lot if the number of defectives ≤ 1.
Re = 3: reject the entire lot if the number of defectives ≥ 3.

16 PCAS
Single sampling plan

                       Reduced      Normal       Tightened
                       Inspection   Inspection   Inspection
Sample size, n         50           125          125
Acceptance number, Ac  1            2            1
Rejection number, Re   3            3            2

What happens if the number of defectives in the sample is 2?

17 PCAS
During reduced inspection, what happens if the number of defectives in the
sample is 2?

The count falls between Ac = 1 and Re = 3, so the lot is accepted, but the
process switches back to normal inspection.

18 PCAS
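The sentencing logic for this worked scheme can be sketched as below. This is a simplified illustration: the plan parameters are the ones tabulated above, the accept-but-revert rule covers the gap between Ac and Re under reduced inspection, and the history-based switching rules (e.g. 2-out-of-5 rejections) are deliberately not modelled.

```python
# Lot sentencing for the worked single-sampling scheme (code letter K,
# AQL 0.65%), using the Ac/Re values tabulated on the slides.

PLANS = {  # inspection state -> (sample size n, Ac, Re)
    "reduced":   (50, 1, 3),
    "normal":    (125, 2, 3),
    "tightened": (125, 1, 2),
}

def sentence_lot(state: str, defectives: int):
    """Return (decision, next inspection state) for one inspected lot."""
    n, ac, rej = PLANS[state]
    if defectives <= ac:
        return "accept", state
    if defectives >= rej:
        return "reject", state
    # Between Ac and Re (possible only under reduced inspection):
    # the lot is accepted, but inspection reverts to normal.
    return "accept", "normal"

print(sentence_lot("reduced", 2))  # ('accept', 'normal')
```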
19 PCAS
Reliability

Dr C K Biswas
Prof.
Dept of ME
1 PCAS 4/18/2023
Content

• Intro
• Bathtub curve
• Reliability measurement
• System reliability
  • Series system
  • Parallel system
  • Standby redundancy models
  • M out of N models (identical items)
  • Series-parallel systems
• Maintainability & availability
• Reliability improvement
• Reliability vs quality

2 PCAS
Reliability definitions

• Reduction of things gone wrong (Johnson and Nilsson 2003).
• An attribute of a product that describes whether the product does what
  the user wants it to do, when the user wants it to do so (Condra 2001).
• The capability of a product to meet customer expectations of product
  performance over time (Stracener 1997).
• The probability that a device, product, or system will not fail for a
  given period of time under specified operating conditions (Shishko
  1995).

3 PCAS
Reliability

Definition
Product reliability is the probability that the product will perform its
intended function under specified operating conditions and for a given
time.

e.g.: A reliability of 95% for 1,000 hrs for a bulb means that the
probability of the bulb surviving for 1,000 hrs under the specified
operating conditions is 95%.

Failure
An item is considered to have failed under one of three conditions:
• when it becomes completely inoperable
• when it is still operable but no longer able to perform its intended
  function
• when serious deterioration has made it unreliable or unsafe.

5 PCAS
Failure

Failure modes in mechanical systems
The failure modes of mechanical components are classified into three
categories:
1. failures due to operating load
2. failures due to environment
3. failures due to poor manufacturing quality.

Causes include many aspects of design and material selection: material
imperfections; fabrication and processing; reworking; assembly;
inspection, testing and quality control; storage and shipment; and
unanticipated exposure to overloads or stresses in service.

6 PCAS
Hazard Rate

• The failure of a population of fielded products can arise from inherent
  design weaknesses, manufacturing- and quality-control-related problems,
  variability due to customer usage, the maintenance policies of the
  customer, and improper use or abuse of the product.
• The hazard rate, h(t), is the number of failures per unit time per
  number of non-failed products remaining at time t.
• It is the "instantaneous" failure rate.

7 PCAS
Life characteristics pattern: the bathtub curve

[Figure: hazard rate vs. time t. Region I, the early-failure period with a
decreasing hazard rate; Region II, the useful-life period with a constant
failure rate; Region III, the wear-out failure period with an increasing
hazard rate.]
Life characteristics pattern

• Early failures (burn-in): occur at the beginning of a system's operating
  life.
• Chance failures: predominate during the actual working of the system.
  These failures are totally irregular and unexpected.
• Wear-out failures: occur due to aging or wearing out of the components.

9 PCAS
Modes of failure

• Catastrophic failure: e.g. a fuse blowing.
• Degradation (or creeping) failure: e.g. resistor failure by drift.
• Independent failure: e.g. failure of a tyre of a motor car.
• Secondary failure: e.g. a resistor getting shorted might cause extensive
  damage to other components.

10 PCAS
Life Expectancy of Some Objects

Object               Life Expectancy*
Mosquito             1 day
Light bulb           8-10 months
Washing machine      6-7 years
Dog                  8-10 years
Motor car            10-12 years
Undersea cables      40 years
Man                  70-80 years
House                100 years
Integrated circuits  1000 years
Plastic              1000-2000 years

*The figures indicated are approximate.

11 PCAS
Failure Rates

Component/Part   Failure Rate per 1,000 hr
Resistors        0.2210
Capacitors       0.1517
Transformers     0.2837
Connectors       0.3083
Coils            0.3893
Crystal diodes   0.0750
Motors           0.1410
Tubes            3.3000

12 PCAS
Mean Time Between Failures

Equipment                  MTBF*
Elevator                   40,000 hr
Computers                  4000-5000 hr
TV sets                    5000-10,000 hr
Washing machines           5000-10,000 hr
Satellites                 10-12 years
Military ground equipment  1000-2000 hr
Missiles                   1-200 hr

*The figures indicated are approximate.

13 PCAS
Reliability Measurement

1. Failure rate (λ): the number of failures per unit of operating time.
2. Mean time between failures (MTBF): the average time between two
   failures; for a constant failure rate, MTBF = 1/λ.

14 PCAS
Failure rate

Four transformers failed after the following test time periods:

Transformer no   Fails after (hrs)
1                50
2                150
3                250
4                450

Total operating time = 50 + 150 + 250 + 450 = 900 hrs
λ = 4/900 = 0.00444 failures/hr

MTBF = 1/λ = 1/0.00444 = 225 hrs

15 PCAS
Failure rate (contd.)

A life test was conducted on 12 electrical pumps for 1000 hrs, of which 5
pumps failed. The test results are as follows:

Pump no   Failed after (hrs)
2         620
3         390
4         780
8         860
9         950

Total operating time = (620 + 390 + 780 + 860 + 950) + 7 × 1000
                     = 10,600 hrs
λ = 5/10,600 = 0.0004716 failures/hr

MTBF = 1/λ = 1/0.0004716 = 2,120 hrs

16 PCAS
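Both failure-rate examples follow the same pattern, accumulated unit-hours divided by the number of failures, which a short sketch can make explicit; the function name is my own:

```python
# Failure rate / MTBF for a time-terminated life test without
# replacement: total unit-hours accumulated on test, divided by the
# number of failures observed.

def mtbf(failure_times, n_units, test_duration):
    survivors = n_units - len(failure_times)
    unit_hours = sum(failure_times) + survivors * test_duration
    return unit_hours / len(failure_times)

# Pump example: 12 pumps on test for 1000 hrs, 5 failures.
pump_failures = [620, 390, 780, 860, 950]
print(mtbf(pump_failures, n_units=12, test_duration=1000))  # 2120.0
```

The transformer example is the degenerate case where every unit failed, so the test duration does not contribute: `mtbf([50, 150, 250, 450], 4, 0)` gives 225 hrs.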
Reliability function

What is the reliability of the pump (previous slide) if it has to operate
for 10,000 hrs?

17 PCAS
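The slide leaves the calculation to the reader; assuming the constant-failure-rate (exponential) model implied by the MTBF calculation, a worked answer would be:

```latex
R(t) = e^{-\lambda t}, \qquad \lambda = 4.716\times10^{-4}\ \text{per hr}
\\
R(10{,}000) = e^{-4.716\times10^{-4}\times 10^{4}} = e^{-4.716} \approx 0.009
```

i.e. the pump has only about a 0.9% chance of surviving 10,000 hrs of operation.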
Problem statement

To ensure proper illumination in control rooms, higher reliability of
electric lamps is necessary. Let us consider that the failure times (in
hours) of a population of 30 electric lamps from a control room are given
in the following table. Calculate the failure density, reliability and
hazard functions.

Lamp no  Failure time (hrs)   Lamp no  Failure time (hrs)
1        5495.05              16       324.61
2        8817.71              17       866.69
3        539.66               18       6311.47
4        2253.02              19       3095.62
5        18887.00             20       927.41
6        2435.62              21       4037.11
7        99.33                22       933.79
8        3716.24              23       1485.66
9        12155.56             24       4158.11
10       552.75               25       6513.43
11       3511.42              26       8367.92
12       6893.81              27       1912.24
13       1853.83              28       13576.97
14       3458.40              29       1843.38
15       7710.78              30       4653.99

18 PCAS
Problem statement (soln)

Failure times sorted in ascending order:

Lamp no  Failure time (hrs)   Lamp no  Failure time (hrs)
7        99.33                8        3716.24
16       324.61               21       4037.11
3        539.66               24       4158.11
10       552.75               30       4653.99
17       866.69               1        5495.05
20       927.41               18       6311.47
22       933.79               25       6513.43
23       1485.66              12       6893.81
29       1843.38              15       7710.78
13       1853.83              26       8367.92
27       1912.24              2        8817.71
4        2253.02              9        12155.56
6        2435.62              28       13576.97
19       3095.62              5        18887.00
14       3458.40

19 PCAS
Problem statement (soln)

Interval (hrs)   No. survived at       No. of failures    Reliability  Unreliability
low     high     interval start ns(t)  in interval nf(t)  R(t)         F(t)
0       3150     30                    14                 1.000        0.000
3150    6300     16                    7                  0.533        0.467
6300    9450     9                     6                  0.300        0.700
9450    12600    3                     1                  0.100        0.900
12600   15750    2                     1                  0.067        0.933
15750   18900    1                     1                  0.033        0.967

20 PCAS
Problem statement (soln)

Interval (hrs)   ns(t)  nf(t)  R(t)   F(t)   Failure density f(t)  Hazard rate h(t)
0 - 3150         30     14     1.000  0.000  1.481E-04             1.481E-04
3150 - 6300      16     7      0.533  0.467  7.407E-05             1.389E-04
6300 - 9450      9      6      0.300  0.700  6.349E-05             2.116E-04
9450 - 12600     3      1      0.100  0.900  1.058E-05             1.058E-04
12600 - 15750    2      1      0.067  0.933  1.058E-05             1.587E-04
15750 - 18900    1      1      0.033  0.967  1.058E-05             3.175E-04

Here f(t) = nf(t)/(N Δt) and h(t) = nf(t)/(ns(t) Δt), with N = 30 lamps
and interval width Δt = 3150 hrs.

21 PCAS
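The whole solution table can be reproduced from the raw failure times; here is a sketch using the interval width from the slides (the function name is my own):

```python
# Interval-based estimates of reliability, failure density and hazard
# rate for the 30-lamp data (interval width 3150 hrs, as on the slides).

FAILURE_TIMES = [
    5495.05, 8817.71, 539.66, 2253.02, 18887.00, 2435.62, 99.33,
    3716.24, 12155.56, 552.75, 3511.42, 6893.81, 1853.83, 3458.40,
    7710.78, 324.61, 866.69, 6311.47, 3095.62, 927.41, 4037.11,
    933.79, 1485.66, 4158.11, 6513.43, 8367.92, 1912.24, 13576.97,
    1843.38, 4653.99,
]

def interval_stats(times, dt, n_intervals):
    """Return (ns, nf, R, f, h) for each interval [i*dt, (i+1)*dt)."""
    n = len(times)
    rows = []
    for i in range(n_intervals):
        lo, hi = i * dt, (i + 1) * dt
        ns = sum(t >= lo for t in times)       # survivors at interval start
        nf = sum(lo <= t < hi for t in times)  # failures within the interval
        rows.append((ns, nf, ns / n, nf / (n * dt), nf / (ns * dt)))
    return rows

for ns, nf, R, f, h in interval_stats(FAILURE_TIMES, 3150.0, 6):
    print(f"ns={ns:2d} nf={nf:2d} R={R:.3f} f={f:.3e} h={h:.3e}")
```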
System reliability and modelling

The nature of the reliability models to be used for quantitative
evaluation depends on the type of system. There are basically two types of
system models:
• Non-repairable systems: e.g. fuses, missiles, bulbs
• Repairable systems:
  i.   Continuously operating systems: e.g. furnaces, satellites
  ii.  On-and-off operating systems: e.g. computers, communication
       equipment
  iii. Intermittently operating systems: e.g. telephones, radar

23 PCAS
System reliability: series

1 → 2 → … → n

A series system works only if every component works:
RS = R1 × R2 × … × Rn

24 PCAS
System reliability: parallel

A parallel system fails only if every component fails:
RS = 1 − (1 − R1)(1 − R2) … (1 − Rn)

25 PCAS
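Both formulas can be sketched in a few lines, together with the series-parallel reduction used later in the lecture; the component reliabilities below are made-up illustrative values:

```python
# Series and parallel system reliability.

from math import prod

def series(rels):
    """All components must survive: RS = R1 * R2 * ... * Rn."""
    return prod(rels)

def parallel(rels):
    """System fails only if every branch fails: RS = 1 - prod(1 - Ri)."""
    return 1 - prod(1 - r for r in rels)

# Example: A and B in series, two redundant C units in parallel, then D.
RA, RB, RC, RD = 0.95, 0.90, 0.80, 0.99
RC_eq = parallel([RC, RC])              # 1 - (1 - 0.8)**2 = 0.96
R_system = series([RA, RB, RC_eq, RD])
print(round(RC_eq, 4), round(R_system, 4))
```

Note that redundancy lifts the weakest element: two C units at 0.80 behave like a single element at 0.96.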
Parallel redundancy

Parallel redundancy is often used under the following considerations:
i.   to minimize the effect of chance failures
ii.  when faulty components are difficult to detect
iii. when assembly/disassembly of components is not easy
iv.  for components of tactical importance
v.   when the system needs high reliability due to strategic importance.

26 PCAS
Standby Redundancy Systems

Standby units are of five different types:
i.   Cold standby: the standby units do not fail when not in operation.
ii.  Hot standby: the standby units deteriorate even when not in use,
     e.g. batteries.
iii. Tepid standby: the reliability of the standby units decreases with
     time.
iv.  Sliding standby: the sliding standby unit will function when any of
     the n components of the system fails.
v.   Sliding standby with AFL: the system has an Automatic Fault Location
     (AFL) device to detect a fault, and switching to the standby unit is
     done automatically, e.g. a UPS.

27 PCAS
Standby Redundancy Models

In the figure, item A is the on-line active item, and item B is standing
by, waiting to be switched on to replace A when the latter fails.

The following assumptions are made:
a) when operating, both items have a constant failure rate λ, and zero
   failure rate in standby mode;
b) the switch is perfect;
c) the switching-over time is negligible; and
d) the standby unit does not fail while in standby mode.

28 PCAS
Standby Redundancy Models

29 PCAS 4/18/2023
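The derivation on this slide did not survive extraction. Under the assumptions listed above, the standard result for a two-unit cold-standby system with a perfect switch is:

```latex
R(t) = e^{-\lambda t}\,(1 + \lambda t), \qquad \text{MTBF} = \frac{2}{\lambda}
```

Compared with a single unit, the standby arrangement doubles the MTBF because the second unit accumulates no operating time until the first fails.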
Problem statement

A typical UPS (Uninterrupted Power Supply) circuit is shown in the figure.
Normally, power is supplied to the load by the AC supply and the battery
is charged by the rectifier. In case of a power failure, which is detected
by Automatic Fault Location (AFL), the rectifier is disconnected and power
is supplied to the load by the battery via the inverter. Given the failure
rates of the components (as shown in the table), calculate the UPS
reliability for 1000 hr.

Component    Failure rate λ (per hr)   Repair rate μ (per hr)
A.C. supply  2.28 × 10−4               2.000
Rectifier    1 × 10−4                  0.250
Battery      1 × 10−6                  0.125
Inverter     1 × 10−4                  0.250

30 PCAS
Reliability Block Diagram

31 PCAS

Standby Redundancy Models with AFL

33 PCAS
M out of N Models (Identical Items)

34 PCAS
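The content of this slide is an image that did not survive extraction; the standard formula for an M-out-of-N system of identical, independent items is the binomial sum, sketched here (the function name is my own):

```python
# Reliability of an M-out-of-N system of identical, independent items,
# each with reliability r: the system works if at least M items work.

from math import comb

def m_out_of_n(m: int, n: int, r: float) -> float:
    return sum(comb(n, k) * r**k * (1 - r)**(n - k)
               for k in range(m, n + 1))

# e.g. a 2-out-of-3 system with r = 0.9:
# 3 * 0.9**2 * 0.1 + 0.9**3 = 0.243 + 0.729 = 0.972
print(m_out_of_n(2, 3, 0.9))
```

The special cases agree with the earlier slides: M = N reduces to a pure series system (r**n), and M = 1 to a pure parallel system.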
Series-Parallel Systems

[Diagram: A → B → (two identical units C in parallel) → D, with
reliabilities RA, RB, RC, RC and RD.]

Convert the parallel pair to an equivalent series element C':
RC' = 1 − (1 − RC)(1 − RC)

This gives the equivalent series system A → B → C' → D, so
RS = RA × RB × RC' × RD.

35 PCAS
Problem statement

36 PCAS 4/18/2023
Maintainability

• Maintainability is the totality of design factors that allows
  maintenance to be accomplished easily.
• Preventive maintenance reduces the risk of failure.
• Corrective maintenance is the response to failures.

37 PCAS
Availability

Operational availability (as seen by the user)

Inherent availability (as seen by maintenance personnel)

MTBM = mean time between maintenance
MTD  = mean down time
MTBF = mean time between failures
MTTR = mean time to repair

38 PCAS
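The two availability formulas on this slide were images; in the notation defined above, the usual definitions are:

```latex
A_o = \frac{\text{MTBM}}{\text{MTBM} + \text{MTD}}, \qquad
A_i = \frac{\text{MTBF}}{\text{MTBF} + \text{MTTR}}
```

Operational availability includes all downtime (including logistics and administrative delays), so A_o ≤ A_i.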
Reliability improvement

• Standardization
• Redundancy
• Physics of failure
• Reliability testing
• Burn-in
• Failure mode and effects analysis
• Fault tree analysis

39 PCAS
Reliability testing

• To determine whether the equipment under test meets its reliability
  requirement.
• The criterion for acceptability is the minimum acceptable life.
• Three types of reliability tests:
  I.   Tests terminated upon the occurrence of a preassigned number of
       failures
       a. with replacement of failed items
       b. without replacement of failed items
  II.  Tests terminated upon a preassigned time
  III. Sequential tests

40 PCAS
Tests terminated upon the occurrence of a preassigned number of failures,
with replacement of failed items

41 PCAS
Soln (contd.)

For a probability of 90% (i.e. α = 0.10) and r = 6, find k from the table.

42 PCAS
Soln (contd.)

The choice of the sample size n and the preassigned number of failures r
at which to terminate the life test is decided based on the cost of the
equipment and test-time considerations.

43 PCAS
Reliability vs Quality

Reliability                  Quality
It is a function of time     No time element
Probability of performance   Degree of conformance with standards,
                             specifications & design requirements

Note: high quality does not necessarily mean high reliability.

44 PCAS
Reliability and safety

The term safety would mean absolute safety, in the sense that the system
will never fail during operation. In reality, such a thing is not possible
to achieve. Hence, for all design purposes a safety factor is taken.

The choice of the factor of safety (FOS) depends upon the acceptable level
of risk, on the basis of the economic and/or social consequences
associated with such failures in the field.

45 PCAS
Risk

• Risk is generally defined as "the chance of injury or loss resulting
  from exposure to a source of danger",
• while safety means "freedom from danger or hazard".
• Risk = probability × consequences

46 PCAS
Risk

x        100    500    750    1000   1500   2500   5000   7000   8000   10,000
P(X>x)   0.1869 0.1769 0.1569 0.1269 0.0869 0.0069 0.0059 0.0009 0.0008 0.0005

47 PCAS
End of lecture

Thank you for your kind attention

51 PCAS