Definition of Reliability

Introduction

In general, reliability is the ability of a person or system to perform and maintain its functions in
routine circumstances, as well as in hostile or unexpected circumstances.
First, let us review some concepts of reliability. At a given point in time, a component or system
is either functioning or it has failed, and its operating state changes as time evolves. A working
component or system will eventually fail. If the component or system is non-repairable, the failed
state continues forever. A repairable component or system remains in the failed state for a period
of time while it is being repaired and then transitions back to the functioning state when the repair
is completed. This transition is assumed to be instantaneous. The change from a functioning state
to a failed state is called failure, while the change from a failed state to a functioning state is
called repair. It is also assumed that repairs bring the component or system back to an "as good
as new" condition. For a repairable system, this repair-to-failure and failure-to-repair cycle
repeats over and over.

Reliability (for non-repairable items) can be defined as the probability that an item will perform
a defined function without failure under stated conditions for a stated period of time.

DEFINITION OF RELIABILITY

In general, reliability is the ability of a person or system to perform and maintain its functions in
routine circumstances, as well as in hostile or unexpected circumstances. Reliability refers to the
consistency of a measure. A test is considered reliable if we get the same result repeatedly. For
example, if a test is designed to measure a trait (such as introversion), then each time the test is
administered to a subject, the results should be approximately the same. Unfortunately, reliability
cannot be calculated exactly; it can only be estimated, and there are several different ways to
estimate it.

“Reliability is the probability of a device performing its purpose adequately for the period
intended under the given operating conditions.”

This definition highlights four important factors that are required in determining and assigning a
reliability value:
1. The reliability of a device is expressed as a probability,
2. The device is required to give adequate performance,
3. The duration of adequate performance is specified, and
4. The environmental or operating conditions are prescribed.
RELIABILITY BLOCK DIAGRAMS
A reliability block diagram (RBD) is a graphical depiction of the system's components and
connectors which can be used to determine the overall system reliability given the reliability of
its components. It is also used to perform availability analyses of large and complex systems,
using block diagrams to show network relationships. The structure of the reliability block
diagram defines the logical interaction of failures within a system that is required to sustain
system operation.

An RBD has one or more paths through it which represent successful system operation. Every
path is constructed out of blocks (components) and lines (connectors). The blocks represent the
components of the system, and the lines describe which combinations of blocks are required for
success. If any path through the diagram operates successfully, the overall system is said to
succeed; if all paths fail, the overall system fails. For example, a simple series configuration
indicates that all of the components must operate for the system to operate, a simple parallel
configuration indicates that at least one of the components must operate, and so on.
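
As a minimal sketch of how these two basic configurations combine component reliabilities
(independent of any particular RBD tool, with hypothetical component values), consider:

```python
# Minimal sketch: combining component reliabilities in series and parallel RBDs.
# The component reliability values below are hypothetical.

def series_reliability(reliabilities):
    """All blocks must work: R_sys = product of component reliabilities."""
    r = 1.0
    for ri in reliabilities:
        r *= ri
    return r

def parallel_reliability(reliabilities):
    """At least one block must work: R_sys = 1 - product of unreliabilities."""
    q = 1.0
    for ri in reliabilities:
        q *= (1.0 - ri)
    return 1.0 - q

if __name__ == "__main__":
    components = [0.95, 0.98, 0.99]                        # hypothetical reliabilities
    print("Series:  ", series_reliability(components))     # ~0.9217
    print("Parallel:", parallel_reliability(components))   # ~0.99999
```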

The logical flow of an RBD starts from an input node located at the left side of the diagram. The
input node flows through arrangements of series or parallel blocks that lead to the output node at
the right side of the diagram. A diagram should contain only one input node and one output
node. The blocks of an RBD are connected in a series configuration, a parallel configuration, or a
combination of both. A parallel connection is used to show redundancy and is joined by multiple
links or paths from the start node to the end node. Successful system operation requires at least
one maintained path between the system input and the system output.

Reliability block diagrams often correspond to the physical arrangement of components in the
system. However, in certain cases this may not apply. For instance, when two resistors are in
parallel, the system fails if either one fails short, so the reliability block diagram for the
"fail short" mode of failure is composed of two series blocks. For other modes of failure, such as
an "open" failure mode, the reliability block diagram is composed of two parallel blocks. Blocks
are arranged in series and parallel arrangements between the system input and output nodes. It is
important to note that in the RBD module, the connection between blocks is directed.

RBD PARAMETERS

A reliability block diagram (RBD) is a drawing and calculation tool used to model complex
systems. An RBD is a series of images (blocks) representing portions of a system. Once the
images (blocks) are configured properly and image data is provided, the failure rate, MTBF,
reliability, and availability of the system can be calculated. As the configuration of the diagram
changes, the calculation results also change. Some of the parameters described below are
specified for the blocks of the network, and others are calculated from them.

1. CFI System Conditional Failure Intensity

λ_s(t) = ω(t) / (1 − Q(t))

Where:

λ_s(t) = System Conditional Failure Intensity at time t:

"The probability that the component fails per unit time at time t, given that it was as good as
new at time zero and is normal at time t".

ω(t) = Failure Frequency at time t.

Q(t) = Unavailability at time t.

Note: A large value of λ_s(t) means that the system is about to fail.

2. Expected Number of Failures

This gives the expected number of failures of the system in lifetime t.

W(t_1, t_2) = ∫_{t_1}^{t_2} ω(t) dt

Where:

W(t_1, t_2) = Expected Number of Failures over the interval [t_1, t_2) given that the component
was as good as new at time zero.

ω(t) = Failure Frequency at time t (Unconditional Failure Intensity), also called the System
Failure Frequency.

Note: The W(0, t) of a non-repairable component is equal to F(t) (Unreliability) and approaches
unity as t gets larger. The W(0, t) of a repairable component goes to infinity as t becomes
infinite.
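
As an illustration of the integral above, the following sketch evaluates W(t_1, t_2) numerically
for an assumed failure-frequency curve; the function omega() is a hypothetical placeholder, not
taken from the source.

```python
# Sketch: numerically evaluating W(t1, t2) = integral of omega(t) dt
# with the trapezoidal rule. omega() below is a hypothetical placeholder.

def omega(t):
    # Hypothetical unconditional failure intensity (failures per hour).
    return 0.001 + 0.0001 * t

def expected_failures(t1, t2, steps=1000):
    """Approximate W(t1, t2) by trapezoidal integration of omega(t)."""
    h = (t2 - t1) / steps
    total = 0.5 * (omega(t1) + omega(t2))
    for k in range(1, steps):
        total += omega(t1 + k * h)
    return total * h

print(expected_failures(0.0, 100.0))  # expected failures over the first 100 hours (~0.6)
```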

3. Failure Frequency of a Cut Set

The failure frequency, unlike the failure rate, is the probability of failure per unit time at
lifetime t, independent of whether the failure has occurred before time t. The failure rate, on
the other hand, is conditional on the failure not having occurred before time t.
ω_cutset = Σ_{j=1}^{η} ω_j ∏_{i=1, i≠j}^{η} Q_i

Where:

ω_cutset = Failure frequency of the cut set

ω_j = Failure frequency of the jth event in the cut set

Q_i = Unavailability of the ith event in the cut set

η = Number of events in the cut set
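
A small sketch of this cut-set formula, with hypothetical event frequencies and unavailabilities:

```python
# Sketch: failure frequency of a single cut set,
# omega_cutset = sum_j omega_j * product_{i != j} Q_i. Event data are hypothetical.

def cutset_frequency(omegas, unavailabilities):
    """omegas[j] = failure frequency of event j; unavailabilities[i] = Q_i."""
    total = 0.0
    for j, wj in enumerate(omegas):
        product = 1.0
        for i, qi in enumerate(unavailabilities):
            if i != j:
                product *= qi
        total += wj * product
    return total

# Two-event cut set with hypothetical values:
print(cutset_frequency([1e-3, 2e-3], [0.01, 0.02]))  # 1e-3*0.02 + 2e-3*0.01 = 4e-5
```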

4. System Failure Rate

Assuming a constant rate model, the system failure rate is given by,

λ(t) = ω(t) / (1 − Q(t)) = 1 / MTBF

Where:

λ(t) = Failure Rate at time t

ω(t) = Failure Frequency at time t (Unconditional Failure Intensity).
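
A brief numerical sketch of the constant-rate relation above, using hypothetical values for ω(t)
and Q(t):

```python
# Sketch: constant-rate system failure rate and MTBF from failure frequency
# and unavailability. The values of omega_t and q_t are hypothetical.

omega_t = 0.0005   # failure frequency at time t (failures per hour)
q_t = 0.01         # unavailability at time t

failure_rate = omega_t / (1.0 - q_t)   # lambda(t) = omega(t) / (1 - Q(t))
mtbf = 1.0 / failure_rate              # under the constant-rate model, MTBF = 1 / lambda

print(failure_rate)  # ~0.000505 per hour
print(mtbf)          # ~1980 hours
```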


5. MTBF System Mean Time Between Failures

MTBF = t / W(0, t) = MTTF + MTTR

Where:

MTBF = Mean time between failures: The expected value of the time between two consecutive
failures.

W(0, t) = Expected Number of Failures over the interval [0, t) given that the component was as
good as new at time zero.

6. MTTF System Mean Time To Failure



MTTF = ∫_0^∞ t f(t) dt

Where:

MTTF = Mean time to failure

f(t) dt = Probability that the time to failure (TTF) is around t.

Alternatively:

MTBF = MTTF + MTTR

MTTF = MTBF − MTTR

Where:

MTBF = Mean time between failures

MTTF = Mean time to failure

MTTR = Mean time to repair

7. MTTR System Mean Time To Repair

MTTR = TDT / W(0, t) = MTBF − MTTF

Where:

MTTR = Mean time to repair

TDT = Total Down Time

W(0, t) = Expected Number of Failures over the interval [0, t) given that the component was as
good as new at time zero.

MTBF = Mean time between failures: The expected value of the time between two consecutive
failures.
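
The following sketch ties parameters 5 to 7 together, computing MTBF, MTTR and MTTF from an
observation period, the expected number of failures and the total down time; all input values are
hypothetical.

```python
# Sketch: MTBF, MTTR and MTTF from observation time, expected number of
# failures W(0, t) and total down time TDT. All input values are hypothetical.

t = 10_000.0      # observation period (hours)
w_0t = 5.0        # expected number of failures over [0, t)
tdt = 50.0        # total down time over the period (hours)

mtbf = t / w_0t        # MTBF = t / W(0, t)      -> 2000 h
mttr = tdt / w_0t      # MTTR = TDT / W(0, t)    -> 10 h
mttf = mtbf - mttr     # MTTF = MTBF - MTTR      -> 1990 h

print(mtbf, mttr, mttf)
```
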
8. System Failure Frequency
If the Quantification Method is set to Esary-Proschan, the system failure frequency is obtained
by applying:

ω_sys = Σ_{i=1}^{n} ω_cutset_i ∏_{j=1, j≠i}^{n} (1 − Q_cutset_j)

Where:

ω_sys = System failure frequency

ω_cutset_i = Failure frequency of cut set i

Q_cutset_j = Unavailability of cut set j

n = Number of cut sets

If the Quantification Method is set to Rare, the system failure frequency is obtained by applying:


ω_sys = Σ_{i=1}^{n} ω_cutset_i

Where:

ω_sys = System failure frequency

ω_cutset_i = Failure frequency of cut set i

n = Number of cut sets
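
A short sketch comparing the two quantification formulas on the same (hypothetical) cut-set data:

```python
# Sketch: system failure frequency from cut-set results using the
# Esary-Proschan formula and the rare-event approximation.
# Cut-set frequencies and unavailabilities below are hypothetical.

cutset_freqs = [1e-4, 5e-5, 2e-5]      # omega_cutset_i
cutset_unavail = [1e-3, 5e-4, 2e-4]    # Q_cutset_j

def esary_proschan_frequency(freqs, unavails):
    """omega_sys = sum_i omega_cutset_i * product_{j != i} (1 - Q_cutset_j)."""
    total = 0.0
    for i, wi in enumerate(freqs):
        product = 1.0
        for j, qj in enumerate(unavails):
            if j != i:
                product *= (1.0 - qj)
        total += wi * product
    return total

def rare_event_frequency(freqs):
    """omega_sys = sum_i omega_cutset_i."""
    return sum(freqs)

print(esary_proschan_frequency(cutset_freqs, cutset_unavail))
print(rare_event_frequency(cutset_freqs))  # slightly larger value
```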

9. System Unreliability
Unreliability is the probability of one or more failures occurring up to time t.

F(t) = 1 − e^{−∫_0^t ω(u) / (1 − Q(u)) du}

Where:
F(t) = Unreliability at time t
Q(t) = Unavailability at time t
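
As a familiar special case, if the item is non-repairable and has a constant failure rate λ (an
assumption added here for illustration, not stated above), the unreliability reduces to the
exponential form F(t) = 1 − e^{−λt}, which is the form used in the series-system example below.
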
RBD Example for a series system
In a series network, all the blocks are connected end-to-end. If any one of the components
(blocks) fails, then the whole system fails, so all of the components are needed to make the
system work. For example, take a computer system consisting of a monitor, a processor and a
keyboard. If any one of the three components fails, the system as a whole fails to input text
into the computer.

RBD for series system


Let λ_1 be the failure rate of the monitor.
Assuming an exponential distribution for the failures,

R_monitor(t) = e^{−λ_1 t}

Similarly, R_processor(t) = e^{−λ_2 t} and R_keyboard(t) = e^{−λ_3 t}

So, R_system(t) = R_monitor(t) · R_processor(t) · R_keyboard(t)

R_system(t) = e^{−λ_1 t} · e^{−λ_2 t} · e^{−λ_3 t}

R_system(t) = e^{−(λ_1 + λ_2 + λ_3) t}

When an exponential failure distribution is assumed, the failure rate of a series system is the
sum of the individual components' failure rates.
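
A quick numerical check of this worked example in Python (the three failure rates and the mission
time are hypothetical):

```python
import math

# Sketch: series-system reliability for the monitor/processor/keyboard example,
# assuming exponential failures. The three failure rates and mission time are hypothetical.

lam1, lam2, lam3 = 1e-5, 2e-5, 5e-6   # failures per hour
t = 1000.0                            # mission time in hours

r_monitor = math.exp(-lam1 * t)
r_processor = math.exp(-lam2 * t)
r_keyboard = math.exp(-lam3 * t)

r_series = r_monitor * r_processor * r_keyboard
# Equivalent closed form: failure rates add in series.
r_series_direct = math.exp(-(lam1 + lam2 + lam3) * t)

print(r_series, r_series_direct)  # both ~0.9656
```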
