1.1 INTRODUCTION
The mathematical theory of reliability has grown out of the demands of modern technology and, in particular, of experiences during World War II, when life testing and electronic and missile reliability problems began to receive a great deal of attention from mathematical statisticians and engineers, who examined the whole reliability situation and recommended measures that would increase reliability.
The overall scientific discipline that deals with general methods of assessing the reliability of a system from its component parts, or from strength and stress variables, has received the name 'reliability theory'.
The concept of reliability has been known for a number of years, but it has gained greater significance and importance during the past decade, particularly due to the increasing complexity of modern systems.
With increasing automation and the use of highly complex systems, unreliability has come to account for a large share of overall costs. For example, the yearly cost of maintaining some military systems in an operable state is as high as ten times the original cost of the system. The failure of such systems could result in personal injury and undue expenses. Also caused by unreliability are scheduled delays, inconvenience, customer dissatisfaction and perhaps even the loss of national security. The need for reliability has been felt both by the
government and industry. For example, Department of Defence documents such as 'Requirements for Reliability Programs for Systems and Equipment' and NASA NPC 250-1 (Reliability Program Provisions for Space System Contractors) provide formal reliability program requirements for contractors.
Waiting for service usually seems like an unnecessary waste of time. In our private lives, we have the option of seeking service elsewhere or going without the service. Such defections have direct economic consequences for the organization providing the service, and an important aspect of system design is to balance this cost against the expense of additional capacity. The study of waiting lines, called 'queuing theory', is one of the oldest and most widely used operations research techniques. The first recognized effort to analyze queues was made by a Danish engineer, A.K. Erlang, in his studies of telephone traffic congestion.
A queuing situation arises when customers arrive at one or more service facilities. On arrival at the facility, the customer may be serviced immediately or may have to wait until the facility becomes available. The service time allocated to each customer may be fixed or random, depending on the type of service. Situations of this type exist in everyday life.
A typical example occurs in a barbershop: the arriving individuals are the customers and the barbers are the servers. Letters arriving at a typist's desk represent another example; here the letters are the customers and the typist is the server. In general, arriving units, commonly referred to as customers, either wait for service or the service facilities stand idle waiting for customers. Some customers wait when the total number of customers requiring service exceeds the number of service facilities; some service facilities stand idle when the total number of service facilities exceeds the number of customers requiring service. Machines wait to be repaired by an engineering staff, aircraft wait to land at an airport, and breakdowns await repair by maintenance crews. These examples show that the term 'customer' may be interpreted broadly, and that service may be rendered either by the server to the customer or by the customer to the server. Such service
situations generally involve an element of randomness in the arrival and service patterns. A mathematical theory has thus evolved that provides a means of analyzing such situations. This is queuing (or waiting line) theory, which is based on describing the arrival and/or departure patterns of the system and deriving its operating characteristics. Examples of these characteristics are the expected waiting time until the service of a customer begins and the expected queue length. The availability of such measures enables analysts to make inferences concerning the operation of the
system. The parameters of the system (such as the service rate) may then be adjusted to improve its performance. A system is said to be in a transient state when its operating characteristics (behaviour) vary with time. This usually occurs at the early stages of the system's operation, where its behaviour is still dependent on the initial conditions. However, most queuing theory analysis has been directed to steady-state results. A steady-state condition is said to prevail when the behaviour of the system becomes independent of time.
Any subject is best studied systematically; that is, first of all its basic concepts must be understood. The following are some important concepts which it is necessary to understand before entering into reliability theory.
(i) Man Made System - Today man has to his credit many sophisticated systems which are fully designed by his hands and brain. For example, computer systems, electric power supply systems, television systems, etc. are man made systems.
(ii) Natural or God Made System - Besides the man made systems, the universe has some other systems whose existence is independent of human hands; these are called natural or God made systems. The human body system, solar energy system, weather system, etc. are examples of God made systems. Generally, when we perform life-testing experiments with man made systems, we call it 'Reliability Analysis', while when we deal with God made systems, we call it 'Survival Analysis'.
1.2.1 Reliability
In its simplest form, reliability is the ability of an equipment to operate without failure when put into service. More rigorously, reliability is the probability that a system or component performs its intended function adequately for a specified period of time under the stated operating conditions.

Mathematically, if T is the time till the failure of the unit occurs, then the probability that it will not fail in a given environment before time t is

R(t) = P(T > t) = 1 - F(t)    ...(1)

where F(t) is the cumulative distribution function (c.d.f.) of T, called the unreliability of the system, so that F(t) + R(t) = 1. Thus, the reliability is a function of
time and depends on environmental conditions which may or may not vary with
time.
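As a quick numerical illustration of the relation F(t) + R(t) = 1, the sketch below assumes, purely for illustration, an exponential lifetime with mean life theta; the function names are illustrative, not from the text:

```python
import math

def unreliability(t, theta):
    # F(t): c.d.f. of an (illustrative) exponential lifetime with mean theta
    return 1.0 - math.exp(-t / theta)

def reliability(t, theta):
    # R(t) = 1 - F(t): probability the unit survives beyond time t
    return 1.0 - unreliability(t, theta)

# F(t) + R(t) = 1 at every t, as stated above
for t in (0.0, 0.5, 2.0, 10.0):
    assert abs(unreliability(t, 1000.0) + reliability(t, 1000.0) - 1.0) < 1e-12

print(round(reliability(100.0, 1000.0), 4))  # exp(-0.1) = 0.9048
```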
Experience has shown that a very good way to present failure data is to compute and plot either the failure density function or the hazard rate as a function of time. The failure density is a measure of the overall speed at which failures are occurring. As time passes, the unit gets worn out and begins to deteriorate. There are several causes of failure, for example:
(a) Careless planning, substandard equipment and raw materials, and lack of proper quality control
(d) Poor manufacturing techniques
Since the item is likely to fail at any time, it is quite customary to assume
that the life of the item is a random variable with a distribution function F(t),
where F(t) is the probability that the item fails before time t. A failure is the partial or total loss or change in the properties of a device (or system) in such a way that its functioning is seriously impeded or completely stopped. The hazard rate h(t) is defined as the instantaneous failure rate of a unit at time t, given that the unit has survived up to time t. In economics, the reciprocal of this function is called 'Mill's ratio', and in demography its name is 'age specific death rate'. This function is also known as the force of mortality in actuarial and life insurance studies. Formally,
h(t) = lim_{dt→0} P[a device of age t will fail in (t, t+dt) | it has survived up to t] / dt
     = lim_{dt→0} P[t < T ≤ t+dt | T > t] / dt
     = lim_{dt→0} {P[T ≤ t+dt] − P[T ≤ t]} / {P[T > t] · dt}
     = lim_{dt→0} {F(t+dt) − F(t)} / {[1 − F(t)] dt}
     = f(t)/R(t)    ...(2)
On integrating the expression in (2), one gets

∫₀ᵗ h(u) du = −log[1 − F(t)] = −log R(t)

so that

R(t) = exp[−∫₀ᵗ h(u) du]    ...(3)
More generally, the reliability over an interval (t₁, t₂) is

R(t₁, t₂) = exp[−∫_{t₁}^{t₂} h(t) dt]

Let us denote ∫₀ᵗ h(u) du by H(t), the cumulative hazard function. Combining (2) and (3),

f(t) = h(t) · exp[−∫₀ᵗ h(u) du]    ...(4)
From relation (4), it is clear that the hazard function is helpful in deciding the form of the life time distribution. A distribution whose hazard rate increases with time is said to have an increasing failure rate (IFR), while one whose hazard rate decreases with time has a decreasing failure rate (DFR).
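Relations (3) and (4) can be checked numerically. The sketch below (an illustration, not part of the original derivation) uses a hazard rate h(t) = 2t chosen only for illustration, for which the closed forms H(t) = t², R(t) = exp(−t²) and f(t) = 2t·exp(−t²) are known:

```python
import math

def cum_hazard(t, h, n=10000):
    # H(t) = integral of h(u) du over [0, t], by the trapezoidal rule
    du = t / n
    s = 0.5 * (h(0.0) + h(t))
    for i in range(1, n):
        s += h(i * du)
    return s * du

h = lambda u: 2.0 * u                   # a linearly increasing (IFR) hazard rate
t = 1.3
R = math.exp(-cum_hazard(t, h))         # relation (3): R(t) = exp(-H(t))
f = h(t) * math.exp(-cum_hazard(t, h))  # relation (4): f(t) = h(t) exp(-H(t))

# compare against the closed forms for this hazard
assert abs(cum_hazard(t, h) - t * t) < 1e-6
assert abs(R - math.exp(-t * t)) < 1e-6
assert abs(f - 2.0 * t * math.exp(-t * t)) < 1e-5
```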
Mean time to system failure (MTSF), or mean life of the system, is the expected value of the life time T:

MTSF = E(T) = ∫₀^∞ t f(t) dt

But f(t) = (d/dt) F(t) = −(d/dt) R(t). Hence

MTSF = −∫₀^∞ t dR(t) = [−t R(t)]₀^∞ + ∫₀^∞ R(t) dt

and, since t R(t) → 0 as t → ∞ whenever the mean life exists,

MTSF = ∫₀^∞ R(t) dt    ...(5)
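Relation (5) can be checked numerically. The sketch below (values illustrative) integrates the reliability function of an exponential unit and recovers its mean life:

```python
import math

def mtsf(R, upper=200.0, n=200000):
    # relation (5): MTSF = integral of R(t) dt from 0 to infinity,
    # truncated at a large upper limit and done by the trapezoidal rule
    dt = upper / n
    s = 0.5 * (R(0.0) + R(upper))
    for i in range(1, n):
        s += R(i * dt)
    return s * dt

theta = 12.0                        # illustrative mean life
R = lambda t: math.exp(-t / theta)  # exponential unit
assert abs(mtsf(R) - theta) < 1e-3  # the integral recovers E(T) = theta
print(round(mtsf(R), 3))
```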
To find the system reliability, we must know the reliabilities of its units/components and their network sufficiently well. Suppose that a system with life time T consists of n different units/components C1, C2, ..., Cn, with life time Ti of the ith component. The system reliability is then

R(t) = P(T > t)
1.3.1 Series Configuration
The simplest and most common structure in reliability analysis is the series configuration. In this case, the functional operation of the system depends on the proper functioning of all its components; the failure of any one component causes system failure. Examples of a series configuration are:
(i) A weapon system comprising the guidance subsystem, computer subsystem and the fire control subsystem. This system fails if any one subsystem fails.
(ii) Deepawali or Christmas glow bulbs, where if one bulb fails, it leads to entire system failure.
For a series system T = min(T1, T2, ..., Tn), and if the components fail independently,

R(t) = P(T > t) = Π_{i=1}^{n} P[Ti > t] = Π_{i=1}^{n} Ri(t)    ...(6)

If hi(t) is the instantaneous failure rate of the ith unit and h(t) is the instantaneous failure rate of the system, then

h(t) = Σ_{i=1}^{n} hi(t)    ...(7)
Thus, in a series system the component reliabilities are multiplied to obtain the system reliability, and the units'/components' hazard rates are added up to obtain the system hazard rate. The block diagram of a series system is shown in Fig. 1.

IN — C1 — C2 — ... — Ci — ... — Cn — OUT

FIG. 1
1.3.2 Parallel Configuration
In many systems, several signal paths perform the same operation. If the system configuration is such that the functioning of at least one path is sufficient to do the job, the system can be represented by a parallel model. In this configuration, all the units/components of the system are arranged in parallel, and the failure of the system occurs only when all the units/components of the system fail. Assuming independent failures, the system reliability is

R(t) = 1 − Π_{i=1}^{n} (1 − Ri(t))    ...(8)
The block diagram of a parallel system is shown in Fig. 2: the components C1, C2, ..., Ci, ..., Cn are connected in parallel between IN and OUT.

FIG. 2
1.3.3 k-out-of-n Configuration
Another practical system is one where more than one of its parallel components are required to meet the demands. For instance, two of the four generators in a generating station may be necessary to supply the required power to the customers; the other two are added to increase the supply reliability. Let us consider such a type of system, which functions if at least k (1 ≤ k ≤ n) of its n components function. If the components are identical and fail independently, the system reliability is

R(t) = Σ_{i=k}^{n} C(n, i) [Rc(t)]^i [1 − Rc(t)]^{n−i}    ...(9)
where Rc(t) is the reliability of each component at time t. In particular, the block diagram for the 2-out-of-3 configuration having three identical components C1, C2, C3 is shown in Fig. 3; it consists of the three parallel paths C1C2, C1C3 and C2C3 connected between IN and OUT.

FIG. 3
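Relation (9), with the series and parallel systems as special cases, can be sketched as follows (the component reliability value is illustrative only):

```python
from math import comb

def k_out_of_n(k, n, rc):
    # relation (9): the system works if at least k of its n identical,
    # independently failing components work
    return sum(comb(n, i) * rc**i * (1 - rc)**(n - i) for i in range(k, n + 1))

rc = 0.9  # illustrative component reliability
# series (k = n) and parallel (k = 1) drop out as special cases
assert abs(k_out_of_n(3, 3, rc) - rc**3) < 1e-12
assert abs(k_out_of_n(1, 3, rc) - (1 - (1 - rc)**3)) < 1e-12
print(round(k_out_of_n(2, 3, rc), 4))  # the 2-out-of-3 system of Fig. 3
```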
The series and parallel configurations are particular cases of the k-out-of-n configuration, with k = n and k = 1 respectively.

1.4 CENSORING
Complete life testing experiments are often time consuming and expensive. There are several situations where it is neither possible nor desirable to use complete sampling; we therefore make the sample censored.
In this type of censoring plan, the amount of time required to obtain the information from a complete sample is reduced. We may put the complete sample on test and terminate the test at a prefixed time. This type of censoring is called Type I (or time) censoring; it is often used in medical and agricultural research. Mathematically, let n items be put on test and let the test be terminated at time t₀. Let T be the random variable denoting the lifetime of an item, and let

F(t₀) = P[T ≤ t₀]

If m is the number of items that fail by time t₀, then

m ~ B(n, F(t₀)); m = 0, 1, 2, ..., n
Note that from a Type I censored sample we get the following information:
(i) m items failed before t₀, with ordered failure times x(1) ≤ x(2) ≤ ... ≤ x(m);
(ii) (n − m) items did not fail up to time t₀.
The likelihood of the sample is

L(x(1) ≤ x(2) ≤ ... ≤ x(m) | F(t₀)) = [1 − F(t₀)]^n,                                    m = 0
                                    = [n!/(n − m)!] Π_{i=1}^{m} f(x(i)) [1 − F(t₀)]^{n−m},  m = 1, 2, ..., n
In Type II (or failure) censoring, the test is terminated at the failure of the first r individuals, where r (< n) is a pre-assigned number of failures. The number of failures is thus fixed before the data are collected, but the time of failure of the r (fixed) individuals is a random variable. It is mostly used in dealing with high cost sophisticated items such as vacuum tubes, X-ray machines, colour T.V. picture tubes, etc.

Suppose the observed failure times are the r smallest life times x(1) ≤ x(2) ≤ ... ≤ x(r) out of a random sample of n lifetimes x₁, x₂, ..., xₙ, which are i.i.d. random variables having p.d.f. f(t) and reliability function R(t). Then it follows from the general results on order statistics that the likelihood function of the sample is

L(x(1) ≤ x(2) ≤ ... ≤ x(r)) = [n!/(n − r)!] Π_{i=1}^{r} f(x(i)) [R(x(r))]^{n−r}
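For the exponential distribution, maximising this Type II censored likelihood yields the well-known estimator θ̂ = [Σ x(i) + (n − r)·x(r)]/r for the mean life. A small simulation sketch (sample sizes, seed and parameter values are all illustrative):

```python
import random

# Type II censored sketch for an exponential mean life:
# put n items on test, stop at the r-th failure, then
#   theta_hat = [ sum of the r observed lifetimes + (n - r) * x_(r) ] / r
random.seed(1)
n, r, theta = 50, 20, 10.0
lifetimes = sorted(random.expovariate(1.0 / theta) for _ in range(n))
observed = lifetimes[:r]                       # x_(1) <= ... <= x_(r)
total_time_on_test = sum(observed) + (n - r) * observed[-1]
theta_hat = total_time_on_test / r
print(round(theta_hat, 2))   # close to the true mean life theta = 10
```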
1.5 BAYESIAN APPROACH AT A GLANCE
The Bayesian approach combines prior information about the parameters with the information contained in the data to formulate the posterior distribution. It provides a well developed and straightforward procedure for facing the problem of choosing an optimal action, and it addresses the question of how the model underlying the data may be updated as new evidence arrives. Bayesian analysis is almost always equivalent to the classical large-sample procedure when the sample size is very large. The foundation stone of this technique is Bayes' theorem, named after Rev. Thomas Bayes, an English minister who lived in the 18th century; Laplace later modified and popularized Bayes' result. Let θ and y denote the parameter and the data respectively, with joint density p(θ, y). From standard probability theory, we have

p(θ, y) = p(y|θ) p(θ)    ...(10)
p(θ, y) = p(θ|y) p(y)    ...(11)

From (10) and (11), we get

p(θ|y) = p(y|θ) p(θ) / p(y)    ...(12)
Here p(y) = E[p(y|θ)], where E indicates averaging with respect to the distribution of θ [e.g. Box and Tiao (1973); Gelman, Carlin, Stern and Rubin (1995) and Lee (1997)]. The density p(θ) is the prior, which tells us what is known about θ without knowledge of the data. Finally, p(θ|y) is the posterior density, which tells us what is known about θ given knowledge of the data.
Suppose that n items are placed on test. It is assumed that their recorded lifetimes form a random sample, say x₁, x₂, ..., xₙ, which follow a distribution with p.d.f. f(x, θ). Here, we will assume θ to be a random variable. Let g(θ) be its density; then the failure time p.d.f. f(x, θ) can be regarded as a conditional p.d.f. of x given θ, and g(θ) is known as the prior p.d.f. Therefore, the joint p.d.f. of (x₁, x₂, ..., xₙ; θ) is

H(x₁, x₂, ..., xₙ; θ) = Π_{i=1}^{n} f(xᵢ, θ) g(θ) = L(x₁, x₂, ..., xₙ; θ) g(θ)    ...(15)
Therefore, the posterior p.d.f. of θ is

g(θ | x₁, x₂, ..., xₙ) = H(x₁, x₂, ..., xₙ; θ) / p(x₁, x₂, ..., xₙ)    ...(16)

The variation in θ prior to observing the data (x₁, x₂, ..., xₙ) is represented by g(θ), known as the prior p.d.f. The conditional distribution of θ given (x₁, x₂, ..., xₙ) is g(θ|x₁, x₂, ..., xₙ). Just as the prior distribution reflects beliefs about θ prior to the experiment, the posterior distribution reflects the updated beliefs posterior to observing the sample x₁, x₂, ..., xₙ. In other words, the uncertainty about the parameter prior to the experiment is represented by the prior p.d.f. g(θ), and the same after the experiment is represented by the posterior p.d.f., denoted by π(θ|x₁, x₂, ..., xₙ).
Inferences about the parameters may be drawn with the help of these distributions. In case the prior distribution of θ is discrete, the integral signs in (16) and (17) are replaced by summation signs. The Bayes estimator minimizes the expected loss w.r.t. the posterior distribution, i.e., it depends on the loss function chosen. If the loss function is taken as the quadratic loss function defined as L(θ*, θ) = (θ* − θ)², then the Bayes estimator that accomplishes the task of estimating θ is the posterior mean.
A Bayes probability interval (P₁, P₂) for θ is obtained from

∫_{P₁}^{P₂} π(θ | x₁, x₂, ..., xₙ) dθ = 1 − α    ...(18)
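As a worked illustration of the Bayes estimator under quadratic (squared error) loss being the posterior mean, consider an assumed, purely illustrative conjugate setup: exponential lifetimes with rate λ and a gamma prior, for which the posterior is gamma with shape a + n and rate b + Σxᵢ:

```python
# Illustrative conjugate calculation (assumed example, not from the text):
# lifetimes x_i ~ exponential with rate lam, prior g(lam) = gamma(a, b).
# The posterior is gamma(a + n, b + sum(x)); under quadratic (SELF) loss
# the Bayes estimator of lam is the posterior mean.
def posterior_mean(a, b, data):
    n, s = len(data), sum(data)
    return (a + n) / (b + s)   # mean of a gamma(a + n, rate b + s) posterior

data = [1.2, 0.7, 2.5, 1.9, 0.4]   # illustrative recorded lifetimes
lam_hat = posterior_mean(2.0, 1.0, data)
print(round(lam_hat, 4))   # (2 + 5) / (1 + 6.7) = 0.9091
```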
The famous essay of Rev. Thomas Bayes was republished in 1958 due to its fundamental importance. The details of Bayesian statistical theory can be found in Raiffa and Schlaifer (1961), Jeffreys (1961), Savage (1962), Lindley (1965), Box and Tiao (1973) and Berger (1980). The prior distribution, which quantifies the knowledge about the unknown parameter before the data are available, plays an important role in Bayesian inference; it is a distribution defined over the parameter space Θ. A detailed discussion of the problem concerning the choice of a prior distribution of θ is given in Raiffa and Schlaifer (1961); here we shall confine ourselves to just defining the main types of priors.
The natural conjugate priors satisfy the closure property, implying that the prior and posterior distributions for the parameter belong to the same family. This is known as the 'closure under sampling' property [Wetherill (1961)]. Raiffa and Schlaifer (1961) developed such priors systematically; the family of such densities has been called by them a 'natural conjugate family'. For example, in the case of an exponential density, the gamma priors form such a family.
In case the decision maker does not have any prior knowledge about the parameter, a non-informative or quasi prior density may be used. The role of non-informative priors has been discussed by Bhattacharya (1967). Jeffreys (1961) proposed a general rule for obtaining the prior distribution:

g(θ) ∝ √I(θ)    ...(19)

for the case when there is a single unknown parameter θ. For a situation when θ is a vector valued parameter, the rule uses the determinant of the information matrix, i.e.

g(θ) ∝ |I(θ)|^{1/2}    ...(20)

where

I(θ) = −E[∂² log f / ∂θᵢ ∂θⱼ]    ...(21)
A difficulty arises when the prior information about the parameter is vague or
worse still, there is no prior information whatever. This leads to the consideration
of what are known as improper or quasi prior distributions. For a proper prior we have

g(θ) ≥ 0 and ∫ g(θ) dθ = 1

Jeffreys' prior may or may not be proper. Various other rules have also been suggested for the selection of a prior, but no neat solution to the problem appears to exist. The suitability of a prior distribution cannot be tested unless we make use of additional information on the parameter.
Obviously, it seems more logical to infer about the parameters of the prior
distribution with the help of the compound distribution, which also involves these
parameters.
Let the unknown parameter θ be estimated by some statistic θ̂, and let L(θ̂, θ) represent the loss incurred when the true value of the parameter is θ and we estimate it by θ̂. A loss function usually satisfies:
(i) L(θ̂, θ) ≥ 0;
(ii) L(θ̂, θ) = 0 for θ̂ = θ.
As the simplest choice we have L(θ̂, θ) = (θ̂ − θ)², known as the squared error loss function (SELF). Under SELF, the Bayes estimator is the posterior mean. The squared error loss function is a symmetric function of θ̂ and θ. The reason for the popularity of SELF is its mathematical tractability.
A symmetric loss function assumes that positive and negative errors are equally serious. However, in some estimation problems such an assumption may be inappropriate. Canfield (1970) points out that the use of symmetric loss functions may be unsuitable in reliability estimation, where an underestimation of the failure rate is usually more serious than its overestimation. This led statisticians to think about asymmetric loss functions, several of which have been proposed in the statistical literature. Ferguson (1967), Zellner and Geisel (1968), Aitchison and Dunsmore (1975) and Berger (1980) have considered the linear asymmetric loss function. Varian (1975)
introduced the following convex loss function, known as the Linex (linear exponential) loss function:

L(Δ) = b e^{aΔ} − cΔ − b; b > 0, c ≠ 0

where Δ = (θ̂ − θ). For the minimum of L(Δ) to occur at Δ = 0, we must have c = a·b; therefore,

L(Δ) = b[e^{aΔ} − aΔ − 1]

where a and b, the parameters of the loss function, may be regarded as shape and scale respectively. This loss function has been considered by Zellner (1986) and others, who studied Bayesian estimation under the Linex loss function for the exponential life time distribution. This loss function is suitable for situations where overestimation of θ is more costly than its underestimation. It has the following properties:
(i) For a = 1, the function is quite asymmetric about zero, with overestimation being more costly than underestimation.
(ii) For a < 0, L(Δ) rises almost exponentially when Δ < 0 (underestimation) and almost linearly when Δ > 0 (overestimation).
(iii) For small values of |a|, L(Δ) is almost symmetric and not far from a squared error loss function.
Indeed, on expanding e^{aΔ} ≈ 1 + aΔ + a²Δ²/2, we get

L(Δ) ≈ b a² Δ²/2

i.e., L(Δ) is proportional to a squared error loss function. Thus, for small values of |a|, optimal estimates are not far different from those obtained with a squared error loss function.
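Both the asymmetry of the Linex loss and its small-|a| behaviour can be checked directly; the sketch below uses the standard form L(Δ) = b[e^{aΔ} − aΔ − 1] with illustrative parameter values:

```python
import math

def linex(delta, a, b=1.0):
    # Varian's Linex loss: L(delta) = b [ exp(a*delta) - a*delta - 1 ]
    return b * (math.exp(a * delta) - a * delta - 1.0)

# for small |a| the Linex loss is close to (a^2/2) delta^2, a squared error loss
a = 0.01
for d in (-2.0, -0.5, 0.5, 2.0):
    assert abs(linex(d, a) - 0.5 * a * a * d * d) < 1e-5

# for a = 1, overestimation (delta > 0) is penalised far more heavily
assert linex(2.0, 1.0) > linex(-2.0, 1.0)
assert linex(0.0, 1.0) == 0.0   # zero loss at delta = 0
```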
An alternative asymmetric precautionary loss function has also been proposed, together with a general class of precautionary loss functions having the quadratic loss function as a special case. These loss functions approach infinity near the origin to prevent underestimation, thus giving conservative estimators, especially when low failure rates are being estimated. A simple asymmetric precautionary loss function is

L(θ̂, θ) = (θ̂ − θ)²/θ̂
Another useful asymmetric loss function is defined in terms of the ratio Δ = θ̂/θ. Calabria and Pulcini (1994) consider the loss function

L(Δ) = Δ^p − p log_e(Δ) − 1

whose minimum occurs at θ̂ = θ. For p > 0, a positive error (θ̂ > θ) causes more serious consequences than a negative error, and vice-versa. For small |p| values,

L(Δ) ≈ (p²/2) [log_e(θ̂) − log_e(θ)]²    ...(28)

i.e., the loss is approximately a squared error loss in log_e(θ̂) − log_e(θ). The loss function L(Δ), with Δ = θ̂/θ, has also been used in [Dey et al. (1987)] and [Dey and Liu].
Life time distributions play a fundamental role in reliability theory and the engineering sciences. The analysis of failure time data over the years has given birth to a number of parametric models, which were found suitable for describing the life phenomenon of various devices. The choice of distribution depends on past experience with the process, mathematical expediency and, to some extent, faith. However, in some cases the physical characteristics of the failure mechanism may be used in making a choice. A useful series of references for this purpose is Johnson and Kotz (1970), which extensively catalogs mathematical and statistical properties of distributions, with comments concerning their areas of application. Some frequently used lifetime models are as follows:
(i) Exponential Distribution - The exponential distribution plays as important a role in life testing experiments as the part played by the normal distribution in agricultural experiments. The exponential distribution was the first life time model for which statistical methods were extensively developed. Early on, Epstein and Sobel (1953, 1954, 1955) and Epstein (1954, 1960) gave numerous results and popularized the exponential as a life time distribution, especially in the area of industrial life testing. The desirability of the exponential distribution is due to its simplicity and its inherent association with a constant hazard rate. Its p.d.f. is

f(x, θ) = (1/θ) e^{−x/θ}; x, θ > 0    ...(30)

where θ, the scale parameter, is the average or mean life of the item. In life testing, 1/θ, the first part of the density function in (30), is referred to as the constant hazard rate. The reliability function for time t of items whose life times follow this distribution is

R(t) = e^{−t/θ}    ...(31)
In many life testing problems it has often been found useful to fit a two-parameter exponential distribution, with density

f(x, θ, μ) = (1/θ) exp[−(x − μ)/θ]; x > μ, θ > 0    ...(32)

where μ is the minimum (guarantee) life. For μ = 0, this reduces to the one parameter exponential density. Again, for this model the reliability function is

R(t) = 1,                  if t ≤ μ
     = exp[−(t − μ)/θ],    if t > μ    ...(33)
(ii) Gamma Distribution - The gamma distribution has the p.d.f.

f(x; θ, k) = [θ^k x^{k−1} / Γ(k)] e^{−θx}; x, θ, k > 0    ...(34)

where θ and k are the scale and the shape parameters respectively. For k = 1, the gamma distribution reduces to the exponential distribution with parameter θ. For integer values of k, the gamma p.d.f. is also known as the Erlangian p.d.f. Its reliability and hazard functions involve the incomplete gamma function, i.e.

R(t) = 1 − I(k, θt) and h(t) = f(t)/R(t), t > 0    ...(35)

where

I(k, t) = [1/Γ(k)] ∫₀ᵗ τ^{k−1} e^{−τ} dτ    ...(36)
It can be shown that h(t) is monotonic decreasing (increasing) for k < 1 (k > 1) and constant for k = 1. The shape parameter k thus determines the intensity of IFR or DFR behaviour.
(iii) Weibull Distribution - The Weibull distribution is perhaps the most widely used life time model for manufactured items. It has been used as a model for diverse types of items, such as vacuum tubes (Kao, 1959), ball bearings (Lieblein and Zelen, 1956) and electrical insulation, and also in biomedical applications (Pike, 1966; Peto et al., 1972). The p.d.f. of the Weibull distribution is given by

f(x; θ, k) = (k/θ) x^{k−1} e^{−x^k/θ}; x, k, θ > 0    ...(37)

where k and θ are the shape and scale parameters respectively. A random variable X having the p.d.f. in (37) is said to have the two parameter (k and θ) Weibull distribution. For this distribution, the reliability function for time t is given by

R(t) = e^{−t^k/θ}    ...(38)

and the hazard function is

h(t) = (k/θ) t^{k−1}    ...(39)
Obviously, h(t) is monotonically decreasing (increasing) for k < 1 (k > 1), which leads to a DFR (IFR) distribution; for k = 1 the Weibull reduces to the exponential.
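The monotonicity of the Weibull hazard in the shape parameter k can be checked with a short sketch, using the parameterisation of relations (37)-(39) (the parameter values are illustrative only):

```python
import math

def weibull_R(t, theta, k):
    # relation (38): R(t) = exp(-t**k / theta)
    return math.exp(-(t ** k) / theta)

def weibull_h(t, theta, k):
    # relation (39): h(t) = (k/theta) * t**(k-1)
    return (k / theta) * t ** (k - 1)

theta = 2.0  # illustrative scale
assert weibull_h(1.0, theta, 0.5) > weibull_h(4.0, theta, 0.5)   # k < 1: DFR
assert weibull_h(1.0, theta, 2.0) < weibull_h(4.0, theta, 2.0)   # k > 1: IFR
assert weibull_h(1.0, theta, 1.0) == weibull_h(4.0, theta, 1.0)  # k = 1: constant
print(round(weibull_R(1.0, theta, 2.0), 4))  # exp(-0.5) = 0.6065
```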
(iv) Geometric Distribution - Generally, the life time distributions of a system's components are assumed to be continuous. However, there exist systems whose components' life times are discrete random variables, e.g. when observations are made at periodic time points, giving rise to a discrete situation; a discrete model may then be more appropriate. Yakub and Khan (1981), Mishra (1982), Patel and Gajjan (1990), Mishra and Singh (1992), Patel (2003), Dillip (2004), Krishna and Jain (2004), and Anwar Hasan et al. (2007), etc. considered the geometric distribution as a life time model. Consider a sequence of independent trials, called Bernoulli trials, such that there are only two outcomes at each trial: E₁ (occurrence of an event) and E₂ (non-occurrence of the event). The description space of this experiment is S = {E₁, E₂}. Define a r.v. Xᵢ such that Xᵢ = 0 if E₁ occurs at the ith trial and Xᵢ = 1 if E₂ occurs. Also, let P(Xᵢ = 0) = θ and P(Xᵢ = 1) = (1 − θ) for all i. Define another random variable X to denote the number of independent trials up to the first non-occurrence. Then

P(X = x) = θ^x (1 − θ); x = 0, 1, 2, ...    ...(41)
The p.m.f. in (41) has been suggested as a life-time distribution, since it gives the probability that an item survives x trials and fails at the next; here θ is the probability of survival. For this distribution, the reliability function for time t is

R(t) = θ^t    ...(42)

and the hazard rate is

h(t) = f(t, θ)/R(t) = (1 − θ)    ...(43)
Since the exponential distribution, as a life time distribution, has some nice properties, and the geometric distribution is its discrete analogue, the geometric distribution also has a pride of place among life time distributions. It is the only discrete life time distribution having a constant hazard rate.
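The constant hazard of the geometric distribution, relations (42)-(43), can be verified directly (the value θ = 0.9 is illustrative):

```python
# Geometric lifetime, relations (42)-(43): R(t) = theta**t and the hazard
# f(t)/R(t) = (1 - theta) is constant -- the discrete analogue of the
# exponential's constant hazard rate.  theta = 0.9 is illustrative.
theta = 0.9

def pmf(x):
    return theta ** x * (1.0 - theta)

def R(t):
    return theta ** t

for t in range(10):
    assert abs(pmf(t) / R(t) - (1.0 - theta)) < 1e-12  # constant hazard

print(round(R(5), 4))  # survival probability at t = 5: 0.9**5 = 0.5905
```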
A compound distribution is obtained as E[F(x|θ₁, θ₂, ..., θₙ)], the expectation being taken with respect to the joint distribution of the θ's.
For example, if the life time x follows an exponential distribution with rate θ, and θ itself follows a gamma distribution with parameters α and β, the compound p.d.f. is

f(x; α, β) = ∫₀^∞ θ e^{−θx} [β^α/Γ(α)] θ^{α−1} e^{−βθ} dθ
           = α β^α / (x + β)^{α+1}; α, β, x > 0

This is the p.d.f. of the Pareto distribution, with

mean = β/(α − 1); α > 1

and

variance = α β² / [(α − 1)²(α − 2)]; α > 2
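The exponential-gamma compounding above can be verified numerically: integrating the exponential density against a gamma prior (with the symbols α, β as used here) reproduces the closed-form Pareto density. A sketch with illustrative parameter values:

```python
import math

def mixture_pdf(x, alpha, beta, n=200000, upper=50.0):
    # right-Riemann approximation of
    #   integral over theta of  theta e^{-theta x} * gamma(alpha, beta) prior
    dt = upper / n
    c = beta ** alpha / math.gamma(alpha)
    s = 0.0
    for i in range(1, n + 1):
        th = i * dt
        s += th * math.exp(-th * x) * c * th ** (alpha - 1.0) * math.exp(-beta * th)
    return s * dt

def pareto_pdf(x, alpha, beta):
    # the closed-form compound density: alpha beta^alpha / (x + beta)^(alpha+1)
    return alpha * beta ** alpha / (x + beta) ** (alpha + 1.0)

alpha, beta = 3.0, 2.0   # illustrative gamma-prior parameters
for x in (0.5, 1.0, 4.0):
    assert abs(mixture_pdf(x, alpha, beta) - pareto_pdf(x, alpha, beta)) < 1e-4
```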
As another example, suppose the geometric life time distribution (41) is compounded with a beta prior for the parameter θ, i.e.

f(x; u, v) = ∫₀¹ θ^x (1 − θ) [1/B(u, v)] θ^{u−1} (1 − θ)^{v−1} dθ
           = v u^{[x]} / (u + v)^{[1+x]}; x = 0, 1, 2, ...

where u^{[r]} = u(u + 1) ... (u + r − 1) denotes the ascending factorial.
1.10 ROBUSTNESS
Robust statistics provides an alternative approach to classical statistical methods. The motivation is to produce estimators that are not unduly affected by small departures from model assumptions. Classical methods rely heavily on assumptions which are often not met in practice. For example, it is often assumed that the data residuals are normally distributed, at least approximately, or that the central limit theorem can be relied on to produce normally distributed estimates. Unfortunately, when there are outliers in the data, classical methods often have very poor performance. Robust statistics seeks to
provide methods that emulate classical methods but which are not unduly affected by outliers or other small departures from model assumptions. A systematic treatment of robust statistics was introduced by Huber (1981), whose book brought this fundamental work to a wider audience. Other good books on robust statistics are Hampel et al. (1986) and Rousseeuw and Leroy (1987); the former is theoretical in parts, whereas the book by Rousseeuw and Leroy is very practical, and Maronna et al. (2006) falls somewhere in the middle ground. Robust methods based on the t-distribution with low degrees of freedom (i.e. high kurtosis; degrees of freedom between 4 and 6 have often been found to be useful in practice) provide one simple alternative. Simple examples of robust statistics are:
(i) The 'median' is a robust measure of central tendency while the 'mean' is not.
(ii) The 'median absolute deviation' and the inter-quartile range are robust measures of statistical dispersion, while the 'standard deviation' and 'range' are not.
Robust statistics, in a loose, non-technical sense, is concerned with the fact that the assumptions underlying a statistical analysis may fail to hold, and with the behaviour of standard methods when they do. The moral is clear: one should check carefully to see that the underlying assumptions are satisfied.
1.11 REVIEW OF LITERATURE
For the study and improvement of system reliability, a knowledge of reliability theory and mathematical statistics is necessary. At present, not only engineers and scientists but also government leaders are concerned with increasing the reliability of systems.

On the basis of the reliability measures studied, such as the reliability function, the nature of the hazard rate, the mean time to system failure, availability, etc., called reliability characteristics, the research area can be broadly classified into the following two categories:
(1) In studies like Dhillon and Singh (1980), Govil (1983), and Balagurusamy (1984), the authors developed stochastic models under various assumptions which best fit the engineering systems used in day to day practical life. They obtained the reliability characteristics and the net expected profit during a finite interval of time using well known techniques such as the regenerative point technique.
(2) On the other hand, studies like Epstein and Sobel (1952), Barlow and Proschan (1965, 1975), Mann et al. (1974), Kapur and Lamberson (1977), Elandt-Johnson and Johnson (1980), Kalbfleisch and Prentice (1980), Miller (1981), Cox and Oakes (1984), Lawless (1982), Martz and Waller (1982) and Sinha (1986) are those in which statistical inferential techniques are used to estimate various reliability characteristics of the system.
The reliability analysis of engineering and biological systems is gaining importance among research workers in the field. Various parametric models have been developed for life time data, and these are used for analysing the life phenomenon of various devices through reliability characteristics such as the reliability function, increasing or decreasing hazard rate, mean time to survival, etc. In other words, the recorded life time data are used to draw inferences on the reliability characteristics of the system; see, for example, Sinha (1986), Kapur and Lamberson (1977), Mann et al. (1974), Martz and Waller (1982), Harris and Soms (1983), Nandi and Aich (1974), and the studies of Basu and Ebrahimi.
Life testing experiments are costly and time consuming, and therefore prior information about the parameters may be combined with the experimental data for analysing the reliability characteristics of the system. Thomas Bayes (1763) introduced Bayesian inference in his famous research paper entitled "An essay towards solving a problem in the Doctrine of Chances". Further, for basic theory and foundations one can also refer to Jeffreys (1961) and Savage (1962). Lindley (1965) and Box and Tiao (1973) popularized this approach and gave it a uniquely important place in the field of statistical inference.
A considerable amount of text on Bayesian reliability analysis is available. A few such works, Savage (1962), Bhattacharya (1967), Martz and Waller (1982), Sinha (1986) and Gelman et al. (1995), presented the Bayesian analysis of system reliability using many prior distributions. Some priors, with their inherent statistical properties, are also given in the study by Raiffa and Schlaifer (1961). Studies like Sharma et al. (1993, 1994, 1995) are also efforts in the same direction. Apostolakis (1990) reviewed the literature on Bayesian theory. In a
practical situation it may happen that operational experience with the complete system is limited, while there is a need to predict the reliability of the complete system at the early stage of designing. In this regard, Kaplan et al. (1989) studied the prediction of the reliability of a complete system when operational experience with the complete system is limited; prior information on the components may then be used to express our degree of confidence about the complete system reliability, and Bayes' theorem updates the prior probability curves accordingly.
Masters and Lewis (1987) obtained confidence intervals for the steady state availability. However, this approach was not considered satisfactory, and a modified approach based on the MTTF and MTTR was suggested. When failure and repair information are recorded over a large number of operating cycles, inferences can be drawn on the parameters of the failure time and repair time distributions. Such analyses have been carried out by Krishna (1994, 1995a, b), Sharma and Bhutani (1994a, b) and Sharma et al. (2004).
Much of the literature on queuing theory presents the analysis of queues with findings such as the probability distribution of the number in the queue, from which the mean and variance of the queue length can be found. In queuing theory, the investigator must measure the existing system to make an objective assessment of its characteristics, and must determine how changes may be made to the system and what the effect of various kinds of changes in the system's characteristics would be. The steady state situation is the basis of analyzing various queue systems in respect of their characteristics. The traffic intensity (ρ), defined as the ratio of the arrival rate to the service rate, is an important parameter of the queue's probability distribution. Saaty (1961), Ackoff and Sasieni (1968) and Taha (1976) studied various queue characteristics which have been defined using the parameter ρ. D.G. Kendall (1953) introduced a useful
notation for multiple-server queuing models which describes three characteristics: the arrival distribution, the service time distribution and the number of parallel service channels. Later, A. Lee (1966) added the fourth and fifth characteristics to the notation, that is, the service discipline and the maximum number allowed in the system; a sixth characteristic, describing the calling source, was added subsequently. The complete notation is

(a / b / c) : (d / e / f)

where
a - inter-arrival time distribution
b - service time distribution
c - number of parallel servers
d - service discipline
e - maximum number allowed in the system
f - calling source.
The following conventional codes are usually used to replace the symbols a, b and d.
Symbols a and b:
M - Markovian (Poisson) arrivals, or exponential inter-arrival or service time distribution
D - deterministic (constant) inter-arrival or service time
Ek - Erlangian or gamma inter-arrival or service time distribution with parameter k
GI / G - general independent inter-arrival time / general service time distribution
Symbol d:
FCFS - first come, first served; LCFS - last come, first served; SIRO - service in random order; GD - general discipline.
The symbol c represents the number of parallel servers, while the symbols e and f represent a finite or infinite number in the system and in the calling source respectively.
Thus, for example, (M/M/c) : (FCFS/∞/∞) represents Poisson arrivals (exponential inter-arrival time), exponential service time, c parallel servers, "first come, first served" discipline, no limit on the number in the system and an infinite calling source.

In classical queuing analysis the parameters of the arrival and service time distributions are treated as constants; however, over a long period of time the assumption of a constant ρ seems to be restrictive. It has been recognized that the investigator has considerable a-priori knowledge about the behaviour of these parameters, so that we can have a strong base for collecting prior information showing variations in ρ.
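For reference, the standard steady-state characteristics of the (M/M/1) : (∞/FCFS) model are simple functions of the traffic intensity ρ = λ/μ (they exist only for ρ < 1); the rates below are illustrative:

```python
# Standard steady-state measures of an (M/M/1):(infinity/FCFS) queue in terms
# of the traffic intensity rho = lambda/mu (they exist only for rho < 1):
def mm1(lam, mu):
    rho = lam / mu
    assert rho < 1.0, "steady state requires traffic intensity below one"
    L = rho / (1.0 - rho)          # expected number in the system
    Lq = rho * rho / (1.0 - rho)   # expected queue length
    W = 1.0 / (mu - lam)           # expected time spent in the system
    Wq = rho / (mu - lam)          # expected waiting time before service
    return L, Lq, W, Wq

L, Lq, W, Wq = mm1(3.0, 4.0)       # illustrative rates: rho = 0.75
assert abs(L - 3.0 * W) < 1e-12    # Little's formula: L = lambda * W
print(L, Lq, W, Wq)
```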
Following this concept, Muddapur (1972) and Armero (1985) presented Bayesian analyses of some queue characteristics. The primary aim of these studies has been to update the prior knowledge about the parameters using experimental data. The studies in Ackoff and Sasieni (1968) and Taha (1976), on the other hand, have not considered random variations in the parameters of the queue characteristics.
The arrival and servicing patterns in the system are greatly influenced by random factors; therefore, the sample information may be used to draw valid inferences about the parameters. Usually the investigator also has a-priori knowledge about the variations in these parameters, and needs to combine this a-priori knowledge with the operational data. For such a study of a queue system, the objective can obviously be met with a Bayesian analysis. A study published in 1989 includes the conceptual framework and methodology for such analysis. Some more studies (Martz and Waller, 1982; Sharma and Bhutani, 1992, 1994; Krishna and Sharma, 1995; Sharma et al., 2004) include the classical and Bayesian analyses of various queue characteristics.
Reviewing all the above studies, the investigator has been able to highlight the problems taken up in the present thesis in the areas of Reliability and Queuing Theory.

The present thesis includes six chapters. The contents cover developments in reliability and queuing theory. The important statistical techniques and concepts, like Bayesian inference, the concept of prior distributions, and life time models, are presented in this first chapter in a concise form. This thesis also includes work on queue systems, and a brief summary of the queue system is included in this chapter. In the end, we have important matter on the review of literature and a 'thesis at a glance', providing a brief outline of the work.
For time consuming life testing experiments, it seems unrealistic to treat the parameters involved in the life time distribution as constant throughout. Thus, the parameters of the life time distribution are treated as random variables. Following this concept, chapter 2 of the thesis deals with the development of statistical methodology useful in the analysis of the robust character of various static system configurations.
Chapter 3 of the present thesis deals with the reliability analysis of non-series parallel complex systems with Rayleigh life times of components. In this chapter an easier alternative method for the construction of the structure function of different types of system configurations is suggested. Further, evaluation as well as estimation of the different reliability characteristics has been carried out.
Chapter 4 deals with the analysis of the robust character of a queue system with the beta distribution of the second kind. Here, it is assumed that the random variable X follows a binomial distribution with parameter θ, and that the prior belief about θ depends upon the information available on the units of the system. The resulting compound distribution comes out to be the Polya-Eggenberger distribution. It is noted that estimates tend to be more precise and consistent in the case of the predictive approach.
Chapter 5, in its first section, presents the Bayesian analysis of various queue characteristics in a power supply system model. The analysis depends upon the operational data on the queue system. The arrival and service time distributions for the system are taken to be exponential. Prior beliefs about the arrival and service rates of the system have been employed in the analysis. The posterior distribution of the traffic intensity has been obtained in the Bayesian framework, and for this purpose the Squared Error Loss Function (SELF) and Linex Loss Function (LLF) have been used in the analysis. The second section of this chapter deals with the development of methodology to study the effect of random variations in the parameters of the arrival and service time distributions.
The sixth chapter of the present thesis considers the (M/M/1) : (∞/FCFS) queue system model. Since, over a long period of time, the assumption of constant parameters seems unrealistic, the parameters involved in the arrival and service time distributions are treated as random variables. To study the characteristics of the (M/M/1) : (∞/FCFS) queue system, this chapter deals with the development of methodology for updating the basic arrival and service time distributions, and Bayesian methods have been used to study the robust character of various queue characteristics of the system.