CC Assignment

ASSIGNMENT

LITTLE'S THEOREM

Consider a queuing system where customers arrive at random times to obtain service. This is analogous to packets arriving in a queue and waiting to be transmitted. To determine the delay characteristics of the system, the parameters of interest are the following:
N = average number of customers in the system (either waiting for service or being served),
T = average delay per customer (includes time spent waiting in the queue and service time).
These quantities are usually estimated in terms of the known information such as:
λ = customer arrival rate (i.e., the typical number of customers entering the system per unit time) and μ = customer service rate (i.e., the typical number of customers the system serves per unit time).
In many cases we would require detailed statistical information about customer inter-arrival and service times. However, even if we ignore such information, we can derive an important result known as Little's theorem, which states the following:
N = λ * T, where
N = steady state time average of the number of customers in the system,
T = steady state time average of the delay per customer, and
λ = steady state average customer arrival rate.
Since the process is a stochastic process (a random function of time), N(t) (the number of customers in the system at time t) is a random variable. Hence, N(t) has associated ensemble averages and time averages. The time average is
N_t = (1/t) ∫_0^t N(τ) dτ.
In many systems of interest the time average N_t tends to a steady state value N as t increases, that is:
N = lim_{t→∞} N_t.
Similarly, if α(t) = the number of customers who arrived in the interval [0, t], then the time average customer arrival rate over the interval [0, t] is
λ_t = α(t)/t.
For example, if 100 people arrived in the first 20 seconds, then λ_t = 5 people/sec.
The steady state average arrival rate is defined as λ = lim_{t→∞} λ_t.
If T_i = time spent in the system by the i-th arriving customer, then the time average of the customer delay up to time t is defined as
T_t = ( Σ_{i=1}^{α(t)} T_i ) / α(t).
The steady state time average customer delay is then T = lim_{t→∞} T_t.
Little's theorem expresses the natural idea that crowded systems (large N) are associated with long customer delays (large T), and the converse.
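As a quick numerical check (not part of the original derivation), the following sketch simulates a single-server FIFO queue with exponential inter-arrival and service times; the parameter values are arbitrary. It estimates N as the time average of N(t), λ as α(t)/t, and T as the average time in system, and the three estimates come out satisfying N ≈ λT:

```python
import random

def simulate_mm1(n_customers=5000, lam=0.8, mu=1.0, seed=42):
    """Simulate a FIFO single-server queue and return (N, lam_hat, T)."""
    rng = random.Random(seed)
    arrivals, departures = [], []
    t_arr, last_dep = 0.0, 0.0
    for _ in range(n_customers):
        t_arr += rng.expovariate(lam)          # exponential inter-arrival gap
        start = max(t_arr, last_dep)           # wait for the server to free up
        last_dep = start + rng.expovariate(mu) # exponential service time
        arrivals.append(t_arr)
        departures.append(last_dep)
    horizon = departures[-1]                   # the system is empty here
    # Time average of N(t): integrate the piecewise-constant step function
    # built from +1 at each arrival and -1 at each departure.
    events = sorted([(t, +1) for t in arrivals] + [(t, -1) for t in departures])
    area, n_now, t_prev = 0.0, 0, 0.0
    for t, delta in events:
        area += n_now * (t - t_prev)
        n_now += delta
        t_prev = t
    N = area / horizon                         # time-average number in system
    lam_hat = n_customers / horizon            # alpha(t)/t
    T = sum(d - a for d, a in zip(departures, arrivals)) / n_customers
    return N, lam_hat, T
```

Because the simulation ends at an instant when the system is empty, the shaded-area argument of the proof below applies exactly, so the identity holds up to floating-point error rather than only approximately.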
Proof of Little's theorem:
The theorem has various proofs. It can be derived simply by using a graphical proof. We assume that the system is initially empty, N(0) = 0, and that customers depart in the order that they enter the system.
If α(t) = number of customers who arrive in the system in the time interval [0, t] and
β(t) = number of customers who have departed in the time interval [0, t], then
N(t) = α(t) − β(t) = number of customers in the system at time t.
The shaded area between the graphs of α(t) and β(t) can be expressed as
∫_0^t N(τ) dτ,
and if t is any time for which the system is empty [N(t) = 0], then the shaded area is also equal to
Σ_{i=1}^{α(t)} T_i,
where T_1 is the time spent in the system by the first customer, T_2 is the time spent by the second customer, ..., and T_i is the time spent by the i-th customer.
Figure 4: Proof of Little's Theorem. [The figure plots α(t) and β(t) against time t; the vertical axis is the number of customers, and the horizontal distances T_1, T_2, ..., T_5 between the two staircase curves are the times customers 1 through 5 spend in the system.]
The shaded area in Figure 4 can be bounded as follows:
T_1 + T_2 + T_3 + T_4 + ... + T_{β(t)} ≤ ∫_0^t N(τ) dτ ≤ T_1 + T_2 + T_3 + T_4 + ... + T_{α(t)}.
Dividing and multiplying the LHS by β(t) and the RHS by α(t), and dividing the entire inequality by t, we get
(β(t)/t) * (1/β(t)) * Σ_{i=1}^{β(t)} T_i ≤ (1/t) ∫_0^t N(τ) dτ ≤ (α(t)/t) * (1/α(t)) * Σ_{i=1}^{α(t)} T_i.
Assuming that the system becomes empty infinitely often at arbitrarily large times, we can see that for a stable queue the following must be true:
lim_{t→∞} (α(t)/t) = lim_{t→∞} (β(t)/t) = λ.
If instead lim_{t→∞} (β(t)/t) < lim_{t→∞} (α(t)/t), then we have a growing backlog and the probability model breaks down.
Since
lim_{t→∞} (1/β(t)) Σ_{i=1}^{β(t)} T_i = T = lim_{t→∞} (1/α(t)) Σ_{i=1}^{α(t)} T_i,
we have the following result: λ * T ≤ N ≤ λ * T, or equivalently N = λ * T.
The simplifying assumptions used above can be relaxed, provided that
lim_{t→∞} (α(t)/t) = lim_{t→∞} (β(t)/t) = λ and T = lim_{t→∞} T_t
both exist. In particular, it is not necessary that customers are served in the order they arrive, or that the system is initially empty.
Markov chain

Formally, a Markov chain is a discrete-time random process with the Markov property. Often, the term "Markov chain" is used to mean a Markov process which has a discrete (finite or countable) state-space. Usually a Markov chain is defined for a discrete set of times (i.e., a discrete-time Markov chain), although some authors use the same terminology where "time" can take continuous values; see also continuous-time Markov process. The use of the term in Markov chain Monte Carlo methodology covers cases where the process is in discrete time (discrete algorithm steps) with a continuous state space. The following concentrates on the discrete-time, discrete-state-space case.
A "discrete-time" random process means a system which is in a certain state at each "step", with the state changing randomly between steps. The steps are often thought of as time, but they can equally well refer to physical distance or any other discrete measurement; formally, the steps are just the integers or natural numbers, and the random process is a mapping of these to states. The Markov property states that the conditional probability distribution for the system at the next step (and in fact at all future steps), given its current state, depends only on the current state of the system, and not additionally on the state of the system at previous steps.
Since the system changes randomly, it is generally impossible to predict the exact state of the system in the future. However, the statistical properties of the system's future can be predicted. In many applications it is these statistical properties that are important.
The changes of state of the system are called transitions, and the probabilities associated with various state-changes are called transition probabilities. The set of all states and transition probabilities completely characterizes a Markov chain. By convention, we assume all possible states and transitions have been included in the definition of the process, so there is always a next state and the process goes on forever.
A famous Markov chain is the so-called "drunkard's walk", a random walk on the number line where, at each step, the position may change by +1 or −1 with equal probability. From any position there are two possible transitions, to the next or previous integer. The transition probabilities depend only on the current position, not on the way the position was reached. For example, the transition probabilities from 5 to 4 and from 5 to 6 are both 0.5, and all other transition probabilities from 5 are 0. These probabilities are independent of whether the system was previously in 4 or 6.
Another example is the dietary habits of a creature who eats only grapes, cheese or lettuce, and whose dietary habits conform to the following (artificial) rules: it eats exactly once a day; if it ate cheese yesterday, it will not today, and it will eat lettuce or grapes with equal probability; if it ate grapes yesterday, it will eat grapes today with probability 1/10, cheese with probability 4/10 and lettuce with probability 5/10; finally, if it ate lettuce yesterday, it won't eat lettuce again today but will eat grapes with probability 4/10 or cheese with probability 6/10. This creature's eating habits can be modeled with a Markov chain since its choice depends on what it ate yesterday, not additionally on what it ate 2 or 3 (or 4, etc.) days ago. One statistical property one could calculate is the expected percentage of the time the creature will eat grapes over a long period.
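That long-run percentage can be computed by writing the rules above as a transition matrix and iterating the distribution until it stops changing. The sketch below does this in plain Python (power iteration is one of several ways to find the stationary distribution); for these particular numbers the long-run fraction of grape days works out to 1/3:

```python
# States: 0 = grapes, 1 = cheese, 2 = lettuce.
# Row i gives the probabilities of today's meal given yesterday's meal i.
P = [
    [0.1, 0.4, 0.5],  # yesterday grapes
    [0.5, 0.0, 0.5],  # yesterday cheese
    [0.4, 0.6, 0.0],  # yesterday lettuce
]

def step(dist, P):
    """One step of the chain: row vector times transition matrix."""
    n = len(P)
    return [sum(dist[i] * P[i][j] for i in range(n)) for j in range(n)]

def stationary(P, iters=200):
    """Iterate the distribution until it converges (chain is aperiodic
    here because of the grapes -> grapes self-loop)."""
    dist = [1.0] + [0.0] * (len(P) - 1)   # start: ate grapes yesterday
    for _ in range(iters):
        dist = step(dist, P)
    return dist
```

The starting state does not matter; any initial distribution converges to the same limit, which is exactly the "no assumption on the starting distribution" property discussed later in these notes.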
A series of independent events (for example, a series of coin flips) does satisfy the formal definition of a Markov chain. However, the theory is usually applied only when the probability distribution of the next step depends non-trivially on the current state. Many other examples of Markov chains exist.
Formal definition
A Markov chain is a sequence of random variables X_1, X_2, X_3, ... with the Markov property, namely that, given the present state, the future and past states are independent. Formally,
Pr(X_{n+1} = x | X_1 = x_1, X_2 = x_2, ..., X_n = x_n) = Pr(X_{n+1} = x | X_n = x_n).
The possible values of X_i form a countable set S called the state space of the chain.
Markov chains are often described by a directed graph, where the edges are labeled by the probabilities of going from one state to the other states.
Variations
- Continuous-time Markov processes have a continuous index.
- Time-homogeneous Markov chains (or stationary Markov chains) are processes where
Pr(X_{n+1} = x | X_n = y) = Pr(X_n = x | X_{n−1} = y)
for all n. The probability of the transition is independent of n.
- A Markov chain of order m (or a Markov chain with memory m), where m is finite, is a process satisfying
Pr(X_n = x_n | X_{n−1} = x_{n−1}, ..., X_1 = x_1) = Pr(X_n = x_n | X_{n−1} = x_{n−1}, ..., X_{n−m} = x_{n−m}) for n > m.
In other words, the future state depends on the past m states. It is possible to construct a chain (Y_n) from (X_n) which has the 'classical' Markov property as follows:
Let Y_n = (X_n, X_{n−1}, ..., X_{n−m+1}), the ordered m-tuple of X values. Then Y_n is a Markov chain with state space S^m and has the classical Markov property.
- An additive Markov chain of order m, where m is finite, is one in which the conditional probability of the next state, for n > m, is a sum of contributions from each of the previous m states.
Example
A simple example is shown in the figure, using a directed graph to picture the state transitions. The states represent whether the economy is in a bull market, a bear market, or a recession, during a given week. According to the figure, a bull week is followed by another bull week 90% of the time, a bear market 7.5% of the time, and a recession the other 2.5%. From this figure it is possible to calculate, for example, the long-term fraction of time during which the economy is in a recession, or on average how long it will take to go from a recession to a bull market.
A thorough development and many examples can be found in the on-line monograph Meyn & Tweedie 2005. The appendix of Meyn 2007, also available on-line, contains an abridged Meyn & Tweedie.
A finite state machine can be used as a representation of a Markov chain. Assuming a sequence of independent and identically distributed input signals (for example, symbols from a binary alphabet chosen by coin tosses), if the machine is in state y at time n, then the probability that it moves to state x at time n + 1 depends only on the current state.
Markov chains
The probability of going from state i to state j in n time steps is
p_{ij}^{(n)} = Pr(X_n = j | X_0 = i),
and the single-step transition probability is
p_{ij} = Pr(X_1 = j | X_0 = i).
For a time-homogeneous Markov chain:
p_{ij}^{(n)} = Pr(X_{k+n} = j | X_k = i) and p_{ij} = Pr(X_{k+1} = j | X_k = i) for any k.
The n-step transition probabilities satisfy the Chapman-Kolmogorov equation: for any k such that 0 < k < n,
p_{ij}^{(n)} = Σ_{r∈S} p_{ir}^{(k)} p_{rj}^{(n−k)},
where S is the state space of the Markov chain.
The marginal distribution Pr(X_n = x) is the distribution over states at time n. The initial distribution is Pr(X_0 = x). The evolution of the process through one time step is described by
Pr(X_{n+1} = j) = Σ_{i∈S} p_{ij} Pr(X_n = i).
Note: the superscript (n) is an index and not an exponent.
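For a finite chain, the n-step probabilities are just entries of the n-th matrix power, so the Chapman-Kolmogorov equation can be verified mechanically. The sketch below uses the bull/bear/recession chain from the example above; note that only the bull row's numbers appear in the text, so the other two rows here are hypothetical values standing in for the ones the figure would supply:

```python
def matmul(A, B):
    """Product of two matrices given as lists of rows."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def matpow(P, n):
    """n-step transition matrix: P^0 = I, then repeated multiplication."""
    size = len(P)
    R = [[float(i == j) for j in range(size)] for i in range(size)]
    for _ in range(n):
        R = matmul(R, P)
    return R

# Rows/columns: bull, bear, recession. Bull row from the text; the
# bear and recession rows are made up for illustration.
P = [[0.90, 0.075, 0.025],
     [0.15, 0.80, 0.05],
     [0.25, 0.25, 0.50]]
```

Chapman-Kolmogorov with n = 5 and k = 2 then says matpow(P, 5) must equal matmul(matpow(P, 2), matpow(P, 3)), entry by entry.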
Reducibility
A state j is said to be accessible from a state i (written i → j) if a system started in state i has a non-zero probability of transitioning into state j at some point. Formally, state j is accessible from state i if there exists an integer n ≥ 0 such that
Pr(X_n = j | X_0 = i) > 0.
Allowing n to be zero means that every state is defined to be accessible from itself.
A state i is said to communicate with state j (written i ↔ j) if both i → j and j → i. A set of states C is a communicating class if every pair of states in C communicates with each other, and no state in C communicates with any state not in C. It can be shown that communication in this sense is an equivalence relation, and thus that communicating classes are the equivalence classes of this relation. A communicating class is closed if the probability of leaving the class is zero, namely, if i is in C but j is not, then j is not accessible from i.
Finally, a Markov chain is said to be irreducible if its state space is a single communicating class; in other words, if it is possible to get to any state from any state.
Periodicity
A state i has period k if any return to state i must occur in multiples of k time steps. Formally, the period of a state is defined as
k = gcd { n : Pr(X_n = i | X_0 = i) > 0 }
(where "gcd" is the greatest common divisor). Note that even though a state has period k, it may not be possible to reach the state in k steps. For example, suppose it is possible to return to the state in {6, 8, 10, 12, ...} time steps; then k would be 2, even though 2 does not appear in this list.
If k = 1, then the state is said to be aperiodic, i.e., returns to state i can occur at irregular times. Otherwise (k > 1), the state is said to be periodic with period k.
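The gcd definition, including the subtlety that k itself may not be an attainable return time, can be demonstrated directly. The sketch below builds a small chain (a made-up example, not from the text) whose returns to state 0 take two loops of lengths 6 and 8, so the attainable return times are {6, 8, 12, 14, ...} and the period is gcd(6, 8) = 2 even though a 2-step return is impossible:

```python
from math import gcd

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def period(P, state, max_n=60):
    """gcd of all n <= max_n at which a return to `state` is possible."""
    g, Pn = 0, P                      # Pn holds P^n as the loop runs
    for n in range(1, max_n + 1):
        if Pn[state][state] > 0:
            g = gcd(g, n)
        Pn = matmul(Pn, P)
    return g

# Two loops through state 0: one of length 6 (via states 1..5) and one
# of length 8 (via states 6..12), entered with equal probability.
S = 13
P = [[0.0] * S for _ in range(S)]
P[0][1] = P[0][6] = 0.5
for i in range(1, 5):
    P[i][i + 1] = 1.0
P[5][0] = 1.0
for i in range(6, 12):
    P[i][i + 1] = 1.0
P[12][0] = 1.0
```

Truncating at max_n only approximates the true gcd over all n, but here the gcd already settles at 2 once both loop lengths have been seen.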
It can be shown that every state in a communicating class has the same period.
It can also be shown that in a chain whose transition graph is bipartite, every state has an even period.
Recurrence
A state i is said to be transient if, given that we start in state i, there is a non-zero probability that we will never return to i. Formally, let the random variable T_i be the first return time to state i (the "hitting time"):
T_i = min { n ≥ 1 : X_n = i, given X_0 = i }.
Then, state i is transient if and only if:
Pr(T_i < ∞) < 1.
If a state i is not transient (it has finite hitting time with probability 1), then it is said to be recurrent or persistent. Although the hitting time is finite with probability 1, it need not have a finite expectation. Let M_i be the expected return time,
M_i = E[T_i].
Then, state i is positive recurrent if M_i is finite; otherwise, state i is null recurrent (the terms non-null persistent and null persistent are also used, respectively).
It can be shown that a state i is recurrent if and only if
Σ_{n=0}^{∞} p_{ii}^{(n)} = ∞.
A state i is called absorbing if it is impossible to leave this state. Therefore, the state i is absorbing if and only if
p_{ii} = 1 and p_{ij} = 0 for j ≠ i.
Ergodicity
A state i is said to be ergodic if it is aperiodic and positive recurrent. If all states in a Markov chain are ergodic, then the chain is said to be ergodic. In other words, an ergodic Markov chain is aperiodic, irreducible and positive recurrent.
It can be shown that a finite-state irreducible Markov chain is ergodic if it has an aperiodic state. A model has the ergodic property if there is a finite number N such that any state can be reached from any other state in exactly N steps. In the case of a fully connected transition matrix, where all transitions have a non-zero probability, this condition is fulfilled with N = 1. A model with just one outgoing transition per state cannot be ergodic.
Steady-state analysis and limiting distributions
If the Markov chain is a time-homogeneous Markov chain, so that the process is described by a single, time-independent matrix p_{ij}, then the vector π is called a stationary distribution (or invariant measure) if its entries π_j are non-negative, sum to 1, and satisfy
π_j = Σ_{i∈S} π_i p_{ij}.
An irreducible chain has a stationary distribution if and only if all of its states are positive recurrent. In that case, π is unique and is related to the expected return time:
π_j = 1 / M_j.
Further, if the chain is both irreducible and aperiodic, then for any i and j,
lim_{n→∞} p_{ij}^{(n)} = 1 / M_j.
Note that there is no assumption on the starting distribution; the chain converges to the stationary distribution regardless of where it begins. Such a π is called the equilibrium distribution of the chain.
If a chain has more than one closed communicating class, its stationary distributions will not be unique (consider any closed communicating class in the chain; each one will have its own unique stationary distribution. Any of these will extend to a stationary distribution for the overall chain, where the probability outside the class is set to zero). However, if a state j is aperiodic, then
lim_{n→∞} p_{jj}^{(n)} = 1 / M_j,
and for any other state i, letting f_{ij} be the probability that the chain ever visits state j if it starts at i,
lim_{n→∞} p_{ij}^{(n)} = f_{ij} / M_j.
If a state i is periodic with period k > 1, then the limit
lim_{n→∞} p_{ii}^{(n)}
does not exist, although the limit
lim_{n→∞} p_{ii}^{(kn+r)}
does exist for every integer r.
Steady-state analysis and the time-inhomogeneous Markov chain
A Markov chain need not necessarily be time-homogeneous to have an equilibrium distribution. If there is a probability distribution π over states such that
π_j = Σ_{i∈S} π_i Pr(X_{n+1} = j | X_n = i)
for every state j and every time n, then π is an equilibrium distribution of the Markov chain. Such a situation can occur in Markov chain Monte Carlo (MCMC) methods where a number of different transition matrices are used, because each is efficient for a particular kind of mixing, but each matrix respects a shared equilibrium distribution.
Finite state space
If the state space is finite, the transition probability distribution can be represented by a matrix, called the transition matrix, with the (i, j)-th element of P equal to
p_{ij} = Pr(X_{n+1} = j | X_n = i).
Since each row of P sums to one and all elements are non-negative, P is a right stochastic matrix.
Time-homogeneous Markov chain with a finite state space
If the Markov chain is time-homogeneous, then the transition matrix P is the same after each step, so the k-step transition probability can be computed as the k-th power of the transition matrix, P^k.
The stationary distribution π is a (row) vector, whose entries are non-negative and sum to 1, that satisfies the equation
π = π P.
In other words, the stationary distribution π is a normalized (meaning that the sum of its entries is 1) left eigenvector of the transition matrix associated with the eigenvalue 1.
Alternatively, π can be viewed as a fixed point of the linear (hence continuous) transformation on the unit simplex associated to the matrix P. As any continuous transformation of the unit simplex into itself has a fixed point, a stationary distribution always exists, but it is not guaranteed to be unique in general. However, if the Markov chain is irreducible and aperiodic, then there is a unique stationary distribution π. Additionally, in this case P^k converges to a rank-one matrix in which each row is the stationary distribution π, that is,
lim_{k→∞} P^k = 1 π,
where 1 is the column vector with all entries equal to 1. This is stated by the Perron-Frobenius theorem. If, by whatever means, lim_{k→∞} P^k is found, then the stationary distribution of the Markov chain in question can be easily determined for any starting distribution, as will be explained below.
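The left-eigenvector characterization translates directly into code. The sketch below uses NumPy's general eigendecomposition on P transposed (so that left eigenvectors of P appear as ordinary right eigenvectors); the matrix reuses the bull/bear/recession example, where only the bull row comes from the text and the other rows are hypothetical:

```python
import numpy as np

# Rows/columns: bull, bear, recession. Only the first row's numbers are
# stated in the text; the other two rows are made up for illustration.
P = np.array([[0.90, 0.075, 0.025],
              [0.15, 0.80,  0.05 ],
              [0.25, 0.25,  0.50 ]])

def stationary_left_eigenvector(P):
    """Left eigenvector of P for eigenvalue 1, normalized to sum to 1."""
    vals, vecs = np.linalg.eig(P.T)       # right eigenvectors of P^T
    k = np.argmin(np.abs(vals - 1.0))     # pick the eigenvalue closest to 1
    pi = np.real(vecs[:, k])              # discard tiny imaginary parts
    return pi / pi.sum()
```

Since this chain is irreducible and aperiodic, P^k should also converge to the rank-one matrix whose every row is π, which gives an independent check of the result.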
For some stochastic matrices P, the limit does not exist, as shown by the periodic example
P = [[0, 1], [1, 0]],
for which P^k alternates forever between P and the identity. Because there are a number of different special cases to consider, the process of finding this limit, if it exists, can be a lengthy task. However, there are many techniques that can assist in finding this limit. Let P be an n×n matrix, and define
Q = lim_{k→∞} P^k.
It is always true that
Q P = Q.
Subtracting Q from both sides and factoring then yields
Q (P − I_n) = 0_{n,n},
where I_n is the identity matrix of size n, and 0_{n,n} is the zero matrix of size n×n. Multiplying together stochastic matrices always yields another stochastic matrix, so Q must be a stochastic matrix. It is sometimes sufficient to use the matrix equation above and the fact that Q is a stochastic matrix to solve for Q.
Here is one method for doing so: first, define the function f(A) to return the matrix A with its right-most column replaced with all 1's. If [f(P − I_n)]^{-1} exists, then
Q = f(0_{n,n}) [f(P − I_n)]^{-1}.
One thing to notice is that if P has an element P_{i,i} on its main diagonal that is equal to 1 and the i-th row or column is otherwise filled with 0's, then that row or column will remain unchanged in all of the subsequent powers P^k. Hence, the i-th row or column of Q will have the 1 and the 0's in the same positions as in P.
Reversible Markov chain
A Markov chain is said to be reversible if there is a probability distribution π over states such that
π_i Pr(X_{n+1} = j | X_n = i) = π_j Pr(X_{n+1} = i | X_n = j)
for all times n and all states i and j. This condition is also known as the detailed balance condition. With a time-homogeneous Markov chain, Pr(X_{n+1} = j | X_n = i) does not change with time n and it can be written more simply as p_{ij}. In this case, the detailed balance equation can be written more compactly as
π_i p_{ij} = π_j p_{ji}.
Summing the original equation over i gives
Σ_i π_i p_{ij} = π_j Σ_i p_{ji} = π_j,
so, for reversible Markov chains, π is always a steady-state distribution of Pr(X_{n+1} = j | X_n = i) for every n.
If the Markov chain begins in the steady-state distribution, i.e., if Pr(X_0 = i) = π_i, then Pr(X_n = i) = π_i for all n, and the detailed balance equation can be written as
Pr(X_n = i, X_{n+1} = j) = Pr(X_n = j, X_{n+1} = i).
The left- and right-hand sides of this last equation are identical except for a reversing of the time indices n and n + 1.
Reversible Markov chains are common in Markov chain Monte Carlo (MCMC) approaches because the detailed balance equation for a desired distribution π necessarily implies that the Markov chain has been constructed so that π is a steady-state distribution. Even with time-inhomogeneous Markov chains, where multiple transition matrices are used, if each such transition matrix exhibits detailed balance with the desired π distribution, this necessarily implies that π is a steady-state distribution of the Markov chain.
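Detailed balance is easy to test numerically: form the "flow" matrix F with F[i][j] = π_i p_{ij} and check that it is symmetric. The sketch below does this for a made-up birth-death chain (birth-death chains always satisfy detailed balance with their stationary distribution, which is why this example works):

```python
import numpy as np

# Made-up birth-death chain and a candidate distribution for it.
P = np.array([[0.50, 0.50, 0.00],
              [0.25, 0.50, 0.25],
              [0.00, 0.50, 0.50]])
pi = np.array([0.25, 0.50, 0.25])

def detailed_balance(pi, P, tol=1e-12):
    """True if pi_i * p_ij == pi_j * p_ji for all i, j."""
    F = pi[:, None] * P        # F[i, j] = probability flow from i to j
    return np.allclose(F, F.T, atol=tol)
```

As the summation argument above predicts, once detailed balance holds, π is automatically stationary (π P = π); the test checks both facts.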
POISSON PROCESS

A Poisson process, named after the French mathematician Siméon-Denis Poisson (1781-1840), is a stochastic process in which events occur continuously and independently of one another (the word "event" used here is not an instance of the concept of event frequently used in probability theory). Examples that are well-modeled as Poisson processes include the radioactive decay of atoms, telephone calls arriving at a switchboard, page view requests to a website, and rainfall.
The Poisson process is a collection {N(t) : t ≥ 0} of random variables, where N(t) is the number of events that have occurred up to time t (starting from time 0). The number of events between time a and time b is given as N(b) − N(a) and has a Poisson distribution. Each realization of the process {N(t)} is a non-negative integer-valued step function that is non-decreasing, but for intuitive purposes it is usually easier to think of it as a point pattern on [0, ∞) (the points in time where the step function jumps, i.e., the points in time where an event occurs).
The Poisson process is a continuous-time process; the Bernoulli process can be thought of as its discrete-time counterpart (although strictly, one would need to sum the events in a Bernoulli process to also have a counting process). A Poisson process is a pure-birth process, the simplest example of a birth-death process. By the aforementioned interpretation as a random point pattern on [0, ∞) it is also a point process on the real half-line.
The basic form of Poisson process, often referred to simply as "the Poisson process", is a continuous-time counting process {N(t), t ≥ 0} that possesses the following properties:
- N(0) = 0
- Independent increments (the numbers of occurrences counted in disjoint intervals are independent of each other)
- Stationary increments (the probability distribution of the number of occurrences counted in any time interval only depends on the length of the interval)
- No counted occurrences are simultaneous.
Consequences of this definition include:
- The probability distribution of N(t) is a Poisson distribution.
- The probability distribution of the waiting time until the next occurrence is an exponential distribution.
- The occurrences are distributed uniformly on any interval of time. (Note that N(t), the total number of occurrences, has a Poisson distribution over (0, t], whereas the location of an individual occurrence on (0, t] is uniform.)
Other types of Poisson process are described below.
Types
Homogeneous
The homogeneous Poisson process is one of the most well-known Lévy processes. This process is characterized by a rate parameter λ, also known as intensity, such that the number of events in a time interval (t, t+τ] follows a Poisson distribution with associated parameter λτ. This relation is given as
Pr[N(t+τ) − N(t) = k] = e^{−λτ} (λτ)^k / k!  for k = 0, 1, 2, ...,
where N(t+τ) − N(t) is the number of events in the time interval (t, t+τ].
Just as a Poisson random variable is characterized by its scalar parameter λ, a homogeneous Poisson process is characterized by its rate parameter λ, which is the expected number of "events" or "arrivals" that occur per unit time. (A plot of N(t) shows a sample path of the process, not to be confused with a density or distribution function.)
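Since the waiting time to the next occurrence is exponential, a homogeneous Poisson process can be simulated by summing exponential gaps. The sketch below (parameter values are arbitrary) samples N(t) many times and checks the defining property that N(t) is Poisson(λt), whose mean and variance are both λt:

```python
import random

def poisson_counts(lam, t, n_runs, seed=0):
    """Sample N(t) for a rate-lam Poisson process by summing
    exponential inter-event gaps until the clock passes t."""
    rng = random.Random(seed)
    counts = []
    for _ in range(n_runs):
        clock, k = 0.0, 0
        while True:
            clock += rng.expovariate(lam)   # exponential gap, mean 1/lam
            if clock > t:
                break
            k += 1
        counts.append(k)
    return counts
```

With lam = 2 and t = 10, both the sample mean and sample variance of N(t) should be close to λt = 20; matching mean and variance is a quick sanity check that the counts really are Poisson rather than, say, binomial.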
Non-homogeneous
Main article: Non-homogeneous Poisson process
In general, the rate parameter may change over time; such a process is called a non-homogeneous Poisson process or inhomogeneous Poisson process. In this case, the generalized rate function is given as λ(t). Now the expected number of events between time a and time b is
λ_{a,b} = ∫_a^b λ(t) dt.
Thus, the number of arrivals in the time interval (a, b], given as N(b) − N(a), follows a Poisson distribution with associated parameter λ_{a,b}.
A homogeneous Poisson process may be viewed as the special case λ(t) = λ, a constant rate.
Spatial
A further variation on the Poisson process, called the spatial Poisson process, introduces a spatial dependence on the rate function, which is given as λ(x, t) where x ∈ V for some vector space V (e.g., R^2 or R^3). For any set S ⊆ V (e.g., a spatial region) with finite measure μ(S), the number of events occurring inside this region can be modelled as a Poisson process with associated rate function
λ_S(t) = ∫_S λ(x, t) dx.
In the special case that this generalized rate function is a separable function of time and space, we have
λ(x, t) = f(x) λ(t)
for some function f(x). Without loss of generality, let
∫_V f(x) dx = 1.
(If this is not the case, λ(t) can be scaled appropriately.) Now f(x) represents the spatial probability density function of these random events in the following sense. The act of sampling this spatial Poisson process is equivalent to sampling a Poisson process with rate function λ(t), and associating with each event a random vector x sampled from the probability density function f(x). A similar result can be shown for the general (non-separable) case.
QUEUEING MODEL

In queueing theory, a queueing model is used to approximate a real queueing situation or system, so the queueing behaviour can be analysed mathematically. Queueing models allow a number of useful steady state performance measures to be determined, including:
- the average number in the queue, or the system,
- the average time spent in the queue, or the system,
- the statistical distribution of those numbers or times,
- the probability the queue is full, or empty, and
- the probability of finding the system in a particular state.
These performance measures are important, as issues or problems caused by queueing situations are often related to customer dissatisfaction with service or may be the root cause of economic losses in a business. Analysis of the relevant queueing models allows the cause of queueing issues to be identified and the impact of proposed changes to be assessed.
Queuing models can be represented using Kendall's notation:
A/B/S/K/N/D
where:
- A is the interarrival time distribution
- B is the service time distribution
- S is the number of servers
- K is the system capacity
- N is the calling population
- D is the service discipline assumed
Many times the last members are omitted, so the notation becomes A/B/S and it is assumed that K = ∞, N = ∞ and D = FIFO.
Some standard notation for distributions (A or B) are:
- M for a Markovian (exponential) distribution
- E for an Erlang distribution with phases
- D for a degenerate (or deterministic) distribution (constant)
- G for a general distribution (arbitrary)
- PH for a phase-type distribution
Models
Construction and analysis
Queueing models are generally constructed to represent the steady state of a queueing system, that is, the typical, long-run or average state of the system. As a consequence, these are stochastic models that represent the probability that a queueing system will be found in a particular configuration or state.
A general procedure for constructing and analysing such queueing models is:
1. Identify the parameters of the system, such as the arrival rate, service time, and queue capacity, and perhaps draw a diagram of the system.
2. Identify the system states. (A state will generally represent the integer number of customers, people, jobs, calls, messages, etc. in the system and may or may not be limited.)
3. Draw a state transition diagram that represents the possible system states and identify the rates to enter and leave each state. This diagram is a representation of a Markov chain.
4. Because the state transition diagram represents the steady state situation, there is a balanced flow between states, so the probabilities of being in adjacent states can be related mathematically in terms of the arrival and service rates and state probabilities.
5. Express all the state probabilities in terms of the empty-state probability, using the inter-state transition relationships.
6. Determine the empty-state probability by using the fact that all state probabilities always sum to 1.
Whereas specific problems that have small finite state models can often be analysed numerically, analysis of more general models, using calculus, yields useful formulae that can be applied to whole classes of problems.
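For birth-death chains (the kind of state diagram step 3 produces for most single-queue models), steps 4-6 reduce to a few lines of code. The sketch below takes arbitrary state-dependent arrival and service rates as functions (the specific rates in the test are made-up M/M/1/K values, not from the text), expresses every p(n) in terms of p(0) via the balanced-flow relation λ_n p(n) = μ_{n+1} p(n+1), then normalizes:

```python
def birth_death_steady_state(arrival, service, n_states):
    """Steps 4-6 of the procedure for a finite birth-death chain.
    arrival(n) = rate out of state n upward; service(n) = rate out of
    state n downward. Returns the normalized state probabilities."""
    p = [1.0]                                  # step 5: everything via p(0)
    for n in range(1, n_states):
        p.append(p[-1] * arrival(n - 1) / service(n))
    total = sum(p)                             # step 6: probabilities sum to 1
    return [x / total for x in p]
```

For constant rates λ and μ this yields the familiar geometric distribution p(n) = p(0) (λ/μ)^n truncated at the queue capacity.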
Single-server queue
Single-server queues are perhaps the most commonly encountered queueing situation in real life. One encounters a queue with a single server in many situations, including business (e.g., a sales clerk), industry (e.g., a production line), and transport. Consequently, being able to model and analyse a single-server queue's behaviour is a particularly useful thing to do.
M/M/1 Queuing system (a reminder)
For the M/M/1 queuing system presented in the previous lecture we were able, under
reasonable assumptions on the arrival and service statistics, to fully characterize the statistical
properties of the queue. Although a continuous-time model had been assumed (for the arrival
and service of information), one can still analyze the queuing scenario in the framework of
discrete-time Markov chains. Some of the results obtained in the previous lecture are
summarized in the following equations:
    ρ = λ/μ < 1           - utilization factor
    P(n) = (1 - ρ)ρ^n     - pdf of the number of packets in the system
    N = E{n} = ρ/(1 - ρ)  - expected value of the number of packets
    T = 1/(μ - λ)         - expected value of the time a packet stays in the system
where μ is the expected service rate (and can be interpreted as the system capacity) and λ is
the expected arrival rate. If one were to increase the system capacity and arrival rate by a
factor of K, so that μ' = Kμ and λ' = Kλ, the system utilization factor would not change but the
average delay would decrease by a factor of K, since each packet would be transmitted K
times faster.
M/M/m Queuing system
The M/M/m queue differs from the M/M/1 queue in that it incorporates m servers in parallel
that can process information. Each server has a service rate of μ and the total arrival rate is λ.
For the analysis we shall assume (as was assumed for the M/M/1 queue) that only one event
occurs in a time interval of δ. The Markov chain describing the M/M/m queue process is
given in the following figure:
[Figure: Markov chain of the M/M/m queue. States 0, 1, 2, ..., m-1, m, m+1, ...; every
transition to the right occurs at rate λ; the transitions to the left occur at rates
μ, 2μ, 3μ, ..., (m-1)μ, mμ, mμ, ...]
When there are j ≤ m users in the system, at each moment a single user can exit the
system with probability jμδ. This is because any one of the j occupied servers can finish
processing its stream with probability μδ, and all j servers are independent of one another.
When there are j > m users in the system, at each moment a single user can exit the system
with probability mμδ.
Assuming a steady state scenario, the balance equations are given by:

    λ p(n-1) = nμ p(n),   n ≤ m
    λ p(n-1) = mμ p(n),   n > m
The above equations simply state that during steady state the total number of transitions from
state n-1 to state n must differ from the total number of transitions from n to n-1 by at most 1;
thus, asymptotically, the probability that the system is in state n-1 and makes a transition to
state n is equal to the probability that the system is in state n and makes a transition to state
n-1.
The solution to the above difference equations is given by:

    p(n) = p(0) (mρ)^n / n!,    n ≤ m
    p(n) = p(0) m^m ρ^n / m!,   n > m
where:

    ρ = λ/(mμ) < 1   (stability condition)

    p(0) = [ Σ_{n=0}^{m-1} (mρ)^n / n!  +  (mρ)^m / (m!(1-ρ)) ]^(-1)
In order to calculate the average number of users and the average delay in the queue we shall
compute the probability that an arrival will find all the servers busy and will need to wait in
the queue. We shall denote this probability by P_Q. In order for all servers to be busy, the
number of users in the system must be at least m, and so P_Q is given by:
    P_Q = Σ_{n=m}^{∞} p(n) = p(0) (mρ)^m / (m!(1-ρ))
The above equation is known as the Erlang C formula, and is often used in the analysis of
telephone networks. Now the average number of users waiting in the queue is given by:

    N_Q = Σ_{n=m}^{∞} (n-m) p(n) = P_Q ρ/(1-ρ)
It is worth noting that N_Q/P_Q = ρ/(1-ρ), which represents the expected number of users
found in the queue by an arriving user, given that he is forced to wait in line. This quantity is
independent of the number of servers m for a given ρ. This suggests that as long as there are
users waiting in line, the M/M/m queue behaves identically to the M/M/1 queue.
By using Little's theorem one can derive the average amount of time a user has to wait in the
queue:

    W = N_Q/λ = ρ P_Q / (λ(1-ρ))
And so the total average waiting time is given by W plus the service time, thus:

    T = 1/μ + W = 1/μ + ρ P_Q / (λ(1-ρ)) = 1/μ + P_Q / (mμ - λ)
M/M/1 (mμ) vs M/M/m (μ)
Based on the derivations made above, we can now compare an M/M/1 system with service rate
mμ against an M/M/m system with service rate μ per server. We now know that the average
waiting and service time for the M/M/1 (mμ) queue is given by:

    T' = 1/(mμ) + P'_Q / (mμ - λ)

The average waiting and service time for the M/M/m (μ) queue is given by:

    T = 1/μ + P_Q / (mμ - λ)

where P'_Q and P_Q denote the queuing probability in each case.

When ρ << 1, corresponding to lightly loaded systems, P'_Q ≈ 0 and P_Q ≈ 0, and so
T/T' ≈ m. When ρ ≈ 1, corresponding to heavily loaded systems, T/T' ≈ 1.

This means that when the queue is lightly loaded the average waiting and service time of the
M/M/1 system is smaller by a factor of m, while when the systems are heavily loaded both
behave identically.
M/M/∞
When the number of servers goes to infinity, only the service time plays a role in the
analysis, since there is always an available server. The balance equation is given by:

    λ p(n-1) = nμ p(n)
The solution is then given by:

    p(n) = p(0) (λ/μ)^n / n!

    p(0) = [ Σ_{n=0}^{∞} (λ/μ)^n / n! ]^(-1) = e^(-λ/μ)

    p(n) = (λ/μ)^n e^(-λ/μ) / n!

Clearly p(n) ~ Poisson(λ/μ), and so N = E{n} = λ/μ and, by Little's theorem, T = 1/μ.
M/M/m/m
In this queuing system the number of users is limited to m; when the system is full, all new
users are denied service. The balance equations for the M/M/m/m queue system are given by:

    λ p(n-1) = nμ p(n),   1 ≤ n ≤ m
    p(n) = 0,             n > m
The solution is then given by:

    p(n) = p(0) (λ/μ)^n / n!

    p(0) = [ Σ_{n=0}^{m} (λ/μ)^n / n! ]^(-1)
An interesting quantity is the probability that a user will be denied service, which is given by:

    P{service denied} = p(m) = ( (λ/μ)^m / m! ) / ( Σ_{n=0}^{m} (λ/μ)^n / n! )

The above equation is known as the Erlang B formula.
M/G/1 & P-K formula
The M/G/1 queuing system assumes that users arrive in a memory-less fashion with a rate of
λ. The service times are i.i.d. random variables (X1, X2, ...) with an arbitrary distribution,
unlike the previous queuing systems, which assumed an exponential distribution. It turns out
that even for this simple queuing system it is hard to obtain closed-form expressions for the
exact queue behavior. However, expressions for the average system behavior can be derived.
Assume that

    E{X} = 1/μ   - average service time
    E{X²}        - second moment of the service time
are well defined. The average waiting time in such a scenario is given by the Pollaczek-
Khinchin formula:

    W = λ E{X²} / (2(1-ρ))
And thus the total waiting and service time is given by:

    T = 1/μ + λ E{X²} / (2(1-ρ))
The above formula holds for an arbitrary service-time distribution (only the arrival process
need be memory-less). To gain some insight, let us recall that for the M/M/1 queuing system:

    W_{M/M/1} = ρ / (μ(1-ρ))

Assuming an exponentially distributed service process with parameter μ results in

    E{X²} = 2/μ²
In such a scenario the P-K formula coincides with the well-known M/M/1 waiting time. In
case the service time is deterministic and equals 1/μ, then E{X²} = 1/μ² and the waiting time
given by the P-K formula is:

    W_{M/D/1} = ρ / (2μ(1-ρ))

which is half the waiting time of the exponentially distributed scenario. Since the M/D/1
case has the minimum possible value of E{X²} for a given μ, it follows that the values of
W, T, N_Q for an M/D/1 queue are lower bounds to the corresponding M/G/1 queue with the
same λ and μ.
PROTOCOL STACK DESIGN
A protocol stack (sometimes communications stack) is a particular software implementation
of a computer networking protocol suite. The terms are often used interchangeably. Strictly
speaking, the suite is the definition of the protocols, and the stack is the software
implementation of them.
Individual protocols within a suite are often designed with a single purpose in mind. This
modularization makes design and evaluation easier. Because each protocol module usually
communicates with two others, they are commonly imagined as layers in a stack of protocols.
The lowest protocol always deals with "low-level", physical interaction of the hardware.
Every higher layer adds more features. User applications usually deal only with the topmost
layers (see also OSI model).
In practical implementation, protocol stacks are often divided into three major sections:
media, transport, and applications. A particular operating system or platform will often have
two well-defined software interfaces: one between the media and transport layers, and one
between the transport layers and applications.
The media-to-transport interface defines how transport protocol software makes use of
particular media and hardware types ("card drivers"). For example, this interface level would
define how TCP/IP transport software would talk to Ethernet hardware. Examples of these
interfaces include ODI and NDIS in the Microsoft Windows and DOS environment.
The application-to-transport interface defines how application programs make use of the
transport layers. For example, this interface level would define how a web browser program
would talk to TCP/IP transport software. Examples of these interfaces include Berkeley
sockets and System V STREAMS in the Unix world, and Winsock in the Microsoft world.
General protocol suite description

    T ~ ~ ~ T
    [A]     [B] _____ [C]
Imagine three computers: A, B, and C. A and B both have radio equipment, and can
communicate via the airwaves using a suitable network protocol (such as IEEE 802.11). B
and C are connected via a cable, using it to exchange data (again, with the help of a protocol,
for example Ethernet). However, neither of these two protocols will be able to transport
information from A to C, because these computers are conceptually on different networks.
One, therefore, needs an inter-network protocol to "connect" them.
One could combine the two protocols to form a powerful third, mastering both cable and
wireless transmission, but a different super-protocol would be needed for each possible
combination of protocols. It is easier to leave the base protocols alone, and design a protocol
that can work on top of any of them (the Internet Protocol is an example). This will make two
stacks of two protocols each. The inter-network protocol will communicate with each of the
base protocols in their simpler language; the base protocols will not talk directly to each other.
A request on computer A to send a chunk of data to C is taken by the upper protocol, which
(through whatever means) knows that C is reachable through B. It, therefore, instructs the
wireless protocol to transmit the data packet to B. On this computer, the lower layer handlers
will pass the packet up to the inter-network protocol, which, on recognizing that B is not the
final destination, will again invoke lower-level functions. This time, the cable protocol is
used to send the data to C. There, the received packet is again passed to the upper protocol,
which (with C being the destination) will pass it on to a higher protocol or application on C.
Often an even higher-level protocol will sit on top, and incur further processing.
An example protocol stack and the corresponding layers:

    Protocol       Layer
    --------       -----
    HTTP           Application
    TCP            Transport
    IP             Internet
    Ethernet       Link
    IEEE 802.3u    Physical
SOCKET PROGRAMMING
Most operating systems provide precompiled programs that communicate across a network.
Common examples in the TCP/IP world are web clients (browsers) and web servers, and the
FTP and TELNET clients and servers. Sometimes, when we use these utilities of the
Internet, we don't think about all the processes involved. To better understand these aspects
we, in our research group (GTI, Grupo de Tecnologia em Informática) at Goiás Catholic
University (Universidade Católica de Goiás), decided to write our own network program, a
mini-chat, using the basic structure of sockets: an application program interface, or API, the
mechanism that makes all this communication possible over the Net.
We examine the functions for communication through sockets. A socket is an endpoint used
by a process for bi-directional communication with a socket associated with another process.
Sockets, introduced in Berkeley Unix, are a basic mechanism for IPC on a computer system,
or on different computer systems connected by local or wide area networks (resource 2). To
understand some of the structs in this subject, a deeper knowledge of the operating system
and its networking protocols is necessary. This material can be used either by beginner
programmers or as a reference by experienced programmers.
The Socket Function
Most network applications can be divided into two pieces: a client and a server.
Creating a socket
#include <sys/types.h>
#include <sys/socket.h>
When you create a socket there are three main parameters that you have to specify:
- the domain
- the type
- the protocol
int socket(int domain, int type, int protocol);
The domain parameter specifies a communications domain within which communication will
take place; in our example the domain parameter was AF_INET, which specifies the ARPA
Internet protocols. The type parameter specifies the semantics of communication; in our mini-
chat we used the stream socket type (SOCK_STREAM), because it offers a bi-directional,
reliable, two-way, connection-based byte stream (resource 2). Finally, the protocol parameter:
since we used a stream socket type, we need a connection-oriented protocol, so we pass the
value 0 (the entry for ip in /etc/protocols), which tells the system to use the default protocol
for this socket type, namely TCP. So our function call now is:
s = socket(AF_INET, SOCK_STREAM, 0)
where 's' is the file descriptor returned by the socket function.
Since our mini-chat is divided in two parts, we will divide the explanation into the server, the
client, and the parts common to both, showing the basic differences between them, as we will
see next.
The Mini-chat Server structure
Binding a socket to a port and waiting for connections
Like all services in a TCP/IP-based network, sockets are always associated with a port, as
Telnet is associated with port 23, FTP with port 21, and so on. In our server we have to do
the same thing: bind some port and be prepared to listen for connections (that is the basic
difference between client and server), Listing 2. Bind is used to specify for a socket the
protocol port number where it will be waiting for messages.
So there is a question: which port could we bind to our new service? Since the system
pre-defines many port numbers (see /etc/services), we choose the unused port number 15000.
The signature of bind is:
int bind(int s, struct sockaddr *addr, int addrlen)
The struct necessary to make the socket work is struct sockaddr_in address; and then we
have the following lines to give the system the information about the socket.
The type of socket:
address.sin_family = AF_INET /* use an Internet domain */
The IP used:
address.sin_addr.s_addr = INADDR_ANY /* accept on any IP address of the host */
The port used:
address.sin_port = htons(15000); /* use a specific port number */
And finally bind our port to the socket:
bind(create_socket, (struct sockaddr *)&address, sizeof(address));
Now another important phase: preparing the socket to accept messages from clients. The
listen function is used on the server in the case of connection-oriented communication; it
also sets the maximum number of pending connections (resource 3).
listen(create_socket, MAXNUMBER)
where MAXNUMBER in our case is 3. And to finish we have to tell the server to accept a
connection, using the accept() function. Accept is used with connection-based sockets such
as streams.
accept(create_socket, (struct sockaddr *)&address, &addrlen);
As we can see in Listing 2, the parameters are the socket descriptor of the master socket
(create_socket), followed by a sockaddr_in structure and the size of the structure (resource 3).
The Mini-chat Client structure
Maybe the biggest difference is that the client needs a connect() function. The connect
operation is used on the client side to identify and, possibly, start the connection to the
server. The connect syntax is:
connect(create_socket, (struct sockaddr *)&address, sizeof(address));
The common structure
A common structure between the client and the server is the use of struct hostent, as seen in
Listings 1 and 2. The use of the send and recv functions is another piece of common code.
The send() function is used to send the buffer to the server:
send(new_socket, buffer, bufsize, 0);
and the recv() function is used to receive the buffer; note that it is used both in the server
and the client:
recv(new_socket, buffer, bufsize, 0);
Since the software of the TCP/IP protocol is inside the operating system, the exact interface
between an application and the TCP/IP protocols depends on the details of the operating
system (resource 4). In our case we examine the UNIX BSD socket interface, because Linux
follows it. The mini-chat developed here is nothing more than a demonstration model of a
client/server application using sockets in Linux, and should be used as an introduction to
how easy it is to develop applications using sockets. After understanding this you can easily
start to think about IPC (Interprocess Communication), fork, threads (resource 5) and much
more.
The basic steps to make it work are:
1. Run the server
2. Run the client with the address of the server
Amazing, don't you think?
This example was the start of our server program in our last project, a network management
program. Here are the source listings:
client.c, server.c