Cristian's Algorithm

Cristian's Algorithm is a clock synchronization algorithm used by client processes to synchronize their time with a time server. The algorithm works well on low-latency networks, where the Round Trip Time is short relative to the desired accuracy; redundancy-prone distributed systems, on the other hand, do not go hand in hand with this algorithm. Here, Round Trip Time refers to the duration between the start of a request and the end of the corresponding response.
Algorithm:
1) The process on the client machine sends a request for the clock time (the time at the server) to the Clock Server at time T0.

2) The Clock Server listens to the request made by the client process and returns a response carrying its clock server time, TServer.

3) The client process receives the response from the Clock Server at time T1 and calculates the synchronized client clock time using the formula given below:

    TClient = TServer + (T1 – T0)/2

where TClient refers to the synchronized clock time,

TServer refers to the clock time returned by the server,

T0 refers to the time at which the request was sent by the client process, and

T1 refers to the time at which the response was received by the client process.
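The three steps above can be sketched in Python. This is a minimal simulation, not a real network client: `server_clock` stands in for a Clock Server whose clock runs ahead of the client's, and `time.sleep` stands in for network latency; all names here are illustrative.

```python
import time

def server_clock(offset=5.0):
    """Simulated Clock Server: its clock runs `offset` seconds ahead of ours."""
    return time.time() + offset

def cristian_sync(request_delay=0.01, response_delay=0.01):
    """One round of Cristian's algorithm against the simulated server."""
    t0 = time.time()                 # 1) time at which the request is sent
    time.sleep(request_delay)        #    request travels to the server
    t_server = server_clock()        # 2) server reads and returns its clock
    time.sleep(response_delay)       #    response travels back
    t1 = time.time()                 # 3) time at which the response arrives

    # TClient = TServer + (T1 - T0) / 2
    return t_server + (t1 - t0) / 2

synced = cristian_sync()
print(abs(synced - server_clock()))  # residual error: a few milliseconds
```

Because the simulated request and response delays are equal, the `(T1 – T0)/2` correction lands almost exactly on the server's clock; with asymmetric delays the residual error grows, which is what the error bound below quantifies.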

Working/Reliability of the above formula:

T1 – T0 is the combined time taken by the network and the server to transfer the request to the server, process it, and return the response to the client process, assuming that the network latency of the request and that of the response are approximately equal.

The time at the client side differs from the actual time by at most (T1 – T0)/2 seconds. From this statement we can conclude that the synchronization error is at most (T1 – T0)/2 seconds.
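With illustrative numbers (not taken from the article), the bound works out as follows:

```python
t0 = 10.000   # time the request was sent (client clock, seconds)
t1 = 10.060   # time the response was received (client clock, seconds)

# The synchronized clock can be off by at most half the round trip time.
error_bound = (t1 - t0) / 2
print(round(error_bound, 3))   # 0.03 -> off by at most 30 ms
```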

Improvement in Clock Synchronization:

Using iterative testing over the network, we can define a minimum transfer time with which we can formulate an improved synchronized clock time (with less synchronization error). Here, by defining a minimum transfer time Tmin, we can say with high confidence that the server time TServer is always generated after T0 + Tmin and always before T1 – Tmin, where Tmin is the minimum transfer time, i.e. the minimum value of the request and response times observed during several iterative tests. The synchronization error can then be formulated as follows:

    error ∈ [–((T1 – T0)/2 – Tmin), +((T1 – T0)/2 – Tmin)]
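As a numeric sketch (the values are illustrative, not from the article), the Tmin bound tightens the basic one:

```python
t0, t1 = 10.000, 10.060      # request sent / response received (client clock)
t_min = 0.020                # minimum one-way transfer time seen in testing

naive_bound = (t1 - t0) / 2            # basic bound: 30 ms
improved_bound = naive_bound - t_min   # tightened bound: 10 ms
print(round(naive_bound, 3), round(improved_bound, 3))
```

The tighter the observed Tmin (i.e. the closer it is to half the round trip time), the smaller the remaining uncertainty interval.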

Similarly, if the request and response times differ by a considerable amount, we may substitute Tmin with two values, Tmin1 and Tmin2, where Tmin1 is the minimum observed request time and Tmin2 refers to the minimum observed response time over the network.

The synchronized clock time in this case can be calculated as:

    TClient = TServer + (T1 – T0)/2 + (Tmin2 – Tmin1)/2

So, by introducing the request and response times as separate latencies, we can improve the synchronization of the clock time and hence decrease the overall synchronization error. The number of iterative tests to run depends on the overall clock drift observed.
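The asymmetric variant can be written as a small helper. The function and argument names are mine, and the values are illustrative:

```python
def sync_time_asymmetric(t_server, t0, t1, t_min_req, t_min_res):
    """Cristian's formula with separate minimum request/response times:
    TClient = TServer + (T1 - T0)/2 + (Tmin2 - Tmin1)/2
    """
    return t_server + (t1 - t0) / 2 + (t_min_res - t_min_req) / 2

# The response path is observed to be 10 ms slower than the request path,
# so the estimate shifts 5 ms later than the symmetric formula would give.
result = sync_time_asymmetric(t_server=100.0, t0=10.0, t1=10.06,
                              t_min_req=0.010, t_min_res=0.020)
print(round(result, 3))   # 100.035
```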
