
Question 2

Explain the meaning of the following terms and give examples where appropriate:

a) Middleware:

Middleware, as the name suggests, is software that lies between the operating
system and the applications running on it, providing services and acting as a bridge
between applications and other databases or tools. It supplies services beyond
those provided by the operating system to enable the various components of a distributed system
to communicate and manage data. Middleware often enables interoperability between
applications that run on different operating systems by supplying services so the applications can
exchange data in a standards-based way. Middleware is sometimes called plumbing because it
connects two applications and passes data between them. A middleware architecture in a
distributed system effectively acts as the "glue" that puts together many different hardware and
software entities. Features of middleware include reusability, self-discovery, support for Quality
of Service, and a simplified development process. Benefits of middleware include streamlined
processes, improved efficiency, and real-time information access. Common middleware types
include remote procedure call middleware, object middleware, message-oriented
middleware, and event-based middleware.

 Remote Procedure Call: An RPC is exactly what it sounds like: it calls procedures on
remote systems and is used to perform synchronous or asynchronous interactions
between applications or systems. It is usually utilized within a software application.
 Object middleware: Object middleware, also called an object request broker, gives
applications the ability to send objects and request services via an object-oriented system.
In short, it manages the communication between objects.
 Message-Oriented Middleware: This type of middleware is an infrastructure that
supports the receiving and sending of messages between distributed applications. It enables
applications to be distributed over various platforms and makes the process of creating
software applications spanning many operating systems and network protocols much less
complicated. It holds many advantages over the alternatives (e.g. hard-coding the
logic) and is one of the most widely used types of middleware.
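The decoupling that message-oriented middleware provides can be sketched with a shared queue: the sender and receiver never call each other directly, only the queue. This is a minimal illustration using Python's standard library, not any particular MOM product; all names are illustrative.

```python
import queue
import threading

# The queue plays the role of the middleware: it sits between the two
# applications and passes messages from one to the other.
message_queue = queue.Queue()

def producer():
    for i in range(3):
        message_queue.put(f"order-{i}")   # send without waiting for the receiver

def consumer(results):
    for _ in range(3):
        results.append(message_queue.get())  # receive whenever ready
        message_queue.task_done()

results = []
t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer, args=(results,))
t1.start(); t2.start()
t1.join(); t2.join()
print(results)  # ['order-0', 'order-1', 'order-2']
```

A real MOM product (e.g. a message broker) adds persistence, routing, and delivery guarantees on top of this basic send/receive pattern.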
b) Persistent objects

In object technology, a persistent object is one that continues to exist after the program that
created it has been unloaded. Persistence refers to a characteristic of state that
outlives the process that created it. This is achieved in practice by storing the
state as data in computer data storage: an object's class and current state must be saved for use in
subsequent sessions. In distributed systems, persistent objects survive even if their server is
shut down, as opposed to transient objects, which die when their server is shut down.

A persistent object continues to exist even if it is currently not contained in the address space of
any server process. In other words, a persistent object is not dependent on its current server. In
practice, this means that the server currently managing the persistent object can store the
object's state on secondary storage and then exit. Later, a newly started server can read the
object's state from storage into its own address space and handle invocation requests.
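The save-then-restore cycle described above can be sketched with Python's standard `pickle` serialization. The `Counter` class and file path are illustrative; a real persistent object store would use a managed heap or database rather than a plain file.

```python
import os
import pickle
import tempfile

class Counter:
    """A simple object whose state should outlive the process that created it."""
    def __init__(self):
        self.value = 0
    def increment(self):
        self.value += 1

path = os.path.join(tempfile.gettempdir(), "counter.pkl")  # illustrative path

# First "session": create the object, mutate it, save class + state to storage.
c = Counter()
c.increment(); c.increment()
with open(path, "wb") as f:
    pickle.dump(c, f)

# Later "session" (possibly a newly started server): restore the state into
# the current address space and continue handling invocations.
with open(path, "rb") as f:
    restored = pickle.load(f)
print(restored.value)  # 2
```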

A persistent object store decouples the object model from individual programs, effectively
providing a persistent heap that can be shared by many different programs. Objects in the heap
survive and can be used by applications so long as they can be referenced. Such a system can
guarantee safe sharing, ensuring that all programs that use a shared object use it in accordance
with its type (by calling its methods). If in addition the system provides support for atomic
transactions, it can guarantee that concurrency and failures are handled properly. Such a platform
allows sharing of objects across both space and time. Objects can be shared between applications
running now and in the future. Also, objects can be used concurrently by applications running at
the same time but at different locations: different processors within a multiprocessor; or different
processors within a distributed environment.

c) Logical clock

A logical clock refers to the implementation of a protocol on all machines within a distributed
system, so that the machines are able to maintain a consistent ordering of events within some
virtual timespan. A logical clock is a mechanism for capturing chronological and causal
relationships in a distributed system. It is concerned with the order of events, not the time
they occurred; an example is Lamport's algorithm, a synchronization algorithm
used in situations where ordering is important, not time.

Distributed systems may have no physically synchronous global clock, so a logical clock allows
a global ordering of events from different processes in such systems. A logical clock C is an
abstract mechanism which assigns to any event e∈E a value C(e) from some time domain T such
that certain conditions are met.

Implementing logical clocks requires data structures local to every process to represent
logical time, and a protocol to update the data structures to ensure the consistency condition.
There are 3 types of logical clocks:

1. Scalar
2. Vector
3. Matrix
For instance, suppose we have more than 10 computers in a distributed system, each
doing its own work; to make them work together we use logical clocks for
synchronization.

Taking the example into consideration, if we assign the first computer the number 1, the second
computer the number 2, the third computer the number 3, and so on, then we always know that the first
computer's events come first in the execution order, and so on. Similarly, if we give each machine its
own number, execution is organized so that the first machine completes its process first, then the
second, and so on.

This is more formally specified as a way of placing events in some timespan so that the following
property always holds:

given two events e1 and e2, where one is caused by the other (e1 contributes to e2 occurring),
the timestamp of the causing event e1 is less than that of the other event e2.

To provide this functionality, any logical clock must provide 2 rules:

Rule 1: this determines how a local process updates its own clock when an event occurs.

Rule 2: this determines how a local process updates its own clock when it receives a message from
another process. This can be described as how the process brings its local clock in line with
information about the global time.

The simplest implementation of this is Scalar Time. In this representation each process keeps a
local clock which is initially set to 0. It then provides the following implementation of the 2
rules:

Rule 1: before executing an event (excluding the event of receiving a message) increment the
local clock by 1.

local_clock = local_clock + 1

Rule 2: when receiving a message (the message must include the sender's local clock value), set
your local clock to the maximum of the received clock value and the local clock value. After
this, increment your local clock by 1 [Figure 4].

1. local_clock = max(local_clock, received_clock)
2. local_clock = local_clock + 1
3. message becomes available.
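The two scalar-time rules above can be sketched as a small Lamport clock class. The class and method names are illustrative; the logic follows the rules exactly: increment before a local or send event, and take the max with the received value (then increment) on receive.

```python
class LamportClock:
    """Scalar logical clock implementing the two update rules."""
    def __init__(self):
        self.time = 0  # local clock is initially set to 0

    def local_event(self):
        # Rule 1: before executing an event, increment the local clock by 1.
        self.time += 1
        return self.time

    def receive(self, received_clock):
        # Rule 2: take the max of the local and received values, then increment.
        self.time = max(self.time, received_clock)
        self.time += 1
        return self.time

p1, p2 = LamportClock(), LamportClock()
p1.local_event()          # p1's clock: 1
ts = p1.local_event()     # p1's clock: 2; this timestamp travels with the message
p2.receive(ts)            # p2's clock: max(0, 2) + 1 = 3
print(p1.time, p2.time)   # 2 3
```

Note how p2's clock jumps ahead of its own event count: the received timestamp guarantees that the receive event is ordered after the send.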

d) Concurrent server

Server-type applications that communicate with many clients simultaneously demand both a high
degree of concurrency and high performance from the I/O subsystem. A concurrent server
handles multiple clients at the same time. The simplest technique for a concurrent server is to
call the fork function, creating one child process for each client. An alternative technique is to
use threads instead (i.e., light-weight processes). For lengthy transactions, a different sort of
server is needed: the concurrent server, as shown in the figure below. Here, Client A has
already established a connection with the server, which has then created a child server process to
handle the transaction. This allows the server to process Client B's request without waiting for
A's transaction to complete. More than one child server can be started in this way.

The following list describes the concurrent server process.

1. When a connection request arrives in the main process of a concurrent server, it
schedules a child process and forwards the connection to the child process.

2. The child process takes the connection from the main process.

3. The child process receives the client request, processes it, and returns a reply to the client.

4. The connection is closed, and the child process terminates or signals to the main process
that it is available for a new connection.
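The four steps above can be sketched with Python's standard `socketserver` module, using threads rather than forked processes as the "children". The echo handler and addresses are illustrative only.

```python
import socket
import socketserver
import threading

class EchoHandler(socketserver.BaseRequestHandler):
    # Steps 2-3: the child (here, a thread) takes the connection, receives
    # the client request, processes it, and returns a reply.
    def handle(self):
        data = self.request.recv(1024)
        self.request.sendall(b"echo:" + data)
        # Step 4: the connection is closed when handle() returns.

# Step 1: ThreadingTCPServer's main loop accepts each connection and
# dispatches it to a new thread, so Client B is served without waiting
# for Client A's transaction to complete.
server = socketserver.ThreadingTCPServer(("127.0.0.1", 0), EchoHandler)
host, port = server.server_address
threading.Thread(target=server.serve_forever, daemon=True).start()

def ask(msg):
    with socket.create_connection((host, port)) as s:
        s.sendall(msg)
        return s.recv(1024)

replies = [ask(b"A"), ask(b"B")]  # two clients, each handled by its own thread
server.shutdown()
print(replies)  # [b'echo:A', b'echo:B']
```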
You can implement a concurrent server in the following MVS environment: native MVS (batch
job, started task, or TSO). In this environment you implement concurrency by using traditional
MVS subtasking facilities. These facilities are available from assembler language programs or
from high-level languages that support multitasking or multithreading.

e) Home-based approach to flat naming

A flat name is a simple random bit string; it does not contain any information whatsoever on
how to locate an access point of its associated entity. Flat naming allows a unique and location-
independent way to refer to distributed entities. Names play a very important role in all computer
systems. They are used to share resources, to uniquely identify entities, to refer to locations, and
more. An important issue with naming is that a name can be resolved to the entity it refers to.
Name resolution thus allows a process to access the named entity. To resolve names, it is
necessary to implement a naming system. The difference between naming in distributed systems
and non-distributed systems lies in the way naming systems are implemented.

The home-based approach keeps track of where the entity is. There are two home-based
approaches: the single-tiered scheme and the two-tiered scheme. In the single-tiered scheme, a
client sends a packet to the entity's home, which returns the address of the entity's current
location; subsequent packets are then sent directly to that current location. In the two-tiered
scheme, visiting entities are also tracked locally: a lookup checks the local visitor register first
and falls back to the home location only if the local lookup fails.
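The single-tiered home-based lookup can be sketched as a toy registry: every entity has a fixed home that always knows its current address, and the entity updates its home whenever it moves. All names here are illustrative, not part of any real protocol.

```python
# The home registry: maps an entity's flat name to its current location.
home_registry = {"entity-42": "host-A"}

def move(entity, new_location):
    # The entity informs its home whenever it changes location.
    home_registry[entity] = new_location

def locate(entity):
    # A client first contacts the home, which returns the current address;
    # subsequent packets then go directly to that address.
    return home_registry[entity]

move("entity-42", "host-B")
print(locate("entity-42"))  # host-B
```

A two-tiered scheme would add a local visitor register that is consulted before falling back to this home lookup.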

Typical examples include resources such as hosts, printers, disks, and files.
