Ch. Devilal State Institute of Engineering and Technology
ASSIGNMENT-1
In distributed computing, a remote procedure call (RPC) is when a computer program causes
a procedure (subroutine) to execute in a different address space (commonly on another
computer on a shared network), which is coded as if it were a normal (local) procedure
call, without the programmer explicitly coding the details of the remote interaction.
That is, the programmer writes essentially the same code whether the subroutine is local
to the executing program, or remote. This is a form of client–server interaction (caller is
client, executor is server), typically implemented via a request–response message-passing
system. In the object-oriented programming paradigm, RPC calls are represented by
remote method invocation (RMI).
The RPC model implies a level of location transparency, namely that calling procedures
are largely the same whether they are local or remote, but usually they are not identical,
so local calls can be distinguished from remote calls. Remote calls are usually orders of
magnitude slower and less reliable than local calls, so distinguishing them is important.
RPCs are a form of inter-process communication (IPC), in that different processes have
different address spaces: if on the same host machine, they have distinct virtual address
spaces, even though the physical address space is the same; while if they are on different
hosts, the physical address space is different.
Many different (often incompatible) technologies have been used to implement the
concept. RPC is a powerful technique for constructing distributed, client-server based
applications. It is based on extending the notion of conventional, or local procedure
calling, so that the called procedure need not exist in the same address space as the
calling procedure.
The two processes may be on the same system, or they may be on different systems with
a network connecting them. By using RPC, programmers of distributed applications
avoid the details of the interface with the network.
The transport independence of RPC isolates the application from the physical and logical
elements of the data communications mechanism and allows the application to use a
variety of transports.
RPC makes the client/server model of computing more powerful and easier to program.
When combined with the ONC RPCGEN protocol compiler clients transparently make
remote calls through a local procedure interface.
An RPC is analogous to a function call. Like a function call, when an RPC is made, the calling
arguments are passed to the remote procedure and the caller waits for a response to be returned
from the remote procedure. The following describes the flow of activity that takes place
during an RPC call between two networked systems.
The client makes a procedure call that sends a request to the server and waits. The thread is
blocked from processing until either a reply is received, or it times out. When the request arrives,
the server calls a dispatch routine that performs the requested service, and sends the reply to the
client. After the RPC call is completed, the client program continues. RPC specifically supports
network applications.
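The blocking request/response flow described above can be sketched with Python's standard-library XML-RPC modules. This is only an illustration of the call pattern, not ONC RPC itself (which uses XDR encoding and rpcgen-generated stubs); the host, procedure name, and arguments are invented for the example.

```python
# Sketch of a blocking RPC round trip using Python's stdlib XML-RPC.
# The server runs in a background thread purely so client and server
# can live in one script.
import threading
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy

def add(a, b):
    """The remote procedure: executes in the server's address space."""
    return a + b

# Port 0 asks the OS for any free port.
server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False)
server.register_function(add, "add")
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# The client calls `add` through a local proxy; the call blocks until
# the server's dispatch routine runs the procedure and returns a reply.
client = ServerProxy(f"http://127.0.0.1:{port}")
result = client.add(2, 3)   # looks like a local call, runs remotely
print(result)
server.shutdown()
```

Note how the client-side call site is indistinguishable from a local function call, which is exactly the location transparency discussed earlier.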
Version numbers enable multiple versions of an RPC protocol to be available simultaneously.
Each version consists of a collection of procedures that can be called remotely, and each
procedure is identified by a procedure number.
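A server-side dispatch table for these (program, version, procedure) numbers might look like the sketch below. The program number, procedure numbers, and handler names are invented for illustration and are not real ONC RPC assignments.

```python
# Hypothetical dispatch table: program number -> version -> procedure.
# Two versions of the same program coexist, as described above.
def ping_v1():
    return "pong"

def ping_v2():
    return "pong/2"

DISPATCH = {
    200000: {              # invented program number
        1: {0: ping_v1},   # version 1 exports procedure 0
        2: {0: ping_v2},   # version 2 is available simultaneously
    },
}

def dispatch(prog, vers, proc):
    """Look up and invoke the requested remote procedure."""
    try:
        return DISPATCH[prog][vers][proc]()
    except KeyError:
        raise LookupError(f"no such procedure: {prog}/{vers}/{proc}")

print(dispatch(200000, 2, 0))
```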
Consider an example:
under UNIX, we could run a remote shell and execute the command that way. There are
several problems with this method.
GROUP COMMUNICATION
o A client wishes to obtain a service which can be performed by any member of the group
without affecting the state of the service.
o A client wishes to obtain a service which must be performed by each member of the
group.
In the first case, the client can accept a response to its multicast from any member of the
group as long as at least one responds. The communication system need only guarantee
delivery of the multicast to a nonfaulty process of the group on a best-effort basis. In the
second case, the all-or-none atomic delivery requirement means that the multicast
must be buffered until it is committed and subsequently delivered to the application
process, and so incurs additional latency.
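The first case, where any single non-faulty member's reply suffices, can be sketched as below. The group members and their failure behaviour are invented for illustration; a real system would send the requests concurrently rather than in turn.

```python
# Toy best-effort multicast: accept the first reply from any non-faulty
# group member; faulty members are simply skipped.
def multicast_first_response(members, request):
    """Return the first successful reply, or None if all members fail."""
    for member in members:
        try:
            return member(request)
        except ConnectionError:
            continue            # faulty member: try the next one
    return None

def faulty(_req):
    raise ConnectionError("member down")

def healthy(req):
    return f"served:{req}"

reply = multicast_first_response([faulty, healthy, healthy], "read-x")
print(reply)
```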
Failure may occur during a multicast at the recipient processes, the communication links
or the originating process.
Failures at the recipient processes and on the communication links can be detected by the
originating process using standard time-out mechanisms or message acknowledgements.
The multicast can be aborted by the originator, or the service group membership may be
dynamically adjusted to exclude the failed processes and the multicast can be continued.
If the originator fails during the multicast, there are two possible outcomes. Either the
message has not arrived at any destination or it has arrived at some. In the first case,
no process can be aware of the originator's intention and so the multicast must be aborted.
In the second case it may be possible to complete the multicast by selecting one of the
recipients as the new originator. The recipients would have to buffer messages until safe
for delivery in case they were called on for this role.
A reliable multicast protocol imposes no restriction on the order in which messages are
delivered to group processes. Given that multicasts may be in progress by a number of
originators simultaneously, the messages may arrive at different processes in a group in
different orders. Also, a single originator may have a number of simultaneous multicasts
in progress or may have issued a sequence of multicast messages whose ordering we
might like preserved at the recipients.
A number of possible scenarios are given below which may require different levels of
ordering semantics. G and s represent groups and message sources. s may be inside or
outside a group. Note that group membership may overlap with other groups, that is,
processes may be members of more than one group.
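One of the scenarios above, preserving a single originator's ordering at each recipient, is commonly handled with per-sender sequence numbers. The sketch below shows only this FIFO-ordering idea, not a complete ordered-multicast protocol; the sender name and messages are invented.

```python
# FIFO-ordered delivery: each message carries a per-sender sequence
# number; out-of-order arrivals are held back until their turn.
class FifoReceiver:
    def __init__(self):
        self.expected = {}      # sender -> next sequence number expected
        self.held = {}          # (sender, seq) -> buffered early arrival
        self.delivered = []     # messages delivered to the application

    def receive(self, sender, seq, msg):
        nxt = self.expected.get(sender, 0)
        if seq == nxt:
            self.delivered.append(msg)
            self.expected[sender] = nxt + 1
            # release any held messages that are now in order
            while (sender, self.expected[sender]) in self.held:
                key = (sender, self.expected[sender])
                self.delivered.append(self.held.pop(key))
                self.expected[sender] += 1
        elif seq > nxt:
            self.held[(sender, seq)] = msg   # arrived early: buffer it

r = FifoReceiver()
r.receive("s", 1, "m1")   # arrives out of order: buffered
r.receive("s", 0, "m0")   # delivers m0, then the buffered m1
print(r.delivered)
```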
SCHEDULING IN DOS
The scheduler is an operating system module that selects the next jobs to be admitted into
the system and the next process to run. Operating systems may feature up to three distinct
scheduler types: a long-term scheduler (also known as an admission scheduler or high-
level scheduler), a mid-term or medium-term scheduler, and a short-term scheduler. The
names suggest the relative frequency with which their functions are performed.
Process scheduler
The process scheduler is a part of the operating system that decides which process runs at
a certain point in time.
It usually has the ability to pause a running process, move it to the back of the running
queue and start a new process; such a scheduler is known as a preemptive scheduler;
otherwise, it is a cooperative scheduler.
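The preemptive behaviour described above can be illustrated with a toy round-robin simulation: a process is paused after a fixed quantum and moved to the back of the queue. The burst times and quantum are invented for the example.

```python
# Toy preemptive round-robin scheduler simulation.
from collections import deque

def round_robin(bursts, quantum):
    """bursts: {name: remaining CPU time}. Returns completion order."""
    queue = deque(bursts.items())
    finished = []
    while queue:
        name, remaining = queue.popleft()   # dispatch the next process
        if remaining <= quantum:
            finished.append(name)           # runs to completion
        else:
            # preempt: pause it and move it to the back of the queue
            queue.append((name, remaining - quantum))
    return finished

print(round_robin({"A": 5, "B": 2, "C": 3}, quantum=2))
```

With these numbers, B finishes within its first quantum, C within its second, and A last, so the completion order is B, C, A.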
Long-term scheduling
The long-term scheduler, or admission scheduler, decides which jobs or processes are to
be admitted to the ready queue (in main memory); that is, when an attempt is made to
execute a program, its admission to the set of currently executing processes is either
authorized or delayed by the long-term scheduler.
Thus, this scheduler dictates what processes are to run on a system, and the degree of
concurrency to be supported at any one time – whether many or few processes are to be
executed concurrently, and how the split between I/O-intensive and CPU-intensive
processes is to be handled. The long-term scheduler is responsible for controlling the
degree of multiprogramming.
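A long-term scheduler's role can be sketched as an admission gate that caps the degree of multiprogramming and balances the job mix. The cap, the alternating I/O-bound/CPU-bound policy, and the job names below are all invented for illustration.

```python
# Toy admission control in the spirit of long-term scheduling.
def admit(jobs, max_degree):
    """jobs: list of (name, kind) with kind 'io' or 'cpu'.
    Admit jobs of alternating kinds until the multiprogramming cap
    is reached; everything else is delayed."""
    admitted, deferred = [], []
    want = "io"                          # start by favouring I/O-bound
    for name, kind in jobs:
        if len(admitted) < max_degree and kind == want:
            admitted.append(name)
            want = "cpu" if want == "io" else "io"
        else:
            deferred.append(name)        # delayed by the scheduler
    return admitted, deferred

print(admit([("j1", "io"), ("j2", "io"), ("j3", "cpu"), ("j4", "io")], 2))
```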
Long-term scheduling is also important in large-scale systems such as batch
processing systems, computer clusters, supercomputers, and render farms.
For example, in concurrent systems, coscheduling of interacting processes is often
required to prevent them from blocking due to waiting on each other.
In these cases, special-purpose job scheduler software is typically used to assist these
functions, in addition to any underlying admission scheduling support in the operating
system.
Medium-term scheduling
The medium-term scheduler temporarily removes processes from main memory and
places them in secondary memory (such as a hard disk drive) or vice versa, which is
commonly referred to as "swapping out" or "swapping in" (also incorrectly as
"paging out" or "paging in").
The medium-term scheduler may decide to swap out a process which has not been active
for some time, a process which has a low priority, a process which is page faulting
frequently, or a process which is taking up a large amount of memory, in order to free up
main memory for other processes, swapping the process back in later when more memory
is available.
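The victim-selection criteria just listed can be sketched as a scoring function: prefer the longest-idle process, breaking ties toward lower priority and larger memory footprint. The scoring order and the sample processes are invented; real schedulers use far richer heuristics.

```python
# Toy swap-out victim selection for a medium-term scheduler.
def pick_swap_victim(processes):
    """processes: dicts with idle_time (s), priority (larger = more
    important), and mem_mb. Returns the process to swap out."""
    return max(
        processes,
        key=lambda p: (p["idle_time"], -p["priority"], p["mem_mb"]),
    )

procs = [
    {"name": "editor",  "idle_time": 600, "priority": 5,  "mem_mb": 200},
    {"name": "daemon",  "idle_time": 600, "priority": 10, "mem_mb": 50},
    {"name": "browser", "idle_time": 30,  "priority": 5,  "mem_mb": 900},
]
# editor and daemon are equally idle; editor's lower priority makes it
# the preferred victim.
print(pick_swap_victim(procs)["name"])
```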
Short-term scheduling
The short-term scheduler (also known as the CPU scheduler) decides which of the ready,
in-memory processes is to be executed next. It is invoked far more frequently than the
long-term or medium-term schedulers, typically after every clock interrupt, I/O interrupt,
or system call, and so it must be fast.
Dispatcher
Another component that is involved in the CPU-scheduling function is the dispatcher,
which is the module that gives control of the CPU to the process selected by the short-
term scheduler. It receives control in kernel mode as the result of an interrupt or system
call. The functions of a dispatcher involve the following:
Context switches, in which the dispatcher saves the state (also known as context) of
the process or thread that was previously running; the dispatcher then loads the initial or
previously saved state of the new process.
Switching to user mode.
Jumping to the proper location in the user program to restart that program indicated by its
new state.
The dispatcher should be as fast as possible, since it is invoked during every process
switch. During the context switches, the processor is virtually idle for a fraction of time,
thus unnecessary context switches should be avoided. The time it takes for the dispatcher
to stop one process and start another is known as the dispatch latency.
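The save/restore step performed by the dispatcher can be sketched as below. The "CPU registers" here are a plain dictionary standing in for real processor state, and the register names and values are invented for illustration.

```python
# Toy context switch: save the outgoing process's state, load the
# incoming process's previously saved state.
def context_switch(cpu, outgoing, incoming):
    outgoing["context"] = dict(cpu)    # save state of the old process
    cpu.clear()
    cpu.update(incoming["context"])    # load state of the new process

cpu = {"pc": 104, "sp": 9000}          # P1 is currently running
p1 = {"name": "P1", "context": {}}
p2 = {"name": "P2", "context": {"pc": 400, "sp": 7000}}

context_switch(cpu, p1, p2)
print(cpu)                             # P2's state is now on the CPU
print(p1["context"])                   # P1's state is saved for later
```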
Election Algorithms:
Ring Algorithm –
If process P1 detects a coordinator failure, it creates a new active list which is initially
empty. It sends an election message to its neighbour on the right and adds the number 1 to
its active list.
If process P2 receives an election message from the process on its left, it responds in 3
ways:
o (I) If the message's active list does not contain 2, then P2 adds 2 to the active
list and forwards the message.
o (II) If this is the first election message it has received or sent, P2 creates a new
active list with the numbers 1 and 2. It then sends election message 1 followed by 2.
o (III) If process P1 receives its own election message 1 back, the active list for P1 now
contains the numbers of all the active processes in the system. Process P1 then
selects the highest number from the list and elects it as the new coordinator.