cs556 Section 3
Layered Protocols
Layers, interfaces, and protocols in the OSI model.
The International Organization for Standardization (ISO)
developed a reference model that clearly identifies the
various levels involved, gives them standard names, and
specifies which level should do which job:
the Open Systems Interconnection (OSI) Reference Model.
Protocols: rules determining how an open system
can communicate with another open system.
Connection-oriented protocols: before exchanging
data, the sender and receiver first explicitly
establish a connection, and possibly negotiate the
protocol they will use.
Connectionless protocols: no setup in advance is
needed; sending a message is like dropping a letter
in a mailbox.
Physical Layer
Undertakes the actual transmission of the message.
How many volts to use for 0 and 1?
(a) Normal operation of TCP. (b) Transactional TCP.
Higher Level Protocols
Session Layer
Provides dialog control, to keep track of which party is
currently talking.
Provides synchronization facilities:
checkpoints can be inserted into long transfers, so that
after a crash it is necessary to go back only to the last
checkpoint.
Presentation Layer
Is concerned with the meaning of the bits
It is possible to define records such as a name, an address, an
amount of money, and other information that might be contained
in a message, and have the sender notify the receiver that the
message contains a particular record in a certain format.
Makes communication easier between machines with different
internal representations
Application Protocols
• Call by value
• Call by reference
• Call by copy/restore
Possible Solutions
1. Forbid pointers
2. Copy the object pointed to by the pointer into the message
and send it to the server.
The server copies it at some place in its memory space, and calls
the routine passing a pointer to it.
When the routine ends, the server copies back the object’s value
into one parameter of the message and sends the message to the
client.
If the stubs know whether the object is an input or output
parameter to the server, one of the two copies can be avoided.
This approach does not work with complex objects (e.g.,
arbitrary dynamic data structures).
Parameter Specification and Stub Generation
a) A procedure
b) The corresponding message:
• A character is placed in the rightmost
byte of a word
• A float is transmitted as a whole word
• An array as a group of words
equal to the array length, preceded
by a word giving the length.
The caller and the callee agree on:
The format of the messages
The underlying transport protocol (e.g., UDP)
Stubs for the same RPC protocol but different procedures generally
differ only in their interface to the applications.
An interface consists of a collection of procedures that can be
called by a client, and which are implemented by a server.
Interfaces are often specified by means of an Interface Definition
Language (IDL), which is then compiled into a client stub and a
server stub, along with the appropriate compile-time or run-time
interfaces.
Asynchronous RPC
Examples where blocking is not necessary
Transferring money from one account to
another
Adding entries into a database
Starting remote services
Batch processing
Asynchronous RPCs
A client immediately continues after
issuing the RPC request.
Common organization of a remote object with client-side
proxy.
Binding a Client to an Object
Object references are supported by RMI
systems
When a process holds an object reference, it
must first bind to the reference’s object before
invoking any of its methods.
Binding results in a proxy being installed in the process’s
address space.
Implicit Binding: binding is done automatically
The client is offered a mechanism that allows it to
directly invoke methods using only a reference to the
object.
Explicit Binding: less transparent to the client
The client first calls a special function to bind to the
object and then invokes any method.
Implementation of Object References
An object reference should provide the following
information:
The network address of the machine where the state of the
object resides
An endpoint (port) identifying the server that manages the
object
An identifier of the object within that server.
• The Stubs and Skeleton layer intercepts method calls made by the
client on the interface reference variable and redirects them to the
remote RMI service; it implements the stubs and skeletons needed.
• The Remote Reference layer connects clients to remote service objects
that are running and exported on a server; it defines and supports the
invocation semantics of the RMI connection.
• The Transport layer is based on TCP/IP connections between machines
in a network; a client is connected to a remote service implementation
by establishing a unicast point-to-point connection.
The stub and skeleton files are created using the RMI compiler, rmic
(e.g., rmic CalculatorImpl).
This generates Calculator_Stub.class and Calculator_Skel.class.
Host Server
import java.rmi.Naming;

public class CalculatorServer {
    public CalculatorServer() {
        try {
            // Create the remote object and register it in the RMI
            // registry under a well-known name.
            Calculator c = new CalculatorImpl();
            Naming.rebind("rmi://localhost:1099/CalculatorService", c);
        } catch (Exception e) {
            System.out.println("Trouble: " + e);
        }
    }
    public static void main(String args[]) {
        new CalculatorServer();
    }
}
Persistence and Synchronicity in Communication
Persistent communication
a message that has been submitted for transmission is stored by
the communication system as long as it takes to deliver it to the
receiver.
Transient communication
a message is stored by the communication system only as long as
the sending and receiving application are executing.
If a communication server cannot deliver the message to the next
communication server or to the receiver, the message is discarded;
this is how a traditional store-and-forward router works.
Asynchronous Communication
A sender continues its execution immediately after it has
submitted its message for transmission
Synchronous Communication
The sender is blocked until its message is stored in a local buffer
at the receiving host, or actually delivered to the receiver.
Persistent Asynchronous Communication
Each message is either persistently stored in a buffer at the
local host or at the first communication server.
An e-mail system is an example
Persistent Synchronous Communication
Messages can be persistently stored at the receiving host
and a sender is blocked until this happens
Transient Asynchronous Communication
The message is temporarily stored at a local buffer at the
sending host, after which the sender immediately continues
UDP is an example
/* Creating a Unix-domain stream socket and filling in its
   address (ADDRESS is defined elsewhere) */
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>

int main(void) {
    int sd;
    struct sockaddr_un sockaddr;
    sd = socket(AF_UNIX, SOCK_STREAM, 0);
    sockaddr.sun_family = AF_UNIX;
    strcpy(sockaddr.sun_path, ADDRESS);
    ...
}

/* MPI: each node writes a greeting to its own file */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int node;
    char buf[32];
    FILE *fp;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &node);
    sprintf(buf, "file%d", node);
    fp = fopen(buf, "w");        /* open for writing, not reading */
    fprintf(fp, "Hello World from Node %d\n", node);
    fclose(fp);
    MPI_Finalize();
    return 0;
}
MPI – Basic Concepts
A communicator is a collection of
processes that can send messages to each
other.
There is a default communicator whose
group contains all initial processes, called
MPI_COMM_WORLD.
A process is identified by its rank in the
group associated with a communicator.
MPI – A simple example with send-receive
#include <mpi.h>

int main(int argc, char **argv) {
int rank, size;
double x[10];
MPI_Status status;
MPI_Init(&argc, &argv);
MPI_Comm_size(MPI_COMM_WORLD, &size);
MPI_Comm_rank(MPI_COMM_WORLD, &rank);
if (rank == 0) {
for (int i=0; i<10 ; i++) x[i] = 0.1*i;
MPI_Send(x, 10, MPI_DOUBLE, 1, 666, MPI_COMM_WORLD);
}
else if (rank == 1) {
MPI_Recv(x, 10, MPI_DOUBLE, 0, 666, MPI_COMM_WORLD, &status);
}
MPI_Finalize();
}
Send an array from process 0 to 1
- int MPI_Send(void *message, int count, MPI_Datatype datatype, int dest, int tag,
MPI_Comm comm)
- int MPI_Recv(void *message, int count, MPI_Datatype datatype, int source, int tag,
MPI_Comm comm, MPI_Status *status)
MPI Tags
MPI_Init(&argc, &argv);
MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);
MPI_Comm_size(MPI_COMM_WORLD, &p);
if (my_rank != 0) {
    sprintf(message, "Greetings from process %d!", my_rank);
    dest = 0;
    MPI_Send(message, strlen(message)+1, MPI_CHAR, dest, tag, MPI_COMM_WORLD);
}
else {
    for (source = 1; source < p; source++) {
        MPI_Recv(message, 100, MPI_CHAR, source, tag, MPI_COMM_WORLD,
                 &status);
        printf("%s\n", message);
    }
}
Introduction to Collective Operations in MPI
Books:
Gropp, Lusk, and Skjellum, Using MPI: Portable Parallel Programming
with the Message-Passing Interface, MIT Press, 1994.
Snir, Otto, Huss-Lederman, Walker, and Dongarra, MPI: The Complete
Reference, MIT Press, 1996.
Ian Foster, Designing and Building Parallel Programs, Addison-Wesley, 1995.
Peter Pacheco, Parallel Programming with MPI, Morgan Kaufmann, 1997.
MPI: The Complete Reference, Vols. 1 and 2, MIT Press, 1998.