

R-20 III-I IT-CN Unit-5


Part-1: Transport Layer

1. The network layer provides end-to-end packet delivery using datagrams or virtual
circuits. After this, the transport layer provides data transport from a process on a
source machine to a process on a destination machine.

The transport layer provides this service with a reliability that is independent of the
physical networks in between. It supplies the abstractions that applications need to
use the network.

The ultimate goal of the transport layer is to provide efficient, reliable, and cost-
effective data transmission service to its users, normally processes in the application
layer. The relationship of the network, transport, and application layers is illustrated
in Fig. 6-1.

2. Transport Layer Primitives:



3. NOTE: The term segment is used for messages sent from transport entity to
transport entity. Internet protocols use this term.

4. Berkeley Sockets: Sockets were first released as part of the Berkeley UNIX 4.2BSD
software distribution in 1983. They quickly became popular. The primitives are now
widely used for Internet programming on many UNIX operating systems.

Primitives:
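The Berkeley socket primitives (SOCKET, BIND, LISTEN, ACCEPT, CONNECT, SEND, RECEIVE, CLOSE) can be sketched with Python's socket module. The following is a minimal echo exchange on the local machine; the port is picked by the OS and the message is arbitrary.

```python
import socket
import threading

def run_server(server_sock):
    # ACCEPT: block until a client connects, then echo one message back.
    conn, addr = server_sock.accept()
    data = conn.recv(1024)          # RECEIVE
    conn.sendall(data)              # SEND
    conn.close()                    # CLOSE
    server_sock.close()

# SOCKET + BIND + LISTEN on the server side.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))       # port 0: let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]
t = threading.Thread(target=run_server, args=(server,))
t.start()

# SOCKET + CONNECT on the client side.
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", port))
client.sendall(b"hello")            # SEND
reply = client.recv(1024)           # RECEIVE
client.close()                      # CLOSE (starts connection release)
t.join()
print(reply)                        # b'hello'
```

Each call above corresponds to one of the primitives; real servers loop on ACCEPT to serve many clients.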

5. Transport Layer Protocols: The transport service is implemented by a transport
protocol used between the two transport entities. In some ways, transport protocols
resemble data link protocols: both have to deal with error control, sequencing,
and flow control, among other issues.

However, differences also exist. These differences are due to dissimilarities between
the environments in which the two protocols operate, as shown in Fig. 6-7.

At the data link layer, two routers communicate directly via a physical channel. At the
transport layer, this physical channel is replaced by the entire network.

The process of establishing a connection over the wire is simple: the other end is
always there – just sending a message is sufficient. If the message is not
acknowledged due to an error, it can be resent.

In the transport layer, initial connection establishment is complicated, as we will see
later.

Another difference between the data link layer and the transport layer is the
existence of storage capacity in the network. When a router sends a packet over a
link, it may arrive or be lost, but it cannot go into hiding and suddenly emerge later.
The network as a whole, however, can store packets and deliver them after a long
delay, and the transport protocol must be prepared for this.

A final difference between the data link and transport layers concerns buffering and
flow control. Both layers need them, but the transport layer must manage a large
and dynamically varying number of connections, which calls for a different approach
in each layer.

6. Port Number: TCP service is obtained by both the sender and the receiver creating
end points, called sockets. Each socket has a socket number (address) consisting
of the IP address of the host and a 16-bit number local to that host, called a
port.

A socket may be used for multiple connections at the same time. In other words, two
or more connections may terminate at the same socket. Connections are identified
by the pair of socket identifiers at both ends, that is, (socket1, socket2).

Port numbers below 1024 are reserved for standard services of privileged users
(e.g., root in UNIX systems). They are called well-known ports.
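As a small illustration (with made-up addresses), a socket address is an (IP address, port) pair, and the pair of socket addresses at both ends distinguishes connections even when they share a server socket:

```python
# A socket address is (IP address, port); a connection is identified by the
# pair of socket addresses at its two ends.
server_socket = ("192.0.2.10", 80)       # well-known port (< 1024) for HTTP
client_a = ("198.51.100.1", 40001)
client_b = ("198.51.100.2", 40001)

# Two connections may terminate at the same server socket; the full pair
# of end points still distinguishes them.
conn1 = (client_a, server_socket)
conn2 = (client_b, server_socket)
print(conn1 != conn2)   # True: different connections, same server socket
```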

7. User Datagram Protocol: The Internet protocol suite supports a connectionless
transport protocol called UDP (User Datagram Protocol). Through UDP, applications
can send encapsulated IP datagrams without having to establish a connection.

NOTE: A datagram is a basic transfer unit associated with a network and provides
connectionless communications. Datagrams are structured as header and payload
sections. The delivery, arrival time, and order of arrival of datagrams are not
guaranteed by the network.

UDP transmits segments consisting of an 8-byte header followed by the payload.



When a UDP packet arrives, its payload is handed to the process attached to the
destination port. This attachment occurs through the BIND primitive.

The advantage of UDP over raw IP is the inclusion of the source and destination
ports. Without the port fields, the transport layer would not know what to do with
incoming packets.

The UDP length field includes the 8-byte header and the data. The minimum length
is 8 bytes (a header with no data) and the maximum is 65,515 bytes, lower than the
16-bit limit of 65,535 because of the 20-byte IP header.

An optional Checksum is also provided for extra reliability. It covers the header,
the data, and a conceptual IP pseudo-header.

The pseudo-header contains the 32-bit IPv4 addresses of the source and destination
machines, the protocol number for UDP (17), and the byte count for the UDP
segment. Including it helps detect misdelivered packets.
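The checksum computation can be sketched directly from RFC 768: a one's complement sum over the pseudo-header plus the UDP segment. The addresses and ports below are illustrative; the self-check at the end uses the standard property that summing a segment together with its checksum yields 0xFFFF.

```python
import struct

def ones_complement_sum(data: bytes) -> int:
    # Pad to an even length, sum 16-bit words, and fold carries back in.
    if len(data) % 2:
        data += b"\x00"
    total = 0
    for (word,) in struct.iter_unpack("!H", data):
        total += word
        total = (total & 0xFFFF) + (total >> 16)
    return total

def udp_checksum(src_ip: bytes, dst_ip: bytes, udp_segment: bytes) -> int:
    # Pseudo-header: source IP, destination IP, zero byte, protocol 17, UDP length.
    pseudo = src_ip + dst_ip + struct.pack("!BBH", 0, 17, len(udp_segment))
    return (~ones_complement_sum(pseudo + udp_segment)) & 0xFFFF

src = bytes([192, 0, 2, 1])
dst = bytes([192, 0, 2, 2])
# 8-byte UDP header (checksum field = 0 while computing) followed by payload.
header = struct.pack("!HHHH", 12345, 53, 8 + 4, 0)
segment = header + b"data"
csum = udp_checksum(src, dst, segment)

# Verification: with the checksum filled in, the one's complement sum is 0xFFFF.
with_csum = struct.pack("!HHHH", 12345, 53, 12, csum) + b"data"
pseudo = src + dst + struct.pack("!BBH", 0, 17, 12)
print(hex(ones_complement_sum(pseudo + with_csum)))   # 0xffff
```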

8. UDP does not do flow control, congestion control, or retransmission upon receipt of a
bad segment. It just provides an interface to the IP protocol with multiple processes
using the ports and optional end-to-end error detection.

9. Remote Procedure Call: When a process on machine 1 calls a procedure on


machine 2, the calling process on 1 is suspended and execution of the called
procedure takes place on 2. Information can be transported from the client to the
server in the parameters and can come back in the procedure result.
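The marshalling that makes this transparent can be sketched with local "stubs"; here a direct function call stands in for the network transfer, and the procedure name and JSON encoding are illustrative choices.

```python
import json

# Server side: the actual procedure plus a dispatching "server stub".
def add(a, b):
    return a + b

PROCEDURES = {"add": add}

def server_stub(request: bytes) -> bytes:
    # Unmarshal the request, call the local procedure, marshal the result.
    msg = json.loads(request)
    result = PROCEDURES[msg["proc"]](*msg["args"])
    return json.dumps({"result": result}).encode()

# Client side: a "client stub" that marshals the call into a message.
def remote_call(proc, *args):
    request = json.dumps({"proc": proc, "args": list(args)}).encode()
    reply = server_stub(request)     # stands in for the network round trip
    return json.loads(reply)["result"]

print(remote_call("add", 2, 3))      # 5 - the caller never sees the marshalling
```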

10. Transmission Control Protocol: TCP was specifically designed to provide a


reliable end-to-end byte stream over an unreliable internetwork. An internetwork
differs from a single network since different parts may have different topologies,
bandwidths, delays, packet sizes, and other parameters.

TCP was designed to adapt dynamically to the properties of the internetwork and to
be robust in the face of failures (fault tolerance).

Each machine supporting TCP has a TCP transport entity (ex: a library procedure, a
user process, or part of the kernel). It manages TCP streams and interfaces to the IP
layer.

A TCP entity accepts user data streams from local processes, breaks them up into
pieces, and sends each piece as a separate IP datagram. When datagrams
containing TCP data arrive at the destination, they are given to the local TCP entity,
which reconstructs the original byte streams.

11. NOTE: In computing, a daemon is a program that runs continuously as a


background process and wakes up to handle periodic service requests, coming from
remote processes. The daemon program is alerted to the request by the operating
system (OS); it either responds to the request or forwards it to another program.

The following are some examples of daemons:

 init. This is the first daemon to start up when Unix boots, and it spawns all other
processes.
 inetd. This listens for internet requests on a designated port number and assigns
a server program to handle them. Services handled by inetd include rlogin,
telnet, ftp etc.
 crond. This daemon executes scheduled commands.
 dhcpd. This daemon provides Dynamic Host Configuration Protocol services.
 ftpd. This daemon is often started by inetd to handle File Transfer Protocol
requests.
 httpd. This daemon acts as a web server.

All TCP connections are full duplex and point-to-point.

NOTE:
 Simplex: Only one of the two devices on a link can transmit, the other can only
receive. This mode uses the channel to send data in one direction. Example:
Keyboard and traditional monitors. The keyboard can only introduce input, the
monitor can only give the output.
 Half-Duplex: Each station can both transmit and receive, but not at the same
time. When one device is sending, the other can only receive. Example: Walkie-
talkie, in which messages are sent one at a time, in both directions.

 Full Duplex: Both stations can transmit and receive simultaneously. Ex:
Telephone.
 Point-to-point means that each connection has exactly two end points.
 TCP does not support multicasting or broadcasting.

12. A TCP connection is a byte stream, not a message stream. Message boundaries
are not preserved end to end.

For example, if the sending process sends four 512-byte packets to a TCP stream,
these data may be delivered to the receiving process as four 512-byte chunks, two
1024-byte chunks, or one 2048-byte chunk.
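This can be demonstrated with a local stream socket pair standing in for a TCP connection: the receiver sees a byte stream of the right total length, in whatever chunk sizes the implementation chooses, not the original four writes.

```python
import socket

# A connected pair of stream sockets as a stand-in for a TCP connection.
a, b = socket.socketpair()

# Sender writes four 512-byte "messages".
for _ in range(4):
    a.sendall(b"x" * 512)
a.close()

# Receiver reads whatever chunk sizes the stream delivers; the boundaries
# between the four writes are not preserved.
chunks = []
while True:
    chunk = b.recv(4096)
    if not chunk:
        break
    chunks.append(chunk)
b.close()

total = sum(len(c) for c in chunks)
print(total)        # 2048 bytes in all, in implementation-chosen chunks
```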

When an application passes data to TCP, TCP may send it immediately or buffer it
(in order to collect a larger amount to send at once), at its discretion.

If some data is to be sent immediately (ex: interactive game), TCP has the notion of
a PUSH flag that is carried on packets. Through this, applications tell TCP
implementations not to delay the transmission.

NOTE: Urgent data is an interesting feature of TCP service that remains in the
protocol but is rarely used. When an application has high priority data, the sending
application passes it to TCP with the URGENT flag. This causes TCP to stop
accumulating data and transmit everything it has immediately.
13. The TCP Protocol: Every byte on a TCP connection has its own 32-bit sequence
number. Separate 32-bit sequence numbers are carried on packets for the sliding
window position in one direction and for acknowledgements in the reverse direction.
The sending and receiving TCP entities exchange data in the form of segments. A
TCP segment consists of a 20-byte header followed by data bytes. The TCP
software decides how big segments should be.
The protocol used by TCP entities is the sliding window protocol with a dynamic
window size.
When a sender transmits a segment, it starts a timer. When the segment arrives at
the destination, the receiving TCP entity sends back a segment bearing an
acknowledgement number equal to the next sequence number it expects to receive
and the remaining window size.
If the sender’s timer goes off before the acknowledgement is received, the sender
transmits the segment again.
14. Complications that can arise:
 Segments can arrive out of order. Ex: Bytes 3072–4095 can arrive but cannot be
acknowledged because bytes 2048–3071 have not turned up yet.
 Segments can be delayed so long in transit that the sender times out and
retransmits them.
 Retransmissions may include different ranges than the original transmission.

15. The TCP Segment Header: The figure below shows the layout of a TCP segment.

Every segment begins with a 20-byte header. Note that segments without any data
are legal and are used for acknowledgements and control messages.

 The Source port and Destination port fields identify the local end points of the
connection.
 The source and destination points together identify the connection, which is
called a 5-tuple: protocol (TCP), source IP and source port, and destination IP
and destination port.
 Eight 1-bit flags: CWR and ECE are used to signal congestion when ECN
(Explicit Congestion Notification) is used, as specified in RFC 3168. ECE is set
to signal an ECN-Echo to a TCP sender to tell it to slow down when the TCP
receiver gets a congestion indication from the network. CWR is set to signal
Congestion Window Reduced from the TCP sender to the TCP receiver so that
it knows the sender has slowed down and can stop sending the ECN-Echo.
 URG -> Urgent
 ACK -> Acknowledgement
 PSH -> Pushed data
 RST -> Reset a connection
 SYN -> Synchronization
 FIN -> Finish
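The 20-byte header layout can be built and parsed with Python's struct module; the flag bits live in the low bits of the 16-bit field that also holds the data offset. Ports and sequence numbers below are illustrative.

```python
import struct

# The eight 1-bit flags, lowest bit first (FIN), per the TCP header layout.
FIN, SYN, RST, PSH, ACK, URG, ECE, CWR = (1 << i for i in range(8))

def make_header(src_port, dst_port, seq, ack_no, flags, window):
    offset_flags = (5 << 12) | flags          # data offset = 5 words (20 bytes)
    return struct.pack("!HHIIHHHH", src_port, dst_port, seq, ack_no,
                       offset_flags, window, 0, 0)   # checksum, urgent ptr = 0

hdr = make_header(40000, 80, 1000, 0, SYN, 65535)   # a client's opening SYN
assert len(hdr) == 20

# Parse it back and inspect the flag bits.
sp, dp, seq, ack_no, offset_flags, window, _, _ = struct.unpack("!HHIIHHHH", hdr)
flags = offset_flags & 0x1FF
print(sp, dp, bool(flags & SYN), bool(flags & ACK))   # 40000 80 True False
```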

16. TCP Connection: Connections are established in TCP by means of the three-way
handshake.

To establish a connection, the server waits for an incoming connection by executing
the LISTEN and ACCEPT primitives in that order, either specifying a specific source
or nobody in particular.
The client executes a CONNECT primitive, specifying
 the IP address and port to which it wants to connect,
 the maximum TCP segment size and
 some user data (e.g., a password).

The CONNECT primitive sends a TCP segment with the SYN bit on and ACK bit off
and waits for a response.
When this segment arrives at the destination, the TCP entity checks to see if there is
a process that has done a LISTEN on the port field. If not, it sends a reply with the
RST bit on to reject the connection.
If some process is listening to the port, that process is given the incoming TCP
segment. If it accepts the connection, an acknowledgement is sent back.
Note that a SYN segment consumes 1 byte of sequence space so that it can be
acknowledged.

If two hosts simultaneously try to establish a connection, the sequence of events is


as illustrated in Fig. 6-37(b). The result of these events is that just one connection is
established, because connections are identified by their end points. If the first and
second setups result in a connection identified by (x, y), only one table entry is made
for (x, y).
17. TCP Connection Management: The steps required to establish and release
connections can be represented in a finite state machine with the 11 states listed
in Fig. 6-38.

Each connection starts in the CLOSED state. It leaves that state when it performs
either a passive open (LISTEN) or an active open (CONNECT). If the other side
agrees, a connection is ESTABLISHED.
Connection release can be initiated by either side. When it is complete, the state
returns to CLOSED.
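A simplified subset of this state machine can be written as a transition table; the event names below are illustrative shorthand, not the exact events of the full 11-state machine.

```python
# A simplified subset of the TCP finite state machine:
# (state, event) -> next state.
TRANSITIONS = {
    ("CLOSED", "LISTEN"): "LISTEN",              # passive open
    ("CLOSED", "CONNECT"): "SYN_SENT",           # active open (sends SYN)
    ("LISTEN", "SYN_RECEIVED"): "SYN_RCVD",
    ("SYN_RCVD", "ACK_RECEIVED"): "ESTABLISHED",
    ("SYN_SENT", "SYN_ACK_RECEIVED"): "ESTABLISHED",
    ("ESTABLISHED", "CLOSE"): "FIN_WAIT_1",
    ("FIN_WAIT_1", "ACK_RECEIVED"): "FIN_WAIT_2",
    ("FIN_WAIT_2", "FIN_RECEIVED"): "TIME_WAIT",
    ("TIME_WAIT", "TIMEOUT"): "CLOSED",
}

def run(events, state="CLOSED"):
    # Feed a sequence of events through the table and return the final state.
    for event in events:
        state = TRANSITIONS[(state, event)]
    return state

# An active open followed by the server's SYN+ACK reaches ESTABLISHED...
print(run(["CONNECT", "SYN_ACK_RECEIVED"]))                   # ESTABLISHED
# ...and a full release returns to CLOSED.
print(run(["CLOSE", "ACK_RECEIVED", "FIN_RECEIVED", "TIMEOUT"],
          state="ESTABLISHED"))                               # CLOSED
```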
18. TCP Sliding Window: Window management in TCP decouples the issues of
acknowledgement of the segments and buffer allocation.
Ex: Suppose the receiver has a 4096-byte buffer. If the sender transmits a 2048-byte
segment, the receiver will acknowledge the same. Now only 2048 bytes of buffer
space is left; hence, it will advertise a window of 2048 starting at the next byte.
Now the sender transmits another 2048 bytes, which are acknowledged, but the
advertised window is of size 0. The sender must stop until the application on the
receiving host has removed some data from the buffer.
NOTE: When the window is 0, the sender may not normally send segments, with two
exceptions.
 First, urgent data may be sent, for example, to allow the user to kill the
process running on the remote machine.
 Second, the sender may send a 1-byte segment to force the receiver to re-
announce the next byte expected and the window size. This packet is called a
window probe.
The TCP standard provides this option to prevent deadlock (if a window update ever
gets lost).
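The 4096-byte buffer example can be traced with a toy model in which the advertised window is simply the free space in the receive buffer:

```python
class Receiver:
    # A receive buffer of fixed size; the advertised window is the free space.
    def __init__(self, size):
        self.size = size
        self.buffered = 0

    def receive(self, nbytes):
        assert nbytes <= self.window, "sender overran the advertised window"
        self.buffered += nbytes

    def application_reads(self, nbytes):
        self.buffered -= nbytes

    @property
    def window(self):
        return self.size - self.buffered

rcv = Receiver(4096)
rcv.receive(2048)
print(rcv.window)        # 2048: half the buffer is still free
rcv.receive(2048)
print(rcv.window)        # 0: the sender must stop (or send a window probe)
rcv.application_reads(2048)
print(rcv.window)        # 2048 again after the application consumes data
```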

19. Delayed Acknowledgements: The idea is to delay acknowledgements and window


updates for up to 500 msec in the hope of acquiring some data on which to hitch a
free ride.

20. Nagle’s Algorithm: When data comes into the sender in small pieces, just send the
first piece and buffer all the rest until the first piece is acknowledged. Then send all
the buffered data in one TCP segment and start buffering again until the next
segment is acknowledged.

Nagle’s algorithm will put the many pieces in one segment, greatly reducing the
bandwidth used.
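A toy model of the algorithm, assuming the application writes one byte at a time (segment size limits and timers are omitted):

```python
class NagleSender:
    # Send the first small piece at once; buffer later writes until it is ACKed.
    def __init__(self):
        self.buffer = b""
        self.unacked = False
        self.sent_segments = []

    def write(self, data):
        if not self.unacked:
            self.sent_segments.append(data)   # first piece goes out immediately
            self.unacked = True
        else:
            self.buffer += data               # everything else is buffered

    def ack_received(self):
        self.unacked = False
        data, self.buffer = self.buffer, b""
        if data:
            self.write(data)                  # buffered bytes leave as ONE segment

s = NagleSender()
for ch in b"hello":                           # the user types 5 bytes, one at a time
    s.write(bytes([ch]))
s.ack_received()                              # ACK for the first byte arrives
print(s.sent_segments)                        # [b'h', b'ello']: 2 segments, not 5
```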

21. Silly Window Syndrome: This problem occurs when data is passed to the sending
TCP entity in large blocks, but the application on the receiving side reads data only 1
byte at a time.

Initially, the TCP buffer on the receiving side is full (i.e., it has a window of size 0)
and the sender knows this. Then the application reads one character from the TCP
stream.
The receiving TCP now sends a window update to the sender saying that it is all
right to send 1 byte. The sender agrees and sends 1 byte. The buffer is now full, so
the receiver acknowledges the 1-byte segment and sets the window to 0. This
behaviour can go on forever.
22. Clark’s Solution: Clark’s solution is to prevent the receiver from sending a window
update for 1 byte. Instead, the receiver is forced to wait until it has a decent amount
of space available: specifically, until it can handle the maximum segment size it
advertised when the connection was established, or until its buffer is half empty,
whichever is smaller.
Also, the sender can help by not sending tiny segments; it should wait until it can
send a full segment, or at least one containing half of the receiver’s buffer.
23. NOTE: Nagle’s algorithm and Clark’s solution to the silly window syndrome are
complementary. Nagle was trying to solve the problem caused by the sending
application delivering data to TCP a byte at a time. Clark tried to solve the problem of
the receiving application accepting the data from TCP one byte at a time.

Both solutions are valid and can work together. The goal is for the sender not to
send small segments and the receiver not to ask for them.

24. Cumulative Acknowledgement: Acknowledgements can be sent only when all the
data bytes up to the acknowledged byte have been received. This is called a
cumulative acknowledgement. If the receiver gets segments 0, 1, 2, 4, 5, 6, and 7, it
can acknowledge everything up to and including the last byte in segment 2. If the
sender times out, it then retransmits segment 3. Since the receiver has buffered
segments 4 to 7, upon receipt of segment 3 it can acknowledge all bytes up to the
end of segment 7.
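The rule can be captured in a few lines, working at segment granularity for simplicity: the acknowledgement names the first segment still missing, so everything before it is implicitly confirmed.

```python
def cumulative_ack(received):
    # Return the number of the first segment still missing; everything
    # before it is implicitly acknowledged.
    got = set(received)
    next_expected = 0
    while next_expected in got:
        next_expected += 1
    return next_expected

# Segments 0-2 and 4-7 have arrived; only up to segment 2 can be acknowledged.
print(cumulative_ack([0, 1, 2, 4, 5, 6, 7]))       # 3
# Once the retransmitted segment 3 arrives, everything through 7 is covered.
print(cumulative_ack([0, 1, 2, 3, 4, 5, 6, 7]))    # 8
```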

25. Error Control and Flow Control: Error control is ensuring that the data is delivered
with the desired level of reliability, i.e. all data is delivered without any errors. Flow
control is keeping a fast transmitter from overrunning a slow receiver.

26. The error control and flow control concepts used in TCP are almost the same as
those in the data link layer (DLL).

 A frame carries an error-detecting code (e.g., a CRC or checksum) that is used
to check if the information was correctly received.
 A frame carries a sequence number to identify itself and is retransmitted by the
sender until it receives an acknowledgement of successful receipt from the
receiver. This is called ARQ (Automatic Repeat reQuest).
 There is a maximum number of frames that the sender will allow to be
outstanding at any time, pausing if the receiver is not acknowledging quickly. If
this maximum is one packet, the protocol is called stop-and-wait.
 The sliding window protocol combines these features and is also used to support
bidirectional data transfer.
 Even though the same mechanisms are used in DLL and TL, there are
differences in function and degree.

27. TCP Congestion Control: When the load offered to any network is more than it can
handle, congestion builds up. The network layer detects congestion when queues
grow large at routers and tries to manage it by dropping packets. In the transport
layer, the goal is to avoid congestion, rather than merely react to it.

NOTE: Goodput is the rate at which useful packets are delivered by the network.

28. Efficiency and Power: An efficient allocation of bandwidth across transport entities
will use all of the network capacity that is available.

As the load increases in Fig. 6-19(a), goodput initially increases at the same rate, but
as the load approaches the capacity, goodput rises more gradually. If the transport
protocol is poorly designed and retransmits packets that were merely delayed,
congestion collapse can occur.
The corresponding delay is given in Fig. 6-19(b). Initially the delay is nearly constant.
As the load approaches the capacity, the delay rises, and packets may be lost after
experiencing the maximum buffering delay.
For both goodput and delay, performance begins to degrade at the onset of
congestion. We will obtain the best performance if we allocate bandwidth before the
delay starts to climb rapidly.
Metric of power: power = load / delay
Power will initially rise as delay remains constant, but will reach a maximum and fall
as delay grows rapidly. The load with the highest power is called an efficient load
for the transport entity.
29. Max-min Fairness: This deals with how to allocate bandwidth among the existing
senders. Simply dividing the bandwidth into equal fractions is not correct – a sender
might be idle at that time or might be sending a heavy payload.
In a max-min fair allocation, the bandwidth given to one flow (path or connection)
cannot be increased without decreasing the bandwidth given to another flow with an
allocation that is no larger.
Ex: A max-min fair allocation is shown for a network with four flows, A, B, C, and D,
in Fig. 6-20.

Each of the links between routers has the same capacity, taken to be 1 unit.
Three flows compete for the bottom-left link between routers R4 and R5. Each of
these flows therefore gets 1/3 of the link.
The remaining flow, A, competes with B on the link from R2 to R3. Since B has an
allocation of 1/3, A gets the remaining 2/3 of the link.
If more of the bandwidth on the link between R2 and R3 is given to flow B, there will
be less for flow A. This is reasonable since A already has more bandwidth and it
loses nothing.

However, to give more bandwidth to B, the allocation of C or D (or both) must be
decreased, and these flows would then have less bandwidth than B. Thus, the
allocation is max-min fair.
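The allocation for this example can be computed by "progressive filling": raise all flows' rates together and freeze a flow when any link on its path saturates. Only the two bottleneck links of Fig. 6-20 are modelled, and the step size is an arbitrary small increment.

```python
# Progressive filling over the two bottleneck links, each of capacity 1 unit.
links = {"R2-R3": 1.0, "R4-R5": 1.0}
paths = {"A": ["R2-R3"], "B": ["R2-R3", "R4-R5"],
         "C": ["R4-R5"], "D": ["R4-R5"]}

rates = {f: 0.0 for f in paths}
active = set(paths)
step = 1e-4
while active:
    for f in list(active):
        rates[f] += step
    for link, cap in links.items():
        used = sum(rates[f] for f in paths if link in paths[f])
        if used >= cap - 1e-9:
            # Freeze every flow crossing the saturated link.
            active -= {f for f in paths if link in paths[f]}

print({f: round(r, 2) for f, r in sorted(rates.items())})
# B, C, D each get ~1/3 (bottlenecked at R4-R5); A gets the remaining ~2/3.
```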
30. Regulating the Sending Rate: This is another concept to avoid congestion in the
flow between transport entities.

In Fig. 6-22(a), we see a thick pipe leading to a small-capacity receiver. This is a


flow-control limited situation. As long as the sender does not send more water than
the bucket can contain, no water will be lost.
In Fig. 6-22(b), the limiting factor is not the bucket capacity, but the internal carrying
capacity of the network. If too much water comes in too fast, it will back up and some
will be lost by overflowing the funnel (not the bucket).
31. TCP Congestion Control: This is carried out in transport layer using AIMD rule.
 Additive Increase Multiplicative Decrease (AIMD): In this proposal, the users
additively increase their bandwidth allocations and then multiplicatively
decrease them when congestion is signalled. This is shown in Fig. 6-25.

TCP congestion control is based on implementing the AIMD approach using a
window and with packet loss as the binary congestion signal. TCP maintains a
congestion window whose size is the number of bytes the sender may have in the
network at any time. The corresponding rate is the window size divided by the
round-trip time of the connection.
 Congestion Collapse: It is a prolonged period during which goodput drops
sharply due to congestion in the network.

32. Slow Start Algorithm: When a connection is established, the sender initializes the
congestion window to a small initial value of at most four segments. The sender
then sends the initial window. The packets will take a round-trip time to be
acknowledged.
For each segment that is acknowledged, the sender adds one segment’s worth of
bytes to the congestion window. Since that segment has been acknowledged, there
is now one less segment in the network.
The result is that every acknowledged segment allows two more segments to be
sent, so the congestion window doubles every round-trip time.
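The doubling can be traced with a small simulation; the initial window, threshold, and number of round trips below are illustrative (real TCP counts the window in bytes and caps slow start at a threshold before switching to additive increase).

```python
def slow_start(initial_window, threshold, rtts):
    # Congestion window in segments; it doubles every RTT during slow start
    # because each ACKed segment adds one segment to the window.
    cwnd = initial_window
    history = [cwnd]
    for _ in range(rtts):
        cwnd = min(cwnd * 2, threshold)   # slow start ends at the threshold
        history.append(cwnd)
    return history

print(slow_start(initial_window=4, threshold=64, rtts=5))
# [4, 8, 16, 32, 64, 64]
```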

Part-2 Application Layer


1. World Wide Web: The Web is an architectural framework for accessing linked
content spread out over millions of machines all over the Internet.
The initial idea was to help teams, with members in many countries and time zones,
to collaborate using a collection of reports, blueprints, drawings, photos, and other
documents produced by experiments in particle physics.
The proposal for a web of linked documents came from CERN physicist Tim
Berners-Lee.
Marc Andreessen at the University of Illinois developed the first widely used
graphical browser, Mosaic, in February 1993. Others include Netscape Navigator,
Internet Explorer, etc.
2. Through the 1990s and 2000s, Web content grew exponentially until there were
millions of sites and billions of pages. A small number of these sites became
tremendously popular. Ex: Amazon, Google, Facebook (Meta), ChatGPT, etc.

3. Architecture of the WWW: The Web consists of a vast, worldwide collection of
content in the form of Web pages, often just called pages for short. Each page may
contain links to other pages anywhere in the world.

Users can follow a link by clicking on it, which then takes them to the page pointed
to. This process can be repeated indefinitely. The idea of having one page point to
another is called hypertext.
Pages are generally viewed with a program called a browser. Firefox, Internet
Explorer, and Chrome are examples of popular browsers.

4. HTTP—The Hyper Text Transfer Protocol: HTTP is a simple request-response


protocol that normally runs over TCP. It specifies what messages clients may send
to servers and what responses they get back in return.
The content may simply be a document that is read off a disk, or the result of a
database query and program execution. The page is a static page if it is a document
that is the same every time it is displayed.
In contrast, if it was generated on demand by a program or contains a program it is a
dynamic page.
5. Client Side: A browser is a program that can display a Web page and catch mouse
clicks to items on the displayed page. When an item is selected, the browser follows
the hyperlink and fetches the page selected.
The web needed mechanisms for naming and locating pages. Each page is
assigned a URL (Uniform Resource Locator) that effectively serves as the page’s
worldwide name.
URLs have three parts:
 Protocol (also known as the scheme),
 DNS name of the machine on which the page is located, and
 Path uniquely indicating the specific page (a file to read or program to run on
the machine).
Ex: http://www.gvpcew.ac.in/it.php
This URL consists of three parts: the protocol (http), the DNS name of the host
(www.gvpcew.ac.in), and the path name (it.php).
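The same split can be performed with Python's standard urllib.parse module:

```python
from urllib.parse import urlparse

url = "http://www.gvpcew.ac.in/it.php"
parts = urlparse(url)
# The three parts of the URL: protocol (scheme), DNS name of the host, path.
print(parts.scheme, parts.netloc, parts.path)   # http www.gvpcew.ac.in /it.php
```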

When a user clicks on a hyperlink, the browser carries out a series of steps in order
to fetch the page pointed to. Let us trace the steps that occur when our example link
is selected:
1. The browser determines the URL.
2. The browser asks DNS for the IP address of the server www.gvpcew.ac.in.
3. DNS replies with 128.208.3.88.
4. The browser makes a TCP connection to 128.208.3.88 on port 80, for the HTTP
protocol.
5. It sends over an HTTP request asking for the page /it.php.
6. The www.gvpcew.ac.in server sends the page as an HTTP response, by sending
the file /it.php.
7. If the page includes URLs that are needed for display, the browser fetches the
other URLs using the same process. In this case, the URLs include multiple
embedded images, an embedded video, and a script from google-analytics.com.
8. The browser displays the page /it.php.
9. The TCP connections are released if there are no other requests.

6. Some simplified URL forms are listed below:

7. A plug-in is a third-party code module that is installed as an extension to the


browser, as illustrated in Fig. 7-20(a). Common examples are plug-ins for PDF,
Flash, and Quicktime to render documents and play audio and video. Because plug-
ins run inside the browser, they have access to the current page and can modify its
appearance.

The other way to extend a browser is to make use of a helper application. This is a
complete program, running as a separate process. It is illustrated in Fig. 7-20(b).
Since the helper is a separate program, the interface accepts the name of a scratch
file where the content file has been stored, opens the file, and displays the contents.
Ex: Microsoft Word or PowerPoint.
8. Server Side: The browser parses the URL and interprets the part between http://
and the next slash as a DNS name to look up. Then, the browser establishes a TCP
connection to port 80 on that server and sends a command containing the rest of the
URL. The server then returns the page for the browser to display.
The steps that the server performs are:
 Accept a TCP connection from a client (a browser).
 Get the path to the page, which is the name of the file requested.
 Get the file (from disk).
 Send the contents of the file to the client.
 Release the TCP connection.

9. To tackle the problem of serving a single request at a time, we make the server
multithreaded. In one design, the server consists of a front-end module that accepts
all incoming requests and k processing modules, as shown in Fig. 7-21.

The k + 1 threads all belong to the same process, so the processing modules all
have access to the cache within the process’ address space. When a request comes
in, the front end accepts it and builds a short record describing it. It then hands the
record to one of the processing modules.
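This front-end/worker design can be sketched with threads sharing a queue; since the k + 1 threads belong to one process, the cache (here a plain dict) is visible to all of them. The paths and page contents are made up for illustration.

```python
import queue
import threading

k = 3
requests_q = queue.Queue()
results = []
results_lock = threading.Lock()
cache = {"/index.html": "<html>home</html>"}    # shared in-process cache

def processing_module():
    # Each processing module takes request records and serves them.
    while True:
        record = requests_q.get()
        if record is None:                      # shutdown signal
            break
        page = cache.get(record["path"], "404")
        with results_lock:
            results.append((record["path"], page))
        requests_q.task_done()

workers = [threading.Thread(target=processing_module) for _ in range(k)]
for w in workers:
    w.start()

# The front end accepts each request and builds a short record describing it.
for path in ["/index.html", "/missing.html", "/index.html"]:
    requests_q.put({"path": path})

requests_q.join()
for _ in workers:
    requests_q.put(None)
for w in workers:
    w.join()

print(sorted(results))
```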

10. HTTP Connections: The browser contacts a server to establish a TCP connection
to port 80 on the server’s machine. The advantage of using TCP is that neither
browsers nor servers have to worry about lost or duplicated data, reliability, or
congestion control – all are handled by TCP.

11. Persistent Connections: Through these types of connections, it is possible to


establish a TCP connection, send a request and get a response, and then send
additional requests and get additional responses. This strategy is also called
connection reuse.

In Fig. 7-36(b), the page is fetched with a persistent connection. That is, the TCP
connection is opened at the beginning, three requests are sent, one after the other,
and then the connection is closed.

Observe that the procedure completes more quickly. There are two reasons for the
speedup.
 Time is not wasted setting up additional connections.
 The transfer of the same images proceeds more quickly, because of TCP
congestion control.

12. Parallel Connection Method: It is possible to send one request per TCP connection,
but run multiple TCP connections in parallel. This has the disadvantage of extra
overhead but can offer better performance.

13. Methods: HTTP is designed for use in the Web, but was made more general to
support object-oriented uses. For this reason, methods are supported.

 The GET method requests the server to send the page.


 The HEAD method just asks for the message header, without the actual page.
This method can be used to collect information for indexing, or to test a URL.
 The POST method is used when forms are submitted; it uploads data to the
server.
 The PUT method is the reverse of GET: instead of reading the page, it writes the
page.
 DELETE removes the page.
 The TRACE method is for debugging. It instructs the server to send back the
request that it has received. This method is useful when requests are not being
processed correctly.
 The CONNECT method lets a user make a connection to a Web server through
an intermediate device, such as a Web cache.
 The OPTIONS method is used by a client to query the server for a page and
obtain the methods and headers that can be used on the page.

14. Electronic Mail:


(a) Architecture and Services:

The architecture of the email system consists of two kinds of subsystems:


 user agents -> allow people to read and send email, and
 message transfer agents -> move the messages from the source to the
destination.
The user agent is a program that provides a GUI that lets users interact with the
email system. It includes a means to compose messages and replies to messages,
display incoming messages, and organize messages by filing, searching, and
discarding them. The act of sending new messages into the mail system for delivery
is called mail submission.
The message transfer agents are typically system processes. They run in the
background on mail server machines. Their job is to automatically move email
through the system from the originator to the recipient with SMTP (Simple Mail
Transfer Protocol).
Mailboxes store the email that is received for a user. They are maintained by mail
servers.
There is a distinction between the envelope and its contents. The envelope contains
all the information needed for transporting the message, such as the destination
address, priority, and security level. The message transport agents use the envelope
for routing, just as the post office does.
The message inside the envelope consists of two separate parts: the header and the
body.

15. TELNET: TELNET stands for Teletype Network, a protocol that enables a local
computer to connect to a remote computer. It is used as a standard TCP/IP protocol
for the virtual terminal service as proposed by ISO.

NOTE: The computer which starts the connection is known as the local computer.

During telnet operation, whatever is being performed on the remote computer will be
displayed by the local computer. Telnet operates on a client/server principle.

The local computer uses a telnet client program and the remote computers use a
telnet server program.

16. Logging: The logging process can be further categorized into two parts:
(a) Local Login
(b) Remote Login

17. Local Login: Whenever a user logs into its local system, it is known as local login.

Procedure of Local Login:


 Keystrokes are accepted by the terminal driver when the user types at the
terminal.
 Terminal Driver passes these characters to OS.
 OS validates the combination of characters and opens the required application.

18. Remote Login: It is a process in which users can log in to a remote site, i.e., a
remote computer, and use the services that are available on it. With remote login,
the user’s keystrokes are carried to the remote computer and the results are
transferred back to the local computer for display.

The Procedure of Remote Login:


 When the user types on the local computer, the local OS accepts the data.
 The local computer sends the data to the TELNET client.
 The TELNET client transforms these characters into a universal character set
called Network Virtual Terminal (NVT).
 Commands in the form of NVT, travel through the Internet and it will arrive at
the TCP/IP stack at the remote computer.
 The remote operating system receives characters from a pseudo-terminal
driver, which is a piece of software that pretends that characters are coming
from a terminal.
 The operating system then passes the character to the appropriate application
program.

19. Domain Name System: The DNS is a hierarchical, domain-based naming scheme
and a distributed database system for implementing this naming scheme. It is used
for mapping host names to IP addresses and can also be used for other purposes.
DNS is used as follows. To map a name onto an IP address, an application
program calls a library procedure called the resolver, passing it the name as a
parameter.
The resolver sends a query containing the name to a DNS server, which looks up the
name and returns a response containing the IP address. The resolver then returns it
to the caller.
The query and response messages are sent as UDP packets. The program can
then establish a TCP connection with the host or send it UDP packets.
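The resolver's query is just a small UDP packet. The following sketch builds a DNS question per the RFC 1035 wire format (the query ID is arbitrary, and the name is the one from the URL example earlier); in practice this packet would be sent with sendto to the name server's UDP port 53.

```python
import struct

def build_dns_query(name, query_id=0x1234):
    # 12-byte header: ID, flags (recursion desired), QDCOUNT=1, zeroed counts.
    header = struct.pack("!HHHHHH", query_id, 0x0100, 1, 0, 0, 0)
    # QNAME: each label is length-prefixed; the name ends with a zero byte.
    qname = b"".join(bytes([len(label)]) + label.encode()
                     for label in name.split("."))
    qname += b"\x00"
    question = qname + struct.pack("!HH", 1, 1)   # QTYPE=A, QCLASS=IN
    return header + question

query = build_dns_query("www.gvpcew.ac.in")
qid, flags, qdcount = struct.unpack("!HHH", query[:6])
print(hex(qid), hex(flags), qdcount)    # 0x1234 0x100 1
```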
20. For the Internet, the top of the naming hierarchy is managed by an organization
called ICANN (Internet Corporation for Assigned Names and Numbers). ICANN was
created as part of maturing the Internet to a worldwide, economic concern.
Conceptually, the Internet is divided into over 250 top-level domains, where each
domain covers many hosts. Each domain is partitioned into subdomains, and these
are further partitioned, and so on.
All these domains can be represented by a tree, as shown in Fig. 7-1. The leaves of
the tree represent domains that have no subdomains (but do contain machines, of
course). A leaf domain may contain a single host, or it may represent a company and
contain thousands of hosts.

21. The process of looking up a name and finding an address is called name
resolution. When a resolver has a query about a domain name, it passes the query
to a local name server. If the domain falls under the jurisdiction of that name server,
it returns the authoritative resource records.
An authoritative record is one that comes from the authority that manages the record
and is thus always correct. Authoritative records are in contrast to cached records,
which may be out of date.
