
DEPARTMENT OF COMPUTER SCIENCE

DUAL DEGREE INTEGRATED POST GRADUATE PROGRAM

RAJIV GANDHI PROUDYOGIKI VISHWAVIDYALAYA, BHOPAL (M.P.)

Network Management Assignment File

Roll Number: 0007CS16DD10


Semester: VIII (DDI-PG)

Submitted by: KIRTI BAGHELE

Submitted to: PROF. CHETNA INDOREKAR

Q1. What functions are provided by SNMP?


Ans. Simple Network Management Protocol (SNMP) is an application-layer protocol for
monitoring and managing network devices on a local area network (LAN) or wide area network
(WAN). The purpose of SNMP is to provide network devices such as routers, servers and printers
with a common language for sharing information with a network management system (NMS).

SNMP is part of the original Internet Protocol Suite as defined by the Internet Engineering Task
Force (IETF). The most recent version of the protocol, SNMPv3, includes security mechanisms
for authentication, encryption and access control.

The Simple Network Management Protocol (SNMP) is the network management tool for TCP/IP
networks. It provides the following three basic capabilities:

(i) Get: enables the management station to retrieve the value of objects at the agent

(ii) Set: enables the management station to set the value of objects at the agent

(iii) Trap: enables an agent to notify the management station of significant events

SNMP software agents on network devices and services communicate with a network
management system to relay status information and configuration changes. The NMS provides a
single interface from which administrators can issue batch commands and receive automatic
alerts.

SNMP relies on the concept of a management information base (MIB) to organize how
information about device metrics gets exchanged. The MIB is a formal description of a network
device’s components and status information. MIBs can be created for any network device in the
Internet of Things (IoT), including IP video cameras, vehicles, industrial equipment and medical
equipment. In addition to hardware, SNMP can be used to monitor services such as Dynamic Host
Configuration Protocol (DHCP).

SNMP uses a blend of pull and push communications between network devices and the network
management system. The SNMP agent, which resides with the MIB on a network device, collects
status information continuously but pushes information to the network monitoring system only
upon request or when some monitored value crosses a pre-defined threshold, in which case it sends
an unsolicited notification known as a trap. Trap messages are typically sent to the management
server when something significant, such as a serious error condition, occurs.
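The sketch below is a conceptual illustration of these capabilities rather than real SNMP: a toy agent keeps its "MIB" as a Python dictionary, answers Get and Set requests, and pushes a Trap-style notification when a monitored value crosses a threshold. The names (ToyAgent, the OID string, the threshold) are hypothetical; a real manager and agent would exchange these messages over UDP (port 161 for Get/Set, port 162 for traps) using an SNMP library.

```python
# Minimal sketch (not a real SNMP implementation): a toy agent whose
# dictionary stands in for the MIB, plus Get/Set/Trap-style operations.
# All names (ToyAgent, OIDs, threshold) are hypothetical.

class ToyAgent:
    def __init__(self, trap_sink):
        # "MIB": object identifiers mapped to current values
        self.mib = {"1.3.6.1.2.1.1.5.0": "router-1",   # sysName-like object
                    "ifInErrors": 0}
        self.trap_sink = trap_sink          # callable that receives traps
        self.error_threshold = 100          # pre-defined trap threshold

    def get(self, oid):
        """Get: the management station retrieves the value of an object."""
        return self.mib.get(oid)

    def set(self, oid, value):
        """Set: the management station writes the value of an object."""
        self.mib[oid] = value
        self._check_thresholds()

    def _check_thresholds(self):
        """Trap: the agent notifies the manager of a significant event."""
        if self.mib["ifInErrors"] > self.error_threshold:
            self.trap_sink(f"TRAP: ifInErrors={self.mib['ifInErrors']}")


# "Management station" side: issue requests and receive traps.
agent = ToyAgent(trap_sink=lambda msg: print("NMS received:", msg))
print(agent.get("1.3.6.1.2.1.1.5.0"))   # Get  -> router-1
agent.set("ifInErrors", 150)            # Set  -> crosses threshold, Trap fires
```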

Q2. Explain what might happen if two stations are accidentally assigned the same
hardware address.
Ans. Depending on the hardware used, you generally get intermittent failures on both devices
because the network sees them as a single device. With intelligent hubs and network
management software, the duplicates can be identified and locked out. MAC addresses are
assigned by the hardware manufacturer to the network card, and in principle no two network
cards in the world should have the same MAC address.
Q3. Why wireless LAN can not use the same CSMA/CD mechanism that Ethernet uses?
Ans. The physical characteristics of Wi-Fi make it impractical for the CSMA/CD
mechanism to be used. CSMA/CD relies on 'listening' to check that the medium is free
before transmitting and on detecting collisions while transmitting; if a collision is detected
on the medium, end devices wait a random amount of time before they start the retransmission
process. This works well for wired networks. In wireless networks, however, there is no way
for the sender to detect collisions the way CSMA/CD does, because a radio can transmit and
receive on the medium but cannot reliably sense other transmissions while it is itself
transmitting. Therefore, CSMA/CA is used on wireless networks. CSMA/CA does not detect
collisions (unlike CSMA/CD) but rather avoids them through the use of control messages.
If a control message collides with a control message from another node, it means that
the medium is not available for transmission and the back-off algorithm must be applied
before attempting retransmission.
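The back-off mentioned above can be sketched as binary exponential back-off: after each failed attempt the station waits a random number of slots drawn from a contention window that doubles on every failure. This is a simplified illustration only, not the full IEEE 802.11 procedure (slot timing, CWmin/CWmax defaults and RTS/CTS handling are omitted), and the parameter values are merely plausible placeholders.

```python
import random

def backoff_slots(attempt, cw_min=15, cw_max=1023):
    """Pick a random back-off delay (in slot units) for the given retry attempt.
    The contention window doubles on each failure, capped at cw_max.
    Simplified sketch of 802.11-style binary exponential back-off."""
    cw = min(cw_max, (cw_min + 1) * (2 ** attempt) - 1)
    return random.randint(0, cw)

for attempt in range(4):
    print(f"attempt {attempt}: wait {backoff_slots(attempt)} slots")
```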

Q4. What is gratuitous ARP?


Ans. A gratuitous ARP is a special ARP (Address Resolution Protocol) reply that is not a response
to any ARP request, and no reply is expected to it. A gratuitous ARP packet has the following
characteristics:
• The source and destination IP addresses are both set to the IP address of the machine sending the
gratuitous ARP packet.
• The destination MAC address is the broadcast MAC address ff:ff:ff:ff:ff:ff.
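To make these characteristics concrete, the sketch below builds the raw bytes of a gratuitous ARP reply: an Ethernet frame with the broadcast destination MAC, carrying an ARP reply whose sender and target IP addresses are both the host's own address. The MAC and IP values are hypothetical placeholders, and actually transmitting the frame (which needs a raw AF_PACKET socket and root privileges on Linux) is left out.

```python
import struct, socket

def gratuitous_arp(src_mac: bytes, ip: str) -> bytes:
    """Build an Ethernet frame carrying a gratuitous ARP reply (sketch only)."""
    broadcast = b"\xff\xff\xff\xff\xff\xff"
    eth_header = broadcast + src_mac + struct.pack("!H", 0x0806)  # EtherType = ARP
    arp = struct.pack("!HHBBH",
                      1,        # hardware type: Ethernet
                      0x0800,   # protocol type: IPv4
                      6, 4,     # hardware / protocol address lengths
                      2)        # opcode 2 = reply
    arp += src_mac + socket.inet_aton(ip)     # sender MAC + sender IP (own IP)
    arp += broadcast + socket.inet_aton(ip)   # target MAC (broadcast) + same IP
    return eth_header + arp

# Hypothetical locally administered MAC and documentation IP address.
frame = gratuitous_arp(b"\x02\x00\x00\x00\x00\x01", "192.0.2.10")
print(len(frame), "bytes:", frame.hex())
```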

Q5. Find the network ID and host ID of the following IP addresses.


 117.34.4.8
 23.34.41.5
Ans.
1) 117.34.4.8
Since the given IP address comes under the range of Class A address. In Class A address first
octet comes in the part of NETID and last three octet (24 bits) comes in HOSTID.
IP address: 117.34.4.8
NetID: 117.0.0.0
HostID: 34.4.8
2) 29.34.41.5
Since the given IP address comes under the range of Class A address. IN Class A address first
octet comes in the part of NETID and last three octet (24 bits) comes in HOSTID.

IP address: 29.34.41.5
NetID: 29.0.0.0
HostID: 34.41.5
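The calculation above can be reproduced with a few lines of Python. The sketch applies the old classful rules only (first octet below 128 means Class A, so an 8-bit NetID and a 24-bit HostID); classful addressing is obsolete in practice, and the function ignores reserved ranges for simplicity.

```python
def classful_split(ip: str):
    """Return (class, NetID, HostID) using the old classful rules
    (illustrative only; real networks use classless CIDR today)."""
    o = ip.split(".")
    first = int(o[0])
    if first < 128:      # Class A: first octet = NetID, last 24 bits = HostID
        return "A", f"{o[0]}.0.0.0", ".".join(o[1:])
    elif first < 192:    # Class B: first two octets = NetID
        return "B", f"{o[0]}.{o[1]}.0.0", ".".join(o[2:])
    else:                # Class C (simplified): first three octets = NetID
        return "C", f"{o[0]}.{o[1]}.{o[2]}.0", o[3]

print(classful_split("117.34.4.8"))   # ('A', '117.0.0.0', '34.4.8')
print(classful_split("23.34.41.5"))   # ('A', '23.0.0.0', '34.41.5')
```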
Q6. What are the main advantages of IPv6 over IPv4?
Ans.

IPv4 provides approximately 4.3 billion addresses, managed and distributed by the Internet
Assigned Numbers Authority (IANA) to the Regional Internet Registries (RIRs) in blocks of
approximately 16.8 million addresses each. Exhaustion of the pool of unallocated IPv4
addresses began in 2011. This depletion led to the research and development of its
successor, Internet Protocol version 6 (IPv6).

Internet Protocol version 6 (IPv6) is the successor technology designed to address this
problem. IPv6 supports approximately 3.4 × 10^38 network addresses, the equivalent of
340 trillion trillion trillion addresses, or about 670 quadrillion addresses per
square millimeter of the Earth's surface.
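Those figures can be sanity-checked with two lines of arithmetic: IPv6 offers 2^128 addresses, and dividing by the Earth's surface area (roughly 510 million square kilometers, i.e. about 5.1 × 10^20 square millimeters) gives on the order of 6.7 × 10^17, roughly 670 quadrillion, addresses per square millimeter.

```python
ipv6_addresses = 2 ** 128                     # about 3.4e38 addresses
earth_surface_mm2 = 510_072_000 * (10 ** 12)  # ~5.1e8 km^2 expressed in mm^2
print(f"{ipv6_addresses:.3e}")                        # 3.403e+38
print(f"{ipv6_addresses / earth_surface_mm2:.3e}")    # ~6.7e+17 per mm^2
```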

The six points below give a side-by-side view of IPv4 and IPv6 in six important areas and show
the benefits of adopting IPv6. Not adopting IPv6 has some serious drawbacks and problems ahead
for organisations; for more about the positives of IPv6 and the negatives of not using it, see below:
1. IPv4 is Over

On the surface, the IPv4 world seems calm. However, the top-level body that assigns IPv4
addresses, IANA, announced as long ago as 2011 it had no more blocks of IPv4 left to
distribute. The Asia-Pacific registry APNIC also hit IPv4 exhaustion in 2011, as did the
European RIPE-NCC registry in 2012, and the South American LACNIC in 2014. The North
American registry, ARIN, announced in April 2014 that it had also reached the final stages of
IPv4 allocation. All registries strongly recommend immediate IPv6 adoption. IPv4 is done. It’s old
technology. Your current IPv4 range may be enough for life support for some time yet, but if
expansion or diversification is required, your networks will suffer. Any new technology
requiring Internet access will push network demand to the limit. Yes, there are stop-gaps
such as NAT boxes, but they are costly and require time-consuming expertise and
maintenance. Far better to put scarce resources into something with a future, and to do it
before IPv4 exhaustion becomes an emergency.

As Vint Cerf said on this issue, “Engineering in a crisis is never a good idea…”
2. Things and Clouds Need IPv6

Cloud computing is now fundamental to most enterprises, providing cheap, powerful
resources such as databases, applications, security and system administration that cannot be
afforded individually. IP addresses are critical for orchestrating cloud processes. To
commission or decommission cloud virtual machines, multiple IP addresses need to be
reserved or freed up with blinding speed. The IPv4-based Internet, increasingly hamstrung by
NATs, cannot provide such functionality, and the required numbers of addresses simply do
not exist in IPv4. The Internet of Things, the concept of communicating networks of
independent devices, is estimated to reach twenty to thirty billion devices by 2020. Every
networked device needs an address, and IPv4 has a hard limit of 4.3 billion. IPv6 has
approximately 3.4 × 10^38 (2^128) addresses.

IPv6 is the only technology that can scale to deal with massively distributed cloud
infrastructure and the Internet of Things.
3. IPv6 is On by Default

Almost all current device operating systems have working IPv6, many with IPv6 enabled by
default. See Wikipedia’s comparison of IPv6 support in operating systems, and the IPv6 for
Microsoft Windows FAQ. There is far more IPv6 traffic on most networks than commonly
recognized. If enterprise firewalls have not been expressly configured to handle IPv6, then
the enterprise is vulnerable to malicious traffic, no matter how sturdy the old IPv4 defenses
are.  IPv6 is on by default, and can be accidentally or deliberately used to bypass usage and
security policies.

4. Shadow Networks and IPv6

While IPv6 remains uncommon, it will be used by those seeking to avoid attention. The most
shadowy networks remain hidden except to devotees, but one well-known peer-to-peer
filesharing network, the Pirate Bay, went to IPv6 two years ago after courts began ordering
European ISPs to block Pirate Bay IPv4 addresses. IPv6 is also being used for free, fast
Internet. In 2012, large numbers of students began downloading the IPv6Now tunnel client to
avoid their slow ISP and use a free academic IPv6 server. Since then, the client has been
downloaded tens of thousands of times worldwide. While not illegal, this is certainly flying
under the radar of their network service providers. If you think your network’s not carrying
IPv6, it just means you don’t know about it.
5. Governments Use IPv6

Governments worldwide take IPv6 very seriously. The US government has already
transitioned to supporting IPv6 on all external services, and in 2014 mandated IPv6 for all
internal services. The Australian Government met a deadline in 2012 for external services to
be IPv6 capable. In Australia, the Department of Defense began its IPv6 migration in 2005.
In the US, DREN, the defense research and engineering network, has dedicated significant
effort to IPv6 implementations in everything from ‘network-centric warfare’ to networked
uniforms. Governments in India, Japan, Korea, Malaysia, Vietnam, etc., have mandated
IPv6-transition timetables. In April 2014, the Chinese government announced it would be
providing 20 billion Chinese yuan (3.2 billion US dollars) for IPv6 promotion and expansion.
IPv6 transition is actively supported by governments globally.
6. Business Continuity Needs IPv6

Connectivity is now essential to the viability of most enterprises. Management must always
be aware of issues that will impact service delivery. IPv4 exhaustion is a serious issue that will
prevent enterprises from significantly expanding networks or taking competitive advantage
of new features. Sadly, some levels of management dismiss IPv6 as a technical upgrade with
no commercial relevance. The case for avoiding IPv6 is flimsy, especially in light of governmental
adoption globally, and not acting is a neglect of corporate responsibilities. Adopting IPv6 is a
low-cost business continuity strategy.
Ques.7. Explain IP protocol with packet format?
Ans. The Internet Protocol, being a layer-3 (network layer) protocol in the OSI model, takes data
segments from layer-4 (transport) and divides them into packets. An IP packet encapsulates the
data unit received from the layer above and adds its own header information.

The encapsulated data is referred to as the IP payload. The IP header contains all the necessary
information to deliver the packet at the other end.

The IP header includes many fields, among them the Version Number, which in this context
is 4. The fields are as follows (a short header-parsing sketch follows this list) −
• Version − Version number of the Internet Protocol used (e.g. 4 for IPv4).
• IHL − Internet Header Length; the length of the entire IP header.
• DSCP − Differentiated Services Code Point; this is the Type of Service.
• ECN − Explicit Congestion Notification; carries information about congestion seen
on the route.
• Total Length − Length of the entire IP packet (including the IP header and IP payload).
• Identification − If an IP packet is fragmented during transmission, all the fragments
carry the same identification number, used to identify the original IP packet they belong to.
• Flags − If an IP packet is too large for the network resources to handle, these flags
tell whether it can be fragmented or not. In this 3-bit field, the MSB is
always set to 0.
• Fragment Offset − This offset tells the exact position of the fragment in the original
IP packet.
• Time to Live − To avoid looping in the network, every packet is sent with a TTL
value, which tells the network how many routers (hops) the packet can cross. At
each hop its value is decremented by one, and when the value reaches zero the
packet is discarded.
• Protocol − Tells the network layer at the destination host which protocol this
packet belongs to, i.e. the next-level protocol. For example, the protocol number of ICMP
is 1, TCP is 6 and UDP is 17.
• Header Checksum − This field holds the checksum value of the entire header,
which is used to check whether the packet was received error-free.
• Source Address − 32-bit address of the sender (source) of the packet.
• Destination Address − 32-bit address of the receiver (destination) of the packet.
• Options − This is an optional field, used if the value of IHL is greater than 5. These
options may carry values for features such as Security, Record Route, Time
Stamp, etc.
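The field layout above can be made concrete by unpacking the fixed 20-byte portion of an IPv4 header with Python's struct module. The sample bytes at the bottom are a hypothetical, hand-built header used only to exercise the parser; a real packet capture would supply the bytes instead.

```python
import struct, socket

def parse_ipv4_header(data: bytes) -> dict:
    """Unpack the fixed 20-byte IPv4 header (options, if any, follow it)."""
    (ver_ihl, tos, total_len, ident, flags_frag,
     ttl, proto, checksum, src, dst) = struct.unpack("!BBHHHBBH4s4s", data[:20])
    return {
        "version": ver_ihl >> 4,
        "ihl_words": ver_ihl & 0x0F,          # header length in 32-bit words
        "total_length": total_len,
        "identification": ident,
        "flags": flags_frag >> 13,            # 3-bit flags (MSB always 0)
        "fragment_offset": flags_frag & 0x1FFF,
        "ttl": ttl,
        "protocol": proto,                    # 1 = ICMP, 6 = TCP, 17 = UDP
        "src": socket.inet_ntoa(src),
        "dst": socket.inet_ntoa(dst),
    }

# Hypothetical header: version 4, IHL 5, TTL 64, protocol 6 (TCP), DF flag set.
sample = struct.pack("!BBHHHBBH4s4s", 0x45, 0, 40, 1234, 0x4000, 64, 6, 0,
                     socket.inet_aton("10.0.0.1"), socket.inet_aton("10.0.0.2"))
print(parse_ipv4_header(sample))
```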

Ques.8. Describe UDP Encapsulation and Protocol layering.

Ans. UDP Packets


UDP is a “connectionless” protocol. Unlike TCP, UDP does not check that data arrived at
the receiving host. Instead, UDP formats the message that is received from the application
layer into UDP packets. UDP attaches a header to each packet. The header contains the
sending and receiving ports, a field with the length of the packet, and a checksum.
The sending UDP process attempts to send the packet to its peer UDP process on the receiving
host. It is left to the application layer to determine whether the receiving UDP process
acknowledges reception of the packet; UDP itself requires no notification of receipt and does
not use the three-way handshake.
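As a rough sketch of this encapsulation, the code below prepends the 8-byte UDP header (source port, destination port, length, checksum) to an application-layer message. The checksum is left at zero for brevity, which IPv4 permits; a real stack computes it over a pseudo-header, and the port numbers used here are arbitrary examples.

```python
import struct

def udp_encapsulate(src_port: int, dst_port: int, payload: bytes) -> bytes:
    """Prepend the 8-byte UDP header to an application-layer message.
    Checksum is set to 0 (allowed in IPv4) to keep the sketch short."""
    length = 8 + len(payload)              # header + payload, in bytes
    header = struct.pack("!HHHH", src_port, dst_port, length, 0)
    return header + payload

segment = udp_encapsulate(40000, 53, b"example application data")
print(segment.hex())   # this UDP datagram would then be handed down to IP
```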
Layering in protocol models
The layering principle says that two instances of a protocol at layer N communicate with
each other by making calls to the protocol at layer N-1.

For example, two instances of TCP talk with each other by exchanging IP packets. A
correct TCP implementation should never interact directly with the device driver for the
Ethernet card.
The chunks of information exchanged are called protocol data units. For example, two
instances of TCP exchange transport protocol data units or T-PDUs. We use special
terminology for the PDUs at the data link and network layers:
• a data link PDU is called a frame;
• a network PDU is called a packet.

Ques.9. Explain TCP window size and counter flow control policy.
Ans. The TCP window size is one of the most commonly examined values when troubleshooting a
network or baselining an application. Many articles and books make it seem overwhelming, but it
is actually pretty straightforward. The TCP window size, or as some call it, the TCP receiver
window size, is simply an advertisement of how much data (in bytes) the receiving device is
willing to receive at any point in time. The receiving device can use this value to control the
flow of data, i.e. as a flow control mechanism.
Some operating systems will use a multiple of their maximum segment size (MSS) to calculate
the maximum TCP window size. For example, in Microsoft Windows 2000 on Ethernet
networks, the default value is 17,520 bytes, or 12 MSS segments of 1,460 bytes each. I suggest
you document your system's default since it can change when installing an application.
Pay close attention if the operating system uses the TCP window scaling option, since it increases
the total TCP window size by applying a multiplier value.

The throughput of a communication is limited by two windows: the congestion window and the
receive window. The congestion window tries not to exceed the capacity of the network
(congestion control); the receive window tries not to exceed the capacity of the receiver to
process data (flow control). The receiver may be overwhelmed by data if for example it is very
busy (such as a Web server). Each TCP segment contains the current value of the receive
window. If, for example, a sender receives an acknowledgment which acknowledges byte 4000
and specifies a receive window of 10000 (bytes), the sender will not send packets after byte
14000, even if the congestion window allows it.
Flow control basically means that TCP ensures a sender does not overwhelm a receiver by
sending packets faster than the receiver can consume them. It is similar to what is normally called
back pressure in the distributed-systems literature. The idea is that the node receiving data sends
some kind of feedback to the node sending the data to let it know about its current condition.
It’s important to understand that this is not the same as Congestion Control. Although there’s
some overlap between the mechanisms TCP uses to provide both services, they are distinct
features.
Congestion control is about preventing a node from overwhelming the network (i.e. the
links between two nodes), while Flow Control is about the end-node.
How it works
When we need to send data over a network, this is normally what happens:

The sender application writes data to a socket; the transport layer (in our case, TCP) wraps
this data in a segment and hands it to the network layer (e.g. IP), which routes the
packet to the receiving node.
On the other side of the communication, the network layer delivers this piece of data to
TCP, which makes it available to the receiver application as an exact copy of the data sent,
meaning it will not deliver packets out of order, and will wait for a retransmission if it
notices a gap in the byte stream.
Zooming in on the two endpoints:

TCP stores the data it needs to send in the send buffer, and the data it receives in the receive
buffer. When the application is ready, it reads data from the receive buffer.
Flow control is all about making sure we don’t send more packets when the receive buffer is
already full, as the receiver would not be able to handle them and would have to drop them.
To control the amount of data that TCP can send, the receiver advertises its receive
window (rwnd), that is, the spare room in the receive buffer.
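A minimal sketch of the sender-side rule described above, under the simplifying assumption that we track only byte counters: the sender may have at most rwnd unacknowledged bytes in flight, so the usable window is the advertised receive window minus the bytes sent but not yet acknowledged. The congestion window, which would further cap the amount sent, is ignored here.

```python
def usable_window(last_byte_acked: int, last_byte_sent: int, rwnd: int) -> int:
    """Bytes the sender may still transmit without overrunning the receiver."""
    in_flight = last_byte_sent - last_byte_acked
    return max(0, rwnd - in_flight)

# Example from the text: the ack covers byte 4000 and advertises a window of
# 10000 bytes, so the sender must stop at byte 14000 regardless of the
# congestion window.
print(usable_window(last_byte_acked=4000, last_byte_sent=4000, rwnd=10000))   # 10000
print(usable_window(last_byte_acked=4000, last_byte_sent=14000, rwnd=10000))  # 0
```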
Ques.10. Explain FTP, TFTP and RIP.

Ans.
FTP: The File Transfer Protocol (FTP) is a standard network protocol used for the transfer of
computer files between a client and server on a computer network.
FTP is built on a client-server architecture and uses separate control and data connections between the
client and the server. FTP users may authenticate themselves with a clear-text sign-in protocol, normally
in the form of a username and password, but can connect anonymously if the server is configured to allow
it. For secure transmission that protects the username and password and encrypts the content, FTP is
often secured with SSL/TLS (FTPS) or replaced with the SSH File Transfer Protocol (SFTP).
The first FTP client applications were command-line programs developed before operating
systems had graphical user interfaces, and are still shipped with most Windows, Unix,
and Linux operating systems. Many FTP clients and automation utilities have since been developed
for desktops, servers, mobile devices, and hardware, and FTP has been incorporated into
productivity applications, such as HTML editors.
TFTP: The Trivial File Transfer Protocol (TFTP) is a simple lockstep file transfer protocol which
allows a client to get a file from, or put a file onto, a remote host. One of its primary uses is in the
early stages of nodes booting from a local area network. TFTP has been used for this application
because it is very simple to implement. TFTP was first standardized in 1981.
Due to its simple design, TFTP can be implemented with a small memory footprint. It is
therefore the protocol of choice for the initial stages of any network booting strategy
such as BOOTP, PXE, BSDP, etc., where the targets range from well-resourced computers to very
low-resourced single-board computers (SBCs) and systems on a chip (SoCs). It is also used to
transfer firmware images and configuration files to network appliances such as routers, firewalls, IP
phones, etc. Today, TFTP is virtually unused for Internet transfers.
TFTP is a simple protocol for transferring files, implemented on top of the UDP/IP protocols using
well-known port number 69. TFTP was designed to be small and easy to implement, and
therefore it lacks most of the advanced features offered by more robust file transfer protocols. TFTP only
reads and writes files from or to a remote server. It cannot list, delete, or rename files or directories and it
has no provisions for user authentication. Today TFTP is generally only used
on local area networks (LANs).
RIP: The Routing Information Protocol (RIP) is one of the oldest distance-vector routing protocols
which employ the hop count as a routing metric. RIP prevents routing loops by implementing a
limit on the number of hops allowed in a path from source to destination. The
largest number of hops allowed for RIP is 15, which limits the size of networks that RIP can support.
RIP implements the split horizon, route poisoning and hold down mechanisms to prevent incorrect
routing information from being propagated.
In RIPv1, routers broadcast their full routing table every 30 seconds. In early deployments,
routing tables were small enough that the traffic was not significant. As networks grew in size, however, it
became evident there could be a massive traffic burst every 30 seconds, even if the routers had been
initialized at random times.
In most networking environments, RIP is not the preferred choice for routing as its time to
converge and scalability are poor compared to EIGRP, OSPF, or IS-IS. However, it is easy to configure,
because RIP does not require any parameters, unlike other protocols.
RIP uses the User Datagram Protocol (UDP) as its transport protocol, and is assigned the
reserved port number 520.
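To illustrate how little there is to TFTP, the sketch below builds a read-request (RRQ) packet as defined in RFC 1350: a 2-byte opcode of 1 followed by the filename and transfer mode, each terminated by a zero byte. The file name and server address are hypothetical, and the loop that receives DATA blocks and sends ACKs is omitted.

```python
import socket, struct

def tftp_rrq(filename: str, mode: str = "octet") -> bytes:
    """Build a TFTP read request: opcode 1, then filename and mode,
    each as a NUL-terminated string (RFC 1350)."""
    return struct.pack("!H", 1) + filename.encode() + b"\x00" + mode.encode() + b"\x00"

packet = tftp_rrq("firmware.bin")
print(packet)

# Sending it to a hypothetical server (reply handling omitted):
# sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# sock.sendto(packet, ("192.0.2.1", 69))   # TFTP listens on UDP port 69
```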
