
Routing and Switching (BTEC-905A-18)

Chapter 3

Network Construction
Link Aggregation: Link aggregation is a way of bundling a bunch of individual (Ethernet) links
together so they act like a single logical link.

If you have a switch with a whole lot of Gigabit Ethernet ports, you can connect all of them to another
device that also has a bunch of ports and balance the traffic among these links to improve performance.

Another important reason for using link aggregation is to provide fast and transparent recovery in case
one of the individual links fails.

Individual packets are kept intact and sent from one device to the other over one of the links. In fact,
the protocol usually tries to keep whole sessions on a single link. A packet from the next conversation
could go over a different link.

What is link aggregation?


The idea is to achieve improved performance by transmitting several packets simultaneously down
different links. But standard Ethernet link aggregation never chops up the packet and sends the bits over
different links.
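
How traffic is distributed across the member links is governed by a hashing (scheduling) rule, and on many switches this rule is configurable. As a hedged illustration in Cisco IOS-style syntax (exact keywords vary by platform), hashing on source and destination IP addresses keeps each conversation on one link while still spreading different conversations across the bundle:

Switch(config)#port-channel load-balance src-dst-ip   !hash frames on source/destination IP so a whole conversation stays on one link
Switch#show etherchannel load-balance                  !verify which load-balancing method is in use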

The official IEEE standard for link aggregation used to be called 802.3ad, but is now 802.1AX, as I
will explain later. However, several vendors have also developed their own proprietary variants.

Common link aggregation terminology


A lot of potentially confusing terms appear in any discussion of link aggregation, so let's quickly
review them before digging a bit further into the technology.

 A group of ports combined together is called a link aggregation group, or LAG. Different
vendors have their own terms for the concept. A LAG can also be called a port-channel, a bond,
or a team.
 Link aggregation groups can also be coupled together onto one network switch, creating a link
aggregation switch.
 The rule that defines which packets are sent along which link is called the scheduling
algorithm.
 The active monitoring protocol that allows devices to include or remove individual links from
the LAG is called Link Aggregation Control Protocol (LACP).

Link aggregation allows you to combine multiple Ethernet links into a single logical link between two
networked devices.


Link aggregation is sometimes called by other names:


 Ethernet bonding
 Ethernet teaming
 Link bonding
 Link bundling
 Link teaming
 Network interface controller (NIC) bonding
 NIC teaming
 Port aggregation
 Port channeling
 Port trunking
The most common device combinations involve connecting a switch to another switch, a server, a
network attached storage (NAS) device, or a multi-port access point.
Network devices and management functions treat the link aggregation group (LAG) of multiple Ethernet
connections as a single link. For example, you can include a LAG in a virtual local area network
(VLAN). You can also configure more than one LAG on the same switch, or add more than two
Ethernet links to the same LAG (the maximum number of links per LAG depends on your device).
Some network devices support Link Aggregation Control Protocol (LACP), which helps to prevent
errors in the link aggregation setup process.

What are the benefits of link aggregation?


Link aggregation offers the following benefits:
 Increased reliability and availability. If one of the physical links in the LAG goes down, traffic is
dynamically and transparently reassigned to one of the other physical links.
 Better use of physical resources. Traffic can be load-balanced across the physical links.
 Increased bandwidth. The aggregated physical links deliver higher bandwidth than each individual
link.
 Cost effectiveness. A physical network upgrade can be expensive, especially if it requires new cable
runs. Link aggregation increases bandwidth without requiring new equipment.


What are the different types of LAGs?


The two primary types of LAGs are static (also known as manual) and dynamic. Dynamic LAGs use
Link Aggregation Control Protocol (LACP) to negotiate settings between the two connected devices.
Some devices support static LAGs, but do not support dynamic LAGs with LACP. Refer to your
product’s user manual to see whether your device supports LACP.
Linux-based devices, such as NETGEAR ReadyNAS storage devices, often offer several additional
types of link aggregation that provide increased fault tolerance or load balancing instead of increased
bandwidth.
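
As an illustration of the difference, on many switches a static LAG and a dynamic (LACP) LAG are distinguished by a single keyword when the member ports are added to the group. The following hedged sketch uses Cisco IOS-style syntax; the interface range and group number are only examples, and other vendors use different commands:

!Static (manual) LAG - the ports are bundled with no negotiation protocol:
Switch(config)#interface range gigabitEthernet 0/1 - 2
Switch(config-if-range)#channel-group 1 mode on

!Dynamic LAG - LACP actively negotiates membership with the connected device:
Switch(config)#interface range gigabitEthernet 0/1 - 2
Switch(config-if-range)#channel-group 1 mode active

With LACP, a "passive" mode is also commonly available; it responds to LACP negotiation but does not initiate it, so at least one side of the link should be set to active.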

What is Link Aggregation Control Protocol (LACP)?


Link Aggregation Control Protocol is an IEEE standard defined in IEEE 802.3ad. LACP lets devices
send Link Aggregation Control Protocol Data Units (LACPDUs) to each other to establish a link
aggregation connection. You still need to configure the LAG on each device, but LACP helps prevent
one of the most common problems that can occur during the process of setting up link aggregation:
misconfigured LAG settings. If the devices detect that they cannot establish a link aggregation
connection, they do not try to establish it, and the link shows as “down” in the admin interface.
Another useful feature of LACP is that when one member link stops sending LACPDUs (if the cable is
unplugged, for example), it is removed from the LAG. This helps to minimize packet loss.
Both devices must support LACP for you to set up a dynamic LAG between those devices. We
recommend using LACP instead of a static LAG whenever both devices support LACP.

How do I set up link aggregation in my network?


The following instructions describe in general terms how to set up link aggregation between two devices
in your network. For more information about setting up link aggregation on specific NETGEAR devices,
see the NETGEAR product support page for your product. To find your product’s support page,
visit https://www.netgear.com/support/ and enter your product model number in the search box.
To set up link aggregation between two devices in your network:
1. Make sure that both devices support link aggregation.
2. Configure the LAG on each of the two devices.
3. Make sure that the LAG that you create on each device has the same settings for port speed, duplex
mode, flow control, and MTU size (on some devices, this setting might be called jumbo frames).
4. Make sure that all ports that are members of a LAG have the same virtual local area network
(VLAN) memberships.
If you want to add a LAG to a VLAN, set up the LAG first and then add the LAG to the VLAN; do
not add individual ports.
5. Note which ports on each device you add to the LAG, and make sure that you connect the correct
ones.


The LAG issues an alert and rejects the configuration if port members have different settings for port
speed, duplex mode, or MTU size, or if you accidentally connect ports that are not members of the
LAG.
6. Use Ethernet or fiber cable to connect the ports that you added to the LAG on each device.
7. Verify that the port LED for each connected port on each NETGEAR switch is blinking green.
8. Verify in the admin interface for each device that the link is UP.
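
Tying these steps together, a hedged Cisco IOS-style sketch for one side of the link is shown below; the same LAG number, speed, duplex, MTU, and VLAN settings would be mirrored on the second device, and the port numbers and VLAN IDs are only examples:

Switch(config)#interface range gigabitEthernet 0/1 - 4   !step 2: the ports that will be LAG members
Switch(config-if-range)#speed 1000                       !step 3: identical port speed on both devices
Switch(config-if-range)#duplex full                      !step 3: identical duplex mode on both devices
Switch(config-if-range)#channel-group 1 mode active      !dynamic LAG negotiated with LACP
Switch(config-if-range)#exit
Switch(config)#interface port-channel 1
Switch(config-if)#switchport mode trunk                  !step 4: VLAN membership is applied to the LAG, not to individual ports
Switch(config-if)#switchport trunk allowed vlan 10,20
Switch(config-if)#end
Switch#show etherchannel summary                         !step 8: verify that the LAG and its member ports are up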


VLAN Principles:
VLAN stands for Virtual LAN. It is a logical grouping of network devices. When we create a VLAN, we
break a large broadcast domain into smaller broadcast domains. Consider a VLAN as a subnet: just as two
different subnets cannot communicate without a router, different VLANs also require a router to
communicate.

Features
There are various features of VLAN, which are as follows −

 Virtual LANs offer a structure for creating groups of devices, even if their networks are different.

 They increase the number of broadcast domains possible in a LAN.

 VLANs reduce security risks, because the number of hosts connected to each broadcast domain
decreases.

 This is done by configuring a separate virtual LAN for only the hosts that have sensitive
information.

 They offer a flexible networking model that groups users depending on their departments instead of
their network location.

 Changing hosts/users on a VLAN is relatively easy; it just needs a new port-level
configuration.

 They reduce congestion by splitting traffic, as each individual VLAN works as a separate LAN.

Virtual Local Area Networks or Virtual LANs (VLANs) are a logical group of computers that appear to
be on the same LAN irrespective of the configuration of the underlying physical network. Network
administrators partition the networks to match the functional requirements of the VLANs so that each
VLAN comprises a subset of ports on a single switch or on multiple switches or bridges. This allows computers
and devices in a VLAN to communicate in the simulated environment as if they were on a separate LAN.


Features of VLANs

 A VLAN forms a sub-network by grouping together devices on separate physical LANs.


 VLANs help the network manager to segment LANs logically into different broadcast domains.
 VLANs function at layer 2, i.e. the Data Link Layer of the OSI model.
 One or more network bridges or switches may be used to form multiple, independent VLANs.
 Using VLANs, network administrators can easily partition a single switched network into
multiple networks depending upon the functional and security requirements of their systems.
 VLANs eliminate the requirement to run new cables or reconfigure physical connections in the
present network infrastructure.
 VLANs help large organizations re-partition devices, aiming at improved traffic management.
 VLANs also provide better security management, allowing partitioning of devices according to
their security criteria and ensuring a higher degree of control over connected devices.
 VLANs are more flexible than physical LANs since they are formed by logical connections. This
aids in quicker and cheaper reconfiguration of devices when the logical partitioning needs to be
changed.
Types of VLANs


 Protocol VLAN − Here, the traffic is handled based on the protocol used. A switch or bridge
segregates, forwards, or discards frames that come to it based upon the traffic's protocol.
 Port-based VLAN − This is also called a static VLAN. Here, the network administrator assigns
the ports on the switch / bridge to form a virtual network.
 Dynamic VLAN − Here, the network administrator simply defines network membership
according to device characteristics.
Advantages
The advantages of VLAN are as follows −

 It allows network administrators to apply additional security to network communication.

 It makes the expansion and relocation of a network or a network device easier.

 It provides flexibility, because administrators can configure VLANs in a centralized environment while
the devices might be located in different geographical locations.

 It decreases the latency and traffic load on the network and the network devices, offering
increased performance.

Disadvantages
The disadvantages of VLAN are as follows −

 It can have a high risk of virus issues because one infected system may spread a virus through
the whole logical network.

 It is more effective at controlling latency than a WAN but less efficient than a LAN.

VLAN Examples

To understand VLAN more clearly let's take an example.


 Our company has three offices.


 All offices are connected with back links.
 Company has three departments Development, Production and Administration.
 Development department has six computers.
 Production department has three computers.
 Administration department also has three computers.
 Each office has two PCs from the development department and one PC each from the production and
administration departments.
 Administration and production department have sensitive information and need to be separate from
development department.

With default configuration, all computers share same broadcast domain. Development department can
access the administration or production department resources.

With VLAN we could create logical boundaries over the physical network. Assume that we created
three VLANs for our network and assigned them to the related computers.

 VLAN Admin for Administration department


 VLAN Dev for Development department
 VLAN Pro for Production department

Physically we changed nothing, but logically we grouped devices according to their function. These
groups (VLANs) need a router to communicate with each other. Logically, our network looks like the
following diagram.

With the help of VLANs, we have separated our single network into three small networks. These networks
do not share broadcasts with each other, improving network performance. VLANs also enhance
security: now the Development department cannot access the Administration and Production departments
directly. Different VLANs can communicate only via the router, where we can configure a wide range of
security options.

So far in this article we have explained VLANs; in the following sections we will explain VLAN terms in
more detail.

VLAN Membership

VLAN membership can be assigned to a device by one of two methods

1. Static
2. Dynamic

These methods decide how a switch will associate its ports with VLANs.

Static
Assigning VLANs statically is the most common and secure method. It is pretty easy to set up and
supervise. In this method we manually assign a VLAN to a switch port. VLANs configured in this way are
usually known as port-based VLANs.

The static method is also the most secure method, because any switch port that we have assigned to a VLAN will
keep this association until we manually change it. It works really well in a networking
environment where any user movement within the network needs to be controlled.

Dynamic
In the dynamic method, VLANs are assigned to ports automatically depending on the connected device. In
this method we configure one switch on the network as a server. The server contains device-specific
information such as MAC address, IP address, etc. This information is mapped to a VLAN. The switch acting as
the server is known as the VMPS (VLAN Membership Policy Server). Only a high-end switch can be configured as a
VMPS; low-end switches work as clients and retrieve VLAN information from the VMPS.

Dynamic VLANs support plug-and-play mobility. For example, if we move a PC from one port to
another port, the new switch port will automatically be configured for the VLAN to which the user belongs. In
the static method we have to do this process manually.

VLAN Connections

When configuring a VLAN on a port, we need to know what type of connection it has.

Switches support two types of VLAN connections:


 Access link
 Trunk link

Access link
An access link connection is a connection where a switch port is connected to a device that has a
standardized Ethernet NIC. A standard NIC only understands IEEE 802.3 or Ethernet II frames. An access
link connection can be assigned to only a single VLAN. That means all devices connected to this port
will be in the same broadcast domain.

For example, if twenty users are connected to a hub, and we connect that hub to an access link port on the
switch, then all of these users belong to the same VLAN. If we want to keep ten users in another VLAN,
then we have to purchase another hub, plug those ten users into that hub, and then connect it
to another access link port on the switch.

Trunk link
A trunk link connection is a connection where a switch port is connected to a device that is capable of
understanding multiple VLANs. Usually a trunk link connection is used to connect two switches, or a switch to a
router. Remember, earlier in this article I said that a VLAN can span anywhere in the network; that happens
because of trunk link connections. Trunking allows us to send and receive VLAN information across the
network. To support trunking, the original Ethernet frame is modified to carry VLAN information.
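
As a hedged illustration in Cisco IOS-style syntax, a port that connects to another switch or to a router can be configured as a trunk carrying several VLANs; the interface name and VLAN numbers are only examples, and some older Catalyst models also require the switchport trunk encapsulation dot1q command before the trunk mode can be set:

Switch(config)#interface fastEthernet 0/24
Switch(config-if)#switchport mode trunk                  !carry frames from multiple VLANs over this link
Switch(config-if)#switchport trunk allowed vlan 10,20    !restrict the trunk to the VLANs that actually need it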

Trunk Tagging
In trunking, a separate logical connection is created for each VLAN instead of a single physical
connection. In tagging, the switch adds the source port's VLAN identifier to the frame so that the device at the
other end can understand what VLAN originated this frame. Based on this information, the destination switch
can make intelligent forwarding decisions based not just on the destination MAC address, but also on the source
VLAN identifier.


Since the original Ethernet frame is modified to add information, standard NICs will not understand this
information and will typically drop the frame. Therefore, we need to ensure that when we set up a trunk
connection on a switch's port, the device at the other end also supports the same trunking protocol and
has it configured. If the device at the other end doesn't understand these modified frames, it will drop
them. The modification of these frames is commonly called tagging. Tagging is done in hardware by
application-specific integrated circuits (ASICs).


GVRP (GARP VLAN Registration Protocol or Generic VLAN Registration Protocol)


GVRP (GARP VLAN Registration Protocol or Generic VLAN Registration Protocol) is a standards-
based protocol that facilitates control of virtual local area networks (VLANs) within a larger network.
GVRP conforms to the Institute of Electrical and Electronics Engineers (IEEE) 802.1Q specification,
which defines a method of tagging frames with VLAN configuration data over
network trunk interconnects. This enables network devices to dynamically exchange VLAN
configuration information with other devices.

GVRP is based on Generic Attribute Registration Protocol (GARP) and IEEE 802.1r, which defines
procedures for end stations and switches in a VLAN to register and deregister attributes, such as
identifiers or addresses, with each other. It provides every end station and switch with a current record
of all the other end stations and switches that can be reached on the network. GVRP is similar to GARP,
as both eliminate unnecessary network traffic by preventing attempts to transmit information to
unregistered users. In addition, it is necessary to manually configure only one switch with all the other
switches then being updated automatically.

Becoming part of a formal IEEE 802.1ak standard amendment in 2007, Multiple VLAN Registration
Protocol replaced GVRP, as it was found to be prone to performance issues that could potentially cause
prolonged network convergence. This delay was found to create bandwidth degradation on the network
at the point where the delayed convergence appeared. Technically, GVRP is still included as part of the
IEEE standard, as the amendment did not completely remove it. It is expected to be removed in the
future, but until that happens, GVRP is still being used.

Why use GVRP?


On large networks that consist of dozens or even hundreds of VLAN segments, GVRP can be used to
keep VLAN configurations on trunk interfaces organized across the network. There are three benefits
for administrators that enable GVRP on a network:

1. It enables switches to automatically delete unused VLANs so that only the VLANs that are in use
are transported across 802.1Q trunk links.
2. It enables admins to configure a new VLAN on one switch and then have it propagate the
configuration across all network switches participating in the GVRP process.
3. GVRP can eliminate some unnecessary broadcast traffic on the network, reducing bandwidth
overhead used for network management.
How does GVRP work?
Two or more switches in a network that are connected via 802.1Q trunk ports with GVRP enabled will
begin communicating both statically and dynamically with VLAN information. Switches with statically
configured VLANs will advertise them to connected switches using GVRP data units. Those units are
specifically designed management packets used to share VLAN information. If a switch learns of a new
VLAN from its neighbor, this VLAN is added to the list of VLAN tags that can be transported across the
link. The switch that learned the new information can then pass along its own statically configured
VLANs, in addition to ones learned from its neighbor. The caveat is that the switch cannot send
dynamically learned VLAN information out the same interface that it was learned on. This is a loop
avoidance technique.

All the dynamically learned VLAN information is stored in switch memory. So, if power is lost or the
switch is rebooted, the dynamically learned VLAN info is lost, and the VLANs are pruned from the
trunk interface. But, once the switches begin communicating again, they will relearn the shared VLAN
information to bring the network and all VLANs back into a fully informed state.

How do I enable GVRP?


While the exact processes and syntax for enabling GVRP on a compatible switch vary from one vendor
and switch model to the next, there are some similarities when it comes to enabling the use of GVRP:

 GVRP can be enabled or disabled globally on the switch.


 If GVRP is enabled, each individual switch port that is set up as an 802.1Q trunk can then be set to
one of the following modes:
o GVRP-enabled. This mode enables GVRP on the trunk interface and passes both static and
dynamically learned VLAN information to connected switches that are also in an enabled mode.
o GVRP-disabled. This mode disables the use of GVRP on the port, and a switch ignores any
GVRP communications from connected switches on this interface.
o Block GVRP registration. This mode will unlearn or discard all previously learned dynamic
VLANs and prevent VLAN creation or VLAN registration on the trunk port.
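
Because the exact commands differ between vendors, the following is only an illustrative, vendor-neutral sketch (it is not the real syntax of any specific switch); it simply mirrors the sequence described above: enable GVRP globally, enable it on the 802.1Q trunk interface, and optionally block registration on a port. Refer to your switch's documentation for the actual command names.

switch(config)# gvrp enable                    ! hypothetical: turn on GVRP globally
switch(config)# interface trunk-port-1         ! hypothetical trunk interface name
switch(config-if)# gvrp enable                 ! GVRP-enabled mode on this 802.1Q trunk
switch(config-if)# gvrp registration block     ! or: block dynamic VLAN registration on this port
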
What is an example of GVRP?
The following example shows that distribution switch A is connected to two access switches labeled B
and C. The access switches are connected to the distribution switch by an 802.1Q trunk with GVRP
enabled globally and on the trunk interface.


GARP (Generic Attribute Registration Protocol)

GVRP (GARP VLAN Registration Protocol) is a protocol that is used to control VLANs, dynamic
VLAN registration, and VLAN pruning in a network. GVRP does this dynamic VLAN
management by exchanging VLAN information and pruning unnecessary broadcast and unicast
traffic over 802.1Q trunk links. With GVRP, different vendors' switches can exchange VLAN
information with each other.

GVRP is the standards-based equivalent of Cisco VTP (VLAN Trunking Protocol). You do not need to
configure all the switches in a network with the same VLAN information. With GVRP (like VTP), you
can configure the VLAN information on one switch and then this information is propagated to the other
switches in the network. The figure above summarizes this mechanism well.

GARP (Generic Attribute Registration Protocol)

GVRP uses the operating mechanism of GARP (Generic Attribute Registration Protocol). GARP provides
dynamic propagation of VLAN information and is used for the registration and deregistration of different
VLAN attributes. Note that the acronym GARP is also commonly used for Gratuitous Address Resolution
Protocol (gratuitous ARP), which the rest of this section describes. Gratuitous ARP requests provide
duplicate IP address detection. A gratuitous ARP request is a broadcast request for a router's own IP
address, and it is used primarily by a host to inform the network about its IP address. If a router sends an
Address Resolution Protocol (ARP) request for its own IP address and no ARP replies are received, the
router's assigned IP address is not being used by other nodes. If a router sends an ARP request for its own
IP address and an ARP reply is received, the router's assigned IP address is already being used by another
node.

A gratuitous ARP is an ARP broadcast in which the source and destination IP addresses are the same. It is
used primarily by a host to inform the network about its IP address. A spoofed gratuitous ARP message
can cause network mapping information to be stored incorrectly, causing a network malfunction.

GARP is a method of establishing an association between a logical IP address and a hardware address
whenever an interface is created or the state of the interface shifts to the operationally up state. On the
other hand, ARP dynamically binds the IP address (the logical address) to the correct MAC address. The
device that transmits a GARP populates both the source and destination fields with its own information.
The devices that receive the GARP requests might update the ARP caches with the new information
contained in the GARP packets.

By default, updating the ARP cache on GARP replies is disabled on the router. On Ethernet interfaces,
you can enable transmission of GARP packets on a specific interface by using the ip gratuitous-arps
command in Interface Configuration mode and specify the number of GARP packets to be sent,
depending on the changes to IP interface settings. If an IP address is configured directly on the physical
Ethernet interface and a VLAN major interface is not configured on the Ethernet interface for VLAN
encapsulation, transmission of GARP packets does not take place.

When you create an IP interface or the administrative status of the interface transitions to the up state,
three GARP packets are transmitted for each IP address. Each GARP packet is sent at an interval of 10
seconds. By default, the router generates GARP requests. An IP interface can support up to a maximum
of 16 secondary IP addresses. Therefore, with the maximum number of secondary IP addresses
configured, a total of 48 GARP messages for each IP interface are sent. In a fully scaled environment,
such a transmission of a large number of GARP messages creates a storm of GARP packets in the entire
broadcast domain, which contains dynamic subscriber line access multiplexers (DSLAMs) and other
BRAS devices within the same Metro Ethernet network. In such a network, reducing the number of
GARP packets transmitted for interface changes reduces performance impact on the router and improves
the processing efficiency of the router.

GARP Packets Transmission Scenarios

The following scenarios describe the manner in which GARP packets are generated, based on the default
configuration settings for transmission of GARP packets and the network topology:

• Three GARP packets are sent when you configure a new primary or secondary IP address on an
IP interface.

• Three GARP packets are transmitted when an IP interface state transitions from the down state to
the up state.


• Three GARP packets are sent for each IP address of the numbered interface when a new
unnumbered interface associated with the numbered interface is created.

• Three GARP packets are sent for all the unnumbered interfaces whenever any secondary IP
address on the numbered interface that it is associated with is modified.

• Three GARP packets are sent for all the unnumbered interfaces for all the IP addresses whenever
the primary IP address of the numbered interface that it is associated with is modified.

In all of these scenarios, you can reduce the number of GARP packets to be transmitted to fewer
than three by using the ip gratuitous-arps command.

The following two scenarios describe the method of transmission of GARP packets, regardless of
whether the sending of GARP packets is disabled. In such cases, even if you configure the no ip
gratuitous-arps command to disable sending GARPs, these packets are sent to denote the changes in
system and interface conditions.
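
Based on the commands named above, a hedged sketch of tuning gratuitous ARP transmission on one interface might look like the following; the interface name is only an example, and the exact argument form of the ip gratuitous-arps command should be checked against your router's documentation:

Router(config)#interface fastEthernet 1/0
Router(config-if)#ip gratuitous-arps 1       !send one GARP packet per IP address instead of the default three
Router(config-if)#no ip gratuitous-arps      !or: disable sending GARPs on this interface entirely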

• One GARP packet is always sent for each virtual address of a VRRP interface. If you configure
VRRP on a virtual router and associate the IP address with the VRRP instance ID (VRID) using the ip
vrrp command in Interface Configuration mode, one GARP packet is always transmitted for each virtual
address of the interface enabled for VRRP.

• Three GARP packets are always sent when a failover occurs to the secondary link of the
redundant port on GE-2 and GE-HDE line modules that are paired with GE-2 SFP I/O modules, 2xGE
APS I/O SFP modules, and GE-2 APS I/O SFP modules, with physical link redundancy.

GARP, as used by GVRP, has different messages such as Attribute Join messages and Attribute Leave
messages. These messages contain multiple attributes such as Type, Event, and Value. The operation of
GVRP is carried out through these GARP messages. For example, if a device on the network sends
a VLAN ID from a port in a GARP message, the other end will record this information, i.e. the VLAN and
the port information.

GARP PDU

A GARP PDU is carried in an Ethernet packet; it is a Layer 2 PDU. A GARP PDU consists of the following
parts:
• Protocol ID
• Messages (1 to N)
• End Mark
There are multiple Messages in a GARP PDU. Each Message has some sub-parts. These are:


• Attribute Type

• Attribute List


VLAN routing

 VLAN is a network segment on a switched LAN which groups together the hosts on the
network, logically, regardless of their physical locations on the network.
 A Local Area Network (LAN) is a physical interconnection of network devices in a small
geographical area.
 Layer 3 switch/multilayer switch is a special type of switch which performs the function of both
switches at layer 2 and routers at layer 3 of the OSI model i.e., it can forward both the frames
and packets.
 Switched Virtual Interface (SVI) refers to a virtual interface which connects VLANs on the
network devices to their respective router engines.
 Default Gateway is a device that forwards IP packets and provides an access point to the other
networks. For instance, a router.
 Access Point is a network device that connects to a router or switch, creating a wireless local
area network and acting as a link between the router and the network users.
 Switch port refers to a physical opening on the switch or router where ethernet cables can be
plugged.
 Router Interface refers to a path that enables the router to connect to the network.

What is Inter-VLAN routing?

Virtual LANs (VLANS) are networks segments on a switched LAN. Inter-VLAN routing refers to the
movement of packets across the network between hosts in different network segments.

VLANs make it easier for one to segment a network, which in turn improves the performance of the
network and makes it more flexible, since they are logical connections.

VLANs act as separate subnets on the network. To move packets from one VLAN to another and enable
communication among hosts, we have to configure the VLAN network.

Inter-VLAN routing methods

1) Legacy inter-VLAN routing

In this method, multiple router interfaces are used, each connecting to a switch port in a different VLAN.
These interfaces serve as default gateways, which requires additional cabling when the network has
to be expanded.

Hence, adding additional network cables and upgrading infrastructure makes this method more expensive.


2) Router-on-a-stick

In this method, unlike legacy routing, one physical interface port is used for routing the traffic
between the network segments. The network administrator does not need a separate physical router
interface (like fa0/1 to fa0/10) for every VLAN.

Instead, sub-interfaces for all the VLANs are created on a single physical interface. This method is simple to
implement and is used for small to medium-sized networks.

3) Layer 3 switch using Switched Virtual Interface (SVI)

Currently, this method of inter-VLAN routing, which uses a layer 3/multilayer switch and Switched Virtual
Interfaces (SVIs), is the most preferred.

SVIs are created for the VLANs that exist on the switch, and they perform the same function for the VLANs as
a router interface would.

Layer 3 switches are expensive and are primarily suitable for large organization networks.
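
Although the rest of this article walks through the router-on-a-stick method, a hedged Cisco IOS-style sketch of the equivalent SVI configuration on a Layer 3 switch is shown below for comparison; it reuses the gateway addresses from the example that follows:

Switch(config)#ip routing                                  !enable Layer 3 forwarding on the switch
Switch(config)#interface vlan 10
Switch(config-if)#ip address 192.168.1.1 255.255.255.0     !SVI acting as the default gateway for VLAN 10
Switch(config-if)#no shutdown
Switch(config-if)#exit
Switch(config)#interface vlan 20
Switch(config-if)#ip address 192.168.2.1 255.255.255.0     !SVI acting as the default gateway for VLAN 20
Switch(config-if)#no shutdown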

Network configurations for Inter-VLAN communication using Router-On-Stick method

In this article, we will learn how to configure inter-VLAN routing using the router-on-a-stick method.

Consider a LAN with 4 PCs, 1 switch, and a router connected as shown in the image:


Now, we have to configure two VLANs, 10 and 20, with PC0 and PC1 on VLAN10, and PC2 and PC3
on VLAN20.

 IP Address of PC0 - 192.168.1.10


 IP Address of PC1 - 192.168.1.20
 IP Address of PC2 - 192.168.2.10
 IP Address of PC3 - 192.168.2.20
 The default gateway for VLAN10 - 192.168.1.1
 The default gateway for VLAN20 - 192.168.2.1

Step 1

For us to subdivide the network into two subnets, we have to create two VLANS on the
switch, VLAN10 and VLAN20. Give them custom names like VLAN 10 - student and VLAN 20 - staff.

To create two VLANs, we enter the configuration mode using the config terminal command, and then
we enter the VLAN number like vlan 10 along with the name.

Switch>enable !moving from user exec mode to privileged mode

Switch#config terminal !moving from privileged mode to global configuration mode
Switch(config)#vlan 10 !assigning the vlan number
Switch(config-vlan)#name student !assigns vlan 10 the name student
Switch(config-vlan)#exit
Switch(config)#vlan 20
Switch(config-vlan)#name staff !assigns vlan 20 the name staff
Switch(config-vlan)#exit

Step 2

Assign switch ports to the VLANs. Ports fa0/1 and fa0/2 act as access ports for VLAN10, while
ports fa0/3 and fa0/4 are access ports for VLAN20.

We shall use port fa0/5 as the trunk port for carrying the traffic between the two VLANs via the router.

NOTE: fa refers to the fast ethernet ports used for connecting the network hosts to the switch or router.

Configurations for access ports fa0/1 and fa0/2


Switch>enable !moving from user exec mode to privileged mode
Switch#config terminal !moving from privileged mode to global configuration mode
Switch(config)#int fa 0/1 !entering the interface port
Switch(config-if)#switchport mode access !making the interface fa0/1 an access port
Switch(config-if)#switchport access vlan 10 !making interface access port for vlan 10


Switch(config-if)#exit !exiting from the interface

Switch(config)#int fa 0/2 !entering the interface port
Switch(config-if)#switchport mode access !making the interface fa0/2 an access port
Switch(config-if)#switchport access vlan 10 !making interface access port for vlan 10
Switch(config-if)#exit !exiting from the interface

In the configuration above, fa0/1 and fa0/2 are configured as access ports using the command switchport
mode access.

Since they belong to vlan10, the switchport access vlan 10 command is used to configure them as access
ports within vlan10.

Configurations for access ports fa0/3 and fa0/4


Switch(config)#int fa 0/3 !entering the interface port
Switch(config-if)#switchport mode access !making the interface fa0/3 an access port
Switch(config-if)#switchport access vlan 20 !making interface access port for vlan 20
Switch(config-if)#exit !exiting from the interface

Switch(config)#int fa 0/4 !entering the interface port
Switch(config-if)#switchport mode access !making the interface fa0/4 an access port
Switch(config-if)#switchport access vlan 20 !making interface access port for vlan 20
Switch(config-if)#exit

In the configuration above, fa0/3 and fa0/4 are configured as access ports using the command switchport
mode access.

Since they belong to vlan20, the switchport access vlan 20 command is used to configure them as access
ports within vlan20.

Configurations for trunk port fa0/5


Switch(config)#int fa 0/5 !entering the interface port
Switch(config-if)#switchport mode trunk !making interface a trunk port.
Switch(config-if)#do write !saving the running configurations to start-up file

From the configuration above, interface fa0/5 serves as our trunk port. To configure it to serve as a trunk port
and not an access port, we use the command switchport mode trunk in interface configuration mode.

Step 3

Using static IP addressing, set the IP addresses to static on each PC on the network.


Step 4

Configure the router to enable the traffic to move from VLAN10 to VLAN20. For the PCs to
communicate, we subdivide the single physical interface into sub-interfaces, where each sub-interface will
act as the default gateway for one of the VLANs. This allows the two subnetworks to communicate
using the single interface.

Router>enable !moving from user exec mode to privileged exec mode
Router#config terminal !moving from privileged exec mode to global configuration mode
Router(config)#int g0/0 !entering the physical router interface gigabitEthernet 0/0
Router(config-if)#no shutdown !activating the interface
Router(config-if)#int g0/0.10 !first sub-interface, for vlan 10 on g0/0
Router(config-subif)#encapsulation dot1q 10 !configuring the sub-interface to respond to traffic from vlan 10
Router(config-subif)#ip add 192.168.1.1 255.255.255.0 !configuring the IP address of sub-interface g0/0.10
Router(config-subif)#exit !exiting from the sub-interface

Router(config)#int g0/0 !entering the physical router interface
Router(config-if)#no shutdown !activating the physical interface
Router(config-if)#int g0/0.20 !second sub-interface, for vlan 20 on g0/0
Router(config-subif)#encapsulation dot1q 20 !configuring the sub-interface to respond to traffic from vlan 20
Router(config-subif)#ip add 192.168.2.1 255.255.255.0 !configuring the IP address of sub-interface g0/0.20
Router(config-subif)#exit
Router(config)#do write !saving the running configuration into the start-up configuration file
Router(config)#exit

From the configuration above, the interface g0/0 is subdivided into two sub-interfaces: g0/0.10
for VLAN10 and g0/0.20 for VLAN20.

The two sub-interfaces are then assigned IP addresses and serve as the default gateways for the two VLANs,
while the physical interface carries the tagged traffic over the trunk link.

Step 5

Test the inter-VLAN connectivity by trying to ping the different PCs.

For instance, if we ping PC2 in VLAN20 from PC0 in VLAN10, it should be successful as shown
below:
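
On a Windows-style PC command prompt, the test is simply:

C:\>ping 192.168.2.10

Replies from 192.168.2.10 confirm that traffic from VLAN10 is being routed to VLAN20 through the router's sub-interfaces.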


Wireless LAN:
Wireless LANs (WLANs) are wireless computer networks that use high-frequency radio waves instead
of cables for connecting the devices within a limited area forming LAN (Local Area Network). Users
connected by wireless LANs can move around within this limited area such as home, school, campus,
office building, railway platform, etc.

Most WLANs are based upon the standard IEEE 802.11 standard or WiFi.

Components of WLANs
The components of WLAN architecture as laid down in IEEE 802.11 are −

 Stations (STA) − Stations comprise all devices and equipment that are connected to the
wireless LAN. Each station has a wireless network interface controller. A station can be of two
types −

o Wireless Access Point (WAP or AP)


o Client
 Basic Service Set (BSS) − A basic service set is a group of stations communicating at the
physical layer level. BSS can be of two categories −

o Infrastructure BSS
o Independent BSS
 Extended Service Set (ESS) − It is a set of all connected BSS.

 Distribution System (DS) − It connects access points in ESS.

Types of WLANs
WLANs, as standardized by IEEE 802.11, operate in two basic modes: infrastructure mode and ad hoc mode.


 Infrastructure Mode − Mobile devices or clients connect to an access point (AP) that in turn
connects via a bridge to the LAN or Internet. The client transmits frames to other clients via the
AP.

 Ad Hoc Mode − Clients transmit frames directly to each other in a peer-to-peer fashion.

Advantages of WLANs

o Flexibility: Within radio coverage, nodes can communicate without further restriction. Radio
waves can penetrate walls, senders and receivers can be placed anywhere (also non-visible, e.g.,
within devices, in walls etc.).
o Planning: Only wireless ad-hoc networks allow for communication without previous planning,
any wired network needs wiring plans.
o Design: Wireless networks allow for the design of independent, small devices which can for
example be put into a pocket. Cables not only restrict users but also designers of small notepads,
PDAs, etc.
o Robustness: Wireless networks can handle disasters, e.g., earthquakes, flood etc. whereas,
networks requiring a wired infrastructure will usually break down completely in disasters.
o Cost: The cost of installing and maintaining a wireless LAN is on average lower than the cost of
installing and maintaining a traditional wired LAN, for two reasons. First, after providing
wireless access to the wireless network via an access point for the first user, adding additional
users to a network will not increase the cost. And second, wireless LAN eliminates the direct
costs of cabling and the labor associated with installing and repairing it.
o Ease of Use: Wireless LAN is easy to use and the users need very little new information to take
advantage of WLANs.

Disadvantages of WLANs

o Quality of Service: The quality of a wireless LAN is typically lower than that of wired networks. The main
reasons for this are the lower bandwidth due to limitations in radio transmission, higher error rates
due to interference, and higher delay/delay variation due to extensive error correction and
detection mechanisms.
o Proprietary Solutions: Due to slow standardization procedures, many companies have come up
with proprietary solutions offering standardization functionality plus many enhanced features.
Most components today adhere to the basic standards IEEE 802.11a or 802.11b.
o Restrictions: Several govt. and non-govt. institutions world-wide regulate the operation and
restrict frequencies to minimize interference.
o Global operation: Wireless LAN products are sold in all countries so, national and international
frequency regulations have to be considered.


o Low power: Devices communicating via a wireless LAN are typically power-constrained, since many
wireless devices run on battery power. The WLAN design should take this into
account and implement special power-saving modes and power management functions.
o License-free operation: LAN operators do not want to apply for a special license to be able to use
the product. The equipment must operate in a license-free band, such as the 2.4 GHz ISM band.
o Robust transmission technology: Because wireless LANs use radio transmission, many other
electrical devices can interfere with them (such as vacuum cleaners, train engines, hair dryers,
etc.). Wireless LAN transceivers cannot be adjusted for perfect transmission in a standard office
or production environment.

Fundamentals of WLANs

1. HiperLAN

o HiperLAN stands for High performance LAN. While all of the previous technologies have been
designed specifically for an adhoc environment, HiperLAN is derived from traditional LAN
environments and can support multimedia data and asynchronous data effectively at high rates
(23.5 Mbps).
o A LAN extension via access points can be implemented using standard features of the
HiperLAN/1 specification. However, HiperLAN does not necessarily require any type of access
point infrastructure for its operation.
o HiperLAN was started in 1992, and standards were published in 1995. It employs the 5.15GHz
and 17.1 GHz frequency bands and has a data rate of 23.5 Mbps with coverage of 50m and
mobility< 10 m/s.
o It supports a packet-oriented structure, which can be used for networks with or without a central
control (BS-MS and ad-hoc). It supports 25 audio connections at 32kbps with a maximum
latency of 10 ms, one video connection of 2 Mbps with 100 ms latency, and a data rate of 13.4
Mbps.
o HiperLAN/1 is specifically designed to support adhoc computing for multimedia systems, where
there is no requirement to deploy a centralized infrastructure. It effectively supports MPEG or
other state of the art real time digital audio and video standards.
o The HiperLAN/1 MAC is compatible with the standard MAC service interface, enabling support
for existing applications to remain unchanged.
o HiperLAN 2 has been specifically developed to have a wired infrastructure, providing short-
range wireless access to wired networks such as IP and ATM.

The two main differences between HiperLAN types 1 and 2 are as follows:
o Type 1 has a distributed MAC with QoS provisions, whereas type 2 has a centralized schedule
MAC.


o Type 1 is based on Gaussian minimum shift keying (GMSK), whereas type 2 is based on OFDM.
o HiperLAN/2 automatically performs handoff to the nearest access point. The access point is
basically a radio BS that covers an area of about 30 to 150 meters, depending on the
environment. MANETs can also be created easily.

The goals of HiperLAN are as follows:


o QoS (to build multiservice network)
o Strong security
o Handoff when moving between local area and wide areas
o Increased throughput
o Ease of use, deployment, and maintenance
o Affordability
o Scalability

One of the primary features of HiperLAN/2 is its high-speed transmission rate (up to 54 Mbps). It uses
a modulation method called OFDM to transmit signals. It is connection oriented, and traffic is
transmitted on bidirectional links for unicast traffic and on unidirectional links toward the MSs for
multicast and broadcast traffic.

This connection oriented approach makes support for QoS easy, which in turn depends on how the
HiperLAN/2 network incorporates with the fixed network using Ethernet, ATM, or IP.

The HiperLAN/2 architecture shown in the figure allows for interoperation with virtually any type of
fixed network, making the technology both network and application independent.


HiperLAN/2 networks can be deployed at "hot spot" areas such as airports and hotels, as an easy way of
offering remote access and internet services.

2. Home RF Technology

o A typical home needs a network inside the house for access to a public network telephone and
internet, entertainment networks (cable television, digital audio and video with the IEEE 1394),
transfer and sharing of data and resources (printer, internet connection), and home control and
automation.
o The device should be able to self-configure and maintain connectivity with the network. The
devices need to be plug and play enabled so that they are available to all other clients on the
network as soon as they are switched on, which requires automatic device discovery and
identification in the system.
o Home networking technology should also be able to accommodate any and all lookup services,
such as Jini. Home RF products allow you to simultaneously share a single internet connection
with all of your computers - without the hassle of new wires, cables or jacks.
o Home RF visualizes a home network as shown in the figure:


o A network consists of resource providers, which are gateways to different resources like phone
lines, cable modem, satellite dish, and so on, and the devices connected to them such as cordless
phone, printers and fileservers, and TV.
o The goal of Home RF is to integrate all of these into a single network suitable for all applications,
removing all wires and utilizing RF links instead.
o This includes sharing PC, printer, fileserver, phone, internet connection, and so on, enabling
multiplayer gaming using different PCs and consoles inside the home, and providing complete
control on all devices from a single mobile controller.
o With Home RF, a cordless phone can connect to PSTN but also connect through a PC for
enhanced services. Home RF makes an assumption that simultaneous support for both voice and
data is needed.

Advantages of Home RF
o In Home RF all devices can share the same connection, for voice or data at the same time.
o Home RF provides the foundation for a broad range of interoperable consumer devices for
wireless digital communication between PCs and consumer electronic devices anywhere in and
around the home.
o The working group includes Compaq Computer Corp., Ericsson Enterprise Networks, IBM, Intel
Corp., Motorola Corp., and others.
o A specification for wireless communication in the home called the shared wireless access
protocol (SWAP) has been developed.

3. IEEE 802.11 Standard

IEEE 802.11 is a set of standards for wireless local area networks (WLANs). The first version was released
in 1997 and operates in the industrial, scientific, and medical (ISM) band. IEEE 802.11 was quickly
deployed across a wide region, but under its standards the network occasionally receives
interference from devices such as cordless phones and microwave ovens. The aim of IEEE 802.11 is to
provide wireless network connections for fixed, portable, and moving stations within tens to hundreds of
meters with one medium access control (MAC) and several physical layer specifications. Later amendments
such as IEEE 802.11a, 802.11b, 802.11g, and 802.11n differ mainly in the
specification of the PHY layer.

4. Bluetooth

Bluetooth is one of the major wireless technologies developed to achieve WPAN (wireless personal area
network). It is used to connect devices of different functions such as telephones, computers (laptop or
desktop), notebooks, cameras, printers, and so on.


Architecture of Bluetooth
o Bluetooth devices can interact with other Bluetooth devices in several ways, as shown in the figure. In the
simplest scheme, one of the devices acts as the master and up to seven others act as slaves.
o A network with a master and one or more slaves associated with it is known as a piconet. A
single channel (and bandwidth) is shared among all devices in the piconet.

o Each of the active slaves has an assigned 3-bit active member address. Many other slaves can
remain synchronized to the master while remaining inactive; these are referred to as parked nodes.
o The master regulates channel access for all active nodes and parked nodes. If two piconets are
close to each other, they have overlapping coverage areas.
o This scenario, in which nodes of two piconets intermingle, is called a scatternet. Slaves in one
piconet can participate in another piconet as either a master or slave through time division
multiplexing.
o In a scatternet, the two (or more) piconets are not synchronized in either time or frequency. Each
of the piconets operates in its own frequency hopping channel, and any devices in multiple
piconets participate at the appropriate time via time division multiplexing.
o The Bluetooth baseband technology supports two link types: synchronous connection-oriented
(SCO) links, used primarily for voice, and asynchronous connectionless (ACL) links, used essentially
for packet data.


Difference between WLAN and WWAN:

01. WLAN is known as Wireless Local Area Network or LAWN (Local Area Wireless Network); WWAN is known as Wireless Wide Area Network.
02. WLAN uses radio, infrared, and microwave transmission; WWAN uses a cellular network.
03. WLAN can cover only a small organization at most; WWAN has worldwide coverage.
04. WLAN has excellent speed and performance; WWAN has very low speed and performance.
05. WLAN uses WEP or WPA, which are less secure; WWAN uses 128-bit encryption, which makes it very secure.
06. WLAN takes very little time to set up and run; WWAN users have to make a contract with their ISPs.
07. WLAN can be used with all devices; WWAN is limited to mobile devices only.
08. A WLAN upgrade is very cheap; WWAN upgrades cost more comparatively.
09. WLAN has 802.11 standards; WWAN has 802.20 standards, and GSM, GPRS and CDMA are the standards of Wireless Wide Area Networks.
10. Typical applications of WLAN include inventory management, Internet access from a hotspot, etc.; typical applications of WWAN include field service, field sales, mobile messaging, etc.


Bridging Enterprise Networks with Serial WAN Technology:

Serial technology has in recent years been slowly phased out in many parts of networks in favor of Ethernet
technology; however, it still remains active as a legacy technology in a great number of enterprise networks
alongside Ethernet. Serial has traditionally provided solutions for communication over long distances
and therefore remains a prominent technology for Wide Area Network (WAN) communications, for
which many protocols and legacy WAN technologies remain in operation at the enterprise edge. A
thorough knowledge of these technologies is required to support many aspects of WAN operation.

A WAN (Wide Area Network) is a network that connects geographically distant areas. It can be used for
a client to connect to the corporate network, for connections between the branch offices of a company, and
for similar wide-area connections.

As a reminder, a Local Area Network (LAN) is the smaller network, and we can simply say that each
enterprise branch has one or more Local Area Networks (LANs).


WANs and the related protocols operate at the bottom two layers (Physical and Data Link) of the OSI
model. The Physical Layer determines the connections; the Data Link Layer provides the encapsulated
transmission. The protocols used at the Data Link Layer for WANs are HDLC, PPP, Frame Relay, ATM, etc. We
will talk about these protocols in detail in other lessons.

Serial WAN Communication

Serial WAN communication is divided into two groups. These are:

• Synchronous communication
• Asynchronous communication
Synchronous communication is communication that uses a clock (timing). The sender and receiver are
synchronized with this clock. It is a faster transfer method with less overhead; a block of characters is
sent at a time.
Asynchronous communication is communication that does not use timing. The sender and receiver are not
synchronized, and one character is sent at a time.
Serial interfaces can be DTE (Data Terminal Equipment) or DCE (Data Communications Equipment).
The DCE provides clocking and converts user data into the service provider's format; a CSU/DSU is an
example of a DCE. A DTE needs a DCE for clocking.
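
In a back-to-back lab setup where one router's serial interface plays the DCE role, that clocking is supplied with the clock rate command. A hedged Cisco IOS-style sketch (the interface name and rate are only examples):

Router(config)#interface serial 0/0/0
Router(config-if)#clock rate 64000      !only the DCE end of the cable provides clocking, in bits per second
Router(config-if)#no shutdown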


Serial Communication

Figure shows a simple representation of a serial communication across a WAN. Data is encapsulated by
the communications protocol used by the sending router. The encapsulated frame is sent on a physical
medium to the WAN. There are various ways to traverse the WAN, but the receiving router uses the
same communications protocol to de-encapsulate the frame when it arrives.

Serial Communication Process

There are many different serial communication standards, each one using a different signaling method.
There are three important serial communication standards affecting LAN-to-WAN connections:

 RS-232: Most serial ports on personal computers conform to the RS-232C or newer RS-422 and
RS-423 standards. Both 9-pin and 25-pin connectors are used. A serial port is a general-purpose
interface that can be used for almost any type of device, including modems, mice, and printers.
These types of peripheral devices for computers have been replaced by new and faster standards
such as USB but many network devices use RJ-45 connectors that conform to the original RS-
232 standard.
 V.35: Typically used for modem-to-multiplexer communication, this ITU standard for high-
speed, synchronous data exchange combines the bandwidth of several telephone circuits. In the
U.S., V.35 is the interface standard used by most routers and DSUs that connect to T1 carriers.
V.35 cables are high-speed serial assemblies designed to support higher data rates and
connectivity between DTEs and DCEs over digital lines. There is more on DTEs and DCEs later
in this section.


 HSSI: A High-Speed Serial Interface (HSSI) supports transmission rates up to 52 Mbps. Engineers use HSSI to connect routers on LANs with WANs over high-speed lines, such as T3 lines. Engineers also use HSSI to provide high-speed connectivity between LANs, using Token Ring or Ethernet. HSSI is a DTE/DCE interface developed by Cisco Systems and T3 Plus Networking to address the need for high-speed communication over WAN links.

WAN Protocols

There are several WAN Protocols that are used between different locations of different networks. These
protocols are:
• HDLC
• PPP
• Frame Relay

HDLC (High-Level Data Link Control) is a layer 2 WAN encapsulation protocol that is used on synchronous data links. It is the simplest WAN protocol that can connect your remote offices over leased lines. It has both an industry-standard and a Cisco-proprietary version.

HDLC Encapsulation

HDLC is a synchronous data link layer protocol developed by the International Organization for
Standardization (ISO). Although HDLC can be used for point-to-multipoint connections, the most
common usage of HDLC is for point-to-point serial communications.

WAN Encapsulation Protocols

On each WAN connection, data is encapsulated into frames before crossing the WAN link. To ensure
that the correct protocol is used, the appropriate Layer 2 encapsulation type must be configured. The
choice of protocol depends on the WAN technology and the communicating equipment. Figure 3-
16 displays the more common WAN protocols and where they are used. The following are short
descriptions of each type of WAN protocol:

 HDLC: The default encapsulation type on point-to-point connections, dedicated links, and
circuit-switched connections when the link uses two Cisco devices. HDLC is now the basis for
synchronous PPP used by many servers to connect to a WAN, most commonly the Internet.
 PPP: Provides router-to-router and host-to-network connections over synchronous and
asynchronous circuits. PPP works with several network layer protocols, such as IPv4 and IPv6.
PPP uses the HDLC encapsulation protocol, but also has built-in security mechanisms such as
PAP and CHAP.


 Serial Line Internet Protocol (SLIP): A standard protocol for point-to-point serial connections
using TCP/IP. SLIP has been largely displaced by PPP.
 X.25/Link Access Procedure, Balanced (LAPB): An ITU-T standard that defines how
connections between a DTE and DCE are maintained for remote terminal access and computer
communications in public data networks. X.25 specifies LAPB, a data link layer protocol. X.25
is a predecessor to Frame Relay.
 Frame Relay: An industry standard, switched, data link layer protocol that handles multiple
virtual circuits. Frame Relay is a next-generation protocol after X.25. Frame Relay eliminates
some of the time-consuming processes (such as error correction and flow control) employed in
X.25.
 ATM: The international standard for cell relay in which devices send multiple service types,
such as voice, video, or data, in fixed-length (53-byte) cells. Fixed-length cells allow processing
to occur in hardware, thereby reducing transit delays. ATM takes advantage of high-speed
transmission media such as E3, SONET, and T3.

Figure 3-16 WAN Encapsulation Protocols

HDLC Encapsulation (3.1.2.2)

HDLC is a bit-oriented synchronous data link layer protocol developed by the International
Organization for Standardization (ISO). The current standard for HDLC is ISO 13239. HDLC was
developed from the Synchronous Data Link Control (SDLC) standard proposed in the 1970s. HDLC
provides both connection-oriented and connectionless service.

HDLC uses synchronous serial transmission to provide error-free communication between two points.
HDLC defines a Layer 2 framing structure that allows for flow control and error control through the use
of acknowledgments. Each frame has the same format, whether it is a data frame or a control frame.


When frames are transmitted over synchronous or asynchronous links, those links have no mechanism to
mark the beginning or end of frames. For this reason, HDLC uses a frame delimiter, or flag, to mark the
beginning and the end of each frame.

Cisco has developed an extension to the HDLC protocol to solve its inability to provide multiprotocol support. Although Cisco HDLC (also referred to as cHDLC) is proprietary, Cisco has allowed many other network equipment vendors to implement it. Cisco HDLC frames contain a field for identifying the network protocol being encapsulated. Figure 3-17 compares standard HDLC to Cisco HDLC.

Figure 3-17 Standard and Cisco HDLC Frame Format

HDLC Frame Types (3.1.2.3)

HDLC defines three types of frames, each with a different control field format.

Flag

The Flag field initiates and terminates error checking. The frame always starts and ends with an 8-bit
Flag field. The bit pattern is 01111110. Because there is a likelihood that this pattern occurs in the actual
data, the sending HDLC system always inserts a 0 bit after every five consecutive 1s in the data field, so
in practice the flag sequence can only occur at the frame ends. The receiving system strips out the
inserted bits. When frames are transmitted consecutively, the end flag of the first frame is used as the
start flag of the next frame.
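
The zero-insertion rule can be made concrete with a short example. The following Python sketch is purely illustrative (it is not taken from any real HDLC implementation): it stuffs a 0 bit after every run of five consecutive 1s on the sending side and strips it again on the receiving side.

def bit_stuff(bits):
    """Insert a 0 after every run of five consecutive 1s (HDLC zero insertion)."""
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b == 1 else 0
        if run == 5:          # five 1s in a row: stuff a 0 so the flag 01111110 cannot appear in data
            out.append(0)
            run = 0
    return out

def bit_unstuff(bits):
    """Remove the 0 that was stuffed after every run of five consecutive 1s."""
    out, run, skip = [], 0, False
    for b in bits:
        if skip:              # this bit is the stuffed 0: drop it and restart the count
            skip = False
            run = 0
            continue
        out.append(b)
        run = run + 1 if b == 1 else 0
        if run == 5:
            skip = True
    return out

data = [0, 1, 1, 1, 1, 1, 1, 0, 1]            # contains six consecutive 1s
stuffed = bit_stuff(data)
print(stuffed)                                 # [0, 1, 1, 1, 1, 1, 0, 1, 0, 1]
assert bit_unstuff(stuffed) == data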

Address

The Address field contains the HDLC address of the secondary station. This address can contain a specific address, a group address, or a broadcast address. A primary is either a communication source or a destination, which eliminates the need to include the address of the primary.


Control

The Control field, shown in Figure 3-18, uses three different formats, depending on the type of HDLC
frame used:

 Information (I) frame: I-frames carry upper layer information and some control information.
This frame sends and receives sequence numbers, and the poll final (P/F) bit performs flow and
error control. The send sequence number refers to the number of the frame to be sent next. The
receive sequence number provides the number of the frame to be received next. Both sender and
receiver maintain send and receive sequence numbers. A primary station uses the P/F bit to tell
the secondary whether it requires an immediate response. A secondary station uses the P/F bit to
tell the primary whether the current frame is the last in its current response.
 Supervisory (S) frame: S-frames provide control information. An S-frame can request and
suspend transmission, report on status, and acknowledge receipt of I-frames. S-frames do not
have an information field.
 Unnumbered (U) frame: U-frames support control purposes and are not sequenced. Depending
on the function of the U-frame, its Control field is 1 or 2 bytes. Some U-frames have an
Information field.

Figure 3-18 HDLC Frame Types

Protocol

This field is only used in Cisco HDLC. It specifies the protocol type encapsulated within the frame (e.g., 0x0800 for IP).


Data

The Data field contains a path information unit (PIU) or exchange identification (XID) information.

Frame Check Sequence (FCS)

The FCS precedes the ending flag delimiter and is usually a cyclic redundancy check (CRC) calculation
remainder. The CRC calculation is redone in the receiver. If the result differs from the value in the
original frame, an error is assumed.
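
As a worked illustration of the FCS check, the sketch below uses the 16-bit FCS defined for PPP and HDLC-like framing in RFC 1662 (a reflected CRC-CCITT with polynomial 0x8408 and initial value 0xFFFF). The payload bytes are invented for the example; the point is only to show how the transmitter appends the FCS and how the receiver verifies it.

def _crc16(data: bytes) -> int:
    """Raw 16-bit reflected CRC-CCITT (poly 0x8408, init 0xFFFF), as used for the PPP/HDLC FCS."""
    crc = 0xFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ 0x8408 if crc & 1 else crc >> 1
    return crc

def fcs16(data: bytes) -> int:
    """The FCS a transmitter appends: the one's complement of the raw CRC."""
    return _crc16(data) ^ 0xFFFF

payload = b"\xff\x03\x00\x21hello"              # Address, Control, Protocol (0x0021 = IPv4), data
fcs = fcs16(payload)
frame = payload + bytes([fcs & 0xFF, fcs >> 8]) # the FCS is transmitted low byte first

# The receiver recomputes the CRC over the whole frame, FCS included;
# the constant residue 0xF0B8 means no error was detected.
assert _crc16(frame) == 0xF0B8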


PPP (Point to Point Protocol)


PPP (Point to Point Protocol) is also a WAN encapsulation protocol. It is based on HDLC, and we can say that PPP is an enhanced version of HDLC. PPP adds many features, such as authentication, multilink support, error detection, and quality checks.

Point-to-Point Communication Links

When permanent dedicated connections are required, a point-to-point link is used to provide a single,
pre-established WAN communications path from the customer premises, through the provider network,
to a remote destination, as shown in Figure

Point-to-Point Communication Links

A point-to-point link can connect two geographically distant sites, such as a corporate office in New
York and a regional office in London. For a point-to-point line, the carrier dedicates specific resources
for a line that is leased by the customer (leased line).

PPP Operation

This section discusses the PPP operations, including the benefits of PPP, LCP, and NCP protocols, and
establishing a PPP session.

Benefits of PPP

PPP has several advantages over its predecessor HDLC. In this section, PPP is introduced and its benefits are examined.

Introducing PPP

Recall that HDLC is the default serial encapsulation method when connecting two Cisco routers. With an added protocol type field, the Cisco version of HDLC is proprietary. Thus, Cisco HDLC can only work with other Cisco devices. However, when there is a need to connect to a non-Cisco router, PPP encapsulation should be used, as shown in Figure 3-19.

Figure 3-19 What is PPP?

PPP encapsulation has been carefully designed to retain compatibility with most commonly used
supporting hardware. PPP encapsulates data frames for transmission over Layer 2 physical links. PPP
establishes a direct connection using serial cables, phone lines, trunk lines, cellular telephones,
specialized radio links, or fiber-optic links.

PPP contains three main components:

 HDLC-like framing for transporting multiprotocol packets over point-to-point links.


 Extensible Link Control Protocol (LCP) for establishing, configuring, and testing the data-link
connection.
 Family of Network Control Protocols (NCPs) for establishing and configuring different network
layer protocols. PPP allows the simultaneous use of multiple network layer protocols. Some of
the more common NCPs are Internet Protocol (IPv4) Control Protocol, IPv6 Control Protocol,
AppleTalk Control Protocol, Novell IPX Control Protocol, Cisco Systems Control Protocol, SNA
Control Protocol, and Compression Control Protocol.

Advantages of PPP

PPP originally emerged as an encapsulation protocol for transporting IPv4 traffic over point-to-point
links. PPP provides a standard method for transporting multiprotocol packets over point-to-point links.


There are many advantages to using PPP including the fact that it is not proprietary. PPP includes many
features not available in HDLC:

 The link quality management feature, as shown in Figure 3-20, monitors the quality of the link.
If too many errors are detected, PPP takes the link down.

Figure 3-20 Advantages of PPP

 PPP supports PAP and CHAP authentication. This feature is explained and practiced in a later
section.

LCP and NCP

LCP and NCP are two key components to PPP. An understanding of these two protocols will help you
understand and troubleshoot PPP operations.

PPP Layered Architecture

A layered architecture is a logical model, design, or blueprint that aids in communication between
interconnecting layers. Figure 3-21 maps the layered architecture of PPP against the Open System
Interconnection (OSI) model. PPP and OSI share the same physical layer, but PPP distributes the
functions of LCP and NCP differently.

Figure 3-21 PPP Layered Architecture

At the physical layer, you can configure PPP on a range of interfaces, including


 Asynchronous serial, such as connections that use basic telephone service for modem dialup
 Synchronous serial, such as leased line services
 HSSI
 ISDN

PPP operates across any DTE/DCE interface (RS-232-C, RS-422, RS-423, or V.35). The only absolute
requirement imposed by PPP is a full-duplex circuit, either dedicated or switched, that can operate in
either an asynchronous or synchronous bit-serial mode, transparent to PPP link layer frames. PPP does
not impose any restrictions regarding transmission rate other than those imposed by the particular
DTE/DCE interface in use.

Most of the work done by PPP is at the data link and network layers by the LCP and NCPs. The LCP
sets up the PPP connection and its parameters, the NCPs handle higher layer protocol configurations,
and the LCP terminates the PPP connection.

PPP – Link Control Protocol (LCP)

The LCP functions within the data link layer and has a role in establishing, configuring, and testing the
data-link connection. The LCP establishes the point-to-point link. The LCP also negotiates and sets up
control options on the WAN data link, which are handled by the NCPs.

The LCP provides automatic configuration of the interfaces at each end, including

 Handling varying limits on packet size


 Detecting common misconfiguration errors
 Terminating the link
 Determining when a link is functioning properly or when it is failing

After the link is established, PPP also uses the LCP to agree automatically on encapsulation formats
such as authentication, compression, and error detection. Figure 3-21 shows the relationship of LCP to
the physical layer and NCP.

PPP – Network Control Protocol (NCP)

PPP permits multiple network layer protocols to operate on the same communications link. For every
network layer protocol used, PPP uses a separate NCP, as shown in Figure 3-21. For example, IPv4 uses
the IP Control Protocol (IPCP) and IPv6 uses IPv6 Control Protocol (IPv6CP).

NCPs include functional fields containing standardized codes to indicate the network layer protocol that
PPP encapsulates. Table 3-3 lists the PPP protocol field numbers. Each NCP manages the specific needs
required by its respective network layer protocols. The various NCP components encapsulate and
negotiate options for multiple network layer protocols.


Table 3-3 Protocol Fields


Value (in hex) Protocol Name
8021 Internet Protocol (IPv4) Control Protocol
8057 Internet Protocol Version 6 (IPv6) Control Protocol
8023 OSI Network Layer Control Protocol
8029 Appletalk Control Protocol
802b Novell IPX Control Protocol
c021 Link Control Protocol
c023 Password Authentication Protocol
c223 Challenge Handshake Authentication Protocol

PPP Frame Structure

A PPP frame consists of six fields. The following descriptions summarize the PPP frame fields
illustrated in Figure 3-22:

 Flag: A single byte that indicates the beginning or end of a frame. The Flag field consists of the
binary sequence 01111110. In successive PPP frames, only a single Flag character is used.
 Address: A single byte that contains the binary sequence 11111111, the standard broadcast
address. PPP does not assign individual station addresses.
 Control: A single byte that contains the binary sequence 00000011, which calls for transmission of user data in an unsequenced frame. This provides a connectionless link service that does not require the establishment of data links or link stations. On a point-to-point link, the destination node does not need to be addressed. Therefore, for PPP, the Address field is set to 0xFF, the broadcast address. If both PPP peers agree to perform Address and Control field compression during the LCP negotiation, the Address field is not included.
 Protocol: Two bytes that identify the protocol encapsulated in the information field of the frame.
The 2-byte Protocol field identifies the protocol of the PPP payload. If both PPP peers agree to
perform Protocol field compression during LCP negotiation, the Protocol field is 1 byte for the
protocol identification in the range 0x00-00 to 0x00-FF. The most up-to-date values of the
Protocol field are specified in the most recent Assigned Numbers Request For Comments (RFC).
 Data: Zero or more bytes that contain the datagram for the protocol specified in the Protocol
field. The end of the Information field is found by locating the closing flag sequence and
allowing 2 bytes for the FCS field. The default maximum length of the Information field is 1500
bytes. By prior agreement, consenting PPP implementations can use other values for the
maximum Information field length.
 Frame Check Sequence (FCS): Normally 16 bits (2 bytes). By prior agreement, consenting PPP implementations can use a 32-bit (4-byte) FCS for improved error detection. If the receiver’s calculation of the FCS does not match the FCS in the PPP frame, the PPP frame is silently discarded.

Figure 3-22 PPP Frame Fields

LCPs can negotiate modifications to the standard PPP frame structure. Modified frames, however, are
always distinguishable from standard frames.

PPP Sessions

Understanding PPP session establishment, LCP, and NCP is an important part of implementing and troubleshooting PPP. These topics are discussed next.

Establishing a PPP Session

There are three phases of establishing a PPP session, as shown in Figure 3-23:

 Phase 1: Link establishment and configuration negotiation: Before PPP exchanges any
network layer datagrams, such as IP, the LCP must first open the connection and negotiate
configuration options. This phase is complete when the receiving router sends a configuration-
acknowledgment frame back to the router initiating the connection.
 Phase 2: Link quality determination (optional): The LCP tests the link to determine whether
the link quality is sufficient to bring up network layer protocols. The LCP can delay transmission
of network layer protocol information until this phase is complete.
 Phase 3: Network layer protocol configuration negotiation: After the LCP has finished the
link quality determination phase, the appropriate NCP can separately configure the network layer
protocols, and bring them up and take them down at any time. If the LCP closes the link, it
informs the network layer protocols so that they can take appropriate action.


Figure 3-23 Establishing a PPP Session

The link remains configured for communications until explicit LCP or NCP frames close the link, or
until some external event occurs such as an inactivity timer expiring, or an administrator intervening.

The LCP can terminate the link at any time. This is usually done when one of the routers requests
termination, but can happen because of a physical event, such as the loss of a carrier or the expiration of
an idle-period timer.

LCP Operation

LCP operation includes provisions for link establishment, link maintenance, and link termination. LCP
operation uses three classes of LCP frames to accomplish the work of each of the LCP phases:

 Link-establishment frames establish and configure a link (Configure-Request, Configure-Ack,


Configure-Nak, and Configure-Reject).
 Link-maintenance frames manage and debug a link (Code-Reject, Protocol-Reject, Echo-
Request, Echo-Reply, and Discard-Request).
 Link-termination frames terminate a link (Terminate-Request and Terminate-Ack).

Link Establishment

Link establishment is the first phase of LCP operation, as seen in Figure 3-24. This phase must complete successfully before any network layer packets can be exchanged. During link establishment, the LCP opens the connection and negotiates the configuration parameters. The link establishment process starts with the initiating device sending a Configure-Request frame to the responder. The Configure-Request frame includes a variable number of configuration options needed to set up the link.

Figure 3-24 PPP Link Establishment

The initiator includes the options for how it wants the link created, including protocol or authentication
parameters. The responder processes the request:

 If the options are not acceptable or not recognized, the responder sends a Configure-Nak or
Configure-Reject message. If this occurs and the negotiation fails, the initiator must restart the
process with new options.
 If the options are acceptable, the responder responds with a Configure-Ack message and the
process moves on to the authentication stage. The operation of the link is handed over to the
NCP.

When NCP has completed all necessary configurations, including validating authentication if
configured, the line is available for data transfer. During the exchange of data, LCP transitions into link
maintenance.
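
The accept/Nak/Reject decision just described can be sketched as a small function. The Python below is illustrative pseudologic only (the option names and acceptable values are invented), not how any router actually stores its configuration; it simply shows the order of precedence a responder follows.

RECOGNIZED = {"mru", "auth", "magic"}                                  # options this peer understands
ACCEPTABLE = {"mru": {1500, 1492}, "auth": {"chap"}, "magic": None}    # None means any value is fine

def process_configure_request(options):
    """Decide the LCP reply for a Configure-Request: Reject, then Nak, then Ack."""
    rejected = {k: v for k, v in options.items() if k not in RECOGNIZED}
    if rejected:                                   # unknown options: Configure-Reject lists them
        return "Configure-Reject", rejected
    naked = {k: sorted(ACCEPTABLE[k])[0]           # suggest an acceptable value instead
             for k, v in options.items()
             if ACCEPTABLE[k] is not None and v not in ACCEPTABLE[k]}
    if naked:                                      # recognized options with unacceptable values
        return "Configure-Nak", naked
    return "Configure-Ack", options                # everything recognized and acceptable

print(process_configure_request({"mru": 1500, "auth": "pap"}))    # Nak: suggests "chap" instead
print(process_configure_request({"mru": 1500, "magic": 12345}))   # Ack: link options agreed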

Link Maintenance

During link maintenance, LCP can use messages to provide feedback and test the link, as shown
in Figure 3-25.

 Echo-Request, Echo-Reply, and Discard-Request: These frames can be used for testing the
link.
 Code-Reject and Protocol-Reject: These frame types provide feedback when one device
receives an invalid frame due to either an unrecognized LCP code (LCP frame type) or a bad
protocol identifier. For example, if an uninterpretable packet is received from the peer, a Code-
Reject packet is sent in response. The sending device will resend the packet.


Figure 3-25 PPP Link Maintenance

Link Termination

After the transfer of data at the network layer completes, the LCP terminates the link, as shown in Figure
3-26. NCP only terminates the network layer and NCP link. The link remains open until the LCP
terminates it. If the LCP terminates the link before NCP, the NCP session is also terminated.

Figure 3-26 PPP Link Termination

PPP can terminate the link at any time. This might happen because of the loss of the carrier,
authentication failure, link quality failure, the expiration of an idle-period timer, or the administrative
closing of the link. The LCP closes the link by exchanging Terminate packets. The device initiating the
shutdown sends a Terminate-Request message. The other device replies with a Terminate-Ack. A termination request indicates that the device sending it needs to close the link. When the link is closing,
PPP informs the network layer protocols so that they may take appropriate action.

LCP Packet

Figure 3-27 shows the fields in an LCP packet:

 Code: The Code field is 1 byte in length and identifies the type of LCP packet.
 Identifier: The Identifier field is 1 byte in length and is used to match packet requests and
replies.
 Length: The Length field is 2 bytes in length and indicates the total length (including all fields)
of the LCP packet.
 Data: The Data field is 0 or more bytes as indicated by the length field. The format of this field
is determined by the code.

Figure 3-27 LCP Packet Codes

Each LCP packet is a single LCP message consisting of an LCP Code field identifying the type of LCP
packet, an identifier field so that requests and replies can be matched, and a Length field indicating the
size of the LCP packet and LCP packet type-specific data.
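
As a small worked example of this layout, the following Python sketch packs and unpacks the fixed 4-byte LCP header (Code, Identifier, Length) followed by the type-specific data. It is a minimal illustration written for this text, not code from a networking library.

import struct

def build_lcp(code: int, identifier: int, data: bytes) -> bytes:
    """Code (1 byte), Identifier (1 byte), Length (2 bytes, covers the whole packet), then Data."""
    return struct.pack("!BBH", code, identifier, 4 + len(data)) + data

def parse_lcp(packet: bytes):
    code, identifier, length = struct.unpack("!BBH", packet[:4])
    return code, identifier, length, packet[4:length]

# Code 9 is Echo-Request (see Table 3-4); its data begins with a 4-byte magic number.
pkt = build_lcp(9, 0x2A, b"\x00\x00\x00\x01")
print(parse_lcp(pkt))    # (9, 42, 8, b'\x00\x00\x00\x01')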

Each LCP packet has a specific function in the exchange of configuration information depending on its
packet type. The Code field of the LCP packet identifies the packet type according to Table 3-4.

Table 3-4 LCP Packet Fields


Code 1, Configure-Request: Sent to open or reset a PPP connection. Configure-Request contains a list of LCP options with changes to default option values.

Code 2, Configure-Ack: Sent when all of the values of all of the LCP options in the last Configure-Request received are recognized and acceptable. When both PPP peers send and receive Configure-Acks, the LCP negotiation is complete.

Code 3, Configure-Nak: Sent when all the LCP options are recognized, but the values of some options are not acceptable. Configure-Nak includes the mismatching options and their acceptable values.

Code 4, Configure-Reject: Sent when LCP options are not recognized or not acceptable for negotiation. Configure-Reject includes the unrecognized or non-negotiable options.

Code 5, Terminate-Request: Optionally sent to close the PPP connection.

Code 6, Terminate-Ack: Sent in response to a Terminate-Request.

Code 7, Code-Reject: Sent when the LCP code is unknown. The Code-Reject message includes the rejected LCP packet.

Code 8, Protocol-Reject: Sent when the PPP frame contains an unknown Protocol ID. The Protocol-Reject message includes the rejected LCP packet. Protocol-Reject is typically sent by a PPP peer in response to a PPP NCP for a LAN protocol not enabled on the PPP peer.

Code 9, Echo-Request: Optionally sent to test the PPP connection.

Code 10, Echo-Reply: Sent in response to an Echo-Request. The PPP Echo-Request and Echo-Reply are not related to the ICMP Echo Request and Echo Reply messages.

Code 11, Discard-Request: Optionally sent to exercise the link in the outbound direction.

PPP Configuration Options

PPP can be configured to support various optional functions, as shown in Figure 3-28. These optional
functions include

 Authentication using either PAP or CHAP


 Compression using either Stacker or Predictor
 Multilink that combines two or more channels to increase the WAN bandwidth


Figure 3-28 PPP Configuration Options

To negotiate the use of these PPP options, the LCP link-establishment frames contain option information
in the data field of the LCP frame, as shown in Figure 3-29. If a configuration option is not included in
an LCP frame, the default value for that configuration option is assumed.

Figure 3-29 LCP Option Fields

This phase is complete when a configuration acknowledgment frame has been sent and received.


NCP Explained

After the link has been initiated, the LCP passes control to the appropriate NCP.

NCP Process

Although initially designed for IP packets, PPP can carry data from multiple network layer protocols by
using a modular approach in its implementation. PPP’s modular model allows LCP to set up the link and
then transfer the details of a network protocol to a specific NCP. Each network protocol has a
corresponding NCP and each NCP has a corresponding RFC.

There are NCPs for IPv4, IPv6, IPX, AppleTalk, and many others. NCPs use the same packet format as
the LCPs.

After the LCP has configured and authenticated the basic link, the appropriate NCP is invoked to
complete the specific configuration of the network layer protocol being used. When the NCP has
successfully configured the network layer protocol, the network protocol is in the open state on the
established LCP link. At this point, PPP can carry the corresponding network layer protocol packets.

IPCP Example

As an example of how the NCP layer works, the NCP configuration of IPv4, which is the most common
Layer 3 protocol, is shown in Figure 3-30. After LCP has established the link, the routers exchange
IPCP messages, negotiating options specific to the IPv4 protocol. IPCP is responsible for configuring,
enabling, and disabling the IPv4 modules on both ends of the link. IPv6CP is an NCP with the same
responsibilities for IPv6.

Figure 3-30 PPP NCP Operation


IPCP negotiates two options:

 Compression: Allows devices to negotiate an algorithm to compress TCP and IP headers and
save bandwidth. The Van Jacobson TCP/IP header compression reduces the size of the TCP/IP
headers to as few as 3 bytes. This can be a significant improvement on slow serial lines,
particularly for interactive traffic.
 IPv4-Address: Allows the initiating device to specify an IPv4 address to use for routing IP over
the PPP link, or to request an IPv4 address for the responder. Prior to the advent of broadband
technologies such as DSL and cable modem services, dialup network links commonly used the
IPv4 address option.

After the NCP process is complete, the link goes into the open state, and LCP takes over again in a link
maintenance phase. Link traffic consists of any possible combination of LCP, NCP, and network layer
protocol packets. When data transfer is complete, NCP terminates the protocol link; LCP terminates the
PPP connection.


Frame Relay
Frame Relay is a packet switched communication service from LANs (Local Area Network) to
backbone networks and WANs. It operates at two layers: physical layer and data link layer. It supports
all standard physical layer protocols. It is mostly implemented at the data link layer.

Frame Relay uses virtual circuits to connect a single router to multiple remote sites. In most cases,
permanent virtual circuits are used, i.e. a fixed network-assigned circuit is used through which the user
sees a continuous uninterrupted line. However, switched virtual circuits may also be used.

Frame Relay is a fast packet technology based on X.25. Data is transmitted by encapsulating it in variable-sized frames. The protocol does not attempt to correct errors and is therefore faster; error correction is handled by the endpoints, which are responsible for retransmission of dropped frames.

It is a packet switching technology developed to send and receive data over high-speed digital connections such as ISDN. It operates on layer 1 (the physical layer) and layer 2 (the data link layer) of the OSI stack.

ITU-T and ANSI have defined specifications for the frame relay connection between DTE and DCE.
Frame relay network supports two types of virtual circuits i.e. SVC (Switched Virtual Circuit) and PVC
(Permanent Virtual Circuit).

Frame Relay Features


Following are the features of Frame Relay:

• Developed for ISDN, but widely used in public and private non-ISDN networks.
• Higher throughput compared to the X.25 protocol.
• No hop-to-hop flow control or error control; upper layer protocols must detect and recover discarded frames.
• Multiplexing and switching of logical connections are handled at layer 2, not layer 3.
• Control signals are carried on a separate logical connection from user data.
• Supports multiple protocols such as NetBIOS, ATM, TCP/IP, voice, etc.
• Each connection is identified by a unique DLCI (Data Link Connection Identifier).


Frame Relay Network

Figure 1 depicts a Frame Relay network. It is a packet switched network consisting of DTEs (Data Terminal Equipment) and DCEs (Data Circuit-Terminating Equipment). Here, a Frame Relay switch is used to route the packets between two DTEs, as explained below.
Frame Relay Frame-Standard, LMI frame types

Figure 2 depicts the standard Frame Relay frame. The data to be transmitted over the Frame Relay network is encapsulated in a frame which uses only 2-5 bytes of overhead, as mentioned below.

Standard frame relay frame consists of start flag, header, data (variable length up to 16000 octets), FCS
and end Flag. Start flag and end flag are used for frame delimiters or as frame synchronization. FCS is
used as checksum for detecting errors. The fields of frame relay header have been described in table-1
below.


Standard Frame Relay header fields:

DLCI (10 bits): (6+4) bits. Short for Data Link Connection Identifier; it represents the address of the frame, which corresponds to the PVC. The DLCI value identifies the virtual connection between the DTE and the Frame Relay switch. Each virtual connection multiplexed on a single physical channel is represented by a unique DLCI.

C/R: 1 bit, designates whether the frame is a command or a response.

EA: 1 bit, Extended Address field, designates up to 2 additional bytes in the Frame Relay header. It helps to expand the number of possible addresses.

FECN: 1 bit, Forward Explicit Congestion Notification, used in congestion control.

BECN: 1 bit, Backward Explicit Congestion Notification, used in congestion control.

DE: 1 bit, Discard Eligibility.

EA: 1 bit.
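
To make the bit positions in the table above concrete, the Python sketch below unpacks a standard 2-byte Frame Relay address field, assuming the common layout in which the 10-bit DLCI is split 6 + 4 across the two bytes; the example header bytes are invented for illustration.

def parse_fr_header(hdr: bytes) -> dict:
    """Unpack the common 2-byte Frame Relay address field described in the table above."""
    b1, b2 = hdr[0], hdr[1]
    return {
        "dlci": ((b1 >> 2) << 4) | (b2 >> 4),   # upper 6 bits from byte 1, lower 4 bits from byte 2
        "cr":   (b1 >> 1) & 1,                  # command/response
        "fecn": (b2 >> 3) & 1,                  # forward explicit congestion notification
        "becn": (b2 >> 2) & 1,                  # backward explicit congestion notification
        "de":   (b2 >> 1) & 1,                  # discard eligibility
        "ea":   (b1 & 1, b2 & 1),               # extended address bits (0 in byte 1, 1 in byte 2)
    }

# DLCI 100 = 0b0001100100: upper 6 bits are 000110, lower 4 bits are 0100.
print(parse_fr_header(bytes([0b00011000, 0b01000001])))
# {'dlci': 100, 'cr': 0, 'fecn': 0, 'becn': 0, 'de': 0, 'ea': (0, 1)}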

Figure 3 depicts the LMI type of Frame Relay frame.

The fields of the LMI frame are described in Table 2 below.


LMI frame fields:

Flag: Delimits the start and end of the frame.

LMI DLCI: If this field has the value 1023, it identifies the frame as an LMI frame instead of a basic standard Frame Relay frame.

Unnumbered Information Indicator: Sets the poll/final bit equal to zero.

Protocol Discriminator: Contains a value which indicates that the frame is an LMI frame.

Call Reference: Contains zero; not used.

Message Type: Labels the frame either as a "status inquiry message" or a "status message".

Information Elements (IEs): Consists of a variable number of IEs. Each IE consists of an IE identifier, an IE length, and Data fields.

Frame Check Sequence (FCS): Ensures the integrity of the transmitted data; used for error detection.


Frame Relay Topology

Frame Relay supports the following WAN topologies:

• Peer (point-to-point)

• Star (hub and spoke), PVCs exist from main site to each remote site.

• Partial mesh

• Full mesh (Here each router has PVC to every router in the frame relay network)

These topologies are depicted in Figure 4.


Frame Relay Service
Frame Relay provides the user with multiple independent data links to one or more destination stations. Traffic directed onto the data links is multiplexed to utilize access lines and resources efficiently. As multiplexing is done at the data link layer, end-to-end delay is minimized.

Frame Relay provides a service similar to a leased line. The Frame Relay network transports user traffic within frames regardless of their contents. Frame Relay service is typically available at fractional and full T1/E1 rates. The service is also offered at T3 rates by some Frame Relay vendors.


Frame Relay Switch, frame relay configuration and FRAD device

Figure 5 depicts the operation of a Frame Relay switch. Ports are mapped with DLCI values. Two tables are used in Frame Relay configuration: the Frame Relay map and the Frame Relay switching table.

The Frame Relay map consists of two fields, the IP address and the DLCI value. It is a table stored in RAM which defines the remote interface (IP address) to which a specific DLCI number is mapped. This table can be built automatically or manually, depending on the Frame Relay topology.

The Frame Relay switching table consists of four fields: IN_port, IN_DLCI, OUT_port, and OUT_DLCI.
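
A minimal sketch of that switching-table lookup is shown below, with hypothetical port and DLCI values; it only illustrates how an (IN_port, IN_DLCI) pair selects an outgoing port and a new DLCI.

# (in_port, in_dlci) -> (out_port, out_dlci); the entries here are invented for illustration.
switching_table = {
    (0, 100): (1, 200),
    (1, 200): (0, 100),
    (0, 101): (2, 300),
}

def switch_frame(in_port: int, in_dlci: int):
    """Look up the outgoing port and rewrite the DLCI, as a Frame Relay switch would."""
    entry = switching_table.get((in_port, in_dlci))
    if entry is None:
        return None                      # no PVC configured for this DLCI: the frame is discarded
    return entry                         # (out_port, out_dlci)

print(switch_frame(0, 100))              # (1, 200): forward on port 1 with DLCI rewritten to 200
print(switch_frame(3, 999))              # None: unknown virtual circuit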

Figure 6 depicts the operation of a FRAD (Frame Relay Assembler and Disassembler). It is a specialized device designed to provide a connection between a LAN and a Frame Relay WAN (Wide Area Network).


Frame Relay Advantages and Disadvantages


Advantages:
• As there is no error correction incorporated in Frame Relay, greater speeds can be achieved.

• It can dynamically allocate bandwidth on a need basis.

• A congestion notification mechanism is implemented in Frame Relay, which reduces overhead in the network. It implements two congestion notification mechanisms (FECN and BECN), as mentioned above; during congestion, these fields are set to '1'.

Disadvantages:
• It does not perform flow control and error control; this has to be taken care of by the upper layers.


Establishing DSL/ADSL Networks with PPPoE:


DSL, ADSL, and VDSL

DSL (Digital Subscriber Line) delivers broadband to more people today than any other technology.
Roughly two-thirds of all broadband subscribers are DSL subscribers, and there are more new DSL
subscribers each month than new subscribers for all other broadband access technologies combined.
DSL is a technology that delivers broadband speeds over distances of miles or kilometers via copper wiring, much of it the same wiring that is used to provide traditional voice telephony services.
While many of us just know the phrase “DSL”, there are three variants:
 ADSL: Asymmetric DSL, meaning the bandwidth and bitrates are greater toward the customer premises
(downstream) than the reverse (upstream).
 SDSL: Symmetrical DSL, meaning the bandwidth toward the customer premises (downstream) is
identical to the reverse (upstream). SDSL is not very common.
 VDSL: Very-high-bit-rate DSL which uses up to seven frequency bands, so one can allocate the data
rate between upstream and downstream differently depending on the service offering and spectrum
regulations.

DSL Deployment Architecture


DSL copper wiring runs from a telephone company’s Central Office (CO), the location where voice
switching and other traditional telephony functions are performed, to a subscriber’s home or business.
Increasingly, DSL is delivered from a device situated closer to the subscriber’s home or business that is
connected to a CO via an optical fiber link, and then to the subscriber’s premises via copper wires. In all
cases, however, DSL delivers broadband over the copper connections that exist already in almost every
residence and business in the developing and developed worlds.
This architecture is depicted in the figure below. At the CO, or at a remote location typically connected
to the CO via fiber optics, there is a DSL Access Multiplexer (DSLAM) that sends and receives
broadband data to many subscribers via DSL technology. At each subscriber’s location, there is a
modem (modulator-demodulator) that communicates with the DSLAM to send and receive that
subscriber’s broadband data to and from the Internet and other networks. A DSLAM communicates with
many individual subscriber modems. Each subscriber’s modem is dedicated to that subscriber’s
broadband connection.


Voice services utilize only a small fraction of the total information carrying capacity of copper
connections. In an analogous manner to Ethernet technology, which can transmit a Gigabit (more than
one billion bits) per second of data over copper connections or the equivalent of tens of thousands of
simultaneous phone conversations, DSL exploits the information carrying capacity of copper lines to
deliver broadband services over long distances.
DSL Standards
To engineers, “DSL” means a set of formal standards for communicating broadband signals over copper
lines. It also means equipment that complies with those standards. The principal DSL standards are
published by the International Telecommunications Union (ITU), a standards body based in Geneva,
Switzerland.
DSL standards have evolved significantly since the first DSL standards were established in the early
1990’s. The DSL standards have evolved to support higher data rates, to take advantage of advances in
equipment technologies, and to ensure that DSL can coexist on copper lines with other communications
standards such as Integrated Services Digital Network (ISDN), an early digital voice and data service
that is still in use in many countries. The table below lists some of the principal DSL standards in use
today.

Common Name    Peak Speed     Standard          Deployment Status

ADSL1          8 Mbps         ITU-T G.992.1     Pervasive

ADSL2+         24 Mbps        ITU-T G.992.5     Pervasive

VDSL2          50-75 Mbps     ITU-T G.993.2     Pervasive

1) “Mbps” means Megabits per second. A Megabit is a million bits.

2) DSLAMs and subscriber modems are capable of the peak speeds listed in the table. Lower speeds may be delivered depending on the service packages offered by a subscriber’s DSL provider, and also on the provider’s network design and management practices.

3) There are several variants of the VDSL2 standard. The peak speed is dependent on the particular
VDSL2 variant implemented by the DSL service provider.

With few exceptions, DSL technology is unique among broadband access technologies in that
subscribers do not compete with one another for broadband access. Because each subscriber has their
own copper connection to the DSLAM, all subscribers can achieve the peak speeds listed in the table
above so long as the connection from the DSLAM to the Internet or other networks has adequate
capacity. This is a significant advantage of DSL relative to other broadband access technologies where
subscribers share a single physical connection, such as in cable, fiber, and radio/cellular (3G/4G/LTE)
access networks.


Point-to-Point Protocol over Ethernet


The Point-to-Point Protocol over Ethernet (PPPoE) is a network protocol for encapsulating Point-to-
Point Protocol (PPP) frames inside Ethernet frames. It appeared in 1999, in the context of the boom
of DSL as the solution for tunneling packets over the DSL connection to the ISP's IP network, and from
there to the rest of the Internet. A 2005 networking book noted that “Most DSL providers use PPPoE, which provides authentication, encryption, and compression.” Typical use of PPPoE involves leveraging the PPP facilities for authenticating the user with a username and password, predominantly via the PAP protocol and less often via CHAP.
On the customer-premises equipment, PPPoE may be implemented either in a unified residential
gateway device that handles both DSL modem and IP routing functions or in the case of a simple DSL
modem (without routing support), PPPoE may be handled behind it on a separate Ethernet-only router or
even directly on a user's computer. (Support for PPPoE is present in most operating systems, ranging
from Windows XP, Linux to Mac OS X.) More recently, some GPON-based (instead of DSL-based)
residential gateways also use PPPoE, although the status of PPPoE in the GPON standards is marginal.
PPPoE was developed by UUNET, Redback Networks (now Ericsson) and RouterWare (now Wind
River Systems) and is available as an informational RFC 2516.
In the world of DSL, PPPoE was commonly understood to be running on top of ATM (or DSL) as the
underlying transport, although no such limitation exists in the PPPoE protocol itself. Other usage
scenarios are sometimes distinguished by tacking as a suffix another underlying transport. For example,
PPPoEoE, when the transport is Ethernet itself, as in the case of Metro Ethernet networks. (In this
notation, the original use of PPPoE would be labeled PPPoEoA, although it should not be confused
with PPPoA, which is a different encapsulation protocol.)
PPPoE has been described in some books as a "layer 2.5" protocol, in some rudimentary sense similar
to MPLS because it can be used to distinguish different IP flows sharing an Ethernet infrastructure,
although the lack of PPPoE switches making routing decisions based on PPPoE headers limits its applicability in that respect.

PPPoE and TCP/IP protocol stack

Application       FTP, SMTP, HTTP, ..., DNS, ...

Transport         TCP, UDP

Internet          IP, IPv6

Network access    PPP
                  PPPoE
                  Ethernet

The working standard for the PPPoE protocol was published by the IETF in 1999. The IETF
specification for PPPoE is RFC 2516. PPPoE expands the original capability of PPP by allowing a
virtual point to point connection over a multipoint Ethernet network architecture. PPPoE is a protocol
that is widely used by ISPs to provision digital subscriber line (DSL) high-speed Internet services, of
which the most popular service is ADSL. The similarity between PPPoE and PPP has led to the
widespread adoption of PPPoE as the preferred protocol for implementing high-speed Internet access.
Service providers can use the same authentication server for both PPP and PPPoE sessions, resulting in
cost savings. PPPoE uses standard methods of encryption, authentication, and compression specified by
PPP.

PPPoE is configured as a point to point connection between two Ethernet ports. As a tunneling protocol,
PPPoE is used as an effective foundation for the transport of IP packets at the network layer. IP is
overlaid over a PPP connection and uses PPP as a virtual dial-up connection between points on the
network. From the user’s perspective, a PPPoE session is initiated by using connection software on the
client machine or router. PPPoE session initiation involves the identification of the Media Access
Control (MAC) address of the remote device. This process, also known as PPPoE discovery, involves
the following steps:

1. Initiation – The client software sends a PPPoE Active Discovery Initiation (PADI) packet to the
server to initiate the session.

2. Offer – The server responds with a PPPoE Active Discovery Offer (PADO) packet.

3. Request – Upon receipt of the PADO packet, the client responds by sending a PPPoE Active
Discovery Request (PADR) packet to the server.

4. Confirmation – Upon receipt of the PADR packet, the server responds by generating a unique ID
for the PPP session and sends it in a PPPoE Active Discovery Session (PADS) confirmation
packet to the client.

When a PPPoE session is initiated, the destination IP address is only used when the session is active.
The IP address is released after the session is closed, allowing for efficient re-use of IP addresses.
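
The four discovery steps can be summarised in code. The Python sketch below simulates the PADI/PADO/PADR/PADS exchange with plain dictionaries instead of real Ethernet frames; the access-concentrator name and session numbering are invented for the example.

import itertools

class PPPoEServer:
    """Toy access concentrator: answers discovery packets and hands out session IDs."""
    _ids = itertools.count(1)

    def handle(self, packet: dict) -> dict:
        if packet["type"] == "PADI":                      # 1. client broadcasts an initiation
            return {"type": "PADO", "ac_name": "AC-1"}    # 2. server replies with an offer
        if packet["type"] == "PADR":                      # 3. client requests the offered service
            return {"type": "PADS", "session_id": next(self._ids)}   # 4. confirmation with a session ID
        raise ValueError("unexpected discovery packet")

server = PPPoEServer()
pado = server.handle({"type": "PADI"})
pads = server.handle({"type": "PADR", "ac_name": pado["ac_name"]})
print(pads)    # {'type': 'PADS', 'session_id': 1}; PPP (LCP, authentication, NCP) then runs inside this session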


Internet Basics: What is PPPoE?

PPPoE stands for Point-to-Point Protocol over Ethernet, a network protocol for encapsulating Point-to-
Point Protocol (PPP) frames inside Ethernet frames.
How Does PPPoE Work?

It is used mainly with DSL services where individual users connect to a DSL modem over Ethernet.
Ethernet networks are packet-based and have no capacity for a connection or circuit. They also lack
basic security features to protect against IP and MAC conflicts and rogue DHCP servers.

With PPPoE, users virtually “dial” from one machine to another over an Ethernet network,
establish a point-to-point connection and then data packets are securely transported through the
connection.
Why Are PPPoE Connections Needed?

PPPoE is primarily used by telephone companies. The protocol allows for the easy separation of digital subscriber line access multiplexers (DSLAMs) where it is required by regulators.

PPPoE also allows ISPs to create pre-paid traffic business models (usually through DSL subscriptions),
either by allowing them to offer different speed tiers or QoS (Quality of Service) bandwidth controls
through a single DSL modem or by creating a different login for each static IP purchased by a customer.
(Explanation Adapted from Wikipedia)
Basically, PPPoE offers an Internet service provider (ISP) an easier way to track exactly how much
bandwidth you are using in case they want to charge for it in the future.


How Can I Protect My Connection over PPPoE?

Thankfully, when you use a VPN with PPPoE, you are able to prevent an ISP from monitoring your
DSL connection since ISPs commonly use deep packet inspection to analyze your Internet traffic
and limit your bandwidth.
By encrypting your data, VPN service providers, like one of the ones below, can help stop your ISP
from inspecting the data you are receiving and limiting your Internet access. For those in the US, that is
especially important, given the repeal of net neutrality legislation.
How to Fix VPN Problems with PPPoE & DSL

Since both OpenVPN and PPPoE are tunneling protocols, most standard routers can not run them
simultaneously. However, using a DD-WRT router alongside your DSL modem/router allows unique
configuration opportunities for simultaneous PPPoE Internet/WAN connections while running an
OpenVPN client connection.
This means that VPN service used with a DSL modem/router will no longer be limited to one device in
your home but can be spread to any device (iPad, Roku, AppleTV, SmartTV) on your network simply
by connecting wirelessly to the FlashRouter.
Some solutions involve quick-to-install integrated easy-to-use MyPage add-ons for DD-WRT, while
other provider setups are a bit trickier.
DD-WRT PPPoE Solution for Various VPN Providers For Single Routers

Once you hook up your FlashRouter, navigate to 192.168.11.1 and log in using the information we sent
you. Then, navigate to Setup > Basic Setup and use the drop-down menu under WAN Connection Type
to select PPPoE. Then, simply add your credentials.

Once a PPPoE connection has been made, ensure you have an internet connection by navigating
to flashrouters.com and making sure it loads. Then, you can connect to the FlashRouters Privacy App on
your DD-WRT FlashRouter and set up your VPN provider!


PPPoE VPN Solution For Dual Router Setup

If you have an existing router and want to add a DD-WRT router to simplify the setup, use your existing
router to connect to PPPoE locally, then add a DD-WRT FlashRouter to create a separate VPN-
dedicated network. Basically, something that looks like this in diagram form:

DSL-Modem/Gateway router (PPPoE) -> DD-WRT FlashRouter (VPN) -> Clients/Devices


(Computer, Smartphone, iPad, Playstation, Xbox, Roku)
A FlashRouter connected to another router will allow you to have a local ISP (wired and wireless)
connection through your existing router with PPPoE, while other devices can be tunneled through a
VPN connection with the FlashRouter either wired or wirelessly.

This solution of using a second router will allow you to use the FlashRouter for any basic VPN
connection from almost all personal VPN providers.


Network Address Translation (NAT)


To access the Internet, one public IP address is needed, but we can use private IP addresses in our private network. The idea of NAT is to allow multiple devices to access the Internet through a single public address. To achieve this, the translation of a private IP address to a public IP address is required. Network Address Translation (NAT) is a process in which one or more local IP addresses are translated into one or more global IP addresses, and vice versa, in order to provide Internet access to the local hosts. NAT also translates port numbers, i.e., it masks the port number of the host with another port number in the packet that will be routed to the destination. It then makes the corresponding entries of IP address and port number in the NAT table. NAT generally operates on a router or firewall.

Network Address Translation (NAT) working –


Generally, the border router is configured for NAT, i.e., the router which has one interface in the local (inside) network and one interface in the global (outside) network. When a packet traverses outside the local (inside) network, NAT converts the local (private) IP address to a global (public) IP address. When a packet enters the local network, the global (public) IP address is converted to a local (private) IP address.

If NAT runs out of addresses, i.e., no address is left in the configured pool, then the packet is dropped and an Internet Control Message Protocol (ICMP) host unreachable packet is sent to the destination.
Why mask port numbers?
Suppose, in a network, two hosts A and B are connected. Now, both of them request the same destination, on the same source port number, say 1000, at the same time. If NAT translated only IP addresses, then when their packets arrived at the NAT device, both of their IP addresses would be masked by the public IP address of the network and sent to the destination. The destination would send replies to the public IP address of the router. Thus, on receiving a reply, it would be unclear to NAT which reply belongs to which host (because the source port numbers for both A and B are the same). Hence, to avoid such a problem, NAT masks the source port number as well and makes an entry in the NAT table.
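
That port-masking behaviour amounts to keeping a translation table. The hypothetical Python sketch below (addresses and port numbers are invented) shows how a PAT device rewrites the source of outgoing packets and maps replies back to the correct inside host.

import itertools

PUBLIC_IP = "203.0.113.5"                  # the single public address (documentation example range)
_next_port = itertools.count(20000)        # pool of public source ports handed out in order
nat_table = {}                             # public port -> (private ip, private port)

def outbound(src_ip: str, src_port: int):
    """Translate a private source address/port to the public address and a unique public port."""
    public_port = next(_next_port)
    nat_table[public_port] = (src_ip, src_port)
    return PUBLIC_IP, public_port

def inbound(dst_port: int):
    """Map a reply arriving on a public port back to the original inside host, if any."""
    return nat_table.get(dst_port)         # None means no matching entry: the packet is dropped

# Hosts A and B both use source port 1000 toward the same destination at the same time.
print(outbound("192.168.1.10", 1000))      # ('203.0.113.5', 20000)
print(outbound("192.168.1.11", 1000))      # ('203.0.113.5', 20001)
print(inbound(20001))                      # ('192.168.1.11', 1000): the reply goes back to host B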
NAT inside and outside addresses
Inside refers to the addresses which must be translated. Outside refers to the addresses which are not in the control of the organization. These are the network addresses in which the translation of the addresses will be done.


 Inside local address – An IP address that is assigned to a host on the Inside (local) network. The
address is probably not an IP address assigned by the service provider i.e., these are private IP
addresses. This is the inside host seen from the inside network.

 Inside global address – IP address that represents one or more inside local IP addresses to the
outside world. This is the inside host as seen from the outside network.

 Outside local address – This is the actual IP address of the destination host in the local network
after translation.

 Outside global address – This is the outside host as seen from the outside network. It is the IP
address of the outside destination host before translation.

Network Address Translation (NAT) Types –

There are 3 ways to configure NAT:

1. Static NAT – In this, a single unregistered (private) IP address is mapped to a legally registered (public) IP address, i.e., a one-to-one mapping between local and global addresses. This is generally used for web hosting. Static NAT is not used for general Internet access in organizations, because every device needing Internet access would require its own public IP address. Suppose there are 3000 devices that need access to the Internet; the organization would have to buy 3000 public addresses, which would be very costly.
2. Dynamic NAT – In this type of NAT, an unregistered (private) IP address is translated into a registered (public) IP address from a pool of public IP addresses. If no address in the pool is free, the packet is dropped, because only a fixed number of private IP addresses can be translated to public addresses at a time.
Suppose there is a pool of 2 public IP addresses; then only 2 private IP addresses can be translated at a given time. If a 3rd private IP address wants to access the Internet, its packet is dropped. Many private IP addresses are therefore mapped to a pool of public IP addresses. Dynamic NAT is used when the number of users who want to access the Internet is fixed. It is also very costly, as the organization has to buy many global IP addresses to make a pool.

3. Port Address Translation (PAT) – This is also known as NAT overload. In this, many local
(private) IP addresses can be translated to a single registered (public) IP address. Port numbers are
used to distinguish the traffic, i.e., to identify which traffic belongs to which inside host. This is the
most frequently used form of NAT because it is cost-effective: thousands of users can be connected
to the Internet using only one real global (public) IP address.
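The dynamic NAT behaviour described in item 2 above can be sketched in a few lines of Python. This
is a rough illustration rather than vendor code; the pool of two addresses and all IP values are
hypothetical (RFC 5737 / RFC 1918 ranges).

    pool = ["203.0.113.1", "203.0.113.2"]    # configured pool of public addresses
    mappings = {}                            # private IP -> public IP currently in use

    def allocate(private_ip):
        """Return a public address for this host, or None if the pool is exhausted."""
        if private_ip in mappings:           # host already holds a translation
            return mappings[private_ip]
        if not pool:                         # pool exhausted: the packet would be dropped
            return None                      # and an ICMP host unreachable sent back
        public_ip = pool.pop(0)
        mappings[private_ip] = public_ip
        return public_ip

    print(allocate("192.168.1.10"))   # 203.0.113.1
    print(allocate("192.168.1.11"))   # 203.0.113.2
    print(allocate("192.168.1.12"))   # None -> this third host's packet is dropped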

Advantages of NAT –

 NAT conserves legally registered IP addresses.

 It provides privacy, as the IP addresses of the devices sending and receiving traffic are hidden.

 Eliminates address renumbering when a network evolves.

Disadvantage of NAT –

 Translation results in switching path delays.

 Certain applications will not function while NAT is enabled.

 Complicates tunneling protocols such as IPsec.

 Also, the router, being a network layer device, should not tamper with port numbers (a transport
layer field), but it has to do so because of NAT.

Why Use NAT?


NAT is a straightforward enough process, but what is the point of it? Ultimately, it comes down to
conservation and security.

IP Conservation
IP addresses identify each device connected to the internet. The existing IP version 4 (IPv4) uses 32-bit
IP addresses, which allow for roughly 4.3 billion possible addresses, a number that seemed like more
than enough when the protocol was developed in the 1970s.

However, the internet has exploded, and while not all 7 billion people on the planet access the internet
regularly, those that do often have multiple connected devices: phones, personal desktops, work laptops,
tablets, TVs, even refrigerators.

Therefore, the number of devices accessing the internet far surpasses the number of IP addresses
available. Routing all of these devices via one connection using NAT helps to consolidate multiple
private IP addresses into one public IP address. This helps to keep more public IP addresses available
even while private IP addresses proliferate.

On June 6, 2012, IP version 6 (IPv6) officially launched to accommodate the need for more IP
addresses. IPv6 uses 128-bit IP addresses, which allow for a vastly larger address space (the arithmetic
is sketched below). The transition to IPv6 will take many years to finish, so until then NAT will remain
a valuable tool.
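As a quick back-of-the-envelope check of the two address-space figures, the snippet below simply
evaluates 2^32 and 2^128 in Python:

    print(2 ** 32)              # 4294967296 -> roughly 4.3 billion IPv4 addresses
    print(f"{2 ** 128:.2e}")    # about 3.40e+38 possible IPv6 addresses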

NAT Security
Additionally, NAT can provide security and privacy. Because NAT only forwards inbound traffic that
matches an existing translation, it prevents unsolicited outside traffic from reaching a private device.
The router sorts the data to ensure everything goes to the right place, making it more difficult for
unwanted data to get through. It is not foolproof, but it often acts as the first line of defense for your
device. If an organization wants to protect its data, it will need to go further than just a NAT firewall
and may want to hire a cybersecurity professional.

NAT also allows you to display a public IP address while on a local network, helping to keep data and
user history private.

All of this might seem complicated in theory, but it is even more so in the real world. IT professionals
use NAT to secure their data and to run several devices under the same IP address, and everyone is
interested in securing their data. Getting the right certification helps IT professionals demonstrate their
competence and understanding of these complicated subjects.

Establishing Enterprise Radio Access Network Solutions.

 A RAN is a collection of cellular wireless access points (APs) and is one of the main components of
Celona's private mobile network. Celona's indoor APs resemble typical wireless access points, and
its outdoor APs are built with a power level of up to 50 watts for coverage across very large areas.
 Multiple access points in a Celona RAN take advantage of the domain proxy functionality
within Celona's network operating system (OS) to automatically register with Spectrum Access
System (SAS) services certified by the FCC in the United States. This is done to reserve use of
private CBRS spectrum at their respective geolocations. As new APs are added, they use the
domain proxy to contact the SAS service to receive the proper channel allocation and
configuration.
 Existing RANs can simply upgrade their radios to 5G-capable ones and still use the existing
network architecture and configuration via Celona's network OS.
 Celona has simplified the deployment of access points within a RAN through zero-touch
provisioning and cloud networking principles, making it as simple to deploy as modern
enterprise Wi-Fi solutions.

Corning’s 5G small cell platform technology, the 5G Enterprise Radio Access Network (Enterprise
RAN), is an easy to install system that provides the 5G services enterprises and venue owners are
demanding. The 5G Enterprise RAN builds upon our decade of experience improving indoor cellular
services. It upholds the same high visual standards as the LTE product family to ensure facilities
managers won’t have aesthetic concerns with adding 5G to the buildings they manage.

Solutions Overview

The 5G Enterprise RAN is a simple system composed of:

 Central Unit (CU) – software that manages all the 5G radios attached to it, connections to the
mobile core, and cooperation with the local LTE systems that create the 5G services with it. This CU
software is hosted onsite on an enterprise data center server or on an operator-provided edge
compute server.
 Radio Nodes – compact radios that mount on the ceilings or walls of enterprises to provide the
necessary 5G coverage for subscribers. The Radio Nodes are connected via 10 Gigabit Ethernet
using Corning's ActiFi composite cabling (copper power and optical data plane). The copper
conductors are powered from a Remote Power shelf in the IDF, and the optical data plane supports
the demands of 5G device connections that operate faster than legacy copper Ethernet can carry.
Wireless enterprise network upgrades can be a hassle for building owners and IT managers. The 5G
Enterprise RAN is designed for deployment in enterprises in use cases ranging from hotspot areas to full
blanket coverage. From carpeted floor commercial and enterprise space, to large venues, and through
manufacturing shop floors, the 5G Enterprise RAN can deliver the performance demanded by people
and systems in each application.
In carpeted floor enterprise applications, a 5G Enterprise RAN installation has three steps:
1. Develop design and installation plans for the system.
2. Install the CU software and radio nodes/cabling adjacent to the existing Corning cellular system.
3. Upgrade and activate the existing Corning cellular system to support interconnection to the new
5G Enterprise RAN.

Before the creation of the enterprise radio access network (E-RAN), there was no cost-effective
method of delivering cellular service into medium to large buildings. Equally problematic, if mobile
operators needed to host a cloud at the edge, they had to provision a separate server at the site and
manage it throughout its lifecycle. Overcoming these challenges, the E-RAN small cell system from
SpiderCloud Wireless* provides cellular service to indoor areas spanning from 50,000 square feet up to
1.5 million square feet of space, and supports over 10,000 voice and data subscribers. The E-RAN's
unique architecture features a Services Module with a quad-core Intel® Xeon® processor, whose cost
can be amortized across up to 100 small cell radios, thus providing an economical edge cloud hosting
location that also supports network functions virtualization (NFV). Since the E-RAN enables access to
the Internet, the mobile core, the enterprise DMZ, and 3G/LTE radio access networks, this edge cloud
hosting location enables independent software vendors (ISVs) to build unique, innovative applications:
enterprise-facing, standalone, enterprise cloud helper, and telecom analytics/operations. This solution
brief describes the E-RAN system, which has been in commercial service for several years, on three
continents, with the world's largest mobile operators. The system meets the stringent 3G/LTE key
performance indicators (KPIs) that are critical to mobile operators and enterprises. Due to its unique
architectural location, the SpiderCloud Services Module gives rise to new opportunities in edge cloud
enterprise services and telecom operations applications. The pent-up demand for an economical edge
cloud hosting location has resulted in various types of applications developed by a number of solution
providers, including some featured in the following sections.

Solving Enterprise Communications Challenges:


Poor cellular coverage and the lack of data capacity in-building are major problems for the enterprise
sector in its broadest sense - that is, not just in offices, but in retail establishments, transport hubs,
industrial premises, hotels, and more. The shortcomings are becoming more glaring as voice and data
usage shifts away from landlines to mobile devices, and many companies support bring your own device
(BYOD) policies. These factors are turning inadequate voice quality or data capacity from irritants into
critical competitive issues for businesses, as well as for mobile operators, whose customers are
increasingly willing to switch operators in order to get better service. Small cells can address these
problems by delivering improved coverage and capacity in a secure and managed fashion, while
dramatically cutting CapEx and OpEx for mobile operators and lowering mobile service costs for

enterprises. They can also help businesses address additional challenges such as integrating
communications across the organization, and ensuring security and legal compliance.

Small Cell Challenge:


In the United States alone, buildings larger than 50,000 sq. ft. account for more than 50 percent of
commercial space, reports the U.S. Department of Energy in its Building Report. With respect to
mobile communications, many factors, such as building materials or outdoor tower network coverage,
drive the need to provide services from inside the building. However, the traditional gap in the market
has been a lack of small cell systems that support buildings over 50,000 square feet and meet the KPIs
of mobile operators. Furthermore, the alternatives to small cell technology are typically very expensive
and hand-crafted. SpiderCloud took on the challenge of building a scalable small cell system,
incorporating enterprise technology such as Cat 5/6 cabling, Power-over-Ethernet, and TCP/IP, that
functions as a harmonious member of the overall cellular network. The system also supports:
• Hardware features access: trusted platform module (TPM) and IPsec
• Transport access: mobile core, Internet, enterprise DMZ, and 3G/LTE RANs
• API access to platform events and traffic flows

The solution is also an edge cloud, featuring an Intel Xeon processor-based Services Module capable of
sharing an NFV hosting environment across a cloud of small cells and delivering services evenly to
every small cell in its domain. Intel championed the business case and technology on different
standalone small cells, demonstrating how Intel® processors could be integrated into an E-RAN. Still,
it was SpiderCloud that overcame a major challenge confronting both Intel and small cell
manufacturers: enabling the subsystem cost to be amortized across multiple small cells.

Small Cell System Overview

SpiderCloud offers a scalable, in-building 3G/4G small-cell system called the E-RAN, which can
provide cellular coverage and data capacity from upwards of 50,000 square feet to 1.5 million square
feet of space, and support over 10,000 voice and data subscribers. A single system comprises up to 100
Radio Nodes connected over an Ethernet-based network to an on-premises SpiderCloud Services Node
(SCSN), as shown in Figure 1.

The SCSN, which is the central configuration and services enabler, acts both as a controller of Radio
Nodes and an aggregator. The SCSN securely connects to the mobile operator’s core network with just a
single backhaul connection. This enables the operator to deliver managed mobility services to its
enterprise customers. Without the presence of a local control point on an enterprise customer’s Ethernet
network, a mobile operator cannot effectively coordinate small cells or support inter-small cell
signaling. In addition, a local controller avoids having hundreds or thousands of small cells connecting
back to the mobile operator's gateways, which otherwise would slow down handovers and connections
and complicate interference coordination inside buildings for 3G and LTE small cells.

Small Cell System Details: The SCSN supports up to three modules: two Access Modules (AMs) and a
Services Module (SM), as shown in Figure 2. The AM supports radio access technologies, including
UMTS and LTE. The SM is designed to support third-party virtual network functions (VNFs) and is
powered by an Intel Xeon processor with 8 GB of RAM that provides a virtualized environment for a
wide range of applications. The SM offers a Kernel-based Virtual Machine (KVM) environment that
enables hosting of multiple concurrent virtual appliances across different operating systems (guest OSs)
and programming languages. Applications hosted on the Services Module have access to a 120 GB
Intel® Solid-State Drive (Intel® SSD).
