Unit - I - Unit-V Notes - Basic IP Services


RATHNAVEL SUBRAMANIAM COLLEGE OF ARTS AND SCIENCE

(AUTONOMOUS), SULUR.

SCHOOL OF COMPUTER STUDIES (UG)

B.Sc-INFORMATION TECHNOLOGY

Course II B.Sc-IT
Subject Major Theory: Basic IP Services
Batch 2016-2019
Semester IV - 2017-2018 (Even Semester)
Faculty V.YUVARAJ

UNIT- I to UNIT- V – Lecture Notes


UNIT-I
5.3 DHCP Server Configuration

5.3.1 Configuring a DHCP Server

In this demonstration we're going to practice configuring a DHCP server, and we're going to implement
that on this Windows Server 2012 system here. Notice this server has two different Ethernet interfaces
installed. This server is doing a lot of things and fulfilling a lot of network roles all at once. First of all, it is a router because we have two network interfaces installed: one connected to the public network that goes out to the internet, and one connected to a little private network with a couple of hosts on it, so it routes data from the private interface to the public one to get out on the internet. It's also providing DNS services--name resolution services.
We have a couple of hosts on this network right here--this subnet right here--and they're configured to
use static IP addressing. As we keep adding hosts to that network segment however, using static
addressing is getting out of hand so we want to enable DHCP just on this subnet here. That way this
server will automatically hand out IP addresses whenever a DHCP client comes up on that network
segment.
Installation of the DHCP Role

Let's go ahead and do that. To do this we have to add the DHCP role. We'll click on 'Add Roles and
Features' to bring up the Add Roles and Features wizard, click on 'Next', we'll do a role-based
installation, we'll make sure our server is selected down here in the list, there's only one server on the
network so it's the only one we can pick, and we'll click on 'Next'. We'll come down here and mark DHCP to add the DHCP role, along with the features needed to support it, to the server, then click 'Next'. We'll click 'Next' on the features screen, click 'Next' again, and click 'Install' to install the
role. We'll wait just a minute while the DHCP server role is installed.
Okay. The installation of the role is complete. However, it's not configured yet. The next thing we need
to do is actually click on this option right here, 'Complete DHCP configuration.' This will launch the
post-install wizard. We'll hit 'Next'. We have to specify which domain user credentials will be used to
authorize this DHCP server. The reason we have to authorize this DHCP server is because of a problem
we had in the past with rogue DHCP servers. What would happen is somebody either maliciously or
accidentally--which is usually the case--would set up a DHCP server such as we're doing right now and
then connect it to a production network. What would happen is it would start handing out IP addresses
to the various clients on the network, and it wasn't supposed to be doing that. It would cause all kinds
of problems. Before you can turn the DHCP server on, you've got to authorize it, and we have to do so
as the administrator user that I'm currently logged into this system as. We'll go ahead and click
'Commit' and we'll click 'Close'. Note, it does tell us that we need to restart the DHCP server for the
changes to be effective so we'll click 'Close' there, and we will click 'Close' here.
We'll go up under Tools, go to Services, locate the DHCP Server service, and then we'll hit 'Restart'. We
are good to go. The DHCP service is now available on this server, but it hasn't been configured-- it can't
do much of anything. We need to fix that. Let's go over to Tools. Let's go to DHCP, bring up the DHCP
management console. Let's make this a little bigger. Here's our DHCP server--we're going to be
working with IPv4. Remember earlier I said that we want DHCP addresses to be handed out on the private interface but not on the public one. The network that the public interface is connected to already has a DHCP server, and we would essentially become a rogue DHCP server handing out wrong addressing information--that's not cool. We want to limit the DHCP server to just the private interface.
Let's go back over to the DHCP console, right-click the DHCP server itself, click on 'Add/Remove Bindings,' and we will turn off DHCP on the 10.0.0.60 interface and leave it bound to just the 192.168.2.254 interface. Click OK.
DHCP Scope

Notice over here under IPv4 that we don't have any scopes--we have no addresses to hand out. We need
to define a scope. These scopes will say these are the IP addresses that you can hand out, these are the
IP addresses you may not hand out, and these are the options you can deliver with your IP addresses.
Let's right click on IPv4, and click on New Scope. The New Scope wizard appears, click 'Next.' We
need to give the scope a name. Let's call it 'Internal Network Scope.' Click 'Next.' We have to specify
the IP addresses that this DHCP server will be allowed to hand out. Let's go ahead and hand out all the
addresses that are available on that subnet. If we go back over here to the server manager we see that
this is on the 192.168.2 subnet. The subnet mask is 255.255.255.0 so let's just hand them all out
(192.168.2.1 to 192.168.2.254).
Remember we can't use a '0' up here because that's the network address, and can't use 255 down here
because that's the broadcast address. We'll use the default subnet mask, or classful subnet mask, of
255.255.255.0. Click 'Next.' Notice we've got a little problem; we're handing out all the addresses
between 192.168.2.1 and 192.168.2.254. We've got other devices on that subnet that are statically
assigned an IP address. If that's the case, you cannot hand those addresses out via DHCP; otherwise,
you're going to get an IP address conflict. For example, notice we have 192.168.2.254 assigned to this
interface. We cannot hand that IP address out via DHCP. There's also another host on this network that
has a static IP address assigned of 192.168.2.253. We need to create an exclusion; we need to specify
that you are not allowed to hand out these specific IPs. Let's enter 192.168.2.253 to 192.168.2.254;
we'll add that exclusion. That will block those addresses out and not hand them out via DHCP.
Now we can specify the Lease Duration. You can specify how long DHCP clients are allowed to keep
an IP address from the DHCP server before they have to either renew it or give it up. We're going to
leave it set to the default of just eight days. You can change that however you need to. Be aware that if
you decrease the lease time, it does increase network traffic a little bit because clients will have to
continually renew their IP addresses, but it also makes it so that you use your addresses more
efficiently. For example, if you have a host that's assigned an IP address via DHCP, and that host goes
offline for some reason, if you leave this set to eight days, it could be a very long time before that
address is available again. If you set it to a lower value, that'll clean up faster. You have to weigh the
benefits versus the cost of adjusting the lease duration. We'll just leave it at the default for our purposes.
Click 'Next'.
We need to specify whether or not we want to configure DHCP options. We do want to, but we're not
going to do it right now, so I am going to set this option to 'No, I will configure these options later,' and we are
done. I'll click 'Finish,' and we now have a scope defined. If I expand the scope, I can see in my address
pool the range of addresses that will be handed out, including our excluded addresses that we will not
hand out.
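To make the relationship between the address pool and the exclusion range concrete, here is a minimal Python sketch (illustrative only, not part of the Windows tooling) that models the scope we just defined and counts the addresses the server is actually allowed to lease.

import ipaddress

# Scope defined in the demonstration: 192.168.2.1 - 192.168.2.254, mask 255.255.255.0
subnet = ipaddress.ip_network("192.168.2.0/24")
pool_start = ipaddress.ip_address("192.168.2.1")
pool_end = ipaddress.ip_address("192.168.2.254")

# Exclusion range for the statically addressed hosts (the router interface and the other static host)
excluded = {ipaddress.ip_address("192.168.2.253"),
            ipaddress.ip_address("192.168.2.254")}

# Addresses the DHCP server may lease = pool minus exclusions.
# subnet.hosts() already omits the network (.0) and broadcast (.255) addresses.
leasable = [ip for ip in subnet.hosts()
            if pool_start <= ip <= pool_end and ip not in excluded]

print(f"{len(leasable)} leasable addresses, "
      f"from {leasable[0]} to {leasable[-1]}")   # 252 addresses, .1 through .252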
In addition, we have an option down here to define reservations.
DHCP Reservation
Reservations are IP addresses that have to be assigned to a specific host on the network every single time; they can't be assigned to anybody else. The reservation is made based upon the MAC address of the host. We can come over here, right-click, and select 'New Reservation.' We could specify an IP
address that we want to be always assigned to the same host, say 192.168.2.3. If we do that, we would
then come down here and enter in the MAC address of that host that will always get this IP address.
That way the next time that host comes up this IP address would be assigned.
Scope Options

We're not actually going to do that today. We'll click 'close'.


Notice that there is also an option here for scope options. Scope options are additional information that
can be delivered via DHCP to our DHCP clients. By default, if we don't configure these, all we're going to give our clients is an IP address and subnet mask. That's great, but what if they need to get outside their local subnet? Well, they've got to have a default gateway router address, right? What if they want to use DNS names instead of IP addresses? In that case, they have to know the IP address of a DNS server.
Let's go ahead and create options that will deliver those along with the IP address and subnet mask.
Configure options. We want to first set our default gateway, so I'll mark option '003 Router,' and I will put in the IP address, which is actually this server back here because it's doing everything on this network: 192.168.2.254. We will add it, and then we need to come down here and mark the DNS Servers option as well. This server is also the DNS server, so we'll add the same address, 192.168.2.254. Okay. Both of these options will now be delivered to clients whenever they receive an IP address from this DHCP server.
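For reference, the values we just configured map onto standard DHCP option codes. The sketch below is a simple Python dictionary showing what this scope will now deliver with every lease; the option numbers are the standard codes (1 = subnet mask, 3 = router, 6 = DNS servers, 51 = lease time), and the values are the ones used in this demonstration.

# Standard DHCP option codes and the values configured in this demo.
scope_options = {
    1:  "255.255.255.0",    # Subnet Mask (always sent with the leased address)
    3:  "192.168.2.254",    # Router (default gateway) - the option we just added
    6:  "192.168.2.254",    # DNS Server - the option we just added
    51: 8 * 24 * 60 * 60,   # IP Address Lease Time in seconds (8 days, the default)
}

for code, value in sorted(scope_options.items()):
    print(f"option {code:>3}: {value}")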
Activation of Scope/Test Configuration

We are almost ready to go. However, notice here under the scope we have a red down arrow. That
means the scope actually isn't activated yet. It is all defined. It's all ready to go, but the DHCP server is
not using it to hand out IP addressing information. What we need to do is activate it, so we'll right-click the scope and select 'Activate.' Now we are ready to go. The DHCP service is running and the scope is defined and active. We are ready to test this implementation with a client system.
We have a Windows 7 system here. It's on the 192.168.2 subnet. However, it does not have an IP
address assigned to it. It's configured to use DHCP, but up until just a minute ago there was no DHCP
server on this segment. As you can see down here, it has no addressing information assigned. Let's go
ahead and fix that. We will open up a command prompt. Just in case there's something lurking around
that we don't want, we're going to use the IPCONFIG command with the release option. Just in case
there already was a DHCP lease in place, we want to clear that out and start fresh with the new DHCP
server. I don't think there is but just in case. We are all cleared out. You can see that APIPA has kicked
in. An IP address has been assigned via APIPA to the system, which is practically useless. Let's go
ahead and get a new address from our new DHCP server. We'll enter the IPCONFIG command again.
This time we will use the '/renew' option. This will tell IPCONFIG to go out and contact the DHCP
server, whatever one it can find, and get IP addressing information. It will take just a minute to
complete.
This is looking better. Notice here that we were assigned an IPv4 address of 192.168.2.1. Here is the
subnet mask and here is the default gateway, 192.168.2.254. Notice that our icon, down here, has
changed. We have good addressing information. Let's do an ipconfig /all and just verify our DNS server
address. DNS server is 192.168.2.254.
Summary

Our DHCP server is working.


That's it for this demonstration. In this demo we practiced configuring a DHCP server. We first installed
the DHCP role on a Windows server, we then configured a DHCP scope, we then talked about DHCP
reservations, we then looked at DHCP options, and then we activated our scope and tested the
configuration on a Windows client workstation.
5.3.3 Configuring DHCP Options

In this demonstration, we're going to learn how to configure DHCP options. I'm going to go up to
'Tools,' and click on 'DHCP.' If I expand my server, you can see that I've set up a scope for the
192.168.1.0 network, and my address pool runs from 192.168.1.1 to 192.168.1.100, and I've excluded
192.168.1.1 as a static IP address in the environment, probably for the router.
View Initial Client Configuration

Currently, I don't have any options set. Let's go over to our client and take a look at what that looks
like, before we set up the options.
We'll go ahead and do an 'ipconfig.' If I do an ipconfig, you can see that we've picked up an IP address
from the DHCP server. Our IP address is 192.168.1.2, which is the first address in the pool. I don't have
a default gateway, and if I do an 'ipconfig /all,' and I scroll up, you can see that I don't have any IP
version 4 DNS servers, and I don't have a connection-specific DNS suffix, which are a couple of things that we're going to change when we go in and set up options.
Let's go back to our server and take a look at setting up the options.
Options happen at three levels that we commonly work with.
Server Options

The first level would be server options. Server options apply to everyone who's a client of this particular server. Even if I had five scopes on there (192.168.1.0, 192.168.2.0, 3.0, 4.0, and so on), when I set a server option, all of the clients from all those different scopes are going to receive that option. I'm
simply going to right-click, and 'Configure Options.' I'm going to scroll down and specify a DNS
Server and we'll put in the IP address as 192.168.1.10. As you can see, it doesn't like this, because I
don't actually have DNS running on that address, but I'm going to say, 'Yes,' because eventually I will
have it running. Normally you would put that address in and it would be fine.
Now, we're going to go down and provide that connection-specific DNS suffix. It's option '015 DNS Domain Name.' I'm going to specify it as company.com. There are lots and lots of options in here, more than we could ever go through in any video, but essentially, the big thing is that they provide extra information. I'm going to go ahead and click 'OK.'
Scope Options

The server options apply to all clients of the server. We also have scope options, and one thing I haven't put in is the address of the router, because the router is always configured as a scope option; it's specific to that particular subnet. So I'm going to right-click my 'Scope Options,' do 'Configure Options,' and specify my router as 192.168.1.1.
The third level of options that we might use would be client options, which are done in a reservation.
Client Options

If I open up 'Reservations,' I've created one reservation for Printer 1. If Printer 1 needed different
information, let's say a different DNS Server, or a different DNS Suffix, then I could right-click the
'reservation' and go into 'Configure Options' and set up options just for that particular client. Client
options will override server options and scope options. I don't have a real printer, so I'm going to hit
'Cancel.'
Now that we've set up our router option as a scope option, and the console shows a slightly different picture, it's time to go back to our client and test it.
View Options on the Client

I'm going to go ahead and do an 'ipconfig /release.' Then we'll go ahead and do an 'ipconfig /renew.'
You can see, right away, that we've picked up the default gateway, which was that 003 router scope
option that we had specified. If I do an 'ipconfig /all,' you can see that I now have an IP version 4 DNS
server of 192.168.1.10 and a connection-specific DNS suffix of company.com.
Again, options provide extra information. We can set them at the server level, where they apply to all clients of that server; at the scope level, where they apply to clients of that scope; or as a client option that applies only to a single client, which is set in the reservation. All three levels deliver additional information to the client. By default, DHCP is only going to give you an IP address and a subnet mask unless you configure options.
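The precedence just described (client options override scope options, which override server options) can be sketched as a simple merge order. This is only an illustration of the idea, using the values from this demonstration; it is not how the Windows DHCP server is implemented.

# Options configured in the demo, keyed by standard DHCP option code.
server_options = {6: "192.168.1.10", 15: "company.com"}   # 006 DNS Servers, 015 DNS Domain Name
scope_options  = {3: "192.168.1.1"}                        # 003 Router for the 192.168.1.0 scope
client_options = {}                                        # per-reservation overrides, none in this demo

# Effective options for a client: server values first, then scope, then client overrides.
effective = {**server_options, **scope_options, **client_options}
print(effective)   # {6: '192.168.1.10', 15: 'company.com', 3: '192.168.1.1'}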
Summary

In this demonstration, we learned how to configure DHCP options at the server level, scope level and
client level.

5.3.7 Configuring Host Addressing

In this demonstration, we'll look at the client settings you can use to automatically configure TCP/IP
values. For this demo, our computer is connected to the Internet. When we right-click the 'Network' icon and click 'Network and Sharing Center,' we see the computer is connected to the network and to the Internet. When we click Internet > Properties > Internet Protocol Version 4 > Properties, we see a
statically configured IP address, default gateway address, and DNS server address.
We have a DHCP server on our network. We want to use DHCP to assign the IP addresses instead of
manually configuring them. To do this, we click the 'Obtain an IP address automatically' option. Notice
that even though we're requesting an IP address from the DHCP server, we can still manually set the DNS server addresses. In this case, we want both the IP address and the DNS server address to come
from the DHCP server.
Let's close this dialog box and test the configuration. So we'll click OK > Close > Diagnose. Diagnose
instructs the computer to apply the changes immediately. We'll click 'Close.' Notice that we're still
connected to the Internet, but if we click 'Details,' we see that we now have an IP address of
192.168.20.31. We also have a subnet mask and a default gateway listed, as well as a DHCP server IP
address. This is the address it used to get the configuration information.
Summary

We also have two DNS server addresses. That's it for this demonstration.
In this demonstration, we used the different options for configuring a client's TCP/IP properties. You
can choose to obtain the information automatically or statically assign it.
5.3.9 DHCP Configuration Facts

The Dynamic Host Configuration Protocol (DHCP) centralizes management of IP addressing in a


network by allowing a server to dynamically assign IP addresses to clients. DHCP also allows mobile
users, who move from network to network, to easily obtain an IP address appropriate for each network
they connect to.
Because a DHCP client doesn't have an IP address when it initially boots, it must use broadcast frames
to communicate with a DHCP server. The table below describes the method used to obtain an address
from a DHCP server:
DHCP Discover (D): The client begins by sending out a DHCP Discover frame to identify DHCP servers on the network.
DHCP Offer (O): A DHCP server that receives a Discover request from a client responds with a DHCP Offer advertisement, which contains an available IP address. If more than one DHCP server responds with an offer, the client usually responds to the first offer that it receives.
DHCP Request (R): The client accepts the offered address by sending a DHCP Request back to the DHCP server. If multiple offers were sent, the DHCP Request message from the client also informs the other DHCP servers that their offers were not accepted and that the IP addresses contained in their offers can be made available to other clients.
DHCP ACK (A): The DHCP server responds to the request by sending a DHCP ACK (acknowledgement). At this point, the IP address is leased to and configured on the DHCP client.
If the DHCP server is on a different subnet, additional configuration steps are required,
since the DHCP broadcast frames are dropped by network routers by default.
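The four broadcasts above are often remembered by the acronym DORA. The following Python sketch simply walks through the exchange conceptually; the messages are simplified placeholders, not real DHCP packets, and the offered address is just an example.

# Conceptual walk-through of the DHCP lease exchange (Discover, Offer, Request, ACK).
def dhcp_lease_exchange(offered_ip="192.168.2.1"):
    print("client  -> broadcast : DHCP Discover (who can give me an address?)")
    print(f"server  -> client    : DHCP Offer   (you may use {offered_ip})")
    print(f"client  -> broadcast : DHCP Request (I accept {offered_ip})")
    print(f"server  -> client    : DHCP ACK     ({offered_ip} is now leased to you)")
    return offered_ip

dhcp_lease_exchange()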

Keep in mind the following when configuring a DHCP Server:


 The DHCP service needs to autostart when the server boots.
 The DHCP server must have a static IP address.
For a DHCP server to deliver IP addresses, it must have a scope configured. A scope is the range of IP
addresses that the DHCP server can assign to clients. When working with scopes, remember the
following:
 There should be only one scope per network segment.
 The scope must be activated before the DHCP server can assign addresses to clients. After you
activate a scope, you should not change it.
 A scope has a subnet mask that determines the subnet for a given IP address. You cannot change
the subnet mask of an existing DHCP scope; to change the subnet mask used by a scope, you
must delete and recreate the scope.
 Lease duration values are part of the scope properties, and they determine the length of time a
client can use an IP address leased through DHCP.
In addition to providing IP addresses, a DHCP server can also provide clients with additional IP
configuration parameters using options. Commonly used DHCP options include the subnet mask, the
default gateway address, and a DNS server address. The following three levels of options can be
configured:
 Server options are applied to all computers that get an IP address from the DHCP server,
regardless of which scope they obtain the address from. (e.g., if your organization has only one
DNS server, then all DHCP clients need the same DNS server address.)
 Scope options are applied to all computers that get an IP address from a particular scope on the
DHCP server. (e.g., because scopes are associated with specific subnets, each scope needs to be
configured with the appropriate default gateway address option.)
 Client options are applied to a specific DHCP client. The client's MAC address is used to
identify which system receives the option.
The DHCP console provides context-sensitive icons to reflect DHCP server status as follows:
 A check mark in a green circle indicates that the DHCP server is connected and authorized.
 A red down arrow indicates that the DHCP server is connected but not authorized.
 A horizontal white line inside a red circle indicates that the DHCP server is connected, but the
current user does not have the administrative credentials necessary to manage the server.
 An exclamation point inside a yellow triangle indicates that 90% of available addresses for
server scopes are either in use or leased.
 An exclamation point inside a blue circle indicates that 100% of available addresses for server
scopes are either in use or leased.
5.4 DHCP Relay

As you study this section, answer the following question:


 What is the difference between an RFC 1542 compliant router and a DHCP relay agent?
After finishing this section, you should be able to complete the following tasks:
 Configure a DHCP relay agent.
 Add a DHCP server on another subnet.
This section covers the following Network Pro exam objective:
 Domain 4.0 IP Configuration
 Given a Windows system, configure the network connection to communicate outside of
the local network.
5.4.1 Configuring DHCP Relay

In this demonstration, we're going to configure a DHCP relay agent. Let's go ahead and look in DHCP
and see what is going on right now.
If I go into DHCP, we can see that we have one scope set up. It's for the 192.168.1.0 network; it runs from .1 to .100, and I've excluded the address .1 because that's the address of the router.
If I go in and I right-click 'IPv4' and I click 'Display Statistics,' you can see, right now, that I've seen 83
discovers, I've made seven offers, 137 requests.
Discoveries/Offers

A discover is counted when the DHCP server hears a client broadcasting and asking for an IP address. An offer is counted when the server offers an IP address, which it will do when the incoming broadcast comes in on a network card whose IP address matches the scope. For example, in this particular situation, if I
go back into server manager, my IP address is 192.168.1.10, which is on the 192.168.1.0 network.
Because I have a scope, 192.168.1.0, when the client sends out a broadcast, "Hey, I'm looking for a
DHCP server. Is there a DHCP server in the house?" That comes in on that IP address on the
192.168.1.0 network, which matches the scope, and my DHCP server makes an offer.
Verification of Broadcasts

We can verify that the broadcasts are getting to the DHCP server by going over to the client and
releasing or renewing our IP address and we'll come back and look at the statistics.
I want to show you what's going on before we do the relay agent, so you can see what happens. Otherwise, it's hard to conceptualize.
Let's go over to our DHCP client.
Here on our DHCP client, I'm going to do an 'ipconfig /release' and I'll go ahead and do an 'ipconfig
/renew.' You can see, I've picked up my IP address. Let's go back to the server and see what happened
on the server side.
Back on my server, if I right-click 'IPv4' and go to 'Display Statistics,' you can see that the discover count has increased by one, and that's from the renew that I did after I released the IP address.
Purpose of Relay Agent

A relay agent is used to service clients that are not on the same network segment as the server. They're
across a router from the DHCP server. The problem with routers is they don't pass broadcast traffic. If the client is on a different network segment, the DHCP server is never going to get that DHCP Discover.
Setup New Scope

It's not going to hear the client calling for an IP address, so it's not going to answer.
The first thing we want to do in DHCP, before we set up the relay agent, is make a scope to
accommodate that new subnet. I'm going to right-click and do a 'New Scope' and this scope we'll call
Additional Subnet. We'll run it from 192.168.2.1 to 192.168.2.100. I'm going to go ahead and exclude
192.168.2.1 because that's the address of my router. I'll hit 'Next.' The default lease is fine for us,
and we will configure the options; the most important option we need is the gateway, which is
192.168.2.1. I'm going to go ahead and add that in. I'm fine with DNS, no WINS, and I'll go ahead and
activate the scope.
Remove Client from Old Subnet

I have a scope to service my new subnet. Let's go ahead and take the client off of the old subnet and see
what happens right now.
I've made a change in the background of my virtual machine, to disconnect it from the network
segment that the DHCP server is on and now it's physically connected to a different virtual switch,
which is the same as putting it on a different network. I have a Windows machine that's acting as a
router that has a network interface in both of these two networks.
If I do an 'ipconfig /release' and then an 'ipconfig /renew,' we would expect to see that the client cannot renew its IP address. The reason is that the broadcast is no longer reaching the DHCP server, because it would have to go through the router, and by default routers don't pass broadcast traffic, so nothing is going to happen.
And this could take quite a while, because the client is going to make many attempts to contact the
DHCP server before it finally gives up and fails over to APIPA.
We can see here that an error occurred while renewing the interface: unable to contact the DHCP server.
Now that the client is on one network, and DHCP is on a different network, we need to set up our
DHCP relay agent, in order to pass that broadcast traffic from the client's network over to the DHCP
server.
Windows Box/Router

The DHCP relay agent, if you're setting it up on a Windows box, is set up in Routing and Remote Access. Many times, routers will be able to function as a DHCP relay agent, and then you would program the IP helper addresses. In our case, we have a Windows box and we're going to turn that Windows server, which is
acting as a router, into a DHCP relay agent.
Let's go over to our router, and set it up to relay DHCP broadcasts.
Here we are on our router and I've very creatively named it, Router. You can see it has two IP
addresses, 192.168.1.1 and 192.168.2.1. I'm going to go ahead up into Tools and open up routing and
remote access. The first thing I need to do is add in the DHCP relay agent. I'm going to right-click
'General' and do 'New Routing Protocol.' The one we want is the one that's highlighted, DHCP Relay Agent. The next thing I need to do is tell the DHCP relay agent which network it should listen for
DHCP broadcasts on. I'm going to right-click and do a 'New Interface.' In this case, I want it to listen
on the 2.0 network. I'm going to click 'OK.' It says, "All right. Should I relay DHCP packets from that
network?" Absolutely. Now that my DHCP relay agent is listening on the 192.168.2.0 network, the last
piece is for me to tell it, to whom should it give those DHCP packets? I'm going to right-click and go to
the 'Properties' and say, "All right. You're going to be relaying this to 192.168.1.10, which is the IP
address of my DHCP server." I add that in, and click 'OK.'
Looking at Statistics

Before we go, let's go back to our DHCP server and take a quick look at where we are in terms of our
statistics.
I'm going to go ahead and right-click 'IPv4' and 'Display Statistics.' You can see, even though we've
done the release and renew over on the client, we're still on 84 discovers, because that renew never
made it through that router and over to the DHCP server.
Let's go back to our client one last time, and take a look at what happens, now that we've configured
the DHCP relay agent.
I'm going to go ahead and hit the 'Up' arrow, and do my 'ipconfig /renew.' You can see that I've picked
up an appropriate IP address for my subnet.
Analysis

I've been assigned the IP address of 192.168.2.2. My default gateway is 192.168.2.1 and, if I do an
'ipconfig /all,' you can see that the IP address of my DHCP server, is 192.168.1.10.
Finally, we will finish off by going back to our DHCP server. I'm sure you'll believe me that the DHCP
discover statistics have gone up, but just in case, I'll show you what's going on over there.
I'm going to right-click 'IPv4,' 'Display Statistics,' and you can see that we've got 85 discovers.
Review

DHCP relay agents are used whenever the client is on a different network than the DHCP server. You
need to make sure that you have a scope that matches the network address of the client. You might ask yourself, "How did the server know to hand an address from the 2.0 scope to that client over on the 2.0 network?" It's simple. We have our DHCP relay agent listening on its 192.168.2.1 network card. The request for DHCP comes from that relay agent network card of 192.168.2.1. The DHCP server hands out an address from the scope that matches the network ID of where the request is coming from. You make sure the scope matches the network address of the DHCP relay agent's interface. Those things have to match up. We go
into the DHCP relay agent, we add in the service, we tell it which network card to listen on, and then
we tell it the address of the DHCP server. We've got centralized DHCP for our clients that don't have a
DHCP server on their subnet.
Summary

In this demonstration, we configured a DHCP relay agent.


5.4.2 DHCP Relay Facts

Because a DHCP client doesn't have an IP address assigned when it initially boots, it must use
broadcast frames to communicate with a DHCP server. If the server is on a different subnet from the
client, then the DHCP requests sent by the client will not be able to reach the server, because broadcast
frames are dropped by network routers. If your network is configured in this manner, you can
implement one of the following mechanisms to forward DHCP broadcasts through network routers to a
remote DHCP server on a different subnet:
RFC 1542 Compliant Router: An RFC 1542 compliant router listens for DHCP traffic and routes any received DHCP frames to the appropriate subnet. For example, on a Cisco router, you can enable this functionality by using the ip helper-address command. The syntax is:

ip helper-address [server_address]

Replace [server_address] with the IP address of the remote DHCP server.

DHCP Relay Agent: If you are using a Windows server in your network, then you can install the Routing and Remote Access service (RRAS) role on the server and enable the DHCP Relay Agent role service. The DHCP Relay Agent sends the DHCP packets it receives to a remote DHCP server on a different subnet. To configure the DHCP relay service, you must do the following:
 Specify which server network interface the agent listens on for DHCP messages.
 Specify the IP address of the remote DHCP server where the agent should forward DHCP messages.
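To tie these two options back to the earlier demonstration, here is a rough Python sketch of what a relay agent conceptually does: it hears the client's broadcast on one interface, records that interface's address in the relayed packet (the giaddr field), and unicasts the packet to the configured DHCP server, which then answers from the scope whose network contains that address. The addresses are the ones from the demo; the functions are hypothetical illustrations, not the RRAS implementation.

import ipaddress

# Scopes configured on the DHCP server in the demo.
scopes = {
    ipaddress.ip_network("192.168.1.0/24"): "Original subnet scope",
    ipaddress.ip_network("192.168.2.0/24"): "Additional Subnet",
}

def relay(discover, listening_interface="192.168.2.1", dhcp_server="192.168.1.10"):
    # The relay agent stamps the broadcast with the address of the interface
    # it was heard on (giaddr), then unicasts it to the DHCP server.
    discover["giaddr"] = listening_interface
    print(f"relaying Discover to {dhcp_server} with giaddr={listening_interface}")
    return discover

def choose_scope(discover):
    # The server picks the scope whose network contains the giaddr.
    giaddr = ipaddress.ip_address(discover["giaddr"])
    for network, name in scopes.items():
        if giaddr in network:
            return name
    return None

packet = relay({"msg": "DHCPDISCOVER"})
print("server answers from scope:", choose_scope(packet))   # Additional Subnet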

5.7 Multicast

As you study this section, answer the following questions:


 How does multicast differ from unicast and broadcast?
 What is the IP address range reserved for multicast groups?
 What does a regular switch do when it receives a multicast frame?
 Which device would you configure to prevent multicast traffic from being sent to non-group
members?
This section covers the following Network Pro exam objectives:
 Domain 4.0 IP Configuration
 Identify and select valid IP addresses and subnet masks for network connections.
 Given a Windows system, configure static IPv4 or IPv6 address information on a
network connection.
5.7.1 Multicast

Multicasting is used to create logical groups of computers, which allows a single message to be sent to
the group. To understand the benefit of multicasting, let's first look at what would happen without
multicasting.
Suppose we have a single server that needs to send a message to three different devices, such as an
audio stream for an online meeting.
Unicast

One method the server could use to reach all three computers is called 'Unicasting.' With a unicast, the
server sends out an individual message addressed to a specific device. In this case, it would have to
send three separate messages, one to each of the devices. Each message is identical to the others. To do
this, the server must know the IP address of each device.
While unicasting would work, it would require the server to know the IP addresses of all the destination
devices and to generate identical data streams, one for each computer. This creates a great deal of
network overhead as the same data is sent redundantly to multiple computers.
Broadcast

Another option would be to use a Broadcast. A broadcast is a single packet that is sent to all devices. In
this case, the server sends out one message using a special broadcast address. All devices receive the
message and process it. However, if I have additional devices on the same network segment, they will
also receive that same message even though they weren't intended to. Another problem with
broadcasting is that routers are usually configured to not forward broadcast messages. If there are
routers separating my sending server and my receiving devices, the broadcast traffic will reach the
router but will not be forwarded through to the destination hosts.
IGMP

Multicasting, on the other hand, uses the Internet Group Management Protocol, or IGMP, to define
multicast groups and group members. Routers can use IGMP to send messages to subnets that have
group members.
First, the router sends out a message addressed to 224.0.0.1, which is a special reserved IP address on
the local subnet. The message asks any host that is a member of any multicast group to please respond.
Each host then responds with a list of the multicast groups it is a member of.
Group Membership

Multicast groups are identified by special IP addresses in the reserved range of 224.0.0.0 to 239.255.255.255. Each group has its own multicast address, although addresses within the 224.0.0.0/24 block are reserved for local subnet communications.
As each host responds to the router, the router then compiles a list of all of the groups that exist on the
subnet. The router actually doesn't keep track of which hosts are members of which group. It only
remembers that the subnet contains at least one member of each group.
The router then sends a list of its groups to additional routers that are upstream between the router and
the source computer. In this case, the router will send a list upstream to this router indicating that its
network contains members of groups one, two, and three.
Multicast Messages

When the server needs to send a message to the group, it sends a message addressed to the group's
multicast IP address, such as 239.10.11.155 in this example. It sends the message to the router. The
router then identifies the subnets to which group members are attached. For example, suppose this
address is assigned to group two. This router knows that members of group two are on this segment, so
it forwards this message down to the other router. If there are segments connected to other routers that
do not contain group members, this message will not be forwarded out that segment. This router then
receives the multicast packet and sends it out on the segment to which the appropriate group members
are attached.
Routers do not identify individual members of a group. Routers only remember which subnets have at
least one group member. When a switch is used to connect hosts to routers, the multicast messages are
forwarded to all hosts on the switch.
In this example, we have four computers connected by a switch to the router. These three computers are
members of group two, which has a multicast address of 239.10.11.155. When the router receives a
multicast packet, it creates a frame with a special destination MAC address.
Multicast Frames

The MAC address of a frame containing a multicast packet always begins with 01-00-5E. The
remaining portion of the MAC address is a modified format of the multicast address.
The router sends this frame to the switch. The switch checks its forwarding table for the MAC address.
Suppose it finds that none of the hosts connected to any of its ports has the MAC address that matches
the MAC address used with the multicast frame. Because it's an unknown address, the switch will flood
the frame out all ports to all connected devices. In this case, the fourth device will still see the frame
even though it is not a member of the group. However, because it is not looking for this MAC address,
it will not actually process the frame and receive the multicast data stream.
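The mapping from a multicast group address to that 01-00-5E MAC address is mechanical: the low-order 23 bits of the IP address are copied into the low 23 bits of the MAC. Here is a small Python sketch of that mapping, using the group address from this example.

import ipaddress

def multicast_mac(group_ip):
    """Map an IPv4 multicast address to its Ethernet MAC (01-00-5E prefix + low 23 bits)."""
    ip = int(ipaddress.ip_address(group_ip))
    low23 = ip & 0x7FFFFF                     # keep only the low-order 23 bits
    mac_int = 0x01005E000000 | low23          # prepend the fixed 01-00-5E prefix
    raw = mac_int.to_bytes(6, "big")
    return "-".join(f"{b:02X}" for b in raw)

print(multicast_mac("239.10.11.155"))   # 01-00-5E-0A-0B-9B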
IGMP Snooping

If you want to keep these frames from being forwarded to hosts that are not a member of the multicast
group, you need to purchase a switch that is capable of IGMP snooping. With IGMP snooping, when
each device responds that it is a member of a multicast group, the switch examines those frames to
identify which groups each host is a member of. Using this feature, the switch is able to create a list of
which hosts are members of each group. When a multicast frame arrives at the switch, it can forward
the frame to just the individual group members. It will not forward the frame to devices that are not
members of the group.
The Internet Group Management Protocol makes network communication more efficient because it
allows a single data stream to be forwarded to multiple hosts. It also simplifies management because it
identifies network segments that have group members, preventing unnecessary data from being
transmitted on subnets without group members.
Summary

That's it for this lesson. In this lesson, we reviewed how unicast, broadcast, and multicast messages are
sent over a network. We spent time discussing how the IGMP protocol is used to manage multicast
messages. We ended this lesson by discussing how IGMP snooping on a network switch can optimize
multicast traffic.
5.7.2 Multicast Facts

Multicasting creates logical groups of hosts—messages sent to the group are received by all group
members. Multicasting is typically used for streaming video and audio applications, such as video
conferencing.
Without multicasting, a message intended for a group of hosts can only be delivered using one of the following methods:
Unicasting: Messages are sent to a specific host address. The sending device must know the IP address of all recipients, and must create a separate packet for each destination device.
Broadcasting: A single packet is sent to the broadcast address and is processed by all hosts. All hosts, and not just group members, receive the packet. Broadcast packets are not typically forwarded by routers, so broadcast traffic is limited to within a single subnet.
The Internet Group Management Protocol (IGMP) is used to identify group members and to forward
multicast packets on to the segments where group members reside. IGMP routers keep track of the
attached subnets that have group members, using the following process:
1. A router sends out a host membership query. This query is addressed to the IP address 224.0.0.1.
 The address 224.0.0.1 is never assigned to a group because it is used for the query
messages sent by routers.
2. Hosts that are members of any groups respond with a list of the groups they belong to. Each
group is identified with a multicast IP address in the range of 224.0.0.0 to 239.255.255.255.
3. The router uses these responses to compile a list of the groups on the subnet that have group
members. Routers do not keep track of individual hosts that are members of a group; they
simply compile a list of groups on the subnet that have at least one member.
4. When a host joins a new group, it automatically sends a join group message to the router. When the last host in a group leaves the group, it sends a leave group message to the router. Hosts can join or leave groups at any time.
5. The IGMP router reports to upstream routers that it has members of a specific group.
 Upstream routers are the routers that exist between the router and the server that sends
out the multicast data stream. They keep track of downstream routers that have group
members.
The following process is used when sending a multicast stream:
1. The sending server sends packets addressed to the multicast group.
2. Routers receive the multicast packets and check their lists of group members.
 If the router is connected to a subnet that has group members, or if the subnet includes a
downstream router with group members, the multicast packet is sent on that subnet.
 If a subnet does not have any group members, the packet is not forwarded on that subnet.
 If a router does not have any subnets with group members, the packet is dropped and not
forwarded.
3. Each intermediary router performs the same tasks until the data stream eventually reaches the
multicast client.
Additional multicasting facts include:
 Frames that contain multicast traffic are sent to a special MAC address. The MAC address begins with 01-00-5E, with the last portion being a form of the IP multicast group address. Because only 23 bits of the group address are carried in the MAC address, a single multicast MAC address can be shared by up to 32 different IP multicast addresses.
 A regular switch that receives multicast traffic sends the traffic out all ports, because the
destination MAC address will be an unknown address. This means that a host might see
multicast traffic on its segment, even if it isn't a member of the group. However, hosts that are
not members of the group will not process the frame because they will not associate the
multicast MAC address with their own address.
 IGMP snooping on a switch allows the switch to control which ports get IGMP traffic for a
specific group. With IGMP snooping, the switch identifies which ports include members of a
specific multicast group. When a message is received for a group, the message is sent only to
the ports that have a group member connected.
5.8 Troubleshooting IP Configuration Issues

As you study this section, answer the following questions:


 What does the /release switch do when used with ipconfig?
 How can you tell if a rogue DHCP server is active on your network?
 How do you know if a host is using APIPA?
After finishing this section, you should be able to complete the following tasks:
 Find information about IP configuration settings on Windows and Linux systems.
 Troubleshoot IP configuration problems.
This section covers the following Network Pro exam objective:
 Domain 4.0 IP Configuration
 Given a scenario where Windows systems cannot connect to the network or the Internet,
troubleshoot and resolve IP configuration and communication issues.
5.8.1 IP Configuration Troubleshooting

Let's talk about troubleshooting protocol problems. In order for two network hosts to communicate,
they have to be using the same network protocol.
Therefore, if you have an IP-based network with IP-based routers and servers and other IP-based hosts,
and you go ahead and install a workstation on that network and you configure it with a different
protocol, then it's not going to be able to communicate with any other host. Everything has to be using
the same network protocol. In addition, if you do decide to use an IP protocol on your network, then
your IP network configuration settings have to be made correctly. For example, all of the hosts on the
same logical network have to be using the same network address. The network portion of the IP address
on every single host must be the same in order for them to communicate.
Subnet Issues Example 1

On this network we have four different hosts: a server and three workstations. The server is assigned an IP address of 192.168.1.1. Two of the workstations are assigned 192.168.1.2 and 192.168.1.3, and this other host over here is assigned an IP address of 192.168.2.4. To determine which hosts are going to be able
to communicate with each other, you need to determine the correct subnet address. If the network uses
classful addressing, we can look at the first octet and determine what the default subnet mask is. In this
case the server's IP address starts with 192. Because this network uses classful addressing we know that
this is a class C address.
The first three octets are the network portion of the address, while the last octet is the host portion of
the address. These three hosts all have the same network address, 192.168.1.0; therefore, they'll be
able to communicate with each other. This host over here, however, will not because it uses a 2 in the
third octet. Therefore, this host is using a completely different network address of 192.168.2.0.
Therefore, it's not going to be able to communicate with the other hosts on this network.
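The same comparison can be done programmatically. This Python sketch uses the addresses from this example with the default class C mask and reports which network each host lands on; it is just an illustration of the check, not a network tool.

import ipaddress

hosts = ["192.168.1.1", "192.168.1.2", "192.168.1.3", "192.168.2.4"]
mask = "255.255.255.0"   # default (classful) mask for a class C address

for host in hosts:
    network = ipaddress.ip_interface(f"{host}/{mask}").network
    print(f"{host:<14} -> network {network}")
# 192.168.1.1 through .3 all land on 192.168.1.0/24; 192.168.2.4 lands on
# 192.168.2.0/24, so it cannot talk to the others without a router.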
Subnet Issues Example 2

Let's take a look at another example. Here our server is assigned an address of 10.0.0.1 and this host is
assigned an address of 10.0.0.2. This other host is assigned an address of 10.0.0.3 and this host over
here is assigned an address of 10.200.1.4. In order to determine which of these hosts will be able to communicate and which won't, we need to look at the IP address and determine what class it is. Again, we're going to assume that this network uses classful addressing. This address starts with 10, and if the network uses classful addressing, then this is a class A address.
Therefore, only the first octet is actually used for the network portion of the address. The subnet
address in this case is 10.0.0.0. These three octets are the host portion of the IP address. In this case, the
addresses are actually configured such that all these hosts will be able to communicate. This is very
important, because if you don't pay attention to the address class, your first inclination may be to
assume that this host has an invalid address because the second and third octets are different. In this
case that's not correct.
In order to ensure that they're on the same network, you need to make sure that every host in the
network uses the same subnet mask, regardless of whether you're using classless or classful IP
addressing. If your IP addresses are configured correctly, then the network portion of each IP address
on each host will be the same if you use the correct mask, but if you use the wrong mask then the hosts
are not going to be able to communicate, even though the addresses may appear to be configured
correctly. Remember, the subnet mask defines which portion of the address is network and which
portion is node.
Subnet Issues Example 3

In the previous two examples we looked at, we assumed that all the hosts were configured with the
default classful subnet mask and that those masks were configured correctly. However, if you use an
incorrect prefix or if you don't calculate your subnet boundaries correctly, then again, communications
aren't going to work. In this example suppose we're going to use a 22-bit prefix on this network, which
is equivalent to using a subnet mask of 255.255.252.0. In this configuration, these three hosts will be
able to communicate with each other. By using a mask of 255.255.252.0, we can actually define up to
64 total subnets within this address space. Remember, 172.17 is a class B address by default.
However, what we've done is steal an additional 6 bits from the host portion of the address to create
these additional subnets. Therefore, you've got to know where the subnet boundaries reside. Because all
of these addresses look like valid class B addresses on the surface, it's actually quite easy to miss the
fact that one of these hosts is actually on a different subnet than the others. In this example our subnet
boundaries are 172.17.0.0, 172.17.4.0, 172.17.8.0, and so on. In order to determine your subnet
boundaries, you can use an online subnet calculator. When you do, we'll see that this host here will not
be able to communicate with the other hosts on the network because it's on a completely different
logical network. Its subnet address is 172.17.4.0 instead of 172.17.0.0.
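You do not strictly need an online calculator; the boundaries fall out of the prefix length. A short Python sketch with the /22 prefix from this example shows where the boundaries sit and why a host in 172.17.4.x lands on a different subnet than hosts in 172.17.0.x through 172.17.3.x. The individual host addresses used here are illustrative assumptions.

import ipaddress

# A /22 (mask 255.255.252.0) groups four third-octet values per subnet.
for host in ["172.17.1.10", "172.17.3.200", "172.17.4.15"]:
    net = ipaddress.ip_interface(f"{host}/22").network
    print(f"{host:<13} -> subnet {net}")
# 172.17.1.10 and 172.17.3.200 are both in 172.17.0.0/22,
# while 172.17.4.15 falls in the next block, 172.17.4.0/22.

# First few subnet boundaries carved out of 172.17.0.0/16 with a /22 prefix:
boundaries = list(ipaddress.ip_network("172.17.0.0/16").subnets(new_prefix=22))[:3]
print([str(n) for n in boundaries])   # ['172.17.0.0/22', '172.17.4.0/22', '172.17.8.0/22']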
Incorrect Default Gateway Address

If your host needs to communicate with other IP hosts outside of your local network segment, then
there's another parameter that you have to configure correctly, and that is the default gateway router
address. Understand that when you send information from one IP host to another IP host on the
network, the IP protocol on your system will first check to see if the destination computer resides on
the same local subnet or whether it resides on a completely different subnet. It does this by looking at
the source and destination IP addresses, along with the subnet masks.
If the destination host resides on the same network as the sending host, then the IP protocol will use
ARP to determine the appropriate MAC address and directly deliver the data. However, if it determines
that the destination host resides on a completely different IP network, then the IP protocol will
immediately forward that data to the default gateway router address, and then let the router deliver the
information to the appropriate destination host. This can be a problem if the wrong gateway router
address has been configured. If this happens, then you're going to notice a couple of common
symptoms.
First of all, the host will be able to communicate correctly with the other hosts on the local network
segment, but the host will not be able to communicate with hosts on another network segment,
including the internet. A key mistake that's commonly made, and something that you need to remember,
is that the address that's assigned to the default gateway router has to reside on the same logical subnet
as the host that's using it. You need to check the default gateway address and verify that it's configured with an IP address and subnet mask that's appropriate for your local subnet, and make sure that it is the correct IP address.
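A quick sanity check for this rule (the default gateway must sit inside the host's own subnet) can be written in a few lines of Python. The addresses used here are illustrative assumptions.

import ipaddress

def gateway_is_valid(host_ip, mask, gateway):
    """True if the default gateway lies inside the host's local subnet."""
    local_net = ipaddress.ip_interface(f"{host_ip}/{mask}").network
    return ipaddress.ip_address(gateway) in local_net

print(gateway_is_valid("192.168.20.31", "255.255.255.0", "192.168.20.1"))   # True
print(gateway_is_valid("192.168.20.31", "255.255.255.0", "192.168.10.1"))   # False - misconfigured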
DNS Name Resolution Issues



Another common problem revolves around DNS name resolution. You need to make sure that the
correct DNS server address has been configured for name resolution. For example, if you find that you
can ping a remote host by its IP address but you can't ping that host by its DNS host name, then
something's wrong with your name resolution system. It could be caused by several different things.
One thing you should check first is whether or not the host has been configured with the correct DNS
server IP address.
DHCP Issues

In addition to DNS issues, you also need to be concerned with DHCP issues on an IP network. DHCP
problems can cause serious protocol issues if the DHCP server is misconfigured. The DHCP server,
remember, hands out IP addresses, subnet masks, the default gateway address, as well as DNS server
addresses to the hosts on your network. When the host comes up on the network, it gets this IP
addressing information from the DHCP server. Problems can occur if that host cannot contact the
DHCP server when it comes online. Modern operating systems, including Windows, Linux, and Mac OS, can let a mechanism called APIPA (Automatic Private IP Addressing) take over if an address can't be obtained from a DHCP server.
Let's suppose we have a DHCP service running in this network and it's built into our network switch.
We've configured our workstations here to use DHCP to get their IP addressing information. In most
networks our servers, our routers, and our other network infrastructure devices are usually assigned a
static IP address. They don't use DHCP. In this situation, we can potentially have a problem if the
DHCP server stops working. Suppose the firmware inside our switch here experiences a fault and the
DHCP server goes down for some reason.
When this happens, hosts can no longer get an IP address when they're powered on. At this point,
APIPA takes over and the DHCP clients will get an address starting with 169.254. When this happens,
all of the hosts on our network that received an APIPA address can still communicate with each other,
but they cannot communicate with our infrastructure devices. For example, they can't communicate with our file server, nor can they communicate with our default gateway router, because those devices are assigned static IP addresses. Because our workstations are using APIPA-assigned addresses and our
infrastructure devices are using statically assigned addresses, they're now running on two different
logical subnets.
With APIPA, the goal is to automatically assign IP addresses to network hosts such that if something
happens to the DHCP server, they still get an IP address. Theoretically, this should allow them to
continue to operate and communicate. However, in this scenario these three hosts will be able to
communicate using their APIPA-assigned IP addresses, but they'll not be able to communicate with any
of our critical infrastructure devices that use statically-assigned IP addresses. The problem here is that
being able to communicate between workstations really isn't all that useful.
Users on these workstations need to be able to access their files. They need to be able to send print jobs
to printers. They need to access their email and they need to access other services that are hosted on our
servers. In addition, access to the internet requires the ability to communicate with this network's
default gateway router. Therefore, when APIPA takes over, these workstations lose communication with all of these infrastructure devices, and there is not a whole lot they can do.
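A quick way to recognize this situation is to check whether a host's address falls inside the APIPA range, 169.254.0.0/16. A minimal Python check, with sample addresses that are only assumptions for illustration:

import ipaddress

APIPA = ipaddress.ip_network("169.254.0.0/16")

def is_apipa(address):
    """True if the address was self-assigned because no DHCP server answered."""
    return ipaddress.ip_address(address) in APIPA

for addr in ["169.254.113.7", "192.168.20.31"]:
    print(addr, "-> APIPA" if is_apipa(addr) else "-> normally assigned")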
Rogue DHCP Server Issues

Another DHCP problem that happens occasionally is that of a rogue DHCP server. This can occur in
several different situations. One of the most common examples is one where somebody installs a server
operating system. Perhaps they're doing some testing. They've installed a server operating system in,
say, a virtual machine. Without realizing it, they may accidentally enable the DHCP service on that
server. I've even seen situations where a rogue DHCP server can be accidentally enabled by an end user
who's configured their mobile phone to function as a hotspot.
This can disrupt network communications because some of our network hosts are going to get their IP
address from the rogue DHCP server, while our other hosts get an address from the legitimate DHCP
server. Because the rogue DHCP server wasn't configured properly, it's very likely that it's handing out
incorrect IP addresses. When this happens, some workstations can communicate with the server or the
router and others can't because they're getting the wrong IP addressing information.
Summary
That's it for this lesson. In this lesson we talked about troubleshooting the IP configuration. We looked
at using the correct IP addresses. We looked at using the correct subnet mask. We talked about the
importance of configuring the correct default gateway router address. We talked about configuring the
correct DNS server address, and then we ended this lesson by talking about how to resolve common
DHCP problems.
5.8.2 Using ipconfig

As part of troubleshooting, I might need to verify the IP configuration of a computer. For instance, this computer has a single network interface card. I can go ahead and click on the Connections link, click 'Properties,' and see the properties set for IPv4; here it says to obtain an IP address automatically.
Viewing IP Information from a DHCP Server

It doesn't really give me much information there.


I can, however, close those property dialogs and click the Details link, and here I have the IP
information that was given to me by a DHCP server. Lines of interest are DHCP Enabled (Yes),
my IP address and subnet mask, as well as my default gateway, the DHCP server address, and the
two DNS server addresses that I have configured.
ipconfig

This dialog box isn't the only place that I can get that information.
I can go ahead and open up a command prompt, and use the utility ipconfig. Ipconfig displays a little
bit of information. It gives me my IP address, my subnet mask and my default gateway.
ipconfig /all

If I want more information I'll use the same command, ipconfig /all. We'll scroll up here and here's the
information we want to look for. We have the line DHCP Enabled, and it says yes, as well as
Autoconfiguration Enabled, our IPv4 Address information, our Subnet Mask, our Default Gateway and
our DHCP Server. We have the DNS Servers.
Lines to Focus on for Troubleshooting

The lines that we want to focus on for troubleshooting are the DHCP Server line, which is
found directly underneath the Default Gateway line, as well as these two lines: DHCP Enabled and
Autoconfiguration Enabled.
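For reference, the relevant portion of the ipconfig /all output generally looks something like the following. The addresses shown here are illustrative placeholders, not the values from this demonstration:

   DHCP Enabled. . . . . . . . . . . : Yes
   Autoconfiguration Enabled . . . . : Yes
   IPv4 Address. . . . . . . . . . . : 192.168.20.50
   Subnet Mask . . . . . . . . . . . : 255.255.255.0
   Default Gateway . . . . . . . . . : 192.168.20.1
   DHCP Server . . . . . . . . . . . : 192.168.20.1
   DNS Servers . . . . . . . . . . . : 192.168.1.20

If the DHCP Server line is missing even though DHCP Enabled shows Yes, that is your first hint that the client never heard back from a DHCP server.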
Configure Static IP Information

Let's go ahead and go back to the Properties of this network interface card and make some changes. Go
ahead and open up the properties, and in this case we'll give it some static IP information. We'll go
192.168.20.75, we'll leave the default subnet mask, go ahead and set up a default gateway of
192.168.20.1 and we'll provide a static DNS server address, 192.168.1.20. Go ahead and click OK,
Close. We'll go back to the command prompt; we'll issue the ipconfig /all command and scroll up. Let's
look at the differences. It says DHCP Enabled and no, whereas before it was yes. We're no longer
asking for information from the DHCP server. We have configured it statically, and you'll see that here.
IPv4 address is now 192.168.20.75, with our default subnet mask, and the default gateway that we
statically configured. You'll notice that underneath our Default Gateway line there is no DHCP Server
IP address line. That tells us the workstation is not looking for a DHCP server in this case, so it hasn't found
one. Here is our static DNS server IP address. Let's go back to the properties and re-enable DHCP and
DNS, go ahead and click OK.
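If you prefer the command line over the Properties dialog, roughly the same static configuration could be applied with netsh. This is a minimal sketch; the interface name "Ethernet" is an assumption and needs to be replaced with the actual connection name shown by ipconfig on your system:

   netsh interface ip set address name="Ethernet" static 192.168.20.75 255.255.255.0 192.168.20.1
   netsh interface ip set dns name="Ethernet" static 192.168.1.20

   (and to switch back to DHCP afterward)
   netsh interface ip set address name="Ethernet" dhcp
   netsh interface ip set dns name="Ethernet" dhcp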
Deactivate the DHCP Server

This time we're going to go ahead and deactivate the DHCP server on the network. We're going to go
ahead and switch over to the server, and we're going to deactivate the DHCP server on the network. It
says my scope is active and disabling the scope will prevent clients from obtaining IP addresses. Are
we sure we want to disable the scope? We'll click Yes. We currently see that we don't have any address
leases on the DHCP server. We'll return back to our client and we'll click Close, we'll click Close again.
ipconfig /release

This time we want to use the ipconfig /release utility. Once it completes, we've released the IP address; we no
longer have the address that was given to us by the DHCP server (or the static configuration that we made
earlier), and for the moment the interface has been given a different address.
ipconfig /renew

We'll go ahead and use the command of ipconfig /renew. What that is doing is it's sending traffic out to
the DHCP server requesting that IP address, and it might take a little while because it doesn't know that
we shut down the DHCP service. We'll go ahead and wait for it to finish looking for that DHCP server.
I have the message here that states it was unable to contact my DHCP server.
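To recap the sequence used on this client, the commands look like this. The adapter name "Ethernet" is an assumption; leaving it off releases and renews every DHCP-enabled adapter:

   ipconfig /release "Ethernet"
   ipconfig /renew "Ethernet"
   ipconfig /all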
Example of APIPA Providing an IP address

We already knew that because we disabled the DHCP service.


We have a computer user that gives us a call and says I cannot connect to the Internet. You show up to
his workstation and you type the ipconfig /all command. We'll scroll up and start to diagnose the
problem. We want to notice that yes, DHCP is enabled, so it's wanting to talk to the DHCP server. In
this case, I'm missing the line underneath default gateway that says DHCP Server and then lists the IP
address. Instead, I have an IP address that has been given to me through autoconfiguration, which is
known as Automatic Private IP Addressing, APIPA. I'll know that because all addresses assigned
through APIPA are on the 169.254.0.0 network, with a subnet mask of 255.255.0.0. When you see
that DHCP is enabled yet you have an address within this range, you'll know that APIPA has assigned
this IP address. This also means that if you have other computers on the network that couldn't reach the
DHCP server, they will also be given IP addresses within this range, so those computers will still be
able to communicate with each other.
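In the ipconfig /all output, an APIPA assignment typically looks something like this (the last two octets are illustrative; APIPA picks them at random):

   DHCP Enabled. . . . . . . . . . . : Yes
   Autoconfiguration Enabled . . . . : Yes
   Autoconfiguration IPv4 Address. . : 169.254.37.188
   Subnet Mask . . . . . . . . . . . : 255.255.0.0
   Default Gateway . . . . . . . . . :

Note that there is no DHCP Server line and the Default Gateway line is blank.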
Configuring the Alternate Configuration

One last thing we want to look at here, we'll go ahead and go back to our connection and we'll click
Properties, we'll select the Properties for IPv4. This is the Alternate Configuration tab. You saw earlier
that when it was unable to talk to the DHCP server it used Automatic private IP addressing to give us
the 169.254 address. In this case we actually get to select the IP address that we want this computer to
receive when it cannot talk to the DHCP server. We'll go ahead and I've given it an IP address, a subnet
mask, a default gateway, as well as a DNS server.
Testing the Results

Let's go ahead and test this out. I'll close that down and we'll go ahead and go back and look at the
ipconfig information. We'll type in ipconfig /all, scroll up to our information here, and you can see that
we do have DHCP enabled. We also have the IP address and subnet mask that we configured manually
for the case where this machine isn't able to contact the DHCP server. It also has the default gateway and our DNS
servers. Notice however, there is no line for the DHCP server IP address which typically falls right
underneath our default gateway line.
Summary

When troubleshooting a client connection you'll want to view the information about the configuration
by using the ipconfig /all command right here.
To summarize, we'll pay attention to DHCP Enabled, as well as if we have a DHCP Server IP address.
That information will be of great help as you're troubleshooting the connection for a computer on a
network.
5.8.3 Using ifconfig

In this demonstration, we're going to work with the ifconfig command to manage IP addressing on a
Linux system. Basically, it works in much the same way as ipconfig on a Windows workstation. Now,
notice right here that I'm currently logged in as my RTracy user into the system and RTracy is a
standard user, not a super user.
ifconfig Command

I could run the ifconfig command at the command prompt and I would be able to view IP configuration
information but I could not modify IP configuration information. If you want to use ifconfig to modify
configuration information, then you do have to switch to your root user account first. I'll use the 'su'
command, switch to root. Now, I can use ifconfig to do whatever I want it to do.
After typing ifconfig we see that two interfaces are displayed.
ENS32

First of all, we have the ENS32 interface and we have the LO interface. This interface up here is your
wired or wireless network interface. This interface down here is your loop-back adapter. It's the local
host. Even if your Linux system does not have an Ethernet interface installed in it at all, it will always
have at least one loop-back interface defined, because that interface is used by various services running
on the system to send information back and forth to each other.
Also, be aware up here with your actual interface, your wired or wireless interface, that the name of the
interface may or may not be ENS32. On an older Linux system it might be named eth0.
eth0

On a newer Linux system that uses a wireless interface, it would be WLAN something. The interface
name is defined based upon what type of interface it is, if it's a wireless or wired, or if that interface is
integrated into the motherboard of the system, or if that interface is installed in a slot, and if so, which
PCI slot is it installed in and so on.
If we look at the output of the ifconfig command, we can see a lot of very important information.
HWaddr

For example, the HWaddr field right here specifies the MAC address of the network interface.
inet addr

The inet addr field right here, as you might guess, specifies the IP address that's been assigned to that interface.
Bcast

Bcast specifies the broadcast address for the subnet that we're on.
Mask

Mask specifies the subnet mask that we're using on the subnet that this host is on. If IPv6 is
running, then an IPv6 address is listed here under inet6 addr.
RX Packets

RX packets tells us how many packets have been received on this interface.
Statistics

There are some important statistics over here, such as errors, drops, and overruns. You can keep an eye on
these parameters to see if any errors are happening. We also have the transmit (TX) packets. These are the
number of packets the system has transmitted. We have the number of collisions listed right here. This
is a switched network, so we should never see any collisions happening. You can see that it's currently
zero, which is what we want. Down here we can see how many bytes of data we have received on this
interface and how many bytes we have transmitted on this interface.
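For reference, the output for a wired interface generally looks something like the following. The addresses and counters here are illustrative, and the exact layout varies between ifconfig versions:

   ens32  Link encap:Ethernet  HWaddr 00:0c:29:3a:21:5b
          inet addr:10.0.0.139  Bcast:10.0.0.255  Mask:255.255.255.0
          inet6 addr: fe80::20c:29ff:fe3a:215b/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:1534 errors:0 dropped:0 overruns:0 frame:0
          TX packets:982 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:183456 (183.4 KB)  TX bytes:97412 (97.4 KB)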
ifconfig Features
As you can see, with ifconfig you can pretty much view almost the same information that you can view
with ipconfig on a Windows system. One thing you can do with ifconfig on Linux that you cannot do
with ipconfig on Windows is actually change IP addresses. For example, let's suppose we want to
change the IP address from 10.0.0.139 to 10.0.0.140. To do this I type 'ifconfig' at the command
prompt. Then I have to specify which interface in the system I want to modify. In this case it's going to
be ENS32. Then I specify the IP address that I want to assign to it, '10.0.0.140.' Of course I have to
specify the subnet mask that should be used with that address. I type 'netmask', then the mask that will
be used, in this case '255.255.255.0'. Then we specify the broadcast address, so we type 'broadcast',
and then the broadcast address, which we can just look up here and get: '10.0.0.255'. Hit 'Enter'.
Now if we do 'ifconfig' again we see that the IP address has been changed from 10.0.0.139 to
10.0.0.140.
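Putting that together, the complete command used in this demonstration is:

   ifconfig ens32 10.0.0.140 netmask 255.255.255.0 broadcast 10.0.0.255

Keep in mind that a change made this way is not persistent; it lasts only until the interface or system is restarted, so a permanent change still has to be made in the distribution's network configuration files.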
Summary

That's it for this demonstration. In this demo we looked at using the ifconfig command on a Linux
system to view IP configuration information. We first looked at ifconfig to view IP configuration
information, and then we ended this demonstration by using ifconfig to change IP configuration
information.
5.8.4 Ipconfig Utility Facts

You can use ipconfig /all to troubleshoot IP configuration problems. The following table describes how
the output for this command changes, based on how IP settings are configured and for specific problem
situations:
Static IP Configuration
If the workstation is configured with static IP information, the following conditions will exist:
 • The DHCP Enabled line will show No.
 • The DHCP Server line will not be shown.

DHCP Configuration
If the workstation has received configuration information from a DHCP server, the following conditions will exist:
 • The DHCP Enabled line will show Yes.
 • The DHCP Server line will show the IP address of the DHCP server that sent the configuration information.

Rogue DHCP Server
A rogue DHCP server is an unauthorized DHCP server on the network. Symptoms of a rogue DHCP server include:
 • Conflicting IP addresses on the network
 • Incorrect IP configuration information on some hosts
To identify a rogue DHCP server using ipconfig, verify the DHCP server address. If this address is not the address of your DHCP server, you have a rogue DHCP server. When you have a rogue DHCP server on the network, some hosts will likely receive configuration information from the correct DHCP server and others from the rogue DHCP server.

Incorrectly Configured DHCP Server
Your DHCP server can send out various IP configuration values, like the IP address and mask. If network hosts are configured with incorrect IP values (such as incorrect default gateway or DNS server addresses), first verify that the workstations are contacting the correct DHCP server. If the correct server is being used, go to the DHCP server to verify that it is sending out correct configuration information.

APIPA Configuration
If the workstation used APIPA to set configuration information, the following conditions will exist:
 • The DHCP Enabled line will show Yes.
 • The DHCP Server line will not be shown.
 • The IP address will be in the range of 169.254.0.1 to 169.254.255.254, with a mask of 255.255.0.0.
 • The Default Gateway line will be blank.
 • The DNS Servers line will not include any IPv4 addresses.
When APIPA is used, the workstation sets its own IP address and mask. It does not automatically configure default gateway or DNS server values. When APIPA is being used:
 • Communication is restricted to hosts within the same subnet (there is no default gateway set).
 • Hosts can communicate with other hosts that have used APIPA. If some hosts are still using an address assigned by the DHCP server (even if the DHCP server is down), those hosts will not be able to communicate with the APIPA hosts.
 • Name resolution will not be performed (there are no DNS server addresses configured).

Alternate Configuration
If the workstation has been configured using an alternate configuration, the following conditions will exist:
 • The DHCP Enabled line will show Yes.
 • The DHCP Server line will not be shown.
 • The IP address and subnet mask will be values other than the APIPA values.
 • Default gateway and DNS server addresses will be configured using the alternate configuration values.
If the workstation has received configuration information from the wrong DHCP server or has
configured itself using APIPA, you may need to contact the DHCP server again once the DHCP
problems have been resolved. Use the following commands:
 ipconfig /release to stop using the current dynamic IP configuration parameters.
 ipconfig /renew to retry the DHCP server request process to obtain IP configuration parameters.
To display the TCP/IP configuration on a Linux computer, use the ifconfig command.

5.9 Troubleshooting IP Communications

As you study this section, answer the following questions:


 What is the difference between the netstat and nbtstat commands?
 If a ping test fails, what should you do?
After finishing this section, you should be able to complete the following tasks:
 Find information about IP configuration settings on Windows and Linux systems.
 Troubleshoot IP configuration problems.
This section covers the following Network Pro exam objective:
 Domain 4.0 IP Configuration
 Given a scenario where Windows systems cannot connect to the network or the Internet,
troubleshoot and resolve IP configuration and communication issues.

5.9.1 Network Communication Troubleshooting

In this lesson we're going to spend some time talking about network troubleshooting. Network
troubleshooting often starts with a help desk call from a user. Be aware that most of your end users are
not going to have enough networking or computer knowledge to describe specific symptoms to you.
Identify the Scope of the Problem

Therefore, one of the first steps in the troubleshooting process that you're going to have to take is to
first identify the scope of the problem. Is the problem being experienced across the entire network, or is
it isolated just to a single workstation?
Internet Host Unreachable Example 1

In this example, we're going to look at a situation where we have a single user who is connected
through a router to the internet. This user is trying to access a website somewhere out on the internet.
When they enter the URL of the website in their browser window, they get a message basically telling
them the website can't be accessed. In order to troubleshoot this problem, we first need to determine the
scope of the problem. One of the things we can do to accomplish this is to first try and reproduce the
problem ourselves. In this example, we can open up a browser on our workstation and try to do exactly
what the user is doing.
Alternatively, you could go to the user's desk and use their workstation to try to replicate the problem
as well. In this example, let's suppose that we tried to access the same web server out on the internet
and we don't get a response either. Therefore, one of the first things we might try to troubleshoot the
problem at this point is to ping that web server out on the internet by its IP address. In order to test IPv4
communications, we use the ping command. If, on the other hand, we need to test IPv6
communications, we can use the ping command again with a -6 switch added, or on Linux you
can use the ping6 command itself.
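For example, the tests might look like the following. The address and hostname are illustrative placeholders (they use documentation-reserved examples rather than a real production server):

   ping 203.0.113.10
   ping www.example.com
   ping -6 www.example.com

Pinging by IP address first and then by name is what lets you separate a connectivity failure from a name resolution failure, as described next.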
If you can ping the server by its IP address and it responds, but then you try to ping it using its DNS
name and it does not respond, this usually indicates that we have a name resolution problem. Most
likely we have a misconfigured DNS server address or the DNS server itself. Either way, name
resolution is not working for some reason. If, on the other hand, you ping this website by its IP address
and it doesn't respond, then we have a different set of problems that could be caused by many different
things. First of all, that server itself may be down, or maybe a router somewhere between our
workstation and the server is not working properly.
It's also possible that everything is working just fine but there's a firewall protecting the web server
that's been configured to not respond to ICMP traffic, which would filter out all of our ping requests.
It's also possible that maybe the IP protocol itself is misconfigured on the workstation. In order to help
narrow down the scope of the problem, the next thing you might try is accessing a different website on
the internet. You could also try pinging other hosts on the internet. If you find that some hosts respond
but others do not out on the internet, then the problem likely exists outside of your network. That's a
problem on the internet itself.
If, on the other hand, you can communicate with the destination host from the source host, then you
know that networking communications are working all the way from your network, through the
internet, into the destination host. If this is the case, there's really not much more you can do but tell the
end user that they're going to have to wait until the web server comes back online.
Internet Host Unreachable Example 2

On the other hand, let's suppose that you find that you can't access any hosts on the internet at all.
Everything you try doesn't work. This indicates that the problem resides elsewhere. This could indicate,
for example, that something is wrong with your organization's network, or maybe something's wrong
with your organization's connection to the internet. Be aware that most organizations connect to the
internet through a single router, and this router connects to another router at your ISP's location. The
first troubleshooting step you can take in this scenario is to first try and ping your workstation's default
gateway.
If you get a response back, it indicates that the router is up and the workstation can reach it. If this
connection works, then you can try pinging the interface on the other side of the router that's connected
to your ISP. If you can ping this address and get a valid response back, then you know that your router
itself works as well, and you also know that routing is being performed, because the packet being sent
to the destination network can make it back through the router to your workstation. To test
communications further, we can try pinging the IP address of the ISP's router on the other side of our router.
If you can get through your network to the ISP's router, then the problem is most likely with the ISP's
network, and at that point there's not much you can do. You'll need to call the ISP and report the
problem, because it exists outside of your administrative domain. In many cases, they're probably
already aware of the problem, and they're probably already working on it. In fact, for this reason, an
organization that's completely dependent upon the internet for their business may even set up a
redundant ISP; that way they can keep things running if their primary ISP goes down and has a
problem such as we've talked about.
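A minimal sketch of that escalation, using illustrative addresses (your own gateway, router, and ISP addresses will differ), might look like this:

   ping 192.168.1.1        (the near side of your default gateway router)
   ping 203.0.113.1        (the far side of your router, facing the ISP)
   ping 203.0.113.2        (the ISP's router at the other end of the link)

Working outward one hop at a time like this tells you exactly where the replies stop.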
Internet Host Unreachable Example 3

At this point, let's step back a little bit and consider what we would do if one of these ping tests fails.
For instance, let's say that we can ping the near side of our default gateway router, but a ping to the
other side of the router that's connected to the ISP fails. In this case, you know that messages are not
making it from your router to the ISP. The problem could be one of many. It could be a misconfigured
router, perhaps it's a break in the line that connects you to the ISP, and so on. In this case you may need
to call the ISP and ask them to run tests to verify that the link itself is working.
Remember, the network wiring hits a certain point called the Demarc at your location. Everything on
your side of the Demarc is your responsibility; everything on the other side is the ISP's responsibility.
The ISP can run tests to verify that the line from their location to the Demarc point is working. If it is,
then you know you've got to troubleshoot the problem on your side of the Demarc. In essence, the
problem exists on our network. If the ISP's line test fails on the other hand, then the ISP has to go
troubleshoot the line from their end.
Internet Host Unreachable Example 4

Finally, let's look at one more scenario. Let's suppose that when you try to ping your default gateway
router, that you find that you can't communicate with it at all, it doesn't respond to your ping request. In
most cases, your host workstation will be connected through a switch to the other devices on the same
network. One thing you can do is try to ping another workstation on the same subnet that's connected to
the same switch. If you can ping one of these other workstations but not the default gateway router,
then the problem may be with the link between the router and the switch, or maybe the router itself is
misconfigured, or maybe it's down.
However, if you cannot ping any other host on the network, then you probably need to focus your
troubleshooting on the link between the workstation and the switch, or maybe even the IP configuration
of the workstation itself.
Summary

That's it for this lesson. In this lesson we talked about how you can use the ping command to test
communications of the various devices inside and outside of your network to narrow the scope of a
network problem. We discussed how you can use this information to identify whether the problem
exists with a specific host on the internet, within your ISP's network, with your connection to the
internet, or maybe within your own local network itself.
5.9.2 Using Ping and Tracert

In this demonstration, we're going to work with 2 very useful network troubleshooting utilities.
Example Network Structure and Information

First, we're going to look at ping and then we're going to look at traceroute. We're going to practice
using those commands with the network that you see here. This is actually the workstation that you're
looking at right now. This is WS1. We're going to practice using these utilities to work with other hosts
on the same subnet, as well as hosts on different subnets within our same organization and hosts
out here on the Internet. As you can see, there are 2 routers between my workstation here and the
Internet. One router here and one router here. Any traffic going out to the Internet must go through
those 2 routers. The first thing we want to do is to verify our IP configuration on this workstation. We'll
use the 'ipconfig' command to do that. We use the '/all' parameter so we can see all the available
information.
When we do, we see that the IP address assigned to this workstation is 10.0.0.117 and its subnet mask
is 255.255.255.0. That means we are on the 10.0.0.0 subnet. The default gateway is 10.0.0.254. That's
also functioning as our DHCP server. We also have a DNS server with an IP address of 10.0.0.60 as
well as a backup name server out on the Internet with an IP address of 8.8.8.8.
Ping Utility

The first utility we want to practice working with is the ping utility. The ping utility sends out an ICMP
echo request packet to a remote host that you specify. When that host receives it, it responds back with
an ICMP echo response. If your system receives an ICMP echo response it tells us 2 important things.
It tells us, first of all, that physical connectivity is there between my workstation and the remote host
that I'm pinging. It also tells me that the protocols are properly configured on that physical
infrastructure. That the packets are making it through. Let's try using it. Let's 'ping' a host that's on my
same subnet. Let's go ahead and 'ping' my DNS server right here. Let's 'ping' 10.0.0.60.
When I do, 4 ICMP echo request packets are sent to 10.0.0.60 and 4 ICMP echo response packets are
received in return.
Information from Ping Output

There's a lot of good information we can pull from the output of ping. This parameter right here simply
specifies the IP address of the remote host. The bytes value tells us how big the ICMP echo request packet was.
The time parameter over here tells us how long it took from the time that we sent the ICMP echo
request to the time that we received the ICMP echo response. As you can see, the first one took around
2 milliseconds, and the subsequent 3 took less than 1 millisecond. The remote host that we were
pinging is on the same subnet, so it really didn't take that long. The first one most likely took 2 milliseconds
because the host was doing something else; if it's a DNS server, it was probably dealing with another
DNS request from some other client, so it took a moment to respond to our ICMP echo request.
The other 3 ICMP echo requests were answered in less than a millisecond, which is really fast.
Some summary statistics are displayed down here. We sent 4 requests. We received 4 responses. Zero
were lost. You will, on occasion, have instances where you may send 4 requests and get 3 responses
back and lose one. That could be caused by network congestion, bad network cards, and all kinds of
different things. If that happens, you may need to do some troubleshooting. We also have some
averages for our round-trip time. The fastest one was basically almost instantaneous, zero milliseconds.
The longest one was 2 milliseconds. The average was less than 1 millisecond. Again, because we are on
the same subnet. Notice up here that we pinged by the IP address of the remote host.
Ping Using the DNS Name
You don't have to. You can also ping by its DNS name. Let's try that. 'Ping corpserver.corpnet.com.'
Notice that ping first resolves this hostname into the associated IP address. Then, it sends the same 4
ICMP echo request packets to the remote host. Pinging by IP address first and then by DNS name can
be very valuable.
Because if you can ping the remote host by IP address but cannot ping by its DNS name, it tells you
that the basic connectivity is there. The protocols are working. We've got a problem with our name
resolution system. Maybe we have the wrong DNS server address configured. Maybe our DNS server is
down.
Ping a Router

Maybe it doesn't have a record for the host that we're trying to ping. Let's try pinging again. This time,
let's ping this interface right here on this router. This is our default gateway. This is where all the traffic
that needs to go to a host that's on a different subnet than the one we're on will be sent to by default.
This can be a very useful troubleshooting tool. If we're having trouble communicating with hosts
outside of our subnet, well, one of the first things you ought to do is start using ping to verify that
traffic can even get out of your subnet. Let's type, ping router.corpnet.com. That's the DNS name of our
router on this particular interface on that router. Notice that again, it resolves that DNS name into an IP
address.
Every single ping request received a response, and the round-trip time was very, very fast. We know
everything is working well. Let's go ahead and send a ping through this router. Then, through this
router. Then, out onto the Internet. We'll type 'ping' and then we'll type the DNS name of a server out
on the Internet. It's www.testout.com. Again, ping will resolve www.testout.com into the appropriate IP
address. We'll send the 4 ICMP echo request packets. Again, we got all of our responses for each of
those packets back, meaning that we have good connectivity out onto the Internet. Notice that there is
something different right here. Our round trip time is much slower. That's because these packets had to
go through 2 of our organizational routers. Then, they went out on the Internet and went through many
different routers. Probably 10 or 11. Just guessing. Before they reached the destination host. Because
they had to be routed and had to travel such a long distance, the round-trip time was considerably
longer. We can see that down here in the summary statistics.
The fastest one was 61 milliseconds. The slowest one was 72 milliseconds for an average around 65
milliseconds.
Tracert Utility

At this point, let's shift gears and look at the trace route utility. Ping is awesome for testing connectivity
between 2 hosts. It doesn't really reveal much of anything about the path that a packet has to take
through various networks between your sending host and the destination host. If you need to
troubleshoot routing problems then you need to use the 'traceroute' command. Be aware that on
Windows it is 'tracert' as you see here. On Linux systems, it's fully spelled out. It's trace r-o-u-t-e. Also,
be aware that the way traceroute works is a little different on Linux and on Windows. On Windows, it
uses the ICMP protocol just like ping. On Linux, it actually uses the UDP protocol instead of ICMP.
Traceroute is really interesting. As we said just a minute ago, it uses the same ICMP echo request
packets that we talked about earlier with Ping.
However, it manipulates that ICMP echo request packet a little bit, to reveal the IP address of each
router between the source-sending system and the destination system.
TTL

It does that by manipulating the time to live parameter, the TTL. The idea is that we don't want
packets circling the Internet endlessly because they couldn't reach their destination host. The IP
protocol is designed such that every time a packet has to cross a router, the router will automatically
decrement the value of the TTL by 1. If the TTL gets down to zero, we can pretty much assume that that
packet is going nowhere. Instead of letting it circle the network endlessly, the router will just drop it at
that point. When that happens, the router sends an ICMP time-exceeded error back to the source
address of the dropped packet. That ICMP time-exceeded error packet contains the IP address of the
router that's sending it back to the system that originally sent the ICMP echo request packet. That
reveals the IP address of that router.
Traceroute will send an ICMP echo request packet to the target system with the TTL set to 1. The first
router in the path decrements it to zero, drops the packet, and sends that ICMP time-exceeded error back to
the sending system. Now we have the IP address of the first router in the path. Then, for the second packet it
sends out, traceroute sets the TTL value to 2. It'll get through the first router, which decrements it to 1. Then,
that ICMP echo request packet hits the second router. The second router decrements it to zero and says,
"Hey, this packet is at zero. I'm going to send that ICMP time-exceeded error back to the sending system."
Now we know the IP address of the second router. It'll just keep doing this over and over, incrementing the
TTL value each time it sends, thereby revealing each router in the path between the sending system and the
destination system. Let's play with it a little bit and see how it works.
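As a quick illustration of how the increasing TTL values map to hops (the router names are hypothetical):

   Probe 1: TTL=1  ->  Router1 decrements it to 0 and replies with time exceeded (reveals Router1)
   Probe 2: TTL=2  ->  Router1 forwards it; Router2 decrements it to 0 and replies (reveals Router2)
   Probe 3: TTL=3  ->  Router1 and Router2 forward it; Router3 replies (reveals Router3)
   ...and so on, until a probe finally reaches the destination host itself.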
We'll do a simple test first. We'll just do a traceroute between this system right here and my default
gateway right here. They're on the same subnet, so we really shouldn't see a whole lot happening. Let's
do tracert router.corpnet.com. As you can see, we got one hop, right here. We see the DNS name
and IP address of this first-hop router. Remember, the first ICMP echo request packet that tracert
sent out had a TTL of 1. It hit this router, the router decremented it to zero and said, "Hey, this packet has
expired. I'm going to send you that error message back." We get the IP address of that router right there.
Notice over here that there are 3 round-trip times listed for this hop. That's because tracert actually sends
3 ICMP echo request packets to the target system with the TTL set to 1 for the first hop. We get 3 of these back,
which allows us to see what the round-trip times are. Let's do something a little more complex.
Let's do tracert www.testout.com. This is going to require a little bit of time to complete because
we're actually going to start sending packets through to a host out here on the Internet. Remember,
because the first probe has a TTL of 1, this first router is going to respond first. Then, the next set of packets
will go to this router. It'll respond with its error messages. Then, it'll go out on the internet. It'll have to
hop through probably 9 or 10 different routers before it actually reaches the destination system. Let's
see what happens. We're seeing many hops being delivered by the traceroute command. You'll notice
right here we have the same first hop that we looked at before, right there. We also have a second hop
here that says, request timed out. Does that mean that this router, the second router in the path is bad or
down or something? No. Look. The packets are making it through, past it.
What this usually means is that when you see request timed out and stars for your round-trip times, this
router right here, whichever router that happens to be in the path, is configured
to not respond to ICMP echo requests in order to prevent denial of service attacks. Traffic is going
through it just fine. It just is ignoring all the ICMP traffic that's being sent to it. You can see that in
order to get to testout.com, a packet has to move through 16 different routers. There are 16 hops
between my system and www.testout.com. You can use this information to try to troubleshoot problems
if you're not able to reach hosts out on the internet. For example, if this router right here, number 2, the
second hop, were down, then we would see no output for the rest of the hops all the way down, because
none of the packets would be making it past this router. That can help us say, "Oh, okay. I know
where to start troubleshooting because I know that this router is working just fine. The problem is with
this guy right here."
Summary


That's it for this demonstration. In this demo, we practiced using two very useful troubleshooting tools.
We first looked at using ping. Then, we looked at using traceroute.
5.9.3 Network Communication Troubleshooting Facts

As part of the troubleshooting process, you need to identify the scope of the problem so you can take
the proper actions to correct the problem.
In this scenario, Workstation A can't communicate with Workstation C.

The following table lists several tasks you can perform to troubleshoot the reported connectivity
problem. These steps trace the problem backward from the remote host to the local host (another way is
to work through these steps in reverse order). Depending on the situation, you might be able to
troubleshoot the problem more efficiently by skipping some tests or changing the order in which you
perform them.
Ping host C
Often the best way to start troubleshooting a problem is to ping the host you are trying to contact. This
verifies the reported problem. If the ping is successful, the problem is not related to network connectivity.
Check other problems, such as name resolution or service access. If you have access to another computer,
try pinging the destination host from that computer. If the ping is successful, skip the remaining tasks and
troubleshoot the local host configuration or physical connection.

Ping host D
If you cannot contact a specific remote host, try pinging another host in the same remote network. If the
ping is successful, then the problem is with the remote host (e.g., a misconfiguration, broken link, or
unavailable host).

Ping host E
If you cannot contact any host in the remote network, try pinging hosts on other remote networks (you
might try several other networks). If the pings are successful, or if you can contact some remote networks
and not others, then the problem is with the routing path between your network and the specific remote
network. Use the traceroute/tracert commands to check the path to the problem network.

Ping the default gateway router
If you cannot contact any remote network, ping the default gateway router. If the ping is successful, but
you still cannot contact any remote host, have the administrator verify the router configuration. Check
for broken links to the remote network, interfaces that have been shut down, and access control lists or
other controls that might be blocking traffic.

Ping host B
If you cannot contact the default gateway router, ping other hosts on the local network. If the pings are
successful, check the default gateway router.

Troubleshoot the local host connection or configuration
If you cannot communicate with any host on the local network, then the problem is likely with the local
host or its connection to the network. Troubleshoot by doing the following:
 • Check physical connectivity
 • Validate the TCP/IP configuration on the local host
 • Validate IP configuration settings

One special ping test you can perform is pinging the local host. By doing this, you are verifying that
TCP/IP is correctly installed and configured on the local host. In essence, you are finding out if the
workstation can communicate with itself. To ping the local host, use the following command:
ping 127.0.0.1

If this test fails, check to make sure TCP/IP is correctly configured on the system.
This test does not check physical connectivity. The ping can succeed even if the host is
disconnected from the network.

5.9.4 Using arp, netstat, and nbtstat

In this demonstration we're going to look at how to use three useful network management utilities: arp,
netstat and nbtstat.
arp

Let's start this demo by looking at the arp utility. The arp utility is used to view and manage the ARP
table on an IP host. Remember, whenever we send a packet to a destination host on an IP network we
have to know that host's MAC address. The MAC address is the hardware address that's burned into the
ROM on a host's network interface. Generally speaking (there are instances where this isn't true), the
MAC address does not change; it's associated with a specific network interface. That MAC address is
usually globally unique, meaning that no two hosts in the world should have the same MAC address.
IP addresses, on the other hand, are logical in nature, meaning that a host could have one IP address one
day and the next day have a completely different IP address. Therefore, we have to map IP addresses to
MAC addresses so we know which physical interface a particular packet needs to be delivered to, and
that's the job of the Address Resolution Protocol, or ARP. The arp utility is used to manage the ARP
protocol. It can be used to view mappings of IP addresses to MAC addresses on a particular host. It can
also be used to add or remove mappings of IP addresses to MAC addresses on a particular host.
arp -a

Let's take a look at how it works. If we want to just view a current listing of IP address to MAC address
mappings, the ARP table on a host, we simply enter arp -a at the command prompt of the Windows
system and we can see here that we have several different mappings. Notice that we have a mapping for
192.168.6.1 that maps to this physical address, this MAC address, and it was dynamically discovered,
meaning that when I sent that packet out to the 192.168.6.1 host the ARP protocol sent out a broadcast
saying, okay, which one of you has an IP address of 192.168.6.1? The host with that IP address
responded back saying, hey here's my MAC address, and the ARP protocol recorded that mapping in
the ARP table on this host, that's why it's dynamic in nature over here under type.
We also have a mapping for 192.168.6.2 that maps to this MAC address. If necessary you can also use
the arp utility to either add or delete a mapping to the ARP table on a Windows host. For example, let's
suppose that the host with the IP address of 192.168.6.1 just went down because its network board went
bad. I've opened up the case, I've pulled out the network board and I've installed a new one, and it's
come back online. In this situation we've got a little bit of a problem because the old IP address
mapping is no longer valid because it has a new network board with a new MAC address. In time it's
not a big deal because the ARP protocol will resolve that and be able to populate the ARP table with the
correct MAC address.
arp -d

If we needed to we could go in and manually remove the old mapping so that a new mapping could be
immediately created. In order to remove an entry from the ARP table I do have to have Administrator
privileges on the work station, so let's go ahead and gain Administrator privileges, and we will go to All
Programs, Accessories, and we will run command prompt as administrator. All right. Now we'll be able
to do it. Let's do an arp -a again, and we want to delete the first mapping 192.168.6.1 so we enter arp -d
and then the IP address, 192.168.6.1, do an arp -a again, and you can see that the mapping was deleted
from our ARP table on this workstation.
We can add it back in a couple of different ways. One way is to simply send a ping packet to the
destination host such as ping 192.168.6.1, and when I do this the ARP protocol is going to look in the
ARP table and say oh I don't have a mapping for that IP address, I need to get it, and they'll send out the
broadcast saying hey which of you has an IP address of 192.168.6.1? The host will respond back with
its MAC address, and then the ping command will go ahead and send the ping packet. If we do an arp -
a you can see that 192.168.6.1 has been added back into the ARP table of my system.
arp -s

You could do it manually if you wanted to, if you knew the MAC address of the host and you wanted to
add it in manually, you could use the arp -s command followed by the IP address of the host,
192.168.6.1 followed by the MAC address of the host. When I press Enter, ARP would manually
add this IP address mapping into the ARP table. I'm not going to do that here because the entry is already
there; ARP took care of it automatically for me.
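To recap the arp commands used in this demonstration (the MAC address shown with -s is a placeholder; you would substitute the real hardware address of the host):

   arp -a                                   (view the ARP table)
   arp -d 192.168.6.1                       (delete the entry for 192.168.6.1)
   arp -s 192.168.6.1 00-aa-bb-cc-dd-ee     (manually add a static entry)

Remember that deleting and adding entries requires an elevated (Administrator) command prompt.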
netstat

That's how ARP works.


The next utility we want to look at is the netstat utility. Netstat stands for Network Statistics. It's a
command line utility that you can use to display information about the network interfaces in your
system. You can view information about network connections, you can view the routing table, you can
look at protocol statistics for each network interface you have installed in your system, and so on. Let's
begin with just the basic netstat command. The netstat command without any parameters will display a
listing of active connections with this host, both incoming and outgoing. Press enter. You can see that I
have two active connections.
This connection right here, using the TCP protocol on port 139, is actually a mapped drive to a share on
the da1 server; 139 is one of the ports used by the server message block protocol. I've established a
connection to the local share from a remote machine. In this case it's kind of confusing because I have
two machines and I have a share on each machine, so I have this first share going out from this
workstation to a share on da1, and the second connection is a mapped drive on da1 that's coming into the
share on this machine. I have two separate active connections; one incoming, one outgoing.
netstat -a

If you enter the netstat command and use the -a option with it, it will display a list of not only all
connections but also all of the open ports on your system that are listening. This could be really useful
because it's a great way to find rogue processes. If you have some type of malware running on your
system that's opened up an IP port and is either sending or receiving information, this is one way that
you can identify it. Enter netstat -a and you see a list of all the open connections and open ports on this
system, ports that are listening for connections. As you can see, we have our two established
connections that we saw previously, both the outgoing share that we mapped a drive to on da1, as well
as the incoming mapped drive to a share on this local system, these two connections right here that are
established.
Notice over here we also have several open ports that are in a listening state, and you can come over
here and see what the port number is. If you see something that you don't recognize you can go consult
your port listing table and find out if it's a legitimate open port or if it is not. We also have our active
listening UDP ports down here, as well. In addition to detecting malware, the netstat -a command's also
a really useful tool for verifying that a network service is running the way it's supposed to. For
example, if you start up a particular service and you're not able to establish connections to it from a
remote host and you've checked the firewall and the firewall's got the port open, you can come and use
the netstat -a command to see if the port is actually open and listening on the host, and that will help
you troubleshoot what might be wrong with the service.
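For example, to check whether a particular service's port is open and listening, you could filter the output. Port 80 here is just an illustration; substitute whatever port your service uses:

   netstat -an | findstr :80

An entry for that port in the LISTENING state confirms that the service has the port open on this host.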
netstat -es

In addition to the -a option you can also enter netstat -es to display Ethernet statistics for each protocol
configured on your network interface. We press enter, scroll up a little bit here, you can see a ton of
information about the various protocols for your network interfaces. For example, up here you have a
summary of interface statistics, the number of bytes sent, and the number of bytes received. The
number of unicast packets sent, the number of non-unicast packets. Packets that were discarded, errors
and unknown protocols and so on.
We have stats for IPv4, the number of packets received, the number of errors, the number of received
packets discarded. The number of received packets delivered. You can see we had three received
packets that were discarded, while we had 282 received packets that were delivered correctly. The
number of outgoing requests, and so on. We also have ICMP protocol statistics, you can see the number
of ICMP messages sent and the number of ICMP messages received. We also have TCP statistics for
IPv4, the number of active open operations, the number of passive open operations.
The number of current connections. Right now you can see there are two, remember we had one
outgoing connection and one incoming connection. Here's the number of segments retransmitted. We
have UDP statistics, the number of datagrams received, the number of errors, the number of datagrams
sent, and so on.
netstat -r

You can also use the netstat command to view your routing table. netstat -r, this will show you your
IPv4 and IPv6, we're not going to talk about IPv6 right now, but it will display the routing table for
both protocols. Here you can see your IPv4 route table. This is the same information you can get by
entering the route print command. We have a route for the local network segment 192.168.6.0, and we
have the default route. Any packets not addressed to a host on 192.168.6.0 will immediately be sent to
the default route up here which is forwarded to the 192.168.6.2 default gateway.
nbtstat

The last utility we want to look at is the nbtstat utility. In my experience I don't use nbtstat nearly as
much as I use arp and netstat, but there are occasions, especially if you're a Microsoft Administrator
where nbtstat can be useful. It's a diagnostic tool for NetBIOS over TCP/IP and it's commonly used to
troubleshoot name resolution problems with NetBIOS, not DNS, but with NetBIOS.
nbtstat -n

You can use the -n option with nbtstat to display the NetBIOS names that have been registered locally
on your system. The name of this workstation is WS1, and its workgroup name is WORKGROUP. I
want to point out one other thing over here. If you look in this column right here, we have the NetBIOS
suffix, and this can be very useful if you want to determine what type of workstation this is. The
number in this suffix over here, which as you can see is a hexadecimal number, identifies whether this
is a workstation, a print server, a domain controller, a backup domain controller, and so on.
nbtstat -r

You can also use the -r option with nbtstat to display a count of all NetBIOS names that have been
resolved by broadcast versus those that have been resolved by querying a WINS server. All of my
NetBIOS names have been resolved by broadcast; that's because I don't have a WINS server running on
this network, so I'm not using WINS. Therefore, all of my NetBIOS names have to be resolved by
broadcasting.
nbtstat -s

You can also use the -s option with the nbtstat command to view a list of current NetBIOS sessions, their
status, and statistics about each connection. The local NetBIOS name of this host is WS1, and we
have an incoming connection from a remote host, DA1. That's because, as we said earlier, DA1 has a
mapped network drive to a share on WS1 here.
5.9.5 Arp, netstat, and nbtstat Facts
The following table lists several commands that you can use on a Windows system to gather
information about network connections:
arp
 • arp -a shows the IP address-to-MAC address mapping table (the address cache).

netstat
 • netstat shows the active connections.
 • netstat -a shows detailed information for active connections.
 • netstat -r or route print shows the routing table of the local host.
 • netstat -s shows TCP/IP statistics.

nbtstat
 • nbtstat -c shows the IP address-to-NetBIOS name mapping table (the name cache).
Local computers have a cache of recently used IP addresses and their corresponding MAC
addresses. When a computer needs to contact another computer on its own subnet, it first
checks its cache for an entry of the IP address. If the entry is found, the corresponding
MAC address is used to communicate with the destination computer. The cache can cause
problems if the MAC address for a computer has recently changed (e.g., if the network
interface card has been replaced). To correct the problem, clear the ARP cache with the
netsh interface ip delete arpcache command (or with arp -d *).

5.10.1 Name Resolution Troubleshooting

Let's spend some time talking about fixing problems with the DNS service in your network. As you
know, before two IP hosts can communicate using hostnames, we have to take those hostnames and
resolve them into IP addresses that the IP protocol itself can understand. The job of a DNS server is to
resolve these hostnames, such as www.google.com, into an IP address, and it also can work in reverse.
It can take an IP address that we give it and resolve it into its associated hostname. A common
symptom of name resolution problems is one where users on your network open up a web browser and
they try to access a webpage out on the internet, and when they do, they just get an error message.
Something is displayed saying essentially, "I can't reach the server," because the IP protocol has no idea
where it's supposed to go.
Test DNS Name Resolution Using Ping

It can't locate the server.


A quick and easy way to test DNS name resolution is to try pinging a host by its IP address. If you can ping
by the host's IP address, then you know that basic connectivity exists between those two systems. Once
that's done, then try pinging that same host but this time using the DNS name. This time, if the ping
fails, then we know that there's a pretty good chance that we have a DNS name resolution problem. We
know that network connectivity exists between the sender and receiver, it's just that the sending host
can't figure out how to resolve the hostname into an IP address that it can work with.
Common DNS Issues

There are many possible causes for DNS problems. For example, the DNS server could be down or just
otherwise unreachable for some reason. You can try using ping to test communications with the DNS
server itself to see if it responds. However, if you do this and you don't get a response, be aware that the
firewall protecting the DNS server may actually be configured to drop ICMP packets in order to
prevent denial of service attacks. So, if this happens and you try pinging the DNS server and it doesn't
respond, don't immediately assume that the server is down just because it didn't respond.
It could be that there's a firewall in the way that is simply filtering out all of your ping packets. It's also
possible that there could be a routing problem between the sending host and the DNS server. In this
situation you can use the tracert command to trace the route between the sending host and the DNS
server. It's also possible that the sending host could be configured with the wrong IP address for the
DNS server. In this case you can use the nslookup or dig command to test whether or not the IP
address you have configured on your workstation is actually that of the correct DNS server.
Other Possible DNS Issues

Be aware that it is possible that your workstation is configured correctly, and that the DNS server is
configured and working correctly as well, but for some reason that DNS server is unable to contact a
root level DNS server. This could be due to several different things: it could be due to a routing
problem, it could be a connectivity issue, maybe there's a WAN link issue. To test this, you need to do
two different things. First of all try using nslookup or dig to resolve a DNS name that you know that
your DNS server is authoritative for. Because the DNS server is authoritative, it should actually have a
local record for the hostname; and if name resolution fails, at that point we know that the DNS server is
having some problems that we're going to have to troubleshoot.
If it's successful, however, and it is able to resolve a hostname that it's authoritative for, the next step is
to try using nslookup or dig again, but this time send it a DNS name to resolve that it is not
authoritative for. In this situation, the DNS server has to contact root level name servers, and then use
the recursion process to try to find the correct IP address for the hostname. If your DNS server can
resolve local hostnames but cannot resolve hostnames that it is not authoritative for, then you know that
there is a connectivity problem, most likely, between the DNS server and the root level name servers.
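Using the example names from this unit, the two checks might look like this (corpserver.corpnet.com standing in for a name your server is authoritative for, and www.testout.com for an external name):

   nslookup corpserver.corpnet.com
   nslookup www.testout.com

If the first query succeeds but the second fails, suspect the path between your DNS server and the root or external name servers rather than the DNS server itself.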
Summary

That's it for this lesson. In this lesson, we talked about troubleshooting name resolution problems. We
looked at several common DNS issues, such as an unreachable name server, routing problems between
you and the name server, misconfigured DNS name server addresses on hosts, and finally we looked at
root level DNS server communication problems.
5.10.2 Name Resolution Troubleshooting Facts

Common name resolution problems include the following:


 The DNS server could be down or otherwise unreachable.
 There may be a routing problem between the sending host and the DNS server.
 The sending host could be configured with the wrong IP address for the DNS server.
Name resolution problems typically have the following symptoms:
 You can ping a destination host using its IP address but not its host name.
 Applications that use host names fail. This could include:
 Entering a URL into a browser.
 Pinging the host using the host name.
 Searching for the host by its name.
To troubleshoot DNS name resolution, use the following tools:
 ping
 tracert (Windows) or traceroute (Linux)
 nslookup
 dig (Linux)
 host (Linux)
The following table lists several ways to use these commands:
Command: ping
Purpose: Contacts the DNS server to see if it responds. Be aware that the firewall protecting the DNS server may be configured to drop ICMP packets in order to prevent DoS attacks; if the server doesn't respond, it is not necessarily down.
Example: ping 8.8.4.4

Command: tracert (Windows) or traceroute (Linux)
Purpose: Tests the route between your workstation and the DNS server.
Example: tracert 8.8.4.4

Command: nslookup [host]
Purpose: Queries the IP address of a host.
Example: nslookup www.mit.edu

Command: nslookup (interactive mode)
Purpose: Starts nslookup in interactive mode. The default interactive mode query is for A records, but you can use the set type= command to change the query type.
Example: nslookup, then set type=ns

Command: dig hostname or host hostname
Purpose: Queries a host. The default query is for A records. You can change the default search by appending one of the record types below to the end of the command:
 a (address records)
 any (any type of record)
 mx (mail exchange records)
 ns (name server records)
 soa (start of authority records)
 hinfo (host info records)
 axfr (all records in the zone)
 txt (text records)
Example: dig www.vulture.com ns or host www.vulture.com -t ns

Command: dig @IP address or host name domain
Purpose: Queries the DNS server at the specified IP address or host name for the domain's A records. You can change the default query type by appending a different record type to the end of the command.
Example: dig @192.168.1.1 vulture.com ns

Command: dig -x IP address or host IP address
Purpose: Finds the host name for the queried IP address.
Example: dig -x 62.34.4.72 or host 62.34.4.72
Local computers have a cache of recently resolved DNS names. The cache holds the DNS
name and its IP address. When you use a DNS name, the computer first checks its cache. If
the name is in the cache, the corresponding IP address will be used. This can cause
problems if the IP address of a host has changed. Old values in the cache might continue to
be used temporarily, making communication via the DNS name impossible. To correct this
problem on a Windows computer, run ipconfig /flushdns to delete the local DNS name
cache.
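For example, on a Windows computer you can inspect the resolver cache before deciding whether to clear it; both of the commands below are standard ipconfig options.

ipconfig /displaydns     (lists the names and IP addresses currently held in the local resolver cache)
ipconfig /flushdns       (deletes the local DNS name cache so stale records are discarded)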
5.10.3 Using nslookup and dig

In this demonstration we're going to look at two utilities that you can use to test name resolution with
your DNS server system. The first utility is called 'nslookup', and the second utility is called 'dig'.
Before we go any farther I do need to point out that nslookup is available on both the Windows and the
Linux platforms; however, dig by default is only available on the Linux platform.
ipconfig /all

Before we start using these utilities, I do want to check the IP configuration of my Windows
workstation here to make sure that it received the correct DNS server address when it received its IP
addressing configuration from the DHCP server. To do that at the command prompt I'm going to type
ipconfig /all and if we scroll up here we should see that the DNS server address is set to 192.168.6.2.
That is what we want, it's working correctly.
nslookup

If we want to test whether or not our DNS server is able to resolve hostnames properly, we can enter
'nslookup' at the command prompt.
Passive Mode

You actually have two different modes that you can run nslookup in. One mode is called passive mode
where you simply provide it with the hostname that you want it to resolve, such as nslookup
da2.mydom.com, in which case nslookup will send the query to the DNS server for the host name that
you specify, and will return the response it gets from the DNS server and exit out.
Interactive Mode

That's one way to use nslookup.


Or you can use it in interactive mode, in which case you just enter 'nslookup' at the prompt, and here
you can enter as many host names as you'd like and have nslookup continually resolve them until
you're done. When you're done you can enter 'exit' at the little '>' prompt to exit out of nslookup.
After entering 'nslookup' you'll notice that it tells me the hostname and address of the DNS server that
this workstation is going to use to resolve hostnames.
Resolving an Authoritative Hostname

The DNS server address is 192.168.6.2 and its hostname is da1.mydom.com. Let's use that DNS server
to resolve the da2.mydom.com hostname. I should point out here that this DNS server right here is
authoritative for the mydom.com zone, and the DNS suffix that has been set for this workstation is also
mydom.com. Therefore, I actually don't need to type out the entire fully qualified domain name at the
prompt here. I can actually just enter DA2; because the DNS suffix is set to mydom.com, nslookup will automatically append that domain name to the hostname that I specify before sending it to the DNS
server for name resolution. For our purposes here, I'll just spell out the fully qualified domain name and
press enter. It sends that domain name to the DNS server and it responds with the IP address that's been
assigned to that hostname, in this case 192.168.6.3.
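An interactive session for this lookup would look roughly like the following; the output is an illustrative sketch based on the addresses used in this demonstration, not a literal capture.

C:\>nslookup
Default Server:  da1.mydom.com
Address:  192.168.6.2

> da2.mydom.com
Server:  da1.mydom.com
Address:  192.168.6.2

Name:    da2.mydom.com
Address:  192.168.6.3
> exit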
Resolving a Non-Authoritative Hostname

In addition to resolving hostnames in zones that this DNS server is authoritative for, we can also have it
look up names that it's not authoritative for. For example, I could enter www.testout.com at the prompt.
Notice that the response we get says, 'Non-authoritative answer'. We did not get that up here because
the DA1 DNS server is authoritative for the mydom.com zone. It is not authoritative, however, for the
testout.com zone. Notice it was still able to resolve this DNS name into this IP address. How did it do
that? It's because the DNS server is configured with root hints. These are IP addresses of DNS servers,
root level DNS servers, on the Internet that DNS servers are preconfigured by default to send queries to
for domain names that they are not authoritative for. In this case this DNS server was not authoritative
for testout.com so it sent that query to a root level name server, which in turn, pointed the DA1 DNS
server to the DNS server that is authoritative for testout.com. My DNS server, DA1, sent the request
then to that DNS server that's authoritative for testout.com and asked it to resolve the host name. That
remote server did and sent the response back to my DA1 DNS server, which then in turn, returned it
down to my workstation through the nslookup command.
Exiting nslookup

When you're done with nslookup you can enter 'exit' at the prompt and it will exit you out. Be aware
that if you enter any other command, such as 'quit', for example, it won't exit you out; instead, nslookup will submit 'quit' to the DNS server for name resolution, and your DNS server will probably respond that it doesn't have a record for quit.mydom.com, and it won't work.
dig

That's how you use nslookup.


Let's look at a second utility called 'dig'. As I said earlier, the dig command is not available by default
on Windows systems; it is available by default, however, on Linux systems. As noted, nslookup is available on both platforms.
In order to use dig I've switched over to a Fedora 16 Linux system and we'll use it to send DNS name
resolution queries to our DA1 name server. Just as I did with the Windows system I do want to make
sure that my IP configuration is set properly. I'll use the ifconfig command at the command prompt of
the Linux system and we can see that we are getting an address, 192.168.6.12 from my DHCP server.
Notice that the DNS server address is not listed here. One way you can check that is to enter the 'cat' command, which simply displays the contents of a file on the screen, and view the file in '/etc' (which in the Linux world we affectionately call the etc directory) named resolv.conf. You can see that the name server is set to 192.168.6.2, and that the DNS suffix is set to mydom.com, just as it was with the Windows workstation.
If I enter 'dig' at the command prompt it will, by default, use whatever name server is specified in the
/etc/resolv.conf file. If however, I wanted to use a different DNS server for some reason, I can still do
that. I can do that by entering an '@' sign after the command, followed by the IP address of the DNS
server that I want to use. For example, on the remote subnet I do have a DNS server that has an IP
address of 10.0.0.1. If I wanted to use it instead of my Windows name server here, 192.168.6.2, I would
put '@' and then the IP address of that server at the command line, and then all my name resolution
requests would go to that server instead of the default name server specified in the resolv.conf file. For
now I don't want to do that, I just want to use my 192.168.6.2 name server.
I enter 'dig' at the command prompt followed by the type of lookup I want to perform. For example, I
could enter 'a' to do an 'a' record lookup followed by the DNS name that I want to look up. For
example, da2.mydom.com, and it will perform an 'a' record lookup on that hostname.
dig ptr

If on the other hand I want to perform a reverse lookup by looking up the pointer record, the PTR
record in the reverse zone, I enter dig ptr followed by the IP address of the host that I want to do the
reverse lookup on. Instead of resolving the hostname into the IP address, it will resolve the IP address
into the hostname.
dig cname

I can also perform a 'cname' lookup or alias lookup by entering 'dig cname' at the command prompt
followed by the alias that I want to resolve into a hostname.
dig mx

If I want to resolve a mail server record, I would enter 'dig mx' followed by the domain whose mail exchanger records I want to look up.
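Putting those variants together, the commands below are a sketch using the demonstration's addresses; the www alias and the mail records for mydom.com are assumed to exist only for illustration. Note that for reverse lookups, dig -x builds the in-addr.arpa query for you, as the facts table earlier in this section also shows.

dig a da2.mydom.com        (forward lookup of an A record)
dig -x 192.168.6.3         (reverse lookup; resolves the IP address back to a hostname)
dig cname www.mydom.com    (resolves an alias to its canonical hostname)
dig mx mydom.com           (lists the mail exchanger records for the domain)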
Resolve an A Record

For our purposes today let's simply resolve an 'A' record.


'dig a da2.mydom.com' sends the query to the name server and it responds with the corresponding
record from the zone. Notice that a little bit more information is displayed by dig as compared to
nslookup. For example, down here we have the question section. In other words, this is the request for
name resolution that I sent. I sent this domain name, I'm looking for an Internet record, and I'm looking
for an 'A' record. In response, here's what I received, da2.mydom.com resolves to 192.168.6.3.
Down here we see the query time, how long it took to make that resolution, eight milliseconds, and
then the IP address for the server that was used to resolve the domain name into the IP address. Notice
there's a '#53' over here, that simply specifies that the DNS server is listening on port 53 which is the
default port used by DNS servers. Then the timestamp, or the response, and then the size of the
message.
Just as we could perform an authoritative and a non-authoritative lookup with nslookup, we can also
perform an authoritative or non-authoritative lookup with dig. The lookup for da2.mydom.com was an authoritative lookup, because the DNS server is authoritative for the mydom.com zone.
To do a non-authoritative lookup we just do the same thing, 'dig', and I'll point out here if you don't
specify a type of lookup to perform, by default dig will perform an 'A' record lookup. I don't actually
need to include the 'a' here. I enter 'dig' and let's do www.testout.com. 'Enter'. Just as with nslookup the
DNS server receives the request, recognizes the fact that it is not authoritative for that zone, goes out
and queries a root level name server on the Internet which responds with the IP address of DNS server
that is authoritative for the testout.com zone. My DNS server then sends that name resolution request to
that server that is authoritative for testout.com, receives the response, and then forwards the response
down to the dig command on my Linux workstation. You can see here that we have the same
information, here's the question that was sent, www.testout.com. As I said before, because we didn't
specify a particular type of lookup to perform, by default dig performs an A record lookup. Then we get
our answer section here, where we see that www.testout.com resolves to 98.129.173.225.
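Trimmed down, the output of that non-authoritative lookup looks roughly like this; the TTL, query time, and message size shown here are illustrative values rather than a literal capture.

$ dig www.testout.com

;; QUESTION SECTION:
;www.testout.com.              IN      A

;; ANSWER SECTION:
www.testout.com.       300     IN      A       98.129.173.225

;; Query time: 54 msec
;; SERVER: 192.168.6.2#53(192.168.6.2)
;; MSG SIZE  rcvd: 60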
Summary

That's it for this demonstration. In this demo we talked about using nslookup and dig to perform both
authoritative and non-authoritative lookups of DNS records.

UNIT-II
6.1 Switch Access

As you study this section, answer the following questions:


 What are the requirements for connecting a VTY (virtual terminal) to a Cisco device?
 What types of cable can you use to connect a PC to a router console port?
 What is the difference between a managed and an unmanaged switch?
 What is the difference between in-band and out-of-band management?
After finishing this section, you should be able to complete the following task:
 Modify system passwords.

6.1.1 Device Access

As a network administrator, you need to know how to access and configure a network switch. There are
two general categories of network switches, unmanaged and managed. Unmanaged switches are low-
end switches that are sold in most retail stores. To implement unmanaged switches, you plug them into
a power outlet and then connect your network devices using UTP cables. While these switches are
convenient and easy to implement, they lack many of the advanced management and security features
available on managed switches. Managed switches must be configured before you can use them. Let's
take a look at how this is done.
We're going to be using a Cisco managed switch, as they are found in most networks.
Out-of-Band Management

There are two ways to connect to a Cisco switch:


Out-of-band management: With out-of-band management, we configure a dedicated channel for
accessing a network device. On a Cisco switch, you accomplish this by connecting a notebook or
workstation system to its console port. This is the option you must use if you are configuring the switch
for the first time because it doesn't have an IP address or any authentication information configured by
default.
In-band Management

In-band management: With in-band management, you use a standard network connection to access the
switch using remote access protocols, such as Telnet or SSH, or using a browser-based configuration
interface. This can only be done after the switch has been initially configured with an IP address and
with authentication information using out-of-band management. You should avoid using Telnet as an
in-band management solution as it sends all data, including authentication information, as clear text
over the network connection. If you are using a web interface, you should use HTTPS instead of HTTP
for the same reason.
Let's assume that we are implementing a new network switch, so we must use out-of-band management
to access and configure it. This means we must connect to the console port on the switch. Every Cisco
device has a console port. You connect one end of a console cable to this port and the other end to your
notebook or workstation. When creating your network documentation, you will depict a console
connection using a dotted line. This indicates the connection is an out-of-band, dedicated connection,
and not a traditional network connection.
Let's take a look at a console cable and console port on a Cisco switch. A console cable used to be shipped with every Cisco device, but now you must purchase one separately. The console cable is also called a
rollover cable. You connect the RJ45 jack on one end to the console port on your switch. You connect
the serial connector on the other end to your laptop or workstation. Many newer laptops and
workstations no longer provide a serial port, so the device manufacturer now provides roll-over cables
with USB connectors instead of serial connectors. After connecting the USB connector on the cable to
a USB port on your workstation, you must load the appropriate USB driver software to enable it.
The console port on a Cisco device is shown here. This is where you connect the RJ45 end of your
rollover cable. After creating the physical connection between your workstation and the console port of
the Cisco device, you must load terminal emulation software and create a logical connection to the
device. Some commonly used terminal emulation programs include PuTTY or SecureCRT. Configure
the terminal emulation software to establish a session using the serial port or USB port on your system.
By default, there are no passwords configured on a Cisco switch. You simply press Enter several times
to display the device prompt.
Summary

That's a quick look at how to connect to a managed Cisco switch. To review, we discussed how to
access a Cisco device, including out-of-band and in-band connections. The console port is used for an
out-of-band connection to the device, and must be used for a first time configuration. Once the device
has been configured with an IP address and is reachable across the network, you can use in-band
connections utilizing remote access protocols, such as Telnet or Secure Shell, or a browser-based
configuration interface to configure the device.
6.1.2 Using the Command Line Interface (CLI)

In this demo, we are going to provide an introduction to the Cisco Command Line Interface, or CLI.
We'll take a look at default terminal emulation settings to gain access to a Cisco console port. We'll
take a look at basic navigational commands to get in and out of different modes on the Cisco device,
and then we will finish with a discussion of how to actually save your work so that the next time the
device is reloaded, the commands that you entered are preserved.
Configuring Terminal Emulation Settings

The first topic of this demo is terminal emulation settings. In my case, for this demo I am using a product called PuTTY. Windows also has its own HyperTerminal. There are many others, but whatever
terminal emulator you use, for that first connection to a Cisco device via its console port, you have to
set your terminal settings to match what you are going to see here. In my particular case, I am using a
USB connection, so I am going to select that one as the one I want to configure and then if I go to
Serial connections within PuTTY--this is important--the console port of any new Cisco device will
default to these settings.
They have to match on your emulator software or whatever it is you are using. You are going to set this
to a speed of 9600 bits per second or baud, 8 data bits, one stop bit, no parity bits, and no hardware
flow control. Once you have done that and you have a physical connection to the console port of the
Cisco switch or router, you should just be able to hit Enter and gain access to the device.
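If you are connecting from a Linux workstation instead of using PuTTY, the same 9600 baud, 8-N-1 settings can be reached with GNU screen; the device path below is an assumption and will vary depending on your USB-to-serial adapter.

screen /dev/ttyUSB0 9600     (screen defaults to 8 data bits, no parity, 1 stop bit at the given speed)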
User Mode Prompt

I am accessing a device that has no config on it, just like it is right out of the box, so I am not going to
get prompted for any passwords here. As soon as I hit Enter, I get a switch prompt. We call this prompt
the user mode prompt. That's because of the greater than symbol (>) you can see here. User mode is a
mode where you can type in certain show commands to reveal very basic information about the device,
but you certainly can't configure any aspect of the device or reveal anything that would require a higher
level of authorization. It's used for very basic troubleshooting.
If I type the command 'show' here, and then a question mark (?), it provides me a list of options for the 'show' command. I do have quite a few. Note that 'show clock' is an option here. I hit the space bar to
get the next page. What I want you to focus on is that there is no option here for 'show running-config'.
It goes right from 'rtr' to 'show sessions'. We'll take a look at the actual output of 'show running-config',
a pretty important command, a little bit later in this demo, but what I want you to focus on here is that
it's not an available command while you are in user mode because it reveals too much about the device.
To get the 'show running' command, you have to get into a more privileged mode, which we'll see in
just a moment.
If I type the letter 'Q' here, it will quit this output so I don't have to go through every single page. But
there are some available commands here. For example, 'show version'. I can see from this command
that I am actually logged in to a Cisco 2960 series switch. I can see this switch has 24 FastEthernet
interfaces, which are 100 megabit per second capable, and it has 2 gigabit Ethernet interfaces, which
are 1000 megabit per second capable. I hit space bar to get to the next page. I can also get information
about MAC addresses, serial numbers, etc. If I do need access to the more revealing commands, such
as 'show run', from user mode you type the command 'enable'.
Enable/Privileged Mode

Note once again, I'm not being prompted for a password because nothing has been configured yet on
the switch.
But the prompt does change to a pound symbol (#); the pound symbol indicates that you are now in a
higher authorization state on the device. We call this again privileged mode or often enable mode
because that's the command used to get there. But now if I type 'show' command with the same
question mark (?) to see some options, you'll see a lot of the same options. If I space down a little bit,
you will see now that the running-config command is available.
show running-config is Available

While I could not type that command in--it was not a command option in user mode--because I am now
in privileged mode on the device, it will accept that command. We'll take a deeper look at this
command output later on in this demo and in subsequent demos.
configure terminal

Once you are in privileged mode, there are many other subordinate command modes to actually
configure the device. If I did want to configure some aspect of this switch, I would type from here
'configure terminal', so I am using my terminal through the console port to gain access to it.
interface f0/1

Let's say that I wanted specifically to configure one of those 24 FastEthernet interfaces, so I am going
to type 'interface FastEthernet', or just 'f' for short, '0/1'. That's the first interface on this switch. I can
type commands in that are relevant to an interface. For example, 'speed', maybe you want to fix that at
'100'; maybe 'duplex', I want to fix that at 'full'; etc.
Backing Up Using the exit Command

At this point, I am two levels deep within privileged mode. I first typed 'config t' to get to config mode
and then 'interface', to get to config-if mode. If I wanted to back up a level, just to config mode, the exit
command does this. It backs you up one level. If I type it again, it will back me up one more so I am
back at privileged mode. Then you get some output indicating that the system had just been configured
via the console. If I go a couple levels deep again with the 'configure terminal' command, which, by the
way, I can abbreviate to simply 'conf t' and then the 'interface' command, which I could abbreviate also
to just 'int F0/1'. You only have to type enough of a command to be uniquely recognized within that
mode. No other commands at the Config mode level begin with the letters INT.
end

But now if I want to get all the way back out, I don't want to have to type 'exit' twice to get there. I can
type the 'end' command, or as you can see here, the Ctrl+Z does the exact same thing.
saved and running-config

If I hit Enter though, after 'end', I am all the way back at privileged mode.
The last thing I want to talk about in this demo are the concepts of saved and running-configs. We did
type the 'show running-config' command in before. I am going to do it again now: 'show running-
config' or just 'show run' would also work. And if I space through a couple screens, you will notice that
FastEthernet 0/1 does have those two commands that I typed in. The running-config is the config of the
device that is currently stored up in dynamic RAM and would be lost if you actually powered the
device down at this point.
startup-config

If you want to save your work, you have to save it into what is called a startup-config. I hit 'Q' to quit
this output and type 'show startup-config'. You will see it's not present. This is a new device and I
haven't yet saved anything on it. The point again is that if I powered off the device at this point and
then brought it back up later, those two commands--Speed and Duplex-- would no longer be in effect.
copy running-config startup-config

If you want to save whatever you've done, you need to make a copy of that running-config to the
startup-config. You do that with the 'copy' Command. Copy from 'running-config' to the 'startup-
config'. I'll hit Enter there. I will confirm that I want to do this just by hitting Enter. And by the way, the
'write memory', or abbreviated 'wr mem', command will also do the same thing.
Verify Settings Were Saved to startup-config

But now if I do a 'show startup-config' again, or just hit the up arrow to find it in command history, now
you see that there is a startup-config because I saved whatever was in dynamic RAM right now as
a permanent copy on the system. So now if I powered off this device and brought it back up, those
commands would, in fact, still be in effect. Always save your work. Lastly, when you are done
configuring or accessing the device for this time, then to exit your session, you can simply close out of
your terminal session.
Or, to get out of the switch itself, type 'exit'. Note you no longer have a switch prompt. To get back in
once again, simply hit Enter and you are back in user mode. This demo has provided an introduction to
the Cisco CLI, some basic navigational commands, and also how to save any configured work.
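The whole session from this demo condenses to the command sequence below; the interface, speed, and duplex values simply repeat what was configured here and would differ on your own device.

Switch>enable
Switch#configure terminal
Switch(config)#interface FastEthernet0/1
Switch(config-if)#speed 100
Switch(config-if)#duplex full
Switch(config-if)#end
Switch#copy running-config startup-config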
6.1.3 Device Connection Facts

Enterprise network switches need to be configured before they are implemented. Be aware that low-end
switches available from many retail stores cannot be configured. These are called unmanaged switches.
To implement an unmanaged switch, you simply plug it in to a power outlet and connect your network
devices with UTP cables. While unmanaged switches are convenient and easy to implement, they lack
many of the advanced management and security features available on managed switches.
Some router and switch management tasks can be performed by using management utilities provided
by your workstation operating system through a network connection. This is called in-band
management, because it uses a standard network connection to perform the tasks. For example, tools
such as Telnet or SSH provide in-band management. Using the same network connection for both data
and management has several drawbacks:
 You must compete with normal network traffic for bandwidth.
 The network traffic created by the management utilities must be protected from sniffing to
ensure that hackers cannot capture sensitive configuration information.
 If the network connection is unavailable, or if the device is unresponsive to network
communications, management tasks cannot be performed.
Out-of-band management, on the other hand, overcomes these problems by using a dedicated
communication channel that separates management traffic from normal network traffic. With network
switches and routers, you can use console redirection to access the device's console through a built-in
serial or USB port. For example, Cisco routers and switches do not use monitors, and you cannot
connect a keyboard or a mouse directly to the device. Instead, you connect a standard PC to the
device's console port to manage the device.
You can use these options to manage a Cisco device:
Connection Type: Console
Description: A console connection allows for a direct connection through a PC to the console port on the device. The PC needs a terminal emulation program (such as PuTTY) to connect to the device's command line interface. This is an example of out-of-band management. In the terminal emulation program, use the following settings:
 9600 baud (or a rate supported by your router)
 Data bits = 8 (default)
 Parity = None (default)
 Stop bits = 1 (default)
 Flow control = None

Connection Type: Virtual Terminal (VTY)
Description: A VTY connection connects through a LAN or WAN interface configured on the device. Use a program (such as PuTTY) to open the command line interface. This is an example of in-band management. The Cisco device must be configured with an IP address before a VTY connection can be made.

Connection Type: Security Device Manager (SDM)
Description: The Cisco SDM allows a web browser connection to the device, using HTTPS. Once connected, the SDM allows you to manage the security features and network connections through a web-based graphical user interface. This is an example of in-band management. Be aware of the following SDM settings:
 10.10.10.1 is the default IP address of the SDM.
 The default value for both the username and password is cisco.
 A new router may not be completely configured for an SDM connection, so you may need to make a console connection first.

Use the following cable types to make the initial connection to the switch or router for device management:

Cable Type: Rollover Ethernet Cable (also called a console cable)
Pin-outs: 1-8, 2-7, 3-6, 4-5, 5-4, 6-3, 7-2, 8-1
Use: Use a rollover Ethernet cable to connect the device's console port to the serial port on a PC. Connect the RJ45 end to the console port and connect the serial end to the PC. Many newer Cisco devices use a USB connector for the console connection and can be accessed with any standard USB cable.

Cable Type: Straight-through Ethernet Cable
Pin-outs: 1-1, 2-2, 3-3, 6-6
Use: Use a straight-through Ethernet cable to connect an Ethernet port on a router to an Ethernet port on a hub or switch. The router can then be accessed from another PC connected to the same network, using a VTY connection. If the router has an AUI port, connect one end to an AUI transceiver before connecting to the router.

Cable Type: Crossover Ethernet Cable
Pin-outs: 1-3, 2-6, 3-1, 6-2
Use: Use a crossover Ethernet cable to connect an Ethernet port on a router directly to the NIC in a PC. Establish a VTY session from the PC to connect to the device. If the router has an AUI port, connect one end to an AUI transceiver before connecting to the router.

6.1.4 Password Levels

In this lesson, we're going to discuss the different password levels you can configure on Cisco devices
to prevent unauthorized access.
There are two important passwords that you can configure on Cisco devices. The first is the user mode
password, and the second is the enable (or privileged) mode password.
User Mode Passwords

When you first connect to a Cisco device, you are in user mode. You can tell you are in user mode
because the hostname of the device is displayed followed by the greater than sign (>). In user mode, the
commands you can execute are limited.
Once you're in user mode, you can enter the enable command to enter enable mode. In enable mode,
you will see the hostname of the device followed by the number sign (#). Within enable mode, you can
change most switch configuration parameters, including security settings.
New Cisco devices aren't configured with any passwords. For obvious reasons, you shouldn't leave a device in
this state and should password-protect access to both user mode and enable mode.
User mode passwords are used to restrict access to user mode. There are actually several user mode
passwords that can be configured, but the two most important are the console password and the virtual
terminal (VTY) password. When you connect to a Cisco device using its console port, you must enter
the console password to access user mode. If you use a remote access protocol, such as SSH or Telnet,
to access the device, you must use the VTY password to access user mode. You can use the same
password for both connection types, or you can use different passwords for each.
There are only a handful of tasks that you can complete in user mode. To make significant configuration
changes, you must enter enable mode. This is done by entering the enable command while in user
mode.
Enable Mode Passwords

When you do, you will be prompted to provide the enable password. There are two passwords used
with enable mode that you need to be familiar with:
the enable password and the enable secret password. The enable password is not encrypted, but the
enable secret password is encrypted. Because of this, the enable secret password supersedes the enable
password. If you have both the enable password and the enable secret password defined, then you must
supply the enable secret password to enter enable mode. On the other hand, if you have just the enable
password configured, then it is used to gain access to enable mode. For this reason, only the enable
secret password is usually configured on a Cisco device. Once you've entered the appropriate
password, you'll be placed in enable (privileged) mode, where you have unrestricted access to the
device's configuration.
If your organization uses multiple devices, you can consider using an AAA solution to centralize
password management. Instead of configuring duplicate passwords individually on each device, you
can configure your devices to use an authentication, authorization, and accounting (AAA) server. AAA
offers a means for centrally managing the passwords for all your Cisco devices.
Summary

That's it for this lesson. To review, we discussed Cisco device password levels, along with the
command modes that they protect.
6.1.5 Configuring Line Level Passwords

This demo will discuss configuring line level passwords on Cisco devices. Line level passwords protect
access to user mode on Cisco switches and routers: whether you have a console connection to the system or a Telnet or SSH connection, they prevent you from actually getting to user mode before you type in a password. Enable passwords are used to protect access to privilege mode, but again, line level
passwords simply protect access to user mode. Once you've configured both, there are two levels of
passwords to actually get through to the point where you can configure the device. A Cisco device has
no passwords by default, which means that, once I have terminal settings set up, I can simply hit
Enter and get a user prompt. I currently have a console mode connection to this switch, and what I want
to do is protect what I'm doing here with a password. I don't want someone to be able to console
connect and get to user mode without typing in some security phrase first.
Go to Configure Mode

To configure that, I have to first go to enable mode and then to configure mode. Once I'm in configure
mode, I want to go to the line that I want to configure the password for, which in this case would be the
console, which is 'line console 0', or 'line con 0', or simply 'line 0' would work.
Define the Password

Once I'm here, I want to define the password that I want to use with the 'password' command. I'll make
the password in this case 'useraccess' and hit Enter. That's not enough. Just because you've defined the
password, you still have to tell the system to prompt for the password. You do that with the 'login'
command, which forces a login process.
Configure a Password for Remote Connections

I also want to configure a password for remote connections, so for Telnet and Secure Shell. To do that, I
have to go to a different line. To grab them all, I would use the range of VTY 0 through 15. We'll make
the password in this case 'remoteaccess'. Once again, I have to tell it to prompt for that password with
the 'login' statement. Okay. I'm done with the configuration. I'll type 'end' or Ctrl+Z to get out of config
mode.
Test the Password

Typically, you would copy your running-config to startup-config at this point, but these are passwords, and to avoid a typo and then having to use password recovery to get back in, I generally recommend testing things out before you save your work. Let me 'exit' here and then hit Enter. This
time, you'll note that I do not get automatic access to user mode. I do have to type in a password. If I
remember that correctly, it was 'useraccess'. Now that I know it works, now that I've documented my
passwords, I can get into enable mode again and then do a 'copy run start', to save my work.
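As a summary sketch of this demo, the full line-password configuration looks like the following; the two password strings are just the sample values used here and should obviously be replaced with your own.

Switch(config)#line console 0
Switch(config-line)#password useraccess
Switch(config-line)#login
Switch(config-line)#exit
Switch(config)#line vty 0 15
Switch(config-line)#password remoteaccess
Switch(config-line)#login
Switch(config-line)#end
Switch#copy running-config startup-config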

6.1.6 Configuring Enable Mode Passwords

This demo will discuss how to configure enable mode passwords on Cisco switches and routers. If I hit Enter here, I am prompted for the console password, 'useraccess', which is what I configured. That takes me to user mode. I can still type enable with no password prompt to get to privilege mode.
Options for Password Protection for Privilege Mode

We certainly don't want that.


To configure password protection for privilege mode, there are two options. I go to configure terminal.
There are two password options here. There is what's called enable password. I'll go ahead and set
enable password, which by default is not encrypted. You can manually encrypt this password if you
wish. There is also an enable secret password, which I'll also configure here. We'll call that
'supersecret'. Several points about this. The enable secret password is encrypted by default; the enable
password, like I said, is not. In the case where both are enabled, the enable secret password will always
supersede.
If you only had the enable password configured, then that is what you would type in--in this case,
'secret'--to gain access to enable mode. If you have them both set, then in this case, unless your terminal
emulator did not support the encryption of the enable secret password, then you would type in
'supersecret'. Note these are both global config commands because they are not particular to a certain
line, such as the console port or a remote connection. Note if I type 'show run' here, for running config,
you can verify what I said--that the enable password is in clear text by default, while the enable secret is encrypted by default.
Manually Encrypting the Enable Password

I mentioned you can also manually encrypt the enable password. You do so using the service password
encryption command, which is also global config. If I do that 'show run' command again, you'll see that
the enable password is indeed now encrypted, but at a much weaker level than the enable secret.
Regardless, if you're using a current terminal emulator that does support the encryption of the enable
secret password, then the enable password would never actually be used.
Verifying the Passwords Work

Let's verify that this now works. I exit my system. I did not save my work, just in case I may have fat-fingered a password; if so, I can simply restart the switch to undo those changes. I type in my
console password once more. As before, I'm in user mode. This time, if I type enable, I am prompted
for a password. If I type in 'secret', which I defined as the enable password, note it does not work. If I
type in 'supersecret', it should. Now you have two-level password protection to get to privilege mode.
You have to type in a line level password to get to user mode and then you have to type in an enable
level password to get to privilege mode.
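Condensed, the enable-mode password configuration from this demo is shown below; 'secret' and 'supersecret' are only the sample values used in the demonstration.

Switch(config)#enable password secret
Switch(config)#enable secret supersecret
Switch(config)#service password-encryption
Switch(config)#end
Switch#show running-config     (verify how each password is stored)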
6.1.8 Configuring AAA Authentication

In this demonstration, we're going to configure triple A authentication. We know that triple A stands for authentication, authorization, and accounting for our infrastructure. We have a RADIUS server or a TACACS server--some type of triple A service somewhere in our infrastructure. What we want to do is point this router to that service so that when somebody tries to log in to the router, it has to ask this service whether they are authenticated to do this. We're going to set this up on our router. The first thing we're going to do is go into 'Global Configuration,' and we're simply going to enter 'aaa new-model' and hit 'Enter.' That enables triple A; we're turning it on, on this router. Now we need to point our router to wherever the service is. We're going to
use RADIUS in this case. Let's say there's a RADIUS server sitting on our infrastructure somewhere.
We enter 'radius-server', then the keyword 'host', and then we need to type in the IP address, so '10.10.10.10' or whatever it is inside of your organization. Then we'll hit 'Enter.'
Now we have enabled it and now we're pointing it to our RADIUS server, that triple A server on our
infrastructure. Now let's tell our router how we're going to use this service. We're going to turn it on for authentication, specifically for logging in: when we try to log into the router, it will talk to the triple A service to authenticate us. We enter 'aaa authentication'--I'm going to hit 'Tab' there to speed it up--then 'login', and we'll say 'default group'. The group points back to that particular RADIUS server with the IP address of 10.10.10.10. We could also use a group of servers--a server farm for this authentication--for redundancy and fault tolerance. The keyword there is group. For the last word we can type 'RADIUS' or 'TACACS', but since we're using a RADIUS server, we want to use RADIUS right here. Now let's finish by hitting 'Enter' and there you go.
Summary

That's it for this demonstration. In this demonstration, we enabled triple A on our router. We pointed it to
the particular RADIUS server host on our network. Then we set up the router to tell it how to use this
server that we have on our infrastructure. We're using it for log in authentications. That's how we get in
and set up triple A authentication for our router.
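The three commands from this demonstration are summarized below; the 10.10.10.10 address is just the placeholder used in the demo, and on newer IOS releases the RADIUS host may instead be defined in a named 'radius server' block.

Router(config)#aaa new-model
Router(config)#radius-server host 10.10.10.10
Router(config)#aaa authentication login default group radius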
6.1.9 Switch Password Facts

The following table lists three of the most common password types that you can configure on Cisco
devices, including switches and routers:
Password Type: Console
Description: Controls the ability to log on to the device through a console connection.

Password Type: VTY
Description: Controls the ability to log on to the device using a virtual terminal (VTY) connection.

Password Type: EXEC mode
Description: Controls the ability to switch to configuration modes. There are two different passwords that might be used:
 The enable password is stored in clear text in the configuration file.
 The enable secret password is encrypted and stored in the configuration file.
A Cisco device always uses the enable secret password if it exists.

The following are facts about configuring router passwords:


 Passwords are case sensitive.
 For security reasons, do not use the same password for both your enable and enable secret
passwords.
 You can set the enable, enable secret, and line passwords in setup mode.
 Cisco devices support Terminal Access Controller Access Control System (TACACS) and
Remote Authentication Dial-In User Service (RADIUS) to centrally validate users attempting to
gain access to the device.
To control physical access to the device's console, you must keep it in a locked room. A
console connection can only be established through a direct, physical connection to the
device. Keeping the device in a locked room prevents unauthorized users from making a
console connection. Even if you have set console passwords, users with physical access to
the device can use password recovery to gain access.

The following table summarizes basic password commands:
Command: Router(config)#enable secret [password]
Description: Sets the encrypted password used for privileged mode access. The enable secret is always used if it exists. This command uses the Message-Digest 5 (MD5) hashing algorithm to encrypt the password.

Command: Router(config)#enable password [password]
Description: Sets the unencrypted password for privileged mode access. This password is used if the enable secret is not set.

Command: Router(config)#line con 0
Description: Switches to the line configuration mode for the console.

Command: Router(config)#line vty [0-197] [1-197]
Description: Switches to the line configuration mode for the virtual terminal. Specify one line number or a range of line numbers. For example: line vty 0 4

Command: Router(config-line)#password
Description: Sets the line password (for either console or VTY access).

Command: Router(config-line)#login
Description: Requires the password for line access.

Commands: Router(config)#no enable secret
  Router(config)#no enable password
  Router(config-line)#no login
  Router(config-line)#no password
Description: Removes the password. The no login command disables password checking.

Command: Router(config)#service password-encryption
Description: Encrypts all passwords as type 7 passwords. Encrypted type 7 passwords are not secure and can be easily broken. However, the encrypted values do provide some level of protection from people looking over your shoulder after you issue the show run command. Use the enable secret command for better encryption.
If you do not use the login command in line mode, a password will not be required for
access, even though one is set.

Access to the device through a Telnet or SSH session (the VTY lines) is controlled by the login and password entries.
To prevent VTY access, there must be a login entry without a password set. Access is allowed based on
the following conditions:
 no login, no password = access is allowed without a password
 login, no password = access is denied (the error message indicates that a password is required
but not set)
 no login, password = access is allowed without a password
 login, password = access is allowed only with a password
6.2 Switch IP Configuration

As you study this section, answer the following questions:
 Why would you configure an IP address on a switch?
 What does the ip address dhcp command allow you to do?
After finishing this section, you should be able to complete the following tasks:
 Configure management VLAN settings.
 Configure switch IP settings.
6.2.1 IP Address and Default Gateway Configuration

In previous demos, we've configured a Cisco layer 2 switch with interface information, such as speed
and duplex. We've also configured both line and enable mode passwords. In this demo, we're going to
provide the switch with an IP address and default gateway configuration to allow for remote
management through Telnet or Secure Shell. We will log in to the device. Again, still with a console
connection. You can see that here. Until you have defined an IP address configuration for the device,
the local console port is the only way you can access the device.
View the Status

We'll fix this with this demo--logging in.


Before I configure anything, I'm going to type the show IP interface brief command. This time, let's
focus right on the top. We can see that VLAN1 currently does not have an IP address assigned to it and
is administratively down. This being a layer 2 switch, you cannot configure IP addresses on individual
interfaces. The switch itself has an IP address, which is considered virtual, and that is assigned to VLAN 1. To allow this switch to be manageable from different subnets than the one it is currently on--in
essence, to be manageable from anything outside of a physical connection to the console port--we have
to give the switch an IP address and also a default gateway so it knows how to reach or respond to
devices that are not connected to its local subnet.
Configure an IP Address on VLAN1
To do this, we go to global config, and then we have to configure an IP address on the virtual interface
or VLAN1. We'll go to that interface, and I will type in an IP address that I want to give this switch and
its mask.
Activate the VLAN

You may have noticed, again, that the default status of that VLAN is administratively down, so we have
to change that. If we have assigned an IP address to a VLAN, and we want that VLAN to be active, we
have to perform a no shutdown operation to bring it up. By default, the VLAN is in shutdown state. By
issuing a no shutdown, we're giving it permission to activate. You can see, after a couple of moments,
the state of the VLAN changed to up. If I end, and do a show IP interface brief command again, you'll
see now its status has changed to up and up.
Define a Default Gateway

At this point, the switch should be reachable--via Telnet or SSH--by any of the devices connected to the same subnet, but we want to enable operation from remote subnets as well. To do that, we have to
define a default gateway for the switch. Just like a Windows workstation would have both an IP address
and a default gateway, the switch needs one as well. This command is issued in global config, and the
command is 'ip default-gateway', and then the IP address of the next hop device that would be used to
reach remote networks. Let's take a look at our config. I'll do a show run, and this information will
appear on the bottom now. You can see that VLAN1 does have an IP address and a default gateway.
Verify the Connectivity and Save the Configuration

At this point, we should be able to test our work and verify using a ping command that I can reach my
default gateway, and I had five successful responses there. Once you've verified connectivity, we'll go
ahead and copy run start, which will copy my running config to startup config, to save my work and
confirm it. We have now enabled this switch to be managed via remote connections--Telnet or Secure
Shell, either on the same subnet or remote subnets--so that we don't have to use a console connection
locally to manage this device.

6.2.2 Switch IP Configuration Facts

Keep in mind the following facts about IP addresses configured on switches:
 Basic switches operate at Layer 2 and are therefore able to perform switching functions with no IP address configured.
 A switch does not need to have an IP address configured unless you want to manage it with an in-band management utility, such as SSH or a web-based interface.
 Switch ports do not have IP addresses unless the switch is performing Layer 3 switching, which is not supported on all switches.
 The switch itself has only one active IP address. The IP address identifies the switch as a host on the network.
To configure the switch IP address, set the address on the VLAN interface (a logical interface defined
on the switch to allow management functions). By default, the VLAN is VLAN 1. Use the following
commands to configure the switch IP address:
switch#config terminal

switch(config)#interface vlan 1

switch(config-if)#ip address IP_address subnet_mask

switch(config-if)#no shutdown

To enable management from a remote network, configure the default gateway. Use the following
command in global configuration mode:
switch(config)#ip default-gateway IP_address

You can use the ip address dhcp command to configure a switch (or a router) to get its IP
address from a DHCP server. The DHCP server can be configured to deliver the default
gateway and DNS server addresses to the Cisco device as well. A manually configured
default gateway address overrides any address received from the DHCP server.
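For example, a minimal sketch of a DHCP-based management address on the default VLAN would look like this; whether you also set a static ip default-gateway depends on what the DHCP server delivers.

switch(config)#interface vlan 1
switch(config-if)#ip address dhcp
switch(config-if)#no shutdown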

You can use the show cdp neighbors detail command to display detailed information
about neighboring devices including network address, enabled protocols, hold time, and
software version.

6.3 Switch Interface Configuration

As you study this section, answer the following questions:


 How does the VLAN interface configuration mode differ from Ethernet, FastEthernet, and
GigabitEthernet interface configuration modes?
 What must you consider if you manually configure the speed or duplex settings?
 What happens when autonegotiation fails for the Ethernet interface on a Cisco device?
 What is the default setting for all ports on a switch?
After finishing this section, you should be able to complete the following task:
 Configure switch ports.
6.3.1 Switching Operations
In this lesson, we'll discuss switching operations, including how switches learn about their environment
and how they intelligently forward frames through the network to the intended destination host.
In this network, we have four workstations connected to a switch. The numbers below each workstation
represent the MAC address of the computer. MAC addresses are actually six bytes (or 48 bits) in
length, but we've shortened these addresses to make things easier. For example, the MAC address of
the Ethernet card in host A is 1111, host B, 2222, and so on.
Host A is connected to the switch on port Ethernet zero (E0). This is the first Ethernet port on the
switch. E1 would be the second Ethernet port, and so on. Let's assume that the switch has just been
powered on and has not yet received data from any workstations. The switch maintains a table in
memory called the Content Addressable Memory (CAM) table. The CAM table stores the relationship
between the MAC addresses on the network and the switch port each one is connected to.
The structure of the CAM table looks something like this. There's a column for the MAC address and a
column for the switch interface it's connected to.
When the switch is first powered on, the CAM table is blank. It doesn't know the MAC address of any
host that's connected. To build the CAM table, the switch has to receive frames from the network. Now,
the switching operation uses three critical processes: Learning, forwarding, and filtering.
Learning

The first switch process is learning. When the switch first comes online, it needs to populate its CAM
table. To do this, it listens to network transmissions to catalog the MAC addresses on the network and
identify which switch interface they are connected to. For example, for the switch to learn that host A is
connected to the network, host A must first send some data.
Let's suppose host A sends out a frame. The frame arrives on port E0, with a source MAC address of
1111. Using this information, the switch learns that the host with MAC address 1111 can be found on
port E0, and it adds this information into its CAM table.
However, the switch doesn't have a port association yet for the destination MAC address of the frame it
received. The switch has no idea which port the destination host is connected to. So, to deliver the
frame, the switch must flood the frame out every switch port except for the one the frame arrived on,
because it's trying to find the destination host that A is trying to communicate with.
Let's suppose A is trying to communicate with host C, so the destination MAC address within the frame
is 3333. When the switch floods that frame, host C receives it, along with hosts B and D. But because B
and D were not the intended recipients, they drop the frame. However, host C's MAC address matches
the destination MAC address in the frame, so it accepts it and processes it. Then host C sends a reply
back to the sending host, or host A. The reply frame it sends has a source MAC address of 3333. At this
point, the switch learns that 3333 is connected to port E2.
The switch's CAM table will only be complete after every device on the network has had the chance to
send a frame. Only then will it have learned all the MAC addresses on the network and the ports used
to reach them. In this case, we'll assume that the switch has learned that 2222 can be reached on E1 and
4444 can be reached on E3. The CAM table is now complete. The learning and flooding processes are
no longer required until a new device is connected to the switch.
Forwarding
Once the switch has learned where each host is connected, it can intelligently forward data where it
needs to go. For example, suppose A needs to send data to B. The destination MAC address of the
frame will be 2222. Instead of flooding the frame, the switch simply performs a lookup in the CAM
table and sees that 2222 is connected to port E1. The switch then intelligently forwards the frame to
that port only; it's not sent to E2 or E3. Learning builds the CAM table, and then forwarding takes
advantage of the CAM table by sending data only to the port where the host with the destination MAC
address is connected. Genius, right?
Let's look at one more example. Suppose C needs to send data to D. The frame comes into port E2 on
the switch. The switch looks up D's MAC address in the CAM table and sees that 4444 is connected to
E3. Then it intelligently forwards that frame to E3, but not to E0 or E1.
Filtering

Switches can also perform filtering. Let's replace host C with a hub. We'll connect hosts E, F, and G to
that hub. Hubs are layer 1 devices, so they can't perform switching operations, such as learning and
forwarding. When they receive an electrical signal, they simply repeat it to all hub ports. In this
scenario, any frames sent by E, F, or G will also be sent by the hub to the switch. When stations E, F, or
G first send a frame, the switch will add those MAC addresses to its CAM table. However, all of the
MAC addresses of the hosts connected to the hub will be associated with the same switch port.
Now, suppose host E needs to send a frame to host G. In this case, the frame coming from E would
have a destination MAC address of 6666. Because this is a hub, that frame gets copied to the ports
where F and G are connected, so G will receive the frame successfully. F will simply drop the frame
because it isn't addressed to its MAC address. However, because this is a hub, the frame is also sent to
the switch. The switch recognizes that the source MAC address and the destination MAC address of the
frame are associated with the same switch port (in this case, E2). Therefore, the switch assumes that the
frame has already been received by the recipient and it doesn't need to forward it to any port. So, it
drops the frame. This is called filtering. On a switch, forwarding only occurs if the source and
destination MAC addresses are associated with different ports on the switch. If they're associated with
the same switch port, the switch filters the frame and discards it.
Summary

So that's it for this lesson. We discussed how switches operate and how they learn host MAC addresses
from the network and then use that information to intelligently forward frames to the correct switch
ports. The switch may even filter frames if the source switch port and the destination switch port are
the same.

6.3.2 Switch Forwarding Facts

Bridges and switches build forwarding databases. A forwarding database is a list of Layer 2 MAC
addresses, with the port used to reach each device. Bridges and switches automatically learn about
devices to build the forwarding database, but a network administrator can also program the device
database manually. When a frame arrives on a switch port (also called an interface), the switch
examines the source and destination address in the frame header and uses the information to complete
the following tasks:
Step 1: The switch examines the source MAC address of the frame and notes which switch port the
frame arrived on. If the source MAC address is:

 Not in the switch's Content Addressable Memory (CAM) table, a new entry is added to the table
that maps the source MAC address to the port on which the frame was received. Over time, the
switch builds a map of the devices that are connected to specific switch ports.
 Already mapped to the port on which the frame was received, no changes are made to the
switch's CAM table.
 Already in the switch's CAM table, but the frame was received on a different switch port, the
switch updates the record in the CAM table with the new port.

Step 2: The switch examines the destination MAC address of the frame. If the destination MAC
address of the frame is:

 A broadcast address, the switch sends a copy of the frame to all connected devices on all ports.
This is called flooding the frame.
 A unicast address, but no mapping exists in the CAM table for the destination address, the
switch floods the frame to all ports. The connected device that the frame is addressed to will
accept and process the frame. All other devices will drop the frame.
 A unicast address, and a mapping exists in the CAM table for the destination address, the switch
sends the frame to the switch port specified in the CAM table. This is called forwarding the
frame.
 A unicast address, and a mapping exists in the CAM table for the destination address, but the
destination device is connected to the same port from which the frame was received, the switch
ignores the frame and does not forward it. This is called filtering the frame.
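As mentioned above, an administrator can also program the forwarding database by hand, and can
inspect or clear it. The following is a minimal sketch; the MAC address, VLAN, and interface are
placeholder values, and on some older IOS versions the command is spelled mac-address-table rather
than mac address-table.

Switch#configure terminal
! Pin a specific MAC address to a port (placeholder values)
Switch(config)#mac address-table static 0000.0000.2222 vlan 1 interface FastEthernet 0/1
Switch(config)#end
! Review learned and static entries, or flush the dynamically learned ones
Switch#show mac address-table
Switch#clear mac address-table dynamic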

6.3.3 Switch Configuration Overview

In this lesson, we'll discuss the basics of configuring a Cisco Layer 2 switch.
By default, all the interfaces on a Cisco switch are active. Therefore, you can pull a new Cisco switch
out of the box, plug it in, and connect workstations to it. It will immediately start learning the MAC
addresses of the connected workstations without any configuration changes. It will automatically
populate the CAM table, and then start forwarding frames intelligently (or filtering frames if it doesn't
have to forward them). In this respect, Cisco switches are plug-and-play. However, this doesn't mean
that you don't have to configure them. While there are many configuration settings that you can make
on a switch, the most important settings you should initially configure include: passwords to prevent
unauthorized access, IP addresses to provide remote access, and a default gateway address.
Passwords

Probably the most important parameters to configure on a switch are its passwords. There are no
passwords set by default on Cisco equipment. Leaving the switch in this state is not recommended. A
malicious user with the right knowledge could access the switch configuration and compromise its
integrity. For example, they could configure the port that their workstation is connected to as a mirror
port, allowing them to capture a copy of all the traffic being transmitted on the network. There are
several different passwords that you can use to restrict access to a Cisco switch. At a minimum, you
should configure console, VTY, and enable mode passwords. These protect access to user mode and
enable (privileged) mode on the device.
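As a hedged example, a minimal sketch of setting these three passwords might look like the following
(the password strings are placeholders):

Switch>enable
Switch#configure terminal
! Password for enable (privileged) mode
Switch(config)#enable secret MyEnablePass
! Password for the console line
Switch(config)#line console 0
Switch(config-line)#password MyConsolePass
Switch(config-line)#login
! Password for the VTY (Telnet/SSH) lines
Switch(config-line)#line vty 0 15
Switch(config-line)#password MyVtyPass
Switch(config-line)#login
Switch(config-line)#end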
IP Addresses

Assigning an IP address on a Layer 2 switch allows you to remotely access and configure the switch
over the network using protocols such as Telnet or Secure Shell (SSH). A Layer 2 switch requires only
a single IP address to be assigned for remote management.
Default Gateway Address

You should also configure the switch with the IP address of the default gateway. This allows you to
access the switch and configure it even if your management workstation is on a different network. In
this example, the switch is connected to a router, and that router is connected to another switch, which
is where your management workstation resides. Each interface in the router is connected to a different
network. The workstation is on the network on the left side of the router. The switch to be managed is
on the right side of the router on a different network.
Because your workstation is not on the same network as the switch, you cannot access it directly by its
IP address unless you have the correct default gateway addresses configured on both sides. The default
gateway is the device that you have to pass data to go beyond your own network. If this is my
workstation on the left, and I'm trying to get data to the network on the right, the default gateway
address on my workstation must point to the local interface of the router connected to my network. The
switch also must have a default gateway address configured that points to its local router interface.
Once these are configured, then I can use Telnet, SSH, or web protocols to remotely access the switch
and manage it.
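Putting the last two settings together, a minimal sketch of assigning a management IP address and a
default gateway on a Layer 2 switch might look like this (the addresses below are placeholders):

Switch#configure terminal
! Management IP address on the VLAN 1 interface
Switch(config)#interface vlan 1
Switch(config-if)#ip address 192.168.1.200 255.255.255.0
Switch(config-if)#no shutdown
Switch(config-if)#exit
! Default gateway pointing to the local router interface
Switch(config)#ip default-gateway 192.168.1.254
Switch(config)#end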
Summary

That's it for this lesson. To review, we discussed some of the basic settings you can configure when
deploying a Cisco Layer 2 switch, including password definitions, IP addresses, and the default
gateway address.

6.3.4 Configuring Switch Interfaces

In this demo, we're going to discuss basic switch interface configuration. I'm currently connected to the
console port of SwitchA. If I hit Enter here, I'll see the banner that we've configured in a different
demo, as well as its prompting for a user mode password, which I will type in.
Enter Enable/Privileged Mode and Identify the Version

To configure anything on a switch, you have to first enter enable or privileged mode, so I will enter
that, and again, be prompted for that password. Before I enter configure mode, I'm going to type a
couple show commands to take a look at what this switch currently is and what interfaces are involved.
If I type 'show version', I can verify that this is a Cisco 2960 model switch, I can see that it does have
24 FastEthernet and 2 Gigabit Ethernet interfaces.
View a Brief Summary of the Interfaces

I can also type 'show IP interface brief' to get a summarized view into all those interfaces and their
current status, and what you'll notice for all of them--except for the virtual interface on top, VLAN 1--
is that they are in a down, down state. If I spacebar to get to the rest of the output, the same is true for
all of them, including the gig interfaces on the bottom. That's simply because I don't have anything
plugged into this switch currently. By default, all interfaces on Cisco layer 2 switches attempt to come
up on their own, meaning that out of the box, if you power the device up and plug workstations into it, then
those ports will automatically become live. You don't have to manually activate them. Again, down,
down here is not a bad thing--it simply means that I have no connected devices currently.
Configure Mode and Interface Configuration Commands

Let's take a look at some basic layer 2 interface configurations. I'm going to go into configure mode,
and I'm going to pick a couple interfaces here. Let's take a look at interface FastEthernet zero one
(f0/1). Let's say this connects to workstation 1. If I type a question mark here, I can see a list of the
commands that are available in interface mode. There are quite a few, but the two I'm going to focus on
here are duplex, and on the next page you'll see a speed option. By default, all interfaces on Cisco layer
2 switches are set to auto configure speed and duplex.
Manually Configure the Speed and Duplex

A best practice recommendation is to configure interfaces with the specific speed and duplex of the
devices that will be connected to them, so I'm going to do that here. Instead of having the Ethernet
interface autosense and try to negotiate if there's a ten or a one hundred megabit per second device
connected, I will set the speed manually to 100, and I will also set the duplex to full.
Configuring Multiple Ports at the Same Time

This is a 24-port FastEthernet switch, so what if you wanted to do this on all 24 ports of the switch? It
would be time-consuming to repeat these two commands on 24 individual ports, so what you can do is
use the interface range command. With the range command, I can select a contiguous range, multiple
contiguous ranges, or even specific ports, but all on one command line. For example, if I want to
configure ports 2-6 and 10-18 in a similar fashion, I can use this command to identify those two ranges,
and then execute the same commands, and it will affect all of them.
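The commands narrated in this demo would look roughly like the following sketch (the interface
numbers are the ones used in this demo, and the exact ranges are illustrative):

SwitchA#configure terminal
! Single port connected to Workstation 1
SwitchA(config)#interface FastEthernet 0/1
SwitchA(config-if)#speed 100
SwitchA(config-if)#duplex full
SwitchA(config-if)#exit
! Two contiguous ranges configured with the same commands at once
SwitchA(config)#interface range FastEthernet 0/2 - 6 , FastEthernet 0/10 - 18
SwitchA(config-if-range)#speed 100
SwitchA(config-if-range)#duplex full
SwitchA(config-if-range)#end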
Verify the Configuration

To verify this, I can use the 'show run' command, which normally, if you'll recall, is a command you
would execute in privileged mode; in fact, to demonstrate this error, I can type 'show run' from this
mode, and you will see that it will error out, because show commands are not recognized from within
configuration mode. However, several years ago, Cisco did introduce the 'do' command, so if I
precede the command with the word "do", then type 'show run', it will work. I can verify that the speed
and duplex settings are now active for all those ports involved in the ranges I specified. When the
command completes, I'm still in interface mode.
Administratively Shut Down Switch Ports

I mentioned earlier that Cisco layer 2 switch ports are not disabled by default, meaning that if an active
device is connected to a port, that port will be in an up, up state. If
there are certain switch ports on the device that you know will have nothing connected to them, you
can manually shut them down. For example, type 'exit' here, and go to 'interface', for example, 'f0/20',
and I know that port is currently not connected to anything, so I don't want someone just plugging in
and getting a connection. I could actually force that interface into a shutdown state. If I type the word
'shutdown' and exit config mode, you will see the message comes up that the port has been
administratively shut down.
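A minimal sketch of the shutdown step just described:

SwitchA#configure terminal
SwitchA(config)#interface FastEthernet 0/20
! Administratively disable the unused port
SwitchA(config-if)#shutdown
SwitchA(config-if)#end
! The port can be re-enabled later with the no shutdown interface command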
Verify the Port is Shut Down

I'm going to repeat the same "show ip interface brief" command. You'll note that while all the other
ports that I did config on are still in the down, down state because nothing has plugged in to them yet,
that port 0/20 is now administratively shut down. If I even plug something into port 20, the port would
not allow the connection.
Plug a Device into a Port

To finish this demonstration--I just plugged a device into the FastEthernet 0/24 port on this switch, and
you can see that it automatically changed the state of that port to up, and if I repeat the "show ip
interface brief" command again, we'll go all the way down to the bottom. You'll see that port 24 is now
up, up. I did not have to manually activate that port. It detected that there was a device plugged in to it,
that it could communicate, then the port automatically came up and will allow traffic. If at this point I
plugged that same device into port 20 on this switch, because I manually shut that port down, it would
remain administratively down.
Save the Changes

Once again, after you've configured anything on a Cisco device that you want to make permanent, you
have to remember to save your changes. I will type 'copy run start'. This is, again, short for copying the
running config to the startup config. Confirm it, and my changes are permanent. This demo has
discussed basic layer 2 switch interface configuration.
6.3.5 Switch Configuration Mode Facts
The following image illustrates some of the configuration modes available on a Cisco switch:

The following table describes some of these configuration modes:


Interface Configuration mode (CLI prompt: Switch(config-if)#)

The switch has multiple interface modes, depending on the physical (or logical) interface type. For this
course, you should be familiar with the following switch interface modes:

 Ethernet (10 Mbps Ethernet)
 FastEthernet (100 Mbps Ethernet)
 GigabitEthernet (1 Gbps Ethernet)
 VLAN

The VLAN interface configuration mode is used to configure the switch IP address and for other
management functions. It is a logical management interface configuration mode, rather than the
physical interface configuration modes used for the FastEthernet and GigabitEthernet ports.

Config-vlan mode (CLI prompt: Switch(config-vlan)#)

Details of the config-vlan mode include the following:

 It can be used to perform all VLAN configuration tasks.
 Changes take place immediately.

Do not confuse the config-vlan mode with the VLAN interface configuration mode.

VLAN Configuration mode (CLI prompt: Switch(vlan)#)

Details of the VLAN configuration mode include the following:

 It allows you to configure a subset of VLAN features.
 Changes do not take effect until you save them, either before or while exiting the configuration
mode.
 Changes are not stored in the regular switch configuration file.

For most modern Cisco switches, it is recommended that you configure VLAN parameters from
config-vlan mode, as VLAN configuration mode is being deprecated (phased out).

Line Configuration mode (CLI prompt: Switch(config-line)#)

Use this mode to configure parameters for the terminal line, such as the console, Telnet, and SSH lines.
6.3.6 Switch Configuration Command List

The following are common switch configuration commands and the action each performs:

Moves to interface configuration mode:
  switch(config)#interface FastEthernet 0/14
  switch(config)#interface GigabitEthernet 0/1

Moves to configuration mode for a range of interfaces:
  switch(config)#interface range fastethernet 0/14 - 24
  switch(config)#interface range gigabitethernet 0/1 - 4
  switch(config)#interface range fa 0/1 - 4 , 7 - 10
  switch(config)#interface range fa 0/8 - 9 , gi 0/1 - 2

Sets the port speed on the interface:
  switch(config-if)#speed 10
  switch(config-if)#speed 100
  switch(config-if)#speed 1000
  switch(config-if)#speed auto

Sets the duplex mode on the interface:
  switch(config-if)#duplex half
  switch(config-if)#duplex full
  switch(config-if)#duplex auto

Enables or disables the interface:
  switch(config-if)#no shutdown
  switch(config-if)#shutdown

Shows the interface status of all ports:
  switch#show interface status

Shows the line and protocol status of all ports:
  switch#show ip interface brief
The following are some facts about switch configuration:
 All switch ports are enabled (no shutdown) by default.
 Port numbering on some switches begins at 1, not 0. For example, FastEthernet 0/1 is the first
FastEthernet port on a switch.
 Through auto-negotiation, the 10/100/1000 ports configure themselves to operate at the speed of
attached devices. If the attached ports do not support auto-negotiation, you can explicitly set the
speed and duplex parameters.
 Some switches always use the store-and-forward switching method. On other models, you may
be able to configure the switching method.
 If the speed and duplex settings are set to auto, the switch will use auto-MDIX to sense the
cable type (crossover or straight-through) connected to the port and will automatically adapt
itself to the cable type used. When you manually configure the speed or duplex setting, it
disables auto-MDIX, so you need to be sure you use the correct cable.
 By default, the link speed and duplex configurations for Ethernet interfaces in Cisco devices are
set using IEEE 802.3u auto-negotiation. The interface negotiates with remote devices to
determine the correct settings. However, auto-negotiation can be disabled on the Cisco device
and other Ethernet network hosts, and static values can be manually assigned. In that case, a device with
auto-negotiation enabled will still try to negotiate link speed and duplex but will get no response.
When auto-negotiation fails, Cisco devices that have auto-negotiation enabled default to the
following:
 The interface will attempt to sense the link speed, if possible. If it cannot, the slowest
link speed supported on the interface is used (usually 10 Mbps).
 If the link speed selected is 10 Mbps or 100 Mbps, half-duplex is used. If it is 1000
Mbps, full-duplex is used.

6.4 Virtual LANs

As you study this section, answer the following questions:


 What are two advantages to creating VLANs on your network?
 You have two VLANs configured on a single switch. How many broadcast domains are there?
How many collision domains are there?
 What happens if two devices on the same switch are assigned to different VLANs?
After finishing this section, you should be able to complete the following task:
 Create VLANs and assign switch ports to a VLAN.

6.4.1 VLAN Overview

In this lesson we're going to talk about virtual LANs, which we also call VLANs. VLANs play a very
important role in the network because they allow us to take a regular layer-2 switch and actually
segment it into multiple broadcast domains.
Function of VLAN

Let's take a look at how this works. By default, every port on a layer-2 switch belongs to the same
broadcast domain. For example, let's suppose we have a host that's connected to this switch port, and it
sends a frame that's addressed to the broadcast MAC address, and it sends that frame over here to the
switch. The destination MAC address in this frame is set to all ones in binary. Therefore, that switch is
going to replicate this frame to all of the active ports that it has. No matter how many hosts are
connected to the switch, a broadcast frame received on an incoming port will be replicated to all the other
ports on the switch. Because of this, we can consider all ports connected to a layer-2 switch to be
members of the same broadcast domain.
The use of a VLAN allows you to actually take that switch and segment it into multiple broadcast
domains. This is done by assigning switch ports on the switch to be members of different virtual LANs.
By default, every port on the switch belongs to VLAN 1, this is the default VLAN. Each VLAN exists
within its own broadcast domain. Therefore, because all these switch ports belong to VLAN 1 by
default, all ports on the switch are within the same broadcast domain by default.
VLAN Isolates Traffic

You can define additional VLANs on the switch in order to isolate traffic and to create these multiple
broadcast domains. For example, suppose that these two users work in the sales department, and I want
to isolate their traffic away from these other two users who work in engineering. We don't want
broadcast frames propagating from sales over into engineering. In fact, we don't want the sales users to
even be able to see the computers over in engineering, so to do this, we can assign the two sales ports
on the switch to a different VLAN. In this case, let's assign all sales users to VLAN 2.
Because our engineering computers contain proprietary information that is actually worth a lot of
money we'll want to separate them as well. We're going to assign these two ports to a different VLAN,
in this case VLAN 10. As you can see, you can actually create many different VLANs on the same
switch just based upon whatever your infrastructure requires. In this configuration, any broadcast frame
received from a workstation in sales will enter into the switch on VLAN 2 and will get replicated out to
every other port of the switch that also belongs to VLAN 2. In this example, there's only one other port
on the switch that's a member of VLAN 2, so the broadcast frame is copied only to that second port.
The frame is not propagated to the ports in the engineering VLAN, so they never make it to any of the
engineering computers. In fact, the computers in the sales VLAN can't even see the computers in
the engineering VLAN by default, because they're in a completely different virtual network. This is because
each broadcast domain created by the VLAN is actually a different network.
As a result, unicast frames as well as any broadcast frames are going to be restricted to each VLAN.
Really, it's analogous to using two completely separate unconnected switches in your network, one for
your sales computers, and another one for your engineering computers. In the scenario we're working
with here, there is no layer-3 routing device in the network. Therefore, there is actually no way for a sales
host in VLAN 2 to communicate with an engineering host in VLAN 10, because each VLAN is its own
network. We've assigned each VLAN a different IP subnet, so in this example, sales is on the
172.16.1.0 subnet, while engineering is on the 172.16.2.0 subnet.
Communication Options for Different Subnets

Whenever you need to pass data between different subnets, you have to use a layer-3 device such as a
router or a layer-3 switch. In this scenario, we only have a layer-2 switch with no routing capabilities,
so there is no way for a sales host to communicate with an engineering host.
For example, if a user in sales tries to send a packet to an IP address on the engineering subnet, it
won't work. There has to be a router or a layer-3 switch in the network to route the data between these
two subnets at layer-3. One option would be to connect a router to an open port on the switch and
configure it to route data between these two subnets. In this case, if a sales user tries to send data to a
user in engineering, the packet from sales would enter into the switch and then be sent to the router.
The router would then route the traffic to the engineering subnet, pass the data back to the switch then
the switch would send the data on to the engineering node on the other VLAN. Instead of implementing
a separate router, another option would be to use a layer-3 switch. A Layer-3 switch provides both
switching and routing functions at the same time within the same device. If this were a layer-3 switch
then the routing would occur internally within the switch and we wouldn't have to plug in an external
routing device.
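Inter-VLAN routing is covered in a later demo, but as a rough, hedged sketch, routing between the two
subnets above on a Layer 3 switch could look something like this (the VLAN numbers match this
example; the interface addresses are assumptions):

Switch#configure terminal
! Enable routing on the Layer 3 switch
Switch(config)#ip routing
! Switched virtual interface (SVI) for the sales VLAN
Switch(config)#interface vlan 2
Switch(config-if)#ip address 172.16.1.1 255.255.255.0
Switch(config-if)#exit
! Switched virtual interface (SVI) for the engineering VLAN
Switch(config)#interface vlan 10
Switch(config-if)#ip address 172.16.2.1 255.255.255.0
Switch(config-if)#end

In such a setup, each host would use the SVI address on its own VLAN as its default gateway.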
Summary

That's it for this lesson. In this lesson we talked about VLANs. Remember, VLANs define broadcast
boundaries so that broadcast frames aren't propagated to every single port on the switch. VLANs can
also be used to increase security because they isolate traffic between two different virtual networks.
6.4.2 VLAN Facts

A virtual LAN (VLAN) uses switch ports to define a broadcast domain. When you define a VLAN, you
assign devices on different switch ports to a separate logical (or virtual) LAN. Although a switch can
support multiple VLANs, each switch port can be assigned to only one VLAN at a time. The following
graphic shows a single-switch VLAN configuration:

In the single-switch VLAN configuration above, the following is true:


 FastEthernet ports 0/1 and 0/2 are members of VLAN 1.
 FastEthernet ports 0/3 and 0/4 are members of VLAN 2.
 Workstations in VLAN 1 will not be able to communicate with workstations in VLAN 2, even
though they are connected to the same physical switch. Communications between VLANs
requires a router, just as with physical LANs.
 Defining VLANs creates multiple broadcast domains. The above example has two broadcast
domains defined, each of which corresponds to one of the VLANs.
 On Cisco switches, all ports are members of VLAN 1 by default.
Switches use VLAN IDs to identify VLAN traffic. VLAN IDs:
 Are appended to the header of each frame.
 Allow switches to identify which VLAN the frame belongs to.
 Are used for inter-switch traffic.
VLAN IDs are only understood by switches. VLAN IDs are added and removed by
switches, not the clients.

Creating VLANs with switches offers many administrative benefits. You can:
 Create virtual LANs based on criteria other than physical location (such as workgroup, protocol,
or service).
 Simplify device moves (devices are moved to new VLANs by modifying the port assignment).
 Control broadcast traffic and create collision domains based on logical criteria.
 Control security (isolate traffic within a VLAN).
 Load-balance network traffic (divide traffic logically rather than physically).
VLANs are commonly used with Voice over IP (VoIP) to separate voice traffic from data
traffic. Traffic on the voice VLAN can be given a higher priority to ensure timely delivery.
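As a hedged example, a single access port carrying both data and voice traffic on a Cisco switch might
be configured like this (the VLAN numbers are placeholders, and this assumes an IP phone with an
attached PC on the port):

Switch(config)#interface FastEthernet 0/5
! Data traffic from the attached PC
Switch(config-if)#switchport access vlan 10
! Tagged voice traffic from the attached IP phone
Switch(config-if)#switchport voice vlan 20
Switch(config-if)#end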

6.4.3 Configuring VLANs


In this demo, we're going to discuss basic VLAN configuration. To begin, I'd like to take a look at the
network diagram that we're going to be working with in the demo. In this case, we have a Layer 2
switch in the middle. We'll call that SwitchA. We have a router connected to that switch. I have given
you the ports that the connections are using. The switch also has three workstations connected to it:
workstations 1, 2, and 3. The IP addressing that's been assigned to all these devices is referenced here.
You'll see that Workstation 1 has a last octet of .1, Workstation 2 has a .2, and Workstation 3
has a .3.
The switch has a last octet of 200. The BranchA router is going to have a last octet of 254. You're going
to see these come up again as I demonstrate some basic connectivity verification, both before and after
creating some VLANs. I've gotten to a command prompt on Workstation 1 right now.
ipconfig

I could confirm that by typing 'ipconfig'.


I can see that my IP address is, in fact, 1.1.
Verify Workstation Connectivity Using ping



I'm just going to ping around the network and verify that I do have connectivity to other devices. I'll
start with Workstation 2. You see that I am getting replies from Workstation 2, so that's a good thing.
Also try Workstation 3. Looks good. The switch--which, remember, is 200--also works. Then lastly, the
router, which I have assigned at 254. It looks like I have good connectivity all around. This was
expected because all the workstations are currently connected to the switch in a common virtual LAN.
View the Switch's Configuration

Let's take a look at the switch's config next. Now I'm connected to SwitchA. We're going to take a look
at how it's currently configured. Log in, of course. The first command is going to be 'show running-
config', or simply 'show run' for short. I'm going to spacebar down to see the next page to look at the
interface configs. I can see they're all set to speed 100 and also duplex full, on all 24 Fast Ethernet
ports. I can also see the IP address that was assigned to the switch here, under interface VLAN 1, and
the fact that the switch does have an IP default gateway that points to the router.
Verify the Default VLAN the Switch Ports Belong To

The next command is 'show vlan'. With this command, you're going to see that all of the ports on this
switch belong to the built-in default VLAN, VLAN 1, which is the only usable VLAN that comes with a
new switch. You can ignore these down here. These
are administrative VLANs, and you cannot assign interfaces to them. All the interfaces belong to
VLAN 1. What that means is that they're all in the same broadcast domain, and also the same network
or subnet from an IP addressing perspective. It's also why the three workstations that we saw earlier in
the network diagram can all successfully ping each other.
They're all part of the same broadcast domain currently.
Reasons to Create Additional VLANs



Some reasons why you might want to create additional VLANs would be to segment different
departments up so that their broadcast and multicast frames don't actually propagate into other
departments. That's a performance consideration. You might also want to create different VLANs for
security concerns, to force departments to go through a routing device to reach each other.
We'll actually see examples of that in subsequent demos.
Creating New VLANs



For now, let's create a couple new VLANs. To do so, I go to global config mode, using config t, for
terminal. Then use the 'vlan' command to create the new VLANs. The numbers are arbitrary. You
can use whatever you want for them. 'vlan 10' I'm going to use for the sales department ports. Then, for
good documentation's sake, you can also give them names. You'll just see those names show up in the
show outputs. I'll create a second VLAN, number 11, and I'll name that one 'Mktg'. Exit global config
using the 'end' command, and then repeat the 'show vlan' command. You'll see both 10 and 11 do show
up there now as active VLANs. You can see the names that we assigned to them. What you're also
noticing is that there are no interfaces assigned to those VLANs.
All the Ethernet ports on this switch still remain in VLAN 1.
Assign Interfaces to VLANs



Just creating new VLANs does not assign any ports to them. That's the next step. Global config again.
Let's say that I want that Workstation 1 we were pinging from earlier to be a sales workstation. To
assign that workstation to VLAN 10, I have to use the 'interface' command first. I'm going to select just
that single workstation port. I'm going to use the 'switchport access vlan', and then the number you
want to assign to it--in this case, 10. I'm going to assign the other two workstations to VLAN 11
(marketing). I can do both of them together using the 'interface range' command. I'll specify 'f0/2 - 3'.
Once again, 'switchport access vlan 11', 'end'. One more time with 'show vlan'. You can see what
happened.
I've now taken VLAN 10 and assigned it to Port 1. Ports 2 and 3 are now in VLAN 11. What you'll also
notice is that assigning a VLAN to a port this way removes it from the default VLAN, 1. F0/1 only
shows up down here, no longer in VLAN 1. You can only belong to a single VLAN at a time on what's
called an access port.
We'll talk about trunk ports in a later demo.
Verify Switch Ports Have Been Segmented



Finally, how does that affect traffic between our devices? For that, let's go ahead and go back to the
command prompt of Workstation 1. We're back at Workstation 1. I'm simply going to repeat some of
those ping commands. Now that Workstation 1 is in the sales VLAN, which is different than the other
two workstations we should expect some different results here. Here's a ping attempt to Workstation 2.
Now you can see it does not work. Getting 'timed out' messages for Workstation 3. In fact, if I try to
ping the switch itself now, which was 200, that also fails, because the switch's IP address was also
assigned to VLAN 1.
This is normal behavior.
Summary
We have successfully now segmented the ports on this switch into different broadcast domains or
VLANs. What's important to remember, though, is to enable communications between those different
VLANs, you have to have a Layer 3 device in place, which would be a router or a Layer 3 capable
switch. We'll talk about inter-VLAN routing in a subsequent demo. This demo has discussed basic
VLAN concepts and configuration on a Layer 2 switch.
6.4.4 VLAN Command List

To configure a simple VLAN, first create the VLAN, then assign ports to that VLAN. The following
are common VLAN configuration commands and the action each performs:

Defines a VLAN:
  switch(config)#vlan [1-4094]

Gives the VLAN a name:
  switch(config-vlan)#name [unique_name]
  Naming the VLAN is optional. VLAN names must be unique.

Deletes a VLAN:
  switch(config)#no vlan [1-4094]
  When you delete a VLAN, all ports assigned to the deleted VLAN remain associated with it and are
  therefore inactive. After a VLAN is deleted, you must reassign its ports to an appropriate VLAN.

Assigns ports to the VLAN:
  switch(config-if)#switchport access vlan [1-4094]
  If you assign a port to a VLAN that does not exist, the VLAN will be created automatically.

Shows a list of VLANs on the system:
  switch#show vlan
  switch#show vlan brief

Shows information for a specific VLAN:
  switch#show vlan id [1-4094]
The following commands create VLAN 12, name it IS_VLAN, identify port 0/12 as having only
workstations attached to it, and assign the port to VLAN 12:
switch#config t
switch(config)#vlan 12
switch(config-vlan)#name IS_VLAN
switch(config-vlan)#interface fast 0/12
switch(config-if)#switchport access vlan 12
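You could then confirm the new VLAN and port assignment with the show commands listed above. The
output below is abbreviated and only illustrative:

switch#show vlan brief

VLAN Name                             Status    Ports
---- -------------------------------- --------- ------------------------------
1    default                          active    Fa0/1, Fa0/2, ...
12   IS_VLAN                          active    Fa0/12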

6.5 Trunking
As you study this section, answer the following questions:
 What is trunking?
 Why is trunking important to VLAN configuration?
 What protocol does a Cisco switch use to automatically detect trunk ports?
 By default, traffic from which VLANs are allowed on trunk ports?
 What is the default configuration of most Cisco switches?
After finishing this section, you should be able to complete the following tasks:
 Configure trunking.
 Configure the native VLAN.
 Configure allowed VLANs.

6.5.1 Access and Trunk Ports

In this lesson we are going to discuss the difference between access ports and trunk ports on a switch.
These are very important concepts to understand because they control how your VLANS are going to
behave. Understand that all ports on a switch are a member of VLAN 1 by default. Therefore, every
device that you connect to a layer 2 switch will be in the same broadcast domain by default. Any
broadcast frames sent to the switch will be replicated out to every other port on that switch.
Access Ports

Every port on the switch is defined as an access port by default. Here is an important thing that you've
got to understand, an access port on a switch can only be a member of one single VLAN at any given
time.
Access ports are probably the most common type of port on a switch. They're usually used to connect
end points to the switch such as workstations, servers, printers, and so on. That's because a workstation
or a server or a printer usually only needs to be a member of a single VLAN; hence, it only needs to
belong to one single broadcast domain.
However, some switch ports can be configured as a trunk port.
Trunk Ports

Access ports are assigned to a single VLAN. A trunk port, on the other hand, can be assigned to
multiple VLANS at the same time. In fact, when you create a trunk port it will be automatically made a
member of all VLANS configured on that switch by default. Trunk ports are used in situations where
there are multiple switches in the network environment. You want ports on multiple switches to all be
members of the same VLAN. In essence, we are extending a VLAN between multiple switches.
For example, suppose I connect another switch to this switch and there are workstations connected to
both switches and we've assigned ports on these switches to different VLANS. For example, our sales
workstations are connected to VLAN 2 on both of these different switches. This is totally allowed. You
can have the same VLAN defined on two or more different switches. All of these ports together on both
switches comprise the sale subnet on VLAN 2. This switch also has a second VLAN defined, VLAN 3.
The workstations at marketing on both switches are connected to the VLAN 3 ports.
By defining these two VLANS we've created two broadcast domains. An incoming broadcast frame on
this particular switch port, for example, will be replicated to every other node in the same broadcast
domain, which is VLAN 2. Therefore, the broadcast frame will be forwarded to this interface on this
switch because it's also a member of VLAN 2. However, we also need to get that broadcast frame over
to this port and this port on the second switch, because they're also members of VLAN 2. In order to do
this, this frame must traverse this connection between the two switches. Therefore, this uplink
connection between the switches must be a member of VLAN 2. An option for making this possible
would be to simply make the two ports on these two switches members of VLAN 2. However, notice
that these two ports on the second switch are assigned to VLAN 3. These ports on the first switch
which are also assigned to VLAN 3 need to be able to communicate with these ports over here on the
second switch.
Therefore, the uplink connection between these switches must be assigned to both VLAN 2 and VLAN
3 at the same time. In order to do this you need to configure these two ports, one on each switch, as
trunk ports. An access port, remember, can only be a member of a single VLAN at any given time.
However, a trunk port can be a member of multiple VLANS at the same time. If we were to configure
these two ports as trunk ports, they will become a member of all VLANS configured on the switches.
In this case, this trunk is going to be automatically assigned to both VLAN 2 and VLAN 3 at the same
time. Therefore, traffic for both VLANS will be able to move between the switches over this
connection.
The same is true for traffic on any other VLANs you might define on these switches.
VLAN Tagging

When frames are sent across this trunk connection, the switch on the other end needs to know which
VLAN they belong to. When traffic on VLAN 2 from the first switch traverses this trunk connection,
the data needs to be marked in some way so that when it reaches the switch on the other end of the
trunk, that switch knows which VLAN that frame needs to be placed on. This is all done using VLAN
tagging. The first switch will insert the appropriate VLAN number into the frame before it sends it out
on the trunk link.
Let's take a look at a simplified Ethernet frame. This Ethernet frame is going to be assigned to VLAN 2
so it can traverse this trunk link. We have the frame's destination and source MAC address fields here.
Then using VLAN tagging this first switch is going to insert a new field called the VLAN tag after the
MAC address fields in the frame to identify which VLAN this frame belongs to. This VLAN tag
contains the VLAN number associated with the workstation that originally sent the frame. For example,
let's suppose this workstation over here that's connected to this port on VLAN 2 sends a broadcast
frame. This frame will be forwarded to this port on the switch, because that port is also a member of
VLAN 2. However, before this frame traverses the trunk link between switches a VLAN tag containing
the number 2 will be inserted inside the frame, then the frame will be placed on the trunk link. When
the frame reaches the other end of the trunk the receiving switch will process the tag and it's going to
see that that frame is associated with VLAN 2. The second switch will remove the VLAN tag from the
frame and then replicate it out only to those local ports that are assigned to VLAN 2. This VLAN
tagging method is defined in the 802.1Q standard, which is a very widely accepted VLAN tagging
method.
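For reference, the 802.1Q tag itself is 4 bytes long and sits right after the source MAC address field. A
simplified sketch of a tagged frame:

| Dest MAC | Src MAC | 802.1Q tag | Type/Length | Data | FCS |

802.1Q tag (4 bytes): TPID = 0x8100 (16 bits), priority (PCP, 3 bits), DEI (1 bit), and the VLAN ID
(12 bits, values 1 through 4094).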
Before we go any farther, we need to talk about some ways that you can automate the VLAN trunking
configuration process.
VTP

Typically, this is done using the VLAN trunking protocol or VTP. VTP simplifies the VLAN
configuration process on a multi switch network. It does this by propagating configuration changes
between the switches. For VTP to work the switches obviously have to be connected by some type of
trunk link. VTP cannot be used if you're dealing with an access port. With VTP the switches are
configured using one of three different configuration modes: server, client, or transparent mode.
A switch that's configured into server mode would be used to modify the VLAN configuration and then
advertise that information out to the other switches in the network so they know what's going on.
A switch can also be configured in client mode, in which case it simply receives changes from a VTP
server switch. It will also pass that information on to any other switches that might be connected to it.
The important thing to remember here is the fact that changes cannot be made to the VLAN
configuration on a client switch. You make all the changes on a server switch.
A switch can also be configured into transparent mode. A switch in transparent mode allows you to
make local configuration of the VLAN information on just that switch, but it does not send out its
configuration information to other switches. Likewise, a switch that's in transparent mode will not
accept information from a VTP server switch. The one thing it will do is pass the VTP information it
receives on to other switches in the network. Basically, a switch in transparent mode
uses its own configuration. It ignores any information it might receive from a server switch, but it will
go ahead and pass that information on. By default most managed switches are preconfigured to operate
in server mode. If you don't intend to use VTP, then you should configure your switches to operate in
transparent mode.
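As a hedged sketch, setting the VTP mode looks roughly like this (the domain name is a placeholder):

Switch#configure terminal
! Option 1: do not participate in VTP
Switch(config)#vtp mode transparent
! Option 2 (alternative): join a VTP domain as a client
! Switch(config)#vtp domain EXAMPLE-DOMAIN
! Switch(config)#vtp mode client
Switch(config)#end
! Verify the current VTP mode and configuration revision
Switch#show vtp status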
Summary

That's it for this lesson. In this lesson we reviewed how access ports and trunk ports work in a VLAN
implementation. Remember, access ports can only be a member of one VLAN at a time. They can
belong only to a single broadcast domain. They are primarily used to connect workstations, servers, and
other end mode devices to the switch. Trunk ports, on the other hand, belong to multiple VLANS at the
same time. They're used for connecting multiple switches together in a deployment where VLANS
have been defined across multiple switches. We use VLAN tagging to identify which VLAN a
particular frame belongs to when it traverses a trunk link. The VLAN tag is inserted in the frame before
it's placed on the trunk link and the switch on the other end of the connection will process the VLAN
tag and place the frame on the appropriate VLAN. We ended this lesson by discussing how you can use
VTP to automate your VLAN configuration in a multi-switch network.

6.5.2 Configuring Trunking

In this demo, we're going to discuss the configuration of trunk interfaces. The network exhibit you can
see here is what we're going to use for the demo. It has two switches: SwitchA, SwitchB. You can see
they're connected by their FastEthernet 0/24 interfaces. Each of them also has a Sales workstation,
which is configured in a non-default VLAN, that's VLAN 10, which is named Sales. That's true on each
of the two switches. The FastEthernet 1 link on both switches is configured in VLAN 10, while the
trunk connection to B, between the two switches, is currently still in VLAN 1.
The fact that currently VLAN 10 is not allowed across that connection between switches means that the
two Sales workstations would not be able to communicate, even though they are using compatible IP
addresses.
Show VLAN

Let's take a look at SwitchA.


The first command we're going to use is show VLAN. With show VLAN, you can see that we do have
a non-default additional VLAN called Sales. There's also one called Marketing, but we're going to
focus on Sales for this one. That's VLAN 10. Like I'd indicated, you can see that FastEthernet 0/1 is
assigned to VLAN 10. The same is true on SwitchB. Also note that the interface that connects the two
switches, FastEthernet 0/24, remains in the default VLAN, number 1.
Show interface trunk

I can also type 'show interface trunk' from here to verify that there are no active trunks on this switch
currently. Let's go ahead and create one. I go to Config, then I will go to the interface that connects the
two switches.
Switchport mode trunk

The command to configure a switch interface as a trunk is switchport mode trunk. We wait a few
moments, and you'll see that the interface goes down and comes back up, which is good. By default,
trunk interfaces allow all VLANs known to that switch to traverse the trunk. If you had Marketing
workstations, for example, in VLAN 11 on SwitchA, and one of them sent broadcast traffic, that broadcast traffic
would traverse the trunk connection to SwitchB.
Switchport trunk allowed VLAN remove 11

If you know that you don't have devices in the marketing VLAN on Switch B, it would make sense to
actually trim support for that VLAN off of the trunk. To do that, you would use the command
switchport trunk allowed VLAN, and then remove 11, for example.
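Taken together, the trunk commands narrated in this demo would look roughly like this on SwitchA:

SwitchA#configure terminal
SwitchA(config)#interface FastEthernet 0/24
! Force the uplink to the other switch to operate as a trunk
SwitchA(config-if)#switchport mode trunk
! Prune the Marketing VLAN (11) from the trunk
SwitchA(config-if)#switchport trunk allowed vlan remove 11
SwitchA(config-if)#end
SwitchA#show interface trunk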
Now the trunk connection will support all VLANs except for 11.
Verify the Configuration

Let's verify our work. If I type 'end', and then 'show VLAN' once more, you'll note that FastEthernet
0/24 no longer shows up in this output assigned to VLAN 1, because the show VLAN command shows
you access ports. Once again, these are ports that are assigned to a single VLAN. Once a port is
configured as a trunk, it won't show up in this output any longer. I can use the show interface trunk
command, though, and this time I will see a trunk.
So port fa0/24 is currently on. It is trunking; it is using a frame tagging method called 802.1Q. I can see
that the VLANs that are allowed on the trunk are 1 to 10 and 12 to 4,094. The one that's missing there
is the one that I pruned, which is VLAN 11. Since I've configured the link between the two switches as
a trunk, now the two Sales workstations in VLAN 10 on both sides would be able to communicate.
This demo has discussed configuring switchport interfaces as trunks.
6.5.3 Trunking Facts

Trunking occurs when you configure VLANs that span multiple switches, as shown in the following
diagram:

In this example, each switch has two VLANs configured, with one port on each VLAN. Workstations
in VLAN 1 can only communicate with other workstations in VLAN 1. This means that workstations
connected to the same switch in this example cannot communicate directly with each other, because they
are in different VLANs.
Communications between workstations within each VLAN must pass through the trunk link to the
other switch.
Additional facts regarding trunking and VLANs are as follows:
 Access ports are connected to endpoint devices (such as workstations), while trunk ports are
connected to other switches.
 An access port can be a member of only a single VLAN.
 Trunk ports are members of all VLANs on the switch by default.
 Any port on a switch can be configured as a trunk port.
 By default, trunk ports carry traffic for all VLANs between switches. However, you can
reconfigure a trunk port so that it carries only specific VLANs on the trunk link.
When trunking is used, frames that are sent over a trunk port are tagged with the VLAN ID number so
the receiving switch knows which VLAN the frame belongs to. In VLAN tagging:
 Tags are appended by the first switch in the path and removed by the last.
 Only VLAN-capable devices understand the frame tag.
 Tags must be removed before a frame is forwarded to a non-VLAN capable device.
A trunking protocol defines the process that switches use to tag frames with a VLAN ID. One widely
implemented trunking protocol is the IEEE 802.1Q standard, which supports a wide range of switches
from many device manufacturers. 802.1Q supports VLAN numbers 1 through 4094.
With 802.1Q trunking, frames from the default VLAN are not tagged, but frames from all other
VLANs are tagged. For example, suppose VLAN 1 is the default VLAN on a switch (the default setting
on most Cisco switches). In this configuration, any frame on VLAN 1 that is placed on a trunk link will
not be assigned a VLAN tag. If a switch receives a frame on a trunk port that doesn't have a VLAN tag,
the frame is automatically put onto VLAN 1.
When using switches from multiple vendors in the same network, be sure that each device
supports the 802.1Q standard.

The VLAN Trunking Protocol (VTP) simplifies VLAN configuration on a multi-switch network by
propagating configuration changes between switches. For VTP to work, the switches must be
connected by trunk links. With VTP, switches are configured in one of the following configuration
modes:
 A switch in server mode is used to modify the VLAN configuration. The switch then advertises
VTP information to other switches in the network.
 A switch in client mode receives changes from a VTP server switch and passes that information
on to other switches. Changes cannot be made to the local VLAN configuration on a client
switch.
 A switch in transparent mode allows for local configuration of VLAN information, but it does
not update its configuration with information from other switches. Likewise, local VLAN
information is not advertised to other switches. However, VTP information received on the
network is passed on to other switches.
By default, most managed switches are preconfigured to operate in server mode. If you do not intend to
use VTP, configure your switches to use transparent mode.
6.5.4 Trunking Command List

The following are important commands for configuring and monitoring trunking on a Cisco switch,
along with the action each performs:

Enables trunking on the interface:
  Switch(config-if)#switchport mode trunk

Configures an interface as an access port, which disables trunking on the interface (if it was
previously configured):
  Switch(config-if)#switchport mode access

Sets the trunking protocol to 802.1Q:
  Switch(config-if)#switchport trunk encapsulation dot1q

Allows the trunking protocol to be negotiated between switches:
  Switch(config-if)#switchport trunk encapsulation negotiate

Configures the VLAN that sends and receives untagged traffic on the trunk port when the interface is
in 802.1Q trunking mode:
  Switch(config-if)#switchport trunk native vlan [vlan_id]

Defines which VLANs are allowed to communicate over the trunk:
  Switch(config-if)#switchport trunk allowed vlan all
  Switch(config-if)#switchport trunk allowed vlan add [vlan_id]

Removes a VLAN from a trunk link:
  Switch(config-if)#switchport trunk allowed vlan remove [vlan_id]

Assigns an interface to a VLAN:
  Switch(config-if)#switchport access vlan [number]

Shows interface trunking information, including the mode, encapsulation, trunking status, and VLAN
assignments:
  Switch#show interface trunk
  Switch#show interface fa0/1 trunk

Two distribution layer switches, SW1 and SW2, are connected through their respective Gi0/1
interfaces. The following commands configure a trunk link between the switches:
SW1>ena
SW1#conf t
SW1(config)#int gi 0/1
SW1(config-if)#switchport mode trunk

SW2>ena
SW2#conf t
SW2(config)#int gi 0/1
SW2(config-if)#switchport mode trunk

6.5.6 Configuring the Native VLAN

This demo is going to discuss the configuration of trunk interfaces on Layer 2 switches.
Interface Modes for Layer 2 Switches

Layer 2 switch interfaces operate in one of two different modes: access or trunk. An access
interface supports a single VLAN and is used to connect to non-switching devices, such as work
stations, printers, possibly routers, but not other switches.
A trunk port, on the other hand, carries all the VLANs of that switch by default and is used to connect
to upstream switches, such as from an access switch to a parent distribution switch, over uplink ports.
In that regard, maybe you want to be a bit more careful in configuring trunk interfaces, because they
can affect a much larger number of devices, versus simply reconfiguring an access port.
Show Interfaces Trunk

We're going to start with a couple basic show commands. We're going to start with Show Interfaces
Trunk. Before I hit enter, I want to make clear that switches will try to negotiate trunk connections
automatically. I haven't configured anything on the switch regarding trunk operation, but when I hit
ENTER, you'll see that Port Gig 01 is, in fact, in a trunking status. Its mode is 'auto', which means it
dynamically negotiated a trunk connection with the device connected over that port.
I can also see its encapsulation type is 802.1Q, which is the standard frame-tagging method used on
current switches today, Cisco and non-Cisco, and its native VLAN is 1. These are all default settings. I
can also see that all VLANs would be allowed on this trunk and that the specific VLANs that currently
exist on this switch are also allowed and active on this trunk connection.
You do have control over this, which we'll see later.
Show Interface Status



Another show command would be Show Interfaces, and this time, I'll use the 'Status' keyword. You'll
see a whole screen of output, but on the bottom, I can once again verify that gig ethernet-01 is currently
configured as a trunk. I can also see that its duplex and status have been automatically negotiated to be
full and 1,000 megabits per second. I could have honed that command down and put the specific port
nomenclature in there followed by status, and then just that single port output would show up.
The trunk interface came up automatically.
Configure Trunk Negotiation



Let's talk about how you configure trunk negotiation or manually set the status of a trunk connection.
We go into global config and then to the interface, interface G01. We'll type Switch Port Mode
followed by the question mark, and you'll actually see three options listed here: 'access', 'dynamic', and
'trunk'. The 'access' and 'trunk' keywords would be used to unconditionally or manually, if you will, force this
interface into one of those two modes.
If I type Switch Port Mode Trunk, then what you're telling the switch is that I don't care what the other
side says; I want you to try to come up as a trunk on this end. In this case, you're not depending on
dynamic negotiation using the Dynamic Trunking Protocol, or DTP, like so.
If I did want the switches to negotiate, I could set that to, instead, 'dynamic'. Then you'd have to pick
one of two options: 'auto' and 'desirable'. All you really need to know about these is that if both ends of
the connection are set to 'auto', then the trunk will not become active. 'Auto auto' will not bring a trunk
connection up. If I have any other combination of values between both switches, if I had one switch set
to 'auto', for example, and the other was set to desirable or trunk, then the interface would still come up
as a trunk connection.
To demonstrate that, if I do change this to 'desirable', I'll also do a shut down and "No Shut" on this
interface. It'll bounce it. If I now do that Show Interfaces Trunk command again, use 'Do' to execute the
command from within config mode, you'll see that the mode is desirable on this end but still operating
as a trunk connection.
Again, you'll notice that the VLANs currently allowed on this trunk are all the VLANs that are
currently active on this switch.
Specify VLANs Allowed

There might be a reason that you didn't want that. Let's say that, for example, VLANs 100, 101, and
105 were not present or being used on the parent switch that the trunk connection leads to. Maybe there
are no devices within those VLANs upstream over this trunk connection. For that reason, you wouldn't
want broadcast traffic originating from those VLANs to be sent over the trunk.
You can use the Switchport Trunk Allowed VLAN command. If I look at the options here, you'll note
you can add or remove VLANs. In this case, I want to remove 100 through 101 and also 105. Take a
look at my Show Interfaces Trunk command now. You'll see that the allowed VLANs have been cut
down to be just 1, 10, and 110.
Change the Native VLAN

Lastly, we noticed that the native VLAN was set to 1. The native VLAN is the VLAN over which
frames will not be encapsulated by 802.1Q. They will not be frame tagged when traversing the trunk.
Instead, they are sent in their raw form with no 802.1Q header. I can change the native VLAN on a
switch using a similar command to what I just used, Switchport Trunk. Note there's an option for
native, native VLAN. Let's say I wanted to change that to VLAN 10 so that any traffic originating from
VLAN 10, when crossing this connection, will not have a tag assigned to it.
Verify the Native VLAN Assignment

One more time, type the Show Interfaces Trunk command. You'll see that the native VLAN has in fact
been set to 10.
You want to be careful that if you do set the native VLAN on one side of a trunk, you'll have to perform
the exact same thing on the other side. I just did this on Switch A. You'd want to go into Switch B at
this point. Go to its trunk interface and execute the same command to switch that one to native VLAN
10, as well.
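A sketch of that matching change on the other switch (assuming its uplink is also GigabitEthernet 0/1):

SwitchB#configure terminal
SwitchB(config)#interface GigabitEthernet 0/1
! The native VLAN must match the value configured on SwitchA
SwitchB(config-if)#switchport trunk native vlan 10
SwitchB(config-if)#end
SwitchB#show interface trunk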
This concludes our demonstration on trunk interface configuration.
6.6 Spanning Tree Protocol

As you study this section, answer the following questions:


 Why will a tie breaker never be necessary for the root switch selection?
 When would you modify an STP mode?
 How does PVST+ differ from Rapid PVST+?
 How do ports work in a multiple VLAN environment?
 How are root bridges designated in a multiple VLAN environment?
 What happens during STP convergence?
After finishing this section, you should be able to complete the following tasks:
 Configure STP.
 Select a root bridge.
 Configure Rapid PVST+.
 Find STP Info 1.
 Configure EtherChannels.
6.6.1. Spanning Tree Protocol
In this lesson we're going to discuss the dangers associated with switching loops and how you can use
the Spanning Tree Protocol, which we call STP, in order to prevent them. In this network, notice that
we have two switches, each with several hosts connected. Suppose a broadcast frame is sent from this
workstation to its local switch port. Because this is a broadcast frame, it's intended for all other
connected hosts. Therefore, it's going to get copied to all of the other hosts that are connected to this
switch, and it also gets copied over to the next switch. This second switch will receive the broadcast
frame, and because it is a broadcast frame, it's going to forward it to all the other hosts that are
connected to it. It doesn't matter which switch you're connected to; a broadcast frame is going to
propagate to all of the other connected switches and hosts.
However, consider what would happen if we were to add a second link between these two switches.
This is actually commonly done, because using a single link between these switches creates a single
point of failure. If this link were to go down, we would lose communications between the switches.
Using multiple links creates redundancy such that if anything were to happen to one of these links, the
second link maintains communications between the two switches. This configuration also creates very
serious potential problems.
Example of a Switching Loop

Consider how broadcast traffic would flow in this configuration. Let's suppose the same workstation on
the first switch sends a broadcast frame again to the local switch, and as before, the switch will forward
that frame to all of its other active ports, including both ports that are connected to the second switch.
This means the second switch is going to receive not one, but two copies of the same broadcast frame.
The important thing to remember here is the fact that that second switch will not recognize the fact that
they're the same frame. It's going to see them as two separate frames. The second switch will therefore
forward both copies of that frame to all other ports, including all of its connected hosts. Here's the key
problem; each copy of that frame arrived on a different port. Therefore, the copy that arrived on this top
link will be forwarded back to the first switch on the bottom link, and the converse is true for the other
copy of that frame that arrived on the bottom link. It's going to be forwarded back to the first switch on
the top link. As a result, the first switch will receive two copies of the same single frame that it
originally sent to the second switch.
This first switch is going to repeat the process, because it doesn't recognize that that is the very same
frame that it originally sent out. It's going to send copies of those two frames back to the second switch
again. The second switch is going to repeat this process over and over and over again, forming a never-
ending loop. The same scenario happens every single time any host on either switch sends a broadcast
frame, and this is called a switching loop. Sometimes it's also called a broadcast storm, and it's a really
bad thing. In a switching loop, we have stale broadcast data circling continuously around the network
as fast as the switches can process the frames, and this can take the network down very quickly as more
and more frames are transmitted by the hosts on the network.
Implementing STP

However, we do want to keep the fault tolerance that's provided by having multiple links between the
switches. Therefore, it's very important that you implement a mechanism to prevent these switching
loops from occurring. This is normally done using the Spanning Tree Protocol, or STP. Usually you'll
find that STP is actually enabled by default on most high-end switches, such as those that come from
Cisco. You need to check your documentation to verify that this is the case for the specific equipment
that you're using.
To use STP, each switch has to be assigned a bridge ID number. The bridge ID does two things: first, it
identifies the switch, and it also prioritizes the switch. If the switches are configured in a looped
configuration, such as the one you see here, then the bridge ID identifies which switch is the boss--
which switch should take charge of these redundant links. Let's suppose that the bridge ID of the switch
on the left is 1, and the switch on the right has a higher bridge ID of 2. Be aware that these are
simplified bridge ID numbers. The real bridge ID numbers are actually much longer; we've shortened
them here for demonstration purposes. When a switch first comes online, the spanning tree protocol is
going to send out some very special frames. These frames are called Bridge Protocol Data Units, or
BPDUs. It will send these out of each of the switch's ports.
Each BPDU contains that switch's bridge ID. This is done to alert each neighboring switch that it has
another switch actually connected to it. In this network, both of these switches will send out BPDUs
when they first come online looking for neighboring switches. In this case, the second switch is going
to see BPDUs coming from the first switch over both of these connections, and the first switch will also
see BPDUs from the second switch. Because switch 1 has a lower bridge ID, and therefore a higher
priority, it will take responsibility for managing these redundant links.
These BPDUs will also be received by the other hosts on the network; they will receive all BPDUs
from both of these switches, but these end devices--our workstations and servers--don't really care about
them. They just ignore them. Likewise, these end devices don't send any BPDUs when they come
online. In this way, each switch can identify which switch ports are actually connected to other
switches, and which ones are not.
Port States

The switch with the lowest bridge ID has to decide how it's going to manage these redundant links in
order to prevent switching loops. To do this, one of the ports on either switch 2 or switch 1 has to be
put in a blocking state in order to break the loop. In this case, since switch 1 has the highest priority, it
will keep both of its ports in an active forwarding state. On switch 2, the redundant port with the lowest
port number is also going to remain in an active forwarding state. In this example, let's assume that it's
going to be port E/0. As a result, this bottom link will remain in an active forwarding state between the
two switches. However, the other port on switch 2 will be placed in a blocking state. In this state, it will
not forward any of the frames that it receives. This effectively breaks the loop. As a result, we have one
active connection between switch 1 and switch 2--the bottom connection. No frames will traverse this
top link between the two switches.
However, if for some reason the bottom link did go down, STP will recognize this, and it will put this
top link back into a forwarding state automatically. STP can detect when the active link goes offline,
and it will then respond by unblocking a redundant port in order to make the link between the switches
active again.
Summary

That's it for this lesson. In this lesson you were introduced to the STP protocol. Remember, without the
STP protocol, any redundant links that we've configured between switches to provide fault tolerance
will result in a switching loop that can bring down your network very quickly. The Spanning Tree
Protocol eliminates those loops by comparing switch metrics and determining which port should be
blocked and which port should be left active in order to break the loop.
6.6.2 Configuring STP
So far we've seen where we use STP and how to use STP. In this demonstration, we're going to see STP
in action. With STP, we don't have to configure anything, because it works right out of the box.
Configuring Switches

I have three switches hooked up. We have some redundancy. As we know from STP, one of those ports
has to be shut off, so we don't have a switching loop.
Let's go into switch one, and see how our STP is set up. Let's type 'show spanning.' I'm going to hit the
'Tab' key to expand it out and then hit 'Enter.' Here is the information we get. It's broken up into a
couple of different areas. First, it's telling us that spanning tree protocol is enabled, so we have
spanning tree running by default.
Root ID

Here is our root ID. This is pointing to our root bridge. The root bridge is in control. It has a priority
of 32769 because we're on VLAN 1--if you look right up here, I'll highlight it for you: VLAN1.
Remember, the default priority is 32768; we add one for our VLAN number, and there we have it: 32769.
MAC Address

This is the MAC address of our root bridge.


On all three switches, that top area should match, because the root ID--that root bridge--is not going to
change. The part about this switch is right down here, the bridge ID. This is the switch that we're on
right now. There are a couple of different things that we can look at here: our address doesn't match the
root's address, so that tells us that we're not the root bridge.
Root Port and Designated Port

Also, down here we have a root port and a designated port. Now remember root ports point to the root
bridge. Well, if I am the root bridge I'm not going to have any root ports, all of my ports will be
designated ports. What I want to do now is jump over to another switch and see if we can find our root
bridge.
Okay, we're over here on switch two and look at its spanning tree protocol. Let's do 'show spanning
tree,' and hit 'Enter.' Let's take a look at this.
VLAN1

We're still on VLAN1, perfect. Remember, that this part up here is about our root bridge. Here is the
MAC address of our root bridge. We got Charlie Alpha Zero Three. We've got this sentence right here
that's telling us, "This bridge is the root." There's no mistaking that, right. It's saying, "Hey, I'm the root
bridge. Here we are." Let's double check it. Charlie Alpha Zero Three, Charlie Alpha Zero Three. This
is the part about the switch that we're on right now. They match, perfect. Look down here. We have a
designated port and another designated port.
All the criteria are met. Our MAC addresses match up. We have only designated ports on this switch,
plus it's telling us that this bridge is the root.
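If you want to reproduce this check on your own switches, the whole verification comes down to a single command per switch (shown here scoped to VLAN 1). Compare the Root ID and Bridge ID sections of the output and look for the 'This bridge is the root' line; only the root bridge prints it:

Switch#show spanning-tree vlan 1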
Summary

That's it for this demonstration.


In this demonstration, we configured STP.
6.6.3. Selecting a Root Bridge

In this demonstration, let's talk about the root bridge and selecting a root bridge. In the previous demo,
we configured STP. We saw where the root bridge ID was, where our ID of the switch that we're on
was with the priority numbers and the MAC address numbers.
We're going to take it one step further, because if you remember in the previous demo, we had three
switches and switch number two was our root bridge switch, our root bridge ID. Let's say, for whatever
reason, you're the network admin, you get to decide. Let's say switch number one needs to be the root
bridge, so we need to force it to become the root bridge, because right now, switch number two is.
What I want to do is type 'show spanning-tree' one more time, and what we're going to focus on is this
priority line right here, because the default bridge priority is 32768. Right now this one says 32769;
that's because the switch takes the VLAN number and adds it on. Again, we are working off VLAN
number one right here, and 32768 plus one is 32769. Right now every switch has this priority.
Switch number two happened to win the election--with every switch at the same priority, the switch
with the lowest MAC address becomes the root--so it took on the root bridge role. What we want to do
now is make switch number one the root bridge.
To do that, we're going to lower the priority for this particular switch. Right now it's at 32768 plus one,
which gives 32769. Let's get into global config here, and we're going to type 'spanning-tree'.
Let's look at the options real quick, we have mode, port fast and VLAN. In this case, we want to get
into the VLAN, because these are set per VLAN. I'm going to hit the question mark again just so you
can see, and this says which VLAN do you want? Remember, we want VLAN number one. Let's hit
number one, space, question mark, just so we can see this.
We're going to hit priority and we're going to set the priority. I'm going to hit the question mark to show
you our range that we can pick. It says anywhere from zero to 61,440. We have to go up in increments
of 4,096 every time. Watch what happens when I say okay, I want a priority of '100'. It's going to say
'NO', the allowed values are this, this and this. I can go zero, 4096, 8192 and so on. I can pick any of
those that I want, but remember, we want to force this guy to become the root ID, so we're going to
need something lower than 32768.
Let's just say, I'm going to hit the up arrow here, take off my priority of 100, and let's just say 4096. I'm
going to go ahead and hit 'Enter,' and STP will reconfigure and elect a new root bridge. Let's get out of
this mode and do a 'show spanning-tree,' and now let's see. You can look right here, and it says, yes,
this bridge is the root bridge. I didn't change switches; we're still on switch one. Our MAC addresses
line up, our priority lines up, and we've got
two designated ports.
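As a recap, here is a minimal sketch of what was typed in this demo, assuming VLAN 1 and a target priority of 4096. (Alternatively, 'spanning-tree vlan 1 root primary' lets the switch pick a sufficiently low priority automatically.)

Switch1#configure terminal
Switch1(config)#spanning-tree vlan 1 priority 4096
Switch1(config)#end
Switch1#show spanning-tree vlan 1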
Summary

That's it for this demonstration. In this demonstration, we forced a new root bridge inside of our
switching topology. We used a lower priority to force STP to renegotiate and elect a new root bridge.
6.6.4. STP Facts

To provide for fault tolerance, many networks implement redundant paths between multiple switches.
However, providing redundant paths between segments could cause frames to be endlessly passed
between the redundant paths. This condition is known as a switching loop.
To prevent switching loops, the IEEE 802.1d committee defined the Spanning Tree Protocol (STP).
With STP, one switch on each network segment is assigned as the designated bridge. Only the
designated bridge can forward packets onto that segment. Redundant switches are assigned as backups.
The spanning tree protocol:
 Eliminates loops.
 Provides redundant paths between devices.
 Enables dynamic role configuration.
 Recovers automatically from a topology change or device failure.
 Identifies the optimal path between any two network devices.
The spanning tree protocol uses a spanning tree algorithm (STA) to calculate the best loop-free path
through a network by assigning a role to each bridge or switch. The bridge role determines how the
device functions in relation to other devices and whether the device forwards traffic to other segments.
The following table describes the three types of bridge roles:
Role Characteristics

Root bridge
The root bridge is the master, or controlling, bridge.
 There is only one root bridge in the network. The root bridge is the logical center of the spanning tree topology in a switched network.
 The root bridge is determined by the switch with the lowest bridge ID (BID):
 The bridge ID is composed of two parts—a bridge priority number and the MAC address assigned to the switch.
 The default priority number for all switches is 32,768. This means the switch with the lowest MAC address becomes the root bridge unless you customize the priority values.
 You can manually configure the priority number to force a specific switch to become the root switch.
 The root bridge periodically broadcasts configuration messages. These messages are used to select routes and reconfigure the roles of other bridges, if necessary.
 All ports on a root bridge forward messages to the network.

Designated bridge
A designated bridge is any other device that participates in forwarding packets through the network.
 They are selected automatically by exchanging bridge configuration packets.
 To prevent bridge loops, there is only one designated bridge per segment.

Backup bridge
All redundant devices are classified as backup bridges.
 They listen to network traffic and build the bridge database. However, they will not forward packets.
 They can take over if the root bridge or a designated bridge fails.
Devices send special packets called Bridge Protocol Data Units (BPDUs) out each port. BPDUs sent to
and received from other bridges are used to determine bridge roles and port states, verify that neighbor
devices are still functioning, and recover from network topology changes. During the negotiation
process and normal operations, each switch port is in one of the following states:
Port State Description

Disabled
A port in the disabled state is powered on but does not participate in forwarding or listening to network messages. A port must be manually placed in the disabled state.

Blocking
When a device is first powered on, its ports are in the blocking state. Backup bridge ports are always in the blocking state. Ports in a blocking state receive packets and BPDUs sent to all bridges, but they will not process any other packets.

Listening
The listening state is a transitory state between blocking and learning. The port remains in the listening state for a specific period of time. This time period allows network traffic to settle down after a change has occurred. For example, if a bridge goes down, all other bridges go into the listening state for a period of time. During this time the bridges redefine their roles.

Learning
A port in the learning state receives packets and builds the bridge database (associating MAC addresses with ports). A timer is also associated with this state. The port goes to the forwarding state after the timer expires.

Forwarding
The root bridge and designated bridges are in the forwarding state when they can receive and forward packets. A port in the forwarding state can learn and forward. All ports of the root switch are in the forwarding state.
During the configuration process, ports on each switch are configured as one of the following types:
Port type Description

Root port
The port on a designated switch with the lowest port cost back to the root bridge is identified as the root port.
 Each designated switch has a single root port (a single path back to the root bridge).
 Root ports are in the forwarding state.
 The root bridge does not have a root port.

Designated port
One port on each segment is identified as the designated port. The designated port identifies which port on the segment is allowed to send and receive frames.
 All ports on the root bridge are designated ports (unless the switch port loops back to a port on the same switch).
 Designated ports are selected based on the lowest path cost to get back to the root switch. Default IEEE port costs include the following:
 10 Mbps = 100
 100 Mbps = 19
 1 Gbps = 4
 10 Gbps = 2
 If two switches have the same cost, the switch with the lowest priority becomes the designated switch, and its port the designated port.
 If two ports have the same cost, the port on the switch with the lowest port ID becomes the designated port.
 The port ID is derived from two numbers—the port priority and the port number.
 The port priority ranges from 0–255, with a default of 128.
 The port number is the number of the switch's port. For example, the port number for Fa0/3 is 3.
 With the default port priority setting, the lowest port number becomes the designated port.
 Designated ports are used to send frames back to the root bridge.
 Designated ports are in the forwarding state.

Blocking port
A blocking port is any port that is not a root or a designated port. A blocking port is in the blocking state.
Devices participating in the spanning tree protocol use the following process to configure themselves:
1. At startup, switches send BPDUs out each port.
2. Switches read the bridge ID contained in the BPDUs to elect (identify) a single root bridge (the
device with the lowest bridge ID). All the ports on the root bridge become designated ports.
3. Each switch identifies its root port (the port with the lowest cost back to the root bridge).
4. Switches on redundant paths identify a designated switch for each segment. A designated port is
also identified on each designated switch.
5. Remaining switch ports that are not root or designated ports are put in the blocking state to
eliminate loops.
6. After configuration, switches periodically send BPDUs to ensure connectivity and discover
topology changes.
The following table lists commands you would use to configure spanning tree:
Command Function

Switch(config)#spanning-tree mode {pvst | rapid-pvst}
Sets the spanning tree mode.
 PVST+ (Per VLAN Spanning Tree Protocol), also known as PVSTP, is a Cisco proprietary protocol used on Cisco switches.
 Rapid PVST+ is Cisco's proprietary version of Rapid STP, which is based on the 802.1w standard.
PVST+ and Rapid PVST+ are the same except that Rapid PVST+ uses a rapid convergence based on the 802.1w standard. To provide rapid convergence, Rapid PVST+ deletes learned MAC address entries on a per-port basis after receiving a topology change.

Switch(config)#spanning-tree vlan [1-4094] root primary
Forces the switch to be the root of the spanning tree.

Switch(config)#spanning-tree vlan [1-4094] cost [1-200000000]
Manually sets the cost. The cost range value depends on the path-cost calculation method:
 For the short method, the range is 1 to 65535.
 For the long method, the range is 1 to 200000000.

Switch(config)#spanning-tree vlan [1-4094] priority [0-61440]
Manually sets the bridge priority number:
 The priority value ranges between 0 and 61440.
 Each switch has the default priority of 32768.
 Priority values are set in increments of 4096. If you enter another number, your value will be rounded to the closest increment of 4096, or you will be prompted to enter a valid value.
 The switch with the lowest priority number becomes the root bridge.

Switch(config)#no spanning-tree vlan [1-4094]
Disables spanning tree on the selected VLAN.

Switch#show spanning-tree
Shows spanning tree configuration information, including the following:
 Root bridge priority and MAC address
 The cost to the root bridge
 Local switch bridge ID and MAC address
 The role and status of all local interfaces
 The priority and number for each interface
To verify that spanning tree is working, look for an entry similar to the following for each VLAN:
Spanning tree enabled protocol ieee

Switch#show spanning-tree vlan [1-4094] root
Shows information about the root bridge for a specific VLAN. Information shown includes:
 The root bridge ID, including the priority number and the MAC address
 The cost to the root bridge from the local switch
 The local port that is the root port

Switch#show spanning-tree vlan [1-4094] bridge
Shows spanning tree configuration information about the local switch for the specified VLAN. Information includes the local bridge ID, including the priority and MAC address.
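Putting a few of these commands together, here is a minimal sketch of forcing a switch to become the root for VLAN 1 and then verifying the result. The priority value follows the earlier demo; the rapid-pvst mode line is optional and is shown only to illustrate the mode command. Adjust the VLAN and priority for your own network:

Switch>enable
Switch#configure terminal
Switch(config)#spanning-tree mode rapid-pvst
Switch(config)#spanning-tree vlan 1 priority 4096
Switch(config)#end
Switch#show spanning-tree vlan 1 root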
Even though STP is great at eliminating switching loops, it has a key weakness. It allows only a single
active path between two switches at any given time. If that active link goes down, it can sometimes
take 30 seconds or more for STP to detect that the link has gone down before it activates a redundant
link. To address this weakness, a new protocol, Shortest Path Bridging (SPB), has been developed to
eventually replace STP. SPB is a routing protocol defined in the IEEE 802.1aq standard that adds
routing functions to Layer 2 switching. SPB uses a link-state routing protocol to allow switches to learn
the shortest paths through a switched Ethernet network and to dynamically adjust those paths as the
topology changes, just like a Layer 3 router does.
SPB addresses this issue by applying Layer 3 routing protocols to Layer 2 switches. This allows those
switches to actually route Ethernet frames between switches, just as Layer 3 protocols route packets
between routers. By doing this, SPB allows multiple links between switches to be active at the same
time without creating a switching loop. This functionality is designed to eliminate the time lag
associated with failed links managed by STP. If a link between switches goes down on a network that
uses SPB, the frames can be immediately re-routed to the destination segment by using redundant links
between switches that are already active and able to forward frames.
6.6.8 Configuring EtherChannels

In this demo, we're going to take a look at configuring EtherChannels between Cisco switches. In this
first example, you can see in the exhibit we have three Cisco switches-- Switch A, B, and C. Switch A
and B have dual, redundant connections up to Switch C.
Redundancy is good, so in this case if either one of those connections upstream went away, the other
one would still be able to carry all traffic between access and distribution, but by default, having dual
connections like this would create a loop in the network and spanning tree would block one of those
connections. So even with redundant links upstream, you'd still only be using 1 gigabit of throughput.
What EtherChannel technology allows us to do is bundle those two connections together so that they're
seen as one 2-gigabit-per-second circuit--twice the speed--so now both links are in use and you get
higher bandwidth between the switches. They operate in that case like a single link, and Spanning Tree
does not block either connection. In this demo, we're going to configure all three switches and we're
going to use a negotiation protocol called PAgP, or the Port Aggregation Protocol. This is Cisco
proprietary. If you were connecting to non-Cisco switches and wished to use EtherChannel negotiation,
you'd have to use the Link Aggregation Control Protocol, or LACP, instead.
Configure Switch A

First thing I'm going to do is a 'show run' to take a look at the two involved interfaces on
Switch A, and see that GigabitEthernet 0/1 has no config. It's in its default state right now, so that's
good, and I'll verify that GigabitEthernet 0/2 is the same.
Configure the EtherChannel

I haven't done anything with this switch yet and we're ready to start our EtherChannel configuration.
Modify Multiple Port Settings at a Time

We're going to use the 'interface range' command here because I want to be able to modify more than
one port's settings at the same time with common commands, versus going into GigabitEthernet 0/1,
executing a series of commands, and then going into the second interface and repeating those same
commands.
Set the Interfaces to Trunks

I'm going to use the 'switchport mode trunk' command to manually set these interfaces to trunks. You
might recognize that oftentimes, switches will automatically negotiate trunk connections.
Using the channel-group Command

In this case, I'm taking charge and manually configuring them.


Now the EtherChannel configuration, which involves the 'channel-group' command followed by a
number, and this number indicates the channel-group for this particular series of interfaces that will be
grouped together. Use any number you want here. I'll use '1', as this is the first EtherChannel I'm
creating, and then 'mode', and you have several options here.
I mentioned I was going to use PAgP, and you can see that you have two choices there: 'auto' and
'desirable'. 'Desirable' is what I'm going to use on this switch, and what that means is it's taking an active
role in EtherChannel negotiation. At least one side has to be desirable. If you had both sides at auto,
which is more of a passive role, then the channel would not get its settings negotiated.
End and Verify Configuration

And the port channel has been created.


That's all I have to do. I 'end' out, do a quick 'write'. Then again I'll show the config of both of those
interfaces, so you can see the interface range command actually did work.
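As a recap, the Switch A side of this demo boils down to the following sketch (the interface names and channel-group number match the demo, but the group number itself is arbitrary):

SwitchA#configure terminal
SwitchA(config)#interface range g0/1 - 2
SwitchA(config-if-range)#switchport mode trunk
SwitchA(config-if-range)#channel-group 1 mode desirable
SwitchA(config-if-range)#end
SwitchA#write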
Configure Switch B

I'm done with Switch A.


I'm going to do the same series of steps on Switch B now. I'm going to Telnet over to Switch B, log in,
and the exact same series of commands for the most part: 'config t', 'interface range' again. This time I'll
use channel group 2 because I'm going to make the channel groups match on the parent switch--Switch
C. The channel group that's leading from Switch C down to Switch A will be '1' and this 'channel-
group' will be '2'. Again I'll make it 'mode desirable', and the numbers that you use for the channel
groups are more for documentation purposes, so they could be anything.
Configure Switch C

Now we're done with Switch B, so we need to pop back out to Switch A, and then I'll Telnet into
Switch C. This should all look familiar. Go 'config t' again. I am going to configure two ports at a time
here, using range versus all four. You'll see why. 'g0/1 - 2' once more, set to 'trunk', and then use
'channel-group'. I'm going to use channel-group '1' to match what I did on Switch A, and make its
mode 'auto', which is a passive negotiation mode. Remember, we set the other side to desirable. As
long as one side takes charge, then negotiation will occur.
Configure the EtherChannel
After a bit of a pause, we see that we finally get our prompt back, and now I can continue with the
configuration of the second EtherChannel. So 'int range g0/3 - 4'. Once again 'switchport mode trunk',
then 'channel-group 2'. Again, this matches the EtherChannel that I configured on Switch B, 'mode'; I
will make this one 'auto' as well. I should be done here, so I'll go ahead and 'end' out and save my work.
We can do a 'show run' and take a look at one of those ports. We can see it is configured as a trunk and
the channel-group has been set to 'auto' for PAgP negotiation.
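Again as a recap, here is a minimal sketch of the Switch C side, with the two channel groups mirroring the numbers used toward Switch A and Switch B:

SwitchC(config)#interface range g0/1 - 2
SwitchC(config-if-range)#switchport mode trunk
SwitchC(config-if-range)#channel-group 1 mode auto
SwitchC(config-if-range)#exit
SwitchC(config)#interface range g0/3 - 4
SwitchC(config-if-range)#switchport mode trunk
SwitchC(config-if-range)#channel-group 2 mode auto
SwitchC(config-if-range)#end
SwitchC#show etherchannel summary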
Verify the EtherChannel Configurations

Finally, to further verify your work, you can type 'show etherchannel summary', and I can see that I do
have two EtherChannel groups up currently. I have port group 1, port group 2. I can also look at some
of the codes here that we have an (SU) for both of those. 'S' indicates it's a Layer 2, or switched, port
channel. I can also create Layer 3, or routed, port channels. I can also see the negotiation that was used
to bring the port channel up, and finally the physical ports that have been added to each of these port
channels, so success.
An Example Using LACP for a Non-Cisco Switch

In this exhibit, we have two switches--Switch A, Switch B--and you can see that there are four
redundant connections between the two, using Fast Ethernet 0/1 through 4 on both switches.
As it turns out, Switch A in this example is not a Cisco switch. You can create EtherChannels between
Cisco and non-Cisco devices, but the fact that it's not Cisco means that we cannot leverage any Cisco
proprietary protocols. If we want to make use of a negotiation protocol for this EtherChannel, we're
going to have to use LACP in this instance. We're also going to make sure when we configure Switch B
that all the interfaces are operating in full duplex mode. Switch A has already been configured, so all I
have to do is Switch B's config now.
Let's go into global config. I'll use the 'int range' command again to grab all four of these interfaces.
That will take care of that duplex requirement right away. I'll use the 'duplex full' command, which
basically means the interfaces can send and receive at the same time. I'm going to configure them all
manually as trunk connections. Then I'm going to type the 'channel-group' command, followed by a
group number. Again, this can be anything unique to the switch; if an exam question gives you a channel
number to use, make sure you use it. Here I'll use '6', then 'mode'. You have to tell it what mode of negotiation
this EtherChannel will make use of.
If we look at the options, we know we can't use PAgP because again, that's Cisco proprietary. I can see
two options for LACP. I can see 'active' and 'passive'. They're pretty self-explanatory. Active means that
this switch will take charge of negotiating for that EtherChannel's settings. Passive means that the other
side will wait for the connected switch to take charge. If you have both sides set to passive, then no
negotiation will take place. I'm going to make this side 'active', so we'll take a leading role in
configuring the EtherChannel. That's all. The port channel has been created. I can then 'end' out and save
my work, of course, using the 'write' command or 'copy run start', and that completes the demo.
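As a recap of this non-Cisco scenario, here is a minimal sketch of the Switch B side of the configuration; the FastEthernet interface numbers and channel group 6 come from the example in the demo:

SwitchB(config)#interface range fa0/1 - 4
SwitchB(config-if-range)#duplex full
SwitchB(config-if-range)#switchport mode trunk
SwitchB(config-if-range)#channel-group 6 mode active
SwitchB(config-if-range)#end
SwitchB#copy run start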
6.6.9 EtherChannel Facts

EtherChannel combines multiple ports on a Cisco switch into a single, logical link between two
switches. With EtherChannel:
 You can combine 2-8 ports into a single link.
 All links in the channel group are used for communication between the switches.
 Bandwidth between switches is increased.
 Automatic redundant paths between switches are established. If one link fails, communication
will still occur over the other links in the group.
 Spanning tree convergence times are reduced.

Cisco switches can use the following protocols for EtherChannel configuration:
Protocol Description

Port Aggregation Protocol (PAgP)
Port Aggregation Protocol prevents loops, limits packet loss due to misconfigured channels, and aids in network reliability. PAgP operates in the following modes:
 Auto places the port into a passive negotiating state and forms an EtherChannel if the port receives PAgP packets. While in this mode, the port does not initiate the negotiation.
 Desirable places the port in a negotiating state to form an EtherChannel by sending PAgP packets. A channel is formed with another port group in either the auto or desirable mode.

Link Aggregation Control Protocol (LACP)
Link Aggregation Control Protocol is based on the 802.3ad standard and has similar functions to PAgP. LACP is used when configuring EtherChannel between Cisco switches and non-Cisco switches that support 802.3ad. LACP operates in the following modes:
 Passive places the port into a passive negotiating state and forms an EtherChannel if the port receives LACP packets. While in this mode, the port does not initiate the negotiation.
 Active places the port in a negotiating state to form an EtherChannel by sending LACP packets. A channel is formed with another port group in either the active or passive mode.

The following table shows common commands to configure EtherChannel:


Command Action

Switch(config-if)#channel-protocol lacp
Switch(config-if)#channel-protocol pagp
Selects the EtherChannel protocol on the interface.

Switch(config-if)#channel-group [1-8] mode auto
Switch(config-if)#channel-group [1-8] mode desirable
Selects the PAgP mode on the interface.

Switch(config-if)#channel-group [1-8] mode active
Switch(config-if)#channel-group [1-8] mode passive
Selects the LACP mode on the interface.

Switch(config-if)#no channel-group [1-8]
Disables EtherChannel on the interface.

Switch#show etherchannel
Displays EtherChannel details on the switch.

Switch#show etherchannel summary
Displays EtherChannel information for a channel, with a one-line summary per channel group.

Each channel group has its own number. All ports assigned to the same channel group will be viewed as a single logical link.

The following commands configure GigabitEthernet 0/1 and 0/2 interfaces to actively initiate the
negotiation of an EtherChannel with the PAgP protocol and a channel group of 5:
Switch>ena
Switch#conf t
Switch(config)#int range gi 0/1 - 2
Switch(config-if-range)#channel-protocol pagp
Switch(config-if-range)#channel-group 5 mode desirable

The following commands configure FastEthernet 0/1 through 0/4 interfaces to form an EtherChannel
with the LACP protocol if the other device actively initiates the EtherChannel connection:
Switch>ena
Switch#conf t
Switch(config)#int range fa 0/1 - 4
Switch(config-if-range)#channel-protocol lacp
Switch(config-if-range)#channel-group 3 mode passive
Switch(config-if-range)#duplex full

Use the following guidelines to troubleshoot an EtherChannel configuration:


 Make sure that all ports in an EtherChannel use the same protocol (PAgP or LACP):
 If the channel-group command is used with the desirable option on one switch (PAgP),
the other switch must use either desirable or auto.
 If the channel-group command is used with the active option (LACP), the other switch
must use either active or passive.
 Verify that all ports in the EtherChannel have the same speed and duplex mode. LACP requires
that the ports operate only in full-duplex mode.
 Check the channel group number. A port cannot belong to more than one channel group at the
same time.
 Verify that all ports in the EtherChannel have the same access VLAN configuration or are
VLAN trunks with the same allowable VLAN list and the same native VLAN.
 Check the spanning tree configuration. If you do not configure EtherChannel, the spanning tree
algorithm will identify each link as a redundant path to the other bridge and will put one of the
ports in a blocking state.
 Check the port type and number. You can configure an LACP EtherChannel with up to 16
Ethernet ports of the same type. Up to eight ports can be active, and up to eight ports can be in
standby mode.
 Be sure to enable all ports in an EtherChannel. A port in an EtherChannel that is disabled using
the shutdown interface configuration command is treated as a link failure, and its traffic is
transferred to one of the remaining ports in the EtherChannel.
Do not configure more than 6 EtherChannels on one switch.

6.7. Switch Troubleshooting

As you study this section, answer the following questions:


 You have a network connected using switches, with a single device connected to each switch
port. Why would you be surprised to see collisions on this network?
 What is a duplex mismatch?
 What conditions lead to a broadcast storm?
 How can you prevent switching loops from forming?
 You moved a device from one switch port to another, and now it cannot communicate with any
other device on the network. The switch link lights are lit. What switch configuration should
you check?
 Besides the switch configuration, what should you check if you see excessive frame errors on
the switch?
6.7.1 Switch Troubleshooting

In this video we're going to look at several problems that can occur when you're working with switches.
We're going to review some of the things you can do to either prevent or fix them. We're going to look
at collisions, duplexing mismatches, speed mismatches, switching loops, VLAN membership errors,
and frame errors.
Collisions

Let's begin by looking at the issue of collisions on the network medium.


Collisions happen because, on an Ethernet network, multiple devices can share the same
transmission medium. In the early days of Ethernet networking, collisions were common and
could become a very serious problem. The issue here is that if the number of collisions on the network
became excessive, then the bandwidth of the network would become severely compromised by all of
the collisions going on.
Now, thankfully, this is almost a non-issue with a modern switched Ethernet network. Remember, when
we're dealing with a switch, each switch port represents its own collision domain. Therefore, as long as
some type of end point device like a workstation or a server is connected to a switch port, you should
see no collisions at all.
Now, if you are experiencing collisions on a switched network, there could be two possible causes.
First of all, you might simply have malfunctioning hardware. In this situation, either the switch port or
the network board is transmitting when it shouldn't, and this is causing the collisions to occur.
Now it's also possible that we have a hub connected to one of our switch ports. Now remember, a hub
does not selectively forward frames. Instead, it simply floods every single frame it receives to every
active port. Therefore, the collision domain in this situation consists of every single port on the hub,
including the port that is connected to the switch.
So, in order to troubleshoot the first issue, simply make sure that your network card, the switch, and the
cabling are all functioning properly. To troubleshoot the second issue you should replace that hub with
a switch, and you should also question why on earth you were even using a hub in the first place.
Mismatched Duplex Settings

Switches are very good; hubs are not. They are evil.
Next, we need to look at the duplex settings that are used by the network devices and the switch. Now
remember, duplexing determines whether or not devices can transmit and receive on the network
medium at the same time. Now, using full duplex communications enables this functionality, but using
half duplex communications restricts the network device to either transmitting or receiving on the
network medium, but not both at the same time. Therefore, the amount of available bandwidth that the
device can use is affected.
Now the duplex setting also determines whether or not collision detection is enabled--whether or
not CSMA/CD is running. This controls whether the device will look for collisions on the network
medium. Here's the key thing you need to remember. If you configure full duplex communications,
the device assumes that the transmission medium is not being shared with any other device and that
therefore there is no possibility of a collision occurring.
In addition, because the devices are allowed to send and receive at the same time, using full duplex
communication effectively increases the amount of network bandwidth available. On the other hand,
using half duplex communications assumes that the transmission medium is being shared between
multiple devices such as on a hub-based Ethernet network, and that therefore collisions are possible.
So, in this situation, collision detection must be enabled even if you're using a switch. This can happen,
for example, in a situation where you have a hub that is connected to a switch port.
Now when you connect a device to a switch, one of the first things that it's going to do is try to identify
the duplex mode that it needs to use for communications. Now by default, most devices are configured
to automatically discover and negotiate the correct duplex mode. So in this case, if I were to connect a
device to a switch, both the device and the switch will try to determine whether something is connected
to the other end of the cable. If that is the case--a single device connected to a single switch port--they
will then automatically negotiate whether or not they can use full or half duplex communications, and if
we're dealing with a modern switch, they should automatically negotiate full duplex communications.
However, suppose that a switch port has been manually configured to operate in half duplex mode
instead of full. Now because it's been manually configured, that switch will not automatically negotiate
the duplex setting. It's always going to use half duplex communications on that link.
Now this also can happen in a situation where one device is configured to automatically negotiate, but
it cannot do so successfully. Now the default behavior in this situation is to actually drop down to half
duplex communications because it's the safest option to choose. It will use half duplex when the device
cannot determine what to use otherwise. For example, this can happen if one device is configured for
automatic detection but the other device is manually configured to use full duplex.
Now you might think that because one side is automatically detecting and the other side is manually
configured, that the auto-detecting side will automatically pick full duplex, but that's not the case. In
this situation, the device that's configured to use automatic detection will not be able to automatically
negotiate the duplex setting and will instead drop back to half duplex communications, and you're
going to have a mismatch in the duplex settings on each device.
Now if this happens, the switch is going to try to use half duplex communications, but the workstation
will be trying to use full duplex. Communications will still occur but everything's going to slow down.
This happens because the switch is still trying to detect collisions even though it doesn't need to. But
the workstation is not going to try to detect collisions; it's simply going to send data assuming that
there's no possibility for a collision to occur. So in this situation you will find that the network
communications slow down to a very noticeable degree because that switch is continually trying to
detect collisions, but the device is continually sending data.
So, as a best practice, all devices should be configured for auto-negotiation. If automatic negotiation
fails for some reason, then make sure you go in and manually configure both sides of the link--not just
one side, but both sides--to use the same duplex setting.
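On a Cisco switch, a minimal sketch of checking and then pinning the duplex setting on the switch side of a link might look like the following (interface g0/1 is just an example; the workstation's NIC would need the matching setting configured in its operating system):

Switch#show interfaces g0/1 status
Switch#configure terminal
Switch(config)#interface g0/1
Switch(config-if)#duplex full
Switch(config-if)#end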
Link Speed Settings

Now in addition to negotiating the duplex setting, most Ethernet devices also try to auto-detect the link
speed that should be used. For example, let's suppose you plug a device that has a 100 megabit per
second network board into a gigabit Ethernet switch. Well that switch is automatically going to drop the
link speed down to 100 megabits per second instead of 1 gigabit per second. And you can tell this by
looking at the link lights on the front of the switch. It will tell you that that link is running at a slower
network speed.
If you run into a situation where you have a link speed mismatch, for example, the switch is trying to
run at 1 gigabit per second but the device is trying to run at 100 megabit per second, you need to verify
that the network board or the network switch has not been manually configured to run at a slower
speed.
Now most switch ports are configured to automatically detect the correct link speed when the device is
plugged in; however, it is possible to manually configure the switch port to run at a specific speed
regardless of what kind of device is connected to it. If a link is running at a speed slower than you
would expect, first verify that the network board and the switch port are set to auto-detect the link
speed. Now if this doesn't work, then you may need to manually configure both devices, the switch and
the device itself, to run at the same speed.
Now, if all devices and the switch ports are configured correctly and you're still seeing a slow link
speed, that slow link speed could be caused by hardware problems. For example, maybe you have
crosstalk going on in your cabling or maybe you have a bad RJ45 connector on one of your cables. If
you find that you're getting less than the rated speed out of a connection, you should use a cable
certifier to test the cables and the connection between the devices to make sure that they're functioning
properly.
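A quick way to spot speed mismatches and manually configured ports on a Cisco switch is shown below; 'speed auto' and 'duplex auto' return a port to auto-negotiation (the interface name is an example):

Switch#show interfaces status
Switch#configure terminal
Switch(config)#interface g0/1
Switch(config-if)#speed auto
Switch(config-if)#duplex auto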
Switching Loops

Now another problem that can occur with a switch is a switching loop. Now a switching loop forms
when a switch has multiple paths to the same device. So in this diagram we have three switches and
they're connected together in a loop. Now when a switch has multiple paths to a single destination, it's
possible for a frame that's being sent to this device on this switch from a device on this other switch to
be sent in one direction and then replicated and sent around the other direction as well. This results in a
switching loop. The frame is going from one switch to another, propagating back in the opposite
direction.
Now switching loops are particularly nasty when we're working with broadcast traffic. In this example,
a broadcast frame from a workstation is received on one switch port and then forwarded out to all the
other ports because it's a broadcast frame. In this case, the broadcast frame here would be forwarded to
all these workstations and to this switch, as well as to this other switch. Now this switch would receive
that broadcast frame and forward it out to all of its connected ports, including this port right here. This
other switch would do the same thing: forward that broadcast frame out in this direction.
Now this switch would receive the broadcast frame sent by this switch and forward it in this direction,
which would hit this other switch, causing it to be forwarded out back this direction. As you can see,
the broadcast traffic ends up being propagated throughout the switch network over and over and over.
This is called a broadcast storm. The broadcast traffic will quickly consume all of the available
bandwidth as these various broadcast frames keep being replicated from one switch to another.
So, the first step in resolving switching loops is to use the Spanning Tree Protocol or STP. STP prevents
switching loops by ensuring that there's only one valid path between switches. Now the redundant link
still exists but the switch puts specific ports into a blocking state so that traffic is not forwarded through
those ports. So in this example, broadcast traffic may come in through this port and this switch would
forward it out to the other two switches. This switch then when it receives the broadcast frame would
forward it out to all of its ports, but it would not forward it out to this link that connects these two
switches together. The same thing happens on this side. The broadcast will not cross this link.
In addition, many high-end switches will also include special software that detects broadcast storms. If
the broadcast traffic reaches a certain threshold, then the software will automatically start dropping
some of that broadcast traffic so that normal communications can occur.
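To confirm that STP is running, and to take advantage of the broadcast storm detection described above (Cisco calls this feature storm control), commands along these lines are typical on Cisco Catalyst switches; the 5.00 value is just an illustrative threshold for the percentage of bandwidth broadcasts may use:

Switch#show spanning-tree summary
Switch#configure terminal
Switch(config)#interface g0/1
Switch(config-if)#storm-control broadcast level 5.00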
Misconfigured VLAN Assignments

Now another common problem on switches is one where we have misconfigured VLAN assignments.
Remember, VLAN assignments can be made in two different ways with most switches. The first type is
called a static port assignment. Now static port assignments are configured by the administrator and
they do not change. They're made on a per port basis. Any device plugged in to a particular port
automatically joins the VLAN that's been configured on that port.
On the other hand, we can also have dynamic VLAN assignments. Now dynamic VLAN assignments
are not made on a per port basis. Instead, dynamic VLAN assignments are made based upon the MAC
address of the device that's plugged into a given port. The switch dynamically assigns a VLAN to a
particular switch port based upon the MAC address of the device that's plugged into it.
Now, if we're dealing with a static VLAN port assignment, then we could potentially end up with a
VLAN mismatch. Remember, without a router configured, devices can only communicate with other
devices that are on the same VLAN. Therefore, if VLAN membership is based upon the port number,
then if you were to move a device from one switch port to another, you could accidentally be changing
that device's VLAN membership. It's possible that the different ports are actually assigned to different
VLANs. In fact, this is very commonly done as a security measure. Many administrators will take all of
the unused switch ports on a switch and assign them to a separate VLAN so that anyone who plugs into
one of those unused ports will not be able to communicate with any of the other devices on the
network.
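If you suspect a moved device has landed on the wrong VLAN, a minimal sketch of checking and correcting a static port assignment looks like this (the port and VLAN numbers are examples):

Switch#show vlan brief
Switch#show interfaces g0/5 switchport
Switch#configure terminal
Switch(config)#interface g0/5
Switch(config-if)#switchport mode access
Switch(config-if)#switchport access vlan 10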
Frame Errors

Now another problem you might experience from time to time on a switch is one where you have frame
errors. Now frame errors could have many different causes, but the symptoms are all similar. Basically,
whenever a switch sees a frame that it can't process, it doesn't understand, it simply drops that frame
and therefore communications don't occur between the sending and receiving devices.
Now, the first problem you might encounter is one where the frames are too big. Frames that are too
long are usually caused by a faulty network card that's jabbering. In other words, it's sending out junk
data, frames that are too big. In this situation, the switch is going to keep looking for the end of the
frame. It's looking for that signal that indicates that the frame is complete, but it doesn't ever see it. So
in this situation that frame is going to get dropped.
Now it's also possible to have frames that are too small. A frame that is too short is typically caused by
a collision occurring. In other words, the two different frames collided with each other on the network.
Part of the frame gets obliterated in the process, and when the switch sees that the frame is too small, it
assumes that it's bad and it's going to drop it.
Another problem you might encounter is one where we have a CRC error. Now a CRC error indicates
that the frame has been corrupted for some reason. The frame might be the right size. It's not too big,
it's not too small, but it's failed the CRC test. When it fails the CRC test, the switch knows that the
frame that was sent by the sending system is not the same as the frame it is now receiving. Some type of
corruption in between has occurred. Again, this will cause the switch to drop the frame.
Now, frame errors are typically caused by either faulty network cards or faulty cabling. When frame
errors occur, you probably still get a certain degree of connectivity but the data transfers will be very,
very slow, and this is because the dropped frames will have to be retransmitted again. So, in order to
detect whether or not you're having frame errors you need to go to your switch configuration. Most
switches have counters that keep track of the number of frames dropped and why they were dropped,
what type of error occurred. If you see an excessive number of these types of errors, you need to check
your network boards and also certify your cables to make sure they're not causing these types of errors
to occur.
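On a Cisco switch, those per-port error counters can be inspected with 'show interfaces'; filtering the output is a convenient way to pull out just the relevant lines (the interface name is an example):

Switch#show interfaces g0/1 | include CRC|runts|giants
Switch#show interfaces counters errors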
Summary

That's it for this lesson. In this lesson we discussed several different switch troubleshooting techniques.
We first looked at collisions. We talked about mismatched duplex settings. We talked about link speed
settings. We talked about problems with switching loops. We talked about problems with misconfigured
VLAN assignments. Then we ended this lesson by discussing frame errors.
6.7.2 Switch Troubleshooting Facts

The following table lists several problems you might encounter when managing switches on your
network:
Issue: Collisions
A collision occurs when two devices that share the same media segment transmit at the same time. In a switched network, collisions should only occur on ports that have more than one device attached (such as a hub with workstations connected to it).
• To eliminate collisions, connect only a single device to each switch port. For example, if a hub is connected to a switch port, replace it with another switch.
• If collisions are still detected, troubleshoot cable and NIC issues.

Issue: Duplex mismatch
A duplex mismatch occurs when two devices are using different duplex settings. In such a case, one device will try to transmit using full duplex, while the other will expect half duplex communications. By default, devices are configured to use auto-negotiation to detect the correct duplex setting to use. If a duplex method cannot be agreed upon, devices default to half duplex.
A duplex mismatch can occur in the following cases:
• Both devices are configured to use different duplex settings.
• Auto-negotiation does not work correctly on one device.
• One device is configured for auto-negotiation and the other device is manually configured for full duplex.
Symptoms of a duplex mismatch include very slow network communications. Ping tests might appear to complete correctly, but normal communications work well below the expected speeds, even for half duplex communications.

Issue: Slow link speed
Most network components are capable of supporting multiple network specifications. For example, a NIC might support 10BaseT, 100BaseTX, and 1000Base-T. By default, these devices use the maximum speed supported by all devices on the network.
Do the following if the speed of a segment is lower than expected (for example, 10 Mbps instead of 100 Mbps, or 100 Mbps instead of 1000 Mbps):
• Check individual devices to verify that they all support the higher speed.
• Check individual devices to see if any have been manually configured to use the lower speed.
• Use a cable certifier to verify that the cables meet the rated speeds. Bad cables are often the cause of 1000Base-T networks operating at only 100Base-TX speeds.

Issue: Switching loop
A switching loop occurs when there are multiple active paths between two switches. Switching loops lead to incorrect entries in a MAC address table, making a device appear to be connected to the wrong port; this causes unicast traffic to be circulated in a loop between switches.
The Spanning Tree Protocol (STP) ensures that only one path between switches is active at any given time. STP is usually enabled by default on switches to prevent switching loops.

Issue: Broadcast storm
A broadcast storm is excessive broadcast traffic that renders normal network communications impossible. The following can cause broadcast storms:
• Switching loops that cause broadcast traffic to be circulated endlessly between switches
• Denial of Service (DoS) attacks
To reduce broadcast storms, do the following:
• Run STP to prevent switching loops
• Implement switches with built-in broadcast storm detection, which limits the bandwidth that broadcast traffic can use
• Use VLANs to create separate broadcast domains on switches

Issue: Incorrect VLAN membership
VLANs create logical groupings of computers based on switch port. Because devices on one VLAN cannot communicate directly with devices in other VLANs, incorrectly assigning a port to a VLAN can prevent a device from communicating through the switch.
With static VLAN assignment, membership is defined by switch port, not by MAC address, so connecting a device to a different switch port could change the VLAN membership of the device. On the switch, verify that ports are assigned to the correct VLANs and that any unused VLANs are removed from the switch.

Issue: Frame errors
The switch examines incoming frames and will only forward frames that are complete and correctly formed; invalid frames are simply dropped. Most switches include logging capabilities to track the number of corrupt or malformed frames. The following are common causes of frame errors:
• Frames that are too long are typically caused by a faulty network card that jabbers (constantly sends garbage data).
• Frames that are too short are typically caused by collisions.
• CRC errors indicate that a frame has been corrupted in transit.
• All types of frame errors can be caused by faulty cables or physical layer devices.
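Many of the checks in the table above map to a handful of switch commands. A minimal Cisco IOS sketch (interface names and values are examples, and exact commands vary by switch platform):

Switch# show interfaces status
! Shows the speed, duplex, and VLAN currently negotiated or configured on every port
Switch# show spanning-tree
! Confirms that STP is running, which prevents switching loops and the broadcast storms they can cause
Switch# show vlan brief
! Verifies which ports are assigned to which VLANs
Switch(config)# interface GigabitEthernet0/1
Switch(config-if)# speed auto
Switch(config-if)# duplex auto
! Leave both ends at auto, or hard-code BOTH ends identically to avoid a duplex mismatch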

7.1. Routing Basics

As you study this section, answer the following questions:


• What is the difference between static and dynamic routing?
• What is convergence?
• What information is contained in a routing table?
• What is the function of a routing table?
• When would you create a static routing table entry?

7.1.1 Routing

In this lesson, we're going to spend some time talking about routing network information.
Routing is the process of moving packets from one network to another. A router sends packets
received on one network interface out on another network interface. The goal is to send the packet
from router to router along a path to the destination host located on a remote network. A router
makes forwarding decisions by looking for routes to destination networks in its routing table.
The routing table lists all the known destination networks, along with other information, such as
the interface, the next router in the path, and a value that identifies the cost to reach the destination
network.
In this sample network there are three routers and five subnets. Let's look at what the routing
table might look like for this router here. The routing table includes the network address of every
known network. In this case, the router is directly connected to three separate networks, so it will
have an entry for each of those network addresses. The routing table also identifies the interface
that is used to reach each network. In this case, we'll call these interfaces E0 (for Ethernet zero),
E1 and E2. The network interface that is used to reach each of these networks is added to the
routing table.
Notice that this router can also reach two additional remote networks by sending information to
other routers. In order to route these packets through the network, it must have an entry for each
of those subnets in its routing table. In the case of the 4.0.0.0 network, the router would have to
route packets through the E1 interface. To reach the 5.0.0.0 network, packets must be routed
through the E2 interface.
The routing table also includes the address of the next router that is in the path to the destination
network. For each of these three networks the router is directly connected to those networks, so
it does not need a next hop router address. It simply sends the information out to that network.
For the 4.0.0.0 network, the router must first send the information to this router. Therefore, the
routing table must contain that router's IP address, so the first router can forward packets to the
destination network.
Routing Table Function

Let's look at how this works in more detail. Suppose we have a host on this network that has an
IP address of 1.0.0.15. It needs to send data to a host on this network that has an IP address of
4.0.0.15. The source host creates a packet that is addressed to the IP address of the destination
host. It also includes its own address as the source address in the IP packet header. It then creates
a frame that it sends to the default gateway router. In this case, the frame is sent to the E0 interface
on the router, and it inserts its own MAC address as the source address.
This frame is sent to the router, which looks at the destination MAC address and realizes that the
frame is addressed to itself. So it strips off the frame headers and analyzes the IP packet header
to identify the destination IP address of the data. It sees that the destination is on a different
network, so it checks its routing table to identify the destination network for the packet. It
identifies that it needs to send the packet out its E1 interface to the next hop router, which has an
IP address of 2.0.0.2.
In this case, the router creates a new frame that is addressed to the MAC address of the interface
of the next hop router. It uses its own MAC address, assigned to this interface, as the source
MAC address of the frame.
It's important to understand that the header information of the packet itself does not change. Only
the source and destination MAC addresses of the frame that encapsulates the packet are modified.
The frame is transmitted to the next router, which checks its routing table and sees that the 4.0.0.0
network is a directly connected network. In this case, the router will create a new frame and
address it to the MAC address of the destination device. It will use its own MAC address as the
source address of the frame and send it to the destination device.
As packets flow through the network, each router checks its routing table to identify the
destination network, the interface, and, if applicable, the next hop router that each packet needs
to be sent to.
Routing Table Entries

At this point, we need to discuss how routes get put in the routing table. There are actually several
different ways. First, any network that is directly connected to the router gets put in the routing
table automatically.
In this example, this router is directly connected to these three networks, so these three entries
are automatically put in the routing table when the interfaces come up.
Routing table entries for remote networks must be added using a different mechanism because
they are not directly connected to the router. Entries for remote networks can be entered statically
or learned dynamically. With static entries, an administrator must manually add each route. These
routes stay in the routing table until they are manually removed. Therefore, if the network
changes, then the administrator must manually change the routing table to reflect those changes.
With dynamic routing, routers use a routing protocol to learn about routes from other routers.
Dynamic routing is easier to maintain because routers learn routes automatically. When a change
is made in the network, such as adding or removing a network, those changes are learned
automatically. This automatic process does take some time to fully complete. The routers need
time to share information with each other when a change is detected.
The term 'convergence' describes the state when all routers have a consistent view of the network.
Before convergence occurs, some routers may have an incorrect picture of the network. After
convergence, all the routers are in harmony with each other.
For larger networks, you will probably rely on a routing protocol to share routing information.
But if necessary, you can implement static and dynamic routing at the same time on a router.
For example, using both static and dynamic routing is appropriate when you have a private
network connected to the Internet. In this case, the boundary of the private network is here. This
router is responsible for routing data onto the Internet. On this side of the router is the internal
private network. In this example, the private network has multiple subnets separated by multiple
routers. Within the private network, these routers can dynamically share information using a
routing protocol. Every router learns about every other subnet within the private network using
the routing protocol.
Static Entries/Default Route

However, you probably do not want to share your internal routes with routers out on the Internet.
To route internal traffic out to the Internet, you need a single routing table entry which simply
says anything that is not internal needs to be sent to this router, which will take care of forwarding
it on to the Internet.
To accomplish this, there's a special static route used in the routing table called the default route.
The default route uses a network address of 0.0.0.0 with a mask of 0.0.0.0. This routing table
entry specifies that any packet with a destination address that isn't on a known network should
be sent to the default gateway router.
Let's look at a simple example. Suppose a network with an address of 10.1.0.0 (using a 16 bit
subnet mask) exists on the private internal network. Other subnets also exist on the internal
network that use similar network addresses. If this router were to receive a packet addressed to
this subnet, it would match the packet's destination address with this entry in its routing table and
send it to the appropriate router on the private network.
But if it receives a packet addressed to a host with an IP address of 160.10.12.155, the destination
network address does not match any of the addresses in the routing table. Therefore, it is
automatically forwarded to the default router defined by this static default route entry in the
routing table. This entry automatically routes any packets not addressed to an internal network
out onto the Internet.
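On a Cisco router, for instance, this default route is created as a static route whose network address and mask are both 0.0.0.0. A minimal sketch (the next-hop address 203.0.113.1 is only a placeholder for the ISP-facing router):

Router(config)# ip route 0.0.0.0 0.0.0.0 203.0.113.1
! Any packet that does not match a more specific route is forwarded to 203.0.113.1
Router# show ip route
! The default route appears in the table; the "Gateway of last resort" line confirms it is in use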
Dynamic Entries

When you configure a dynamic routing protocol, you must first enable the protocol on the router,
and then identify which interfaces will use the protocol to dynamically learn routing information.
In this private network, you would enable the routing protocol on each of the routers within the
private network, including the one that sits on the boundary to the Internet. You would then
activate the routing protocol on every router interface that is connected to a private network. In
the case of this router, enable sharing and learning on these two interfaces, but not on the interface
that is connected to the internet. This allows the gateway router to dynamically learn about routes
within the private network, but it prevents it from trying to learn all of the routes that exist on the
Internet. This also prevents the gateway router from sharing routes from the private network with
routers on the Internet.
The routing of packets from the Internet into your private network is usually taken care of by the
ISP's router. Their router has an entry in its routing table that routes data destined for this private
network to this router. Your router does not need to know about every route on the Internet, nor
does it need to share its private routing information with every other router on the Internet. As
long as it knows the IP address of the ISP's router, it can get information on the Internet. As long
as the ISP's router knows the IP address of my router, here, it can route information back from
the Internet to the private network.
Summary
That's it for this lesson. In this lesson we discussed how network routing works. Routing is a
process of moving packets from one network to another. We pointed out that routers use the
routing table to identify destination networks and forward packets to the next hop router, which
is the next router in the path to the destination. Finally, we discussed how routing table entries
can be created, either statically or dynamically using a routing protocol.

7.1.2 Routing Facts

A router is a device that sends packets from one network to another. Routers receive packets,
read their headers to find addressing information, and forward them to the correct destination on
the network or Internet. Routers can forward packets through an internetwork by maintaining
routing information in a database called a routing table. The routing table typically contains the
following information:
• The address of a known network.
• The interface or next hop router used to reach the destination network.
• A cost value (also called a metric) that identifies the desirability of the route to the
destination network (using distance, delay, or cost).
• A timeout value that identifies when the route expires.
Routers automatically have an entry in their routing tables for each directly connected network.
Information about other networks can be added to the routing table using one of two methods:
Method: Static
Static routing requires that entries in the routing table be configured manually.
• Network entries remain in the routing table until manually removed.
• When changes to the network occur, static entries must be modified, added, or removed.

Method: Dynamic
Routers can dynamically learn about networks by sharing routing information with other routers. The routing protocol defines how routers communicate with each other in order to share and learn about other networks. The routing protocol determines:
• The information that is contained in the routing table.
• How messages are routed from one network to another.
• How topology changes (i.e., updates to the routing table) are communicated between routers.
Use a routing protocol to allow a router to learn about other networks automatically. The routing
protocol generates some network traffic for the process of sharing routes, but it has the advantage
of being dynamic and automatic (i.e., changes in the network are propagated automatically to
other routers).
Be aware of the following when managing routing tables:
• You do not need to create static entries for directly connected networks.
• You can use dynamic and static routing together. You can add static routes to identify networks that are not learned about through the routing protocol.
• The most common reason for creating a static routing table entry is to define a default route.
• The default route is similar to a default gateway setting on a workstation. It identifies the router that is used to forward packets to networks that do not appear in the routing table.
• If a default route does not exist, the router will drop any packets that do not match a route in the routing table.
• A route entry of 0.0.0.0 with a mask of 0.0.0.0 identifies the default route in the routing table.
• When you configure a router for dynamic routing, you enable a routing protocol and identify the interfaces that will participate in the exchange of routing information. Enabling a routing protocol on an interface configures the router to:
   • Share information in its routing table with other routers accessible on that interface.
   • Share information about that network with other routers.
• When a routing protocol is used, changes in routing information take time to propagate to all routers on the network. The term convergence is used to describe the condition when all routers have the same (or correct) routing information.
• A loopback interface is a software interface that can be used to emulate a physical interface. Loopback interfaces do not exist by default, so they need to be created. A loopback interface:
   • Can have its IP address used to determine the router's OSPF Router ID.
   • Is always up and always available, even if other physical interfaces in the router are down.
   • Allows a Border Gateway Protocol (BGP) neighborship between two routers to stay up even if one of the outbound physical interfaces connecting the routers is down.
   • Can be used as the termination point for Remote Source-Route Bridging (RSRB) and Data-Link Switching Plus (DLSW+).
   • Can be assigned IP addresses.
To create a loopback interface, use the following syntax:
Router(config)#int loopback <loopback_interface_number>
Router(config-if)#ip address <ip_address> <subnet_mask>
For example:
Router(config)#int loopback 5
Router(config-if)#ip address 200.0.0.10 255.255.255.0
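After creating the loopback, you can quickly confirm that it exists and is up (a minimal check, assuming the example above):

Router# show ip interface brief
! Loopback5 should be listed with both status and protocol "up"
Router# show interfaces loopback 5
! Shows the assigned address 200.0.0.10/24 and the interface counters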

7.2 Routing Protocol

As you study this section, answer the following questions:


• When would you configure both static and dynamic routing on the same router?
• Which type of route is preferred, one with a higher metric or one with a lower metric?
• Why is the hop count sometimes an unreliable metric for choosing the best path to a
destination network?
• How does the link-state method differ from the distance vector method?
• What is the difference between RIP and RIPv2? Why is this important in today's
networks?
• Which routing protocol is typically used within an ISP? Which protocol is used on the
Internet?
• Which routing protocols divide an autonomous system into areas?
• How does IS-IS differ from OSPF?
After finishing this section, you should be able to complete the following tasks:
• Configure a router with static routes.
• Enable OSPF routing.

7.2.1 Routing Protocol Characteristics

A routing protocol is the method that routers use to share and learn about routes from connected
routers. There are many different routing protocols, each with their own strengths and
weaknesses. To understand their differences let's look at how routing protocols are classified.
Routing Scope

The first way of classifying a routing protocol is the scope. The scope identifies what information
is shared and remembered.
A private network that is connected to the Internet is known as an Autonomous System, or AS,
and is fairly independent from the Internet. The only thing that is really shared is the link to the
Internet.
Businesses or organizations that connect their private network to the Internet are assigned a
unique autonomous system number, or ASN. Usually, the ASN is handled by the ISP.
The routing protocol scope identifies boundaries where routing information is shared.
IGP

An Interior Gateway Protocol, or IGP, is a routing protocol that is used within an Autonomous
System. Within your private network you would run a routing protocol to share internal routes.
EGP

An Exterior Gateway Protocol, or EGP, is used to share routing information between autonomous
systems. For example, a routing protocol used within the Internet to route data between Internet
routers and into autonomous systems runs as a protocol that is classified as an EGP. As a network
administrator, you will mostly work with IGP routing protocols. The only situation where you
would work with EGP is if you had a very large network connected to the Internet.
Metric

Another way of classifying a routing protocol is how it makes routing decisions based on a
specific metric. The metric is a value assigned to the network that identifies the preferred route
when multiple routes exist.
Hop Count
A route with a low metric indicates the best route. One metric used by a routing protocol is called
the hop count. The hop count identifies the number of routers that must be used to reach a
destination network. Say this router has a message that needs to be sent to the D network in this
direction. The hop count to the destination would be one, two. The packet goes through two
routers to reach the destination. If the packet traveled this direction, the hop count would be one,
two, three. In this example, this router has two separate routes to the destination network. If using
the hop count as its metric, it will use the path with the lower hop count, which is the one with
two hops.
Bandwidth and Delay

Another metric that can be used is a metric based on the bandwidth, or sometimes latency (delay).
Both of these measure how fast a message is sent from the source to the destination.
Let's say that this router here has 10 megabits per second links. These routers down here all use
100 megabits per second links. In this example, it would be faster to send the information this
direction rather than across the slower links.
If the routing protocol were using the hop count, it would prefer this route to the destination,
even though it is slower. When using the bandwidth or delay as a metric, each link is assigned a
relative cost value. For instance, let's say these two links each have a value of 100. The total cost
going this direction to the destination would be 200.
Down here each of these links may have a metric value of 10, meaning that going this direction
would only have a metric value of 30. In this case again, the lower metric wins so this router
would use this direction when sending the information to the destination network.
These values that are assigned based on bandwidth, or latency, are typically identified based on
the routing protocol. For example, the routing protocol might say all 10 megabits per second
links get a metric value of 100. There may be even more complex computations which actually
take into account the delay that happens when sending real traffic between the two routers. The
basic idea is that the routing protocol tries to calculate the bandwidth and the delay and assigns
the link a value that helps it identify the best path.
Relative Value

Another metric that can be used is a relative value. With this, every link might be assigned a
default value. An administrator could then change these default values to manually control traffic
flow. If the routing protocol used one number for all links, regardless of the bandwidth of the
delay, the routing protocol would be using the hop count. But, an administrator could go in and
modify the values on each link, increasing or decreasing these values, in order to customize how
data flows through the network. This relative value that is assigned is often called a link cost and
can be based on a number of ideas.
Another way to distinguish between routing protocols is the method used to share routing
information.
Distance Vector

The first method is called the Distance Vector Method. With the distance vector method every
router shares its entire routing table with its immediate neighbors. Let's take a look at an example.
Before the routing protocol starts, each router would have routing table entries for the directly
connected networks. The first router would know of networks A and B. This router would know
of networks B and C. This router would know of networks C and D. Routing information is
shared by routers periodically. First, this router would share the routes it knows about with its
neighbor. In this case it knows about routes A and B. The second router receives this information
and looks at its own routing table. It sees that it already knows about the network B, but it doesn't
know anything about A. It would add A to its routing table. Let's also assume that this routing
protocol is using the hop count metric. In this case directly connected networks are identified
with a metric of zero, meaning that there are no routers that have to be traversed to reach the
destination network.
When the first router shares its information it will share that it knows about a specific network.
The router, in adding the new network to its table, would know that the route is not directly
connected, so it would take the information that is shared and then increment the hop count by
one, knowing that in order to reach the destination network of A this router must go through one
additional router to reach that destination network.
Let's say it's the second router's turn to share its information. It would share routes with each of
its neighbors, so it would share with this router as well as sharing back with this router. The
routing table that's sent to this router would contain the entries of B, C and A in its routing table.
This router would say, "I already know about network C, but I don't know yet about B." It would
then add B to its table and increment the hop count, and it doesn't know about A so it adds A to
its routing table and increments the hop count.
When the routing table from the second router gets shared back, this router does the same thing.
It already knows of networks A and B, but it does not know about network C, so it would add
that to its table.
After this, the third router would share information back with its neighbors. The middle router
receives this routing table from its neighbor and identifies a network that does not yet exist in its
routing table and adds that information.
This first router still has not learned of network D. It isn't until one last update from the middle
router to its neighbors that the first router would learn about network D and increment the hop
count.
Convergence

At this point, every router knows about every network, so we can say that convergence has
occurred meaning that all routers share a consistent view of the network.
A key characteristic of the distance vector method is that every router shares its entire routing
table with its neighbors at every update interval.
Link State

Another method is called the Link State Method. With the link state method routers only share
information about their own directly connected networks. For instance, this router would share
information about networks A and B. It uses special messages called link state advertisements,
LSAs, and link state packets, or LSPs, to share information throughout the network. When a
router receives one of these advertisements from a router it records the information in its own
routing table and then forwards that same information on to other routers within the network. In
this case the advertisement that has come from the first router about networks A and B would go
to the second router, and then be forwarded to the third router without any modification to the
information. Likewise, the second router would share information, each direction, about its
connected routes, advertising that it knows about networks B and C. And finally, the third router
would do the same, sending information about network C and D to its neighbor which would
then forward that information on throughout the network.
Flooding

The process of sending LSAs and LSPs through the network is called flooding because a single
packet is flooded, or forwarded, throughout the entire network. Routers use these advertisements
to build a database, or a topology, of the network within its own routing table. Eventually, each
router will learn about every other network. Once convergence has been reached and all routers
know about all other networks, these advertisements would contain only changes instead of all
directly connected networks.
Method Comparison

The main differences between the distance vector and the link state methods are that with the
link state method routers only share information about their own routes with their neighbors, and
these updates are passed along between routers. In addition, once convergence is reached, routers
only share information about changes, not all known networks at every update.
Hybrid

A third method used to share information is called a hybrid method. As its name suggests, it
combines the distance vector and link state methods.
VLSM

One final method of classifying routing protocols is their support for variable length subnet
masks, or VLSM. VLSM allows routers to use subnet masks that are different from the default.
For example, if you have an address of 10.0.0.0, the default subnet mask is 255.0.0.0, or indicated
with a /8. With a variable-length subnet mask, you can use a custom mask and subdivide this
address into multiple subnets. For example, you can use a non-default mask to create smaller
subnets.
Early routing protocols relied on the default subnet mask when sending routing information.
When a router would advertise a known network, receiving routers would assume that the default
subnet mask was being used.
Non-VLSM Problem

This caused a lot of problems.


For example, suppose you have three networks connected by routers, and this network uses a custom
subnet mask. It is separated from a third network over here by another network. When this router goes
to advertise this route to its neighbor, if it did not support variable-length subnet masks, it would simply
advertise that it was connected to network 10.0.0.0 and would not include the information in this octet,
because that is not part of the default, classful network address. And this router would advertise this
route to its neighbor.
Let's say that this router has a packet that is addressed to 10.1.1.1. When it goes to send this
information, it will look in its routing table and it will not find an entry for this network. In fact,
if this router only understood default subnet masks, it would think that it was connected directly to the
same network out this interface. So when it received a packet addressed to this address, it would
think the destination actually resides on this subnet somewhere. With variable-length subnet masks, routers are
able to advertise the subnet mask along with the subnet address. In this case this router would
advertise a subnet address of 10.1.0.0 with a subnet mask length of 16 bits. This router would
then have an entry in its routing table for that destination subnet. When it receives a packet
addressed to that subnet, it knows that it needs to be sent in this direction.
Virtually all routing protocols today support variable-length subnet masks.
Classful and Classless

Only early protocols used the default address class. Routing protocols that do not support variable-length
subnet masks are called classful routing protocols; the address class is used to identify the
subnet mask. Protocols that do support variable-length subnet masks are called classless; the
address class is ignored, and the subnet mask is included with the routing information.
Summary

Well, that's it for routing protocols. In this lesson we learned about the different ways to classify
routing protocols. A protocol can be classified by the scope of information it shares; by the metric
used to distinguish routes; the method used to share routing information; and whether or not
variable-length subnet mask is supported.
7.2.2 Routing Protocol Characteristics Facts

Routers use a routing protocol to exchange information about known routes with other routers.
The following table describes general characteristics of a routing protocol:
Characteristic: Scope
Each organization that has been assigned a network address from an ISP is considered an Autonomous System (AS). The organization is then free to create one large network or divide the network into subnets. Each autonomous system is identified by an AS number (ASN). This number can be locally administered (private ASN) or publicly registered (public ASN) if the AS is connected to the Internet.
Routing protocols can be classified based on their scope, or whether traffic is routed within or between autonomous systems.
• An Interior Gateway Protocol (IGP) routes traffic within an autonomous system.
• An Exterior Gateway Protocol (EGP) routes traffic between autonomous systems.

Characteristic: Metric
The metric is a value assigned to each route that identifies the distance or cost to the destination network. The metric is used by the routing protocol to identify and select the best route to the destination when multiple routes exist. A lower metric identifies a more preferred route. The metric can be calculated based on the following criteria:
• Hop count is the number of routers between the current router and the destination network.
• Bandwidth, or time, is an actual measure of how long it takes to reach the destination network (delay). For example, high-speed links might be associated with a lower metric cost.
• Link cost is a relative number that represents the cost for using the route. For example, it could relate to the actual cost of using a link, such as an expensive WAN link, or it might identify the desirability of using a specific link.
Be aware that comparing route metrics used by different routing protocols is not useful. For example, a metric of 10 for a routing protocol that uses bandwidth might indicate a better route than a metric of 4 for a protocol that uses hop count.

Characteristic: Routing update method
Routing protocols use different methods for sharing routing information and discovering networks. The following are common sharing methods:
• With the distance vector method, a router shares its entire routing table with its immediate neighbors. Routes learned from neighboring routers are added to the routing table and are shared with its neighbors. Most distance vector routing protocols use a technique called split horizon to prevent routing loops. Split horizon does this by preventing a router from advertising a route back out the interface on which it was learned.
• With the link-state method, routers share only their directly connected routes using special packets called link-state advertisements (LSAs) and link-state packets (LSPs). These route advertisements are flooded (forwarded) throughout the network. Routers use this information to build a topology database of the network.
• A hybrid method combines characteristics of the distance vector and link-state methods. A router shares its full routing table at startup, followed by partial updates when changes occur.
• Route redistribution is a way of exchanging routing information between two different routing protocols. Route redistribution involves placing the routes learned from one routing domain, such as RIP, into another routing domain, such as EIGRP. When this occurs, you have to address several issues:
   • Metrics. Each routing protocol has its own way of determining the best path to a network. RIP uses hops, while EIGRP and IGRP both use a composite metric of bandwidth, delay, reliability, load, and MTU size. Because of the differences in metric calculations, when redistributing routes, you lose all metrics and must manually specify the cost metric for each routing domain. This is because RIP has no way of translating bandwidth, delay, reliability, load, and MTU size into hops, and vice versa.
   • Classful vs. classless. Some routing protocols are classful and do not send subnet mask information in the routing updates (e.g., RIP and IGRP), and some protocols are classless and do send subnet mask information in the routing updates (e.g., EIGRP). This causes problems when VLSM and CIDR routes need to be redistributed from a classless routing protocol into a classful routing protocol.
In general, the different routing protocol methods have the following characteristics:
• The distance vector method is simpler and requires less processing power for routers. It is best suited for small networks.
• The link-state method uses less network traffic for sending routing information, converges faster, and is less prone to errors. It is the best choice for large networks or for sharing routes over WAN links.
• A hybrid method reduces the negative effects of the distance vector method while gaining many of the benefits of the link-state method.

Characteristic: Classful or classless
Early routing protocols were not capable of variable-length subnet masks (VLSM) and used only the default subnet masks to identify destination networks. Routing protocols can be identified based on their support for Classless Inter-Domain Routing (CIDR) features.
• A classful protocol uses the IP address class and the default subnet mask to identify network addresses. Classful protocols do not support CIDR or VLSM.
• A classless protocol ignores the IP address class and requires that a subnet mask value be included in all route advertisements. Classless protocols support CIDR and VLSM.

7.2.3 Routing Protocols

In this lesson, we're going to spend some time discussing common routing protocols.
RIP

One of the oldest routing protocols is the Routing Information Protocol, or RIP. RIP is an interior
gateway protocol, meaning that it's designed to be used within a private organization for sending
routes within the private network. RIP uses the hop count as a routing metric, but it's limited to
15 hops between any two networks. If you ever see a hop count of 16 with RIP, it indicates that
the network is unreachable. The size of your network is a built-in limitation for RIP. Between
two subnets within your network, you can have a maximum of 15 hops. If you have more than
15 hops, RIP will not be able to route data to distant networks.
RIP is a distance vector routing protocol, meaning that it shares its entire routing table with every
neighbor at every routing update. But be aware that RIP Version 1 does not support variable
length subnet masks.
RIPv2

The newer version of RIP, called RIP Version 2, has all the same characteristics of RIP v1, except
that RIP v2 supports variable length subnet masks.
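As a point of reference, enabling RIP version 2 on a Cisco router takes only a few commands. This is a minimal sketch, not part of the original lesson; the 10.0.0.0 network statement is just an example for locally attached networks:

Router(config)# router rip
Router(config-router)# version 2
Router(config-router)# network 10.0.0.0
! RIP network statements are classful, so 10.0.0.0 enables RIP on every interface in 10.0.0.0/8
Router(config-router)# no auto-summary
! Optional: stops RIPv2 from summarizing routes at classful network boundaries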
EIGRP

The next routing protocol is the Enhanced Interior Gateway Routing Protocol, or EIGRP. From
its name, you can see that it is an IGP. EIGRP uses bandwidth as a metric along with delay
estimation. It is classified as a hybrid routing protocol. It's a distance vector protocol that includes
key improvements that can make it act, in some cases, like a link state protocol. EIGRP also
supports variable length subnet masks.
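For comparison, a minimal EIGRP configuration looks similar; the autonomous system number 100 and the network/wildcard values below are assumptions and must match on neighboring routers:

Router(config)# router eigrp 100
Router(config-router)# network 10.10.10.0 0.0.0.255
! Enables EIGRP on any interface whose address falls within 10.10.10.0/24
Router(config-router)# no auto-summary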
OSPF

Another protocol is called Open Shortest Path First, or OSPF. OSPF is also an IGP and uses link
cost as its metric. OSPF is a link state method that supports variable length subnet masks. It's
designed to support larger networks than can be supported with RIP. With OSPF, your private
network, represented by this circle, is divided into different areas. Each area can contain multiple
subnets, but the areas are linked together. The routers within an area only keep track of the routes
within that area.
OSPF requires you to have a special area called area zero. Area zero is the backbone that connects
all other areas. Every other area in the network must have a connection to area zero, either directly
or indirectly through another area. Routers within an area share information about their routes
only within that area. These are called internal routers. Routers that sit on the edge of an area are
used to connect two separate areas. In this case, area zero would extend to connect with this area.
These routers are called area border routers and are responsible for sharing information about an
area with other areas. With a link state protocol, the boundary between the areas is designated by
these area border routers. Finally, we have an autonomous system border router whose job is to
communicate with routers outside of the autonomous system. To review, the key characteristics
of OSPF design are an area zero that connects all other areas. Routers within the area share
information about routes within that area. Area border routers share information between areas.
The autonomous system border router shares information outside of the autonomous system.
IS-IS

Intermediate System to Intermediate System, or IS-IS, is another routing protocol to be familiar


with. IS-IS is also an IGP that uses link cost for a metric. It is a link state method that supports
variable length subnet masks. In many respects, IS-IS is very similar to OSPF. An IS-IS design
uses areas, much like OSPF. But with IS-IS there is no requirement for an area zero. IS-IS also
defines several different types of routers. An L1 router shares information about routes within an
area, like the internal router with OSPF. An L2 router shares information between areas. An
L1/L2 router shares information about internal networks with the L2 routers. For example, we
have multiple routers within an area. Each of these routers is configured as an L1 router and can
communicate with each other. We have an L2 router sitting at the edge of the area. It can
communicate with another L2 router in a different area. But, this L2 router cannot communicate
with these L1 routers directly. We'll designate one of these routers as an L1/L2 router, so it can
share information with the L2 router, which then shares information with another L2 router in a
different area. With IS-IS, instead of using the router as the boundary between areas, an actual
physical link is the boundary. In this case, it would look like this. Remember, with OSPF the
areas touched and the routers themselves were the boundary between the areas. With IS-IS, on
the other hand, the boundary is a link and you have a boundary router at the edge of each area.
BGP

The final routing protocol to be familiar with is Border Gateway Protocol, or BGP. BGP is the
only Exterior Gateway Protocol, or EGP, that we will discuss. BGP uses a special metric based
on policies and rules. BGP is, to a degree, a distance vector protocol. It's often labeled as an
advanced distance vector protocol and is sometimes called a path vector protocol. It supports
VLSM. The key thing to remember is that BGP is the routing protocol used for sharing routes on
the Internet and between autonomous systems. As an administrator, you will typically not need
to be concerned with BGP. BGP is only used to route data through the Internet and out to separate
autonomous systems.
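Although most administrators never configure it, a minimal eBGP peering on a Cisco router looks roughly like the following sketch; the AS numbers, neighbor address, and advertised network are placeholders, not values from this course:

Router(config)# router bgp 64512
Router(config-router)# neighbor 203.0.113.1 remote-as 64500
! 203.0.113.1 would be the ISP's router; 64500 would be the ISP's autonomous system number
Router(config-router)# network 198.51.100.0 mask 255.255.255.0
! Advertise our own public prefix to the ISP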
Choosing a Protocol

Most organizations that connect to the Internet use one of these protocols internally and have no
need for sharing information on the Internet. An exception to this is when you have a very large
network that uses two or more connections to the Internet through different Internet service
providers. In this case, your routers decide which Internet connection to use to efficiently route
information. Because of that you might want to use BGP routers to learn about routes that exist
on the Internet. You can also use BGP to share information, in this case, with routers on the
Internet so that they can efficiently route information into your autonomous system.
When choosing a routing protocol for your network, you typically implement them as follows.
Choose RIP for small, private networks. Be aware that the 15 hop count limit means you can't
use RIP on a large network. Use EIGRP or OSPF for large private networks. IS-IS is typically
used within an ISP, because it supports multiple protocols not just IP. It also routes IP Version 6
addresses without any additional modifications. BGP is used within the Internet and is sometimes
used for private networks that have multiple connections through different ISPs. When you run
BGP on your private network, you're using routes on the Internet to choose the best route to the
destination, and you are also advertising your own private routes onto the Internet.
Summary

In this lesson, you learned about commonly used routing protocols. We reviewed RIP, EIGRP,
OSPF, IS-IS, and BGP.
7.2.4. Routing Protocol Facts

The following table lists the characteristics of specific routing protocols:


Protocol: Routing Information Protocol (RIP)
RIP is a distance vector routing protocol used for routing within an autonomous system (i.e., an IGP).
• RIP uses hop count as the metric.
• RIP networks are limited in size to a maximum of 15 hops between any two networks. A network with a hop count of 16 indicates an unreachable network.
• RIP v1 is a classful protocol; RIP v2 is a classless protocol.
RIP is best suited for small private networks.

Protocol: Enhanced Interior Gateway Routing Protocol (EIGRP)
EIGRP is a hybrid routing protocol developed by Cisco for routing within an AS.
• EIGRP uses a composite number for the metric, which indicates bandwidth and delay for a link. The higher the bandwidth, the lower the metric.
• EIGRP is a classless protocol.
EIGRP is best suited for medium to large private networks.

Protocol: Open Shortest Path First (OSPF)
OSPF is a link-state routing protocol used for routing within an AS.
• OSPF uses relative link cost for the metric.
• OSPF is a classless protocol.
• OSPF divides a large network into areas.
• Each autonomous system requires an area 0 that identifies the network backbone.
• All areas are connected to area 0, either directly or indirectly through another area.
• Routes between areas must pass through area 0.
• Internal routers share routes within an area; area border routers share routes between areas; autonomous system boundary routers share routes outside of the AS.
• A router is the boundary between one area and another area.
OSPF is best suited for large private networks.

Protocol: Intermediate System to Intermediate System (IS-IS)
IS-IS is a link-state routing protocol used for routing within an AS.
• IS-IS uses relative link cost for the metric.
• IS-IS is a classless protocol.
• The original IS-IS protocol was not used for routing IP packets; use Integrated IS-IS to include IP routing support.
• IS-IS divides a large network into areas. There is no area 0 requirement, and IS-IS provides greater flexibility than OSPF for creating and connecting areas.
• L1 routers share routes within an area; L2 routers share routes between areas; an L1/L2 router can share routes with both L1 and L2 routers.
• A network link is the boundary between one area and another area.
IS-IS is best suited for large private networks; it supports larger networks than OSPF. IS-IS is typically used within an ISP and easily supports IPv6 routing.

Protocol: Border Gateway Protocol (BGP)
BGP is an advanced distance vector protocol (also called a path vector protocol). BGP is an exterior gateway protocol (EGP) used for routing between autonomous systems.
• BGP uses paths, rules, and policies instead of a metric for making routing decisions.
• BGP is a classless protocol.
• Internal BGP (iBGP) is used within an autonomous system; External BGP (eBGP) is used between autonomous systems.
BGP is the protocol used on the Internet; ISPs use BGP to identify routes between autonomous systems. Very large networks can use BGP internally, but typically share routes on the Internet only if the AS has two (or more) connections to the Internet through different ISPs.
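Whichever of these protocols is in use, a couple of Cisco IOS commands are handy for verifying it. This is a generic sketch rather than part of the original table:

Router# show ip protocols
! Lists the routing protocols running on this router, their timers, and the networks they advertise
Router# show ip route
! Routes are tagged by source: C = connected, S = static, R = RIP, D = EIGRP, O = OSPF, i = IS-IS, B = BGP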

7.2.5 Configuring Routing

In this demonstration, we are going to set up routing between two routers. Here is the
topology that we are going to work with.
We start at router one, the one over to the left, and go over to router three. That's how the scenario
is going to work. This is the topology that I'll be referring to throughout this demo.
Router One

Now that we've seen the topology we're working with, let's start on router one. What we want to
do is set up a static route, which means we're going to tell the router explicitly: this is where you
need to go to get to this destination route, or rather this destination network.
Show Run

Because remember routers route to networks.


Let's do a simple show run so we can see what we're working with. You
can see that we have Serial 0/0/0 connected out to the next router. That link goes to
router two, as we saw in our topology. Router two is kind of our in-between router. We really won't
be configuring it, because it's connected to both networks; it doesn't need any routes, since it knows
about everybody through its direct connections. The only two we're going to be working on are router
one and router three.
Again, as you can see, S0/0/0, our serial connection here, has an IP address of 10.10.10.1 with a
slash thirty mask, or 255.255.255.252. What we're going to do is tell router one how to get to router three.
Router Three

Now router three has an IP address, its subnet is 100.100.100.0/24. What we're going to do is tell
router one explicitly how to get over there to that route.
Connect the Two Routers

We've got to get into global config. Then we're simply going to type ip route followed by the
network we're trying to get to. We're trying to get to 100.100.100.0 with a /24 mask,
255.255.255.0. Now we have to tell our router how to get there; which door does it go out of to
reach this distant network? We go out S0/0/0. I'm going to hit 'Enter.' Now that's all there is to
it: ip route, our destination network address, our destination mask, and then the exit
interface, the door out of our network.
Now if I ping the other router it's not going to work yet, because a ping works off of an echo
request and an echo reply. If I send a request, the first part of the exchange may get to router three, but
router three doesn't know how to get back to us yet. Let's test this out. Let's say 'ping
100.100.100.1,' 'Enter.'
Whoops, I'm in the wrong configuration.
Testing the IP Address

Let's test this out. Now, 'ping 100.100.100.1' and we are successful. Because I already have the
route set up on router three. Let's go check that out.
Here we are on router three. Let's get into privileged exec and do a show run. I'll show you the
route that I have set up. Here it is right here. Now this router knows how to get back to router
one. Let's go back and test this one more time. What I want to do is go into global config and
remove that route with no ip route 10.10.10.0 255.255.255.252, and there it is. All right, so do a show run so
we can make sure that it's out of there. You can see we don't have that route in here anymore. Let's
bounce back over to router one and test this out one more time just to make sure I was right a
little bit earlier.
Router One Connection

All right, so we're back here on router one. Let's hit the up arrow and say ping 100.100.100.1,
that's router three. Let's see if we get a reply this time. You can see we're not getting any replies
because again, router three doesn't know how to get back to us. Because I took that route out.
That's how we would get in and set up those static routes. We're explicitly telling the routers,
"Hey this is how you get to this destination network."
Mapping Out the Networks

Now if we have lots of routers out there and lots of destination networks, static routes are not the
best idea because of the administrative overhead for us as a network admin. We'd have to go in
and type each individual network in as we go. We don't really want to do that. What if we had
something that would do it for us? Well we do. We have a couple options. One we're going to
talk about is OSPF, Open Shortest Path First.
Open Shortest Path First (OSPF)

The first thing I'm going to do here is get back into global config and take out that static route
that we used to reach the other network. All right, so now we don't have that route in there anymore.
What we're going to do is set up OSPF to do this for us. We need to get back into global config and
type router ospf, and I'm going to give it a process number of 100. The process ID is an individual
ID for each router; it has nothing to do with the routing advertisements exchanged between routers.
Setup Network Addresses on the Interface

It's just an identifier for that local router.


Now what we have to do is enter network statements for the interfaces that we want to participate
in OSPF. These are going to be the interfaces on our router. Remember, we're on router one,
so we're on the 10 network. I'm going to say network 10.10.10.0. Now we have to add something
called a wildcard mask, which a lot of people describe as the inverse of a regular subnet mask. For a
contiguous mask like this one that's true; with noncontiguous masks that definition doesn't hold, but
for our purposes it will.
New IP Address

It's the inverse.


We're using the 10.10.10.0 network here with a /24, and a /24 is 255.255.255.0. If you
invert that, the 255s become zeros and the zero becomes a 255, which gives us 0.0.0.255. So we're
going to say network 10.10.10.0 0.0.0.255, and we're going to say this is area 0. I'm going to go
ahead and hit 'Enter.' Now our router should be participating in OSPF. What we need to do next is
turn OSPF on for all of our routers in order for this to work. This is where router number two is going
to have to come into play a little bit and start doing something as well.
Router Two

Let's go over to router two, and set up OSPF on him. All right, so we're over here on router two.
Let's do a show run and make sure we've got some IP addresses set up, and you can see we do.
We have the 10.10.10 network, and the 100.100.100 network.
Global Config

Because remember, this is router two; think back to our topology. We've got one link going to the left
and one going to the right.
Now we need to get into global config. We're going to set up OSPF on this router: router ospf
100, and then network 10.10.10.0 0.0.0.255 area 0. That takes care of
the 10 network. You can see it's loading right now, because those two routers are now talking
OSPF. We also have to enable OSPF on the 100 network, so let's go ahead and say network
100.100.100.0, because we've got to do the other interface.
Communication Between Routers

The wildcard is the inverse of that /24, so 0.0.0.255, and again it goes in area 0.


Now we have router one talking with router two. As you can see right here where it says it went
from loading to full, and it's done. They're talking OSPF. They are exchanging information right
now. Now router two and router three we have half of that connection up. What we need to do is
bounce over to router three and set up this. Let's jump over to router three real quick and set up
our OSPF.
Okay, we're over here on router three. We're already in global config, so we're simply going
to say router ospf 100 and hit 'Enter.' Now remember, we have to put those network statements in.
Again, think back to the topology: we're on router three, on the 100 network, so we're
going to say network 100.100.100.0 0.0.0.255, and we're going to put this in area 0 as well.
You probably noticed I put everybody in area 0; that's our backbone area. You can have
multiple areas in OSPF, but using a single area 0 is simpler and makes for a better
demonstration, and you have to have an area 0 in order for different areas to talk to each other.
As soon as I hit 'Enter' here we should have another loading message. There we go, loading to
full. Now we're talking OSPF all the way around. Let's get back to privileged exec mode here
and do a 'show ip route', because this is going to show our routes and how to get there. Now
you can see we have some directly connected ones right here. These are the C networks. These
are the ones that we have a hard connection to, and that goes to router two. Look right here, this
tells us OSPF is working. We have an O route, so it's telling us to get to the 10.10.10.0 network.
Which again, remember is router one. That's what we hand jammed in just a minute ago when
we explicitly put that in as a static route. Now OSPF is communicating and we're getting this
routing information flooded throughout our network.
Now think about what happens if any changes are made. If we had dozens or hundreds of routers out
there and we were using static routes, what if one of those routes went down? We'd have to go back
into each individual router and reconfigure it. That's kind of a pain. With OSPF, the routers are
constantly sending out updates, essentially asking, "Are you still up? Do you have any changes to let
me know about?" If there are changes, that information is flooded through the network and everybody
updates their routing table. The network essentially heals itself, which makes our job as network
admins a lot easier.
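For reference, here is the complete OSPF configuration used across the three routers in this demonstration, consolidated into one sketch (process ID 100 and area 0, exactly as typed above; the router names are placeholders):

! Router one
RouterOne(config)# router ospf 100
RouterOne(config-router)# network 10.10.10.0 0.0.0.255 area 0
!
! Router two (participates in both networks)
RouterTwo(config)# router ospf 100
RouterTwo(config-router)# network 10.10.10.0 0.0.0.255 area 0
RouterTwo(config-router)# network 100.100.100.0 0.0.0.255 area 0
!
! Router three
RouterThree(config)# router ospf 100
RouterThree(config-router)# network 100.100.100.0 0.0.0.255 area 0
!
! Verify the learned routes (the O entries) on router one
RouterOne# show ip route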
Summary

That's it for this demonstration.


In this demonstration, we set up static routes, configuring routing explicitly by telling each router
which way to send traffic for a specific destination network. Then we set up OSPF so the routers could
learn about the network for us, build their IP routes, and route to networks pretty much automatically.
7.3. Network Address Translation

As you study this section, answer the following questions:


• What are two advantages to using NAT?
• What is the difference between static NAT and dynamic NAT?
• What is port forwarding?
• What is the difference between NAT and PAT?
After finishing this section, you should be able to complete the following tasks:
• Implement NAT.
• Configure Internet connection sharing (ICS).

7.3.1 Network Address Translation

Let's spend a few minutes talking about network address translation, or NAT. Registered IP addresses are both expensive and in extremely short supply. Yet every single host on a public network, such as the Internet, must use a unique registered IP address; once assigned, no other host can use the same address. With all of the mobile devices in use today, like tablets and smartphones, IP addresses are in higher demand than they have ever been. It's not uncommon for a single user to require three or more IP addresses for all of their devices.
Reason for NAT Implementation

Because registered IP addresses are in short supply, there are a couple of options for dealing with the expanding demand. The best solution is to implement IP version 6, which dramatically increases the number of addresses available for hosts. However, even after nearly 20 years, IPv6 has not been widely implemented. Instead, most system administrators opt to use network address translation (NAT).
Benefits of Using NAT

NAT has been broadly implemented because it's easy to use and works relatively well.
One of the problems with IPv6 is the fact that the addresses used with IPv6 are really long and
difficult to work with. Network address translation, on the other hand, allows you to continue
using familiar IPv4 addresses.
Network address translation allows you to use a limited number of registered IP addresses for
your entire organization by translating many unregistered IP addresses from your internal local
area network into a limited number of registered IP addresses.
How NAT Works

Let's take a look at how NAT works.


NAT is usually implemented on a default gateway router. The router has two or more network
interfaces. One is connected to a public network that requires registered IP addresses. The other
is connected to the private internal network that doesn't require registered IP addresses. Using
NAT, we don't need a registered IP address for every single host inside of our private network.
Instead, we'll only need a public IP address for the NAT router itself.
The NAT router in this scenario might have a registered IP address assigned on the public side
of the router. This is a registered IP address. No other host on the public network can have that
same address. This address has to be assigned by an ISP and must be paid for.
The interface on the private network doesn't need a registered IP address. Instead, it can use a
private IP address along with all of the other network hosts on the private network. The following
address ranges have been reserved for private networks:
• Class A: 10.0.0.0 through 10.255.255.255
• Class B: 172.16.0.0 through 172.31.255.255
• Class C: 192.168.0.0 through 192.168.255.255
These private IP addresses are non-routable. Routers on the public network are configured, by
default, to not route any of these IP addresses. Because these IP addresses are not registered and
are non-routable, you can use any of these IP addressing schemes on your private network. For
example, if you decided to use a Class A private IP addressing scheme, you could potentially
have millions of hosts on your private network all assigned IP addresses from this IP address
range. When using NAT, only a limited number of registered IP addresses would be required.
The NAT routers will translate the private IP addresses into registered IP addresses.
Suppose we have a host on the private network that has an IP address of 192.168.1.10 and the
private side of the NAT router has an IP address of 192.168.1.1 assigned. This is the default
gateway router for the network segment. When this host, 192.168.1.10, tries to request resources
from an IP address that's not on this network segment, it automatically knows that it needs to
forward that request to the default gateway, which is a NAT router.
When the NAT router receives that request, it strips off the source address of the transmission,
192.168.1.10, and replaces the source address with its public IP address (137.65.7.2). It then
sends it to the public network where the request is routed to the appropriate host on the Internet.
The host on the public network that receives this request, the recipient of the transmission, doesn't see that the request came from 192.168.1.10. Instead, it sees the request coming from the NAT router itself, 137.65.7.2.
The receiving host responds and sends the requested information back. It sends the response back
to 137.65.7.2, so the request returns to the NAT router. The NAT router changes the recipient
address of the transmission to the original requesting host on the private network (192.168.1.10)
and forwards it through its 192.168.1.1 interface to the host that originally requested the
information.
The NAT router may have to process thousands of translation requests at any given time. When
information is returned from the public network, it keeps track of which request goes to which
host on the internal network using Port Address Translation (PAT).
When a request is sent, the NAT router sends it from a randomly selected dynamic port on its
public IP address. The Internet Assigned Numbers Authority (IANA) has assigned the port range
49,152 to 65,535 to be used as dynamic ports. The NAT router picks one of these dynamic ports
and forwards the request out on the public network from the selected port.
Each request that comes through for address translation is assigned to a different dynamic port.
The NAT router keeps a table in memory called the translation table that temporarily maps a
particular port to a particular request from a specific host source IP on the private network. That
way, when the data returns from the host on the internet, it returns to the port from which it was
sent. The NAT router then looks at its translation table and identifies which private host address
is currently associated with that port number. Therefore, it knows that information coming back
from the public network needs to be forwarded to the private network to this particular host with
this IP address.
The NAT router maps the IP address of the originating host to a particular dynamic port and
stores it in the translation table. By doing this, it can keep track of which returning information
goes to which host on the private network.
There are several different network address translation implementations.
Many-to-one NAT

The most common implementation is called network address and port translation, or many-to-one NAT, also called IP masquerade. Many-to-one NAT works just as we described: multiple internal private hosts are mapped to a limited number of registered IP addresses. Many different private hosts are represented by one registered IP address, hence the term many-to-one NAT.
Static NAT

There are other ways to implement NAT. For example, static NAT (also called one-to-one NAT)
allows you to manually configure a permanent one-to-one mapping in the translation table. By
doing this, we map a particular host with a private IP address to a particular port on the NAT
router. This is called port forwarding. For example, we can configure this private IP address to
be permanently mapped to port 50,001.
This has some useful benefits. One limitation of many-to-one NAT is the fact that it's a one-way implementation. Data can come from the internal private network and be sent to the public network, but not vice versa. If we have a web server on our private network and someone out on the Internet wants to connect to it, they can't get through the NAT router because there is a many-to-one relationship between the registered IP address and the various hosts on the private network. The ports that are used to keep track of each host are transient; they change all the time.
With Static NAT, you create a permanent mapping. In this case, 192.168.1.10 is permanently
assigned to this port number. This allows port forwarding. A host on the Internet can establish a
connection to the host on the internal network through this permanent port mapping. The NAT
router knows where this host resides. It has a permanent port assignment.
NAT routers are very useful and have a lot of benefits. They allow us to use a very limited number
of scarce registered IP addresses for an entire network full of hosts. NAT routers also obscure the
private network from the public network to a degree. From the public network's point of view,
all the traffic is coming from the registered IP address on the NAT router.
Security Issues with NAT

The public network can't really see the private network.


An important design consideration to remember when designing a NAT implementation is that
NAT is not a firewall. Sometimes NAT is regarded as a firewall because the public network only
sees the registered IP address of the NAT router. This isn't correct, because NAT only acts as a
translator. Only a real firewall can offer the degree of network security that you need.
Sometimes administrators think that the NAT router completely obscures the private network
from the public network. It does but only superficially. You can use a variety of different IP
protocol analysis tools on the public side of the NAT router to gain information about what the
internal network looks like.
Summary

That's it for this lesson. In this lesson, we talked about network address translation or NAT. We
began by discussing what NAT is and how it works. We talked about some of the benefits of
using NAT and ended by discussing some of the security issues surrounding NAT.
7.3.2 Configuring NAT from the CLI

In this demonstration, we'll configure NAT from the CLI.


Here is the topology that we'll be working with. What we're going to do now is set up NAT or
Network Address Translation. That is where we can take a private IP address inside of our internal
network, and give that IP address Internet access to the real world. We're going to translate a
public-to-private and private-to-public. We start on the inside. Usually, we request a webpage, it
goes out to the public side. That information has to come back through, so we have to translate
that back from public to private. We're going to look at this three different ways. We're going to
look at something called static NAT. We're going to look at one called dynamic NAT. And we're going to look at one called PAT, or port address translation, also known as NAT overload.
The first one I'm going to take a look at is static NAT. Static means one-for-one: one private IP address is mapped to one public IP address, and there's no changing it. Then we're going to take a look at dynamic NAT, which maps many private IP addresses to a few public IP addresses. We may have more private IP addresses inside than public ones in the shared pool out there, perhaps only half as many public addresses, or a quarter.
Last, we're going to look at port address translation or PAT, or you may see it as NAT overload,
which is many-to-one. Meaning we have many private IP addresses to only one public IP address,
but it uses ports.
Static NAT

It appends ports on the end to make this happen. Let's take a look at our static NAT first. I already
have this Router 1 set up. I'm going to go 'sho run', and just to show you the IP addresses here,
you can see we got a 192.168 for our fast Ethernet (that's our private) and we got a 170.100.5.0
serial for our public.
Let's get into Global Configuration. We are simply going to type 'ip nat inside source static', then our inside address, 192.168.1.5, and then the address we're going to translate it to, 170.100.5.5. I'm going to hit 'Enter.' That's all there is to it. We have just set up that
static NAT. We have to tell the router which ones to use. What we have to do now is get into our
interfaces and we're going to get into our F0/0 fast Ethernet first. I'm just going to simply type IP
NAT and say this is the inside address, and then I'm going to our serial interface and type IP NAT
Outside. That's all there is to set up our static NAT. First, we have to set up our mapping. That's
this line right here. If we had more, then we would go in there and do this a couple more times.
Remember, this is a one-to-one mapping. Then we have to tell it are we the inside addresses,
which the fast Ethernet is because it's on the inside of our network. That is our private side. Then
we have our serials which is the outside. It knows to map the inside to the outside and vice versa
when that information is coming back through.
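Pulling those commands together, a minimal sketch of the static NAT configuration from this demonstration might look like the following. The FastEthernet0/0 and Serial0/0 interface names are assumptions based on the narration; substitute whatever interfaces your router actually uses.

! One-to-one mapping: inside local 192.168.1.5 translates to inside global 170.100.5.5
ip nat inside source static 192.168.1.5 170.100.5.5
!
! Mark which interface faces the private network...
interface FastEthernet0/0
 ip nat inside
!
! ...and which one faces the public network
interface Serial0/0
 ip nat outside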
Dynamic NAT

Let's jump over to Router 2 and take a look at dynamic NAT. We're over here at dynamic NAT
on router two and this is a little bit different. What we want to do here is set up a pool of addresses,
not a one-to-one mapping. The IP addresses are somewhat the same. The outside IP addresses
have to be a little different because I'm tying all three of these into our cloud router. What we're
going to do now is get into global config. We're going to say IP NAT Pool. We have to give it a
name here so we can say 'My Pool' or whatever you want to call it. We're going to say the pool
of addresses. These are the outside addresses that we have available to us to use. We're on the 200.100.50.0 network. We can't use .1 because that's on the serial interface, so we use 200.100.50.10 to 200.100.50.15. We have that range, and we'll say netmask 255.255.255.224. I'm going to hit 'Enter.' There we go. We've set up our pool of addresses from 50.10 to 50.15 with that subnet mask. Next we have to say 'ip nat' just like we did with static NAT, but now we're going to say 'inside source list 1' (or whichever list number it is), and then we point back to the pool that we created: 'pool MyPool'. I missed a keyword there at first; the full line is 'ip nat inside source list 1 pool MyPool'. Hit 'Enter.' What we need to do now is set up an access
list. This access list allows us to say who has permission. We have to explicitly give permission,
so we're going to say access List and we're going to say number 1, because that is the list. If you
look right here this is the list that we created, List 1, that's where it's getting this information
from. We'll say List 1. Say 'permit' because that's what we want to allow them to do. Then we're
going to permit them on our internal addresses, which is the 192.168.1.0 network with a wildcard mask of 0.0.0.255. Go ahead and hit 'Enter'.
What we have to do is go in here and get into our interfaces just like we did earlier, and say which
ones are the inside and which ones are the outside. We'll get into our fast Ethernet and that one
is going to be our internal address. IP NAT Inside. Then we have to get into our serial interface
which is our IP NAT Outside. There we have it. That is setting up dynamic NAT. This is a little
bit different than static. Remember, static is that one-to-one. Here we created a pool. We defined
a range right here. We did from 10 to 15 with a subnet mask 255.255.255.224, which gives us
about 30 addresses to play with. We're using one for the router, so we had to be careful with that.
We just defined six addresses here: 10, 11, 12, 13, 14 and 15. We created our 'ip nat inside source' line right here and identified it with list 1 because it ties back to this access list, which is permitting everybody on the internal IP addresses. Then we applied them to our interfaces. We said which one was inside and which one was outside, so the translation could take place properly.
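For reference, here is a minimal sketch of the dynamic NAT configuration described above. The FastEthernet0/0 and Serial0/0 interface names are assumptions based on the narration.

! Define the pool of public addresses that inside hosts can borrow
ip nat pool MyPool 200.100.50.10 200.100.50.15 netmask 255.255.255.224
!
! Access list 1 decides which inside source addresses are allowed to be translated
access-list 1 permit 192.168.1.0 0.0.0.255
!
! Tie the access list to the pool
ip nat inside source list 1 pool MyPool
!
! Identify the inside and outside interfaces
interface FastEthernet0/0
 ip nat inside
interface Serial0/0
 ip nat outside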
NAT Overload

That was dynamic NAT.


Let's jump over one more time to Router 3 and take a look at NAT overload or PAT. We're over
here on Router 3 now. Our IP addressing, the public IP address now is 170.100.10.1 for our serial.
The public side is going to change just a little bit more. Again, get into global config. We got to
create our pool again, just like we did for dynamic NAT. Except when we create this pool, we're going to see a little bit of a difference, because we're not really going to define a range. We're just going to give it one IP address. Remember, PAT is many-to-one, or everybody-to-one, because we only have one public IP address. Let's say 'ip nat pool' and we'll give it a name of 'MyPool' again. This is our public side, so we're going to say 170.100.10.5. We still have to define a range, so the end of the range is also 170.100.10.5. That's not a typo; it's correct. Remember, we want to give them just one IP address. Subnet mask 255.255.255.224. Hit 'Enter'. Line 1 is done. We need to create the line that ties back to the access list. We haven't created that access list yet, but we can go ahead and type this line out. We'll say 'ip nat inside source list' and we'll give it list 1 again. We'll say 'pool' and this is MyPool from
the line we just typed. Then this important part for PAT right here is this keyword called
'overload'. It's very important, because that is what turns on the port address translation for us. If
we don't turn that on by typing it out, the only thing we would have is one IP address and our
dynamic NAT range. Don't leave this keyword out or you won't have port address translation.
We'll go ahead and hit 'Enter' and now we can go ahead and create our access list. It is Access
List 1 because that's what we numbered it in the line before.
We're going to say 'permit' and our internal network is 192.168.1.0 with wildcard mask 0.0.0.255. That means everybody on the inside is allowed to use this NAT overload that we've created. We have to again apply it to the interfaces. On the fast Ethernet interface we type IP NAT Inside, and then on our serial interface, that's IP
NAT Outside. So there we go. We have created our pool up here at the top. The cool thing about
this one for PAT is, remember our pool only has one IP address we can use. We said 10.5. Then
we created our source list where our inside source and we tied it to Access List 1. Don't forget
the keyword 'overload'. That is the important part when it comes to port address translation. We
created our access list saying who is allowed to use this port address translation, or this NAT
implementation that we have. Then we tied it to our interfaces, saying whether each one is the inside interface or the outside interface.
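Here is a minimal sketch of the NAT overload (PAT) configuration from this demonstration, again assuming FastEthernet0/0 and Serial0/0 as the inside and outside interfaces.

! The pool contains a single public address; the start and end of the range are the same
ip nat pool MyPool 170.100.10.5 170.100.10.5 netmask 255.255.255.224
!
! Permit the internal network to be translated
access-list 1 permit 192.168.1.0 0.0.0.255
!
! The 'overload' keyword is what enables port address translation
ip nat inside source list 1 pool MyPool overload
!
! Identify the inside and outside interfaces
interface FastEthernet0/0
 ip nat inside
interface Serial0/0
 ip nat outside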
Summary

In this demonstration, we set up static NAT, dynamic NAT, and NAT overload, or port address translation.
7.3.3. Configuring NAT

In this demo, we're going to discuss NAT on our network security appliance.
We're already logged in once again to our Security Appliance Configuration Utility.
IPv4 WAN Configuration

We'll begin by looking at the WAN configuration on the network security appliance.
If we look under IPv4 Config and scroll to the bottom, you'll notice that we're using a static IP
address--192.168.5.1. That may be a little confusing because that's a private IP address, but we're
actually utilizing this appliance in a test network, and so that's a simulated WAN address, even
though it appears to be a private address.
IP Alias

In this case we have assigned 192.168.5.1 to our WAN interface, but we can also add IP Aliases
to our WAN interface. In addition to 5.1, we've also added 192.168.5.2 as an additional IP Alias
to the WAN interface, and also 5.3. Each of those three addresses is available for use on the WAN interface.
Routing

In the network security appliance, we configure NAT underneath Routing. We'll look under
Routing, and underneath Routing you'll see that the Routing Mode between the WAN and the
LAN is actually defaulted to NAT.
Our network security appliance by default, since it's for a small business type of environment,
uses one IP address that's valid for the outside world and then allows you to use the internal
addressing throughout the network. You can also turn NAT off and use classic routing, and in
that case you'll use the classic routing tables and RIP, and you can set Static routes, and you can
set Dynamic routes just as you would in a normal router. For our purposes, NAT is enabled and functioning to take our one (and now two and three) IP addresses on the external interface and translate them to internal addresses.
IPv4 Firewall Rules

In the network security appliance, the way that you configure the NAT is through firewall rules.
Firewall rules incorporate the NAT settings as part of the configuration. Because the network
security appliance defaults to using NAT, the functionality for NAT configuration is built right
into the firewall rules.
LAN to WAN

First we're going to look at rules that we created earlier-- first is a LAN to WAN rule. We'll click
on Edit, and in this LAN to WAN rule where we're going from the secure LAN to the unsecure
internet, we have the ability to set our source NAT settings. And what we are choosing when we
choose our source NAT settings is which IP address on the external interface is going to be used
for NAT-- which IP address will be translated. We're talking about network address translation,
and so the internal IP addresses will be translated to an external IP address, and so we can choose
which one that will be. We can choose the WAN Interface, which is 5.1, or we can choose Single
Address, which allows us to choose one of the two IP Aliases that we created--either 5.2 or 5.3.
We'll go ahead and choose 5.2 as the address interface that we're going to be using for LAN to
WAN communications. This is very similar to Dynamic NAT in the traditional NAT world.
WAN to LAN

All right, so we've changed that.


Let's take a look at one of our WAN to LAN. In this case we'll use our WAN to DMZ firewall
rule that we created earlier. If we look at this, this is coming from the unsecure network to one
of our more secure networks--our DMZ. If we scroll down, here we have the option to set
Destination NAT Settings. We set an internal IP address for the internal server that we're going
to be hitting, and we also in this case enabled port forwarding to translate the port number. But
the important piece of this is, once again, which of the IP addresses that are available to us on
the external side is the IP address that's going to translate to this internal server.
This setting is very similar to what we would call Static NAT in the traditional NAT world. We can select 192.168.5.3 as the server address that you would see on the outside, but that translates to 172.16.2.100 on the inside. When we address traffic to 192.168.5.3 and make an HTTP request on port 80, that will become an HTTP request inside of our network, in the DMZ, to 172.16.2.100 on port 8080.
In this way, we can configure settings that are very similar to static NAT on the WAN to LAN rules, and settings that are very similar to dynamic NAT on the LAN to WAN rules, and we can use IP Aliases so that we have more addresses than just our WAN interface available if we want to access a specific server using a specific IP address from the external side.
7.3.4. Configuring Port Forwarding

In this demonstration, we'll look at port forwarding. I've opened up a web browser to my wireless
Internet router in my home. All routers have a port forwarding feature these days unless you have
a really old home wireless router. But any you buy nowadays is going to have this feature. I can't
tell where the port forwarding feature is found, because each vender designs their color scheme
and dropdown boxes differently, but the underlying process of what's going on is the same. I'm
using an ASUS router here. I've got another Cisco router that we could've used, but I just went
ahead and tied into this one, because they're all going to act the same as far as how you're going
to configure them. What we're trying to do is allow outside people to reach our internal services.
We looked at network address translation where we can have private IPs to public IPs and public
to private, and have that translation. We have private networks for a reason. They protect our
resources on the inside, they protect our host and our infrastructure. But what if we wanted to
run a web server inside your home? You wanted to host your own web page? Your ISP has not
given you a whole bunch of public IP addresses. It's not given you port 80 like you would need
for a web server. What we have to do is trick the system into letting it do what we want to do.
Enable Port Forwarding

Let's take a look here. Right now I'm going to enable port forwarding. You can see I have that
turned on right here in this Yes/No radio button. Then we have this, where it says Famous Server
List on my router. Yours just may say Server List, it may just say Servers. It may not have this.
I'm going to click this drop-down box and you can see some popular services that you can run.
These are services that you're running internal. Maybe you want to run an FTP server for your
small business or for your friends.
Maybe you want to set up an e-mail server for SMTP. Whatever it is you want to set up, let's go
to here and just say we want to set up our own little web server. I'm going to go ahead and click
HTTP and you can see it's populating what we have down here. The server's name: I can type
anything in there that I want. I could say My Personal Web Server, My Web Server, HTTP Server,
WWW Server, whatever. The port range is what's going to be used on the outside. You don't want to mess with this one unless you know explicitly that somebody will be typing a different port in. Let's say you go to www.whereever.com. Your browser knows to tell the server, "I'm trying to get in on port 80." That's just a common, well-known port; web servers run on port 80 by default. If a server doesn't, your browser isn't going to reach it unless you know the specific port number and put that in the browser URL. So don't mess with that; this is for the traffic coming in from the outside. Then your
local IP address is the IP address that you're going to have inside of your network. This is going
to be a private range.
It could be anything inside of your network. Whatever you have your DHCP assigned to inside
of your router will be there. Right now it has put in 1.14 for me automatically because that is one
of the IP addresses that I have inside of my network. You can click the dropdown box and see
different IP addresses that have been assigned and different services that I have running in my
home. You can see I have got a media server, I've got a computer called "Beast" and I've got a
couple other IP addresses out here being used. I'm not nervous about showing you these. Some
people might be, "I know your IP addresses". That's fine. These are all private IP addresses
anyway. You have to get through the firewall and everything first. The important part is the local
port. That is the port that belongs to your server inside. You can say, "I need (whatever your
public IP address is) port 80, to point to 192.168.1.14 and then the local port on the inside of your
network".
Maybe you're running your web server from port 80, maybe you're running it from port 8080 or
40000, however you have it set up inside of your configuration. That's beyond the scope of this
video. If you're good enough to get your web server set up, you're going to know about what port
you're running it on. This is where you would type that port in. I'll just go ahead and leave it at
80 because if you set up an Apache or IIS, by default it's going to be running on port 80 unless
you explicitly go in there and change it. What we've done so far is we've said we want that
service, HTTP Server, from port 80. That's on the outside, the outside of our modem. That's going
to point to 192.168.1.14 on port 80, and those two boxes are on the inside of our network. I can
go ahead and hit 'ADD.' It's added that service for me. If someone wants to access that internal
service they are going to have to know my public IP address which I would have to know and
then give to them. They know as soon as they go to that IP address from their browser, then it's
going to redirect when it hits my router to this IP address and to this port right here. They can
actually see my web server on the inside of my network. Pretty cool, right? That alleviates the problem of needing lots of public IP addresses at your home in order to get these services out to the rest of the world.
Summary

That's how we would get in and set up that port forwarding on a home router.
In this demonstration, we configured port forwarding on a wireless router.
7.3.5 NAT Facts

Network Address Translation (NAT) allows you to connect a private network to the Internet
without obtaining registered addresses for every host. Private addresses are translated to the
public address of the NAT router:
• Hosts on the private network share the IP address of the NAT router or a pool of addresses
assigned for the network.
• The NAT router maps port numbers to private IP addresses. Responses to Internet requests
include the port number appended by the NAT router. This allows the NAT router to forward
responses back to the correct private host.
• Technically speaking, NAT translates one address to another. Port address translation
(PAT) associates a port number with the translated address.
• With only NAT, you would need a public address for each private host. NAT associates a
single public address with a single private address.
• PAT allows multiple private hosts to share a single public address. Each private host is
associated with a unique port number on the NAT router.
• Because virtually all NAT routers perform PAT, you are normally using PAT and not just
NAT when you use a NAT router. (NAT is usually synonymous with PAT.)
• NAT supports a limit of 5,000 concurrent connections.
• NAT provides some security for the private network, because it translates or hides private
addresses.
• A NAT router can act as a limited-function DHCP server, assigning addresses to private
hosts.
• A NAT router can forward DNS requests to the Internet.
• The following are three types of NAT implementation:
Type Description
Dynamic NAT Dynamic NAT automatically maps internal IP addresses with a dynamic port
assignment. On the NAT device, the internal device is identified by the public IP address and the
dynamic port number. Dynamic NAT allows internal (private) hosts to contact external (public)
hosts, but not vice versa—external hosts cannot initiate communications with internal hosts. This
implementation is also sometimes called Many-to-One NAT, because many internal private IP addresses are mapped to one public IP address on the NAT router.
Static NAT (SNAT) Static NAT maps a single private IP address to a single public IP address on
the NAT router. Static NAT is used to take a server on the private network (such as a web server)
and make it available on the Internet. Using a static mapping allows external hosts to contact
internal hosts—external hosts contact the internal server using the public IP address and the static
port. This implementation is called One-to-One NAT, because one private IP address is mapped
to one public IP address.
In addition to static NAT, the term SNAT also means source NAT, stateful NAT, and secure NAT.
Although the terms vary, the function is the same.
One commonly used implementation of static NAT is called port forwarding. Port forwarding
allows incoming traffic addressed to a specific port to move through the firewall and be
transparently forwarded to a specific host on the private network. Inbound requests are addressed
to the port used by the internal service on the router's public IP address (such as port 80 for a web
server). This is often called the public port. Port forwarding associates the inbound port number
with the IP address and port of a host on the private network. This port is often called the private
port. Based on the public port number, incoming traffic is redirected to the private IP address and
port of the destination host on the internal network.
Port forwarding is also called Destination network address translation or DNAT. (A configuration sketch for port forwarding is shown just after this table.)
Dynamic and Static NAT Dynamic and static NAT, where two IP addresses are given to the public NAT interface (one for dynamic NAT and one for static NAT), allows traffic to flow in both directions.
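On a Cisco router, the port forwarding described above is typically configured as a static NAT entry that includes the protocol and port numbers. The sketch below is an illustration only: the 192.168.1.14 web server address comes from the home-router demonstration earlier, while the 203.0.113.10 public address is a hypothetical placeholder.

! Forward inbound TCP traffic arriving at the public address on port 80
! to the internal web server at 192.168.1.14 on port 80
ip nat inside source static tcp 192.168.1.14 80 203.0.113.10 80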
When connecting a private network to the Internet through NAT, addresses on the network need
to be taken from the predefined private IP address ranges. These address ranges are guaranteed
not to be used on the Internet and do not need to be registered. The following are the private IPv4
address ranges:
• 10.0.0.1 to 10.255.255.254
• 172.16.0.1 to 172.31.255.254
• 192.168.0.1 to 192.168.255.254
7.4. Routing Optimization

As you study this section, answer the following questions:


• What is administrative distance?
• What is the difference between automatic and manual summarization?
• What are the benefits of route summarization?
• What is a first-hop router?

7.4.1 Administrative Distance

When optimizing network routing, you need to take into consideration the concept of
administrative distance. The administrative distance is a value that is used by routers to make
routing decisions if they receive multiple sources of routing information about a remote network.
This helps routers determine which source of routing information is the most trustworthy, which
will then determine which source is actually used to populate the routing table.
Recall that different routing protocols use different metrics to determine which is the best path
to a particular network.
RIP Routing Decisions

For example, RIP and OSPF each use different metrics to determine the best path. From the perspective of router 1 here, RIP prefers this path to get to network X down here because it has a lower hop count. A packet addressed to network X would only need to pass through one router to reach network X from router 1, versus this other route going around the other direction, which would require two hops.
OSPF Routing Decisions

OSPF, on the other hand, assigns a cost metric to each link based upon its bandwidth. In this situation, we have two 100-megabit links and one 10-megabit link. As a result, OSPF actually prefers the longer route to network X because of the faster connections; the two 100-megabit-per-second links result in a lower total metric than the slower 10-megabit-per-second link.
Suppose for some reason that your networking environment has both of these routing protocols configured and each one is providing a very different route to reach network X. If RIP prefers this path, but OSPF prefers this other path, then router 1 has to make a choice: which information should the router trust? That's where administrative distance comes into play.
Administrative Distance Values

A default administrative distance value has been assigned to each source of routing information.
If a router receives routing information from multiple sources, it will trust the one with the lowest
AD value. The AD default values are shown here.
As you can see, the most trustworthy source of information is a route that is directly connected
to the router itself. There's an Ethernet cable plugged into the router that goes directly to that
particular network. In this case, a locally connected link that is connected directly to the router
itself will be assigned a default administrative distance value of 0. This type of link has the
highest degree of trust and therefore it's going to be preferred over any routes that are discovered
using a routing protocol. Likewise, a statically configured route, has an administrative distance
of 1.
Example

For example, if we access the configuration of router 1 and we manually configure a static route
to network x, that route would be preferred over any routes that were discovered by any other
routing protocol. Basically, the router assumes that you actually knew what you were doing when
you created that static route.
The administrative distance assigned to a route discovered using a routing protocol will actually
vary based upon which protocol was used. For example, any route discovered using the OSPF
protocol will be assigned an administrative distance value of 110 by default. A route discovered
using the RIP protocol will be assigned a default administrative distance value of 120. Be aware
that these are default administrative distance values. If you need to, you can customize them to fit your particular network needs. The highest possible administrative distance value is 255; a route with that value is considered untrusted, and the router will not use it.
In this scenario, router 1 is getting routing information from both RIP and from OSPF. Which
route is it going to trust? We know that OSPF has a default administrative distance of 110, while
RIP has an administrative distance of 120. Since 110 is lower than 120, OSPF is considered the
most trustworthy source of information. Therefore, for this reason, the route provided by OSPF
will be the one that is actually added to the routing table of router 1, while the route discovered
by RIP won't be added at all. As a result, all traffic intended for network x from router 1 will
follow the route specified by OSPF. But, if OSPF were not running, then the RIP route would be
the preferred route and would be added to the routing table instead.
Be aware that it's really unlikely that a single organization is going to have multiple routing
protocols running within the same network. Instead, what they'll probably do is configure all of
the routers to use the same protocol. There are scenarios where an organization may actually
need to use multiple routing protocols. Let's suppose we have two different companies and they
connect their networks together so that they can exchange information. In order to do this, they
also need to exchange routing information with each other.
Route Redistribution

If one company uses OSPF and the other uses RIP, then both OSPF and RIP protocols have to be
used.
This is one way to do things, but there is actually a better way. Instead of using both protocols
on both networks, what we can do is instead install a border router between the networks. This
border router takes all the routes that are learned by OSPF on one network and advertises them
as RIP routes on the other network and vice versa. It takes the routes learned by RIP on the other
network and advertises them on the other side as OSPF routes.
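On a Cisco border router, this kind of route redistribution is configured under each routing process. A minimal sketch is shown below; the OSPF process ID of 100 and the RIP seed metric of 3 are arbitrary example values, not values from the lesson.

! Advertise routes learned from RIP into OSPF
router ospf 100
 redistribute rip subnets
!
! Advertise routes learned from OSPF process 100 into RIP,
! assigning them a hop-count metric of 3
router rip
 redistribute ospf 100 metric 3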
Summary

This process is called route re-distribution. That's it for this lesson.


In this lesson we discussed the role of administrative distance in routing. Remember,
administrative distance determines the trustworthiness of a particular source of routing
information. The lower the AD number, the more trusted that source is and that source of routing
information would be the one that's preferred in an environment where multiple routing protocols
are all active at the same time. The routes it provides will be the ones that are actually added to
the router's routing table, the others will be ignored. We ended this lesson by talking about route
re-distribution.
7.4.2 Route Summarization

In this lesson, we're going to discuss Route Summarization, which is also sometimes called Route
Aggregation.
Route Summarization provides us with a way to represent a collection of many routes with a
single Summary Route in a router's routing table which makes routing more efficient.
Example 1

Let's take a look at how it works.


In this network, we have two physical locations separated by two routers and a WAN connection.
Campus 1 is on the left and Campus 2 is on the right. Suppose we have several subnets defined
in Campus 1. The first subnet is 10.1.1.0, the second is 10.1.2.0, the third is 10.1.3.0, and so on--all the way down to 10.1.100.0. All of these subnets use a 24-bit mask. Therefore, the first three octets represent the subnet address and that's what we use for making routing decisions.
We also have a large number of subnets defined in Campus 2. The first subnet is 10.2.1.0, the
second is 10.2.2.0, the third is 10.2.3.0, and so on--all the way down to 10.2.100.0. As with the
networks on the left, all of these subnets also use a 24-bit mask.
If you want complete visibility between both campuses, then all the networks on the left have to
be reachable by the networks on the right and vice versa. Notice that all the subnets in Campus
1 begin with the same two octets: 10.1. All the subnets in Campus 2 also begin with the same
two octets: 10.2. There are no 10.2 networks in Campus 1; likewise, there are no 10.1 subnets in
Campus 2.
In this scenario, if the host IP address starts with 10.2, then we know that it has to be routed to a
subnet somewhere over here in Campus 2. Likewise, if the router in Campus 2 receives a packet
that's being sent to a host whose IP address starts with 10.1, then we know that it has to be forwarded to the router in Campus 1.
In this situation, we don't actually need separate entries in the routing table of each router for all of the individual subnets on the other side.
Paradigm of Route Summarization

For example, the router in Campus 1 doesn't need 100 routing entries for all of the subnets in Campus 2. Likewise, the router in Campus 2 does not need 100 routing entries for all of the subnets in Campus 1.
Instead, each router only needs one single entry for the summarized network in the other campus.
In this case, the Campus 1 router just needs one entry in its routing table for the 10.2.0.0/16
network. Accordingly, the Campus 2 router just needs one entry in its routing table for the
10.1.0.0/16 network. These are Summarized Routes, which are also sometimes called
Aggregated Routes.
Example 2

For example, the Summarized Route in the routing table on the left represents all of the subnets on the right because they all begin with the same two octets: 10.2. Because only the first two octets are used for routing, we only need to use a 16-bit mask. The same is true for the routing table on the router on the right.
Route Summarization optimizes the process of sharing routes by the routing protocol in use. In
this scenario, each router only needs to share a single route instead of 100 routes. To reach
convergence with each other, these two routers only need to share a grand total of two routes.
Without summarization, convergence would require the sharing of 200 routes.
Let's suppose that a user in Campus 1 needs to send a packet to a host with an IP address of
10.2.2.100. The packet will first get sent to the local router in Campus 1 and that router will
consult its routing table trying to locate a route that matches the network address of the
destination host. It should find this single summary route in the routing table that was sent to it
by the router in Campus 2. The network address in the packet will match the entry in the routing
table because the first two octets in the packet, 10.2, match the first two octets of the route, so it
will forward the traffic over to the router in Campus 2. This router has entries in its routing table
for all of its local subnets, so it will forward the packet to the specific subnet where that
destination host resides.
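As a concrete illustration, the summary route in the Campus 1 router could also be expressed as a single manually configured static route, if you weren't relying on a routing protocol to advertise it. The 172.16.0.1 next-hop address below is a hypothetical WAN address for the Campus 2 router, not a value from the lesson.

! One summary entry covers every 10.2.x.0/24 subnet in Campus 2
ip route 10.2.0.0 255.255.0.0 172.16.0.1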
Benefits of Route Summarization

As you can see, Route Summarization not only optimizes the sharing of routes between routers,
it also makes router lookups from the routing table much faster. Instead of searching through
hundreds of routes trying to find the destination subnet, the router only has to look at one route
to find a match.
Summary

That's it for this lesson. In this lesson, we discussed how Route Summarization works. Route Summarization eliminates the need to share individual routes for every subnet. Instead, we represent all the routes in a site by using fewer bits in the subnet mask. Configuring summarized routes can add efficiency to router advertisements, as well as to routing table lookups.
7.4.3 High Availability

In this lesson we're going to talk about the First Hop Redundancy Protocol, or FHRP. FHRP is
really useful. Basically, it allows us to create redundant default gateways for a network segment.
With FHRP, we can create redundant default gateway routers, as well as links, such that if one
default gateway or its link becomes unavailable, a redundant router or link can be used instead,
allowing normal network operations to continue.
Single Points of Failure

I want you to carefully examine the diagram shown here and then ask yourself the question,
"Where are the single points of failure in this network?" We have redundant connections between
the switches on both LANs, so we're protected from switch failures. However, notice that there's
only one link between each LAN and its associated default gateway router. As well, there's only
one link between the routers. If any of these three network links were to go down, or if one of
these routers were to go down, then all communications outside of each subnet would be cut off.
In this diagram, we then have at least three single points of failure.
Routing Redundancy

Consider this diagram. Notice that we actually have redundant default gateways now for each
segment: here, here, here, and here. Notice that we also have redundant links to the default
gateways on each side as well. In addition, we also have redundant links between routers. We
have a link here and we have a link here. In this configuration, the failure of one router or link
could be compensated for by the other router on the same network. However, this actually
presents a problem. As you know, when you configure the IP protocol settings on a workstation,
you have to supply the IP address of the default gateway router.
In this scenario, we have a default gateway here and a default gateway here. Same thing on the
other side. Because of the rules of IP addressing, each of these default gateway routers must have
a different IP address assigned to its LAN interface. If we were to have one of these routers fail
for some reason and we wanted to begin using the other redundant router, then we would have
to manually reconfigure each host on the subnet with a different default gateway router address
in order for them to start using the new redundant router. Frankly, this would take a very long
time.
In fact, it's very likely you could have the problem with the offline router fixed before you ever
finished reconfiguring all the hosts on the subnet to use the new default gateway router address.
But all is not lost. To get around this problem, you can use a First Hop Redundancy Protocol to
automate the process.
First Hop Redundancy Protocol (FHRP)

This allows network communications to continue using the redundant router while you
troubleshoot whatever problems have occurred with the offline router. Using FHRP, you don't
have to manually configure your host to use a different default gateway. This is possible because
FHRP configures our redundant default gateways on the same subnet to share a virtual IP address,
and in some cases the same virtual MAC address. Here's the key point you need to remember.
When you configure the default gateway address on your network host, you're going to specify
the virtual IP address created by the First Hop Redundancy Protocol.
Because it's a virtual IP address, FHRP can dynamically determine which routers the traffic
should actually go to. For example, if one gateway were to go down, the redundant router that
shares the same virtual IP address can then take over for the failed device. Because we're using
a shared virtual IP address, the network hosts have no idea that anything has happened when a
failover occurs. They just keep on sending packets to the same default gateway IP address that
they've always used. No reconfiguration is needed. In order to do this, the routers have to
exchange messages with each other periodically, basically asking, "Hey, are you still up? Are you
still up? Are you still up? How about now?" This allows the routers to negotiate with each other
and to agree as to what each router is going to do at any point in time.
For example, if one of these routers fails, then the other routers can use these messages to
determine which router is then going to take over for the router that failed. Be aware that FHRP
is not a single protocol. It's actually a family of several different protocols that you can pick from.
You can pick whatever one it is you want to use. First, we have the Hot Standby Router Protocol,
or HSRP. We also have its close cousin, called the Virtual Router Redundancy Protocol, or VRRP,
which is basically the same thing as HSRP with a few improvements. And then finally we have
the Gateway Load Balancing Protocol, or GLBP. GLBP is the latest First Hop Redundancy
Protocol, and it has a lot of improvements over HSRP and VRRP.
With that in mind, let's spend some time looking at each of these protocols in a little more detail,
starting with HSRP.
Hot Standby Router Protocol (HSRP)

HSRP uses an active standby model for redundant routing.


Using HSRP, multiple routers are actually configured as default gateways, as we saw before.
However, only one is actually allowed to function as a default gateway at any given point in time.
This is called the active router. The other routers are in a dormant state. These are the standby
routers. They really don't do anything until something happens to the current active router. When
you're using HSRP, each default gateway will be assigned the same virtual IP address and the
same virtual MAC address. You need to determine which virtual IP address it is that you want to
use. And of course, the virtual IP address you choose to use must be a valid IP address on the
LAN segment to which the LAN interface on each router is attached.
But that virtual IP address has to be unique from any IP addresses that may be configured on the
standard interfaces on each router. We're basically just following the standard IP addressing rules,
where each interface has to have a unique IP address, whether it's a physical interface or a virtual
interface. The virtual MAC address, on the other hand, is automatically assigned for you. With
HSRP, only the active router actually uses the virtual IP and the virtual MAC address at any given
time. The rest of the routers, the standby routers, don't. They simply wait for their turn to take
over, should the active router go down for some reason.
After you've finished configuring HSRP on your redundant routers, all of them will begin
exchanging HSRP messages with each other to make sure that the active router is up, running,
alive, and well. If this active router stops responding for some reason, then the redundant routers
will use these HSRP messages to decide which standby router is going to step in and take over.
If the active router goes down, then the standby router is going to take over because it didn't get
any response to the HSRP messages that were sent. This brings up an important issue. The
switches to which these routers are connected here need to know that the switch port to which
the default gateway router is connected has now changed. We're still using the same virtual MAC
address, remember, but the actual switch port that that MAC address is connected to has changed.
It was connected over here.
Now that MAC address needs to be associated with this new switch port over here. In order to
accomplish this, the new active router will send a gratuitous ARP frame to the switch to which
it's connected. It's called gratuitous because it's sending an ARP response when no ARP request
was actually sent. This is done for a very important reason. It updates the CAM tables in the
switches. All the switches know which switch port the default gateway router is connected to,
because now the port to which that MAC address is associated has been updated.
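To make this concrete, here is a minimal sketch of what an HSRP configuration might look like on one of the redundant routers. The interface name, addresses, and group number are illustrative assumptions; the second router would use the same 'standby 1 ip' virtual address with its own physical address and, typically, a different priority.

interface FastEthernet0/0
 ip address 192.168.1.2 255.255.255.0
 ! All hosts use 192.168.1.254 (the virtual IP) as their default gateway
 standby 1 ip 192.168.1.254
 ! A higher priority makes this router the active router
 standby 1 priority 110
 ! Allow this router to reclaim the active role when it comes back up
 standby 1 preempt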
Virtual Router Redundancy Protocol (VRRP)

With that, we need to move on to VRRP, the Virtual Router Redundancy Protocol.
There's actually not a whole lot to say about VRRP, because it works in almost exactly the same
way as HSRP. The same principles apply. However, the Gateway Load Balancing Protocol, or
GLBP, works very differently from VRRP or HSRP.
Gateway Load Balancing Protocol (GLBP)

Instead of using an active standby model, GLBP actually balances the load between multiple
redundant default gateways, using an active/active model. It's important to note too that when
we're dealing with GLBP, each of the redundant routers is called a forwarder. If you see
forwarder, we're just talking about the gateway, the router itself. With GLBP, all of the redundant
routers are active at the same time. GLBP actually uses a lot of the same concepts as HSRP and
VRRP, such as a shared virtual IP address.
In fact, just as with HSRP and VRRP, all of our network hosts will use the virtual IP address as
their default gateway router address. However, GLBP does not use the concept of a shared virtual
MAC address, as HSRP and VRRP do. Instead, each router has its own unique virtual MAC
address.
In order to manage the load between redundant routers, GLBP designates one of them as the
active virtual gateway, or the AVG. The role of the AVG is to respond to all ARP requests for the
shared virtual IP address. And here's the key thing you need to understand. Whenever an ARP
request for the default gateway is received by the active virtual gateway, it will respond to some
of the ARP requests with one router's virtual MAC address, and then it will respond to other ARP
requests for the same IP address with a different MAC address. Instead of the MAC address of
the first redundant router, it will respond with the MAC address of the second redundant router.
If there was a third one and a fourth one, it would respond accordingly.
By doing this, some of our network hosts down here are going to send frames to the MAC address
of one of the redundant routers, while other hosts are going to send frames to the MAC address
of the other router. By doing this, we balance the load between our redundant gateway routers,
eliminating single points of failure. If an ARP request arrives at a router that is not the active virtual gateway, the ARP request will simply be ignored. Only the active virtual gateway will respond to ARP requests for the virtual IP address. As long as these redundant routers are running properly, each one will act as a forwarder for its own virtual MAC address.
However, each one will listen for GLBP messages in order to make sure that the other routers are still working. If one of these routers fails, then the routers that are still up will assume the failed
router's MAC address and continue to forward traffic. We have load balancing as well as failover
with GLBP. It's important to note that all of this is transparent to the end user. It happens behind
the scenes. Host systems on the network have no idea which physical router they're actually
sending packets to.
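For comparison with the HSRP sketch above, a GLBP configuration looks very similar from the administrator's point of view. The addresses, group number, and load-balancing method below are illustrative assumptions.

interface FastEthernet0/0
 ip address 192.168.1.2 255.255.255.0
 ! Hosts still use the shared virtual IP as their default gateway
 glbp 1 ip 192.168.1.254
 ! Distribute ARP replies across the forwarders in round-robin fashion
 glbp 1 load-balancing round-robin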
Summary

Everything just works from the standpoint of each individual workstation.


That's it for this lesson. In this lesson, we introduced you to the need for redundant first hop
routers. We first talked about eliminating single points of failure, as far as our default gateways
for our LAN segments are concerned. Then we talked about different ways in which you can
implement redundant default gateways using the First Hop Redundancy Protocol family of
protocols. We first looked at HSRP, then we looked at VRRP, and then we ended this lesson by
looking at GLBP.
7.4.4 Routing Optimization Facts

Several commonly used methods for optimizing network routing include configuring the
following:
• Administrative distance values
• Route summarization
• Redundant default gateway routers
The administrative distance is a number assigned to a source of routing information (such as a
static route or a specific routing protocol). The router uses this value to select the source of
information to use when multiple routes to a destination exist. A smaller number indicates a more
trusted route. The following table shows the default administrative values for a Cisco router:
Route Source Administrative Distance
Connected interface 0
Static route 1
EIGRP summary route 5
EIGRP internal route 90
IGRP 100
OSPF 110
RIP 120
EIGRP external route 170
You can modify how routes are selected by modifying the administrative distance associated with
a source.
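On a Cisco router, two common ways of doing this are sketched below: appending an administrative distance to a static route (a so-called floating static route that is used only if the dynamically learned route disappears), and changing the distance assigned to all routes from a routing protocol. The addresses, process ID, and distance values are illustrative assumptions.

! Floating static route: AD 130 is worse than OSPF (110),
! so it is installed only if the OSPF route to 10.2.0.0/16 is lost
ip route 10.2.0.0 255.255.0.0 192.168.100.2 130
!
! Change the administrative distance applied to routes learned by this OSPF process
router ospf 100
 distance 115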
Routers can use multiple routing protocols to learn about routes to other networks. Additionally,
there might be multiple paths between any two points. When making routing decisions, the router
uses the following criteria for choosing between multiple routes:
1. If a router has learned of two routes to a single network through different routing protocols
(such as RIP and OSPF), it will choose the route with the lowest administrative distance (OSPF
in this example).
2. If a router has learned of two routes through the same protocol (e.g., two routes through
EIGRP), the router will choose the route that has the best cost as defined by the routing metric
(for EIGRP, the link with the highest bandwidth and least delay will be used).
Another way to optimize routing is to implement route summarization. Route summarization
groups contiguous networks that use the same routing path, advertising a single route as the
destination for the grouped subnets. Keep in mind that summarization:
• Reduces the size of the routing table. A single route to the summarized network takes the
place of multiple routes to individual subnets.
• Speeds convergence. The accessibility of each subnet address is indicated by the
accessibility of the summarized address.
• Retains all necessary routing information, so all networks are still reachable after
summarization.
• Can happen in one of two ways:
Method: Automatic
With automatic summarization, the router identifies adjacent networks and calculates the
summarized route.
• Auto-summarization is supported on classless and classful routing protocols.
• Auto-summarization uses the default class boundary to summarize routes.
• RIP (version 1 and version 2) and EIGRP support auto-summarization; OSPF does not.
• For RIPv2 and EIGRP, you can disable automatic summarization.
Method: Manual
With manual summarization, an administrator identifies the summarized route to advertise. The
specified route includes the summarized subnet address with a subnet mask that includes all
summarized subnets.
Automatic summarization summarizes routes along class boundaries, but only when advertising
those routes onto a network that belongs to a different classful network. Consider the following graphic:

If both routers were using automatic summarization in this example:


• Router A would not automatically summarize routes from the 10.3.1.0/24 or the
10.3.2.0/24 networks when advertising those networks to Router B. This is because subnet
10.2.0.0/16, which connects the two routers, is in the same classful network (10.0.0.0/8) as the
subnets connected to Router A.
• Router B would automatically summarize all routes as 10.0.0.0/8 when advertising routes
on the 12.0.0.0/8 network. This is because the network is a different classful network than the
10.0.0.0/8 network.
Route summarization can also be used to advertise multiple classful network addresses as a single
summarized route. For example, the subnets 192.168.1.0/24 through 192.168.255.0/24 could be
summarized as the single route 192.168.0.0/16.
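As a hedged example of manual summarization on a Cisco router running EIGRP (the autonomous system number 100, the interface name, and the summary address are assumptions for illustration):
Router(config)# interface Serial0/0/0
Router(config-if)# ip summary-address eigrp 100 192.168.0.0 255.255.0.0
Router(config)# router eigrp 100
Router(config-router)# no auto-summary
The first two commands advertise the single 192.168.0.0/16 summary out that interface instead of each individual /24 subnet; the last two disable automatic summarization so that only the manually defined summary is sent.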
Network hosts are typically configured with a single default gateway (the first-hop router) to
allow them to communicate outside the local subnet. However, if the default gateway were to
fail, the hosts would be limited to communicating only within the subnet, effectively
disconnecting the hosts from the rest of the network. This is shown in the figure below. Even if
there is a redundant router that could serve as a replacement gateway, there is no dynamic way
for hosts to switch to a redundant default gateway IP address. To use the redundant router in this
situation, users must statically change their default gateway address, which requires them to:
• Realize that the router is down.
• Know the IP address of the redundant router.
• Know how to manually change the IP address of their default gateway.
The First Hop Redundancy Protocol (FHRP) is a fault-tolerant approach that ensures hosts can
communicate outside their local subnet. FHRP allows hosts to dynamically switch between the
main router and one or more redundant routers should an outage occur. By doing this, FHRP
protects against a single point of failure. Using FHRP, a group of two or more routers actively
manage a single virtual router MAC address and IP address (as seen below) as their default router
address. This configuration ensures that if a router fails, a backup router takes responsibility as
the default gateway. With FHRP, LAN clients send traffic to the virtual router, and the physical
router handles the forwarding of that traffic. The difference between the virtual and physical
routers is transparent to clients.

FHRP is not an actual protocol. Instead, it identifies a family of protocols. FHRP includes the
following:
• Hot Standby Router Protocol (HSRP)
• Virtual Router Redundancy Protocol (VRRP)
• Gateway Load Balancing Protocol (GLBP)
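The exact commands depend on the protocol and platform, but as a rough sketch, HSRP might be configured on one of the redundant Cisco routers like this (the interface, group number, priority, and addresses are invented for illustration):
RouterA(config)# interface GigabitEthernet0/0
RouterA(config-if)# ip address 192.168.1.2 255.255.255.0
RouterA(config-if)# standby 1 ip 192.168.1.1
RouterA(config-if)# standby 1 priority 110
RouterA(config-if)# standby 1 preempt
Hosts on the LAN use the virtual address 192.168.1.1 as their default gateway; the second router would be configured with the same standby 1 ip command, its own physical address, and a lower priority so that it takes over only if the active router fails.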
7.5 Routing Troubleshooting

As you study this section, answer the following questions:


• How is it possible for all hosts on a subnet to be configured with the wrong default
gateway address?
• What is the format for the default route entry in a routing table? What purpose does the
default route serve?
• What are the symptoms of a routing loop? How can you identify a routing loop?
• Why might you escalate routing problems that you observe?
• How can proxy ARP settings appear as routing problems?
After finishing this section, you should be able to complete the following tasks:
• View the routing table on a device.
• Trace the path used between two devices through a network.

7.5.1 Routing Troubleshooting

Let's talk about some of the problems you might encounter when troubleshooting routing issues.
We'll do this using a sample network, with multiple internal subnets, connected to the Internet.
Host Can Only Communicate on Local Subnet

The first problem we'll look at is one where a specific device can communicate with other devices
within its own local subnet, but is unable to communicate with any other devices on any other
subnet.
Default Gateway
Be aware that this may not actually be a routing problem at all. The first thing to check is the
default gateway setting on each device. The default gateway identifies the router that is used for
communications outside of the subnet. It's possible that the wrong default gateway address has
been used on multiple computers. The most likely cause of this is a DHCP server that is assigning
the wrong default gateway address to all hosts on the subnet. In this case, hosts will be able to
communicate with each other, but they will not be able to contact the default gateway router.
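A quick way to confirm what a Windows host is actually using (shown here only as a sketch; output details vary by version) is:
ipconfig /all
route print
The first command lists the default gateway the host received from DHCP or static configuration, and the second shows the 0.0.0.0 route entry in the host's routing table that points to that gateway.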
Troubleshooting the Router

If the default gateway address is correctly configured, then you can move on to troubleshooting
the router itself. First, make sure that the router is actually connected to the network. You can use
the route command on the router to view directly connected routes that have been set up. Make
sure that this directly connected network is actually listed, which identifies that the router has a
valid connection to this subnet.
The next thing you look for are other entries that identify routes within your private network. For
instance, if these devices cannot communicate with any other routers on the private network, the
problem could be that this link is down, or it could simply be that the router has not learned about
these routes from the other routers. Using the route command to view the routing table helps you
see at least what networks the router knows about.
ND (Neighbor Discovery)

One possible issue that could cause this to happen could be problems with the Neighbor
Discovery protocol (ND). ND enables routers on the same link to advertise their existence to
neighboring routers and to learn about the existence of their neighbors. Routers use ND messages
to identify the link-layer addresses of neighboring devices that are directly connected to the
router.
ND allows routers to know when neighbors become unreachable or inoperative. To do this, it
periodically sends and receives small hello packets to and from neighboring routers. If hello
packets are not received from a particular router, ND will assume that the router is not
functioning.
Issues can occur when using a large subnet for point-to-point links between routers, particularly
when using IPv6. By convention, we typically use a /64 prefix on each subnet when
implementing IPv6, allowing for a very large number of hosts on the subnet. But, a point-to-point
link between routers only requires two hosts on the subnet, one on each router. As a result, it can
take a very long time for ND to perform address resolution for all possible hosts on the link
subnet. This aspect of ND is amplified when it tries to scan a subnet for connected hosts. Some
Denial of Service attacks use this weakness to exploit routers using a flood of ND data. As a
result, newly connected devices may not be recognized by the other routers they are connected
to for a long period of time.
As a recommended best practice, consider using a very small subnet on the point-to-point link
between routers to reduce ND traffic. The recommendation is to use 127-bit (/127) prefixes on
these links instead of the conventional 64-bit prefix.
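As a sketch of that recommendation on a pair of Cisco routers (the interface name and the addresses, taken from the 2001:DB8::/32 documentation range, are assumptions):
RouterA(config)# interface Serial0/0/0
RouterA(config-if)# ipv6 address 2001:DB8:0:FF::A/127
RouterB(config)# interface Serial0/0/0
RouterB(config-if)# ipv6 address 2001:DB8:0:FF::B/127
Because a /127 prefix contains only two addresses, there is essentially nothing for ND to scan on the link.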
Host Can't Communicate With a Specific Subnet

Let's look at another problem. Suppose a host can communicate with almost every other subnet,
but it cannot contact any device on one specific subnet. In this case, one of the first things you
might do is to check the routing table on the default gateway router and look for an entry for that
problem network in the routing table. If an entry doesn't exist, then you can check the
configuration of the routing protocol, or you can just statically add the route so that the router
can properly route the information.
If you still experience problems, you may need to check the routing table on every router in the
path. Unfortunately, this may not actually be possible because you may not have access to these
devices.
traceroute

If this is the case, check the path to the destination network using the traceroute command from
the host. The traceroute command will show you the path that is used to reach that destination
network.
If there's a problem, the traceroute command may return a partial path, then it will return an error
saying that the destination network cannot be reached. Suppose, for instance, this router appears
to be the point of failure in the path to the destination network. These other routers may have
routes in their routing tables specifying that this is the path to route traffic. But this router may
not have a valid connection or it may not know about that destination network.
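For example, a partial trace run from a Windows host might look roughly like this (the target and hop addresses are invented for illustration):
tracert 198.51.100.25
  1    <1 ms    <1 ms    <1 ms   192.168.1.1
  2     4 ms     3 ms     4 ms   10.0.0.1
  3     *        *        *      Request timed out.
The last router that responded (10.0.0.1 in this sketch) marks where to focus your troubleshooting.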
The traceroute command will help isolate the problem.
MTU Black Hole

A broken route to a destination host could be caused by an MTU black hole. This condition is
caused by black hole routers. A black hole router drops packets if the size of the packet exceeds
the Maximum Transmission Unit (MTU) size it can support. We call it a black hole because the
router does not send an error message to the sending host when it drops an oversize packet. In
essence, the packet enters a network 'black hole'.
If you experience an unexplained broken route, you can use the ping command to try to locate
a possible black hole router in the path by including the following parameters along with the IP
address of the destination host you are attempting to reach:
• -f causes ping to send an ICMP echo packet that has the "do not fragment" bit set.
• -l sets the size of the ICMP echo packet.
The results of the ping test using these options will provide you with several key pieces of
information:
• If the MTU size supported by each router in the path is the same size as (or larger than) the
MTU size of the packets being sent from your source system, then you will get successful ICMP
responses from the destination host.
• If there are routers in the path that have been configured with a smaller maximum MTU size, but
they return an appropriate ICMP "destination unreachable" packet, the ping utility will display
an error message indicating "Packet needs to be fragmented but DF set."
• If there are routers in the path that have been configured with a smaller maximum MTU size, and
they do not return an appropriate ICMP "destination unreachable" packet to the sending system,
the ping utility displays a "Request timed out" message.
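For instance, from a Windows host you might probe the path toward 203.0.113.10 (an address used purely for illustration) like this:
ping 203.0.113.10 -f -l 1472
A payload of 1472 bytes plus the 28 bytes of IP and ICMP headers fills a standard 1500-byte Ethernet MTU. If the test fails, retry with progressively smaller -l values; the largest size that succeeds approximates the smallest MTU along the path.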
Routing Loops

The traceroute command can also be used to identify routing loops. A routing loop occurs when
data is being passed back and forth between routers in the path instead of forwarding it to the
destination network. For example, traceroute may show something like this. We'll use letters to
designate destination networks. The path may go from network A to network B to network C to
network D, and then you'll see a series of networks that appear to be repeated. In this case, the
path B to C to D is repeated, forming a routing loop. The router at B passes the packet to C, which
passes it to D, which passes it back to B, and then the cycle repeats through C and D again. Drawn out this
way, it is easier to see that the packets are simply being looped among these routers.
Routing loops almost always indicate a misconfigured router somewhere along the path. Either
routes are not being shared correctly or a router is not configured correctly.
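A hypothetical excerpt of such a trace (hop addresses invented) might look like this:
  4   10.1.2.1
  5   10.1.3.1
  6   10.1.4.1
  7   10.1.2.1
  8   10.1.3.1
  9   10.1.4.1
The same sequence of routers keeps reappearing, which is the signature of a routing loop.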
Here's one example of how a routing loop might form. Suppose you have a private network
connected to the Internet. This network has a web server that can be accessed through the
Internet. The ISP is configured to forward all traffic for that web server to a router at your private
network. If this link is actually down, this known network will disappear from the routing table
of this router. But this router may be configured with a default route (0.0.0.0). It will
receive a packet addressed to the web server, but because the specific route no longer exists in its routing
table, it will treat the packet as an unknown destination and send it back out the default route
toward the ISP. The ISP router receives that packet addressed to the web server and again sends
it back toward your router, and that process repeats itself. When you see a routing loop you may need to just wait
until the down connection is restored if the router is not under your control. If the router is under
your control, you should check the router configuration and the routing tables to resolve the
problem.
Host Can't Access Internet

Other routing problems could cause hosts on your private network to be unable to access anything
on the Internet. In this case, you need to have an understanding of how your network connects to
the Internet. In most cases, your router is connected to another router at your ISP. The problem
with the connection to the Internet could be with your router, or with the ISP's router, or with the
ISP's interface into the Internet. One of the first things to check would be the routing table on
your router that connects you to the ISP. Use the route command and check to make sure that this
entry for this network exists. That will tell you whether this link is up or not. If the link is down,
you need to either troubleshoot your connection, or the ISP needs to troubleshoot the local loop
into your business.
Another thing to check for in the routing table is an entry for the default network. The link to
your ISP may be up, but your router may not be configured to send unknown Internet traffic to
the Internet. To address this, look for a route with a network address of 0.0.0.0 and a subnet mask
of 0.0.0.0.
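If the default route is missing and the router is under your control, it can be added statically. As a hedged sketch on a Cisco router (the next-hop address is an example):
Router(config)# ip route 0.0.0.0 0.0.0.0 203.0.113.1
Afterward, show ip route should report a gateway of last resort and list an entry such as S* 0.0.0.0/0 via 203.0.113.1.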
Occasionally, you may find that the link is correct and the route is correct, but you still cannot
communicate on the Internet. At this point, you could try pinging the interface of the router that
connects you to the ISP. This may be the next-hop router that's recorded for this default route. If
you ping it and you get a response, then you know that the connection to the ISP is valid.
Another thing that may happen is that the ISP may have a web server or e-mail servers on their
network within their private network before it connects to the Internet. You may find that you
can communicate with these servers at your ISP and still not be able to communicate with
anything on the Internet. In this case, all you can do is contact your ISP, let them know the
problems you're experiencing, and have them troubleshoot their link to the Internet. In most
cases, they will already be aware of the problem.
Remote Access Clients
A final situation related to routing involves remote access clients. For example, suppose a remote
access client can connect to the remote access server, but can't access any resources on the private
network. In this case, we have a remote access client that connects to the remote access server.
The remote access server is connected to the private network and allows access to resources
within the private network. If the remote access client cannot establish a connection to the remote
access server, you need to troubleshoot either the client or the server and figure out why that
connection is not granted. If you can connect but can't access any resources on the private
network, there are a few other things to check.
IP Address Assignment

The first thing to look for is the IP address that gets assigned to the client when the connection is
made. When a connection is made, the remote access server, or a DHCP server, typically assigns
the client an IP address appropriate for the private addressing scheme. For instance, suppose the
client in this case is assigned an IP address of 10.1.0.15 with a 16-bit subnet mask, and let's say
that the same subnet address is used on the private network. In this case, because the remote
access client has the same subnet address as the private network, the remote access
client appears to be connected to the same physical media as the private network. However, it's
actually separated from it by the remote access server over a different kind of connection.
Proxy ARP

In this case, the remote access server should be providing a service called Proxy ARP. With Proxy
ARP, devices on the private network that need to contact the remote access client, such as servers
that need to respond back to requests from the client, must know the MAC address associated
with the IP address of the client. Because the client is not actually on the same physical segment, and
because modems do not have MAC addresses, the remote access server acts as a proxy and builds a database
mapping the MAC address of its own LAN interface to the IP addresses of all of its remote
access clients. To make this process work, Proxy ARP must be enabled on the interface that
connects to the private network.
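On many routers and remote access servers, Proxy ARP is an interface-level setting. For example, on a Cisco device it might be enabled on the LAN-facing interface like this (the interface name is assumed for illustration):
Router(config)# interface GigabitEthernet0/0
Router(config-if)# ip proxy-arp
The equivalent setting on other remote access products varies, so check the documentation for the platform you are using.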
Another issue you may encounter involves a remote access client that is given a different address
on a different subnet from the private network. In this case, the remote access server is acting as
a router. You need to verify that routing has been enabled so that packets from one subnet can
pass to the other subnet.
Summary

That's it for this lesson. In this lesson, we discussed troubleshooting routing issues. By properly
identifying the symptoms of a problem, you can generally narrow down the source of most
routing problems to a specific router or a specific link. In many cases, the problems you might
find are not with routers under your control. Even if they are within your own network you might
not be in charge of maintaining those routers. By performing some simple diagnostic tests you
can narrow the scope of the problem and escalate the problem to the administrator who can best
resolve the problem.
7.5.2 Troubleshooting Routing
Just like everything else in IT, or information technology, things can go wrong when we set up
our routing and our switching, and we have our infrastructure all ready to go, or at least we think
we do. What I want to do now is look at a few commands that we can use to help troubleshoot
our wide area connections and our routing in general.
Show Running Configuration (Show Run)

The first thing I want to look at is the simple show run command. What I'm going to do is get
into privileged mode. I'm going to type 'E' in for enable. You can see that we are in privileged
mode now. I'm going to type 'show run' and hit 'Enter.' This is going to show the running
configuration of our router that we have right now. You can see that we have the time stamps
turned on. We're doing some debugging. We have a syslog service set up. I have my host name
set up.
What we want to do is scroll down here. I'm going to hit the 'Enter' key a few times. We're looking
at the interfaces. This is going to be important when it comes to troubleshooting because you
may not be able to send any information. You may not be able to receive information. The first
thing you want to look at is this shutdown state. Are we using these interfaces to send
information?
The only interface we have up right now is this S0/0/0. That is our wide area connection over to
another router right now. You can see we're using encapsulation PPP. That's important, too,
because if the other end is not using the same encapsulation we're using, we won't be able to
communicate. I'm going to go ahead and hit the space bar and go on down. We're getting
into the bottom here, and now we have our lines: our console line, our auxiliary line, and our VTY
lines, the virtual teletype lines that we could use for remote access.
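As a rough idea of what to look for, the serial interface portion of a running configuration might resemble the following (the addresses are assumed for illustration):
interface Serial0/0/0
 ip address 10.10.10.1 255.255.255.252
 encapsulation ppp
An interface that has been administratively disabled would show a shutdown line under its interface heading instead.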
That's the first thing we want to look at: a simple show run, so we can look at the running
config. Another good one that I really like to run all the time is show ip interface brief.
Show IP Interface Brief

This command tells us some good information, and it's very quick. In the first column we have
our interfaces: Fast Ethernet, serials, and any other interfaces that we may have on our
particular device. The second column shows the IP address that has been assigned to each interface.
That should be the same IP address that we saw just a second ago in the show run, or the running
configuration. The two columns we really want to focus on are the status and the protocol. We
want to look for up and up. The status column reflects Layer 1, the physical connection, while the
protocol column reflects Layer 2. We want 'up, up'.
We are good here, right? We can rule out unplugged cables, broken cables, and loose
connections. The protocol goes back to that PPP encapsulation I was talking about just
a second ago. If the other side, for whatever reason, did not configure the Point-to-Point Protocol, we
would have an 'up, down' situation and we wouldn't be able to pass any traffic.
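The output columns look roughly like this (the values shown here are illustrative, not from the actual demonstration):
Interface              IP-Address      OK? Method Status                Protocol
FastEthernet0/0        unassigned      YES unset  administratively down down
Serial0/0/0            10.10.10.1      YES manual up                    up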
Show IP Route

Another popular command that I want to touch on is show ip route. Here we can see the IP
routes, or the routing table, of this particular router. You can see there are no dynamic routes in
here. All we have are our point-to-point connections, which are directly connected. That's what
the C codes on the far left-hand side tell us: these are directly connected networks. This routing
table shows us the roads the router is taking to send traffic. If we can't get from
Point A to Point B, this is a good place to look, because perhaps we have configured something
incorrectly as far as the route, that is, how to get traffic from one side to the other.
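An abbreviated example of what that table might contain on a router like this one (addresses assumed for illustration):
C       10.10.10.0/30 is directly connected, Serial0/0/0
L       10.10.10.1/32 is directly connected, Serial0/0/0
The C entry is the directly connected network and, on newer IOS versions, the L entry is the router's own local /32 address. Dynamically learned routes would appear with their own code letters, such as O for OSPF, D for EIGRP, or R for RIP.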
Ping

Another popular command is 'ping'. That's an echo request and echo reply. What that means is I'm
going to send out an electronic signal, if you will, across the line. If the other side is up and
responding to this request, it will reply. This lets me know that the distant end, or the
next hop in this case, is up. I can rule out that the next router is down or that its
interface is down, because if I can ping you, then I know the connection is good between me and
you. What I can do is say, "ping 10.10.10.2," and I get a success rate of 100%. That's great. That
tells me that the other router in my topology is up and running. It's listening and it's
talking to me.
Traceroute

The last command is traceroute. It uses the same protocol that ping does. It uses ICMP. Traceroute
actually takes information from ping and says, "Okay, this is the first hop. This is the second hop.
This is the next hop. This is the next and the next," and so on, until it finally reaches its
destination.
These hops represent the number of routers that it goes through before it ever gets to the
destination. If we type 'traceroute 10.10.10.2', we won't have many hops, because if you look up
here, you can see that 10.10.10.2 is inside the same subnet, and here it is listed explicitly with a slash
32. It's directly connected, so there are no intermediate routers between this router and
the next one; the destination is one hop away, which makes sense. If we had two or three routers in between, we
would have a couple more lines in our traceroute output.
Summary

That's it for this demonstration.


In this demonstration, we learned about the troubleshooting commands show run, show ip
interface brief, and show ip route, and how to use ping and traceroute.
Keep these troubleshooting commands in mind. These are commands that network admins need
to have on hand all the time, because these are tools we can run to troubleshoot.
7.5.3 Troubleshooting Routing Facts

A general routing problem symptom is the inability to access hosts on a specific network or any
remote network. The following table lists various problems that are typically caused by routing
issues:
Problem: Can't access hosts outside the local subnet
If one or more hosts can communicate only with hosts on the local subnet, the problem is likely
with the default gateway configuration.
• If a single host is having problems, check the default gateway setting on that host.
• If multiple hosts are having problems, check the default gateway setting, and verify that
the DHCP server is configured to deliver the correct default gateway address.
• If all hosts have the same problem and the default gateway setting is correct, verify that
the default gateway server is up and configured for routing.
This issue could also be caused by problems with the Neighbor Discovery (ND) protocol.
• Routers on the same link use the ND protocol to advertise their existence to neighboring
routers and to learn about the existence of their neighbors.
• Routers process ND messages to identify the link layer addresses of neighboring devices
that are directly connected to the router.
• Routers use the ND protocol to periodically send and receive small hello packets to and
from neighboring routers. If hello packets are not received from a particular router, that router is
assumed to be not functioning.
Issues with the ND protocol can occur when a large subnet is used for point-to-point links
between routers, especially when IPv6 is used. By convention, a /64 prefix is used on each subnet
when implementing IPv6, allowing for a very large number of hosts on the subnet. If you use a
standard /64 prefix on the link subnet, the ND protocol will try to perform address resolution for
all possible hosts on the subnet. When this happens, newly connected devices may not be
recognized by other routers for a long period of time.
A point-to-point link between routers is composed of only two interfaces, one on each end of the
link. Therefore, the link subnet needs only to support a maximum of two hosts. As a
recommended best practice, use a very small subnet for the point-to-point link between routers
to reduce ND traffic. The recommendation is to use 127-bit (/127) prefixes on these links instead
of the conventional 64-bit prefix.
Problem: Can't communicate with any host on a specific network
If hosts are unable to contact hosts on a specific subnet, but they can communicate with other
subnets, try the following:
• Verify that the router connected to the subnet is up.
• Use the route command on the default gateway of the local subnet and verify that the
router has a route to the remote subnet. If necessary, configure a routing protocol so that the route
can be learned automatically, or configure a static route.
• Use traceroute to view the route taken to the destination network. Identify the last router
in the path, then troubleshoot routing at that point.
• Check for routing loops in the path to the destination network. A routing loop is caused
by a misconfiguration in the routers along the path, causing data to be sent back along the same
path rather than forwarded to the destination. Routing loops are indicated by:
• Routing table entries that appear and then disappear (called route flapping), often at
regular intervals (such as every minute).
• Routing table entries where the next hop router address oscillates (switches) between two
or more different routers.
A routing loop is also visible in traceroute output as the same sequence of routers being
repeated.
• Check for black hole routers. A black hole router will drop packets when the packet size
exceeds the Maximum Transmission Unit (MTU) size. You can use ping to locate a black hole
router by setting the following parameters along with the IP address of the remote host:
• -f causes the ping utility to send an ICMP echo packet that has the IP "Do not Fragment"
or DF bit set.
• -l sets the buffer (or payload) size of the ICMP echo packet. Specify this size by typing a
number after the -l parameter.
The ping test will provide you with helpful information:
• If the MTU of every segment of a routed connection is at least as large as the MTU size of the
packet being sent, the packet is successfully returned.
• If there are intermediate segments that have smaller MTUs, and the routers return the
appropriate ICMP "destination unreachable" packet, the ping utility displays the message,
"Packet needs to be fragmented but DF set."
• If there are intermediate segments that have smaller MTUs, and the routers do not return
the appropriate ICMP "destination unreachable" packet, the ping utility displays the message,
"Request timed out."
Problem: Can't access the Internet
If hosts are able to reach all internal networks but can't access the Internet, try the following:
• Verify that the Internet connection is up.
• Check for a default route on the router connected to the Internet. A default route is
indicated by a network address of 0.0.0.0 with a mask of 0.0.0.0. The default route is used for
packets that do not match any other entries in the routing table.
Most routers that connect private networks to the Internet do not know about specific networks
and routes on the Internet. Additionally, most routers do not share routes for private subnets with
Internet routers. A router is configured with a single default route that is used for all Internet
traffic, and a router at the ISP is responsible for sharing a single route for your private network
with other Internet routers.
Problem: Remote clients can't access network resources
If you have remote access clients who can establish a connection to the remote access server but
can't connect to other resources on the private network, check the following:
• If remote clients are being assigned IP addresses on the same subnet as the private
network, make sure that proxy ARP is enabled on the LAN interface of the remote access server.
Proxy ARP makes it appear as if the remote clients are connected to the same network segment.
• If remote clients are being assigned IP addresses on a different subnet than the private
network, make sure the remote access server is configured to route packets between the remote
clients and the private network.
UNIT-III

8.1 Firewalls

As you study this section, answer the following questions:


• How does a packet filtering firewall differ from a circuit-level gateway?
• Why is a packet filtering firewall a stateless device?
• What types of filter criteria can an application layer firewall use for filtering?
• Which security device might you choose to restrict access by user account?
• What is the difference between a proxy and a reverse proxy?
After finishing this section, you should be able to complete the following task:
• Configure a host firewall.
This section covers the following Network Pro exam objective:
• Domain 6.0 Network Security
• Given a scenario and a Windows system, configure a basic host firewall.

8.1.1 Firewalls

In this lesson, we're going to look at a very important component that should be a part of every
network called a firewall. A firewall is a software- or hardware-based network security system
that allows or denies network traffic based on a set of rules. Firewalls are typically used to protect
networks or devices from attacks or from unwanted or untrusted traffic.
Hardware/Software Firewalls

A firewall can be implemented in two different ways. First of all a hardware firewall is typically
used to protect an entire network or to protect one specific network segment. They are dedicated
hardware appliances that contain all the hardware and software needed in order to protect the
network.
Hardware firewalls are a lot more expensive than other types of firewalls, but they also provide
the best performance. Software firewalls on the other hand are typically used to protect a single
computer or device. Software firewalls are a lot less expensive than hardware firewalls, but they
usually aren't as robust either. Firewalls are commonly used to protect private networks that are
connected to the Internet and they do this by filtering traffic between your network and the
Internet. One of the main purposes of a firewall is to prevent attackers on the Internet from
gaining access to your private network.
Network-based or Host-based

Be aware that a firewall can be network-based or it can be host-based. With a network-based
firewall, the firewall sits at the edge of your network and acts as a barrier between your entire
network and the outside Internet. A network firewall filters all network traffic to and from the
Internet. A host-based firewall, on the other hand, protects a single system from unauthorized
connections. Network-based firewalls are usually hardware firewalls and host-based firewalls
are almost always software firewalls.
ACLs

In addition to protecting an entire network from attackers on the Internet, firewalls can also be
used to isolate and protect sensitive segments of your private network. For example, let's suppose
we want to protect the set of servers that hold sensitive accounting data. Then we could create a
special subnet for those servers within the network and then install a firewall to protect that
segment from unauthorized traffic originating from within our own private network or any traffic
out on the Internet. In order to do this we would define a set of rules on the firewall to specify
that only very specific types of traffic will be allowed through. All other traffic will be blocked
by the firewall rules.
These filtering rules on the firewall are called Access Control Lists or ACLs. The firewall scans
incoming and outgoing network traffic and it compares that traffic to the rules that you've
defined. Then it decides whether the traffic should be allowed or whether it needs to be rejected.
The level at which a firewall scans network traffic depends upon the type of firewall being used.
There are several different types of firewalls that you need to be familiar with. First we have
packet filtering firewalls, we have circuit level gateways, and finally we have Application Layer
firewalls.
Packet Filtering Firewalls

Let's first talk about how a packet filtering firewall works. A packet filtering firewall examines
the information within each packet header. It operates at layer three on the network layer of the
OSI model. When a frame enters a packet filtering firewall, the firewall removes the framing
information to expose the IP packet within that frame, which includes the data, the destination
IP address, and the source IP address of the packet, as well as the source port and destination
port. With a packet filtering firewall you can define ACLs based on the information contained
within the IP packet, including the source IP address, the destination IP address, the source port
number, and the destination port number.
Every packet that comes into the firewall is compared to the rules in the ACL that you define.
These rules specify whether to allow or reject packets based on the network
interface that the packet was received on; the direction of the communication, whether
inbound or outbound; the source IP address; the destination IP address of the packet;
the source port number of the packet; or the destination port number of the packet. For example,
the firewall could be configured to allow all packets from a specific source IP address.
Alternatively, maybe you could define an ACL that blocks all inbound traffic that's destined for
port 22.
Be aware that these are really just very simple ACL rules. You can define very complex ACLs
with many different rules that a packet has to be evaluated against. For example, you could block
all packets whose source IP address belongs to an external network and only allow
communications from hosts that are on the same subnet as the destination host. Be aware that
many routers actually provide packet filtering functionality. They're basically a router and a
firewall all in one, providing both functions. Because of their ability to filter based on port
number, a best practice with a packet filtering firewall is to actually block all ports and then open
only the ports that are necessary for network functionality.
For example let's suppose you have a web server on your network. Therefore you might want to
block all ports in the firewall except for the one that is used by that web server, port 80.
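As a hedged sketch of what such a rule set could look like using a Cisco extended access list (the ACL number, server address, and interface are examples only):
Router(config)# access-list 110 permit tcp any host 203.0.113.10 eq 80
Router(config)# interface GigabitEthernet0/1
Router(config-if)# ip access-group 110 in
Only web traffic to the server is permitted; everything else is dropped by the deny that sits at the end of every access list, which is the implicit deny behavior described next.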
Implicit Deny

In fact with most firewalls all traffic is denied by default. This is very important; it's known as
implicit deny. Implicit deny is a security technique that blocks everything unless it is explicitly
allowed. You have to manually specify what traffic you want to allow through that firewall;
everything else will be blocked. For example all IP addresses and port numbers are blocked
except for those that are allowed in the ACL. Not only is implicit deny a good security practice,
but it also makes your job as the network administrator a lot easier. Chances are you're going to
want to block a lot more types of traffic than you're going to want to allow.
Circuit Level Gateways

There's a second type of firewall that you need to be familiar with and this is called a circuit level
gateway. A circuit level gateway filters traffic based on the session's state not on the IP address
or port number like a packet filtering firewall does. A circuit level gateway makes filtering
decisions based on the session layer information, which is the session ID number. The firewall
only allows packets that match active sessions. In order to do this, the circuit level gateway has to
take advantage of the TCP three-way handshake. In order to establish a TCP session, a client
computer first sends a request for a session with a very special packet called a SYN packet, S-Y-
N.
The server responds back with a SYN-ACK, basically stating, 'Yes, I have
a session available and you can use it.' The client then responds with a final acknowledgment,
an ACK, confirming that it received the server's response and that it
wants to go ahead and establish communications with the server. This is important because the
circuit level gateway monitors this three-way handshake process in order to identify an active
session: one that has been set up properly, has been acknowledged properly, and is in use.
When a packet is received by the firewall, it will remove the packet header information just like a
packet filtering firewall does, but this time it's going to examine the session information within that
packet. If that packet belongs to a legitimately established session that is currently active and
being used, then that communication will be allowed. If the firewall finds a session ID that is not
active, that was never created in the first place, or that has already been closed and terminated,
then that packet will be dropped and not forwarded.
With a circuit level gateway, instead of examining every single packet and filtering it based
on rules, the gateway simply looks at the session ID and decides whether or not it's
associated with a legitimate TCP session that was set up properly. A circuit level gateway is very
useful because it protects your network against network attacks such as a SYN flood attack. A SYN
flood attack attempts to manipulate the TCP three-way handshake in order to instigate a denial
of service attack. Even though a circuit level gateway filters at the session layer of the OSI model
of service attack. Even though a circuit level gateway filters at the session layer of the OSI model
instead of at the network layer, they're actually faster than a packet filtering firewall.
This is because a circuit level gateway only filters based on one single value, the session ID,
instead of a long list of different values in an ACL, such as the IP address, port
number, and communication direction, the way a packet filtering firewall does.
Application Layer Firewalls

The final type of firewall that we're going to look at here is the Application Layer firewall. An
Application Layer firewall, as its name implies, operates at the Application Layer in the OSI
model. Application Layer firewalls do not filter packets based on IP address or port numbers,
instead they filter data based on the application layer data within each packet's payload. Messages
that enter the firewall are typically composed of multiple packets of information. At the network
layer a packet filtering firewall examines each packet to make the forwarding decisions based on
the ACLs that you've defined.
At the Session Layer, a circuit level gateway would examine the session ID associated with that
information across multiple packets to make decisions. At the Application Layer, the Application
Layer firewall will actually take all those individual packets and reassemble them into the
original data. Then it's going to make forwarding decisions based on what's in that data. Let's
take a look at an example. Let's suppose that this firewall here is filtering HTTP requests, web
page requests. An HTTP request can be something as simple as 'get this specific web document
from this website.' The Application Layer firewall is going to reassemble that request when it
comes through, re-creating the original document.
Once that is done the firewall can go ahead and evaluate what's actually in that document and
then make filtering decisions based upon what it finds. For example the firewall could filter the
website based on content. This could be as simple as blocking a specific URL or it could use a
list of predefined terms or words to specify what is blocked. This could be very useful if you
need to block specific categories of websites: for example, online gaming, online gambling, or
websites with adult content. This is also useful for blocking games or other applications that may
have been reconfigured to use commonly allowed ports in order to make connections.
For example, we may have an online gaming application that's been reconfigured to use port 80
instead of its default dynamic port because most packet filtering firewalls have port 80 open for
web pages, right? This would allow that application to get past a packet filtering firewall.
However, because an Application Layer firewall inspects the content of each packet, then it can
be used to identify specific application signatures and block those applications even though they
may be using an allowed port.
Another feature of this type of firewall is the ability to allow or deny access based on users and
groups.
Implementation on a Network/ Proxy Server

The implementation of an Application Layer firewall is often referred to in the industry as a
proxy server, which is just a specific implementation of a particular type of Application Layer
gateway. A proxy server sits between a network and an end user. For example, we may have a
network of computers over here that need to access information over here on the Internet. The
proxy server sits between the Internet and the clients.
All of the requests that are going to the Internet from these clients are actually going to be
intercepted first by the proxy server. The proxy server is going to take those requests and apply
application layer filtering to the request to decide whether the request should be allowed or
whether it needs to be blocked. These filters could be based on the URLs being requested, it
could be based on the users that are making the request. For example, you could configure your
proxy servers such that only certain users are allowed to access the Internet. In fact while they're
there they're only allowed to view a very specific set of websites.
A proxy server may also be configured to cache frequently accessed Internet content. When a
user accesses a particular webpage through a proxy server, the proxy server can actually cache
that data locally. Then when another user on the internal network tries to access the very same
webpage, the proxy server just simply pulls the page out of its cache and delivers it to the end
user without going out on the Internet to get it again. That saves bandwidth.
Reverse Proxy Server

There's another type of proxy server called a reverse proxy, and it works a little bit differently.
Instead of filtering internal requests going out to the Internet, a reverse proxy handles requests
from the Internet to internal servers. For example, here we have a reverse proxy server that's
sitting in front of these web servers over here. Let's say a client out here requests access to one
of these servers. Instead of that request going directly to the server, it first goes to the reverse
proxy server. The reverse proxy server looks at the request and then depending upon what was
requested it will connect to the correct server. Basically, it creates sort of a link between the client
and the server.
One thing to be aware of about reverse proxy servers, and regular proxy servers for that matter,
is the fact that they operate transparently. The client doesn't know that it's connecting through a
proxy server, whether it's a reverse proxy or a traditional proxy. As far as the client knows, it's
connecting directly to the server, either on the Internet or on the internal network, depending
upon whether we're using a traditional proxy server or a reverse proxy server.
One thing to remember is that reverse proxy servers can be used to cache information, such as
website data, just like a regular proxy does. Reverse proxy servers can also be used
to balance the load being placed on these internal web servers.
SOHO/UTM

There are a couple of other firewall implementations that you need to be familiar with. The first
one is a firewall that is typically used in a small office or home office environment. We call these
SOHO environments, S-O-H-O. These firewalls don't have the same features and they don't have
the same functionality as a dedicated hardware firewall that we might use in a large organization.
They're also a lot less expensive and they're also usually easier to configure. Typically, they're
robust enough to handle the demands of a SOHO environment.
Alternatively, a small to medium-sized business may invest in a Unified Threat Management
device, a UTM. A UTM device combines multiple security services all into one device, into a
central device. It may include such things as firewall, anti-spam, antivirus, load balancing, VPN
services and so on. Combining all these services into one single device makes managing all these
services a lot easier and it lowers the overall cost of ownership; you'll only have to manage one
device. However, using a UTM device also creates a single point of failure in your network.
If the UTM device fails then you'll lose every single service that it was providing. Because of
this larger organizations usually employ individual devices one for each security service that the
organization needs in order to create a layered defense. As you can see a firewall can be used to
protect either a single host or an entire network from external threats. It can also be used to
protect against internal threats.
Summary

That's it for this lesson. In this lesson we learned why firewalls are an integral part of any
network. We learned about the different types of firewalls, we talked about packet filtering
firewalls, we learned about circuit level gateways. We talked about Application Layer firewalls.
Then we talked about the different ways that you can implement a firewall on a network.
8.1.2 Firewalls Facts

A firewall is a software- or hardware-based network security system that allows or denies
network traffic according to a set of rules. Firewalls can be categorized by their location on the
network:
• A network-based firewall is installed on the edge of a private network or network segment.
• Most network-based firewalls are considered hardware firewalls, even though they use a
combination of hardware and software to protect the network from Internet attacks.
• Network-based firewalls are more expensive and require more configuration than other
types of firewalls, but they are much more robust and secure.
• A host-based firewall is installed on a single computer in a network.
• Almost all host-based firewalls are software firewalls.
• A host-based firewall can be used to protect a computer when no network-based firewall
exists (e.g., when connected to a public network).
• Host-based firewalls are less expensive and easier to use than network-based firewalls,
but they don't offer the same level of protection or customization.
A host-based firewall can be used in addition to a network-based firewall to provide multiple
layers of protection.
• Firewalls use filtering rules, sometimes called access control lists (ACLs), to identify
allowed and blocked traffic. A rule identifies characteristics of the traffic:
• The interface the rule applies to
• The direction of traffic (inbound or outbound)
• Packet information such as the source or destination IP address or port number
• The action to take when the traffic matches the filter criteria
Each ACL has an implicit deny. This is a line at the end of the ACL stating that if a packet doesn't
match any of the defined rules, then it will be dropped.
• Firewalls do not offer protection against all attacks (e.g., email spoofing).
The following table describes different firewall types:
Firewall Type: Packet filtering firewall
A packet filtering firewall makes decisions about which network traffic to allow by examining
information in the IP packet header, such as source and destination addresses, ports, and service
protocols. A packet filtering firewall:
• Uses ACLs or filter rules to control traffic.
• Operates at OSI Layer 3 (Network layer).
• Offers high performance because it examines only the addressing information in the
packet header.
• Can be implemented using features that are included in most routers.
• Is a popular solution because it is easy to implement and maintain, has a minimal impact
on system performance, and is fairly inexpensive.
A packet filtering firewall is considered a stateless firewall because it examines each packet and
uses rules to accept or reject it, without considering whether the packet is part of a valid and
active session.
Firewall Type: Circuit-level proxy
A circuit-level proxy or gateway makes decisions about which traffic to allow based on virtual
circuits or sessions. A circuit-level gateway:
• Operates at OSI Layer 5 (Session layer).
• Keeps a table of known connections and sessions. Packets directed to known sessions are
accepted.
• Verifies that packets are properly sequenced.
• Ensures that the TCP three-way handshake process occurs only when appropriate.
• Does not filter packets. Instead, it allows or denies sessions.
A circuit-level proxy is considered a stateful firewall because it keeps track of the state of a
session. A circuit-level proxy can filter traffic that uses dynamic ports, because the firewall
matches the session information for filtering and not the port numbers. In general, circuit-level
proxies are slower than packet filtering firewalls. However, if only the session state is being used
for filtering, a circuit-level gateway can be faster after the initial session information has been
identified.
Firewall Type: Application-level gateway
An application-level gateway is capable of filtering based on information contained within the
data portion of a packet. An application-level gateway:
• Examines the entirety of the content being transferred (not just individual packets).
• Operates at OSI Layer 7 (Application layer).
• Understands, or interfaces with, the application-layer protocol.
• Can filter based on user, group, and data (e.g., URLs within an HTTP request).
• Is the slowest form of firewall because entire messages are reassembled at the Application
layer.
One example of an application-level gateway is a proxy server. A proxy server is a device that
stands as an intermediary between a secure private network and the public. Proxies can be
configured to:
• Control both inbound and outbound traffic.
• Increase performance by caching frequently accessed content. Content is retrieved from
the proxy cache instead of the original server.
• Filter content and restrict access depending on the user or specific website.
• Shield or hide a private network.
There are two different types of proxy servers:
• A forward proxy server handles requests from inside a private network out to the Internet.
• A reverse proxy server handles requests from the Internet to a server located inside a
private network. A reverse proxy can perform load balancing, authentication, and caching.
Oftentimes, reverse proxies work transparently, meaning that clients requesting specific
resources don't know they are using a reverse proxy to access a server.
Firewall Type: Unified threat management (UTM) device
A unified threat management device combines multiple security features into a single network
appliance. A single UTM device can provide several security features:
• Firewall
• VPN
• Anti-spam
• Antivirus
• Load balancing
By combining several services into one appliance, UTM devices make managing network
security much easier. However, they also introduce a single point of failure—if the UTM fails,
network security is lost. Additionally, UTM devices aren't as robust as other devices made for a
specific use. Because of this, UTM devices are best suited for:
• Offices where space limits don't allow for multiple security appliances.
• Satellite offices that need to be managed remotely. Configuration changes need to be made
on only one device, rather than multiple devices.
• Smaller businesses that wouldn't benefit from the robust features provided by specific
security appliances.
A common method of using firewalls is to define various network zones. Each zone identifies a
collection of users who have similar access needs. Firewalls are configured at the edge of these
zones to filter incoming and outbound traffic. For example, you can define a zone that includes
all hosts on your private network protected from the Internet, and you can define another zone
within your network for controlled access to specific servers that hold sensitive information.

8.1.3 Common Ports

Network ports are logical connections, provided by the TCP or UDP protocols at the Transport
layer, to be used by protocols in the upper layers of the OSI model. The TCP/IP protocol stack
uses port numbers to determine what protocol incoming traffic should be directed to. Some
characteristics of ports are listed below:
• Ports allow a single host with a single IP address to run network services. Each port
number identifies a distinct service.
• Each host can have over 65,000 ports per IP address.
• Port use is regulated by the Internet Corporation for Assigned Names and Numbers
(ICANN).
ICANN specifies the following three categories for ports:
• Well known ports range from 0 to 1023 and are assigned to common protocols and
services.
• Registered ports range from 1024 to 49151 and are assigned by ICANN to a specific
service.
• Dynamic (also called private or high) ports range from 49152 to 65535 and can be used
by any service on an ad hoc basis. Ports are assigned when a session is established, and ports are
released when the session ends.
The following table lists the well-known ports that correspond to common Internet services:
Port(s) Service
20, 21 TCP and UDP File Transfer Protocol (FTP)
22 TCP and UDP Secure Shell (SSH)
23 TCP Telnet
25 TCP and UDP Simple Mail Transfer Protocol (SMTP)
53 TCP and UDP Domain Name Server (DNS)
67, 68 TCP and UDP Dynamic Host Configuration Protocol (DHCP)
69 TCP and UDP Trivial File Transfer Protocol (TFTP)
80 TCP and UDP Hypertext Transfer Protocol (HTTP)
110 TCP Post Office Protocol (POP3)
119 TCP Network News Transport Protocol (NNTP)
123 TCP and UDP Network Time Protocol (NTP)
137, 138, 139 TCP and UDP NetBIOS Name Service, Datagram Service, and Session Service
143 TCP Internet Message Access Protocol (IMAP4)
161 UDP, 162 TCP and UDP Simple Network Management Protocol (SNMP)
389 TCP and UDP Lightweight Directory Access Protocol (LDAP)
443 TCP and UDP HTTP over Secure Sockets Layer (HTTPS)
445 TCP Microsoft Server Message Block (SMB) File Sharing
1720 TCP H.323 Call Signaling
2427 UDP Cisco Media Gateway Control Protocol (MGCP)
3389 TCP and UDP Remote Desktop Protocol (RDP)
5004, 5005 TCP and UDP Real-time Transport Protocol (RTP) Data and Control
5060 TCP and UDP, 5061 TCP Session Initiation Protocol (SIP) and SIP over TLS
To protect a server, ensure that only the necessary ports are open. For example, if the server is
being used only for email, then shut down ports that correspond to FTP, DNS, HTTP, etc.
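One quick way to verify which of these well-known ports are actually reachable on a server is a simple TCP connect test. The Python sketch below is illustrative only; the host 192.168.1.10 and the port list are example values, not taken from the notes:

import socket

def check_ports(host, ports, timeout=1.0):
    """Attempt a TCP connection to each port and report whether it appears open."""
    results = {}
    for port in ports:
        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        sock.settimeout(timeout)
        try:
            sock.connect((host, port))
            results[port] = "open"
        except (socket.timeout, OSError):
            results[port] = "closed or filtered"
        finally:
            sock.close()
    return results

# Example: an email-only server should ideally show 25 open and 21/53/80 closed.
print(check_ports("192.168.1.10", [21, 25, 53, 80]))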

8.1.4 Configuring Windows Firewall

A firewall restricts the type of network communication that is allowed into and out of a device.
Firewalls can be network-based or installed within a router to control the flow of packets through
the router between networks. You can also install and configure host-based firewalls to protect
individual hosts and the traffic that is allowed to that host.
Network Types

Let's take a look at the Internet connection firewall that's included with Windows 7. To do that
I'll click the Start button down here, click on Control Panel, System and Security and here we
have Windows Firewall. With Windows 7 the firewall can be configured depending on the type
of network you have. We have the private network and the public network. Private networks are
typically your home or your work networks, whereas public networks are networks at the airport,
or maybe at a cafe or hotel.
Turn on the Firewall

To turn on the firewall for a specific network we'll click on the link Turn Windows Firewall on
or off. Currently, we do not have the firewall turned on in either of the network locations. Let's
go ahead and talk about turning it on in the public network. We'll click Turn on Windows Firewall
and we will also select, Block all incoming connections.
Block Incoming Connections

If my computer were connected to the Internet through a public network, such as a hotel I might
choose to Block all incoming connections. This does not prevent me from browsing the Internet.
It simply blocks all connections that have been initiated from a source other than my computer.
For instance, when I browse the Internet I will go to a website and the communications back
from that website will be allowed through this firewall even with this option being enabled. By
blocking all incoming connections I disallow any computer from being able to view my computer
and access it directly.
Allow Incoming Programs

We'll go ahead and click OK.


Another feature we want to look at is allowing a program or a feature through the firewall. We
have the types of networks listed, private and public, as well as the programs and features that
are installed on this computer. Remote Assistance and Remote Desktop are checked. They also
have check marks for both the private network and the public network. A check mark indicates that traffic for this feature is allowed through the firewall on that network type; if a feature does not have a check mark, its traffic will be blocked by the firewall. If I want to, I can turn off network discovery, which disables that traffic not only on my home and work (private) networks, but also on public networks.
In this case, I can keep it off for the public but will allow it through my private network. If there
is a program or a feature not in this list, we can go ahead and click Allow another program and
here is a common list of programs or features you might want to allow through the firewall. You
can also browse for specific programs installed on this computer. We'll go ahead and click OK
and return back to our Windows Firewall setting.
Allow Incoming Connections

We turned on the firewall for the public network and selected Block all incoming connections, which blocks even the programs on the list of allowed programs. That means features such as Core Networking, File and Printer Sharing, Remote Assistance, and Remote Desktop will no longer get through. To let those features through the firewall again, we need to go back to Turn Windows Firewall on or off and deselect Block all incoming connections, including those in the list of allowed programs. That would once again allow Remote Desktop, Remote Assistance, and so on.
8.2. Security Appliance

As you study this section, answer the following questions:


• Under what conditions would you use an all-in-one security appliance?
• What security functions are included in an all-in-one security appliance?
After finishing this section, you should be able to complete the following task:
• Configure network security appliance access.

8.2.1. All-in-One Security Appliances

Let's spend a few minutes talking about all-in-one security appliances. An all-in-one security
appliance merges many network functions into a single piece of hardware. It is sometimes called a unified threat management device or a web security gateway.
Benefits of All-in-One Appliances

An all-in-one security appliance is useful in several situations. For example, if you're working
with a small startup company, then an all-in-one security appliance could be very beneficial
because they probably don't have the budget available to buy a separate firewall, spam filter, and
router.
They're also useful in situations where limited physical space is available. For example, a small
business might have limited space in its data center. In this situation, an all-in-one appliance can
be advantageous because it provides many different services, but only takes up the space of one
device.
All-in-one devices are also useful in cases where you have to manage a remote branch office.
Branch offices typically do not have a lot of room available and most likely don't have IT
personnel on hand. Because you would have to manage the network remotely, you could save a
lot of time by logging into only one device to manage multiple network functions.
All-in-one security appliances will vary in functionality based on the make and model of the
device.
Features of All-in-One Appliances

One commonly included feature is URL filtering. The device may
also implement content inspection to make sure HTTP connections and the content being
accessed meets certain criteria that you specify. All-in-one security appliances may also provide
malware protection to make sure that the content entering your network does not contain viruses,
Trojans, or worms.
Some all-in-one security appliances provide spam filtering. You can direct all of the email for
your organization through the security appliance to filter out spam before it ever gets to the end
users. This prevents them from being able to click on malicious links in email messages because
the phishing exploits never make it to them in the first place.
Some all-in-one appliances may even include a switching function, so you don't have to purchase
a separate switch to support your network segment. It may also include network routing
functionality along with a network firewall that allows you to set up ACL rules to manage
network traffic and log connections.
Some all-in-one appliances may include an intrusion detection function to identify intrusion attempts. Higher-end units may even include an uplink connection using an integrated
CSU/DSU.
Some all-in-one appliances may include bandwidth shaping, which allows you to prioritize
certain types of network traffic to optimize communications.
When to Use All-in-One Appliances

Depending on the implementation, all-in-one security appliances might be better than using
traditional network components that each provide only a single function. For example, they are
a good fit for small office deployments or for deployments where the budget is limited. They
provide adequate performance in these situations.
Basically, an all-in-one appliance usually does a lot of things reasonably well. However, they
typically don't perform any one particular task extremely well. If you're in a situation where you
need to have the best performance possible from a network device, then you'll be better off buying
individual network components designed and optimized specifically for those functions.
However, it will cost a lot more to do this and will likely require additional management time
and cost.
Summary

In essence, that is how all-in-one security appliances work.


8.2.2 Security Solution Facts

All-in-one security appliances combine many security functions into a single device. These
appliances are also known as unified threat security devices or web security gateways. These
types of devices may be the best choice for:
• A small company without the budget to buy individual components.
• A small office without the physical space for individual components.
• A remote office without a technician to manage individual security components.
An all-in-one security appliance can include the following security functions:
• Spam filter
• URL filter
• Web content filter
• Malware inspection
• Intrusion detection system
All-in-one security appliances can also include the following:
• Network switch
• Router
• Firewall
• TX uplink (integrated CSU/DSU)
• Bandwidth shaping

8.2.3 Configuring Network Security Appliance Access

In this demo, we will secure access to the network security appliance. In this course, we're using
the Cisco small business network security appliance which has a graphical interface for
configuration, which is accessible through Internet Explorer.
Change the Default Username and Password

One of the first and most important things to do with any network appliance or network device,
is to change the default username and password. This appliance has a default username and
password of 'cisco' and 'cisco', which could easily be guessed by just about anyone or could be
looked up on a website such as defaultpasswords.com. From the Getting Started page, we have
the opportunity or ability to select Change Default Admin Password And Add Users which will
take us directly to the Administration Users page. On the Administration Users page you'll see
that the only current user available on this device is the cisco default user.
The first thing we would like to do is to change that username so that somebody who knows the
default username and password won't be able to access our appliance. To do that we click on the
Edit button next to this user, and we can change the username to something like 'ciscoAdmin'. It's okay to leave the first name and last name the same. This is an administrative user, and we'll go ahead and change the password too, because we don't want to keep 'cisco'; that's too easy. We'll change it to something a lot longer, and we'll also change the Idle Timeout; we'll give it about 15 minutes. When you're working in the interface, you don't want the interface to stay logged in
forever, but you do want to be able to make reference to some notes while you're working without
it logging you out.
Click 'Apply'. Because we've changed the username of the user we were logged in as, the
appliance will make us log in again, so we'll need to log in now as our new user the 'ciscoAdmin'
user with the new longer more secure password. Go ahead and log in and we can go back to the
admin password and user administration page using the link.
Deny Login from WAN Interface

If you'll notice, the new username: ciscoAdmin.


We can also define how this user is able to log in or what workstations they're able to log in from.
Currently, this is set to Deny Login from the WAN Interface and only to allow login to this
configuration utility from the LAN side. A workstation which is on the LAN will be able to access
this utility. But a workstation on the WAN interface, outside the corporate network, will not be able to log in.
IP Address Restrictions

We'll click' Apply'.


We can also restrict which workstations are able to log in to the configuration utility by IP address. We're going to add an IP address and tell the appliance that we'll only allow login to this configuration utility from 192.168.0.200, which is this workstation, and we'll create this. The operation succeeded, and now that we've created this we'll say, "All right, we're going to only allow login from the defined addresses." We've defined the address of the workstation.
The administrator's workstation will be the only one which is able to access this configuration
utility. You want to change the default username and password, but you want to restrict the access
to the configuration utility as much as you can. You don't want your normal users throughout the
organization able to login to your network devices and to change the configuration.
Create and Configure Another Administrative User

We'll click 'Apply'.


Let's go ahead and create another user so that we can have an administrator; we don't want to just
be using the default administrative user all of the time. We'll create a user; we'll call it
mbrown. Her name is Mary, the last name is Brown, and she'll be an administrative user also so
we'll select Administrator. We only have the SSLVPN group right now so that'll have to work.
We want to give her a fairly secure password; and we'll confirm that, and then we'll set an idle
time-out, once again 15 minutes. It's nice to have an idle time-out as long as you can accept. You
don't want to be left logged in to the interface too long, but at the same time you want to have
time to be able to complete your work. Go ahead and click 'Apply'.
We have our user, mbrown. We'll go ahead and edit her information, verify the details we just entered, and go back to the login restrictions. Right now, she's able to
access this configuration utility from the WAN interface and we don't want her to be able to do
that. We'll go ahead and select that she can only access the configuration utility when she's on
the private side of the LAN. Go ahead and click 'Apply' for that. All right, that succeeded.
We'll also give her an IP address restriction and once again we'll add the source address; the
192.168.0.200 which is the workstation we're working from. The IT administrator's workstation,
we'll go ahead and click 'Apply'. All right, that operation succeeded, and you'll notice that there is an IP address restriction; we're telling Mary that she's only allowed to administer the security appliance configuration utility from the 192.168.0.200 workstation.
Create Groups

We'll click 'Apply'.


In addition to users on this local device, we could also create Groups. Right now we have the
SSLVPN group.
Create Domains

We can create different types of groups with different attributes.


We can also create Domains. Domains are an important piece of this if we don't want to use only the users defined in the local user database on this box. For example, right now ciscoAdmin and mbrown are the only two people able to access this box for administration. If you want to use RADIUS, an NT Domain, or Active Directory to control access to this utility and this device, you can connect to an Active Directory domain or a RADIUS server and then control user access and authentication from a central location.
Summary
Very powerful for what you're able to do.
In this demonstration we've changed the default username and password. We've configured IP address restrictions for access to the security appliance configuration utility. We've also restricted access to the configuration utility so that it is available only from the private (LAN) side of the network.
8.3 Firewall Design and Implementation

As you study this section, answer the following questions:


• How do firewalls manage incoming and outgoing traffic?
• What is the difference between a standard and an extended ACL?
• What does the deny any statement do?
• What is the difference between a routed firewall and a transparent firewall?
After finishing this section, you should be able to complete the following tasks:
• Configure a DMZ.
• Configure a perimeter firewall.

8.3.1 Firewall Network Design Principles

In this lesson, we're going to discuss how you design a firewall implementation. As you know,
firewalls are used to protect internal networks that are connected to the Internet by filtering
traffic.
Purpose of a DMZ

A firewall can be installed in order to protect an entire network or it can be installed on a single
network host. A network firewall sits at the edge of your network. It acts as a barrier between
your entire network and the outside external network, which is typically the Internet. It filters all
the network traffic going to and from the external network.
A host firewall, on the other hand, functions in a similar manner, but it only protects one single
computer. With this in mind, let's take a look at how a firewall can be implemented in order to
protect an entire network. We'll take a look at a sample scenario here. Suppose you manage
networks for a small company and the company provides, say, driver downloads for their
products as well as maybe online support. You provide those services through a website that's
hosted on an in-house web server. In other words, the web server itself resides on the company's
network.
The driver downloads and online support features need to be available to users over the Internet.
How do you allow Internet users access to that internal web server? Here's the Internet, here's
the boundary firewall, here's the web server, and here's the internal network. You want to allow
Internet users to access the web server. One option would be to place the web server on the
outside, on the public side over here of our boundary firewall.
If you did this, Internet users would be able to access the website, no problem. However, it would
also be a very dumb thing to do because you would create a very significant security issue.
Because the web server resides outside the boundary firewall, you really don't have any control
over what's going to happen to it. Every hacker on the Internet can access this web server with
unfettered access and run all types of exploits against it, so moving the web server outside the
boundary firewall is really not a good idea.
We could instead move the web server inside the boundary firewall, but this presents another issue. The boundary firewall is most likely configured to allow outbound connections to the
Internet, but it should be configured to not allow inbound connections coming from the Internet.
As a result, Internet users are not going to be able to access the web server. To access the web
server you would have to open up several ports in the boundary firewall.
Because this is a web server, we would open up ports 80 and 443. With these ports open, users would then be able to initiate connections from the Internet to the internal web server. However,
in this configuration we've opened up two significant holes in the boundary firewall: port 80 and
port 443. This increases the likelihood of some type of security breach occurring. In fact, many
exploits take advantage of just this scenario. An attacker gains access to the web server itself
through the firewall using these two open ports. Then the attacker perpetrates a variety of exploits
on the web server in order to gain control of that system.
Once that's done, the attacker has control of a system within your protected network. That's bad.
The attacker could then use it as a platform for launching additional attacks against other internal
hosts on your network. In other words, you just let the bad guys in. Essentially, you have a
quandary. You need to protect your network and your web server, but you also need to make the
content on this web server available to Internet users. How do you do this?
One solution is to actually divide the network into multiple zones with different levels of security.
DMZ Definition/ Creating a DMZ

You can create a high security area and another area within the network that has a lower degree
of security. This is called creating a demilitarized zone. We just call it a DMZ. First, you have to
accept the fact that you have to open ports on the boundary firewall in order to let Internet users
initiate connections to and access the content on your web server. We still have that boundary
firewall in place, and in order to allow this access we're going to have to open up ports 80 and
443 just like we talked about earlier.
These ports, as you know, will allow http and https connections to be established with our web
server. As I talked about earlier, opening these ports creates an area of low security on the
network. The web server with ports 80 and 443 open is in the low security area directly behind
the boundary firewall. In order to maintain the security of our internal network we're going to
install a second firewall between this area of low security and our internal network. By doing
this we create a zone of high security. This is our internal network.
Because this web server resides in the low security area that has ports open in the boundary
firewall, the web server resides in the DMZ. The key rule that you've got to remember is that you
do not put anything in the DMZ that doesn't absolutely have to be there. For example, the DMZ
is not the place that you're going to be putting your payroll or your research and development
servers. Only the servers with information that needs to be accessed by users on the Internet
should be placed in the DMZ.
There is still risk in having this information out there on the web server. For example, somebody
could compromise the server and replace one of the driver downloads with a modified version
that maybe has malware in it. Because the web server is in that DMZ, you have to be aware of
that risk and you have to do your best to monitor it and protect against it.
The second high security internal firewall behind the boundary firewall constrains that low
security area of the network to this single segment. The internal network is secured behind the
second firewall. When you're planning your firewall rules you typically allow traffic originating
in the secured internal network to enter the DMZ and also to go on through to the Internet. For
example, a user on your internal network here should be able to open up a web browser and
access the Internet through various firewalls. However, you must not allow traffic that originates
in the low security area here or in the no security area over here, the Internet, to initiate a
connection in the other direction. We do not allow them to initiate a connection with the host in
the high security area.
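These zone-to-zone rules can be summarized as a small policy table. The Python sketch below is only an illustration of the policy just described; the zone names and the rule set are our own summary, not output from any firewall:

# Which zone-to-zone flows may initiate new connections in this design.
# (LAN = high security internal network, DMZ = low security segment, WAN = Internet)
POLICY = {
    ("LAN", "DMZ"): "allow",                     # internal hosts may reach DMZ servers
    ("LAN", "WAN"): "allow",                     # internal hosts may browse the Internet
    ("WAN", "DMZ"): "allow on TCP 80/443 only",  # Internet users reach the web server
    ("WAN", "LAN"): "deny",                      # never allow inbound to the internal network
    ("DMZ", "LAN"): "deny",                      # a compromised DMZ host must not reach internal hosts
}

def may_initiate(src_zone, dst_zone):
    """Return the policy for a new connection from src_zone to dst_zone (default deny)."""
    return POLICY.get((src_zone, dst_zone), "deny")

print(may_initiate("WAN", "LAN"))  # deny
print(may_initiate("LAN", "WAN"))  # allow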
There are actually a couple of different ways that you can implement a DMZ. One is the example
we just looked at. In this configuration, two different firewalls are used, one on the boundary and
a second one behind it on the edge of the internal network, in order to create our DMZ. However,
you can also create a DMZ using a single firewall. In this scenario there is just one firewall as
we saw at the very beginning of this lesson, but notice that it has multiple network interfaces
installed. One interface connects to the internal high security network, but another connects to a
separate isolated network segment. You can configure this firewall with different sets of rules for
each of these different network segments. You establish high security firewall rules for the
internal network, creating our area of high security. We implement low security rules such as
allowing traffic through port 80 and 443 for this network segment where the web server resides.
This creates the DMZ, which is a lower security area of the network. The lower security rules for
this network segment allow Internet connections to the web server on ports 80 and port 443.
Again, you should harden this server as much as possible because it's sitting in a low security
area inside the DMZ. Again, please do not place anything in the DMZ that doesn't absolutely
have to be there. You don't want proprietary information or any type of confidential information
in the DMZ. You want that information to be stored on your high security network.
Each of these options has its strengths and weaknesses. The two-firewall approach requires more hardware and also requires you to administer two separate firewall systems. The benefit is
that an attack on the boundary firewall has no effect on the internal firewall. Everything on the
high security network in the high security zone could just keep on functioning just fine while the
boundary firewall is being hacked to death by continual attacks.
On the other hand, the single-firewall solution with multiple interfaces requires less hardware than the first solution, and you only have to maintain a single system. A single firewall can also host multiple DMZs; you can create multiple security zones with the one firewall device simply by adding additional network interfaces.
For example, you could create a very high security zone. You could create a medium security
zone and maybe a low security zone.
The drawback to this implementation is that it introduces a single point of failure. If that one
firewall device goes down, then everything else goes down with it. You also need to be aware
that there are two different types of firewalls that you can implement in these scenarios.
Routed Firewalls

One is called a routed firewall. In a routed firewall, the firewall device itself is also a Layer 3 router. This is very common. In fact, many hardware routers include firewall functionality.
Accordingly, transmitting data through the firewall will count as a routing hop because the data
is being routed as it goes through the firewall. A routed firewall usually supports multiple
interfaces, each connected to a different network segment as we talked about earlier.
Transparent Firewalls

There is also an option called a transparent firewall. A transparent firewall works differently. A
transparent firewall, by the way, is also sometimes called a virtual firewall. A transparent firewall
operates at Layer 2 and is not seen as a router hop by the connected devices. Therefore, the
internal and the external interfaces on a transparent firewall are actually connected to the same
network segment. Because it's not actually a router, you can easily introduce a transparent
firewall into an existing network.
Summary

We don't add a hop to the hop count.


That's it for this lesson. In this lesson we discussed firewall design principles. We first defined
what a demilitarized zone is. Then we looked at the different ways that you can create a DMZ
either with multiple firewalls or with a single firewall with multiple interfaces installed. Then we
ended this lesson by discussing the difference between routed and transparent firewalls.
8.3.2. Configuring a Perimeter Firewall

In this demo we'll look at configuring firewall rules and other firewall features on our Network
Security Appliance. We're already logged in to this Security Appliance Configuration Utility and
Internet Explorer as our admin user. We're on the Getting Started page.
We'll go first to the Advanced Getting Started page where we find a checklist for the Firewall
and NAT Rules. The checklist lists the typical things that you would want to configure when
configuring firewall rules. If you'll notice, Configure Custom Services, Configure Schedules and
Configure the actual Firewall and NAT Rules.
Configuring Custom Services

First we'll take a look at configuring custom services. The Network Security Appliance has a list
of all of the typical Services that you would allow in and out of the firewall. But if for some
reason you have a service that's not one of the typical services, you can add a service that is
custom. In this case I've added a service called MyService; let's take a look at it. We can give it a type (TCP, UDP, or one of the ICMP types) and a specific port range. In this case it's a single port, so you list the same port as both the Start Port and the Finish Port. That's the idea of Custom Services: you can create a service which will be allowed through the firewall, and we'll show you where it shows up in the list of services later.
Configuring Custom Schedules

The next piece in the checklist was Schedules. You can create a custom schedule and then have firewall rules apply during that schedule. In this case I've created a schedule called Business, which is intended to represent the business days of the week: Monday through Friday, from 8 AM to 5 PM. That's the
specific schedule that I've created. You can create other schedules for allowing traffic in and out
of the firewall on weekends or different days as appropriate for your organization and the security
policies that you may have.
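Conceptually, a schedule like this is just a time-window test that the firewall evaluates before applying a rule. Here is a minimal Python sketch of the Business schedule described above (Monday through Friday, 8 AM to 5 PM); the function name is ours, not the appliance's:

from datetime import datetime

def in_business_schedule(now=None):
    """True if 'now' falls inside the Business schedule (Mon-Fri, 08:00-17:00)."""
    now = now or datetime.now()
    return now.weekday() < 5 and 8 <= now.hour < 17

# A schedule-based rule only takes effect while this test is true.
print(in_business_schedule(datetime(2018, 3, 14, 10, 30)))  # Wednesday 10:30 -> True
print(in_business_schedule(datetime(2018, 3, 17, 10, 30)))  # Saturday 10:30 -> False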
Configuring the Default Outbound Policy

Next, let's take a look at the default outbound policy. On the network security appliance we're using, the default policy is to allow all traffic out and to block all traffic coming in from the WAN to the secure side of the network. All inbound traffic is blocked unless we create a specific rule that allows it. We can also change the default outbound policy by toggling here from Allow Always to Block Always.
Configuring IPv4 Rules

But we'll go ahead and stay with Allow Always.


Blocking a Service

And then let's take a look at the IPv4 Rules. A typical rule that we may want to create would
block a service from inside to the outside. For example, let's go ahead and click Add. We may
want to create a firewall rule that will prevent people from using an external email server; an
external SMTP server. In this case, from the secure side of the LAN to the unsecure side, outside,
we want to block the SMTP service. Let's look at SMTP, Simple Mail Transfer Protocol. We'll
go ahead and BLOCK that and in this case we'll BLOCK it by schedule and choose our business
schedule that we've created.
During business hours, we don't want anybody using an outside mail server for sending their
mail. We can choose which hosts this applies to on the inside, a Single Address or an Address Range; in this case we'll just choose all hosts, or Any. We can also choose the destination machines that this applies to. In this case, we'll select Any also. We'll go ahead and click Apply to create this rule, which blocks traffic to any SMTP server external to our organization. From our LAN to the WAN, SMTP traffic is now blocked during the days and hours in our schedule.
Allowing Traffic In

Let's create another firewall rule. Another typical firewall rule that you would create would allow
traffic in. In this case we have a web server in our DMZ and so we'll go ahead and allow traffic
from the UNSECURE network or the outside, the WAN side, to our DMZ. The type of service
we would like to enable is HTTP. Now while we're here on this screen, we will also show you
that if we click on Service and scroll down to the very bottom you'll see the custom service that
we created: MyService. Let's bring the page back down now.
All right, so we want to go ahead and select the HTTP service. We're going to allow HTTP traffic into our web server. We're going to allow that always, and we'll allow it from any host that's out there on the network. The one thing we do have to specify in addition for this particular rule is the server that's hosting the service: 172.16.2.100. In this case, we can also specify whether we want to forward the port. HTTP traffic will come in on port 80, but let's suppose that our internal server is hosting the web server on port 8080 instead.
Another thing that we can choose is which external IP address is being represented here and
which external IP address we're going to be accepting this traffic on. In our case we have the
dedicated WAN interface which is 5.1, but we've also created aliases on 5.2 and 5.3. We'll go
ahead and select 5.1 as our interface for this rule.
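To make the port-forwarding idea concrete, here is a rough Python sketch of what such a rule accomplishes: accept connections on the external port and relay the traffic to the internal web server on a different port. This is purely a conceptual illustration, not the appliance's implementation; the address and ports simply match the example above.

import socket
import threading

def pipe(src, dst):
    """Copy bytes from one socket to the other until the sender closes."""
    try:
        while True:
            data = src.recv(4096)
            if not data:
                break
            dst.sendall(data)
    except OSError:
        pass
    finally:
        for s in (src, dst):
            try:
                s.shutdown(socket.SHUT_RDWR)
            except OSError:
                pass

def port_forward(listen_port, target_host, target_port):
    """Listen on listen_port and relay each connection to target_host:target_port."""
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind(("0.0.0.0", listen_port))
    server.listen(5)
    while True:
        client, _ = server.accept()
        upstream = socket.create_connection((target_host, target_port))
        threading.Thread(target=pipe, args=(client, upstream), daemon=True).start()
        threading.Thread(target=pipe, args=(upstream, client), daemon=True).start()

# Inbound HTTP on port 80 relayed to the DMZ web server listening on 8080.
# (Binding to port 80 normally requires administrative privileges.)
# port_forward(80, "172.16.2.100", 8080)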
Allow Traffic for HTTPS

All right, now that we've created a rule that allows traffic for HTTP, we may also want to have a
rule that allows traffic for HTTPS. The same thing: we'll accept traffic from the UNSECURE
network to the DMZ, on the HTTPS protocol. We're going to allow that and we will allow that
from any host. Once again we have to specify the server 172.16.2.100. We'll go ahead and keep
the default port for that instead of forwarding the port to a different port number.
Manage Servers in the DMZ

Go ahead and click Apply.


One final rule we may want to add allows the machines inside our network to manage the servers that are in the DMZ. We'll go ahead and add one more rule. This rule goes from our secure LAN to the DMZ, and it allows any traffic at any time so that machines on our private network can reach the servers in the DMZ network and manage them.
We'll go ahead and click Apply.
Setting Priorities of Firewall Rules

We have four firewall rules that we've created. If for some reason one of these firewall rules was
more important or took priority over another, you can also select these firewall rules and move
them up or down. We've created four good firewall rules: they allow our web server in the DMZ to be accessed from the outside world, they let us manage our servers from inside the network, and they block outbound services that we don't want people inside the network using.
Blocking Attacks

One of the other important features of the NSA firewall is its ability to look for specific attacks and block them. In this case we'll look at Attacks, the attack checks that the firewall has built into it. For the WAN security checks, we can block the ability to ping the WAN interface (our 5.1 interface that I mentioned, or the 5.2 or 5.3 aliases), so people can't ping it and discover it. We can enable a stealth mode, and we can block TCP floods. You'll notice some of the other options here: we can block ICMP notifications, we can block fragmented packets, and we even have the ability to detect a SYN attack. We'll go ahead and click Apply to save those changes.
Content Filtering

All right, that succeeded.


The next capability of the firewall is blocking content, or content filtering. Our Network Security Appliance also has a subscription content filtering service which categorizes URLs so that you can block URLs by time of day and category. You also have the ability to list specific URLs that you've determined you don't want, or approved URLs that you want to allow. By providing keywords or specific URLs, you can block content explicitly, aside from the subscription service that you have.
MAC Filtering

The firewall also provides MAC filtering features if you want to allow only certain MAC addresses to connect to the NSA. In this case, MAC address filtering is already enabled. You have the ability to choose 'block and permit the rest', which means the MAC addresses listed are blocked and everything else is allowed. That works if you have a particular machine that you don't want to connect but you want to allow everybody else. Typically, though, that's a little harder to manage; if you want to maintain a very secure network, it's a lot easier to permit only the machines that you want to connect and block the rest.
IP/MAC Binding

You also have the ability to associate an IP address with a MAC address. This is intended to
prevent spoofing. If you know a particular MAC address always uses a particular IP address, you
can specify those associations and block spoofing attacks.
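As a rough illustration of the idea (the addresses and MAC values below are made up, not from the demo), an IP/MAC binding check is just a lookup of the expected pairing:

# Expected IP-to-MAC bindings (example values only).
BINDINGS = {
    "192.168.0.200": "00:11:22:33:44:55",
}

def looks_spoofed(src_ip, src_mac):
    """Flag a packet whose source MAC does not match the recorded binding for its IP."""
    expected = BINDINGS.get(src_ip)
    return expected is not None and expected.lower() != src_mac.lower()

print(looks_spoofed("192.168.0.200", "00:11:22:33:44:55"))  # False (matches binding)
print(looks_spoofed("192.168.0.200", "aa:bb:cc:dd:ee:ff"))  # True (possible spoof)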
Session Settings

The firewall also has the ability to set certain Session Settings. Maximum Half-Open Sessions is a typical setting that helps prevent SYN attacks by keeping half-open SYN connections from overwhelming the server and taking up all of its connections.
Summary

Those are some of the features of the firewall on the Network Security Appliance. Specifically, in this demo we created IPv4 firewall rules to allow traffic into our DMZ for the web server and to block outbound traffic for certain services, like SMTP, that we don't want people to use. We also allowed traffic from our LAN to the DMZ so we can manage those servers.
8.3.3 Firewall ACLs

In this video we're going to discuss access control lists, or ACLs.


Role of ACLs

ACLs are used to define firewall rules that filter traffic between network hosts protecting your
network from unauthorized connections and intrusions. An ACL is a list of rules that a packet is evaluated against to determine whether that packet is allowed through the network. By defining ACLs and then applying them to firewall interfaces, you tell the
firewall to compare incoming packets against the conditions that you define in your ACL ruleset.
For example, you could specify that any computer on the sales subnet should not be allowed to
initiate a connection with a host within engineering. If the firewall receives a packet from the
sales subnet and it's addressed to a host on the engineering subnet it's going to drop that packet,
effectively blocking communications. Why would you configure firewall ACLs? Well, ACLs are
very useful in situations where you need to restrict access to network resources or servers. In the
example we just looked at, there was no reason for a user in sales to be able to access an
engineering server; so an ACL was used to block access.
You can use ACLs to block access to entire networks or you can block access to a specific host
within a subnet. You can even block access to a specific host based on port numbers. This enables
you to allow traffic for certain services while restricting access to other services on the same host.
For example, let's suppose we have a server that's running the Apache Web Service and it's also
running an FTP service which is used to upload content for the web server. We want everybody
in the company to be able to access the web server with their web browser, but we want to restrict
FTP access to just a few individuals.
We could do this by creating firewall rules for ports 80 and 443 for our web server; and ports 20
and 21 for the FTP server. You can also use ACLs to restrict remote access connections. For
example, the secure shell protocol, SSH, is commonly used to remotely access network
infrastructure devices and configure them. Obviously, you don't want just anyone to be able to
establish an SSH connection with your infrastructure devices. What you can do is define an ACL
to restrict SSH traffic and restrict it to just the computers located on the subnet that's used by IT
employees.
There are many other use cases for ACLs far more than we have time to discuss here. Just
remember that whenever you need to restrict network traffic, an ACL can be used to accomplish
this.
How ACLs Function

Let's look at how we do this on this simple network diagram. We have a server off to the right
here: it's an internal web server. We also have a workstation off to the left. In this scenario, we
actually want to prevent this workstation from accessing this web server.
Type/Direction of Traffic

To implement ACLs you have to account for both the type of traffic as well as the direction of
the traffic flow that you're trying to block. In this case we're trying to block the flow of traffic
from the workstation to the web server. The path that a packet takes through the network to get
there goes through this router first, then through the second router, and then down to the network
segment where the web server resides. You might be wondering at this point where the firewalls
reside in this scenario. We're going to assume that these two routers are Cisco routers and that
they can also be easily configured to function as packet-filtering firewalls as well as routers.
In this scenario we have 2 network firewalls running on each of these routers. When designing
your ACLs the direction of the traffic flow that you're trying to block is very important. In this
scenario the traffic we're concerned about comes in on this interface on the first router then it
exits on the second interface on that same device. Then it enters the interface on the second router
here and then it exits on this second interface on this router. As you can see there are actually 4
firewall interfaces involved in this traffic flow.
This is very important because when you build your ACL rules you have to assign those rules to
a specific firewall interface in the direction of the traffic that you're trying to block. In this case
we have 4 options for where to place any ACLs to block this traffic. If we put an ACL on this
firewall interface, it would be an inbound ACL because that's the direction of the traffic flow. On
the other hand if we were to put the ACL on this interface, it would be an outbound ACL because
that is the direction of the traffic flow.
Way Rules are Processed

The same is true on this second firewall.


At this point we need to discuss another very important fact that you have to remember, and that
is that ACL rules are processed in order from top to bottom. For example, let's suppose that we
had an ACL with several rules in it. Each rule is listed on a separate line within the ACL. The
first rule states, 'deny sales from surfing.' The second rule states, 'permit all to surf.' Be aware
that these are simplified rules and we're only using them for demonstration purposes. To actually
implement them, you'd have to use the specific syntax required by the firewall that you're actually
implementing.
In this scenario if the salesperson tries to access the Internet to surf websites through the firewall
this first rule in the ACL will be processed first to see if the conditions match. If they do, then the rule will be applied. If not, it will be skipped and the next rule will be evaluated. In this
scenario, a salesperson is trying to surf the web so both conditions in this first rule match. The
user is in sales and the user is trying to surf the web, therefore, this rule will be applied and that
traffic is going to get dropped.
Here is an important thing that you have to remember. If a match is made and an ACL rule is
applied, then processing stops for that packet. The rest of the rules in the ACL are not applied.
However, if a match is not made, then the next rule is evaluated, and then the next, and the next,
until a match is made. For example, let's suppose this time that an engineering employee tries to
surf the web through this firewall. The traffic will again be compared against the ACL rules. The
first line will be processed, which says, 'deny sales from surfing.' Is this a sales employee
attempting to surf? Nope, it's not. The first rule does not match so it's not applied, therefore, we
will move down to the second rule within the ACL; permit all to surf.
This time the rule does match because it matches all employees, not just those from a specific
department. Therefore, the engineering employee will be allowed to surf the web. Here is another
important thing to remember. It is possible for none of the rules within an ACL to match the
traffic being evaluated. What do we do in this situation? With just about every type of firewall
implementation, a packet that does not match any firewall rules will be automatically dropped.
This is because almost every firewall includes an implicit deny at the end of every ACL. Because
it's implicit, it's not actually listed within the ACL list of rules, however, it is still there even
though you don't see it.
Therefore, any traffic that doesn't match any of the rules within the list will be automatically
denied. For this reason, you need to make sure that each ACL that you create has a permit rule in
it somewhere. If you don't, no traffic will ever be able to get through the ACL. Another thing to
remember is the fact that the order of the rules within the ACL is also critical. Recall earlier that
we said that if a match is made on a particular rule within an ACL, that rule is applied and then
processing stops. Any rules that come after the matching rule are not applied.
For example, let's swap the 2 rules we defined earlier within this ACL. The 'permit all to surf'
rule comes first and the 'deny sales from surfing' rule comes second. What's going to happen
now? Let's suppose that same salesperson now tries to surf the web through this firewall with
this new version of the ACL. It's going to evaluate the traffic against the conditions in the first
rule, 'permit all to surf.' Since this rule always matches, it's applied and the sales traffic is
permitted. The traffic is never processed against the second rule within the ACL even though this
specific traffic matches the second rule and would be denied by it if it were applied.
Likewise, if a marketing employee tries to surf the web, that first rule matches again because it
says 'permit all,' therefore all employees will be allowed to surf regardless of which department
they're in. No matter what, this second rule will never actually be processed because it's preceded by a more broadly-scoped, globally-applied rule. Here is the key thing you need to remember from this example: place the narrower, more specific rules at the top of the ACL and your broader rules at the end.
Think of the broader rules as a safety net at the bottom of the ACL that catches everything else
that made it through the more specific rules up at the top. In this example, the 'deny sales' rule
should come before the 'permit all' rule; otherwise, it does absolutely nothing in the ACL because
it's never going to be processed.
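To tie the processing rules together, here is a small Python sketch of first-match ACL evaluation with an implicit deny at the end. The rule format and the evaluate function are our own simplification for illustration; as noted above, a real firewall uses its own syntax.

# Each rule: (action, department, activity). "any" matches everything.
ACL = [
    ("deny",   "sales", "surf"),   # specific rule first
    ("permit", "any",   "surf"),   # broad "safety net" rule last
]

def evaluate(department, activity):
    """Return the action of the first matching rule; implicit deny if none match."""
    for action, dept, act in ACL:
        if dept in ("any", department) and act in ("any", activity):
            return action
    return "deny"  # the implicit deny that ends every ACL

print(evaluate("sales", "surf"))        # deny   (first rule matches)
print(evaluate("engineering", "surf"))  # permit (falls through to the broad rule)
print(evaluate("engineering", "ftp"))   # deny   (no rule matches -> implicit deny)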
Summary

That's it for this lesson. In this lesson we reviewed the role of ACLs within a firewall. We first
discussed how ACLs work. Then we discussed several design considerations when deploying
ACL rules including the type of traffic, the direction of the traffic, the location of the traffic and
the way rules are processed to block or allow traffic.
8.3.4 Creating Firewall ACLs

In this demonstration, we're going to create a firewall ACL or access control list.
Firewall Troubleshooting

First, just so we can see the difference, I'm going to show you what happens before we configure
the firewall, so I'm going to do a ping 192.168.1.10, which is the IP address of the computer
we're going to be configuring. You can see we get a request timed out. At this point, we don't
know exactly what's blocking the communication. It could be that there are network connectivity
issues, or it could be a firewall. Very often, I'll sit, look at things, and say, "hey, the math looks
good. Everything looks good, but the ping isn't going through," and in that case, it's time to check
the firewall.
If you can only ping in one direction, then you very clearly have a problem with a firewall, because there's no such thing as a one-way ping; the ping goes round trip. If I can ping from computer A to computer B but not from B to A, that indicates there's a problem with a firewall.
Configuring a Firewall

Now let's look at the firewall. I'm here on my server, which has an IP address of 192.168.1.10,
and I'm going to go into my search and just type in "fire." We're going to open up the Windows
Firewall with advanced security. There is a very basic firewall, but we want to look at an ACL.
So to directly configure ACLs on a Windows box, you'll want to make sure that you go into the
Windows Firewall with advanced security. I'm going to open that up.
Let's take a look at this. Firewalls often have either an implicit allow or an implicit deny. An
implicit allow means if it's not specified, it's going to be allowed. If there's no rule that covers it,
it's allowed. An implicit deny is exactly the opposite. If there's no rule that covers it, then the
traffic is denied. We can see here that in Windows we have different profiles: domain profile,
private profile, and a public profile. The firewall is on for all three of these profiles. Inbound
connections that do not match a rule are blocked, so all three of these profiles have an implicit
deny on inbound traffic. If I haven't specified it's allowed in, the answer is no. You can see that
outbound connections that do not match a rule are allowed, so there's an implicit allow on
outbound, and an implicit deny on inbound.
When I start setting up my rules, I have to choose if it's going to be an inbound or an outbound
rule. Since we know inbound connections that don't match a rule are being blocked, we know
that what we need to make our ping work is an inbound rule. The method that we're going to go
through to create the rule is identical for inbound and outbound; it's just determining which of
these two folders you are going to click on.
I'm going to go ahead and right-click 'Inbound Rules,' and do a new rule. It comes up and asks
for the type of rule to be created. It could be for a program, in which case it'll allow that program
to come in. It could be a port, and I could specify the ports. Maybe I can use one of the predefined
rules and find something that works for me, or if I want the most flexibility, I can do custom. You
can probably see over here I've got very little on there. A little bit more with program, but if I
click in custom, I'm going to have the most choices because it's going to ask me to define
everything that I'm interested in with respect to that traffic. We'll do a custom rule and then go
ahead and click next.
Here you can choose whether the rule will apply to all programs or to a specific program. I can
even hit customize and have it apply to a particular service if I want to. In our case, we're not
interested in a specific program, so I'm going to leave it on all programs and hit 'Next.' Now, it
wants to know is it a particular protocol, so it could be anything, it could just be TCP, it could
just be IP version 6. A whole bunch of options in here. I'm going to go ahead and say that what
we're interested in, is ICMP version 4, and in this particular firewall, if I do specify ICMP, I can
click customize and talk about exactly which type of ICMP traffic I'm interested in.
It doesn't have to be all of it. I can say, "I just want to allow ping in," which is the echo request.
Here I'm being very specific. When you're programming a firewall ACL, you want to be very
specific about the traffic. If you're vague, you might open up too big a hole in the firewall, or if
you don't exactly pinpoint what you want to talk about, it might be too narrow. When you're
setting these things up, you want to know exactly what type of traffic you need to allow or deny.
Click Okay.
I've specified the protocol type is ICMP version 4. If I chose a protocol that had ports, I could
then specify the specific ports or port range. I've narrowed down the type of traffic we're talking
about, so I'll click 'Next.' Now, I can also limit the scope, so I could say, "Hey, these only apply
to specific, local IP addresses." Right now, it's set to any. I could actually just specify the IP
address of my client if I want just that one machine to be able to ping this particular server. I can
also specify remote IP addresses it applies to, or I can go in and say, "Hey, it applies to everything
that's coming from the local area network, coming from remote access, coming from wireless,
however I want to do it."
We're going to leave this wide open and say that our server is going to respond to ping from any
IP addresses, but if we were setting this up, let's say, specifically for DNS from a particular server,
then I might choose TCP traffic, port 53, and then particularly specify the address I'm interested
in. Again, it's best to know exactly which traffic you're talking about and be as specific as
possible. Now that we've talked about the type of traffic that we're going to allow or block, what
are we going to do? Notice the default is allow, which is what I'm interested in setting up, but if
I were working on the outbound traffic, where there's an implicit allow, I might be setting up a
rule to block the connection.
It depends on what's going on. In the Windows firewall, the outbound traffic has an implicit
allow. Most firewalls work with an implicit deny where if you don't make a rule that specifies
the type of traffic, then the answer is no. So most of the rules you'll be making there would be
allow rules. But again if there is an implicit allow, then you would be making blocked
connections. To which of the profiles does this apply? I'm going to leave it wide open because
this is just a test, but in real life, you want to assign it to the profiles that match whatever traffic
you're trying to talk about.
Domain profiles apply when the computer's connected to a corporate domain. We have a
standalone computer, so domain profile is not in play. Private, when it's connected to a private
network, and I have specified that it's private. Public when it's connected to a public network.
I'm just going to go ahead and click 'Next.'
Finally, I give my rule a name. I'm going to call it "Ping OK," but you want to name your rules so that you can look at a rule and know what it actually does. Then I'll click
'Finish.' You can see my rule has come in here at the top of my inbound rules.
Now let's go back to our client, and see what's happened now that we've changed the firewall
ACL on the server.
Testing the Firewall

I'm back in my client here. I'm just simply going to hit the UP arrow, try my ping again, and you
can see that now it's successful.
Customizing a firewall ACL is not very difficult. It's just a matter of creating the rules you want.
You need to know if you're starting with an implicit allow or an implicit deny, meaning whatever's
not specified, is it going to be allowed or denied, and then you will create rules that will apply to
specific traffic, being as specific as you can be in identifying the traffic you wish to allow or
deny. Also know the type of traffic you're looking for: is it a particular protocol, a particular program, particular ports, or a particular IP address?
Summary

In this demonstration, we created an ACL on our firewall.


8.3.7 Configuring a Proxy Server

In this demonstration, we're going to practice implementing a proxy server, and we'll do it in the
network configuration that you see here. We have our internal network over here that has
workstation hosts that need to access hosts out here on the Internet. In order to do that, what we're
going to do is actually push all those requests into a proxy server, which will sit between our
internal network and our boundary firewall.
What a Proxy Server Can Do

A proxy server can be configured to do a lot of different things. For our purposes, we're going to
configure our proxy server to do two things. Number one, we're going to configure it to cache
frequently used content, so that if we have multiple hosts over here that access the exact same
content off of the Internet, the proxy server can just pull it out of its local cache instead of
continually going out and getting the same information redundantly, over and over and over, from
the Internet.
The other thing we're going to do is configure this proxy server to filter content. In other words,
we want it to inspect these IP packets that are coming back, look at the payload within those
packets, and see whether or not it meets our organization's security policy.
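To make those two tasks concrete, here is a minimal Python sketch of what a caching, filtering proxy does conceptually: serve repeated requests from a local cache and refuse URLs that match a blocked category. This is purely illustrative; it is not how the appliance in this demo is implemented, and the blocked-keyword list is an assumption.

import urllib.request

CACHE = {}                                   # url -> response body
BLOCKED_KEYWORDS = ["gambling", "dating"]    # example policy, not from the appliance

def fetch(url):
    """Return the content for a URL, filtering blocked content and caching the rest."""
    if any(word in url.lower() for word in BLOCKED_KEYWORDS):
        raise PermissionError("blocked by content filter: " + url)
    if url in CACHE:
        return CACHE[url]                    # served from cache; no trip to the origin server
    body = urllib.request.urlopen(url).read()
    CACHE[url] = body
    return body

# The first call goes out to the Internet; the second identical call is answered from the cache.
page = fetch("http://example.com/")
page_again = fetch("http://example.com/")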
Configuring Server to Cache Frequently Used Sites
Let's go ahead and do that.
I'm on a workstation now that's on that internal network, and I'm going to connect to the
configuration interface of our proxy server. We do that in a Web browser. The URL we need to
go to is http://proxy.corpnet.com. We'll need to log in. This particular proxy server is actually an
appliance that can do a lot of other things besides just being a proxy server. You can load the
various applications you want to implement on it over here. We're just going to focus on the
proxy server components.
The first thing we want to do is implement our Web cache over here. This will cache frequently
used content from the Internet locally on the proxy server, so let's hit that to install it. Let's go to
our settings. By default, it's going to cache pretty much everything. The status over here shows
that nothing's been cached yet, because we haven't put any traffic through it, but as we do, these
statistics will increment.
Notice down here it tells us that if the content that's stored in this cache becomes corrupt or old
or otherwise unusable, then we can come down here and clear the cache. It warns us that if
we do this, we're actually going to kill any existing Web sessions that are going on, and they want
us to mark that we understand that this is going to happen before we clear it. We're not going to
worry about that right now, but do be aware that that option is available.
Be aware that the proxy server should, by default, periodically check the pages that it has in
its cache against the versions out on the Internet and make sure that they haven't been updated,
and if they have, pull down fresh content. However, there could be a little bit of a time lag
between updates, in which case you might need to go in and clear the cache and let it start
rebuilding again.
Cache Bypass

There's another option over here called cache bypass. You can add different websites that you
want internal hosts to be able to access directly off of the Internet and not look inside the cache
on the proxy server for that content. You can notice here that one website has been added to our
cache bypass by default, and it is maps.google.com. If we wanted to add additional sites, we
could do that by clicking add. We're happy with what we've got now, so I'm just going to click
'OK,' and let's turn the service on.
The button turns green, so at this point we know that it is caching content from the Internet.
Filter Content

We've taken care of our first task at this point. The second task that we wanted this proxy server
to do is to control access to content on the Internet. In order to do this, we need to install a Web
filtering application on the proxy server. There are actually two different versions you can pick
from, this is a paid version, this is a free version; for our purposes today, let's just go ahead and
add the free version.
I click it to install it. Now that our web filtering application has been installed on the proxy server,
let's go to settings to configure how it's going to work. Notice here on the block categories tab
that we have several different categories of websites that we can block, and we can decide one
of two things with each category of sites. Do we want to block it or just flag it? If you flag it,
access will be allowed, but an entry will be created in the proxy server log for the administrator
to look at. If you block it, then it will, of course, actually be blocked when an end user tries to go to
that site.
Let's say for our organization we don't allow employees to access gambling sites. We don't allow
them to access dating sites. Notice, by the way, if I block, it automatically flags the site as well,
so that if a user does try to go to, say, a gambling website, that it will create an entry in the proxy
server's event log. We also do not allow access to hacking sites, hate and aggression sites, and
illegal drug sites. We probably will allow them to go to job search sites. We will block access to
pornography and proxy sites. We will allow access to shopping and social networking. We will block
access to sports and violence. We will allow access to Web mail sites. Let's go ahead and apply
these changes.
Editing Allowed/Blocked Sites

By the way, you can come over here and edit the various parameters for a particular category by
clicking this edit button right here. There may be particular sites that you want blocked that don't
fit in one of these categories. If that's the case, you can come over here to block sites, and then
manually add a particular site. For example, let's say we need to block access to go.com. Let's go
ahead and add that, and apply the change. Now go.com is added to our proxy server's block list.
Be aware that there may be specific sites that do fall within one of these categories that we've
blocked, say, hacking. Maybe we have an organization that deals in software development, and
we have programmers that do need to go out and look at certain security-related sites so they
know what they need to watch for when they program their applications. If this is the case, and
we do need to grant them access to these sites, we come over here to pass sites and add that.
When we do, it'll allow access through the proxy server to a site that is technically blocked by
the blocked categories.
For example, one great site is called insecure.org. That's where you can get the Nmap security
utility. We'll go ahead and add insecure.org to our pass sites. We can apply it. We can also grant
specific clients on the private network access through the proxy server to bypass our blocked
categories. For example, we could add a host that has an IP address of 192.168.2.22, and we will
allow that host to pass through the proxy server without being filtered.
We'll apply our changes.
Test Parameters

Let's go ahead and test whether or not our proxy server is doing what it's supposed to do. Let's
go over here and open up a new tab. Let's just Google for online gambling. Let's pick this link
down here, #1 online gambling sites guide for 2015. It is being blocked. That's exactly what it
should do, because we specified that gambling sites are not allowable according to our
organization's security and acceptable use policy, so they're blocked.
Let's try accessing the site that we added to our allowed site list which was insecure.org. We are
allowed through the proxy server to that particular site.
Summary

That's it for this demonstration. In this demo, we looked at configuring a proxy server. We first
configured the proxy server to cache Web content, and then we configured it to filter Web content.
8.3.8 Firewall Design and Configuration Facts
A demilitarized zone (DMZ), also called a screened subnet, is a buffer network (or subnet) that
sits between the private network and an untrusted network (such as the Internet).
• The DMZ is created using the following configurations:
• Configure two firewall devices: one connected to the public network and one connected
to the private network.
• Configure a single device with three network cards: one connected to the public network,
one connected to the private network, and one connected to the screened subnet.
• Configure a single device with two network cards: one connected to the public network
and another connected to a private subnet containing hosts that are accessible from the private
network. Configure proxy ARP so the public interface of the firewall device responds to ARP
requests for the public IP address of the device.
• Publicly accessible resources (servers) are placed inside the screened subnet. Examples
of publicly accessible resources include web, FTP, or email servers.
• Packet filters on the outer firewall allow traffic directed to the public resources inside the
DMZ. Packet filters on the inner firewall prevent unauthorized traffic from reaching the private
network.
• If the firewall managing traffic into the DMZ fails, only the servers in the DMZ are subject
to compromise. The LAN is protected by default.
• When designing the outer firewall packet filters, a common practice is to close all ports
and open only those ports necessary for accessing the public resources inside the DMZ (see the sketch after this list).
• Typically, firewalls allow traffic originating in the secured internal network into the DMZ
and through to the Internet. Traffic that originates in the DMZ (low security area) or the Internet
(no security area) should not be allowed access to the intranet (high security area).
Do not place any server in the DMZ that doesn't have to be there.
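As an illustration of the outer-firewall practice described above, here is a minimal sketch of what the packet filters might look like on a Cisco packet-filtering router. The addresses, interface name, and ACL number are hypothetical; your own DMZ addressing and interfaces will differ.

! Hypothetical example: public web server in the DMZ at 203.0.113.10
access-list 110 permit tcp any host 203.0.113.10 eq 80
access-list 110 permit tcp any host 203.0.113.10 eq 443
! All other inbound traffic is dropped by the implicit deny at the end of the list
interface GigabitEthernet0/0
 ip access-group 110 in

Applying the list inbound on the public-facing interface admits only web traffic destined for the DMZ server; everything else falls through to the implicit deny.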
There are two types of firewalls that you can implement:
• A routed firewall is also a Layer 3 router. In fact, many hardware routers include firewall
functionality. Transmitting data through this type of firewall counts as a router hop. A routed
firewall usually supports multiple interfaces, each connected to a different network segment.
• A transparent firewall, also called a virtual firewall, operates at Layer 2 and is not seen as
a router hop by connected devices. Both the internal and external interfaces on a transparent
firewall connect to the same network segment. Because it is not a router, you can easily introduce
a transparent firewall into an existing network.
Firewalls use access control lists (ACLs) to manage incoming or outgoing traffic. You should be
familiar with the following characteristics of an ACL:
• ACLs describe the traffic type that will be controlled.
• ACL entries:
• Describe traffic characteristics.
• Identify permitted and denied traffic.
• Can describe a specific traffic type, or allow or restrict all traffic.
• When created, an ACL usually contains an implicit deny any entry at the end of the list.
• Each ACL applies only to a specific protocol.
• Each router interface can have up to two ACLs for each protocol: one for incoming traffic
and one for outgoing traffic.
• When an ACL is applied to an interface, it identifies whether the list restricts incoming or
outgoing traffic.
• Each ACL can be applied to more than one interface. However, each interface can have
only one incoming and one outgoing list.
• ACLs can be used to log traffic that matches the list statements.
Many hardware routers, such as those from Cisco, also provide a packet filtering firewall. These
devices are frequently used to fill both network roles (router and firewall) at the same time.
When you create an ACL on a Cisco device, a deny any statement is automatically added at the
end of the list (this statement does not appear in the list itself). For a list to allow any traffic, it
must have at least one permit statement that either permits a specific traffic type or permits all
traffic not specifically restricted.
There are two general types of access lists used on Cisco devices (a configuration sketch follows the table):
Access List Type Characteristics
Standard ACL Standard ACLs:
• Can filter only on source hostname or host IP address.
• Should be placed as close to the destination as possible.
• Use the following number ranges:
• 1–99
• 1300–1999
Extended ACL Extended ACLs:
• Can filter by:
• Source IP protocol (IP, TCP, UDP, etc.)
• Source hostname or host IP address
• Source or destination socket number
• Destination hostname or host IP address
• Precedence or TOS values
• Should be placed as close to the source as possible.
• Use the following number ranges:
• 100–199
• 2000–2699
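Here is a hedged sketch contrasting the two list types; the addresses, list numbers, and interface names are hypothetical, chosen only so that the numbers fall within the ranges shown above.

! Standard ACL (1-99): filters on source address only; place close to the destination
access-list 10 permit 192.168.1.0 0.0.0.255
interface GigabitEthernet0/1
 ip access-group 10 out
!
! Extended ACL (100-199): filters on protocol, addresses, and ports; place close to the source
access-list 120 permit tcp 192.168.1.0 0.0.0.255 host 10.0.0.5 eq 22
access-list 120 deny ip any any log
interface GigabitEthernet0/0
 ip access-group 120 in

The final deny statement simply makes the implicit deny visible and logs what it drops.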

UNIT-IV

Chapter 11: WAN Concepts

11.1 WAN Concepts

As you study this section, answer the following questions:


• What is the optical carrier specification base rate? Why is the base rate significant?
• What are the differences between T1 and T3? E1 and E3? J1 and J3?
• With WAN technologies, what is a channel and how is it important?
• What is the difference between a packet-switched network and a circuit-switched
network?
• What are the two parts of a CSU/DSU and what functions does each perform?
• Which WAN technology uses fixed-length cells?
• Which WAN technology is a transport technology for carrying signals over fiber optic
cables?
• Which WAN technology can be implemented over regular telephone lines?
• How does MPLS add labels to packets? What are these labels used for?

11.1.1 WAN Structure

A wide area network connects two or more networks over long distances. For example, you might
have offices in two different cities and you want to connect those two networks together to share
data.
WAN Cloud

Most companies can't afford to install the hardware and wire they need to connect networks
together. Instead, they rely on service providers who already have the right equipment and wiring
installed between the two offices. Let's look at a typical structure of a wide area network. In this
example, we have one network in one location that needs to connect to another network in a
distant location.
Service Providers

You can do this by using a wide area connection. Often the network that provides the wide area
connection is represented as a cloud. The cloud is a collection of devices and wiring that connects
two distant locations. We depict it as a cloud because it's managed by the service provider, not
the network administrator. Data is sent from your network into the cloud where it traverses the
service provider's network and eventually comes out the other end. Depending on the provider,
this cloud can use a variety of technologies to provide the connection. For instance, it could use
the public switched telephone network (PSTN). Alternatively, it could use the Internet.
The service provider has offices that interface with the cloud. If the service provider uses the
Internet to provide WAN connections, then this is the local ISP. If the service provider uses the
PSTN, this would be the local exchange carrier (LEC). In some cases, this is also called the
central office, because it is the office that is connected to the main cloud.
Local Loop and Demarc

It's the central office where all the local lines enter and leave the cloud. Your location is connected
to the central office through a line that's called the local loop.
For example, if the provider used the public telephone network, the local loop would be the
copper cabling that connects your phone line to the telephone network. The point where the local
loop enters your business is called the demarcation point, or the demarc. In this example there is
a demarc at this location and at this location. The service provider is responsible for everything
in between the two demarcation points. Your company is responsible for everything at your
location on the other side of the demarc.
CPE

Often the equipment at your location is called customer premises equipment (CPE). This is
important, because if you experience WAN-related problems, you are responsible for fixing
issues with equipment on your side of the demarc. The service provider will support the
equipment and protocols only between the demarc points.
CSU/DSU

Another type of equipment that is often used to connect an organization's network to a wide area
network is called a channel service unit data service unit (CSU/DSU). As its name implies, the
CSU/DSU is two different components. The CSU terminates the digital signal that comes out of
the WAN cloud, while the DSU converts the signal into a format that can be used by routers and
other devices at your location. The CSU/DSU may be a separate device on your side of the
demarcation point. Some service providers provide this device to their customers. From there,
you connect the CSU/DSU to your router and to the rest of your network. Some routers have
integrated CSU/DSUs, in which case your router can connect directly to the line that comes from
the service provider.
The WAN cloud in between is maintained by the service provider and includes the equipment
needed to move the data from one point to another. The way that data moves between locations
depends upon the technology used within the WAN cloud.
Circuit Switching

One method of moving information through the cloud is called circuit switching. Circuit
switching establishes a dedicated path from one end of the cloud to the other. For instance,
suppose you had multiple devices within the cloud that are used to move data from one point to
another. When you establish a connection from one entry point to another, a circuit is established
between the devices in the cloud, from one end to the other. As information is transmitted, all
data travels the same path through the cloud. This is a dedicated connection that stays connected
throughout the entire conversation. It might be a permanent connection, that stays established, or
it might be a temporary connection, where the circuit is established and then taken down when
the transmission is over, much like a phone call. With temporary circuits, a different path might
be established each time that a session is created. Be aware that circuit switching is not widely
deployed and is used with only a few WAN technologies.
Packet Switching

Another method of moving data through the cloud is called packet switching. With packet
switching, the data you're sending is broken down into packets. Each packet is then individually
routed through the WAN cloud. This means each packet could potentially take a different route
through the cloud. For example, this packet might take this path to reach the destination, but a
different packet could take a different path. The exact path through the WAN cloud is not
important. All of the data arrives at the destination.
With packet switching, the service provider must ensure that all packets arrive and that packets
are able to be reassembled in the original order at the destination. Packet switching is used with
all IP-based networks because the data has already been divided into packets.
Therefore, packet-switching WANs are very widely used.
Summary


That's it for this lesson. In this lesson, we discussed what a WAN is and how it works. We then
discussed the different technologies and equipment that can be used to create a WAN.
11.1.2 WAN Technologies

Let's talk about some WAN technologies.


In this lesson we're going to review some of the media and technologies that are used to create a
WAN connection. It used to be that your only option to establish a WAN connection was to use
a modem and go through the public telephone network; this was very slow. Remember, with a
modem we were limited to about 56 kilobits per second, which really isn't fast enough to do any
kind of work. Luckily, we now have a lot more options today, some of which are extremely fast.
T-carrier Systems

First, let's look at the T-carrier System. The T-carrier system isn't really new at all; it's actually
quite old. It was introduced back in the 1960s by some of the major American telephone
companies. It's still widely used in the United States today because it works so well.
Unlike a modem, the T-carrier system is completely digital. It uses two pairs (four wires) of
copper cabling.
One pair of these wires is used for transmitting data, and the other pair is used for reception.
Because the T-carrier system is completely digital, you can mix both voice and data
communications on these two pairs of copper wiring. A T-carrier connection is usually
implemented with 100 ohm twisted pair cabling, but it uses a special type of twisted pair. It's not UTP;
instead it's called individually shielded twisted pair cabling.
The shielded twisted pair that we use for LAN networking is encased within a shield (insulator)
inside the cable. That's not the case with T-carrier cable. Instead of having one shield going
around the entire cable bundle, each individual pair has its own shielding.
There are several different specifications within the T-carrier system that you need to be familiar
with. The first one is called a T1 line.
A T1 line is composed of 24 multiplexed channels on these two pairs of copper wiring. Remember,
we said that we have two pairs of wires: one pair for transmitting and one pair for receiving. That
means you can transmit 24 multiplexed channels at a time to a destination network or you can receive
24 multiplexed channels at the same time. Because transmission and reception are done on two
different pairs, you can transmit data at the same time as you are receiving data.
Each of these 24 multiplexed channels can transfer data at a rate of 64 kilobits per second. If you
do the math, 24 times 64 kilobits per second gives 1.536 megabits per second of usable bandwidth;
an additional 8 kilobits per second of framing overhead brings the total line rate to 1.544
megabits per second.
T1 was fast enough for a lot of organizations for a long time. However, in today's modern
networking world a total throughput of 1.544 megabits per second is probably going to be
inadequate. The one advantage T1 has over other WAN technologies is the fact that you get the
same bandwidth on your uplink connection as you do on your downlink connection.
Many organizations still need faster connections than can be provided by a T1 line. Those
organizations can use a T3 line. The T3 specification is also part of the T-carrier system and it
uses the same basic technology as a T1 connection. The key difference is that it provides many
more channels.
Instead of 24 channels, a T3 line provides 672 DS0 channels, each capable of transferring data
at 64 kilobits per second. This is equivalent to 28 DS1 signals that are used with a T1 line.
Together, these 672 DS0 channels are called a DS3 signal. Because of the number of channels,
you get a much better throughput than a T1 line. A T3 line provides a bandwidth of 44.736
megabits per second.
E-carrier Systems

In addition to the T-carrier system there's another carrier system that you need to be familiar with
called the E-carrier system. The E-carrier system is implemented in Europe and other places
around the world. The T-carrier system is specific to only the United States and a handful of other
countries. Most of the rest of the world uses the E-carrier system.
Remember how we had a T1 line and a T3 line within the T-carrier system? The E-carrier system
uses equivalent E1 and E3 lines.
An E1 line uses 32 channels, each transferring data at 64 kilobits per second. You can see this is
much like a T1 line; it just has a few more channels. If you do the math, multiply 32 times 64
kilobits per second and you'll find that an E1 line provides a total throughput of 2.048 megabits
per second. These 32 channels together form an E1 signal, just as we had a DS1 signal with the
T-carrier system. There is also an E3 line in the E-carrier system. An E3 line carries 16 E1
signals, which (with framing overhead) provide a total bandwidth of 34.368 megabits per second.
As you get into the IT field you'll probably find that most organizations employ some type of T-
carrier (in the United States) or E-carrier (in the rest of the world).
Optical Carrier Levels

In addition to understanding T and E carriers, you also need to be familiar with the Optical Carrier
specifications when you're dealing with WANs. These specifications are used to define the type
and throughput of fiber optic cabling that's used within a SONET. SONET is an acronym that
stands for Synchronous Optical Network. It's really just a series of standards that are used by the
major telecommunication companies to provide high-speed WAN connections.
Associated with SONET are several different optical carrier specification levels that you should
be familiar with. The first one is called OC-1.
OC Level 1, at 51.84 megabits per second, is called the base rate. There are actually many more
OC levels and each level is a multiple of the original OC-1 base rate.
For example, OC-3 transfers data at a rate of 155.52 megabits per second. If you do the math,
you'll see that 3, from OC-3, times the base rate of 51.84 provides a total throughput of 155.52
megabits per second. Likewise, OC-12 transfers data at a rate of 622.08 megabits per second,
that's 12 times the base OC-1 rate. OC-24 transfers data at 1.244 gigabits per second. OC-48
transfers data at twice that speed, 2.488 gigabits per second, all the way up to OC-768, which
transfers data at about 39.81 gigabits per second.
DWDM

Before we finish discussing optical WAN carriers, we need to review Dense Wavelength Division
Multiplexing (DWDM). DWDM is an important technology in a modern fiber optic network.
DWDM is a type of wavelength division multiplexing. It's a technology that uses multiplexing
to combine multiple optical carrier signals onto a single fiber optic cable.
At this point, you should already be familiar with the frequency division multiplexing that is used
with copper wiring. By using different frequencies within the copper wiring we can multiplex
multiple signals onto the same copper wire. We can do exactly the same thing with fiber optic
cabling. However, in this case, it's done using different wavelengths, or colors, of laser light. For
example, we can transmit one signal on this fiber optic cable using a longer wavelength ray of
light indicated by this red wave going through the fiber optic cable.
We can also transmit additional data on the same fiber optic cable at the same time as this signal
right here. We do this by transmitting that information on a different wavelength.
In this example we're transmitting another signal on the same fiber optic cable at the same time
as this first signal, but we're transmitting it with a ray of light that has a much shorter wavelength.
Because these two signals use different wavelengths of light, we're able to transmit multiple
signals at the same time on the same cable. This is really valuable; it enables you to transmit
multiple signals on the same cable at the same time and it allows you to enable bi-directional
communications on the same cable at the same time. In other words, you can send and receive
on the same cable.
WDM uses a multiplexer at the transmitter to join these signals together and transmit them
through the network media. Then it uses a demultiplexer at the receiving end to split that signal
back out again into its separate signals.
CWDM

There are two different types of wavelength division multiplexing: coarse and dense. Coarse or
CWDM (Coarse Wavelength Division Multiplexing) is mostly used in fiber optic Ethernet
networks. For example, 10GBase-LX4 uses coarse wavelength division multiplexing.
There's a second type of wavelength division multiplexing called Dense (DWDM). Both DWDM
and CWDM use WDM technology to transmit multiple light signals simultaneously on the same
fiber optic cable. The difference is that DWDM can carry more channels on a single fiber than CWDM.
Therefore, DWDM is usually used on fiber optic backbones and long distance data transmission
lines.
Another advantage of DWDM is that it is protocol independent and not tied to a particular
transmission speed. Thus, you can use DWDM to carry a variety of other WAN technologies, like
IP, Ethernet frames, ATM, and even SONET. All of these different technologies can be
used with DWDM to provide transmission speeds from 100 megabits per second all the way up
to 2.5 gigabits per second. Another advantage of DWDM is the fact that it can transmit different
types of data at different data rates on the same channel at the same time.
Summary

That's it for this lesson. In this lesson we looked at several different technologies you can use to
establish a WAN connection. First, we looked at the T-carrier and E-carrier systems. We
discussed the OC, Optical Carrier levels. And then we ended by talking about Wave Division
Multiplexing. We looked at both Dense and Coarse wave division multiplexing.
11.1.3 WAN Services

In this lesson, we're going to look at the different types of WAN technologies that are used to
connect the WAN cloud.
PSTN

If you needed to connect one site to another, you could send data over the public switched
telephone network, or PSTN. In this case, you would install a server with a modem at each
location. My network would generate digital data that is sent to the remote access server. The
server uses its modem to translate that data into an analog signal for transmission on the public
switched telephone network. On the other end, the analog signal received from the telephone
network is converted by the modem back to digital data for processing on the remote network.
Using the telephone for data transmission is limited to a transfer rate of 56 kilobits per second.
When you compare this to the transfer rate of your LAN, you see that a dial-up WAN connection
for networks is a very slow solution and would only be implemented when you have very small
amounts of data to transfer. The 56 kilobits per second limitation comes from the local loop that
connects your modem to the telephone network at the central office.
ISDN

Another technology you could use for a WAN is ISDN. ISDN is very commonly used in Europe
and other parts of the world, but is not widely implemented in the United States. ISDN is not the
mechanism used within the WAN cloud itself, but is rather the technology that connects your
location to the WAN service. It may also connect the other location on the other end.
BRI and PRI

ISDN has two main implementations. The first is called Basic Rate Interface, or BRI. BRI uses
plain old telephone service lines, or POTS. These are the regular telephone lines that are already
installed in your location. BRI may be suitable for a small business or a home network, but not
for a large organization. Your organization probably already has a single telephone line that
provides telephone service. Because BRI uses the same type of wiring you can simply switch
your telephone service from the central office of the telephone company to the central office of
the ISDN service provider. You can then send digital signals over the existing wiring. BRI uses
four wires, which should already exist in a normal telephone installation. The only difference is
that an RJ45 connector is used instead of an RJ11 connector so that you don't mistakenly plug a
regular telephone into an ISDN line.
ISDN works by taking the regular copper cable that's used for your telephone line and dividing
it into channels, allowing you to send multiple streams of data along the same physical wire. It's
much like having cable TV where you have multiple channels on the same physical wire. With
ISDN BRI, you have two data channels which are referred to as B channels. Each B channel can
provide 64 kilobits per second of bandwidth.
There is a third channel, referred to as a D channel, which is used for control information, such
as setting up a call and taking down a call. This D channel provides 16 kilobits per second of
bandwidth, but isn't used for transferring network data. The two B channels are the channels that
you actually use for sending data. You'll typically see ISDN BRI listed with a maximum of 128
kilobits per second for data. If you see 144 kbps listed for BRI, then the 16 kbps provided by the
D channel has been added in.
With ISDN BRI you can use each channel separately. For instance, you can use one B channel
for telephone calls and the second B channel for network data. You can also bind two B channels
together to get the full 128 kilobits of bandwidth for data transmissions.
The second implementation of ISDN is the Primary Rate Interface, or PRI.
PRI uses a T1 line, which provides 23 B channels and one D channel, with all channels providing
64 kilobits per second of data transfer. With ISDN PRI, you get a total bandwidth of 1.544
megabits per second. Be aware that when using ISDN PRI, you'll probably need to get a new line
installed into your location.
You can't use your existing telephone wiring.
Frame Relay/CIR


Another solution for connecting two sites together is called Frame Relay. Frame Relay is a packet
switching technology, so it's well suited for data networks. Frame relay uses T1 lines to connect
your location to the WAN cloud. The T1 line provides 1.544 megabits per second of bandwidth.
With Frame relay, a permanent virtual circuit is established through the WAN cloud to the
destination network. Whenever you send data on this connection it will automatically flow
through the permanent virtual circuit to the destination location. These virtual circuits can be
configured in a couple of different ways.
One would be a point-to-point connection between two locations. In this case, all data being sent
from the source arrives only at that destination. If I needed to connect a third location with a
point-to-point connection, I would need a second virtual connection to the second destination
network. As I add more locations, the number of virtual connections increases.
Another option is to configure a multipoint connection. In this case, a single virtual circuit
connects multiple locations within the WAN cloud. So rather than defining three separate virtual
circuits to connect all destination networks, I use a single multipoint connection which can go to
three separate destinations.
When implementing a frame relay network, you need a router and a CSU/DSU. When you sign
up for frame relay service, you get a Committed Information Rate, or CIR. This is a level of
service that defines how much data you can send through the network. When congestion within
the WAN cloud is low, you can probably get more than your committed information rate. You
are guaranteed to have a bandwidth at or above the CIR, but not below that level. However, the
frame relay cloud itself will drop packets as congestion increases. For this reason, when you send
information through a frame relay network, you need to implement error correction and error
recovery on the interface devices so that if data is dropped in the cloud, you have a mechanism
for retransmission.
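On a Cisco router, a point-to-point Frame Relay connection is commonly configured along the lines sketched below. The interface, addressing, and DLCI number are hypothetical, and the exact commands can vary by platform and IOS version.

interface Serial0/0/0
 encapsulation frame-relay
!
! Hypothetical point-to-point subinterface mapped to DLCI 102
interface Serial0/0/0.1 point-to-point
 ip address 10.1.1.1 255.255.255.252
 frame-relay interface-dlci 102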
ATM

Another WAN technology is called Asynchronous Transfer Mode, or ATM. ATM can be used for
a wide variety of purposes, including video, audio, and traditional network data. ATM is a packet
switching technology that takes data and divides it up into packets called cells. Each cell is a
fixed length of 53 bytes. The fixed length of the cells relieves the WAN cloud from having to
figure out how long the data should be. This allows data to be transmitted at a constant rate.
Let's look at an example. Suppose you have an IP network that is connected to your ATM WAN.
In this situation, you would have network IP packets of various lengths. Each packet would have
to be divided into a cell for transmission on the ATM WAN. On the other side, the data must be
reassembled back into the original packet.
You can connect your network to an ATM WAN using a simple network adapter in a router. ATM
is often implemented for time-sensitive network traffic, such as audio and video. Information
within the cell identifies the path to follow through the WAN cloud to the destination. Switches
within the ATM WAN read the information within the cell header to identify where to send the
cells through the WAN cloud. Unlike frame relay, which drops packets when it gets congested,
ATM includes mechanisms to help ensure delivery of cells throughout the network.
SONET

Another WAN technology is called SONET, also referred to as SDH. SONET is a standard for
transmitting data over an optical network. It is referred to as a transport protocol, in that it defines
the structure of the WAN cloud and how information is passed within the WAN cloud. You'll
often find SONET being combined with other types of WAN access. For example, you might
have a dial-up connection to your central office connected to the public switched telephone
network. This dial-up connection gives you an analog signal into the central office, and the
central office will then convert this to a digital signal and put it on a SONET network for
transmission to the destination device.
SONET actually refers to the underlying technology that transports information through the
WAN; it's not necessarily a separate network. In this case we see that the PSTN actually runs on
top of SONET to move data throughout the public telephone system. SONET uses fiber optic
cabling in a series of interconnected rings throughout the SONET network. These are typically
dual counter-rotating rings, meaning there are two rings for every connection. Data flows in
one direction on one ring, and a separate direction on the other ring. If there's a break in one ring,
data can be routed through the other ring so that you have redundancy of service throughout the
WAN cloud.
MPLS

Another WAN technology you need to be familiar with is Multiprotocol Label Switching, or
MPLS. MPLS is not so much a description of a WAN service as it is a description of what happens
in order to prioritize traffic that travels through a WAN. A normal frame consists of the packet
payload, or data, with the IP header information and the MAC address information. With MPLS,
labels are inserted between the IP header and the MAC address information; these labels identify
characteristics about the data that needs to be sent. Special routers on the edge of an MPLS
capable network insert labels within the packet. Switches within the WAN cloud then use the
label information to route data to the destination. At the end device, the additional header
information that has been inserted is removed and the frame is sent on to the normal network.
The labels used with MPLS can prioritize data as it moves through the cloud. A label can identify
data that is more sensitive and has a higher priority, or it can simply identify the destination
of the message.
Metro Ethernet

The last WAN service we'll talk about is called Metro Ethernet. Metro Ethernet uses a collection
of routers and switches, typically connected by fiber optics, to create a metropolitan area
network, or MAN, within a city. Metro Ethernet usually uses a star or mesh physical topology to
connect businesses to each other and to the Internet.
In a typical Metro Ethernet configuration, the two locations would be connected to the MAN
using an Ethernet line, for example fiber. These two locations would then be given a dedicated,
point-to-point connection to each other. Because these routers and switches are typically
maintained by one entity, the connection between the two sites is fast and reliable.
Metro Ethernet services are usually offered by ISPs and are typically built on existing MPLS
infrastructures. They also tend to be a lot cheaper and have the potential to be much faster than
other WAN services.
Summary

The type of WAN service you sign up for depends mostly on the speed of the connection that
you want, and the type of services available in your area. Of the technologies we have discussed,
PSTN and ISDN BRI for WAN connections offer the slowest data transfer rates and are no longer
commonly used for connecting two remote sites together. Frame Relay and ATM are more
common services for establishing WAN connectivity for your network. SONET is typically used
within the WAN cloud for moving data over fiber optic links, while MPLS is a service that lets
you prioritize traffic and is used within the WAN cloud for moving packets based on labels.
Metro Ethernet is primarily used to connect two sites within the same metropolitan area.
11.1.4 WAN Media Facts

WANs can be implemented using a variety of technologies, each with its own unique
characteristics. When you contract for WAN services, you need to analyze your bandwidth
requirements and then choose the appropriate technology. The table below describes several
common WAN technologies:
Carrier Speed Description
POTS 56 Kbps • POTS stands for Plain Old Telephone Service, and it uses analog
phone technology.
• Existing wires use only one twisted pair.
• Analog signals are used through the local loop.
T1 1.544 Mbps • T-carrier is a digital standard widely deployed in North America.
• T1 lines usually run over two pairs of shielded twisted pair (STP) cabling, but they can
also run over other media like coaxial, fiber optic, or satellite. T3 lines usually run over fiber
optic cable.
• A T1 line has 24 channels that each run at 64 Kbps. A T3 line has 672 channels that each
run at 64 Kbps.
• A T1/T3 connection requires a Channel Service Unit (CSU) and a Data Service Unit
(DSU). A DSU reads and writes synchronous digital signals, and a CSU manages the digital
channel.
• To connect routers by using their CSU/DSU interfaces, you can use a T1 crossover cable.
• T3 is also known as a Digital Signal 3 (DS3).
T3 44.736 Mbps
E1 2.048 Mbps • E-carrier is a digital standard very similar to T-carrier, but it is widely
deployed in Europe.
• An E1 line has 32 channels that run at 64 Kbps. An E3 line transmits 16 E1 signals at the
same time.
• E1/E3 connections require a CSU/DSU.
E3 34.368 Mbps
J1 1.544 Mbps • J-carrier is a digital standard very similar to T-carrier, but it is widely
deployed in Japan.
• A J1 line is virtually identical to a T1 line. A J3 line has 480 channels that each run at 64
Kbps.
• J1/J3 connections require a CSU/DSU.
J3 32.064 Mbps
OC-1 51.84 Mbps • Optical carrier specifications define the types and throughput of fiber
optic cabling used in SONET (Synchronous Optical Network).
• Each OC level is a multiple of the base rate (OC-1). To get the 622.08 Mbps throughput
rating of OC-12, multiply the 51.84 Mbps base rate by 12.
• Optical carriers use the following types of Wavelength Division Multiplexing (WDM) to
increase capacity of communication over fiber optic cabling:
• Coarse Wavelength Division Multiplexing (CWDM) (used in 10GBase-LX4 Ethernet
networks)
• Dense Wavelength Division Multiplexing (DWDM) (used in fiber optic backbones and
long distance data transmission, with speeds up to 400 Gbps)
OC-3 155.52 Mbps
OC-12 622.08 Mbps
OC-24 1244.16 Mbps
OC-48 2488.32 Mbps
OC-192 10 Gbps
OC-256 13.271 Gbps
OC-768 39.813 Gbps

11.1.5 WAN Facts

A typical wide area network uses the following components:


Component Description
WAN cloud The WAN cloud is the collection of equipment that makes up the WAN network.
The WAN cloud is owned and maintained by telecommunications companies. It is represented
as a cloud because the physical structure varies, and different networks with common connection
points may overlap. As a network administrator, you probably don't know where data goes as it
is switched through the cloud. The important thing is that data goes in, travels through the line,
arrives at its destination, and remains secure throughout the process.
Central Office (CO) The central office is a switching facility connected to the WAN, and it is the
nearest point of presence for the WAN provider. It provides WAN cloud entry and exit points.
Local loop The local loop is the cable that extends from the central office to the customer
location. The local loop is owned and maintained by the WAN service provider. It typically uses
UTP, but it can also be implemented using fiber optic cabling or other media. The local loop is
often referred to as the "last mile," because it represents the last portion of the WAN up to the
customer premises.
Demarcation point (demarc) When you contract with a local exchange carrier (LEC) for
data or telephone services, they install a physical cable and a termination jack onto your premises.
The demarcation point marks the boundary between the telco equipment and your organization's
network or telephone system.
• Normally, the LEC is responsible for all equipment on one side of the demarc, and the
customer is responsible for all equipment on the other side of the demarc.
• The demarc is also called the minimum point of entry (MPOE) or the end user point of
termination (EU-POT).
• The demarc is typically located on the bottom floor of a building, just inside the building.
For residential service, the demarc is often a small box on the outside of the house.
Customer Premises Equipment (CPE) Devices physically located on the subscriber's
premises are referred to as the customer premises equipment. CPE includes both the wiring and
devices that the subscriber owns and the equipment leased from the WAN provider. CPE can
include the smart jack, demarc, local loop, copper line drivers and repeaters.
Channel Service Unit/Data Service Unit (CSU/DSU) A CSU/DSU converts the signal
received from the WAN provider into a signal that can be used by equipment at the customer site.
A CSU/DSU is composed of these two separate devices:
• The CSU terminates the digital signal and provides error correction and line monitoring.
• The DSU converts the digital data into synchronous serial data for connection to a router.
The CSU/DSU might be two separate devices, one combined device, or integrated into a router.
WANs employ one of the two following methods to transfer data:
Method Description
Circuit Switching A circuit-switched network uses a dedicated connection between sites.
Circuit switching is ideal for transmitting data that must arrive quickly in the order it is sent, as
is the case with real-time audio and video.
Packet Switching A packet-switched network allows data to be broken up into packets.
Packets are transmitted along the most efficient route to the destination. Packet switching is ideal
for transmitting data that can handle transmission delays, as is often the case with web pages and
email.

11.1.6 WAN Service Facts

The following table describes common WAN services that are used to connect two networks
through a WAN:
Service Description
Public Switched Telephone Network (PSTN) The PSTN is the network used for placing local
and long distance phone calls.
• The PSTN is a circuit-switched network; a dedicated circuit is established when the call
is placed and remains in place throughout the call.
• The local loop uses analog signals over POTS (through regular telephone cable wires).
The long distance network typically uses digital signaling over fiber optic.
• End-to-end speeds are limited to 56 Kbps, a restriction imposed by the usage of POTS in
the local loop at each end.
• A modem is required to convert digital signals to analog.
• The PSTN is used by remote access clients as a way to access the network, or as a
temporary or backup connection between sites.
Integrated Services Digital Network (ISDN) ISDN is a WAN technology that provides
increased bandwidth within the local loop. These are two forms of ISDN:
• ISDN BRI (basic rate interface) uses digital signals over POTS. The traditional phone line
is divided into separate channels: two 64 Kbps bearer (B) channels and one 16 Kbps control (D)
channel. ISDN BRI is often called 2B + 1D.
• ISDN PRI (primary rate interface) uses digital signals over a T1 line with 23 64 Kbps B
channels and one 64 Kbps D channel in North America (up to 1.544 Mbps), or over an E1 line
with 30 64 Kbps B channels and one 64 Kbps D channel in Europe (up to 2.048 Mbps). ISDN
PRI is often referred to as 23B + 1D.
ISDN has the following characteristics:
• It is a circuit switching technology.
• It is a local loop technology; when calls reach the WAN cloud, they are converted to
another protocol for transmission through the WAN.
• With ISDN BRI, you can use one channel for voice and one channel for data, or both
channels for different voice calls. Depending on the implementation, you can also bond the B
channels and use them together.
• ISDN PRI requires a CSU/DSU for the T1 line.
Frame Relay Frame Relay is a protocol used to connect to a WAN over dedicated (leased) lines.
• Frame Relay is a packet switching technology that supports variable-sized data units
called frames.
• Frame Relay establishes a permanent virtual circuit between two locations. Because the
circuit is permanent, there is no call setup or termination required.
• Virtual circuits can be configured in two different ways.
• A point-to-point circuit is established between two locations.
• A point-to-multipoint circuit is a single circuit that can be used to reach multiple locations.
• Frame Relay can be implemented over a variety of connection lines (e.g., T1, T3).
• Routers at the customer site connect to the T1 line through a CSU/DSU.
• When congestion occurs, the Frame Relay network simply drops packets to keep up.
Frame Relay networks provide error detection but not error recovery. It is up to end devices to
request a retransmission of lost packets.
• When you sign up for Frame Relay service, you are assigned a level of service called a
Committed Information Rate (CIR). At times, your actual bandwidth could be higher than the
CIR, but the CIR represents the minimum data transmission rate that you are guaranteed to receive
on the Frame Relay network.
Asynchronous Transfer Mode (ATM) ATM is a WAN communication technology originally
designed for carrying time-sensitive data like voice and video. It can also be used for regular data
transport.
• ATM is a packet switching technology that uses fixed-length data units called cells. Each
cell is 53 bytes.
• ATM establishes a virtual circuit between two locations.
• A virtual channel is a data stream sent from one location to another.
• A virtual path is a collection of data streams with the same destination.
• The cell header includes labels that identify the virtual path information. ATM switches
in the WAN cloud use the virtual path to switch cells within the WAN to their destination.
• ATM is connection oriented (compared to Frame Relay, which is connectionless).
Synchronous Optical Networking (SONET) SONET is a subset or variation of the
Synchronous Digital Hierarchy (SDH) standards for networking over an optical medium. It was
originally developed as a WAN solution to interconnect optical devices from various vendors.
• SONET is a packet switching technology that uses different frame sizes, based on the
bandwidth used on the SONET network.
• SONET is classified as a transport protocol, because it can carry other types of traffic,
such as ATM, Ethernet, and IP.
• Most PSTN networks use SONET within the long distance portion of the PSTN network.
• SONET networks use dual, counter-rotating fiber optic rings. If a break occurs in one ring,
data can be routed over the other ring to keep traffic flowing.
• Data rates for SONET can vary from 51 Mbps to about 160 Gbps.
Multiprotocol Label Switching (MPLS) MPLS is a WAN data classification and data carrying
mechanism.
• MPLS is a packet switching technology that supports variable-length frames.
• MPLS adds a label to packets between the existing Network and Data Link layer formats.
Labels are added when the packet enters the MPLS network and are removed when the packet
exits the network.
• Information in the label is used to switch the packet through the MPLS network to the
destination.
• MPLS labels can identify the route or even the network type to use. MPLS labels are often
used to provide different classes of service for data streams.
• MPLS is a connection-oriented protocol.
Cisco routers using MPLS are required to use the Cisco Express Forwarding (CEF) switching
technology.
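As a rough sketch (interface name hypothetical), enabling MPLS forwarding on a Cisco router involves turning on CEF globally and label switching on the core-facing interfaces; an IGP and a label distribution protocol must also be running for labels to be exchanged.

! Enable Cisco Express Forwarding globally
ip cef
! Enable MPLS label switching on the core-facing interface
interface GigabitEthernet0/0
 mpls ip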

11.2 WAN Communications

As you study this section, answer the following questions:


• What is the difference between LCP and NCP?
• In which layer of the OSI model does PPP function?
• Which feature of PPP can detect link errors?
• During PPP configuration, which authentication methods are available?
After finishing this section, you should be able to complete the following task:
• Configure a PPP WAN link.

11.2.1 PPP WAN Connections

In this lesson, we're going to spend some time talking about the point-to-point protocol or PPP.
Function of PPP

PPP is a Layer 2 encapsulation protocol specifically designed to facilitate communications over
a serial leased line. By default, Cisco routers want to use HDLC over a leased line. However,
you're not stuck with HDLC. You can choose to use PPP instead of HDLC if you want to.
PPP vs. HDLC

Using PPP instead of HDLC has several advantages. First of all, PPP is designed to support both
synchronous and asynchronous links, so if you're in a situation where you have an asynchronous
link for your WAN connection or leased line, then PPP is a better option than HDLC. The
second thing is that PPP provides control protocol options that are not available with HDLC.
Here we see a typical PPP frame. Notice that it uses a field named type. The type field allows
multiple types of Layer 3 networking protocols to be passed by the routers over the same WAN
link. The type field basically tells the other end what kind of packet is encapsulated over here in
the payload, within the data field.
The PPP frame also includes a field named control.
PPP Protocols LCP/NCP

PPP uses two key control protocols to establish and maintain the link between devices over a
leased serial line:
* The first one is the Link Control Protocol or LCP. LCP is a Layer 2 protocol that provides
several key functions. First, it exchanges packets periodically in order to establish, maintain, and
then tear down the PPP link. It also detects errors on the link. It's important to remember that
LCP functions at Layer 2 and has nothing to do with Layer 3 protocols.
* For Layer 3, PPP uses the Network Control Protocol, or NCP. NCP is a collection of control
protocols (CPs) that operate at Layer 3, each designed specifically for the network protocol
that it supports. For example, NCP uses a separate control protocol to support each of
the networking protocols that you see here. We have a control protocol called IPCP for IPv4. We
have another one called IPv6CP for IPv6.
In addition to the functions that we just talked about, LCP also provides authentication services
for the PPP link and it does this by authenticating devices that are connected to the link to make
sure that each device is who it claims to be.
Authentication Options PAP/CHAP

PPP authentication is accomplished using either the PAP protocol or the CHAP protocol.
Let's first talk about how PAP works. PAP stands for Password Authentication Protocol. PAP uses
the two-step process that you see here in order to authenticate. In this example, Router B needs
to authenticate with Router A. So, Router B first sends a user name and a password in clear text
to Router A. Router A will take a look at the credentials that it received from Router B and, if
they are correct, then it will acknowledge the successful authentication back to Router B and the
authentication process is complete.
Be aware that the PAP authentication process is not very secure. In fact, I would strongly
recommend that you never use PAP authentication. The problem is that the user name and
password are sent from router to router as clear text. A malicious individual who has been able to
compromise the physical media and sniff frames off of the wire can very easily capture the user
name and password that is used to authenticate the routers, and then they can use that information
to compromise our system.
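For reference only, PAP might be configured on a Cisco serial link roughly as sketched below; the router names, interface, and password are hypothetical. As just noted, the credentials cross the link in clear text, so PAP should generally be avoided.

! On Router A (the authenticator): the credentials Router B is expected to send
username RouterB password letmein
interface Serial0/0/0
 encapsulation ppp
 ppp authentication pap
!
! On Router B: the username and password it sends to Router A
interface Serial0/0/0
 encapsulation ppp
 ppp pap sent-username RouterB password letmein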
CHAP, on the other hand, is a little bit more secure. CHAP stands for Challenge Handshake
Authentication Protocol, and it actually uses a three-step process to obscure the pass phrase that's
being used for authentication. This is done using hashes. This is not a perfect solution from a
security standpoint, but it is much better than PAP because no clear text passwords are sent over
the wire.
CHAP leverages the Message Digest 5 algorithm, MD5, in order to obscure the information that's
being sent between the devices. MD5 is a one-way hash algorithm. When Router B needs to
authenticate to Router A, it first sends a request to authenticate. Router A responds to that request
by sending a challenge containing a random number to Router B.
In order to authenticate, Router B has to do two things:
* First, it must take the random number and append it to the password.
* Then, it creates a hash of this entire string using MD5.
Before we go any further, it's very important that you remember that hashing is not the same as
encryption. Hashing is used basically to ensure that the information received exactly matches the
information that was sent. While a hash cannot be directly reversed to recover the original
information, weak or short inputs can often be recovered through brute-force or dictionary
attacks, which modern computing power makes relatively easy.
Don't confuse hashing with encryption.
This resulting hash is sent from Router B to Router A. Router A will then perform the same
process. It will take the random number that it originally sent to Router B, and concatenate it
with the password, and then use MD5 to create a hash. Then it will compare the hash that it
created with the hash that it received from Router B. If they're the same, then the password is
assumed to be valid and B is authenticated to A. If not, then authentication is denied.
Here's a key point to remember: This hash, as it's being transmitted from B to A, is still sent in clear
text, so it is theoretically possible for somebody to sniff that hash and then try to use it to
authenticate to Router A. CHAP tries to get around this by using a different random number every
time an authentication challenge is sent. Because each challenge is sent with a different random
number, then, theoretically, the hash will be different each time. You could capture a hash and
then try to use it to authenticate later on, but it probably won't work because the challenge was
sent with a different random number.
Be aware that there is a variation of PPP available on many routers, such as those from Cisco,
called Multilink PPP (MLP). MLP is used to aggregate multiple WAN links together into a single
logical channel. By doing this, MLP enables load balancing of traffic across the different WAN
links.
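As a rough sketch only (this is not part of the lesson's demonstrations), bundling two serial links into a single multilink group on a Cisco router generally looks something like the following; the interface numbers and IP address are assumptions for illustration:

interface Multilink1
 ip address 192.168.10.1 255.255.255.252
 ppp multilink
 ppp multilink group 1

interface Serial0/0/0
 no ip address
 encapsulation ppp
 ppp multilink
 ppp multilink group 1

interface Serial0/0/1
 no ip address
 encapsulation ppp
 ppp multilink
 ppp multilink group 1

The multilink interface carries the Layer 3 address, while the member serial interfaces simply join the bundle.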
Summary

That's it for this lesson. In this lesson, we introduced you to PPP encapsulation. We first talked
about the role and function of PPP, then we compared PPP to the HDLC encapsulation protocol.
Then we looked at the different protocols used within PPP, including LCP and NCP, and then we
ended this lesson by talking about the different authentication options that can be used with PPP.
11.2.2 Configuring a PPP WAN Link

In this demonstration, we're going to set up a Point-to-Point WAN connection. We have 2 routers
named router 1 and router 2, and we have a serial connection between the 2 routers. What we
want to do is turn on PPP. We're going to encapsulate traffic at Layer 2 across the serial link.
Between the 2 routers we have that serial link, so we don't have MAC addressing. There are no MAC
addresses there, so we have to put something in its place, right? If you remember, we have to put
PPP.
We're going to jump in and do this. On router 1, I'm going to type show IP interface brief. Make
sure that we are Up/Up on our serial 0/0/0. What I want to do is get into "config t." We're going to
get into interface serial 0/0/0. We are just going to simply say encap PPP. That stands for
encapsulation Point-to-Point Protocol.
I'm going to go ahead and hit Enter. It's going to say it changes to down, and that's right because
one side of our connection is using PPP and the other side is not yet. We're going to jump on the other
router and configure that in just a second. I'm going to go ahead and shut this down. When we
configure PPP we need to cycle the interfaces. I'm going to do a shut on this. Then we're going
to jump over to the other router.
Here on the other router, you can see that everything has changed to down. Again, that's because
we have PPP on one side but not this side. What we want to do is get into our global config.
We're going to get into interface serial 0/0/0 again. We're going to simply say encap PPP, for
encapsulation Point-to-Point Protocol. Hit Enter. I'm going to do a shut on this one. I'm going to
go ahead and say no shut. Turn it right back on because remember we have to cycle it. We need
to jump over to the other router, because if we did a "do show IP interface brief," you can see
that we're Down/Down, because the other side is still in a shutdown state. Let's jump back over
to router 1, turn that interface on and see what we've done.
All right, so we're back here on the other router, router 1. Let's do a "do show IP interface brief"
and you can see that we are administratively down. That means we shut that interface down as
an administrator. What we're going to do is say no shut, and we should see it come back up
now. You can see that it says interface serial 0/0/0 has changed its state to up.
Our line protocol has come up.
We have a Layer 1 up, that's good, and a Layer 2 up. Let me hit Enter. "Do show IP interface
brief," to make sure we're Up/Up, and we are. That's all there is to it. Remember, we get on the
interface and we simply just type encap, which is short for encapsulation. We could've typed the
whole thing out, but all we have to do is put encap PPP and make sure that we have it on both
ends.
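To recap, the commands used in this demonstration boil down to the following sketch (assuming, as in the demo, that the serial interface is 0/0/0 on both routers):

Router1# configure terminal
Router1(config)# interface serial 0/0/0
Router1(config-if)# encapsulation ppp
Router1(config-if)# shutdown

Router2# configure terminal
Router2(config)# interface serial 0/0/0
Router2(config-if)# encapsulation ppp
Router2(config-if)# shutdown
Router2(config-if)# no shutdown

Router1(config-if)# no shutdown
Router1(config-if)# do show ip interface brief

Cycling the interfaces with shutdown and no shutdown, as shown above, forces both ends to renegotiate the link using the new encapsulation.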
Summary

That's it for this demonstration.


In this demonstration, we set up a Point-to-Point WAN connection on our routers.
11.2.3 PPP WAN Connections Facts

A point-to-point WAN link connects two endpoints on a pre-established communications path,
usually through a telephone company. Data is moved through the connection using the Point-to-
Point Protocol (PPP). Be aware that PPP:
• Is a Data Link (Layer 2) protocol designed to facilitate communication over leased lines.
• Can be used on a wide variety of physical interfaces, including asynchronous serial (dial-up),
synchronous serial, and ISDN.
• Defines a frame header and trailer; the header includes a protocol type field.
• Contains protocols that integrate and support higher level protocols.
• Supports multiple Network layer protocols over the same link.
• Supports both IPv4 and IPv6.
• Provides optional authentication through PAP (2-way authentication) or CHAP (3-way
authentication):
• PAP transmits the password in cleartext over the link.
• CHAP uses a hash of the password for authentication. The password itself is not
transmitted on the link.
• Supports multilink connections, load-balancing traffic over multiple physical links.
• Includes Link Quality Monitoring (LQM), which can detect link errors and can
automatically terminate links with excessive errors.
• Includes looped link detection, which can identify when messages sent from a router are
looped back to that router:
• Routers send magic numbers in communications. If a router receives a packet with its own
magic number, the link is looped.
A variation of PPP called Multilink PPP (MLP) is available on some routers. MLP is used to
aggregate multiple WAN links into a single logical channel.
PPP uses these two main protocols to establish and maintain the link:
Protocol Description
Link Control Protocol (LCP) LCP is responsible for establishing, maintaining, and tearing
down the PPP link. LCP packets are exchanged periodically.
• During link establishment, LCP agrees on encapsulation, packet size, and compression
settings. LCP also indicates whether authentication should be used.
• Throughout the session, LCP packets are exchanged to:
• Detect loops.
• Detect and correct errors.
• Control the use of multiple links (multilink).
• When the session is terminated, LCP tears down the link.
A single Link Control Protocol runs for each physical connection.
Network Control Protocol (NCP) NCP is used to agree on and configure Network layer
protocols. Each Network layer protocol has a corresponding control protocol packet. Examples
of control protocols include:
• IP Control Protocol (IPCP)
• IP version 6 Control Protocol (IPv6CP)
A single PPP link can run multiple control protocols—one for each Network layer protocol
supported on the link.
PPP establishes communication in three phases:
1. LCP phase—LCP packets are exchanged to open the link and agree on link settings.
2. Authenticate phase (optional)—Authentication-specific packets are exchanged to
configure authentication parameters and to authenticate the devices. LCP packets might also be
exchanged during this phase to maintain the link.
3. NCP phase—NCP packets are exchanged to agree on which upper layer protocols to use.
For example, routers might exchange IPCP and Cisco Discovery Protocol Control Protocol
(CDPCP) packets to agree on using IP and CDP for Network layer communications. During this
phase, LCP packets might continue to be exchanged.
To configure PPP on the router, do the following:
1. Enable PPP encapsulation on the interface. You must set the encapsulation method to PPP
before you can configure authentication or compression.
2. Select CHAP or PAP as the authentication method.
3. Configure username and password combinations. Keep in mind the following:
• Both routers need to be configured with a username and password.
• The username identifies the hostname of each router.
• The password must be the same on both routers.
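Putting those three steps together, a minimal CHAP sketch for two Cisco routers might look like the following (the hostnames Router1 and Router2, the shared password, and the interface number are placeholders):

! On Router1
hostname Router1
username Router2 password WANsecret
interface serial 0/0/0
 encapsulation ppp
 ppp authentication chap

! On Router2
hostname Router2
username Router1 password WANsecret
interface serial 0/0/0
 encapsulation ppp
 ppp authentication chap

Notice that each router's username entry names the other router's hostname, and the password is identical on both ends.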

11.3 Internet Connectivity

As you study this section, answer the following questions:


• What connection speeds should you expect with a PSTN Internet connection?
• What is multiplexing? How does it increase the bandwidth of a connection?
• How does DSL enable you to talk on the phone and connect to the Internet at the same
time?
• What are the requirements for qualifying for DSL service?
• Which DSL service does not support simultaneous voice and data transmissions?
• What is the difference between BRI and PRI with ISDN?
• What is the difference between a B channel and a D channel?
• What are the disadvantages of a satellite Internet connection?
After finishing this section, you should be able to complete the following task:
• Connect to a DSL network.

11.3.1 Traditional Internet Connectivity

Let's talk about Internet connectivity. In this lesson we're going to cover the different ways you
can connect to the Internet, whether you're connecting from home or from a small office.
In the early days of the Internet, there were very few connection options. You had either a modem
or a T1 line. Modems were really cheap but really slow. T1 lines were really fast but really
expensive.
Luckily, now we have a lot of options to choose from.
DSL

The first option we'll look at is DSL, or Digital Subscriber Line. DSL transmits digital signals
over the same telephone wires used for modem connectivity. However, by using newer
technologies, DSL can achieve much faster speeds. DSL also allows you to use the telephone and
your Internet connection at the same time by using a form of multiplexing that is commonly
referred to as broadband.
Multiplexing lets you send more than one piece of data on the same copper wire at the same time
by transmitting data at different frequencies. For example, we have a low frequency transmission
and a high frequency transmission at exactly the same time, allowing us to send multiple pieces
of data on the same single copper wiring at once. Because DSL uses multiplexing, you can talk
on the phone and use the Internet connection at the same time. On a DSL connection we divide
our single copper wire into two channels. The digital data is transmitted at higher frequencies
above 3.4 kilohertz. Voice data, on the other hand, is transmitted at lower frequencies below 3.4
kilohertz.
In order to establish a DSL connection, you need to call your telephone company and tell them
you want DSL service enabled on your phone line. This can be a problem for many homes,
because the local loop between your home and the central office has to be short enough to support DSL. If
you live too far away, you can't get DSL from your service provider.
Before the development of ADSL loop extenders, your house or office had to be within three to
six miles of the central office to get DSL service enabled. The distance depended on the gauge
and quality of the copper wiring from the central office to your house or office.
In recent years, DSL loop extenders have been implemented. They act as an amplifier by boosting
the signal level so that it can travel longer distances. Today, DSL connections can be established
up to 10 miles away from the central office. After your service is turned on and active, the next
thing you need to do is install a DSL router.
There is a common misnomer in the industry. You'll often hear this device referred to as a DSL
modem. It may even say on the box that you purchased a DSL modem. This really isn't correct.
Remember, a modem is a modulator/demodulator. A modem converts analog signals to digital
signals, and digital signals to analog signals. With DSL you don't have to do that because the
signal is already digital. It's digital all the way. Therefore, we don't need to modulate or
demodulate it, we just need to route it. But just be aware that when they say 'DSL modem' they're
really talking about a DSL router.
This DSL router plugs into the wall jack of your existing telephone wiring using an RJ11
connector. If you look in the back of a DSL router you'll see that there are usually several different
jacks, one of them is an RJ11 jack, which is smaller than the RJ45 jacks.
This brings up an important point. Because the DSL line is multiplexed and you can use your
phone and DSL connection at the same time, you have multiple signals coming in on this wire.
You have your analog voice data as well as your digital DSL data all on the same wire. That means
that you need to filter the appropriate signal for your DSL router and for your phone service. If
you don't, you'll have problems. You're going to get static on your phone call and you'll probably
have problems connecting to the DSL service from your DSL router. This happens because, by
default, both signals are being received at the same time on both types of devices.
What we need to do is filter and split that signal so that the DSL data goes to the DSL router, and
the voice data goes to the telephone. There are a variety of different ways to implement the
splitting and filtering.
One of the simplest ways is to implement a DSL filter on the line between the analog phone and
the RJ11 jack. This will filter out the digital signals on the line, leaving just the analog signals.
Typically, DSL routers have a built-in filter that filters out the analog signals. Without this filter,
you wouldn't be able to use your phone on this line. Another type of filter is a splitter, which
splits the signal for the phone and the DSL router.
Also, on the back of most DSL routers you'll probably see a series of RJ45 jacks. Usually, a
network switch is built into the DSL router itself. Essentially we have two halves to the DSL
router. This half functions on the DSL network provided by the service provider, while this half
is our switch. The DSL router does just what its name implies: it routes data between the DSL
connection here and the standard Ethernet switch. To provide Internet connectivity for these three
workstations over here we simply connect them using a standard Ethernet cable to a port on the
back of the DSL router.
On occasion you will find DSL routers that also include a USB port, which allows a single
computer to connect to the Internet. However, this port isn't used very often because Ethernet
connections are a lot better and allow for much more flexibility as compared to just one single
USB connection.
When you order your DSL service, be aware that there are multiple types of DSL available. They
use different protocols and they provide different features. Probably the most commonly used
version of DSL is called Asymmetric Digital Subscriber Line, or ADSL. ADSL is what you will
probably be using if you sign up for a home or small business DSL connection. The reason it's
called asymmetric is because the upload bandwidth is much slower than the download
bandwidth. If you've signed up for an ADSL connection, you probably noticed that your
download speeds are much faster than your upload speeds.
This actually works well for most home users because they use their download connection
heavily, but don't use their uplink connection very much at all. A business, on the other hand,
may require upload speeds that are just as fast as the download speeds. For these types of
organizations, you can order a Symmetric Digital Subscriber Line, or SDSL. SDSL services
provide a downstream data rate that's exactly the same as the upstream data rate.
When we talk about DSL, we're talking about all these different technologies generically, so we'll
simply refer to it as xDSL. This term refers to all the different flavors of DSL.
Cable Internet

In addition to DSL, there's another type of Internet connectivity provided by cable TV companies
that is popular in the United States. This type of Internet takes advantage of the coaxial cable
used for cable TV that is connected to most houses. Your cable TV line is probably already
multiplexed, that's why you can watch multiple channels of TV on the same coaxial cable. It is
relatively easy for a cable TV service provider to add a couple of extra channels to that cable and
use those channels for data instead of television.
Broadband cable Internet access requires that you implement a cable modem at your location. It
also requires the cable TV provider to implement a cable modem termination system at their
facility. One of the advantages that cable Internet has over DSL is the fact that cable Internet
systems have a much longer transmission range. Cable Internet allows transmission distances up
to about 100 miles. In addition, downstream bit rates tend to be higher than that offered by DSL.
However, just as with DSL, the upstream traffic is usually throttled on a cable line.
One disadvantage to using a cable Internet service is the fact that the bandwidth provided by the
cable is shared by everybody who is on that same segment. More than likely, you're sharing your
Internet bandwidth with everybody else on your street that is using the same cable TV Internet
provider. During non-peak times that is usually not a problem. But during peak times, you may
find that the network is heavily used.
In order to establish a cable Internet connection, the first thing you have to do is install a cable
modem. This cable modem is typically installed as a standalone router. After the service has been
set up by the cable company, the cable modem is connected to the cable TV wall jack using RG-6
coaxial cabling. At this point, you can connect each of the workstations to an available RJ45 jack
in the back of the cable modem. And just as with our DSL connection, the cable modem is really
a router of sorts because it has a built-in Ethernet switch and it has routing functionalities that
route between this Ethernet switch and the cable Internet connection.
You may run into cable modems that don't include Ethernet switch ports. Instead they'll have just
either a single RJ45 jack or a USB port right here that'll allow you to connect one computer to
this cable modem. If this is the case, you can purchase a separate switch and connect it to the one
available cable modem jack, and then connect all of your PC systems to that switch.
Satellite

If you live in a rural area you may not have the option of using a DSL or cable TV provider for
your Internet service. In this situation, you have a few other options to choose from. The first one
is to use a standard modem connection, which is very slow, or you can choose to use a satellite
connection. A satellite Internet connection uses a satellite dish to connect to the Internet. This
can be very useful if you live in a remote location and you don't have any other options for
connecting to the Internet. The hard part about implementing a satellite Internet connection is the
fact that you have to make sure that the satellite dish is pointed right at the satellite in order to
get connectivity. You're also subject to atmospheric interference. For example, if it snows and
that snow collects on your dish, you lose all service.
Satellite Internet access is typically provided using geostationary (GEO) satellites. It's important to
remember that, in order for you to establish a connection with the satellite, you need to have line
of sight access to the satellite from the dish mounted on your house or office building. If there
are trees, buildings or mountains that come in between your satellite dish and the satellite itself,
then it will interfere with the signal and you may get poor reception or no reception at all.
Satellite Internet access does have some advantages: it offers relatively high bandwidth,
and it provides connectivity to places where DSL, cable, or any other type of Internet connectivity
may not be available.
On the other hand, satellite systems have one key disadvantage that you need to be aware of, and
that is latency. Satellite Internet systems have high latency because the signal has to travel a very
long distance from the sender to the receiver. That signal has to travel over 22,000 miles as it
goes up to a satellite in orbit, and then back down to earth again. Even running at the speed of
light it takes that signal several milliseconds to make this trip. The latency on a satellite
connection could be anywhere from 1000 to 2000 milliseconds, depending on the provider and
environmental conditions.
That may not sound like a lot, but consider the fact that a cable system or DSL system usually
only experiences latency of less than 100ms. This latency can cause some Internet services to not
function properly. Any service that is latency sensitive, such as voice over IP, video streaming,
VPN connections, and so on, will not work very well in this type of configuration. In addition to
latency, satellite systems also tend to be a lot more expensive than DSL or cable.
If you decide to implement a satellite-based Internet connection, be aware that there are two
different types to choose from: a one-way system and a two-way system. A one-way system is
the older type. In a one-way system, we have two different links: our upload link and our
download link. The satellite is used for the download link because it has the higher bandwidth.
The upload link, on the other hand, does not go through the satellite. Instead the upload link goes
through a standard modem connection. If you want to use the satellite based Internet connection,
you first have to dial into the service provider over your modem link. At this point you could
send (upload) requests for web pages through the modem link which would then be downloaded
through the satellite link. This system required you to manage two separate links. If one of those
were to go down, then the whole system would stop working.
You can also choose a two-way satellite Internet access system, in which case both the upload
and the download links are transmitted via satellite. This works a lot better than the first system,
where we have to have a modem and the satellite dish going at the same time to make things work.
The disadvantage of the two-way system is that it takes really precise aiming and configuration
for the uplink connection to work. In fact, many of the providers that sell two-way Internet access
via satellite won't let you set up your own dish. Instead, they send out a trained technician to do
it for you so that they can get the aim just right.
Summary

That's it for this lesson. In this lesson, we reviewed traditional Internet connectivity options:
DSL, Cable, and Satellite.
11.3.2 Mobile Internet Connectivity

In this lesson, we're going to look at the cellular option for Internet access. This option is also
called mobile web, mobile data, or mobile broadband. Most cellular providers today offer a data
plan of some sort that gives you Internet access through the wireless cellular network.
The way this is done has evolved over the years. In the past, users would tether their mobile
phone to their computer to get internet access through the cellular network, and this usually still
works.
Today, there are plenty of other options available.
Phone as a Wireless Hotspot

For example, some providers allow you to use your phone as a wireless hotspot. In this scenario,
the mobile broadband data service is provided using a radio signal from a cell tower. We have
our mobile phone here, and then we have our network of PCs and laptops over here that are going
to access the Internet through this phone using its data plan. These phones incorporate Wi-Fi
functionality that allows them to act as a wireless access point on an 802.11 network. This allows
these systems over here to connect via the Wi-Fi network to the cell phone, which routes that
traffic to the service provider's wireless mobile broadband network. This allows these systems
over here to access the Internet through the cell phone. These implementations usually only allow
a limited number (four or five) of systems to connect to the wireless hotspot over the Wi-
Fi network.
Broadband Hotspot Devices

Some providers offer mobile broadband hotspot devices that look like a wireless access point,
but they provide the same function as the cell phone in the previous scenario. They provide
Internet connectivity via an 802.11 wireless network to these PCs and laptops and tablet devices
over here and route data through the device onto the provider's mobile broadband network. These
devices are also usually limited to a maximum of four to five devices connected at once through
the Wi-Fi network.
Advantages

Using mobile broadband for Internet access has a lot of advantages. For example, you get Internet
access without any wires. All of the data comes through the mobile broadband network. You
have Internet access anywhere you can get a cell phone signal.
Disadvantages

That said, there are some disadvantages.
First, the speed you're going to get depends on a lot of different factors. For example, speed
decreases as your distance from the tower increases. Basically, the farther away from the tower
you are, the slower your connection will be. In addition, the speed also decreases if the mobile
device is moving. You'll get a stronger, faster connection if you're standing still rather than
driving down the freeway at 85 miles an hour. Also, the speed will decrease if the mobile
broadband network is being heavily used. This can be an issue in large cities where there are a
lot of users on the mobile broadband network at the same time.
Another thing you need to keep in mind is the fact that mobile broadband tends to be a lot more
expensive. Most mobile broadband providers will allow you a certain amount of download
bandwidth per month for a specific price, and if you go over that they charge extra fees.
GSM/CDMA

With that in mind, let's talk about the two main mobile communication technologies that are used
with mobile networks.
The first technology is called Global System for Mobile Communications, or GSM for short.
GSM was created in Europe and is used by the majority of the world's mobile service providers.
GSM uses time-division multiple access technology to allow multiple connections on the same
frequency.
The second technology is called CDMA. While CDMA is used less throughout the world, it's
used by the majority of service providers in the United States. It uses Code Division Multiple
Access (CDMA) technology to allow multiple connections on the same frequency. With CDMA,
each call is encoded with a unique key and then transmitted simultaneously. The unique key is
then used to extract a single call from the transmission.
The creation of GSM and CDMA marked the second generation, or 2G, of mobile
communication technologies. 2G communications allowed for data encryption and basic data
services, such as SMS text messages and multimedia messages. However, because 2G
communication can't support mobile broadband, it was superseded by the third generation mobile
telecommunications standard, or 3G.
3G/HSPA/LTE

In order to be considered 3G compliant, a mobile broadband service provider must offer a peak
data rate of at least 200 kilobits per second, which isn't very fast. However, most 3G providers
offer a much faster connection than this. Typically, most 3G providers offer speeds of 2-7 Mbps.
GSM and CDMA implemented newer technologies, such as EDGE and CDMA2000, in order to
support 3G speeds. However, 3G speeds are still a bit lacking, especially when using high-
bandwidth applications.
Because of this, several extensions have been implemented over the years to enhance the
performance of 3G.
The first one we need to look at is HSPA+, which stands for Evolved High Speed Packet Access.
HSPA+ is defined in the 3GPP extension Release number 7. It increased the speed available on
the 3G network dramatically. You could have downloads as fast as 84 megabits per second and
uplink speeds of up to 22 megabits per second (depending on the provider).
It does this using MIMO, which stands for Multiple-Input Multiple-Output. Basically, MIMO
uses multiple antennas on both the transmitter side and on the receiving side. This offers
significant increases in data throughput and range without using any additional bandwidth or
increased transmitting power. Using MIMO we can multiply the base data rate offered by the
network by the number of antennas established on the transmitter and on the receiver. By doing
this we can dramatically increase the amount of throughput available on the broadband network.
Sometimes MIMO is called Smart Antenna for this very reason.
Another extension to 3G is called Long-term Evolution or LTE, and this is considered a pre-4G
technology. It was introduced in the 3GPP extension to the 3G network, Release number 8, and
like HSPA it also uses MIMO to dramatically increase the amount of bandwidth available. An
LTE network supports download speeds upwards of 100 megabits per second, and upload speeds
up to 50 megabits per second, depending on the service provider. There is also a new and
improved version of LTE that was released in 2011 called LTE Advanced. LTE Advanced is very,
very fast. Downlink speeds can be upwards of 1 gigabit per second, and the uplink speeds can be
upwards of 100 megabits per second. That's a fast network.
4G

The next type of mobile broadband network that you need to be familiar with is the 4G network.
4G is a successor to 3G, and it's much faster. In order for a mobile broadband network to be
considered 4G compliant it must support a peak speed for a stationary user of 1 gigabit per
second. If a user is in a car or within a building, then it needs to support a speed upwards of 100
megabits per second.
To do this, 4G uses MIMO just like HSPA and LTE. It uses multiple antennas to multiply the
base rate of the network. One of the disadvantages of 4G is the fact that 4G equipment is
completely incompatible with 3G equipment. That means for a service provider to upgrade from
3G to 4G they basically have to rip out all their existing 3G equipment and replace it with 4G.
Likewise, on the consumer end, if you want to upgrade your phone from 3G to 4G you have to
get a new phone, because your 3G phone will not work on the 4G network. Despite this 4G is
becoming very, very popular. In fact, most of the mobile broadband hotspot devices that we
talked about earlier are offered by 4G service providers.
WiMAX

There is a specification within 4G that you need to be aware of called WiMAX. WiMAX stands
for Worldwide Interoperability for Microwave Access. Basically, the goal of WiMAX is to
wirelessly deliver high-speed internet service to large geographical areas. Because it's part of 4G,
it provides speeds upwards of 1 gigabit per second and it uses MIMO in order to increase the
overall throughput of the network.
Just as you can purchase 4G wireless hotspots, you can also buy WiMAX wireless hotspots, and
they work similarly to a 4G wireless hotspot. You have your mobile broadband device router that
gets its signal from the mobile broadband network, which then communicates with the devices
on your internal LAN via a wireless connection.
Summary

That's it for this lesson. In this lesson we discussed mobile broadband Internet connectivity. First,
we looked at the types of devices that can be used to provide mobile broadband access. Then, we
looked at some common types of mobile broadband networks, including 3G, 4G, and WiMAX.
11.3.3 Internet Services Facts

There are many options for connecting computers to the Internet through an ISP. Each method
has its advantages and disadvantages, as discussed in the following table:
Method Description
Public Switched Telephone Network (PSTN) The PSTN uses a single POTS phone line with
a modem.
• Dial-up uses a single 64 Kbps channel.
• Common data transfer rates include 14.4 Kbps, 28.8 Kbps, 33.6 Kbps, and 56 Kbps.
• Dial-up offers a minimum level of network connectivity for a low investment. It is
available virtually anywhere that regular voice grade communications are available.
• Computers dial an access server at the ISP. You must configure the system with the ISP
server's phone number, along with a username and password to log on.
• The phone line cannot be used for voice and for the Internet simultaneously.
Digital Subscriber Line (DSL) DSL offers digital communications over existing POTS lines.
• Data is sent using multiplexed channels over existing telephone wiring.
• Implementation requires a DSL router or a single DSL network interface connected to the
phone line.
• DSL service is not available everywhere; the location must be within a fixed distance of
network switching equipment.
There are several variations of DSL (collectively referred to as xDSL):
• Asymmetrical DSL (ADSL) provides different download and upload speeds.
• ADSL allows regular analog phone calls and digital access on the same line at the same
time. Splitters are required to separate the analog signals from the digital signals.
• ADSL works well for regular Internet access (browsing), but is not the best choice if you
need to host Internet services (e.g., maintaining your own website).
• Symmetrical DSL (SDSL) provides equal download and upload speeds.
• The entire line is used for data; simultaneous voice and data is not supported.
• Splitters are not required, because voice traffic does not exist on the line.
• This is a viable option for organizations that wish to host Internet services (like a web
server).
• Very high DSL (VDSL or VHDSL) is similar to asymmetrical DSL but has higher speeds.
• Both voice and digital data are supported on the same line at the same time.
• Splitters are required to separate voice signals from digital data signals.
Cable Companies that provide cable television access typically offer Internet access. Existing
cable TV lines provide bandwidth for Internet access, in addition to cable TV stations.
• Cable Internet uses the Data Over Cable Service Interface Specification (DOCSIS) that
allows data signals to be sent on existing cable TV infrastructures. DOCSIS specifies channel
widths, modulation techniques, and how core components of the network communicate.
• Cable modems modulate and demodulate the data signals carried on multiple channels of the cable line.
• Speeds are usually much faster than those provided by DSL, but the bandwidth is shared
between all users within the same area (neighborhood). Actual speeds may be much less than the
maximum.
Satellite Satellite provides Internet access by using signals transmitted to and received from
orbiting satellites.
• Satellite service providers offer nearly 100% global network coverage, making local
network infrastructure unnecessary. Satellite service is usually available when other Internet
access technologies are not.
• A local portable transmitter with an antenna (dish), along with direct line of sight (no
obstructions), is required.
• Satellite reception is subject to atmospheric and weather conditions. Fog, rain, or snow
can disrupt service.
Cellular Cellular networking uses a digital mobile phone network for Internet access. Two
main communication technologies are used with mobile networks:
• Global System for Mobile Communications (GSM) was created in Europe and is used by
the majority of the world's mobile service providers. GSM uses time-division multiple access
(TDMA) technology to allow multiple connections on the same frequency.
• Code Division Multiple Access (CDMA) is used by the majority of mobile service
providers within the United States. It enables multiple connections on the same frequency. With
CDMA, each call is encoded with a unique key and then transmitted simultaneously. The unique
keys are then used to extract each call from the transmission.
Many mobile devices, like smart phones and tablets, can be purchased with mobile data
technology integrated. A cellular adapter can be installed on a notebook computer to enable
cellular Internet access. Cellular networking is a truly mobile solution; the mobile device can be
in motion and still have Internet access. The user does not have to manually reconnect the device
as it moves from cell tower to cell tower. However, the faster the device is moving, the less
bandwidth that is available.
Cellular Internet access is limited to areas with cell service coverage. Coverage is dictated by the
provider's network. Some areas will have weak coverage or no coverage at all.
Cellular networks used for voice and data include the following types:
• 2G (second generation) networks were the first to offer digital data services. 2G data
speeds are slow (14.4 Kbps) and are used mainly for text messaging, not Internet connectivity.
2.5G supports speeds up to 144 Kbps.
• EDGE (also called 2.75G) networks are an intermediary between 2G and 3G networks.
EDGE is the first cellular technology to be truly Internet compatible, with speeds of 400–1,000
Kbps.
• 3G offers simultaneous voice and data. Minimum speeds for stationary users are quoted
at 2 Mbps or higher. The following extensions enhance 3G networks:
• HSPA+ uses multiple-input and multiple-output (MIMO, sometimes called smart antenna
technology), and significantly increases data throughput and link range without additional bandwidth or
increased transmit power.
• Long Term Evolution (LTE) and LTE-Advanced increase downlink/uplink speeds to
100/50 Mbps and 1Gbps/500Mbps, respectively.
• 4G is available with minimum speeds around 3–8 Mbps, with over 100 Mbps possible.
4G:
• Uses MIMO.
• Is not compatible with 3G; 4G requires a complete retrofit on the part of service providers
and new equipment for the consumer.
• Utilizes Worldwide Interoperability for Microwave Access (WiMAX). WiMAX delivers
high-speed Internet service (up to 1 Gbps for stationary users) to large geographical areas.
Wireless Wireless Internet access is frequently available at local businesses, hotels, airports,
libraries, and mass transit. Additionally, many city and residential areas have coverage from a
wireless Internet provider.
• Some providers offer a nationwide network of wireless access points in public locations
like airports.
• Wireless networks in downtown areas allow limited roaming (moving) within the area of
coverage. However, dead spots might limit access.
• Wireless networks in residential areas are best suited for stationary clients.
Broadband over power line (BPL) BPL is a system that transmits two-way data over the existing
electrical distribution wiring. This service could be enabled within a single building or provided
throughout a metropolitan area. BPL avoids the expense of a dedicated network of wires for data
communication. Multiplexing is used to divide the electrical wiring into multiple channels used
for data transmissions and electrical power delivery.
Another version of BPL is used within a home to interconnect home computers and networking
peripherals. The electrical connections in a home serve as a LAN to home computers and other
networking devices that have an Ethernet port, like home entertainment devices. Configurations
for this version of BPL typically include the following:
• An Ethernet cable connected to the computer (or peripheral) and a powerline adapter.
• Powerline adapters plugged into power outlets throughout the facility.
• An Ethernet connection established using the existing electrical wiring as the network
medium.
Integrated Services Digital Network (ISDN) ISDN offers digital communications over
existing POTS lines or T1 lines.
• ISDN is more common in Europe than in the United States.
• The transmission medium is divided into channels for digital data.
• Subscribers must be within a certain distance of the phone company equipment, although
this distance can be extended with repeaters.
• Phone calls use digital ISDN phones or analog phones connected to a converter.
There are two main implementations of ISDN:
• ISDN BRI (basic rate) provides two 64 Kbps data channels and one 16 Kbps control
channel. BRI uses 4 wires on the existing POTS installation. With ISDN BRI, you can use one
channel for voice and one channel for data, or both channels for different voice calls. Depending
on the implementation, you can also bond the B channels to use them together for faster data
speeds.
• ISDN PRI (primary rate) provides 23 64 Kbps data channels and one 64 Kbps control
channel on a T1 line (or 30 64 Kbps data channels and one 64 Kbps control channel on an E1
line).

11.4 Remote Access


As you study this section, answer the following questions:
• What functions are performed by PPP for remote access connections?
• How does PPPoE differ from PPP?
• Why is proxy ARP necessary for dialup remote access clients?
• How does EAP differ from CHAP or MS-CHAP?
• What is the difference between authentication and authorization?
• What is an advantage of using RADIUS or TACACS+ in your remote access solution?
• How does RADIUS differ from TACACS+?
After finishing this section, you should be able to complete the following tasks:
• Create and configure a remote access connection.
• Configure a server for remote access connections.
• Configure a RADIUS solution to provide AAA for remote access.

11.4.1 Remote Access

Remote access allows a host to connect to a server, or even a private network, and access
resources as if they were connected locally to the LAN. Remote access connections are typically
used by business users to connect to the office from home or while traveling.
A remote access connection requires some type of a physical connection between the devices. In
this case, we have a remote client that wants to access an office network.
PSTN

One way to do this is through the public switched telephone network (PSTN) using modems to
connect to a special server called a remote access server. This option was widely used at one
time, but is rarely used now because of the slow connection speeds.
Internet Connectivity

Another option for physical connectivity would be to use the Internet. Most users and businesses
connect to the Internet through an ISP using DSL, cable, or some other kind of WAN connection.
In this case, the client computer connects through the ISP to gain access to the Internet. The
Internet is the remote network, much like the office network in the first scenario.
When you make a connection from the remote client to the remote access server, you first
establish the physical connection.
PPP/PPPoE

Then you need to negotiate a data link layer connection. One protocol that's used to establish the
data link connection is called PPP, or Point-to-Point Protocol. The job of PPP is to identify the
upper layer protocols that the devices will use to communicate.
When the connection is first established, the devices will negotiate and decide together what
protocols they will use to communicate. They might also decide to use encryption or
compression. During this process, the client device is also assigned an IP address so that it can
communicate on the network. The devices also negotiate what type of authentication method to
use.
PPP was typically used in dial-up connections. If you are using a modern broadband connection,
you're using an always-on Ethernet connection to the ISP. In this scenario, we use PPP over
Ethernet, or PPPoE. If we were to look at the protocol stack, we would have IP at the network
layer. At the data link layer, we would use PPP to negotiate the connection parameters and then
encapsulate the PPP frames inside Ethernet frames. Those Ethernet frames would be sent from one device to the other device.
This device would then take the Ethernet frames and use the PPP information within those frames
to identify the protocols to use, such as IP, authentication, and encryption.
With a normal Ethernet connection without PPP, the connection is simply established between
two devices. However, using PPP over Ethernet allows the devices to do additional things such
as authenticate and control other parameters of the connection. These tasks typically can't be
done on a normal Ethernet connection.
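As a hedged illustration only (not part of this lesson's demonstrations), a Cisco router acting as a PPPoE client toward an ISP is commonly configured along these lines; the interface names, ISP username, and password are assumptions, and the exact commands vary by platform and IOS version:

interface GigabitEthernet0/1
 no ip address
 pppoe enable group global
 pppoe-client dial-pool-number 1

interface Dialer1
 mtu 1492
 ip address negotiated
 encapsulation ppp
 dialer pool 1
 ppp authentication chap callin
 ppp chap hostname user@isp.example
 ppp chap password IspSecret

The lowered MTU of 1492 leaves room for the 8 bytes of PPPoE overhead inside a standard 1500-byte Ethernet frame.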
The process of communicating over a DSL link is actually a little more complicated than what
I've illustrated here. With DSL, I have a computer which runs IP and uses PPP. It encapsulates
PPP into PPP over Ethernet. It then uses Ethernet to send those frames to my DSL router.
The DSL router actually runs Ethernet on top of ATM, which is a wide area networking protocol.
It also uses another protocol called ADSL, which represents the DSL signal. The signal from my
DSL device is sent to a central switching office close to the ISP. The ADSL information is only
good for local loop communications and not for wide area networking. So ADSL may be
translated by another device into a protocol called SDH or SONET for transmission through a
wide area network. SDH is converted back to ATM, which is converted back to Ethernet, in
which PPPoE is extracted to give me PPP, and then finally IP. This rather complex method
is often called PPP over Ethernet over ATM, ATM being a transport protocol that gets me through
a wide area network.
You can see from these examples that devices don't simply communicate using IP, but rather they
use PPP for negotiating communication parameters, and they use various lower layer protocols
for transmitting the data through the network.
One of the things that happen during the connection negotiation is that the remote client is
assigned an IP address. If you're using dial-up, a modem is used to establish a connection. Once
the connection is established, this computer looks as if it is part of this network. Therefore, it is
given an IP address which is typically on a subnet on the remote network. In this case, the remote
access server is connected to a specific subnet. One approach is to assign the remote client an IP
address that comes from this subnet. In fact, you may have a DHCP server on the subnet whose
job it is to assign IP addresses to the remote access clients. If this were the 1.0.0.0 network, this
computer may be assigned the address of 1.0.0.12 once that connection is established.
One problem with this connection is that I am actually taking a dial-up connection and moving
on to a local area network, typically an Ethernet network. So the remote computer gets an address
for a subnet that it really is not connected to. It is instead connected through the remote access
server which is acting as a router.
Proxy ARP

Suppose a device on the private network needs to communicate with the remote access client. It
thinks the remote client is on the same subnet and will use ARP to discover its MAC address.
But this presents a couple of different problems. First, the device is not really on the same
segment, so those ARP requests will not be passed through the router (which is the remote access
server) to the device. Second, the connection is a dial-up connection, not Ethernet. The modem
does not have a MAC address. Even if an ARP request were able to make it through the remote
access server, the remote modem could not respond with a MAC address.
In this case, the remote access server uses a protocol called proxy ARP. When a device on this
network tries to communicate with the remote device that has the address 1.0.0.12, the remote
access server identifies itself as the remote device and responds with its own MAC address. With
proxy ARP, frames are sent from devices on the network to the remote access server. The remote
access server then performs layer three bridging, where it converts frames into a format that can
be sent through the dial-up connection to the remote device.
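On Cisco routers, proxy ARP is an interface-level setting that is enabled by default on most Ethernet interfaces. As a minimal sketch (the interface name is assumed for illustration), it can be toggled explicitly like this:

interface GigabitEthernet0/0
 ip proxy-arp
! use "no ip proxy-arp" to turn it off on interfaces where it is not wanted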
For broadband connections such as this, this problem doesn't exist. The connection itself is
treated as a separate network. The device may be assigned a public IP address, or it may use NAT
to translate private addresses into public IP addresses.
This negotiation process is also used to determine an authentication protocol to be used.
However, PPP does not actually perform the authentication. It simply helps the devices choose
which authentication method that they will use. Once the connection is made, then the chosen
authentication method is invoked and the devices must authenticate.
CHAP, MS-CHAP, EAP

For a remote connection, several protocols can be used, including CHAP, MS-CHAP, or a
protocol called the Extensible Authentication Protocol (or EAP). Both CHAP and MS-CHAP use
a form of username and password, but the password itself is not sent between devices. EAP is
more secure. It can be configured to support multiple methods of authentication. For example,
EAP can be used to support usernames and passwords, along with other authentication mechanisms
such as smart cards or biometric devices.
After the physical connection is established, the communication parameters must be agreed upon,
then authentication can occur.
Authorization

The next step is authorization. Authorization identifies what resources the remote clients can
access on the remote network. In this example, I have a remote client that's connected through a
remote access server to a private network. Authorization identifies what this user or device can
access on the private network.
For instance, it may be configured so that remote clients can only access the remote access server
itself and nothing else. In this case, you would place all of your shared content for remote users
directly on the remote access server itself. You can also allow remote access clients to connect
into the private network. From there you may use authorization to restrict remote client access to
only certain devices.
Remote Access Policies

Both authentication and authorization are usually controlled by remote access policies. Remote
access policies identify users who can connect, and then whether the connection is allowed or
denied. For example, you might allow only certain users to use remote access. Then you might
restrict those users based on the time of the day. For example, you may only allow remote access
during working hours. You could also restrict remote access based on the type of connection that
is used. For example, dial-up users could be granted a lower level of access, while users who
connect using a different method are granted a higher level of access. When the remote user
connects, the remote access server checks the policies to find out what type of restrictions are to
be applied. Then it either allows or denies access based on the information in the policies.
These policies can be defined on the remote access server itself. In this case, you would go to the
server and define the policies that apply to your remote access users. In some cases, you can
configure your remote access server to connect to a separate database that contains your user
account information. For example, on a Microsoft network, your remote access server would use
Active Directory to identify users who can connect and to match users with passwords.
Suppose now that the number of remote access clients has grown such that a single remote access
server can no longer support all of the remote clients. One solution would be to install a second
remote access server on my network. However, because my policies must be defined on each
server, I would have to take the policies from the first server and duplicate them on the second server. As the
number of remote access servers grows, the work required to maintain all of these policies also
grows.
AAA Server/Accounting

One solution to this is to use an AAA server. AAA stands for three parts of this remote access
process; Authentication, Authorization and Accounting. Accounting is the process of keeping
track of what was done during a connection. For instance, you might need to keep track of how
long clients were connected so you can bill the department they work for based on their
connection time.
Accounting is the process of keeping track of the connection characteristics. With an AAA server,
policies are defined once on the AAA server instead of on each individual remote access server.
When an authentication request is received by any of these servers, the authentication request is
forwarded to the AAA server where the credentials and the policies are consulted to identify
whether the access should be allowed or denied.
RADIUS

There are two common solutions for providing this type of authentication mechanism. One is
called RADIUS. With a RADIUS server, authentication and authorization are combined into a
single device, but accounting can be moved to a different device. (Or you can have a single
device which provides all three functions.) RADIUS uses UDP as its transport protocol.
RADIUS is used a lot in Microsoft implementations. So a Microsoft remote access solution
would likely use a RADIUS server for authentication, authorization, and accounting.
TACACS/TACACS+

Another solution is called TACACS (and its updated version called TACACS+). TACACS+
separates all three functions into different services. You can combine all three services on a single
physical server that provides authentication, authorization, and accounting. You can also split
those services across several different servers. So you may have one server that
performs authentication, a different server that is used for authorization, and a third that is used
for accounting.
Another key difference between TACACS+ and RADIUS is that TACACS+ uses TCP instead of UDP. In
addition, TACACS+ encrypts the entire communication session between the remote access
servers and the AAA servers. It also supports more protocols than just IP. TACACS+ was
developed by Cisco and is used by many other vendors.
When implementing this type of a solution, the server that provides the AAA services is called
the server. For instance, a RADIUS server is the device that performs authentication,
authorization, and accounting. The remote access servers are called clients in this process. So
this remote access server would actually be a RADIUS client that sends the information to the
RADIUS server. The remote access clients are simply called remote access clients. They connect
to the remote access server; the remote access server forwards the authentication information to
the RADIUS server, which provides the authentication.
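As a hedged sketch (the server address and shared key are placeholders, and newer IOS releases prefer the radius server name-block syntax), pointing a Cisco remote access device at a RADIUS server for all three AAA functions might look like this:

aaa new-model
radius-server host 10.0.0.50 key RadiusSharedSecret
aaa authentication ppp default group radius local
aaa authorization network default group radius
aaa accounting network default start-stop group radius

Here the trailing local keyword on the authentication line provides a fallback to the local username database if the RADIUS server is unreachable.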
Review

Before we end, let's review the process of a remote client connecting to a remote access server.
I'm going to take you through this process as if it were a dial-up connection. The first step is to
establish a physical connection. Your modem would dial the number for the modem on the remote
access server, which would answer the call.
The next step in the process is to establish a data link connection and to negotiate upper layer
protocols that will be used throughout the conversation. It will also use PPP to negotiate
encryption and the authentication protocol that you'll use. Then you'll get an IP address assigned
to the remote client.
Once the data link layer is established and you have negotiated the upper layer protocols that you
will be using, you then move on to authentication. Once the authentication process is done, then
you move on to the authorization process, which determines whether the connection is allowed
and (if it is) identifies the resources you have been granted access to.
Once this happens, your connection is complete. You now have a valid connection to the remote
access server. At this point, the process known as accounting is used to track your usage. For
example, it may keep track of your connection status and how long you've been connected.
Summary

That's it for the lesson. In this lesson, we discussed remote access connections. Remote access
connections allow a client to establish a wide area connection to a remote server or a remote
network to gain access to resources on that server or that network. The process involves creating
a physical connection, establishing a datalink connection where upper layer protocols are
negotiated, and then authenticating and authorizing the user to the network. Accounting keeps
track of what happens during the connection, and may be used for billing purposes.

11.4.2 Remote Access Facts

Remote access allows a host to connect remotely to a private server or a network to access
resources. Remote access connections are typically used to connect to an office network, but they
can also describe how a connection with an Internet Service Provider (ISP) is established. A
remote access server is used for remote access connections.
The following process is used to establish a remote access connection:
Process Description
Physical connection Clients must first establish a physical connection to the remote access
server.
• When using a broadband connection, you connect the device to the network and turn it
on.
• When using a dial-up connection, the device dials the number of the remote access server,
and the remote access server answers the incoming call.
Connection parameters After the physical connection is set up, a Data Link layer connection
is established. During this phase, additional parameters that will be used during the connection
are decided. For example, the devices identify the upper layer protocols that they will use during
the connection. Protocols negotiated at this phase control the following parameters:
• Upper layer protocol suite (such as IP)
• Network layer addressing
• Compression (if any)
• Encryption (if any)
• Authentication method
Two common protocols are used during this phase.
• The Point-to-Point Protocol (PPP) is used for dial-up connections.
• PPP over Ethernet (PPPoE) is used for broadband connections, such as DSL, cable, or
fiber optic running Ethernet. PPPoE is a modification of PPP that is able to negotiate additional
parameters that are not present on regular Ethernet networks. ISPs usually implement PPPoE to
control and monitor Internet access over broadband links.
During this phase, the remote client is assigned an IP address. The IP address can be assigned
from a range configured on the remote access server or even from a DHCP server on the private
network.
• If the IP address for the remote client is on the same subnet as the private network, the
remote access server uses a process called proxy ARP to forward packets from the private
network to the remote access client. With proxy ARP, the MAC address of the remote access
server is associated with the IP address of the remote client. The remote access server receives
the frames addressed to the remote access client and forwards the packets to the remote access
client.
• If the IP address for the remote client is on a different subnet (such as a special subnet
defined for remote access clients), then the remote access server acts as a router, sending packets
between the remote client and the public network. In this configuration, the remote access server
must be configured with routing enabled.
Authentication: Authentication is the process of proving identity. The authentication
protocol is negotiated during the connection parameter phase. After devices agree on the
authentication protocol to use, the logon credentials are exchanged and logon is allowed or
denied. Several common protocols are used for remote access authentication.
• Challenge Handshake Authentication Protocol (CHAP)
• Microsoft Challenge Handshake Authentication Protocol (MS-CHAP)
• Extensible Authentication Protocol (EAP)
Both CHAP and MS-CHAP are used for username and password authentication. EAP allows
authentication using a variety of methods, including passwords, certificates, and smart cards.
Authorization: Authorization is the process of identifying the resources that a user can
access over the remote access connection. Authorization can restrict access based on the
following parameters:
• Time of day
• Type of connection (e.g., PPP or PPPoE, wired or wireless)
• Location of the resource (e.g., restrict access to specific servers)
Accounting: Accounting is an activity that tracks or logs the use of the remote access
connection. Accounting is often used by ISPs to bill for services based on time spent or the
amount of data downloaded.
It's important to know the following information about remote access:
• Remote Access Service (RAS) is used by a remote access server to control access for
remote access clients. Clients might be granted access to resources on only the remote access
server, or they might be allowed to access resources on other hosts on the private network.
• Both the remote access server and the client computer must be configured to use or accept
the same connection parameters. During the connection phase, the devices negotiate the protocols
that will be used. If the allowed protocols do not match, the connection will be refused.
• Remote access policies identify allowed users and other required connection parameters.
• In a small implementation, user accounts and remote access policies are defined on the
remote access server.
• When using a directory service, you can configure the remote access server to look up
user account information on the directory service server.
• If you have multiple remote access servers, you must define user accounts and policies on
each remote access server.
• Use an AAA server to centralize authentication, authorization, and accounting for multiple
remote access servers. Connection requests from remote clients are received by the remote access
server and are forwarded to the AAA server to be approved or denied. Policies defined on the
AAA server apply to all clients connected to all remote access servers.
• There are two commonly used AAA server solutions (a client-side configuration sketch follows this list):
Remote Authentication Dial-In User Service (RADIUS): RADIUS is used by Microsoft servers for centralized remote access administration. RADIUS:
• Combines authentication and authorization using policies to grant access.
• Uses UDP.
• Encrypts only the password.
• Often uses vendor-specific extensions. RADIUS solutions from different vendors might
not be compatible.
When implementing a RADIUS solution, configure a single server as a RADIUS server and
configure all remote access servers as RADIUS clients.
Terminal Access Controller Access-Control System Plus (TACACS+): TACACS+ was originally developed by Cisco for centralized remote access administration. TACACS+:
• Provides three protocols, one each for authentication, authorization, and accounting. This
allows each service to be provided by a different server.
• Uses TCP port 49.
• Encrypts the entire packet contents.
• Supports more protocol suites than RADIUS.
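For comparison, here is a minimal, illustrative sketch of what the client side of an AAA deployment could look like if the remote access device were a Cisco router rather than the Windows servers used in the demonstrations that follow. The server address 10.0.0.61 is borrowed from the later demo, and MySharedSecret is a placeholder; this is classic IOS syntax, not a definitive configuration.

! Enable AAA on the router acting as the remote access server (the RADIUS/TACACS+ client)
aaa new-model
! RADIUS: authentication and authorization are combined and carried over UDP
radius-server host 10.0.0.61 key MySharedSecret
aaa authentication ppp default group radius local
aaa authorization network default group radius
aaa accounting network default start-stop group radius
! TACACS+ alternative: separate authentication, authorization, and accounting over TCP port 49
! tacacs-server host 10.0.0.61 key MySharedSecret
! aaa authentication login default group tacacs+ local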
11.4.3 Configuring a Remote Access Server

In this demonstration, we're going to configure a remote access server. The first thing we need to
do is add the role. So, I'll click Add Roles and Features and then click Next. It is going to be a role-
based or feature-based installation, so I'll click Next, and on this server, Next. I want to go ahead
and check Remote Access. I'll click Next. We don't need any features, so I'll click Next again,
and then we can go in and add in the particular role or services we're interested in.
Installing Remote Access

Now, since I want to install remote access and set up VPN, I want to put a check mark in DirectAccess
and VPN. I'm prompted to add the tools used to configure that, so I'm going to add those
features, and then I'm going to go ahead and check Routing, because VPN itself allows you to connect
to the VPN server, but I actually want that VPN server to relay that communication to the internal
network, so I need routing for that.
I'm going to click Next and then let it install. After this is done, I'll click Close.
You can see that we have a notification up here and if I click on Notifications, it prompts me to
run the Getting Started Wizard.
Setting Up VPN

One of the new things with Windows Server 2012 (and I'm actually working on 2012 R2) is that
it has a streamlined interface for setting up VPN and DirectAccess, which is intended to be a VPN
replacement. It's only available with Microsoft clients and it uses IPv6. I'm actually going to go in the
traditional way but I'll show you the Getting Started Wizard.
It's actually just going to funnel me back into routing and remote access, but I will show you the
wizard. I'm also going to minimize Server Manager, because often the wizard will pop-up behind
server manager and you don't realize that it's running. The wizard pops up and it will walk me
through deploying direct access or both direct access and VPN. If I actually click Deploy VPN
only, it's going to jump me into the routing and remote access console which is the way we
traditionally set up VPN.
Now, to set up VPN, you go into Routing and Remote Access. You right-click the server, in my
case, the server is actually named Router, and then you click Configure and Enable Routing and
Remote Access. It says, "Welcome to the Wizard." I click Next, and what I want to do is set up
remote access. Historically, we sometimes would use dial-up or VPN but now nobody is really
using dial-up via modems. Everybody is using VPN, so even though it says dial-up or VPN, I'm
really only interested in setting up VPN.
I'll go ahead and click Next and then tell it that I want to set up VPN, although dial-up is still
supported. I'll click Next, and then it says, "Hey, for VPN, we need a network adapter that's
connected to the Internet. Which of your network adapters is connected to the Internet?" I've
named one of my network adapters 'Internet,' and you can see I have assigned it a static IP address
theoretically from the Internet of 131.1.1.1. Down here, I can leave this check mark if I want to
set up packet filters and have routing remote access act as a rudimentary firewall.
In our case, we're not interested in that, we just want to set up VPN, so I'm going to go ahead and
uncheck Enable security on the selected interface by setting up packet filters. If I want to protect
it, I can use the firewall for that. I'm going to go ahead and click Next. It says, "All right. You've
identified which adapter is connected to the Internet. Which one of the adapters is connected to
the internal network that we're going to give the VPN clients access to?"
We're going to be getting them into the 192.168.1.0 network, so I'm going to use this adapter
which has an IP address of 192.168.1.1. Now I'll click Next. It now says, "How are the remote
access clients going to get an IP address on the internal network?" In general, the two ways of
getting an IP address are automatically from a DHCP server or statically by typing one in. This
is no different. I can do it automatically using my remote access server as a DHCP relay agent to
grab an IP address from the DHCP server and give it to the remote access clients or I can specify
a range of IP addresses.
And that's what we're going to do. I'm going to go ahead and hit Next, and then it says, "All right,
you're using a specific range of IP addresses, can you please provide that?" You need to make
sure that you give it a range of IP addresses that's going to be valid for the internal network, but
is not going to overlap the DHCP scope. In our case, I'm going to hit New and I'm going to
specify 192.168.1.101 and that will run through 192.168.1.150 so it's going to be the same range
that's good for the internal network but not any addresses that the DHCP server is giving out
because then we'll have address conflicts.
Now, if DHCP were also giving out 101 through 150, then RRAS might hand an address to an external
client coming in over remote access while DHCP hands that same address to an internal client, and then we'll have IP conflicts.
In this case, I know my DHCP server is not giving out 101 through 150 so I can safely give that
range to RRAS so I'm going to go ahead and click OK and then Next. Then it says, "How are
you going to authenticate these particular users?" I can use routing and remote access to
authenticate the connection request, or I can use RADIUS, which acts as a third-party authentication
service for remote access.
RADIUS would take that information and relay it to some other database. In our case, we're not
going to use RADIUS; we're just going to let the RRAS server authenticate the clients, so I'm
going to go ahead and click Next and then Finish. Now it says, "If you are going to relay DHCP
messages and use DHCP, you need to set up the DHCP relay agent." I agree with that, so I
acknowledge the message. It's going to start the Routing and Remote Access service, and my VPN server
is essentially set up.
If I expand this I can see all the network interfaces that are in play. I can see the port and you can
see that it's actually created a bunch of ports for people coming in over VPN and if I did have
any remote access clients that were connected, I would see them listed here. If I expand IPv4,
you can see again the different interfaces we have, any routes I have, there's my DHCP relay
agent.
Customizing Properties

If I want to go in and tweak this after the fact, I would go into the properties of the server.
I would right-click and go to Properties and this check box down here is what's allowing this to
act as a VPN server. You can see it also has selected IPv4 router, so if I uncheck that but left the
remote access server checked, the clients could just connect to the VPN server, but none of their
traffic would be relayed to the internal network. I actually want both of those things checked. On
the Security tab, it says that I'm using Windows authentication and I do want to go ahead and
click Authentication Methods.
This is the list of remote access authentication protocols that I'm supporting. EAP is
a very secure protocol that's used by NAP, or Network Access Protection, if I want to check the
health of machines coming in, but it may not be compatible with all the clients, so I'm also going to
leave MS-CHAPv2 checked. I'm not going to check CHAP, which is a little bit older protocol that is
sometimes used by non-Microsoft clients. PAP is a terrible choice because it sends the username
and password in plain text.
I'm just going to leave it at EAP and MS-CHAPv2 and click OK. I can go to the IPv4 tab, and you can
see that range we put in; it's telling you how the VPN clients are going to obtain an IP
address, and there is the static pool that they will get the IP address from.
We don't really have anything going on with IPv6; that tab is where I would configure this router to advertise the
prefix of the network. We're just interested in VPN, so there's nothing going on in that tab. IKEv2
we could set up, and I'm actually just going to use PPTP so without further configuration, that's
what it should use. IKEv2 uses machine certificates to use IPsec and it supports VPN reconnect
which means if the Internet connection drops from the client and then comes back, it would
automatically reconnect if I was connected to a VPN. I'm not going to do any of that.
PPP is used for dial-up connections. We're using VPN so none of these selections are available,
but if we were using dial-up, they would be. And then logging says exactly what types of events
we're going to log.
I'm going to click OK and at this point my routing remote access is set up. If I just left it at this,
my clients would not be able to connect, because one thing that happens when you set up VPN
on a Microsoft server like this is that it relies on policies in order to decide whether to accept or
reject the connection.
You used to be able to come in here and click on Policies, but you can see if you do that, it says
"Wait a minute, the policies have moved to the network policy server tool which we can get out
of Server Manager." That's the second and last step that I need to do to set up Microsoft VPN.
I'm going to close out of routing and remote access, go back in the Server Manager and under
Tools I'm going to go into Network Policy Server.
VPN Policies

There are a couple of different kinds of policies for VPN. One would be a connection request policy, and
that's set up in here, where Windows authentication is enabled for all users. I can come in, and
there is a policy that says they can connect 24 hours a day, 7 days a week. There has to be a
connection request policy that allows them to connect. Think of the connection request policy as 'I
request the connection and I get in.' That in itself is not enough, because the connection request
policy that allows me to connect won't necessarily let me through the VPN. After the
connection request policy is processed and says, "Okay, they can connect," it goes to a network
policy to see if they're allowed on the network.
Now, you can see that there are two default policies that basically say connections are going to
be denied. When you're dealing with the policies in NPS, they're always processed from the top
down. First, a connection would hit this policy, and if for some reason it got through that, it would hit this
policy. We need some type of a policy that says, "Okay, you're allowed to connect using VPN if
you meet our criteria." I'm going to right-click on Network Policies and make a new policy, so
that I can do this. I'm going to name my policy VPN OK.
They're going to be coming in using VPN, so I'm going to select VPN from the drop-down combo
list, although I could specify however they're going to get in. I'm going to choose VPN and hit
Next. Then it says, "How do we know what's the criteria for which this policy will be a match?"
I'm going to go ahead and add a condition that says, "It's a match if they belong to one of the
selected groups." I can use one of the selected Windows groups, so I'll use Windows Groups and
click Add.
It says, "What group?" I add a group and I say, "Well, if they're a member of administrators."
Normally you would use an Active Directory group but in our case, this machine is a standalone
machine so it's going to be a group that's pulling from the security accounts manager in Windows,
the local groups. I'm going to click OK. So if they are a member of administrators, they're going
to be granted access. Of course I could deny it if I was trying to create a policy that would keep
them out.
Authentication Methods

Now it says, "What authentication methods are you going to support?" This matches that list we
saw back on the VPN server. If I wanted to support EAP, I could specify that here. I'm going to
support MS-CHAPv2. I don't need CHAP, that's not in use, so I'm going to pick MS-CHAPv2,
that's the one we're going to be using and then click Next. Now I can configure some constraints
if I want to. I could have them be disconnected if they were idle for a specific time, so I can have
an idle timeout. I could set the maximum length of time they could come in over VPN, so if
they're in for more than an hour, I can disconnect them.
I could also do date and time restrictions. We're not going to set any constraints, we're letting
them come in regardless of the date and time. So, I'll click Next.
In here, I can set up any settings I want. If I want to have an interface with network access
protection and make sure that they have the firewall on and they have Windows updates enabled
I could do that, I could set up IP filters or encryption. There are lots of different things you can
do in here.
We're just interested in a straightforward policy that will allow them to come in over VPN, so
I'm not going to configure any extra settings. I'll click Next, and now it says, "Okay, great. Looks
like you've got your policy." I'll go ahead and finish. You can see that it comes in as number one
and as long as the user matches that policy, meaning they are part of the administrator's group,
they're going to be allowed in over VPN.
Summary

In this demonstration we set up a remote access server. We configured routing and remote access
to allow users to come in over VPN. Make sure that you know where the users are getting their
IP address from by either setting up the DHCP relay agent or providing a pool of addresses. Also
make sure you go into the network policy server and create a policy that will allow them to
connect. If you do all that, your users will be able to get in using remote access.
11.4.4 Configuring a RADIUS Solution

In this demonstration, we're going to look at configuring a radius server to support a remote
access solution. Currently, we have routing and road access configured on this Windows server,
and it's providing VPN connections for remote clients. It also has as you can see here, an old
modem connected just in case somebody is still living in the dark ages and needs to dial into this
server.
Configuring a RADIUS Server

Currently, I have several of these servers set up throughout my network. Because of this, each
one has to have its own unique set of access policies to define and control when and how users can
make these VPN or dial-up connections, and that's not necessarily a good thing. What we want to
do is install and configure a RADIUS server in our network, and configure all of these remote
access servers so that they use that one RADIUS server for authentication, for authorization, and for
accounting. That way, I can define one single set of network policies on the RADIUS server and
then have them automatically applied to each one of my routing and remote access servers. If I
need to make a change, I make the change in one place, and it's automatically applied to all of my
remote access servers, instead of having to make that change manually on each and every remote
access server.
As you can see, this server has two network interfaces installed. This one is connected to the
public network that has internet access. This interface is a private network segment. Any requests
going out to the Internet have to go through this interface. In addition, this interface is the one that
is configured to accept remote access connections coming in from the Internet, which will then
be forwarded onto this internal network segment right here.
Configuring a RADIUS Server in Windows

Let's go ahead and configure a RADIUS server that we can use for this remote access server. I'm
going to switch over to a different Windows server. As you can see, this server is called RadServer,
short for RADIUS server, and its interface is connected to the same public facing network
that my remote access server is. This server is a pretty basic server, there's not many services
installed, so it's an ideal candidate to become a RADIUS server. Let's go over here and click on
Manage, Add Roles and Features. We have to add the Network Policy Server role on this server in
order to make it a RADIUS server. I'm going to click Next, click Next, and choose a role-based
installation. It asks me which server I want to install the role on. We'll just use the current server.
I need to find Network Policy and Access Services.
We'll add the necessary remote administration tool features that are required to support that role.
Click 'next'. Next in the features screen. There's a screen here displaying interesting information
about how the network policy and access services role works. We have to specify which role
services we want. If we wanted to configure a full network access control, or network to access
protection deployment, we might want to install these other role services as well. For just
installing a radius server, all we need is the network policy server. Let's go ahead and restart the
server automatically after installation if needed. Can't remember if it's necessary for this role or
not. We'll find out. Click 'install', and we'll wait a minute while the network policy server role is
installed on this box.
All right, we've successfully installed the network policy server on this box. Let's hit 'close'. Now,
it's installed, but it's not doing anything. We need to configure it, so let's go over to tools. We'll
go to Network Policy Server. What we want to do is use Network Policy Server on this RADIUS server
to create a policy that will be applied to all of our remote access servers on the network. For
example, we could create a policy that restricts VPN access to a certain time window, maybe
work hours, from six in the morning till six at night. Let's go ahead and do that. We'll expand
Policies, and we'll click on Network Policies. As you can see, two policies are created by default; they
are enabled and they are set to deny access. Left without any changes, these policies would
basically deny access to everybody. We need to make a policy that enables access.
Let's right click over here. Click on 'new'. Let's name this policy remote access, and we will apply
it to remote access servers, our VPN and dial up servers. Next, now we need to add conditions.
Remember, we said that we want to restrict access to a certain time window during the day. We
go down to day and time restrictions, and click on it. Click on 'add'. We have to specify what that
time window is. We want to set that window from six in the morning, till six at night, Monday
through Friday. We want to permit access during this window. We'll click 'okay', click 'next'. We
want to grant access, if the client connections matches the conditions of this policy. In other
words, if the connection is made between six in the morning and six at night, we're going to grant
access, next.
We need to configure what authentication methods will be allowed with this policy. I want to
turn off MS-CHAP, which will require our users to use at least MS-CHAP version 2 (MS-CHAPv2). Click 'next',
we don't want to configure any constraints, click 'next'; however, we do want to make sure that
at least a basic level of encryption is being used. Notice there's an option here that allows no
encryption. That's not cool. We're going to turn that option off so we're at least using some form
of encryption. Click 'next', click 'finish', and now our policy has been defined. If a client system,
a remote client system accesses our remote access server, using a VPN connection for example,
between six in the morning and six at night, they'll be granted access. If on the other hand they
try to establish a connection outside of that time window, then they will be denied by these
policies down here.
Configuring the Windows Remote Access Server as a RADIUS Client

The next thing we need to do is add our remote access server as a radius client. To do that we
come up here, and expand radius clients and servers. Go to radius clients. There currently aren't
any because we just barely installed network policy server on the system. We need to add a new
radius client, and this can be confusing because we're adding a remote access server as a client.
Sometimes when folks hear the words RADIUS client, they think workstation, because that's what
we associate clients with. When you're working with RADIUS, that is not necessarily the case. In
fact, it's very likely that the RADIUS client is actually going to be another server, and that is the
case today. So we want to enable the RADIUS client. We need to give it a friendly name. Let's just
call it corpnet, because that's the name of the remote access server that we want to add. We need to
add the IP address of that server, which is 10.0.0.61.
We need to define a shared secret. The shared secret is used to authenticate the RADIUS client with
the RADIUS server. We could either generate it, or we can just create it manually. That is what I'm
going to do. It's important to remember what you type here, because you'll have to use it again
on the remote access server itself when you configure the RADIUS connection on it. The value
you supply here on the RADIUS server must match the value that you configure on the RADIUS client.
Click 'okay'. We now have our corpnet server, our remote access server, added to the Network Policy
Server as a RADIUS client. We need to go over to our remote access server and configure it to use
the RADIUS server for authentication.
To do that, let's go down to routing and remote access. We'll access the properties of corp server
up here. We'll go to the security tab, and notice we have two options here for authentication
provider, and for accounting provider. By default they're set to use Windows, but we don't want
to use Windows for authentication now. We want to use that radius server we just set up. I'm
going to hit the drop down list, and select radius authentication this time, and we'll click on
configure. We'll add, and then we need to add the IP address or DNS name of the radius server
that we just configured. Its IP address was 10.0.0.61, and we have to enter in that shared secret that
we configured so that this server can authenticate to the radius server. Okay, hit 'okay' again.
Authentication

At this point, our authentication service is going to be provided by the RADIUS server; however,
notice down here that accounting is still going to be provided by the Windows system and we
don't want that. We want to switch it over to the radius server as well. Select radius accounting,
and we'll hit configure, add, and then we'll enter in again the IP address of the radius server. You
could actually split accounting, authorization, and authentication between multiple servers if you
wish. That's the way radius works, or you could have them all on the same box which is what
we're doing here. We're using the same IP address because of that, and the same shared secret.
Hit 'okay', 'okay', and we'll hit 'okay' one more time to apply the change, and that's it.
Accounting

When a remote user establishes say a VPN connection, or heaven forbid a dial up connection, to
our remote access server here, what's going to happen is those authentication requests are going
to be passed over here to our RADIUS server, and the credentials will be checked to make sure that
they're valid. If they are, then the network policy we just defined right here will be applied.
The nice thing about this is I can now go and add all of my other remote access servers as radius
clients as well, and then, this network policy will be applied uniformly to every single remote
access server in my network. If I make one change here, it's automatically applied to all those
servers.
That's it for this demonstration. In this demo we talked about configuring a RADIUS server. We first
talked about the benefits of using RADIUS. We then configured a RADIUS server on a Windows
system, and then we configured a Windows remote access server to use that RADIUS server for
authentication and for accounting.
11.5.1 WAN Troubleshooting

In this lesson, we're going to spend some time discussing how to troubleshoot WAN links. If you're
going to use a WAN link to connect two different sites together, problems will arise at some
point, and you're going to have to be familiar with how to troubleshoot them. Before we get
going, I do need to point out that there's no way we can possibly cover every single possible
WAN-related problem in this lesson. Instead, what we're going to do is provide you with a
framework that you can use to troubleshoot the various problems that you're likely to run into.
Common Symptoms of a Malfunctioning WAN Link

Common symptoms of a malfunctioning WAN link could include many different things,
depending on how the link is configured and used. For example, if the link connects two sites
together, then a malfunctioning WAN link will result in a loss of connectivity to the site at the
end of the link.
On the other hand, if the link is used to connect to an ISP, then a malfunctioning WAN link will
result in a loss of Internet connectivity. The troubleshooting process you're going to use with
WAN links is basically the same one that you're going to use if you're troubleshooting a
traditional Ethernet LAN. The first thing we need to do is isolate the fault domain. Essentially
we're asking, "Where is this problem occurring?" Knowing where the problem is occurring really
helps you focus in your efforts and then zero in on what's causing the problem. One of the biggest
problems that I see when trying to troubleshoot problems, whether it's a WAN link or just a
standard LAN link, is that some administrators just start throwing fixes at the problem without
ever concretely identifying what's wrong first.
They're basically just hoping that one of these fixes that they're throwing at the problem is
actually going to rectify the situation. This approach may work from time to time, but my
experience has been that you usually end up causing more problems than you end up solving.
Before we dig into router configuration errors or malfunctions that may be causing problems, we
need to look at several other possible causes of WAN connectivity issues that should be checked
first.
Eliminate Obvious Problems

I call this checking the obvious.


For example, you need to make sure that your company's security policies aren't actually
blocking WAN communications. Usually these are in the form of either router or firewall ACLs.
In addition, you ought to check the contract that you have with your WAN service provider to
make sure that they aren't actually throttling your bandwidth. It's not uncommon for WAN service
providers to impose bandwidth or utilization caps that could actually be hampering
communications on your WAN link. In addition, you need to make sure that your WAN
infrastructure doesn't have any inherent limitations that could be disrupting WAN
communications. For example, if you're using say a satellite link connection for your WAN link,
then you're going to experience latency on that link.
That latency could disrupt time-sensitive data communications, such as voice over IP or video
streaming.
Isolate the Fault Domain: Ping Command

With that in mind, let's discuss WAN troubleshooting. A great first step in isolating the fault
domain is to use a simple ping test from one router to another router across the WAN link to see
whether or not IP packets can make it. The ping command tests connectivity between the two
different routers by sending small requests to each other, asking basically, "Hey, are you there?
Are you available?" If true, then the target system should respond back to the sending system
affirmatively. If it does respond affirmatively, then you know you have connectivity between the
two routers over the WAN link.
However, if it does not respond, then you know that there's something wrong somewhere. If you
don't get a ping response, then the problems could reside at layer one.
Interface Status Commands

It could reside at layer two, or layer three. Be aware that the troubleshooting process that you
need to use will depend upon what type of networking equipment you've implemented. For
example, on Cisco equipment, you can view the line status and protocol status for the interfaces
on both routers on both ends of the WAN link to narrow down which layer the problem most
likely resides at. To do this, you would use the show interfaces command. This command will
display a lot of information, but there's two pieces that we are particularly concerned with.
Layer One: Line Status

First of all, the status of the interface itself, up or down. This is called the line status. This is a
layer one function.

Layer Two: WAN Protocol Status

In addition, we're also concerned with the status of the WAN protocol on that interface, either up
or down. This is a layer two function.
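For example, the first line of show interfaces output reports both values at once. The output below is only an illustrative sketch (the interface name, hardware type, and address are placeholders), showing a link whose line status is up but whose protocol status is down:

Router1# show interfaces serial 0/0/0
Serial0/0/0 is up, line protocol is down
  Hardware is GT96K Serial
  Internet address is 10.10.10.1/30
  Encapsulation HDLC, loopback not set
  Keepalive set (10 sec)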
With that in mind, let's take a look at some of the common WAN link problems and their possible
causes. First of all, if the WAN link's line status is down and the protocol status is also down,
then we're most likely dealing with a layer one issue, so that's where you need to start
troubleshooting. For example, the interface itself on the router might have been shut down or
disabled. It's also possible that you have WAN link cables that have either gone bad or that have
been unplugged. Maybe they've been broken, or maybe they've been cut somehow. It's also
possible that the CSU/DSU on your end of the link has been misconfigured, or maybe it just plain
old stopped working. It's also possible that a source of interference is disrupting the signal on the
link. Maybe you have a powerful EMI emitter in your vicinity.
In any of these situations, you should see many interface errors occurring when you try to view
the statistics for each interface. The issue might also be with the WAN service provider, in which
case there's really not much you can do to troubleshoot it. Ask your service provider to test the
local loop to help narrow down the scope of the problem.
Smart Jack

One of the ways that you can do this is with a Smart Jack. The Smart Jack allows the service
provider to send a signal on the wire to test and then diagnose the connection up to the
demarcation point. If the tests sent to the Smart Jack are successful, then the service provider
knows that their side of the connection is good, and any problems you're having are your
responsibility. On the other hand, if the Smart Jack test fails, then they know that the problem is
on your side of the link and that you have to fix it.
There may be situations, however, when the line status on your router is up but the protocol status
is still down. For example, this could be caused by a mismatch in the data link layer encapsulation
protocol that's being used on the WAN link. Some common layer two point-to-point WAN
protocols include HDLC, PPP, as well as frame relay. It really doesn't matter which one you're
using. The important thing you need to remember though, is that you've got to use the same layer
two protocol on both ends of the WAN link. Another possible cause could be the authentication
method that you're using. For example, this type of error would occur if you've got PAP
authentication configured on one end of the WAN link and CHAP authentication configured on
the other. It could also be caused by simply configuring the wrong passwords. Again, the
configuration on both ends of the link has to match.
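As a hedged example, a matched PPP and CHAP configuration on one end of a Cisco WAN link might look like the following (the hostnames R1 and R2 and the password are placeholders); the router on the other end would mirror it with a username R1 entry and the same shared password:

R1(config)# username R2 password cisco123
R1(config)# interface serial 0/0/0
R1(config-if)# encapsulation ppp
R1(config-if)# ppp authentication chap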
A variation of this scenario occurs when the protocol status is down on one end, but it is up on
the other. This can happen if you're using HDLC as the layer two protocol on the WAN link, but
the keep-alive messages that are needed to keep the link active aren't being sent for some reason
between the two ends of the connection. Usually what happens is somebody accidentally disabled
the keep-alive messages on just one end of the WAN link. If you're going to be using the HDLC
protocol for your WAN link at layer two, then the routers on both ends of the link have to have
keep-alive messages enabled.
Be aware that if you're using PPP instead of HDLC for your layer two WAN protocol, that the
PPP protocol will actually automate the keep-alive configuration for you to prevent this very
problem from occurring. If the line status is up, and the protocol status is up, but your ping tests
still fail, then you know that the problem most likely resides at layer three. Essentially, the fact
that the line status is up, which is layer one, and that the protocol status is up, which is layer two,
tells us that those two layers are most likely working properly.
Layer Three: IP Addressing

Therefore, the most probable cause must lie at layer three, where IP addressing comes into play.
The most likely issue here is that IP addresses that have been assigned to the serial interfaces at
both ends of the link have been misconfigured in some way. For example, it's possible that the
interfaces have been misconfigured to run in different subnets.
Maybe you used an incorrect subnet mask, or maybe you used the incorrect network address.
Remember, the interfaces on each end of the WAN link must be configured with an IP address
that's within the same subnet.
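For instance, a correctly matched pair of serial interfaces might be addressed like this (sample values only), with both ends inside the same /30 subnet:

R1(config)# interface serial 0/0/0
R1(config-if)# ip address 10.10.10.1 255.255.255.252

R2(config)# interface serial 0/0/0
R2(config-if)# ip address 10.10.10.2 255.255.255.252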
Summary

That's it for this lesson. In this lesson, we gave you some guidelines for troubleshooting WAN
connections. We first talked about eliminating the obvious. We reviewed the importance of
isolating the fault domain. One of the ways you can do this is with the ping command. We talked
about how to narrow down the source of the problem to a specific layer using the interface status
commands. By using those commands, we can identify whether the problem resides at layer one,
layer two, or layer three.
11.5.2 Troubleshooting WAN Issues

As we've seen throughout this whole video series, there are ways that we can troubleshoot our
infrastructure. In this demonstration, we're going to look at some particular tools that we can use
to resolve WAN issues, those wide area network connections. We'll look at protocol-specific tools, like
the OSPF commands that can be used if you're running OSPF. If you're running another routing protocol,
obviously the OSPF commands are not the ones that you want to use to troubleshoot WAN activity.
We'll talk about OSPF because it's very popular; it's an open standard, and lots of companies use it
because of the flexibility and the speed that it offers.
Show IP Interface Brief

Let's talk about our WAN issues and what we can do to troubleshoot some WAN issues. The first
thing you'll want to ask is, 'Where would I look if something is broken?' You want to start at the
simplest level. Look at your interfaces. That is the best place to look for just about anything. One
of my favorite commands on routers is a show IP interface brief.
I'm going to hit Enter and then scroll up a little bit so you can see it. What this shows us is a lot
of information. It shows us the interfaces that we have available to us on that first column. The
second column is the IP addresses assigned to those interfaces. Then it says OK, that is that self-
check that the router does when it comes on. Everybody is 'yes' right now, so that's good for us.
Next is the method of how that interface was configured. The status and the protocol columns are the ones
we really want to pay attention to, because we have one where it says administratively down, which means it
hasn't been configured or we explicitly shut this down. Then the protocol is up or down.
The status is our Layer one, meaning do we even have electrons? Do we have connectivity at all?
It doesn't matter if it has an IP address or if it has any protocols running. That status says is it
plugged in? Do I have electricity flowing through this cable? That has to be up for the second
column to be up. If the second column is not up, that points to a Layer two problem. For example,
maybe we need to go look at our encapsulation like PPP.
This is a powerful command.
It doesn't do anything other than show you, as the network admin, where you need to go look.
You may need to go look at IP addressing if you have connectivity issues. You may need to look
and say, "Oh, my status is down." That doesn't mean you shut it down. If it just says it's down
maybe the cable got unplugged. Maybe it got broken. Maybe somebody's doing some
construction work outside. The best place to look first is the protocol, because that can lead to
configuration issues. The very first command that I always look at is show IP interface brief.
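To illustrate, the output of show ip interface brief looks something like the sketch below; the interfaces and addresses here are illustrative, loosely based on the topology in this demonstration:

Router1# show ip interface brief
Interface              IP-Address      OK? Method Status                Protocol
GigabitEthernet0/0     192.168.1.1     YES manual up                    up
Serial0/0/0            10.10.10.1      YES manual up                    up
Serial0/0/1            unassigned      YES unset  administratively down down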
Show IP Route

Another one we can do is show IP route. I'll type it all out this time. 'show ip route'. This shows
us the routes available to us in our router. If we can't get to the destination, if packets are being
lost, data is not making it from point A to point B, this is a great place to look after we look at
our interfaces, because this is going to tell us which way the traffic is flowing.
For example, if we're sending something or trying to send something to 192.168.100.0, we're
trying to get to that network, we need to see how it's getting there. It's going out interface Serial0/0/0.
Make sure that that's the right interface in the first place. Make sure it's pointing in the right
direction. We don't want it going out the left hand door when it's supposed to be going out the
right. That's a great place to look to make sure that we have it configured right. This one happens
to be an OSPF route, so I'm feeling pretty confident that this is going to be right. But if we typed
in a static route and it had the letter S, that means we, as the network admin, explicitly typed that in.
We're telling the router how to get there. We may have made a mistake. It happens.
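A routing table entry for that destination might look like the following sketch; O marks an OSPF-learned route, C a connected network, and S a static route an administrator typed in (addresses, metrics, and timers are illustrative, and the codes legend is omitted):

Router1# show ip route
O    192.168.100.0/24 [110/128] via 10.10.10.2, 00:12:43, Serial0/0/0
C    10.10.10.0/30 is directly connected, Serial0/0/0
S    172.16.0.0/16 [1/0] via 10.10.10.2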
We'll look here to see if we're telling it to go out the right interface.
Show IP OSPF Neighbor

Let's say in this case we're running OSPF so there's a couple things we can do here. Let's do a
show IP OSPF neighbor, Enter. This is going to show us the neighbor relationships with other
OSPF routers. If we have OSPF in our infrastructure, this is a great one to run, because if you're
not neighbored up with, say, the next router that you think you're supposed to be neighbored
up with, it's a case of, "Hey, you're running OSPF, I'm running OSPF, we have this configuration, but for
whatever reason we can't pass traffic." This is a great place to look, because if you're not
neighbored up, you're not going to pass that OSPF traffic. Then in turn, you're really not going
to have any routes in your routing table, because the OSPF database does not have the
information, because again, it's not talking to its neighbor.
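A healthy adjacency on a point-to-point serial link would show a FULL state, along the lines of this illustrative output (the neighbor router ID, timers, and addresses are placeholders):

Router1# show ip ospf neighbor
Neighbor ID     Pri   State           Dead Time   Address         Interface
2.2.2.2           0   FULL/  -        00:00:36    10.10.10.2      Serial0/0/0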
Show CDP Neighbor

We can also do something at layer two because OSPF is a routing protocol, so that's working at
Layer three with our IP addressing. Let's do a show CDP neighbor. This is going to go out and
run our Cisco Discovery Protocol. This will work when we have Cisco devices; we're talking Cisco
router to Cisco router. This is a Layer two protocol that will tell us who is next down the line.
We're on router one in our topology. The next one out is just router two. I know that because it
tells me right here. It says router two, and I can get to router two going out my Serial0/0/0 interface.
It tells me its capabilities. It's telling me that the destination is a router, and the port ID that I'm
connected to on it is also Serial0/0/0.
You can see lots of powerful information here. If you know your logical topology, maybe you
have a logical topology map and you know how things are supposed to be connected, but you
can't pass traffic. There's an error. You can run show CDP neighbors, see how it's connected and
say, "Wait a second. That's supposed to be connected to the serial 001 interface on the next
router." Then you can either make a phone call or if it's your router, you can make that
configuration change.
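The output in this demonstration looks roughly like the sketch below (the device name, platform, and port IDs are illustrative, and the capability codes line is abbreviated):

Router1# show cdp neighbors
Capability Codes: R - Router, S - Switch, H - Host, I - IGMP, r - Repeater
Device ID        Local Intrfce     Holdtme    Capability  Platform  Port ID
Router2          Ser 0/0/0         147            R        2911      Ser 0/0/0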
Ping

Then a couple other commands we can run, we can always do ping. We can ping the destination.
Let's see if we can ping 192.168.100.1 which is router number three on the destination. I know
that because we did a show IP route just a minute ago. Ping, remember that's that echo request,
echo reply. I'm requesting you to reply to me. That just tells me that hey, I can reach that interface.
That doesn't tell me how I'm getting there. It doesn't tell me what protocols I'm using, it doesn't
tell me what path I'm taking. It doesn't tell me anything. It just simply says, yes you can reach
that computer or you can reach that device and it's talking back to you.
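A successful ping from the demonstration router would look something like this (the round-trip times are illustrative; each exclamation point is one successful echo reply):

Router1# ping 192.168.100.1
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 192.168.100.1, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 1/2/4 ms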
Trace Route

That tells me I have connectivity. That's a really good test.


What if I wanted to see the path that it's taking? There's another tool called trace route that I could
use. I can say 192.168.100.1 and I can see the path that it's taking and it says oh, you're going to
the next router which is 10.10.10.2, and then it's talking to 192.168.100.1. Let's do 100.2. You
can see that it's going from my router, because it has to leave my interface first, then it goes to
10.10.10.2, which is on router 2, and then it's going to 192.168.100.2, which happens to be on
router three.
Each time it goes to the next hop, the next router. It comes back and gives me a message. Then
it skips that next hop that it just told me it was at and goes to the next one. Then it would keep
on doing this until I reached my destination. We've only got one hop in between the router that
I'm trying to get to, because we only have three in our topology. That was very quick. On a Windows
machine, you would type tracert; in that case it's not spelled out as traceroute, but it does the same
thing. You could trace all the way around the globe to see the different hops that it's taking.
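For reference, the trace in this demonstration would produce output roughly like the following (the timings are illustrative):

Router1# traceroute 192.168.100.2
Type escape sequence to abort.
Tracing the route to 192.168.100.2
  1 10.10.10.2 4 msec 2 msec 2 msec
  2 192.168.100.2 8 msec 6 msec 6 msec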
These are good powerful tools that we can use. They're all built in. We don't have to have anything
third party for our router.
Summary

In this demonstration, we troubleshot WAN issues using the commands show IP interface brief and
show IP route, which specifically looks at that exit interface. We used show IP OSPF neighbor
and show CDP neighbor; and then we wrapped it up with ping and trace route.
11.5.3 WAN Troubleshooting Facts

You can use the show interfaces command on Cisco routers to view the interface status and
identify connectivity problems on a WAN link. The following table summarizes some possible
conditions indicated by the interface status:
The combinations below are given as line status / protocol status, followed by the condition they indicate.
administratively down / down: The interface is configured with the shutdown command.
down / down: There is a hardware or network connection problem (Physical layer), such as:
• No cable or bad cable
• A powered off device or administratively shut down interface on the other end of the cable
up / down: There is a connection or communication problem (Data Link layer), such as:
• No clock rate provided by the DCE device.
• Mismatched encapsulation.
• Incorrect authentication parameters for PPP, including:
• Mismatched authentication method
• Missing username statements
• Mismatched passwords
up / up: The interface is working correctly.
After verifying that the interfaces have Layer 1 and Layer 2 connectivity, proceed to troubleshoot
TCP/IP connectivity by verifying the following:
• Devices have unique IP addresses.
• The same subnet mask is used on all devices on the same subnet.
• The IP addresses assigned to each device are on the same subnet.
• Routing table entries are correct.
When troubleshooting connectivity, know the following:
• If a problem exists at Layer 1, you must correct that problem before troubleshooting Layer
2 connectivity. If a problem exists at Layer 2, you must correct that problem before you can
troubleshoot upper layer connectivity.
• ping and traceroute are used to verify Network layer connectivity, and Telnet is used to
verify Application layer connectivity and configuration.
• A failed ping or traceroute test might indicate Layer 1, Layer 2, or Layer 3 problems.
Examine the interface status to rule out Layer 1 and Layer 2 problems.
• A successful Telnet test means that ping and traceroute will also be successful. A failed
Telnet test merely indicates a failure at the Application layer or below. It does not tell you at
which layer the problem exists.
• Because some devices do not respond to ICMP messages, you might have Network layer
connectivity between devices even if ping or traceroute fail.
• A successful ping test followed by an unsuccessful Telnet test means that Network layer
connectivity exists. Troubleshoot the upper layer configuration.
• Even if Telnet to a router fails, the router might still be routing packets. Routing happens
at the Network layer, while Telnet happens at the Application layer.
The following list of commands may help when troubleshooting WAN connections on a Cisco
router:
router#show interfaces: Lists a large set of information about each interface.
router#show interface status: Displays summary information about the interface status.
router#show ip interface: Displays a small set of information about each IP interface.
router#show ip interface brief: Displays a single line of information about each IP interface.
router#show ip route [ip address]: Displays details about the route a packet takes to reach the
specified IP address.
router#show controllers [serial interface]: Displays the serial interface configuration, such as the
type of serial cable and which end of the cable is connected to the device (DCE or DTE). A sample
of this output follows the list.
router#ping [ip address]: Tests communication with a specific interface, using its IP address.
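As an example of show controllers, the snippet below is an illustrative sketch (hardware name and clock rate are placeholders) showing a router that is the DCE end of the cable and must therefore provide the clock rate:

Router1# show controllers serial 0/0/0
Interface Serial0/0/0
Hardware is GT96K
DCE V.35, clock rate 64000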
UNIT-V

Chapter 15: Network Management

15.1 Update Management

As you study this section, answer the following questions:


• What is the difference between a hotfix and a service pack?
• What does flashing do to firmware?
• Where can you go to find updates for applications or drivers?
• What does Windows Update do?
After finishing this section, you should be able to complete the following task:
• Configure an update server.

15.1.1 Update Deployment and Management

Let's spend a few minutes talking about updates. We'd like to think that, when we purchase
software and install it, that it's in a perfect state. Unfortunately, this isn't true. All software
needs to be continually updated. Some updates are required to fix errors in the code. Others are
released to fix security problems that have been discovered. Some updates may add
functionality or new features that weren't included with the original software.
For these reasons, it's important to keep your operating system, your applications, your device
drivers, and your firmware up-to-date.
Let's discuss operating system updates first.
Operating System Updates

All operating systems, including Windows, Linux, and Mac OS, need to be updated as patches
are released by their vendors. For example, Microsoft classifies updates for the Windows
operating system as either hotfixes or as service packs.
Hotfix

A hotfix is a patch that addresses one specific problem with the operating system or its related
files. Microsoft hotfixes are identified with a number that's preceded by the letters KB, which
stands for Knowledge Base. This is because a hotfix is usually associated with a Knowledge
Base article. If you are experiencing a problem, you may find a Knowledge Base article with a
specific number, which identifies an associated hotfix that you can install to fix that particular
issue. Hotfixes are created frequently, whenever Microsoft finds and fixes problems in the
operating system code.
Service Pack

In addition to hotfixes, Microsoft also distributes service packs. A service pack has all of the
hotfixes that have been released up until that point in time. Essentially, a service pack is a
collection of lots of fixes that bring the entire operating system up to its most current level.
Service packs are identified using the characters SP (for Service Pack) followed by a number
indicating the revision level of that service pack. For example, SP1 would be a service pack
that included all of the hotfixes that Microsoft released for a particular version of Windows up
to the point in time that the service pack was created.
Following the release of SP1, there will be additional hotfixes released to address new
problems as they are identified. Those hotfixes are not included in SP1. So, later on when
Service Pack 2 is created, it will include all of the new hotfixes that were released since SP1.
After installation, the service pack designation is used to identify the patch level of the
operating system on the computer. The first version of the operating system, the one that doesn't have
any service packs applied, is usually referred to as the RTM (Release to Manufacturing) version. As
service packs are installed, it will be identified as SP1, SP2, and so on. For example, a
Windows 7 system with SP2 installed is referred to as 'Windows 7 SP2.'
In early versions of Windows, you could bring the system up to the latest service pack revision
level by simply installing the latest service pack (such as SP2). This is no longer the case with
modern versions of Windows. For example, if SP2 is the latest service pack available, then you
must first install SP1 followed by SP2. Then you would apply any additional hotfixes that may
have been released since the last service pack was released. This process is required whenever
you initially install an operating system, and it can take some time to complete.
Windows Updates

Microsoft includes a feature called Windows Update that keeps the operating system up-to-
date. Windows Update automatically identifies, downloads, and installs updates for the
operating system as well as the driver files that are provided with Windows by Microsoft.
Windows Update can also be configured to provide updates for other Microsoft products, such
as Microsoft Office, using a service called Microsoft Update. Microsoft Update is disabled
by default. If you want to use it, you must manually enable it in the Control Panel.
Windows Update only provides updates for driver files that have been registered with
Microsoft. If a hardware manufacturer has not registered their driver with Microsoft, then
updates for those drivers will not be made available through the Windows Update service. If
this is the case, you must check the hardware manufacturer's website for updated versions of
the driver. If one is available, you must download and install it manually.
Whenever you install a new hardware device in your system, you should always download and
install the latest driver version. The drivers that are commonly included on the installation disc
with the hardware are usually out of date.
In addition, if you are having problems with a particular hardware device in your system, one
of the first things you should do is update its driver to the latest version. Device drivers are
software, and they contain bugs just like any other software. As those bugs are discovered,
updated versions of the device driver will be released to correct the problems with the older
versions of the driver.
In addition to operating system and device driver updates, you also need to ensure that the
applications installed on a system are kept up-to-date. How this is done depends on the
application. If it's a Microsoft application, then you are most likely going to get updates
automatically from the Microsoft Update service that we talked about.
Third-Party Application Updates

However, you cannot get updates from Microsoft for third party applications. There are a
couple of ways you can get updates for these kinds of applications:
• For some applications, you need to go to the manufacturer's website and check for updates. If
any are available, you must manually download and install them.
• Other applications have a built-in update feature that will periodically check the
manufacturer's website to see if an update is available. If so, it will prompt you to download and
install it.
Firmware Updates

In addition to the update types we have discussed so far, you also need to be aware of firmware
updates. Firmware is software that's embedded in the flash memory of a hardware device. This
software needs to be updated just like any other software. There may be problems with it, there
may be security issues with it, or it could contain bugs. You need to update the firmware for
devices such as:
* The BIOS on the motherboard of a computer
* The firmware on an expansion board
* The firmware used to run devices such as network switches, routers, or wireless access
points.
The way you update firmware is completely different from the way that you update
applications, operating systems, or device drivers. For example, to update the BIOS on a
motherboard, you must download the firmware update from the vendor's website, and then run
an executable which will rewrite the BIOS with the updated software.
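Before flashing, it helps to record the BIOS version you are currently running so you can compare
it with the version on the vendor's website. A minimal sketch, assuming a Windows host with the
built-in wmic tool available:

    # Read the current BIOS manufacturer, version string, and release date.
    import subprocess

    result = subprocess.run(
        ["wmic", "bios", "get", "Manufacturer,SMBIOSBIOSVersion,ReleaseDate"],
        capture_output=True, text=True, check=True
    )
    print(result.stdout.strip())
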
Network Device Updates

The process for updating a network device is different. In this example, we need to update the
firmware on a wireless access point. To do this, I would first go to the manufacturer's website,
download the firmware update, and then connect to the device from a computer on the network.
The specific way this is done will vary from manufacturer to manufacturer, but the general
steps will be similar.
Updating the firmware is often called flashing, because the update process overwrites the
contents of the flash memory on the device with updated software. Before doing this, be sure to
back up the current firmware first. Most firmware update utilities provide an option to back up
the current firmware to a file prior to installing an update, just in case something goes wrong.
Then, while the update process is running, be sure you do not turn off the device until the
update is complete. If you interrupt the update or turn the device off before the process
finishes, the device may be left unbootable.
Summary

In this lesson, we discussed system updates. We talked about the importance of keeping your
systems updated as a part of your overall regular system maintenance plan. We talked about
updating the operating system itself and the applications along with updating device drivers.
Then we ended this lesson by looking at firmware updates on hardware devices.

15.1.2 Configuring an Update Server

Let's take a look at how to install Windows Server Update Services, WSUS.
We're going to start by adding a role. We want to click 'Add Roles and Features', hit Next,
Next, and select this server.
Add the Role

Scroll down and check 'Windows Server Update Services.' Notice that Microsoft recommends
that you secure it with SSL. You can add a certificate later and secure the website with SSL;
you don't actually have to, but that's what they recommend. At least one of the servers needs to
be able to download updates from Microsoft.
Since we're installing the first WSUS server we're going to let it synchronize from Microsoft. If
we were installing a second one, a downstream server, we could actually point it up to this one
and have the downstream synchronize from the upstream. Because this is our first WSUS server,
we're going to have to talk to Microsoft.
We're going to leave the WID Database and WSUS Services selected.
Role Services

This database option would be used if we were going to store the update information in SQL
Server, which we're not.
Now it asks us to give it a local path, where it's actually going to store the updates.
Identify the Local Path

If for some reason you didn't have enough space to store the update files, or if you had a
situation where the clients were better connected to the Internet than to the intranet, we could
uncheck this. The clients would still check in with WSUS to see which updates are approved,
but because we're not storing the updates they'd pull them down directly from the Windows
Update website.
We're going to go ahead and store the updates on the server. I'm going to put them in a folder
called c:\wsus. Wherever you put them, make sure the volume is formatted with NTFS and
has at least 6 GB of free space, but I would plan for quite a bit more space depending on how
many updates you plan on downloading.
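A quick sketch of checking that space, assuming Python is available on the WSUS server; the
c:\wsus folder from this demonstration lives on C:, so the drive root is checked here--adjust as
needed.

    # Check how much free space the target volume has before pointing WSUS at it.
    import shutil

    free_gb = shutil.disk_usage("C:\\").free / (1024 ** 3)
    print(f"Free space on C: is {free_gb:.1f} GB")
    if free_gb < 6:
        print("Warning: below the 6 GB minimum recommended for WSUS update storage.")
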
WSUS is a website; it's essentially an intranet version of the Windows Update website.
Installing IIS

It's installing IIS and configuring it. If IIS is already installed, it's going to go ahead and make
a virtual directory for WSUS. Once the WSUS role installation succeeds, you need to configure
the server.
Configure the Server

I'm going to go ahead and Launch Post-installation tasks. Once the configuration is complete
you want to go ahead and open up WSUS and run the configuration wizard.
WSUS Server Configuration Wizard

Generally, the first time you come in, it will jump right into the Configuration Wizard. If it
doesn't, just click on 'Options' and at the bottom you can run the Configuration Wizard.
Prerequisites

Make sure the firewall is set up to allow the clients access. Make sure that if it's the first
WSUS server it can connect to Microsoft Update, and that if it's a downstream server it can
connect to the upstream server. Make sure you have credentials for a proxy server if you're
using one in your environment. You can also opt whether or not to join the Microsoft Update
Improvement Program.
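For the firewall prerequisite above, WSUS on Server 2012 listens on TCP 8530 (HTTP) and 8531
(HTTPS) by default. A sketch of one way to open those ports, assuming Windows Firewall, the
built-in netsh tool, and an elevated prompt:

    # Allow inbound client traffic to the default WSUS ports (8530/8531).
    import subprocess

    subprocess.run(
        ["netsh", "advfirewall", "firewall", "add", "rule",
         "name=WSUS-Client-Access", "dir=in", "action=allow",
         "protocol=TCP", "localport=8530,8531"],
        check=True
    )
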
Microsoft Update Improvement Program

This isn't a production server so we're just going to go ahead and hit 'Next'.
This is where you choose where WSUS is going to synchronize from. Because we're the first
server, we're going to synchronize from Microsoft Update.
Choose the Upstream Server

If this were the downstream server, I would click here, put in the name of the upstream server,
whatever port I'm going to use (8530 is the default), and whether or not I'm going to use SSL. If
this is going to be a replica, I would check the replica option.
Replica servers automatically get the updates that are approved at the upstream server; there's
no approval going on at the downstream. If I leave this unchecked, updates are approved at the
upstream server and administrators can also approve updates at the downstream server. Again,
because this is the first WSUS server, we're going to synchronize from Microsoft. If I have a
proxy server in my environment, I'm going to go ahead and put that in, and then I need to
connect to the upstream server.
Specify Proxy Server

Anything you choose in the wizard can be modified after the fact, but you need to go through it
at least once to get WSUS up and working. Once WSUS synchronizes with the upstream server
for the first time, you just go ahead and hit 'Next'. Now we need to choose which languages
we're going to support.
Choose Languages to Support

You can also select which products you're going to distribute updates for using WSUS.
Choose Products

You can see pretty much everything Microsoft makes is in here. You can also choose what type
of updates you want to download. You can see that by default it's just going to download
Critical Updates, Definition Updates, and Security Updates.
Choose Classifications

You could pull down Service Packs, Tools, Drivers, pretty much anything that you want, and
then you set up how it's going to synchronize.
You can set it to synchronize manually, so it only synchronizes whenever you tell it to, or you
can have it synchronize automatically however many times per day you want.
Configure Synchronize Schedule

I'm going to let it begin the initial synchronization and then finish.
That's pretty much all there is to getting WSUS installed. Once you've got it configured, it's
going to download the updates, and then you'll be ready to roll them out to your clients.
Configuring WSUS

Now we're going to take a look at managing and configuring WSUS. I'm going to open up WSUS
and we'll start by taking a look at Options.
Options

Pretty much anything that you set up during the wizard, you can change using Options.
Update Source and Proxy Server

So, for example, if I needed to change where I'm getting my updates from, or turn this into a
downstream server, I would do it in Update Source.
Proxy Server is for setting up a proxy, if your environment requires you to go to a proxy to get
to the Internet.
Products and Classifications

Products and Classifications is just that, which products you're going to have updates come
down for and then which type of updates you're going to download, Classifications.
Update Files and Languages

Update Files and Languages has to do with where you're going to store the update files. So
currently I'm storing them locally on this particular server and I'm going to download updates
only when they're approved. I can also opt whether or not to download the express installation
files.
Here's the deal: for some of the updates that Microsoft puts out, there's a part of the update that
launches and then pulls down the rest of it from Microsoft. When you get the express
installation files, they contain the entire update, so that's going to be faster for the clients.
However, since these are larger files, it's going to increase download times for your server. So
you can choose whether or not to download express installation files. If this were a downstream
server, we could choose whether to download the files from Microsoft or from the upstream
server: if I were to check this, I would download from Microsoft; if I leave it unchecked, I'm
downloading from the upstream server. Alternatively, I could choose not to store the files
locally at all, and the computers would download them from Microsoft Update. Notice I can
either store the files locally or have the clients get them from Microsoft.
If you have a mixed environment where you want some computers to download the files from
you and other computers to go directly to Microsoft--maybe because they're better connected
to the Internet, or because you have some remote servers that come in over VPN--you don't
have the option to do that here. You'd actually have to build two WSUS servers: one for the
internal clients that are going to pull the files from WSUS, and another one for the external
clients--people coming in over VPN or at a small office--where you just use WSUS to approve
the updates and let them pull the files directly down from Microsoft.
Here's where we set up which languages we're going to download, if we want to add support
from another language, we can do that in here.
Automatic Approvals
Automatic Approvals allows us to set up rules for updates that will be automatically approved.
For example there's a Default Automatic Approval Rule that I could enable that says, if it's a
critical update or a security update, go ahead and automatically approve that update for all
computers. I can make my own rules. Whether it's a particular classification or a particular
product or set a deadline for the approval, so I could say, if it's for Exchange approve the
update for all the computers or whatever it is I wanted to do.
Synchronization Schedule, we had chosen that during the wizard.
Synchronization Schedule

We can synchronize manually, which means I'm going to have to come in and launch the
synchronization myself, or I can have it synchronize automatically, however many times per
day I want. Notice it's going to have a random offset of up to 30 minutes, just so it's not
always hitting Microsoft at the exact same time.
Computers

Computers allows me to decide how computers are going to get assigned to groups. We'll go
take a look at groups in a few minutes. Either I can go through and create groups and manually
move the computers into those groups, or--much more efficiently--I can use Group Policy or
the registry on the computers; that's called client-side targeting. Generally that's the more
efficient way to do it.
Server Cleanup Wizard

I've also got a Server Cleanup Wizard that can try to find old computers, old updates, that type
of thing.
Reporting Rollup

Reporting Rollup controls how the reports are going to be handled between the upstream,
downstream, and replica servers. The way it's set by default is to roll up the status from
replica downstream servers, but I can come in and say I don't want to see the status from the
downstream servers. Either way, whatever works best for your environment.
E-Mail Notifications

I can set up E-Mail Notifications. I can have it send me an e-mail when updates are
synchronized or send me status reports, and then of course I've got to come in and specify the
SMTP server.
Microsoft Update Improvement Program

If I didn't opt to join the Microsoft Update Improvement Program, I can always do that after the
fact. And then Personalization also has to do with downstream servers.
Personalization

When I click on 'Computers,' I'm going to see all the computers that are clients of my WSUS
server. By default, that's going to include computers from replica downstream servers. If I just
want to see the clients of this server alone, I could come in and change it in 'Personalization.' A
lot of stuff is going on in 'Options.'
Reports

WSUS can also give you reports. You're not going to have a lot of reports here, but basically it
can show you which computers checked in and whether or not they successfully downloaded
the updates.
It's not able to give you a detailed inventory of what updates are on which computers; it just
shows you the status of what WSUS tried to distribute to them. So if you're looking for a
definitive inventory of what updates might already be there--because they came in before
WSUS--you either need to go through it manually, use the Microsoft Baseline Security
Analyzer, or use SCCM for that.
Synchronizations

Synchronizations will show me when synchronizations occurred and whether they were
successful.
Downstream Servers

If I did have downstream servers, I can take a look at them in here. Really that leaves us with
Updates and Computers.
Updates

Go up to 'Updates.' I can look at All Updates.


The updates are not going to go out to the clients until you approve them. So if you have a
situation where clients have some updates but are missing others, maybe you forgot to
approve them. When I want to approve an update, I'm going to right-click it and hit Approve.
Then I can say for which groups I'm going to approve it. You can see that by default that's All
Computers and Unassigned Computers.
Computer Groups

My computers can be organized into groups for purposes of rolling out updates. You can see
that by default there's really just All Computers, and under that I have a group of Unassigned
Computers. Right now, when computers contact the WSUS server and say, "I'm your client,
I need some updates," they're going to drop into Unassigned Computers.
If you want to have a situation where you roll out updates to some computers and not others,
then you would create computer groups. Maybe I have some network administrators or high
powered end users and I've decided that I'm going to test updates on their machines. I know if
something breaks, they're not going to have a heart attack. I'm going to go through and I could
make a group called Testing Computers, or just Testing. Then, as we saw earlier, once the
computers have checked in they're going to drop into Unassigned Computers, and I can either
move them into Testing manually or use client-side targeting in Group Policy (or the registry)
to tell the computer in advance that it's part of the Testing group, so that when it checks in it
will automatically be placed into that group. As I said, that's the better option; a sketch of the
registry values involved is shown below.
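This is a rough sketch of the Windows Update policy values that client-side targeting relies on.
They are normally delivered by Group Policy, but setting them directly in the registry has the
same effect. It assumes an elevated prompt on the client; the server URL wsus01 and the Testing
group name are placeholders for your own environment.

    # Point the client at a WSUS server and assign it to a target group (client-side targeting).
    import winreg

    wu_key = r"SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate"
    wsus_url = "http://wsus01:8530"   # placeholder WSUS server name and default port

    with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, wu_key) as key:
        winreg.SetValueEx(key, "WUServer", 0, winreg.REG_SZ, wsus_url)
        winreg.SetValueEx(key, "WUStatusServer", 0, winreg.REG_SZ, wsus_url)
        winreg.SetValueEx(key, "TargetGroupEnabled", 0, winreg.REG_DWORD, 1)
        winreg.SetValueEx(key, "TargetGroup", 0, winreg.REG_SZ, "Testing")

    with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, wu_key + r"\AU") as key:
        winreg.SetValueEx(key, "UseWUServer", 0, winreg.REG_DWORD, 1)

After the client's next detection cycle it should drop into the Testing group automatically.
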
Approve Updates

If I do have groups, when I go up to my updates and I go to approve them, then I'm going to see
the groups. I could approve it just for that group. I could approve it for Removal, for Install, or
I could permanently say it's Not Approved. I'm going to Approve it for Install and then just that
group will install it. I could wait a while to see if everything is copacetic and nothing goes wrong,
then come back in here and roll out the update to everybody else.
Set a Deadline

If you need to make sure the update gets installed within a certain time frame, you should set a
deadline. It could be one week, two weeks, one month, or if I do custom, I could set it right
now. WSUS uses something called BITS, Background Intelligent Transfer Service. The idea is
that the clients are going to pull down the update files using idle bandwidth and then once the
files are down they'll get installed depending on how I have Windows Update configured on the
client.
The reason you might set a deadline is, let's say there's not a lot of idle bandwidth. The clients
might see that an update is approved, but it may take a while to download it. So if you need to
make sure that the update gets installed as quickly as possible, set a deadline; then the client
won't just wait for idle bandwidth, it will make an effort to ensure that the file comes down
within the deadline and then gets installed. I'm going to go ahead and
approve my update, now that's approved. Anybody from the testing group would go ahead and
install that update.
So make sure that you approve the updates. If you don't approve them, they won't get
installed.
Summary

If you're missing updates, it might be that you didn't approve them.
That's pretty much all there is to WSUS from the server side.
The other piece of it is setting up the clients which generally we do with Group Policy. This is a
great way to control the release of updates into your environment, so that you know when the
updates are going out. It's great if you can roll them out to a testing group first and make sure
they're okay. Does is do anything to the operating system and then by storing the updates on the
WSUS server, you save bandwidth across the WAN link, because the computers can pull the
update files down from WSUS, instead of going all the way out to Microsoft.
15.1.3 Update Deployment and Management Facts

Upgrading is the process of replacing a product with a newer version of the same product. An
upgrade is generally a replacement of hardware, software or firmware with a newer or better
version, in order to bring the system up-to-date or to improve its characteristics. Downgrading
is the process of reverting software (or hardware) back to an older version; downgrade is the
opposite of upgrade. Often, complex programs may need to be downgraded to remove unused
or buggy features, or to increase speed and/or ease of use. You should always have a
configuration backup, so that you have something to downgrade to.
Updates are periodically released to:
• Fix bugs (errors) in programming code
• Patch security vulnerabilities
• Add features or provide support for new hardware
There are two types of Windows updates:
Update Type Description
Hotfix A hotfix is an operating system patch that fixes bugs and other vulnerabilities in the
software.
• Hotfixes may be released on a regular basis as fixes are created.
• For the highest level of security, apply hotfixes as they are released (after you verify that
the hotfix will not cause additional problems).
• Microsoft assigns a number to each hotfix. This number also identifies a Knowledge
Base (KB) article that describes the issues addressed by the hotfix.
Service pack (SP) A service pack is a collection of hotfixes and other system enhancements.
• A service pack includes all hotfixes released up to that point. If you install the service
pack, you do not need to install individual hotfixes. A service pack also includes all previous
service packs.
• Service packs might include additional functionality beyond simple bug fixes.
Windows Update is a feature that helps keep your computer up to date.
• By default, Windows automatically checks for, downloads, and installs updates.
• Updates are classified as Important, Recommended, or Optional. By default, Important
and Recommended updates are installed automatically.
• Windows Update can install both hotfixes and service packs. For example, after
installing a new version of Windows, Windows Update will download and install the latest
service pack.
• Windows Update includes updates for the following:
• Windows operating system and utilities
• Drivers that have passed Microsoft certification and are made available through
Windows Update
• You can turn off automatic downloading or installation of updates. You can configure
your computer to:
• Not check for updates (you can manually check for updates at any time).
• Notify you of updates, but require your permission to download or install them.
• Download updates, but ask your permission to install them.
• Check for Microsoft updates, but not automatically update driver files.
• You can view a list of installed updates and remove any updates.
• For additional updates, you can use Microsoft Update instead of Windows Update.
Microsoft Update includes updates for Microsoft applications, such as Office applications.
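For a quick look at which hotfixes are already installed on a given system (including their KB
numbers), you can query Win32_QuickFixEngineering. A minimal sketch, assuming a Windows
host with the built-in wmic tool:

    # List installed hotfixes with their KB IDs and install dates.
    import subprocess

    result = subprocess.run(
        ["wmic", "qfe", "get", "HotFixID,Description,InstalledOn"],
        capture_output=True, text=True, check=True
    )
    print(result.stdout.strip())
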
You should be aware of the following facts when working with updates:
• Hotfixes and service packs are specific to an operating system version. A hotfix for
Windows 8.1 will not work on Windows 7. However, a hotfix for Windows 7 Ultimate will
typically also apply to Windows 7 Enterprise.
• In a business environment, it is wise to test updates before installing them on multiple
systems.
• Non-Microsoft applications and many drivers will not be updated through Windows
Update.
• Many applications include a feature that periodically checks the manufacturer's website
for updates. These programs typically ask your permission to download the updates.
• To check for updates to applications or drivers, go to the manufacturer's website.
• Hardware devices, such as the BIOS or many networking devices, store code in a special
hardware ROM chip. This software is referred to as firmware. Updates are done by flashing
(replacing or updating) the code stored on the chip.
• Always follow the instructions when performing firmware updates.
• Many updates are performed through a browser; some updates can only be performed by
booting to special startup disks while outside of Windows.
• Turning off the device or interrupting the update process could permanently damage the
device.
• If possible, always back up a device's configuration before installing a new firmware
update.

15.2 Data Protection

As you study this section, answer the following questions:


• What is the difference between a data backup and a server backup?
• What permissions do you need in order to perform a backup?
• Which type of server backup is for recovering only critical volumes?
• In Windows 8.x, which application do you use to back up user account files?
• How does backing up your server to an internal disk differ from backing up to an external
disk?
After finishing this section, you should be able to complete the following tasks:
• Configure a data backup.
• Configure a server backup.
This section covers the following Network Pro exam objective:
• Domain 7.0 Network Management
• Given a scenario, perform data and server backup tasks.

15.2.1 Data Backups

In this lesson, we will discuss the importance of backing up data on computer systems. Part of
your regular system maintenance should include performing backups to protect critical data. The
backup process creates a copy of the data on your system. If a disaster occurs, such as a failed
hard disk or a natural disaster, a backup can be used to recover your system.
Backups are also useful in situations where you need to restore deleted, changed, or corrupt files.
For example, suppose you're working on a very important file. You make several changes and
then save the file. The next day, you make many additional changes and save them. After doing
so, your supervisor informs you that the project scope has changed and you realize that all of the
changes you made today are now not needed. You need to get back to the version of the file from
the previous evening. You can use a backup in this situation to restore a previous version of the
file.
Types of Backups

Be aware that there are many different types of backups that you can create. Each type has its
advantages and disadvantages.
For example, you could create a system state backup. This backup contains system state data,
including anything required by the operating system to restore the configuration for your
computer, such as operating system files, registry settings, drivers, and any other configuration
information required to run the operating system. It may also include the applications that have
been installed on the system.
You can also back up user data. These are the word processor, spreadsheet, presentation, music,
and graphics files that are saved on the system by end users.
You can also create an image backup. This type of backup captures everything on the system's
hard drive, including operating system files, applications, and user data.
The type of backup job you configure will determine what types of files will be included when
the backup is actually run. For example, running a system state backup will include all of your
operating system files and application files. However, it will not back up your users' data files.
Backing up user data will protect these files, but it will not back up operating system files or
application files. Creating a system image will back up everything. However, they take a very
long time to create and to restore. Typically, you will create user data backups more frequently
because user files are constantly being modified. You will create system state backups less often
because operating system and application files are changed only occasionally. System images
may be taken even less often due to the amount of time required to create them.
There are many tools you can use to create backups. Some come with the operating system,
others are third-party tools that can be purchased separately. In this lesson, we're going to focus
on the tools that come with the Windows operating system that you can use to create backups
and protect your data. The actual tools you have at your disposal depend on which version of
Windows you are using.
Backup and Restore Option

In Windows 7, you can use the Backup and Restore option in Control Panel to perform two types
of backups:
* A system image backs up an entire volume to a .vhd file. It contains everything on the system,
including the operating system, installed programs, drivers, and user data files. This can be very
useful as .vhd files are supported as virtual disk files by the Hyper-V virtualization platform. You
can create a system image backup of a physical system and then open the resulting .vhd file on a
Hyper-V hypervisor system, effectively migrating your hardware-based system to a virtual
machine.
* A file backup backs up specified files and folders to a compressed file. The files can be
manually selected, or Windows can automatically choose them for you. File backups leverage
the shadow copy feature, which allows files to be backed up even if they are open. Be aware that
file backups do not include any system files, program files, encrypted files, Recycle Bin files,
user profile settings, or temporary files.
With both types of backups, the first time you run the backup job, it backs up all of the selected
files or the entire system image. The next time you run either backup, the backup job checks to
see which files have changed since the last backup, and then backs up only those files that have
been modified in some way since the last time the backup job was run.
By default, backups are scheduled to occur every Sunday at 7:00 pm; however, you can modify
this schedule as needed. Backups can occur once every day, week, or month.
Backup Locations

Backups can be saved to the following locations:
* Secondary internal hard drives
* External hard drives
* Optical drives
* USB flash drives
* Network shares
* .vhd files
* Network Attached Storage (NAS) or Storage Area Network (SAN)
File History Option

In Windows 8.1, the File History option is the primary method used for backing up and
recovering personal files. File History does not back up the entire system; instead, it backs up
users' files saved within their profile library, including their files, contacts, and Internet favorites.
A user can add additional folders to their library and also have those folders backed up using File
History.
When File History is enabled, Windows monitors users' libraries, desktop, contacts, and Internet
Explorer favorites and checks to see if any of this data has changed since the last check. If it
has, Windows saves copies of the changed files. This creates a snapshot of those files at that
particular point in time. Once a snapshot has been taken with File History, a previous version of
a file can be restored if a file gets lost or corrupted.
When File History is enabled, the location for storing the backups must be specified. You should
specify a drive other than the one the user files are on.
Windows 8.1 also supports the creation of system images, which are created in the same manner
as on Windows 7.
Windows Server Backups

Server versions of Windows provide system image and file backups, just as workstation versions
of Windows do. However, they also provide several additional backup options:
* A Full Server backup backs up all volumes on the server, which allows you to recover the full
server. Because server data is so critical, this is the recommended backup option that you should
choose.
* A Bare Metal Recovery backup creates a backup that can be used to recover just the server
operating system. Only the critical volumes are backed up.
Another mechanism that is used to protect Windows systems is called System Restore. System
Restore is extremely useful. It takes periodic snapshots of your computer's system state data. The
first snapshot that is taken includes the entire system state. Subsequent snapshots are periodically
taken thereafter. However, they include only the changes that have taken place since the last
snapshot was taken.
Using these System Restore snapshots, you can restore your computer to any previous point in
time where a snapshot has been created. For example, suppose you need to install a new video
driver on your system. Before installing the driver, you take a snapshot. Then, after installing the
driver, you find it causes huge problems with your system. Instead of trying to manually uninstall
the driver, you can simply revert the system back to the snapshot that you took prior to the
installation. System Restore is enabled by default and automatically takes snapshots as needed.
However, you can also manually create restore points before you manually make changes to the
system.
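If you want to create a restore point manually before a risky change, one option is to call the
built-in Checkpoint-Computer cmdlet. A sketch, assuming a Windows client with System Restore
enabled, PowerShell available, and an elevated prompt:

    # Create a manual restore point before installing a new driver.
    import subprocess

    subprocess.run(
        ["powershell", "-NoProfile", "-Command",
         "Checkpoint-Computer -Description 'Before video driver install' "
         "-RestorePointType MODIFY_SETTINGS"],
        check=True
    )
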
Previous Versions

One final backup method that you need to be aware of is called Previous Versions. This option
is only available on Windows XP, Vista, and 7. Basically, Previous Versions provides the same
functionality as File History on Windows 8. With Previous Versions enabled, System Restore
snapshots include not only the system state, but also a copy of all user data files. Like File History,
Previous Versions allows me to roll back or restore a previous version of a file. For instance, if I
were to delete a file or make changes to a file, I can use Previous Versions to go back and restore
a previous version of that file.
After creating backups, be sure to test them to make sure they work. Sometimes our backup
media can wear out, making the backup invalid. You need to discover this BEFORE a disaster
occurs, not after. Also, be sure to store your backup media in a safe place. It doesn't do any good
to take a backup only to find that you can't restore from it, or that you can't find it because it was
lost or used for something else.
Summary

That's it for this lesson. In this lesson, we discussed options you have for backing up data. We
talked about the different types of backups you can create and what type of data is included
within them. Then we talked about the different backup tools you can use to protect Windows
systems.
15.2.2 Windows Backup Utility

In this demonstration, we're going to use the Windows Backup utility to back up files and folders
in our actual system.
Set up Backup

If I go to Control Panel and I go to Backup and Restore, the first time you use the Backup and
Restore tool, you can click on Set up Backup. You cannot have more than one backup job on a
system at a time. I'm going to choose my E: drive as my destination and click Next. I can have
Windows choose for me or I can choose myself. Either way, an image of your entire system is
backed up. The entire C: drive and the System Reserved partition are backed up, and they're put
into a VHD file for you that you can actually use to do a bare metal recovery or a complete
disaster recovery of your machine. If your system is completely dead and you have a system
image of it,
you can easily restore it back to the point where you made the system image backup.
Back up Files and Folders

Here I can choose to back up my users' libraries, and I can actually choose to back up specific
files and folders. For example, if I want to back up the performance logs folder on the C: drive,
I can choose it. I'm going to click Next. You can see a summary of what I'm going to be backing
up. I can also change my schedule. By default, once I create one backup, it will automatically
back up every Sunday at 7 PM.
Change Backup Time

I can change that to back up monthly, on the 1st, 2nd, 25th--whatever day of the month--or I can
do it on a daily basis. But for the time being, I'm going to disable schedules. I'm also being
warned that I might need a System Repair Disk if I want to use a system image file. I can also
boot up from a Windows PE utility CD or I can boot up from the Windows 7 media, as well. I'm
going to click Save Settings and Run Backup. I can also click View Details and see what's
happening.
As you can see, first shadow copies are created, so in case I have any open files, those open files
can be backed up, as well. The entire system will be backed up. I can also use the Backup and
Restore tool to create a system image directly, without even creating a full backup. All I have to
do is come to Backup and Restore and click Create System Image.
Create Repair Disk

I can even create a repair disk. In order to create a repair disk, I actually need a blank, burnable
media, like a CD or DVD, in the computer and it will directly build the disk for me. I don't
necessarily need a system repair disk, as I can use a Windows PE or Windows 7 bootable CD to
get into the Windows recovery environment. Once the system image creation window opens up,
I can choose a hard disk that I want to back up to. I can have it burned directly to a CD or a DVD,
or I can actually save it to a network location. We're going to wait for this backup to complete.
Available Backup Space

As you can see, our backup is complete. Now you'll notice that I can click on Manage Space.
It'll show me how much space my backups are taking up. I can also view my backups to see all
the previous backups that have been made, and I can even delete them.
Change System Image Settings

We can also change the system image settings. Because we don't have any automatic
scheduled backups going on, we don't have any automatic system images created. We want to
make sure we retain the most recent system images that are created. Let's click Okay. We're going
to close out of this. I can configure a schedule if I want. I can change the settings of my backup
by changing the selection. As you can see, I only have one backup setting, and this one backup
setting will get executed over and over again. If I want to change the settings, I can come to
Change Settings, choose a different destination, choose a different set of files that I want to have
backed up, and even choose to have or not to have system images created.
Back up Set Folders
If I open up Explorer and go to my E: drive, I will notice two items. There will be a backup file
and a Windows Image Backup folder. If I open up the Windows Image Backup folder and I open
up the folder for my client machine, there is a Backup Set folder. Within the Backup Set folder I
will see two VHD files. One is very small, which is the System Reserved partition (used for boot
files and BitLocker), and the other one is fairly large, which is my actual system image in the
form of a VHD file. I can actually go ahead right now and mount this VHD file if I wanted to.
I go to my Computer Management, go to Disk Management, attach a VHD. I can also choose
Read Only, or I can just click Okay and the image will be mounted for me. I can go ahead and,
by default, I'll get a drive letter and the autoplay will start up. If I open up Explorer and I open
up the F: disk, this is the exact content that I also see in my C: disk. I'm actually viewing an
image of my entire machine. I can put files in here, close the VHD, and if I reimage my computer
using this backup, I'll have those files there, as well. I'm going to detach the VHD. I'm going to
make sure I do not delete it.
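The same attach and detach can be done from the command line with diskpart. A sketch,
assuming an elevated prompt; the .vhd path below is a placeholder and should be replaced with
the actual file inside your Windows Image Backup folder.

    # Mount a backup VHD read-only using a temporary diskpart script.
    import os
    import subprocess
    import tempfile

    vhd_path = r"E:\WindowsImageBackup\Client1\Backup Set\image.vhd"   # placeholder path

    script = f'select vdisk file="{vhd_path}"\nattach vdisk readonly\n'
    with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
        f.write(script)
        script_path = f.name

    subprocess.run(["diskpart", "/s", script_path], check=True)
    os.remove(script_path)
    # To unmount later, run a second script with the same 'select vdisk' line
    # followed by 'detach vdisk'.
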
View Contents of a Backup File

I'm going to close Computer Management.


The other file that exists back at the root of the E: drive is the backup file. What Windows 7 does
is it saves everything in a sort of compressed file. If I right-click it and click Open, I can actually
view the contents of the backup job, go inside the backup set, and go to the backup files and open
up the files one by one and look at the content of the C: drive--look at the content that was backed
up piece by piece. It's actually a file-based backup. That's the beauty of it, because later when we
look at how to restore files, you'll see it's very easy to restore files without having to choose a
backup set, choose a backup location, and load files. You just search for the file you want and
you can restore it.
Summary

That's it for this demonstration. In this demonstration, we used the Windows Backup utility. We
can use the Windows Backup utility to back up files and folders on our computer and create a
complete system image of our machine in case we need a complete disaster recovery and a
complete bare metal system restore.
15.2.3 Recovering Files from Backups
Recovering Files from Backups in Windows 7

In this demonstration, we're going to restore and recover files. The first thing we want to do is go
to our Backup and Restore option in the Control Panel.
Restoring Files

And notice we already have a backup job completed and I have the choice of doing a Restore
My Files.
Searching for Files

Click Restore My Files. I can click Search, and I can search for a file that I'm looking to restore.
For example, anything that happens to start with an A.
And you'll notice, it automatically knows where the backup is saved and it will automatically
search for it. I can scroll to the files I'm looking for. I can, for example, search for something like
stars, and it will show me all the files that have stars in it. I can then choose the file I want to
restore and click Okay, and add it to my restore batch.
Browsing for Files

I can also choose specific files by browsing for it. And notice when I choose Browse, it takes me
directly back to the Windows backup, the backup of the C Drive or the backup of the Admin
Files or the Admin Libraries.
Browsing for Folders

And I can go in piece by piece and, for example, restore a favorite from that location, or I can
browse and restore folders.
Restore Locations

So when I finally do choose a file or folder I want to restore I can click Next. I can choose to
overwrite it to the original location. So if I chose five or six different files or folders, they're all
going to overwrite the original files.
Or I can choose to restore to a different location. If, for example, I specify a folder on the C:
drive such as C:\Restore, and I check the box to Restore Files to Their Original Subfolders, the
original subfolder trees and structures will be preserved, instead of all the files just getting
thrown into one single location.
Restore the File

I click Restore and my files have been restored. If I open an Explorer Window and go to C:, go
to Restore, there is my Stars file that I just restored.
Restore from Another Backup File

In addition to doing restorations directly, I can even choose restorations from another backup
file. Right now, the only backup file and device that's connected to me is the E: drive. But if I'd
made a backup to a network location or if I'd made a backup to a removable device, I could have
connected that removable device and done a restore from that removable device.

15.2.4 Workstation Backup Facts

On Windows 7 systems, the backup process is managed using the Backup and Restore console
in Control Panel. When managing traditional backups on Windows 7, be aware of the following
considerations:
Backup Considerations Description
Types of backup The Backup and Restore console supports two types of backups:
• A system image backup consists of an entire volume backed up to a .vhd file. It contains
everything on the system, including the operating system, installed programs, drivers, and user
data files.
• A file backup includes specified files and folders backed up to a compressed file. File
backups do not include system files, program files, encrypted files (including EFS-encrypted
files), files in the Recycle Bin, user profile settings, or temporary files.
When using the Backup and Restore console:
• The files to be backed up can be manually selected, or Windows can automatically choose
them.
• A system repair disc can be created.
• Backups can use the shadow copy feature to allow open files to be backed up.
• The initial backup backs up selected files or the entire system image.
• Subsequent backups include only the changes that have occurred since the last backup.
System Restore is available to back up system and application files. Keep in mind the following
about System Restore:
• Configure System Restore on the drive containing operating system files and any other
drives that contain critical applications.
• Restore points are created in one of three ways:
• Daily, at a specified time.
• Automatically before changes occur such as application installation, system updates,
unsigned driver installation, and restoring a computer.
• Manually by a system administrator or other authorized user.
Backup location Backups can be saved to:
• Secondary internal hard drives
• External hard drives
• Optical drives
• USB flash drives
• Network shares
• .vhd files
• Network Attached Storage (NAS) or Storage Area Network (SAN).
Backup files cannot be saved to:
• The same disk being backed up
• A system disk
• A Bitlocker-enabled volume
• A tape drive
System images cannot be saved to:
• Flash memory
• A Bitlocker-enabled volume
• A tape drive
• A DVD if the system image backup is scheduled
Requirements for backup Required permissions for backups include the following:
• Administrative privileges are needed to configure scheduled backups or to manually
initiate a backup.
• When performing a backup to a shared network folder, the credentials used for the backup
must have Full Control at the share and NTFS permissions of the destination folder.
Scheduling backups A system image backup cannot be scheduled, but a system image backup
can be included within a scheduled regular backup using the Backup and Restore console.
By default, file backups occur every Sunday at 7:00 pm. With scheduling backups:
• The schedule can be modified using Backup and Restore.
• Only one scheduled backup can be created with a single set of settings (multiple backup
jobs and schedules cannot be created).
• Backups can be configured to occur only once every day, week, or month. To perform the
backup more than once a day, week, or month, use the Task Scheduler to configure multiple tasks
or to execute the task more frequently, as shown in the sketch below. Scheduled tasks must run with administrative privileges.
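As one example of using Task Scheduler for this, the sketch below registers a daily task that runs
a command-line backup. It assumes an elevated prompt and the built-in schtasks and wbadmin
tools; the task name, target drive, and start time are placeholders.

    # Register a daily 11:00 pm backup of the C: volume to the E: drive, run as SYSTEM.
    import subprocess

    subprocess.run(
        ["schtasks", "/Create",
         "/TN", "NightlyDataBackup",                                   # placeholder task name
         "/TR", "wbadmin start backup -backupTarget:E: -include:C: -quiet",
         "/SC", "DAILY", "/ST", "23:00",
         "/RU", "SYSTEM"],
        check=True
    )
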
In Windows 8.x, File History is used to back up user account files. You access File History from
the System and Security option in Control Panel. Be aware that:
• File History does not back up the entire system. Only the library files, contacts, and
Internet favorites associated with the user account are backed up. A user can add files to a library
and have those files backed up using File History.
• Once every hour, File History creates a shadow copy of user account files. This creates a
snapshot of user account files at a particular point in time.
• Users can easily browse and restore previous versions of files backed up using File
History.
• File History backs up files in the background.
• File History is turned off by default.
• When File History is enabled, the location for storing the data must be specified. The best
practice is to use a drive other than the drive the user files are on.
• When File History is enabled, Windows 8 monitors users' libraries, desktop, contacts, and
Internet Explorer favorites. Once an hour, Windows checks to see if any of this data has changed
since the last check. If it has, Windows saves copies of the changed files to the configured
location.
• After a snapshot has been taken with File History, a previous version of a file can be
restored if a file gets lost or corrupted.
To manage File History, configure the following parameters:
• Save copies of files to set the frequency of the backup. The default is every hour. The
backup can be configured to occur more or less frequently as needed.
• Size of offline cache to specify the disk space allocated to file backup. The default is 5%.
Setting a small number results in older files being overwritten by newer files as the allocated disk
space is used.
• Keep saved versions to specify how long the backups are saved. The default is forever as
long as disk space is available.

15.2.5 Server Backup Facts

Windows Server Backup provides backup and recovery functions. It allows you to manage
backup and recovery from either the command line or the Windows Server Backup console snap-
in. When you create a backup, you need to specify the files, folders, or volumes that you want to
include. Windows Server Backup allows you to select from the following options:
Option Description
Full server Backs up all volumes if you want to be able to recover the full server. You can use
a full server backup to perform all types of recoveries, including system state and bare metal
recoveries. It is best practice to choose this option. Use the Full Server option in Windows Server
Backup wizard to select this type of backup.
Bare metal recovery Creates a backup for recovering the operating system (critical volumes
only). This option is a subset of a full server backup. Use the Custom option in Windows Server
Backup wizard to select this type of backup.
System state Backs up the system state. This option is a subset of a full server backup. Use the
Custom option in Windows Server Backup wizard to select this type of backup.
Individual volumes Backs up individual volumes. Use this option if you want to be able to
recover data from only those volumes. Use the Custom option in Windows Server Backup wizard
to select this type of backup.
Folders or files Backs up individual folders or files. Use this option if you want to be able
to recover only those items. Use the Custom option in Windows Server Backup wizard to select
this type of backup.
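For reference, a couple of these backup types can also be started from the command line with
wbadmin. A sketch, assuming Windows Server with the Windows Server Backup feature
installed and an elevated prompt; E: is a placeholder backup target.

    # Start a bare metal recovery backup (critical volumes only), then a
    # separate system state backup, both to the E: drive.
    import subprocess

    subprocess.run(
        ["wbadmin", "start", "backup", "-backupTarget:E:", "-allCritical", "-quiet"],
        check=True
    )

    subprocess.run(
        ["wbadmin", "start", "systemstatebackup", "-backupTarget:E:", "-quiet"],
        check=True
    )
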
Windows Server Backup can save backups to the following storage types:
Storage Type Description
Internal disk You can use the backups stored on internal disks to:
• Recover files, folders, applications, and volumes.
• Perform operating system (bare metal) recoveries if the backup used contains all the
critical volumes.
• Perform system state recoveries if the backup used contains the system state.
When you store scheduled backups on an internal disk, you have the option of dedicating that
disk for storage. When it is dedicated for backup storage, the disk will not be visible in Windows
Explorer. This is the recommended option.
You cannot store the backup on the same volume that is being backed up. Additionally, Windows
Server Backup does not support tape devices.
External disk Backups to external disks are much the same as backups stored on an internal disk.
• Backups to an external disk can be used to recover the full server, critical volumes, non-
critical volumes, individual files and folders, and applications and their data.
• You have the option of dedicating the disk for storage.
• Best practice dictates that you use USB 2.0 (or later) or IEEE 1394 disks with at least 2.5
times the capacity of the data you need to back up.
External USB flash drives are not supported.
Shared folder Backups to a shared folder are saved to a network share.
• Backups to a shared folder can be used to recover files, folders, applications, and full
volumes, or to perform system state or bare metal recoveries.
• Backups stored on a shared folder are not saved consecutively. Rather, each backup
operation overwrites the previous backup. If a backup operation fails, you may be left without a
backup. You can avoid this by storing your backups in subfolders of the shared folder.
DVD, other optical media, or removable media Backups can be stored on a DVD. However, this
backup media has more limitations than other media.
• You can use backups stored on optical or removable media to perform full volume or bare
metal recoveries.
• You cannot recover applications, individual files, or the system state from backups stored
on optical or removable media.
• Backups to DVDs are compressed, so it's likely that the backup size on the DVD is smaller
than the actual size of the volume.
• Backups can span multiple DVDs if necessary. When one DVD reaches capacity, the
system prompts you to insert the next DVD.

15.3 Remote Management

As you study this section, answer the following questions:


• What is the difference between Telnet and SSH?
• How does remote desktop software differ from terminal emulation software?
• How can you use a remote desktop solution for troubleshooting and technical support
within your organization?
• How does a remote desktop protocol minimize the data sent between the client and server
devices for a remote connection?
• What is device redirection and how does it add flexibility to remote desktop connections?
After finishing this section, you should be able to complete the following tasks:
• Establish a remote desktop connection to another computer.
• Configure remote desktop connection parameters.
This section covers the following Network Pro exam objective:
• Domain 7.0 Network Management
• Given a Windows system, enable and configure Remote Desktop to meet end user
requirements.

15.3.1 Remote Management

One of the challenges of managing a network is servicing devices in many different locations.
For example, you might have a main office in one location and a branch office in another location,
connected with a WAN link. What do you do if something goes wrong on a server in the branch
office if it is located hundreds of miles away?
Even if your network has only a single geographic location, you will likely run into situations
where a problem appears in the middle of the night or while you're away on vacation. Without
remote management you'd have to drive to the office to troubleshoot the problem.
Remote management lets you manage devices without being physically present at the console.
How Remote Management Works

A typical device, like a server or a router, has a console connected. For example, a server might
have a keyboard and a monitor. To manage the device, you use the keyboard to type in commands
and make changes. Some devices, such as routers, don't have keyboards or monitors. They
instead have a console port that connects to a laptop or some other device through a serial
connection. The laptop runs console management software that is then used for device
management.
In both of these cases, you must be physically at the device to manage it. With remote
management, however, you replace the console with a network connection. A network connection
to the device is configured, and then another device is connected through the network to the
device, allowing you to manage it as if you were physically present. It uses the network
connection for sending and receiving information from the device. This is a far preferable
option to flying to a remote office or coming in late at night to fix problems.
Terminal Emulation

There are several types of remote management solutions that you should be familiar with.
The first is called terminal emulation. Terminal emulation does just what its name implies. It
emulates the physical console terminal of the device remotely or through a network connection.
Terminal emulation software runs on a remote device (such as a PC or laptop) and connects to
the device, allowing you to issue commands and view information.
Telnet

For many years, administrators used the Telnet protocol for terminal emulation. However, Telnet
does not use encryption. Because of this, all information is transferred as clear text between the
device and the remote terminal emulation software, including administrative user names and
passwords used to authenticate to the device. A malicious individual sniffing network
communications could easily capture sensitive information. For this reason, Telnet is rarely used
today, even for terminal emulation sessions on the same local network segment. The risk of
compromise is too great.
SSH

Instead, most administrators use the Secure Shell (SSH) protocol for terminal emulation. SSH
uses very strong encryption, which makes it very difficult to view information being transferred
between devices. With SSH, you use an SSH utility to open a terminal emulation session with
the device remotely over the network. The SSH utility displays a text-based window where you
type commands that are sent to the device. Responses from the remote device are also displayed
in this window. Even devices that use a graphical user interface typically support terminal
emulation using SSH.
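As an example of scripted terminal emulation over SSH, here is a minimal sketch. It assumes the
third-party paramiko package is installed (pip install paramiko); the host address, credentials,
and the command being run are placeholders for your own device.

    # Open an SSH session to a device and run one command.
    import paramiko

    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())   # fine for a lab, not for production
    client.connect("192.168.1.1", username="admin", password="secret")   # placeholder credentials

    stdin, stdout, stderr = client.exec_command("show running-config")   # placeholder device command
    print(stdout.read().decode())
    client.close()
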
Remote Desktop Software

A second type of remote management solution is remote desktop software. With this solution,
you see the entire graphical desktop of the device, not just a text-based window.
A remote desktop connection has three components. The first is the target. The target is a network
device, server, or workstation that you want to manage remotely. To do this, it must run some
form of remote desktop server software that allows connections to be established with it. If the
target has a host-based firewall running, then the appropriate ports need to be opened in it to
allow remote desktop sessions to be established.
The second component is the remote desktop client. This is a network device (such as a
workstation, laptop, or tablet) that runs the remote desktop client software. The client device has
a network connection to the remote desktop target system.
The third required component is a remote desktop protocol that specifies how information is
passed between the target and the client. You use remote desktop software on the client to
establish a connection with the target. The remote desktop protocol transfers the desktop of the
target device to the client and displays it there. The display includes all open windows, start
menus, and anything else that was present on the target device.
On the client, you use the keyboard and the mouse to send commands to the target using the
remote desktop protocol. The target receives these actions and uses its processor and memory to
execute those commands, such as closing a window, entering text in a file, or launching an
application. The actions executed on the target usually result in a change to the graphical
information displayed on the desktop. As the screen is redrawn, the changes are sent by the
remote desktop protocol back to the client system where that updated information is displayed.
The remote desktop protocol is also responsible for optimizing the data that's passed between the
target and the client. This is particularly useful in situations where the remote connection has
been established through a slow Internet connection. The protocol optimizes data transmissions
such that only the necessary information is sent. For example, if the remote desktop screen is
updated with new information, the protocol will preserve graphical elements that have not
changed and transfer only the elements that have changed. This dramatically increases the
speed of the session. In addition, the protocol may employ compression algorithms to further
reduce the amount of data that is sent between the target and the client.
Many remote desktop protocols are also capable of resource redirection. For example, you could
create a print job on the remote target and have it redirected to a printer connected to the client
system, which could be miles away from the target. Audio can also be redirected to the client
from the target. Some remote desktop protocols also allow storage devices on the client to be
accessed from the target. This means you could create a file on the target in a remote desktop
session and save it on the client's hard disk.
You can also use remote desktop solutions for remote assistance. Users who are experiencing
problems with their systems can request that you connect remotely into their computer to view
the symptoms or even take control of their desktop to troubleshoot problems.
Remote Desktop Protocols

There are several different protocols that are commonly used for remote desktop connections:
Virtual Network Computing (VNC) was one of the first remote desktop solutions. It was
originally designed for UNIX and is an open source solution. VNC software is currently available
for Linux, Windows, and Mac OS.
The Independent Computing Architecture (ICA) by Citrix is another option. Originally it was
developed only for Windows, although now there are versions for other operating systems as
well.
For Windows, the Remote Desktop Protocol (RDP) from Microsoft is widely used. Versions for
Mac OS and Linux are also available.
When selecting a remote desktop solution, you should ask several key questions:
Will it run on the target host's operating system?
Does it provide client software that will run on my client system's operating system?
Can I use a web browser to connect to the target from the client?
Summary

That's it for this lesson. We looked at remote management solutions that allow you to access and
manage a remote device (router, switch, server, or workstation) as if you were physically present.
This is accomplished using a network connection. Terminal emulation provides text-based
remote access and is commonly used to manage routers, switches, and Linux servers. Remote
desktop protocols are used to provide access to systems that use a graphical interface, such as
Windows servers and workstations.
15.3.2 Using Remote Desktop

In this demonstration, we're going to practice enabling remote desktop. In this scenario, we have
this Windows Server system that you see right here. It's located in a server room on a different
floor from my workstation. Whenever there's a problem with this server, I have to climb 20 flights
of stairs to go up and try to figure out what the issue is.
I'm getting tired of doing that, so what I want to do here is enable remote desktop on this server
so I can get to it from my workstation down in my nice comfy office. Now, in order to do this,
the first thing we need to do is turn remote desktop on, because as you can see here in Server
Manager, remote desktop is disabled by default. I'll click Disabled, and then come over here
under Remote Desktop and mark "Allow remote connections to this computer."
I should add before we go any further that I am turning remote desktop on for a server system,
but you can do the same thing for any Windows workstation system. It works just as well and
in basically the same way. I have now enabled remote desktop on this computer. We have an option
down here to allow connections only from computers running remote desktop with network level
authentication which is recommended. I'm going to leave that option turned on.
If, for some weird reason, you're trying to connect to the server from a really old Windows
workstation like XP, you'll probably have to turn that option off in order to establish the
connection. We're going to connect to it from Windows 7, so there is no issue, but there is one
thing we have to do before we can go any further, and that is grant my user account in the
Windows domain, access to the server through a remote desktop connection.
To do this, I need to come down to Select Users, and it tells us down here that if you're a
member of the Administrators group, or if you are the Administrator user, which I am currently
logged in to this system as, then you are automatically granted remote desktop access to this
system. My user account is not a member of the Administrators group; it's a member of the Users
group, so I need to manually add my user account as an authorized user.
I'm using the student user account in the domain to log in to my workstation. We've added student
as an authorized remote desktop user. What that will do when I hit okay is add student to a special
group called the Remote Desktop Users Group. We'll click okay. Click okay again to enable the
configuration.
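If you prefer to script these two steps instead of using the GUI, the sketch below is one possible approach, assuming Python is available on the server and you run it as an administrator. The registry value shown is the switch behind "Allow remote connections to this computer," and the domain\user value is a placeholder based on the account used in this demo; firewall rules are a separate concern.

    import subprocess
    import winreg

    # Step 1: allow Remote Desktop connections (0 = allow, 1 = deny).
    key_path = r"SYSTEM\CurrentControlSet\Control\Terminal Server"
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, key_path, 0, winreg.KEY_SET_VALUE) as key:
        winreg.SetValueEx(key, "fDenyTSConnections", 0, winreg.REG_DWORD, 0)

    # Step 2: add the user to the local Remote Desktop Users group (placeholder account name).
    subprocess.run(
        ["net", "localgroup", "Remote Desktop Users", r"CORPNET\student", "/add"],
        check=True,
    )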
Now, if I come over here to tools and go to active directory users and computers, and then under
the built-in container, we go to the Remote Desktop Users Group. Double click on it. Look at
members. We should see the TestOut student, the student user account is now a member of the
Remote Desktop Users Group. At this point, that user is allowed to establish a remote desktop
connection with this system.
Let's go ahead and close this window and let's switch over to a Windows workstation and
establish a desktop connection with the server. Let's go down to the start menu and click on
remote desktop connection. Let's click on the options drop-down list. The first thing we need to
specify is the IP address or DNS name of the remote system that we want to establish a remote
desktop connection with. In this case, it is corpserver.corpnet.com, and we're going to log in as
the student user.
Now, there's an option down here to allow me to save credentials. That way, when I start this
connection up, I won't have to log in every single time. Tempting as that is, it's a security risk. If you were
to run down to the break room and leave your desktop system here logged in, someone could sit
down at your computer system and access the server through the remote desktop connection. Not
cool, so we're going to leave that option turned off.
Let's go over here to display. Here, we can specify how big we want the remote desktop window
to be by default. Here, it's set to full screen, so it will fill up the entire desktop. We're working with
a very small desktop here. If you have a large desktop with a large monitor, you can actually put
it within just a window of the desktop. We don't really have that option here, because we're
working with very limited real estate, so I'm going to just run it full screen.
Down here, you can choose the color depth of the remote session. This will impact the overall
speed of the connection. Basically, the higher the color quality, the more color depth you use, the
slower the connection is going to be, because more data has to be transferred.
If you're dealing with a LAN connection, like in this scenario where I'm connecting to a server that's just
on a different floor, no problem, you can use highest quality. If, however, you're going through
the internet, through a VPN connection or heaven forbid maybe a dial-up connection, then you
might have to crank this color depth down a little bit so that it doesn't require so much data to
display the desktop.
Let's go over to local resources. This is kind of a cool tab, because what we can do is move
information back and forth between my local system here and the remote desktop session that
I'm connected to. For example, if I want to hear the sounds that are being generated on the remote
system, I can click this button right here and turn on this option play on this computer and it is
turned on by default.
For example, if we were to access a streaming music website on the remote system, we would
hear it on this local system here. Likewise down here, we can allow access to local devices and
resources on this system. If I say I have a folder on this system and I want to be able to transfer
data between this system and the remote desktop system, I could come down here and say,
"Please share local disk F."
When I do that, I will see a drive in the remote system that actually points to the F drive here on
this system. We're not going to actually do that today. I'm going to turn it off, but be aware that
that is an option. It's really cool because it allows you to, say, transfer configuration files back
and forth or other information back and forth between the local system and the remote desktop
system.
You can also enable printing through the remote desktop connection. If I have, say, a printer that's
defined on this local system here that I can send print jobs to, I leave that button marked. When
I go to my remote desktop session, I will see that the printer that I have connected to this system
will be available to the remote system through the remote desktop session.
I can click file, print, in the remote system and the print job will come out on my local printer.
That's, again, very useful. Another option that I really like is the clipboard option. That basically
allows me to copy and paste text back and forth between the remote system and the local system.
If you go over to programs, you can specify that a particular program startup whenever you make
the remote desktop connection. I never use that tab. This is the one that I do use a lot though, and
that is where you configure the experience. Basically, remote desktop requires quite a bit of
network bandwidth. Not a lot, but a significant amount.
If you have a low throughput connection, it can really make the remote desktop experience bad.
You need to customize it to match the type of connection you're using to access the remote
system. By default, notice that it's set to low speed broadband, between 256 kbps to 2 mbps.
If you're going through a slow VPN connection, this is the option you would use. Notice that if you
do that, the desktop background is turned off, font smoothing is turned off, desktop composition
is turned off. Menu and window animations are turned off. Basically, this option turns off
everything that isn't absolutely needed.
In our case, we don't need to do that, because we're going through a LAN connection. I can come
down here to LAN, 10mbps or higher and notice when I do, all these cool little features are turned
back on so that the remote desktop will pretty much look like a regular desktop session as if I
were sitting right in front of the computer.
At this point, we've got everything configured for this connection. Let's go ahead and connect. I
do need to log in as the student user. Now, at this point, I need to accept the certificate from the
remote server system, and the reason I'm doing this is because I'm using a self-signed certificate on
the server.
If I were using a certificate from a trusted third party, then I wouldn't see this error, but because I'm
using a self-signed certificate, I do have to say, "Yes, please accept the certificate," and I'll tell it to
not worry about it in the future because I trust the remote system.
Okay, I am now logged in to the desktop of my corp server system. If we look up at the top of
the window, notice that there is a bar up here that lets me know that I'm not looking at my local
desktop, I'm actually looking at the desktop of the remote system. If for some reason I need to
get back to my local desktop, I can click this button right here, restore down, or even minimize
if I want it to.
If I click restore down, it puts it into a window and now I can see the task bar and desktop of my
local workstation. Let's go ahead and go back into full screen mode. Notice that I can perform
tasks now on the remote server system as if I were sitting in front of it which is really nice if I
don't want to climb seven flights of stairs to get to the server room.
Here are all the configuration settings we were just looking at, on the desktop of the server itself. I
would go through and do whatever configuration task I would need to do on this server. When
I'm done, I can disconnect from the session. To do this, I would come down here to the start
button on the Windows Server, click on shut down or sign out, and click on either disconnect or
sign out. Either one will disconnect me from the remote desktop session.
I'm going to click on sign out so I sign all the way out of the remote session, and the session is
closed. That's it for this demonstration. In this demo, we practiced using remote desktop. We first
enabled remote desktop on the Windows Server system and then we configured the remote
desktop connection from a Windows workstation to that server.
15.3.4 Remote Management Facts

There are typically two types of solutions for providing remote management of network devices.
The following table describes them in detail:
Terminal emulation

A terminal is a monitor and keyboard attached to a device (e.g., mainframe,
server, or router) through a serial or special console port. The terminal displays a text-based
interface, and users interact with the device by typing commands. A terminal emulation utility is
a program that allows a console connection through the network. The terminal emulation
software communicates with the device over the network and displays the text-based console
screen. There are two common terminal emulation programs used.
• Telnet opens a plaintext, unsecure connection. Telnet uses TCP port 23.
• SSH provides the same capabilities as Telnet, but encrypts data. SSH uses TCP port 22.
You should never use Telnet for any type of network communication. Telnet sends all data in
plaintext—including usernames and passwords—making it extremely easy for someone to
intercept and compromise transmitted data.
Remote desktop

Instead of showing a simple command line interface, a remote desktop
utility displays the graphical user interface of a remote device. Remote desktop solutions are used
to remotely manage a computer or to allow support personnel to view and troubleshoot a remote
user's system. Remote desktop software typically has the following three components:
• The server software, which runs on the target desktop
• The client (or viewer) software, which runs on a remote system (when you run the client
software, you see the desktop of the server system)
• The remote desktop protocol, which is responsible for communication between the server
and the client
A remote desktop session then works as follows:
• The graphical desktop on the server is sent to the client.
• Keystrokes and mouse movements on the client are sent to the server.
• The server executes the actions performed on the client, which modifies data on the server
and results in changes to the desktop.
• The desktop changes are transferred and displayed on the client.
The remote desktop protocol is optimized to minimize the amount of traffic generated by this
exchange.
There are multiple protocols that can be used for remote desktop connections.
• Virtual Network Computing (VNC) was originally developed for UNIX. Applications
using VNC include RealVNC, TightVNC, UltraVNC, and Vine Server.
• Independent Computing Architecture (ICA) is the protocol used by Citrix products
(WinFrame and MetaFrame/XenApp).
• The Remote Desktop Protocol (RDP) is the protocol developed by Microsoft and used in
Microsoft's Remote Desktop Services and Remote Assistance solutions. Aqua Connect has
licensed RDP and created a version for Mac OS X as a server. RDP uses TCP and UDP port 3389.
Most remote desktop protocols support the following features:
• Client software for a variety of operating systems.
• Server software for a limited number of operating systems.
• The ability to show a remote desktop in a browser without installing client software.
• Redirecting printing, sound, or storage from the server to devices connected to the client.
In addition to these solutions, most operating systems or network services provide management
tools that are capable of contacting a system remotely.

15.6 Monitoring
As you study this section, answer the following questions:
• Why should you enable logging only for specific events you want to track?
• After configuring system logging, what else must you do to take advantage of the benefits
of logging?
• How does a load tester differ from a throughput tester?
• What must you do to configure a packet sniffer to be able to see all frames on a subnet?
After finishing this section, you should be able to complete the following tasks:
• View events recorded in system and application logs.
• Use a packet sniffer to monitor network traffic.

15.6.1 Network Monitoring

The goal of monitoring is to keep track of the conditions that are occurring on the network and
then to identify situations that might signal potential problems. This allows you to identify the
source of those problems and then identify areas of your network that might need to be fixed,
upgraded, or changed in some way.
System Logs

Let's take a look at several tools that you can use to monitor your network.
One of the most important tools available to you, believe it or not, is your system logs. Log
entries are usually generated by the operating system and any applications that are running on a
system. They're usually saved on the local computer. Log entries are generated based on the
events that occur on the system. These events may be triggered by a wide variety of things. For
example, somebody changing the system time could create a log entry. Maybe packets that come
through a router would generate a log entry. Somebody failing to log in over and over and over
would generate a log entry and so on.
By default, most devices as well as the operating systems running on your workstations and
servers will perform some kind of limited logging. However, to gather extensive information,
you need to manually enable and configure the logging process. Be aware though that extensive
logging can consume a lot of system resources. Limit your logging to just the events that you
need to track. Alternatively, you could enable really extensive logging but just for a limited
amount of time while you're trying to troubleshoot a particular problem. Just remember to turn
that extensive logging off when you're done. Otherwise, your system performance is going to
take a hit.
The logs are usually created and stored on each computer and device in the network. In order for
these logs to be useful, you need to be able to analyze them to see what's been happening. You
can do this manually if you want but many times, the key information that you need is going to
be buried and it's going to be really difficult to find. A better approach is to use a log file analyzer
to review your logs for you and then identify key pieces of information.
Centralized Logging

You should also consider implementing centralized logging. This is where the log files from
all of your systems as well as your network devices are automatically sent to and stored on a
central log server on your network. This has a lot of advantages. For example, it provides you
with instant access to all of the log files within your entire network in one single location. It
also allows a log file analyzer to generate a system-wide analysis of what is happening on
your network.
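As a small illustration of how a host might ship its own application log entries to a central log server, here is a hedged Python sketch using the standard library's syslog handler. The server name is a placeholder, and in a Windows environment you would more likely rely on an agent or Windows Event Forwarding; treat this as a sketch of the idea rather than a recommended design.

    import logging
    import logging.handlers

    logger = logging.getLogger("app")
    logger.setLevel(logging.INFO)

    # Classic syslog over UDP port 514; TCP or TLS transport is preferable in practice.
    handler = logging.handlers.SysLogHandler(address=("logserver.example.com", 514))
    handler.setFormatter(logging.Formatter("%(name)s: %(levelname)s %(message)s"))
    logger.addHandler(handler)

    logger.warning("Disk utilization above 90 percent on volume C:")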
Load Tester

Another tool that you can use is a load tester. A load tester simulates a load on a target system.
For example, your organization may have a web server that maintains a website. The load tester
can be used to simulate multiple clients connecting to that web server all at the same time. The
load tester helps you identify how many client connections that web server can support before
it's going to start to choke and experience problems. You can use this information to determine
at what point your normal day to day network traffic will exceed the threshold that you identified
with the load tester.
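To make the idea concrete, here is a very rough load-tester sketch in Python: it simulates a number of simultaneous clients hitting a web server and reports how many requests succeed. The URL and client count are placeholders, and a real load-testing tool adds ramp-up schedules, think time, and detailed reporting.

    import concurrent.futures
    import urllib.request

    URL = "http://10.0.0.100/"   # placeholder target web server
    CLIENTS = 50                 # number of simulated simultaneous clients

    def fetch(_):
        # Return True if the server answered this simulated client successfully.
        try:
            with urllib.request.urlopen(URL, timeout=5) as resp:
                return resp.status == 200
        except OSError:
            return False

    with concurrent.futures.ThreadPoolExecutor(max_workers=CLIENTS) as pool:
        results = list(pool.map(fetch, range(CLIENTS)))

    print(f"{sum(results)} of {CLIENTS} simulated clients received a successful response")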
Throughput Tester

Another tool that you can use is called a throughput tester which estimates the amount of traffic
that can be sent through a network. For example, let's suppose that we have two devices that are
connected through a WAN link or through the internet. The throughput tester sends a known
amount of data through the network and then measures the amount of time it took for that
data to be received.
Using this information, the throughput tester identifies your actual bandwidth which may not be
the same as the stated bandwidth that you receive from your service provider or maybe the rated
bandwidth from a hardware vendor. For example, suppose you have a network that uses gigabit
Ethernet equipment. You can use a throughput tester to identify what the actual bandwidth
between those two devices on the network really is. If it's less than the rated bandwidth, then you
can take steps to troubleshoot the problem and try to figure out why you're getting less bandwidth
than what you expected.
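A minimal throughput test can be sketched the same way: push a known amount of data across a TCP connection and time it. The port, payload size, and addresses below are placeholders, and dedicated tools such as iperf measure this far more accurately; run receiver() on one host and sender() on the other.

    import socket
    import time

    PORT = 5001
    PAYLOAD = b"x" * (10 * 1024 * 1024)   # 10 MB of test data

    def receiver():
        # Accept one connection and read data until the sender closes it.
        with socket.create_server(("", PORT)) as srv:
            conn, _ = srv.accept()
            with conn:
                while conn.recv(65536):
                    pass

    def sender(host):
        # Time how long it takes to push the payload, then estimate megabits per second.
        start = time.perf_counter()
        with socket.create_connection((host, PORT)) as s:
            s.sendall(PAYLOAD)
        elapsed = time.perf_counter() - start
        mbps = (len(PAYLOAD) * 8) / elapsed / 1_000_000
        print(f"Approximate throughput: {mbps:.1f} Mbps")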
Packet Sniffer

A final monitoring tool that you can use is called a packet sniffer. A packet sniffer is really useful.
It's a device that you connect to your network. It captures data that's being transmitted on the
network and then saves it for later analysis. For example, you may have a device sitting on your
network and you want to determine what type of traffic is currently being directed towards that
device. Maybe it's a network firewall. You can use a packet sniffer to look at every single frame
that's being sent to that device.
From a security standpoint, you might do this to identify potential attacks that are being directed
at that specific device. You can also use a packet sniffer to identify traffic that should or should
not exist on your network. For example, you might use the packet sniffer to identify peer-to-peer
file sharing traffic on your network. If your acceptable use policy specifies that this type of traffic
is not allowed, you can use a packet sniffer to try to figure out who's using it. Then, you can take
steps to eliminate it from your network.
When you're using a packet sniffer though, there's a couple of things that you have to be aware
of. First, let's assume here that we have a network that uses a hub to connect all of our network
devices together. If we install packet sniffing software on one of these devices, by default, it will
only see the frames that are addressed to that specific device. By default, the network interface
card will only accept frames whose destination MAC address matches its own MAC address.
Frames that are being sent to the other devices on the network will not be processed by this
device.
If you're using a packet sniffer, you have to fix this. You need to enable promiscuous mode on
your network interface card. In promiscuous mode, the network interface
will capture all the frames that it sees on the wire, not just the frames that are addressed to it. In
the case of a hub, a frame sent from this device to this other device would be repeated to all the
other ports on the hub. If the interface on the sniffer system is set to promiscuous mode, then that
frame will be seen on the wire. It will be captured and will be analyzed by the packet sniffing
software.
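As a quick illustration, the sketch below captures a handful of frames using the third-party scapy library (an assumption; Wireshark or tcpdump are the usual tools). scapy opens the interface in promiscuous mode by default, so frames addressed to other hosts are captured as well; the interface name is a placeholder.

    from scapy.all import sniff

    def show(frame):
        # summary() prints a one-line description: source, destination, and protocol.
        print(frame.summary())

    # Capture 20 frames from the named interface without storing them in memory.
    sniff(iface="eth0", prn=show, count=20, store=False)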
However, if this network uses a switch instead of a hub which it probably does today, then the
only traffic that's going to be sent to the sniffer device is traffic that is specifically addressed to
that device. In this case, a frame going from this host to this other host would never be sent
through this port to the sniffer device. The packet sniffing software will never see the frames that
go between these other devices.
Port Mirroring

What do you do in this situation? You can solve it by using a feature called port mirroring on
your switch. With port mirroring, you designate one of the ports on the switch as a mirror port.
Any traffic that's being sent between any other ports on the switch will be automatically
forwarded to the mirror port as well. For example, a frame sent from this device to this device
would also be sent to the mirror port here where we have our packet sniffing system connected.
Summary

That's it for this lesson. In this lesson, we discussed the importance of network monitoring. By
performing regular network monitoring, you can develop an accurate picture of the overall health
of your network. Regular monitoring helps to identify situations that can lead to problems and
gives you the information that you need in order to make the necessary changes and
improvements in your network to prevent them. To monitor your network, you can monitor your
log files, you can use a load tester, you can use a throughput tester or you can use a packet sniffer.
15.6.2 Protocol Analyzers

A protocol analyzer is hardware or software for monitoring and analyzing digital data passing
over a network. It can be used for all kinds of functions, including logging, sniffing, intercepting,
analyzing performance, monitoring, and troubleshooting.
Analyzer Names

Be aware that protocol analyzers have many different names that are all very similar. A protocol
analyzer can be called a packet analyzer, a network analyzer, a network sniffer, or a network
scanner, and they all mean roughly the same thing.
NIC in Promiscuous Mode

When using a protocol analyzer, put the NIC in promiscuous mode, which allows it to monitor
all traffic on the network. In non-promiscuous mode, only traffic going to and from the protocol
analyzer will be captured. So if you want to capture traffic out there on the network, you have to
enable promiscuous mode.
Uses for Protocol Analyzers

There are many uses for protocol analyzers. We're going to list off various examples. You can
monitor all network traffic as it traverses the network. You can check for specific protocols like
SMTP, DNS, POP3, and ICMP packets. You can detect employees using unauthorized Web sites
by scanning URLs in packets. You can find open ports on the network; however, this would be
more of a network scanner or port scanner feature of a protocol analyzer. You can review network
traffic for clear text passwords. You can analyze packet headers and determine which flags are
set in the TCP handshake. You can detect malformed or fragmented packets. You can fingerprint
systems, that is, determine what operating system they're running based on how they respond to
different types of network traffic. In general, you can constantly observe data traveling over the
network.
Monitoring network traffic with a protocol analyzer is sometimes called passive interception.
This is where you're listening to the network but taking no actions.
With many protocol analyzer tools, you can do what's called active interception and perform
attacks. Active interception can include placing a computer system between the sender and
receiver to capture information, spoofing attacks, man-in-the-middle attacks, replay attacks, and
TCP/IP session hijacking.
And, if you're having trouble monitoring traffic through a switch, you can do a switch attack
called MAC flooding, which basically turns a switch into a hub and allows you to see all the
traffic. Finally, be aware of some common protocol analyzers: Wireshark, Ethereal, dsniff,
Ettercap, tcpdump, and Microsoft Network Monitor.
Summary

So, these are some of the ways you can use protocol analyzers to analyze data going across your
network. Be aware that protocol analyzers go by many names. Be sure to place a NIC in
promiscuous mode, and be aware of the difference between active and passive interception.
15.6.3 Viewing Event Logs

In this demonstration, we're going to take a look at viewing event viewer logs. I'm on a server
and I'll go up under 'Tools' and open up 'Event Viewer.' There are different ways to get to event
viewer. If you're on a client, you might want to go through Control Panel. If you like to use the
command prompt or the Run dialog, you can type in eventvwr.msc to open Event Viewer.
Let's open up the Windows logs and take a look.
There are five default logs that come with Windows.
Application Log

The application log shows events that are put in here by applications. We have some events from
our certification authority and other information in here, but all these events come from
applications.
System Log

The system log contains all events coming from the operating system. It looks like there was a
problem with credentials and some other warnings in here.
Security Log

Then we have our security log which provides our auditing results. Any auditing that's set up is
going to go into the security log.
Setup Log

The setup log contains only information on events that happen during setup. Other than that, there
wouldn't be any information in there beyond that.
Forwarded Events Log

The forwarded events log is used when you use event subscriptions. With subscriptions, I can either
pull or push events over from other computers to this one, and they'll show up in the
forwarded events log.
When I go into the logs, I see lots of information. If I want more information on a particular event,
I can double-click that event and I'll get all of the details associated with the event.
If we only want to show particular events, we can filter the log. I can right-click and I can filter
the current log for critical errors and warnings and when I click 'OK,' I'll just see those events.
All the information events go away.
Maybe I'm not even interested in warnings, so I'll go into the filter and say that I just want critical
and errors, and then I'll just see those. If you come into a log and there are no events in the log, I
would suspect that a filter is on, because these logs almost always have events.
If I want to turn the filter off, I can clear the filter and it will go away. Sometimes you might say,
'Hey, I come in here all the time and I always want to filter for warnings and things like that, it's
such a pain in the neck to keep doing that.' To resolve this, we can use custom views.
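Incidentally, the same critical-and-error filter can be expressed from the command line as an XPath query using the built-in wevtutil tool. The sketch below simply wraps that command in Python; levels 1 and 2 correspond to Critical and Error in the Windows event schema, and the query shown is an illustration rather than the exact filter built in this demo.

    import subprocess

    query = "*[System[(Level=1 or Level=2)]]"
    result = subprocess.run(
        ["wevtutil", "qe", "System", f"/q:{query}", "/f:text", "/c:5", "/rd:true"],
        capture_output=True, text=True, check=True,
    )
    print(result.stdout)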
Custom View

A custom view is like a saved filter, and there are custom views that already come in Event Viewer
if you're a domain controller or you're running IIS, but you can always create your own even if
it's a client. To do this, we'll go into 'Custom View' and create a 'Custom View.' Let's say we're
looking for critical and error events from particular logs. Let's say I want them from the system log. If I
want I can specify particular event IDs that I'm interested in, I could also specify keywords, users,
and computers, but in this case I'm just going to take critical and error from the system log. I'll
click 'OK,' it prompts me to save my custom view, so I'll say system log critical and error, click
'OK,' and then I can see just that information. If I close out of event viewer and come back in a
hundred times, that custom view is going to be saved there so that I can just click on it and see
what I need.
The other thing that we'll see in some of the newer operating systems is Applications and Services
Logs, where some of the applications and services may have their own particular logs. We can
see quite a number of logs here under Microsoft, different logs for audio, authentication, user
interface, but the main logs that we work with are the application log for application events, the
system log for system events, and the security log for auditing.
Summary

In this demonstration, we looked at the logs in Event Viewer so we could see what's going on. If
we want to narrow down what we see, we can save a filter, and call it a custom view.
15.6.4 Using a Packet Sniffer

In this demonstration, we're going to look at capturing packets off of the network using a packet
sniffer. Now this might be done for a variety of different reasons. It might be done to troubleshoot
problems on the network. It might be done to optimize the network.
It's also commonly done to troubleshoot security issues on the network.
Installing a Packet Sniffer

Now in order to capture packets off the network you need to have a network sniffer installed on
your management system and I have that done right here. We have the Wireshark package
installed. Wireshark is very popular, very widely used in the industry. One of the nice things
about it is the fact that it's free. You can download it off of the internet and install it on whatever
system you need to. I'm going to go ahead and launch Wireshark.
Now, before I can capture packets off of the network, I need to come down here and specify
which network interface I want to capture packets on.
Specifying a Network Interface

Under capture I'll click on 'Interface list.' When I do a list of all the network interfaces that have
been installed in the system is displayed and I can come over here and mark which ones I want
to capture packets on. Now, as you can see, I only have one network interface in this system so
my choices are limited. We'll just use this one right here. Now once that's marked, I can go ahead
and click start. When I do it will go out and grab a copy of every single frame that it sees on the
network segment.
Now in order for this to work though, a couple of very important things have to happen first,
because, remember by default, this network interface is only going to accept and process frames
whose destination MAC address matches the MAC address of this network interface. If they
don't match then by default this network interface is going to drop the frame. We don't want that.
We want to capture all the frames on the network, not just those that are directly addressed to this
interface.
To do this we need to actually come down here and click 'Close.' Then we need to go over here
and edit. Then go to preferences and go under capture.
Capturing in Promiscuous Mode

Here we need to make sure that this option is turned on, 'Capture packets in promiscuous mode
on all network cards.' When a network board is running in promiscuous mode it will capture all
of the frames that it sees on the Ethernet network, not just those that are directly addressed to it.
Now in order for this to work on Windows workstations, we usually have to install a particular
service on that workstation that enables promiscuous mode. A very commonly used one is called
WinPcap. In fact, WinPcap gets installed with Wireshark. That's the first thing that has to happen.
We have to have the network interface running in promiscuous mode.
Connecting to a Mirror Port

The second thing that has to happen is that this interface right here needs to be connected to a
mirror port on your switch because, remember by default, a switch will memorize which MAC
addresses are connected to which port. It will only forward frames to that port if the destination
MAC address in the frame matches the MAC address of the host that's connected to
that port.
By default the switch is only going to let this interface see the frames that are directly addressed
to it. Again, that's not what we want. We want this interface to be exposed to all of the frames
being transmitted on the network. What we need to do is configure a port on the switch as a
mirror port. When we do, the switch will put a copy of all the traffic coming through that switch
on to that one port so we can see everything. Then we need to make sure that this interface is
connected to that port.
We already have done that in this situation, so we're ready to go.
Capturing Packets

Since this is all configured, let's go ahead and start capturing packets. I'll hit start right here.
When I do we can see a list of all the various frames that are being transmitted on the network
segment. You can see a progress indicator down here telling us what all is happening. Right now
we've captured almost 5,000 packets in that very short amount of time.
Now it's important to note that even though this says packets down here, we're not actually
capturing packets per se. What we are doing is capturing frames that contain packets. We're
capturing the entire Ethernet frame. If I were to scroll down here, let's grab one of these frames
that's being transmitted on the network. Let's grab this one right here. You can see here's our
framing information right here. We can see things such as the frame number, the frame length
and so on. We can also see our Ethernet information including the source and destination MAC
addresses of that frame. If we come down here and expand internet protocol version four, we can
actually see information about the packet within that frame.
For example, we can see the source IP address as well as the destination IP address. This packet
was originated by a host that had an IP address of 10.0.0.88 and it's addressed to a system that
has an IP address of 10.0.0.100. Now, as you can see, there's a lot of traffic being passed back
and forth between these two hosts. There is other traffic going on between other hosts on the
network segment but not nearly as much as what's going on between those two particular systems.
These two systems are transferring a lot of data back and forth between each other.
In addition, if we close these two up, we can also see transport layer information. We can see that
this packet used TCP, the transmission control protocol, at the transport layer. We can see the
source port of the packet, 49195, and the destination port, 902. We can also see the TCP sequencing
and acknowledgement information right here as well.
I might point out before we go any farther that the actual frame itself, the entire frame, is being
displayed right down here in the bottom part of the window. Whenever I click a piece of
information up here, the corresponding information within the frame itself, which you'll notice
is displayed in hexadecimal notation, is highlighted. If, for example, I click Internet Protocol
version 4 right here, the IP information is highlighted within the frame. If I switch
over and click the Transmission Control Protocol part of the frame, that part is highlighted down here as
well.
Filtering Packets

Let's go ahead and collapse this. As you can see up here, many, many frames and their
accompanying packets are being captured some of which we're interested in, but a lot of this
information we're probably not interested in. In order to narrow down the results to just the
information that we're interested in, we can come up here and use the filter field. The filter field
is pretty powerful. Using the filter field we can specify exactly what type of information we want
to see.
For example, let's suppose we're having trouble with DHCP on this network and we need to see
if the DHCP process is working properly. To do this I can type 'bootp' up here in the filter field
and I'll click apply, and it will look through the results that we've already captured trying to find
any DHCP related information. It found a little bit. What we see here is the second half of a
DHCP transaction. We have the DHCP inform message being sent followed by the DHCP
request.
If we want to see the whole thing, what we can do is come down here. We'll leave the filter
running. Notice we're still capturing packets down here but we're only displaying those that are
related to DHCP. All the other packets that we're not interested in right now are not being shown.
Let's go ahead and release and then renew our DHCP lease on this workstation. When we do we
should see the various messages displayed as they are generated.
Let's do an 'ipconfig /release'. Notice when I did that, a DHCP release message was captured
right here from this workstation. We had an IP address of 10.0.0.117 and you can see here that
our DHCP server has an IP address of 10.0.0.254 and we just told the DHCP server that we want
to release the lease on the IP address that we had.
Let's do an 'ipconfig /renew' now. We see the various steps in the DHCP process. We have our
discover message right here where this workstation's going out and trying to locate a DHCP
server. We got an offer back from the DHCP server. Our workstation said, "Yeah, I like that
address. Let's go ahead and do it." Then the DHCP server responds, "Okay, here you go." Now
it gave us the same IP address that we had before. That's very common. Let's go ahead and clear
our filter at this point. When we do that we should start seeing all of the information that's being
passed on our network segment.
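The same display-filter idea can be scripted. The hedged sketch below uses the third-party pyshark library, which wraps tshark (both assumed to be installed), to capture only DHCP traffic with the bootp filter we just typed into Wireshark; the interface name is a placeholder.

    import pyshark

    capture = pyshark.LiveCapture(interface="Ethernet", display_filter="bootp")
    # sniff_continuously() yields packets as they arrive; stop after the first ten.
    for packet in capture.sniff_continuously(packet_count=10):
        print(packet.highest_layer, packet.ip.src, "->", packet.ip.dst)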
Let's shift gears now and let's suppose we're having security issues with an FTP server on our
network and we want to find out what's going on. I want to change my filter to just FTP traffic.
Now notice that there are two different FTP filters. There's FTP and there's FTP data. This is the
control connection that establishes the session between the FTP client and the FTP server. Then
the data connection is used to actually transmit the data back and forth. We want to see both of
those, so we can actually specify two filters at once here in the filter field by putting a space, two
pipes, and then the second filter that we want to use, ftp-data. Let's apply the filter. I'm pretty
sure we won't see any FTP traffic being passed and we don't. No FTP traffic is currently being
passed.
Let's go down here and open up a web browser and let's access an FTP server at ftp://10.0.0.165.
When I do I'm prompted to log in. I'll log in to my FTP server. Log in. I'm now connected to my
FTP server. I can click on the documents directory, for example, and let's say we want to
download this file right here named topsecret.TXT. When I do, this document is displayed and it
tells us that it contains very sensitive information that should not be disclosed.
Well, let's go look at our packet sniffer now. Notice that now, because we set the filter to just
FTP traffic, we see the entire FTP process between my client system here and the remote FTP
server somewhere else on the network. Let's go ahead and scroll up a little bit. By doing this you
can see a lot of very important security-related information.
Security-Related Information

For example, right here we can see that I submitted a username to the remote FTP server of
RTracy. The FTP server responded back to my client saying, "Please specify the password." Now
we didn't see any of this. It was all behind the scenes. The browser took care of it, right?
Well, even though we didn't see it, some questionable things were happening. In response to the
request my workstation sent a password. The password that was sent here was TestOut. Do you
notice the problem here? If I'm running a packet sniffer, I just captured the username and
password of the user on that FTP server. My authentication credentials can be easily
compromised. I'm not using any type of encryption to protect that data. That's problem number
one.
Now if we scroll down a little bit farther, we see that we're switching into the RTracy documents
directory. Then we requested right here that the topsecret.TXT file be sent. If we come down
here, if we look down in this bottom part of the interface where we see the actual frame right
here, we can actually see, in clear text, the payload that was inside the FTP packets that were
transferred from the server down to my client. If I'm a nefarious person, I can actually read the
contents of the files as they are being transferred between the FTP server and the client. Again
that's because we're not using any type of encryption in order to protect the data.
You can also see that same information up here in this field where it's a little bit easier to read.
We've obviously got some security issues with our FTP implementation. There's no encryption
going on, so usernames and passwords are being transmitted in clear text, and the data itself is being
transmitted in clear text. We would want to switch to a secure version of FTP to keep that data safe.
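For comparison, here is a minimal sketch of the same download over FTPS using Python's built-in ftplib, assuming the server supports explicit TLS. The credentials and file name are placeholders; with TLS in place, the login and the file contents no longer cross the wire in clear text.

    from ftplib import FTP_TLS

    ftps = FTP_TLS("10.0.0.165")
    ftps.login("rtracy", "password")   # credentials now travel inside the TLS session
    ftps.prot_p()                      # encrypt the data connection as well
    with open("topsecret.txt", "wb") as f:
        ftps.retrbinary("RETR topsecret.txt", f.write)
    ftps.quit()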
Summary

That's it for this demonstration. In this demo we looked at using a packet sniffer. We talked about
how a packet sniffer works. We talked about setting the network interface card into promiscuous
mode so we can capture all the packets that are being transferred on the network. We reviewed
how to find data within the interface of the packet sniffer. We then looked at capturing DHCP
transaction data. Then we ended this demonstration by capturing FTP transaction data.
15.6.5 Monitoring Utilization

In this demonstration, we are going to spend some time talking about monitoring utilization. This
is something you should do on your network host, such as workstations and especially servers,
as well as your network infrastructure devices such as your switches, routers, firewalls and so
on. We don't have time to cover all of these different devices in this demonstration, so we're
going to practice here on this Windows workstation. The principles are the same, although the
actual process you will use on these different types of devices will vary.
Task Manager

On a Windows workstation, there are two tools you can use to monitor utilization. One of them
is Task Manager. Task Manager is extremely useful. Start Task Manager by right-clicking on
your task bar and selecting Start Task Manager. When you do, by default the applications tab is
displayed, which simply displays a list of running applications on the system. That's not terribly
useful, although you do get a little bit of information down here. For example, the number of
processes that are running, your overall CPU utilization, your memory utilization and so on. If
you want better information you can come over here to the Performance tab.
Now on the Performance tab you can see two key parameters. First of all, your CPU utilization
and your memory utilization. Let's make this a little bit bigger so we can see. We can see our
overall CPU usage. As you can see, on this system right now, I'm currently bumping between
about 25 to 41 percent CPU utilization. This graph over here represents the average of all the
CPU's in the system. Now this is a single CPU two-core system, therefore, I actually have two
graphs over here, one for each core within the CPU. If you had a quad core system, you would
have 4 graphs. If you had a hyper-threading quad core, you would have 8 graphs and so on.
This gives you a pretty good idea of what's going on with your CPU. Occasional spikes here are
nothing to be concerned about. If you open up an application for example, let's go down here and
start Calculator, you see that the CPU utilization bumped up to about 48 percent just for a second,
then it is back down into the thirties. That's okay. You can even have spikes into the nineties. As
long as it is just a spike, it's okay. As long as that utilization comes back down, you are totally
fine, but if the utilization is staying pegged above about 45 to 50 percent, then you need to do
something. You might need to upgrade to a faster CPU. You might need to upgrade to a CPU
with more cores. You might need to upgrade to a multi-CPU system or you might just need more
memory installed in the system because the CPU is spending a lot of time swapping data back
and forth between the hard drive and memory and it's taking up a lot of processor cycles. You
might need to stop running so many services and applications at one time on the system.
So this one is looking like it is okay. It's probably a little higher than I would like. I'd usually like
to see this down in the high teens and low twenties, but this is perfectly acceptable. Down here
we can take a look at how much of our memory is being used. We have an overall memory usage
over here. We have a physical memory usage history over here. This blue line shows us how
much of the physical memory is in use. Down here we have actual numbers to tell us what's
going on. Under physical memory, here is how much physical memory is installed on the system.
This is how much is cached. This is how much is available. Down here under kernel memory,
we can see how much memory is paged and how much is not paged, and again if our memory
usage is excessive, then we need to probably install more memory in the system or shut down
some services and applications.
If we go over here to the Networking tab, we can monitor our network utilization. Now this
particular system has two network interfaces installed. One is a wireless network interface and
one is a wired network interface and it's currently connected with the wireless interface. As you
can see, it is using a very old 802.11g wireless network interface, and it's also running at 54
megabits per second. We notice right here in the network utilization column, let me spread this
out a little more, we can see the overall utilization of each interface. Now notice they are hovering
right now at a whopping zero percent because there's really not a whole lot going on, on either
link. If we were to start doing say a big file copy, then we would see the utilization spike.
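If you want the same counters without opening Task Manager, the hedged Python sketch below reads CPU, memory, and per-interface network totals using the third-party psutil library (an assumption).

    import psutil

    print("CPU utilization:", psutil.cpu_percent(interval=1), "%")

    mem = psutil.virtual_memory()
    print("Memory in use:", mem.percent, "% of", mem.total // (1024 * 1024), "MB")

    # Per-interface byte counters, similar to the Networking tab.
    for name, counters in psutil.net_io_counters(pernic=True).items():
        print(name, "sent:", counters.bytes_sent, "received:", counters.bytes_recv)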
Resource Monitor

Now the Networking tab and the Performance tab in Task Manager are quite useful and they
display good information. However, if you want to see more detail then you can use a second
tool and that's Resource Monitor. Basically, if I need just a quick glance at what's going on in the
system, I just use Task Manager. But, if I've got a system that's really having a lot of trouble and
I need to do a detailed analysis of what's going on, I prefer to use Resource Monitor instead,
because it just displays a lot more information. Let's go ahead and close down Task Manager. We
don't need it anymore.
In Resource Monitor, notice that we have five tabs. We have the Overview tab, the CPU tab, the
Memory tab, the Disk tab, and the Network tab. On the Overview tab, we get a summary view of
what's going on with the system. Basically, it's a lot of the same information that we saw in Task
Manager. We can look at CPU. Here we see a list of all the different processes that are running
on the system. We can get an overview of the overall disk utilization and notice over here we
have graphs as well showing what's going on with the utilization. Here's our CPU utilization.
Here's our disk utilization. We also have network utilization and finally at the bottom, we have
our memory utilization.
Now again, if we want more detail, we can go over to a specific tab. For example, for CPU, we
can click on the 'CPU' tab. We can see all of the different processes that are running on this CPU
and if we click on services, we can see a list of all the services that are running on this system.
Over on the right, we can see graphs for the CPU utilization total. How much is being used by
services on the system and graphs for the two different cores in the CPU. Likewise, we can go to
the Memory tab and we can see a list of all the processes running on the system and how much
memory each one is using. We have a graph that shows us how memory is being allocated in the
system. How much is in use, how much is in standby, how much is free? Up here we can get a
good idea of who's using the most memory in the system if we are trying to troubleshoot a
problem where we have excessive memory utilization.
By clicking on the various columns, we can see which processes are using the most memory. We
can do that too on the CPU if we're having excessive CPU utilization and we want to see which
process or service is using the CPU the most, we can click on Average CPU and we can see which
process is using the CPU the most. As you can see here under processes, the software that I am
using to actually record this demonstration is using most of the CPU utilization. You can do the
same thing down here under services. Click Average CPU and we can see which system service
is utilizing the CPU the most. We can do the same thing over here with Disk. We can see the
different processes that are currently accessing the disk and we can also see how much they're
utilizing the disk by looking at these columns over here.
We can also go to the Network tab and we can see which processes are currently using the
network right here. Under Network Activity, we can see all the different services and processes
running on the system and how much they're using. We can see which services are sending and
receiving. We can also come down here under TCP Connections and we can see all of the
connections that have currently been established between this system and another system. We
can see which process has established that connection. We can see the IP address of our local
network board and which port is in use on the local system. We can also see the remote system
that we're connected to and which port on the remote system is in use. You can see we are
connected to 10.0.0.2, 10.0.0.5, 10.0.0.87, and 10.0.0.74, and we are connected to these ports on those
different systems.
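A comparable connection list can be pulled programmatically. The sketch below, assuming the third-party psutil library and administrator rights to see every process, prints the owning process, local port, and remote address for each established TCP connection.

    import psutil

    for conn in psutil.net_connections(kind="tcp"):
        if conn.status == psutil.CONN_ESTABLISHED and conn.raddr:
            proc = psutil.Process(conn.pid).name() if conn.pid else "unknown"
            print(f"{proc:<20} local port {conn.laddr.port:<6} -> {conn.raddr.ip}:{conn.raddr.port}")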
One useful thing down here, especially from a security standpoint, we can see a list of listening
ports. These are network ports that are currently open on the local system. You can see which
port it is right here and the process associated with that port. For example, Port 135 is being used
by svchost. Here is an interesting one: ports 137, 138, and 139. You should recognize those as
your NetBIOS ports, used by the Server Message Block protocol. These are being used for file
sharing over the network. As with the other parameters we looked at, we have some nice graphs
over here that show us what's going on, on the network.
Currently, you can see there's just really not a whole lot going on, on the network. Let's go ahead
and generate some network traffic. As you can see here, I'm accessing a shared folder over the
network on a different computer, and I have an ISO file for installing a bootable Linux distribution. I'm going
to copy this file. I'm going to put it on the local hard disk drive. That should generate some traffic.
Hit paste and we'll let that run for a second and now when we come over here notice that our
network utilization has now spiked because we are copying a very big file and it's going to take
quite some time to complete. Even though we're copying that big file, notice that our network
utilization is still hovering around 12 percent. Other processes could still use our network
interface to establish connections and transfer data over the network.
There's still capacity available.
Summary

That's it for this demonstration. We talked about monitoring utilization. We first looked at
monitoring CPU, memory, disk and network using Task Manager and then we explored looking
at those same parameters using Resource Monitor.
15.6.6 Monitoring Interface Statistics

In this demonstration, we're going to learn how to monitor our interface statistics. There are a
couple things I want to touch on before we get into our demonstration.
Show IP Interface Brief

The first command, which we've seen, is a 'show IP interface brief.' We need to make sure that
our interfaces are up before we can do any other commands. There's no reason to look at the
interfaces if we don't have an up/up status. You can see I have this serial and I have an IP address
set right here. We're up on our layer 1. We're up on our layer 2. We have an up/up status. We're
good to go as far as passing some traffic because we're talking to the other end. Right now, I have
2 routers set up on a serial connection communicating with each other.
Show Controllers

The other command that I want you to be familiar with is a 'show controllers' and I'm going to
say 'S000.' I could hit Enter after show controllers. I don't have to put in a specific interface, but
in this case, our fast Ethernets are not up, our other serial is not up, and our VLAN is not up.
There's no reason for me to look at that information, plus it is pages and pages of information
that we really just don't want to go through for this demonstration.
I'm going to hit show controllers S000, because that's my interface I want to work with, and I'm
going to hit 'Enter'. There's one thing in here we have to pay attention to. These other things, all
these hexes down here, don't get too caught up in that.
DCE/DTE

That's a little bit beyond what we need to know for this exam. What I want you to know is this
line right here, the DCE and the clock rate. The DCE tells us that we are supplying the clock, the
timing mechanisms, for this communication. The other end is going to say DTE. That means
it's the data terminal equipment; it accepts the clock rate from us. Right now, we've got the clock rate set
to 2 million bits per second. That's how fast the data is going to be sent back and forth on this link. Don't
confuse the clock rate with the bandwidth statement, which is just a value we can set for routing
metric purposes and has nothing to do with the actual speed of the data. DCE or DTE is what we want to focus on
here, and we can see right now, we are the DCE. We are providing that clock.
Show Int S000

What we want to focus on now is a 'show int S000'. I'm going to go ahead and hit 'Enter', and
this is what we want to see. We can see that our serial is up, that's our layer 1. Line protocol is
up, good. We saw the same thing with 'show IP interface brief.' Here's our interface address that
we have on our interface.
Right here we can see the MTU and the bandwidth value, which can be used for routing metrics. Right now, our
bandwidth is set to 1544 kilobits, which is the default for this serial connection. We have
very good reliability, 255 out of 255. Right now, our transmit load is 1 out of 255, very low.
Receive load is 1 out of 255, which is very low as well. That shows us that we don't have a lot of
utilization coming in and out of these interfaces.
The other part we want to focus on is right down here: our packets input and our input errors, our
packets output, and our output errors. These are the counters we want to monitor very closely if we
start running into routing or data issues. If users are complaining that they aren't receiving data,
that the network is down, or whatever other messages are coming into the help desk, we can come in
and type 'show interface S000' and get some information. Right now, it says 0 packets input and 0
packets output, and if I don't have any input or output, I'm not going to have any errors either.
What I want to do is go over to the other router, the part you can't see, pass some traffic, and then
come back and run this command again. That will show you how packets move through here, because
right now, like I said, I only have 2 routers and they're not production routers; they're just sitting
there, not really doing anything. I'm going to simulate some traffic going through, and we'll see how
those packet counters go up, because in a real production environment we're going to have numbers
a lot higher than 0.
Let me bounce over. Let's pass some traffic, and we'll be right back.
Okay, I've sent some traffic. What I want to do now is type 'show int S000' and hit 'Enter' one
more time, and now, look here. This stuff up here at the top is going to stay the same, because
we don't have much of a load. This is not continuous, but this tells us what we have received over
time. We received 3 packets of input and 3 packets of output. What we've done is simply just sent
a ping packet across and then it replied. Yes, it received something and then it sent it right back.
You can see we don't have any errors and we didn't have any collisions. That's great. If we see a lot
of errors in there, we need to do some more investigation; if we see a lot of collisions, we need to
do further investigation as well. That's 'show interface', and in this case we did S000, but we could
have used any interface that was up, any interface whose statistics we want to monitor, such as the
other serials or our Ethernet interfaces. We would run these exact same commands and get this exact
same information.
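Putting those pieces together, the fields we just walked through appear in 'show interfaces' output
roughly like this. The address and byte counts below are placeholders; only the labels matter for this
sketch:

Router# show interfaces serial 0/0/0
Serial0/0/0 is up, line protocol is up
  Internet address is 10.1.1.1/30
  MTU 1500 bytes, BW 1544 Kbit, DLY 20000 usec,
     reliability 255/255, txload 1/255, rxload 1/255
  ...
     3 packets input, 312 bytes, 0 no buffer
     0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored, 0 abort
     3 packets output, 312 bytes, 0 underruns
     0 output errors, 0 collisions, ...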
That is how we monitor this interface.
Summary

In this demonstration, we learned how to monitor our interface statistics.
15.6.7 Network Monitoring Facts

The goal of monitoring is to keep track of conditions on the network, identify situations that
might signal potential problems, pinpoint the source of problems, and locate areas of your
network that might need to be upgraded or modified. As you monitor your network, look for your
top talkers and listeners. Top talkers are the computers that send the most data, either from your
network or into your network. Top listeners are the hosts that receive the most data, for example by
streaming or downloading large amounts of data from the Internet. These computers can create
heavy traffic and lower performance.
The following table lists some tools you can use to check the health of your network:
Tool Description
Logs Logs contain a record of events that have occurred on a system. Logging capabilities are
built into operating systems, services, and applications. Log entries are generated in response to
changes in configuration, system state, or network conditions.
• By default, some logging is enabled and performed automatically. To gather additional
information, you can usually enable more extensive logging.
• Many systems have logs for different purposes, such as a system log for operating system
entries, a security log for security related entries, and an application log (also called a
performance log) for events related to specific services and processes, such as connections from
a web server.
• Logging requires system resources (processor, memory, and disk). You should only enable
additional logging based on information you want to gather, and you should disable logging after
you obtain the information you need.
• Logs must be analyzed to be useful; only by looking at the logs will you be able to discover
problems. Depending on the log type, additional tools might be available to analyze logs for
patterns.
• syslog is a standard for managing and sending log messages from one computer system
to another. A syslog collector can analyze incoming messages and notify administrators of problems
or performance issues (a configuration sketch for forwarding device logs to a syslog server appears
after this table).
Load tester A load tester simulates a load on a server or service. For example, the load tester
might simulate a large number of client connections to a website, test file downloads for an FTP
site, or simulate large volumes of email. Use a load tester to make sure that a system has sufficient
capacity for expected loads. It can even estimate failure points where the load is more than the
system can handle.
Throughput tester A throughput tester measures the amount of data that can be transferred
through a network or processed by a device (such as the amount of data that can be retrieved
from a disk in a specific period of time). On a network, a throughput tester sends a specific
amount of data through the network and measures the time it takes to transfer that data, creating
a measurement of the actual bandwidth. Use a throughput tester to validate the bandwidth on
your network and to identify when the bandwidth is significantly below what it should be.
A throughput tester can help you identify when a network is slow, but will not give you sufficient
information to identify why it is slow.
Packet sniffer A packet sniffer is special software that captures (records) frames that are
transmitted on the network. Use a packet sniffer to:
• Identify the types of traffic on a network.
• View the exchange of packets between communicating devices. For example, you can
capture frames related to DNS and view the exact exchange of packets for a specific name
resolution request.
• Analyze packets sent to and from a specific device.
• View packet contents.
A packet sniffer is typically run on one device with the intent of capturing frames for all other
devices on a subnet. Using a packet sniffer in this way requires the following configuration
changes:
• By default, a NIC will only accept frames addressed to itself. To enable the packet sniffer
to capture frames sent to other devices, configure the NIC in promiscuous mode (sometimes
called p-mode). In p-mode, the NIC will process every frame it sees.
• When using a switch, the switch forwards frames only to the switch port where the
destination device is connected, so a packet sniffer plugged into one switch port will not see traffic
sent to other ports. To send copies of all frames to the packet-sniffing device, configure port
mirroring on the switch; frames sent to the mirrored source ports are then forwarded out the mirror
(destination) port as well (see the sketch after this table).
If the packet sniffer is connected to a hub, it will already see all frames sent to any device on the
hub.
Protocol Analyzer A protocol analyzer is a special type of packet sniffer that captures and decodes
transmitted frames. A protocol analyzer is a passive tool in that it copies frames and allows you to
view frame contents, but it does not allow you to modify and retransmit frames (activities that are
used to perform an attack). Use a protocol analyzer to:
• Check for specific protocols on the network, such as SMTP, DNS, POP3, and ICMP.
• Find devices that might be using restricted protocols (such as ICMP) or legacy protocols
(for example, IPX/SPX or NetBIOS).
• Analyze traffic that might be sent by attackers.
• Identify frames that might cause errors.
• Determine which flags are set in a TCP handshake.
• Detect malformed or fragmented packets.
• Examine the data contained within a packet.
• Identify users that are connecting to unauthorized websites.
• Discover cleartext passwords allowed by protocols or services.
• Identify unencrypted traffic that includes sensitive data.
• Troubleshoot communication problems or investigate the source of heavy network traffic.
A protocol analyzer shows the traffic that exists on the network and the source and destination
of that traffic. It does not tell you if the destination ports on a device are open unless you see
traffic originating from that port. For example, seeing traffic addressed to port 80 of a device
does not automatically mean the firewall on that device is open or that the device is responding
to traffic directed to that port.
When using a protocol analyzer, you can filter the frames so that you see only the frames with
information of interest.
• Filters can be configured to show only frames or packets to or from specific addresses, or
frames that include specific protocol types.
• A capture filter captures only the frames identified by the filter. Frames not matching the
filter criteria will not be captured.
• A display filter shows only the frames that match the filter criteria. Frames not matching
the filter criteria are still captured, but are not shown.
• The results of a capture can be saved in order to analyze frames at a later time or on a
different device.
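To tie two of these tools back to the Cisco gear used elsewhere in these notes, here is a minimal
configuration sketch: forwarding a device's log messages to a syslog server, and mirroring a switch
port so a packet sniffer or protocol analyzer can see another port's traffic. The address and interface
numbers are placeholders, and exact command availability varies by platform and IOS version:

! Send this device's log messages to a syslog server (placeholder address)
Router(config)# logging host 10.10.10.200
Router(config)# logging trap warnings

! Port mirroring (SPAN) on a Cisco switch: copy traffic seen on Gi0/1
! to Gi0/24, where the capture device is attached
Switch(config)# monitor session 1 source interface gigabitEthernet 0/1 both
Switch(config)# monitor session 1 destination interface gigabitEthernet 0/24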

Network Management
In this video, we're going to look at the Simple Network Management Protocol, or SNMP.
How SNMP Functions

SNMP is a protocol that you can install and configure on your network. It uses special software to
monitor network devices, such as servers, switches, and routers, and to provide alarms that tell you
when something has gone wrong. These alerts can be communicated to the system administrator
using email or text messages.
SNMP provides you with a powerful overview of the health of the devices on the network. The
information it provides gives you the ability to spot a potential problem and rectify it before it
becomes a serious issue. In essence, SNMP helps you to be a proactive administrator instead of
a reactive administrator.
SNMP has been in use for a long time, so most network devices, including servers, routers,
switches and firewalls, provide some form of SNMP support.
Components of SNMP Implementation

SNMP relies upon several key components to perform its function:


First is the SNMP Manager:
The SNMP manager is responsible for collecting data from network hosts that are being
monitored using the SNMP protocol.
The SNMP manager aggregates the information and displays an overview of the current status
of the network.
Next are the SNMP agents:
The SNMP agents are small software applications that get installed on the network hardware that
you want to monitor, such as servers, printers, switches, routers, even firewalls.
Each agent is configured to monitor its device and reports events that happen on the device back
to the SNMP manager.
Devices with SNMP agents running are called managed devices.
Finally, we have the Management Information Base (or MIB):
The MIB defines and organizes the parameters (called variables) that the agents will monitor on
their respective devices.
The SNMP Manager uses the MIB to determine what data it will gather from the agents.
The MIB is a hierarchical, structured definition of variables, similar to a database schema. The
MIB is not an actual database; it simply defines the structure of the information used by SNMP.
You can use SNMP walk messages to traverse through the hierarchical MIB structure.
Each variable in the MIB has a unique number assigned to it, called an object identifier (OID).
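For example, the standard MIB-2 'system' group and a few of the variables beneath it look like this
in the OID hierarchy (these are well-known standard OIDs, listed here only as an illustration):

1.3.6.1.2.1.1        system group
1.3.6.1.2.1.1.1.0    sysDescr.0  (description of the device)
1.3.6.1.2.1.1.3.0    sysUpTime.0 (time since the device was restarted)
1.3.6.1.2.1.1.5.0    sysName.0   (the device's configured name)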
How the Components Communicate

SNMP operates at the application layer of the OSI model and runs on UDP ports 161 and 162. SNMP
communications can occur in three ways on your network:
The first option is polling:
Using poll mode, the SNMP manager periodically polls the SNMP agents to see what is
happening with their respective devices using UDP port 161. These are referred to as get
messages.
It queries them for specific pieces of information using the MIB.
For example, if the monitored device is a server, it may query the agent to see what CPU
utilization looks like.
The second option is to implement traps.
With traps, you configure your SNMP agents with certain thresholds.
Whenever the agent then detects that a threshold has been exceeded, it sends an SNMP trap
message to the SNMP manager on UDP port 162 (instead of waiting to be polled).
The third option is to use SNMP set commands.
These set commands are generated by the SNMP manager and are used to actually modify
information on the monitored host through the SNMP agent. For each variable you want to set, you
specify the variable to update (using its OID from the MIB), the data type, and the value you want
to assign to the variable.
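To make these message types concrete, here is a hedged sketch using the net-snmp command-line
tools, a common open-source SNMP manager toolkit chosen here only as an example (the lesson does
not name a specific manager). The address, community strings, and values are placeholders:

# Poll (get): ask the agent at 10.10.10.1 for its device description
snmpget -v2c -c public 10.10.10.1 1.3.6.1.2.1.1.1.0

# Walk: use repeated GETNEXT requests to read the whole 'system' group
snmpwalk -v2c -c public 10.10.10.1 1.3.6.1.2.1.1

# Set: write a new sysName value (requires a read-write community)
snmpset -v2c -c private 10.10.10.1 1.3.6.1.2.1.1.5.0 s "Branch-Router-1"

# Traps, by contrast, are sent by the agents themselves; on the manager side
# they are received by a listener such as snmptrapd on UDP port 162.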
Security Issues

It's important that you understand that the older versions of the SNMP protocol were notoriously
insecure. Versions 1 and 2 didn't use passwords. Instead, agents used a 'community name' to
authenticate to the manager. This implemented a rudimentary form of access control based on
the community name supplied. Unfortunately, most people used the default community names
of 'public' (for read-only access) and 'private' (for read/write access) in their SNMP
implementations. Even if they did use a more complex community name, the earlier versions of
SNMP sent it in clear text over the network. This made it easy for a person with bad intentions to
sniff the community name off of the network and then use it to manipulate the information being
transmitted between the manager and the agents.
If you're going to implement SNMP, be sure you implement SNMP version 3. SNMPv3 uses
encryption to secure communications between the manager and the agents. SSH and TLS are
supported for this purpose. It also provides more secure authentication options. Currently, the
MD5 and SHA protocols are supported for authentication and integrity verification. Also, an
alternative type of trap called an inform message is used to increase reliability by requiring that
trap messages be acknowledged.
The following communication configurations are available: no authentication or encryption
(NoAuthNoPriv), authentication without encryption (AuthNoPriv), and authentication with
encryption (AuthPriv).
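As a hedged sketch of what an AuthPriv (authentication plus encryption) setup might look like on a
Cisco router, using placeholder group, user, password, and host values; the supported hash and cipher
options differ between IOS versions:

Router(config)# snmp-server group NMS-GROUP v3 priv
Router(config)# snmp-server user nmsuser NMS-GROUP v3 auth sha AuthPass123 priv aes 128 PrivPass123
Router(config)# snmp-server host 10.10.10.250 version 3 priv nmsuser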
Summary

In this lesson, we discussed what the SNMP protocol is and how it works, the components used in
an SNMP implementation, how those components communicate with each other, and some of the
security issues associated with the different SNMP versions.
15.8.2 Configuring an SNMP System

In this demonstration, we'll configure an SNMP system. What we're going to do is set up the Simple
Network Management Protocol on this router and point it at a particular host. Let's get into global
configuration mode. We're going to type 'snmp-server' and add a question mark so you
can see the options available to us here. You can see the different ones. What we're going to focus
on here is community because that's going to enable our SNMP. We're going to go in and say
'host.' That specifies the host that's going to receive our SNMP notifications.
Real quick, we'll just say 'community' and then we have to give it a name; we'll say 'public'. Next we
specify whether it's going to be read-only or read-write. We're going to say read-only since it's
public, then hit Enter. What we've done is set up a community called 'public' that is read-only.
Next we're going to type 'snmp-server host'. We have to tell it which host will receive these
notifications; it has to be a host out there on our network somewhere. We're going to say
'10.10.10.250' and 'public' because we're sending traps for the public community. Where it says the
UDP socket can't open port 161, that's because I don't have this router connected to an actual SNMP
server in this infrastructure; it's just a standalone router. It's giving us some errors here, which is
actually a good thing for us to see, because if we ever came in here troubleshooting our SNMP,
these are the log messages we'd want to look at, and they're being logged to the console.
Remember, we could have them sent to a syslog server as well. But that's as simple as it gets:
'snmp-server community', where you can type public or private. Once we type in the community
string, we go ahead and type in the host name or host IP address. If we have DNS running, we can
use names; if we don't have DNS, we have to use an IP address and point it at a host somewhere in
our infrastructure.
We used the community string 'public', which is SNMP version 1 and version 2 behavior, because
that was the easiest way to show in this demonstration how to set up SNMP and point it to a
particular host.
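Pulled together, the configuration entered in this demonstration amounts to just these lines. The last
command is an optional extra that was not typed in the demo; it tells the router to generate all of the
trap types it supports:

Router(config)# snmp-server community public ro
Router(config)# snmp-server host 10.10.10.250 public
! Optional (not shown in the demo): enable generation of supported trap types
Router(config)# snmp-server enable traps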
That's it for this demonstration.
Summary

In this demonstration, we set up an SNMP system to trap and report error messages and events that
could be occurring on our router.

15.8.3 SNMP Facts

Simple Network Management Protocol (SNMP) is designed for managing complex networks.
SNMP lets network hosts exchange configuration and status information. This information can
be gathered by management software and used to monitor and manage the network.
SNMP uses the following components:
Component Description
Manager A manager is the computer used to perform management tasks. The manager
queries agents and gathers responses by sending messages.
Agent An agent is a software process that runs on managed network devices. The agent
reports information to the manager in response to queries and can also send unsolicited messages
(such as traps) to the manager.
Management Information Base (MIB) The MIB is a database of host configuration
information. Agents report data to the MIB, and the manager can then view information by
requesting data from the MIB. Object identifiers (OIDs) specify managed objects in a MIB
hierarchy.
Trap A trap is an event configured on an agent. When the event occurs, the agent sends a message
with details regarding the event to the manager.
Get A Get is a message sent from a management system, requesting information about a
specific OID.
Walk A Walk uses GETNEXT messages to navigate the structure of an MIB.
Alert An alert can be configured so that when an event occurs (e.g., a trap), a message will be
sent via email or SMS (text message).
Agents and the manager are configured to communicate with each other using the community
name. The community name identifies a group of devices under the same administrative control.
The community name is not a password but simply a value configured on each device. Devices
with different community names are unable to send SNMP messages to each other.
