Cisco 200-310


Designing for Cisco Internetwork Solutions


Version: 1.1
Cisco 200-310 Exam
QUESTION NO: 1

Which of the following is a leased-line WAN technology that divides a link's bandwidth into equal-
sized segments based on clock rate?

A.
TDM

B.
ATM

C.
WDM

D.
DWDM

E.
MPLS

F.
Metro Ethernet

Answer: A
Explanation:
Section: Enterprise Network Design

Time division multiplexing (TDM) is a leased-line WAN technology that divides a link's bandwidth
into equal-sized segments based on clock rate. TDM enables several data streams to share a
single physical connection. Each data stream is then allotted a fixed number of segments that can
be used to transmit data. Because the number of segments dedicated to each data stream is
static, unused bandwidth from one data stream cannot be dynamically reallocated to another data
stream that has exceeded its available bandwidth. By contrast, statistical multiplexing dynamically
allocates bandwidth to data streams based on their traffic flow. For example, if a particular data
stream does not have any traffic to send, its bandwidth is reallocated to other data streams that
need it.
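The contrast between fixed TDM slots and statistical multiplexing can be sketched in a few lines of Python. The stream names, slot counts, and demand figures below are invented for illustration:

```python
# Sketch: fixed TDM slot allocation vs. statistical multiplexing.
# Stream names and demand figures are illustrative, not from any real link.

def tdm_utilization(slots_per_stream, demand):
    """Each stream owns a fixed number of slots; unused slots are wasted."""
    carried = {s: min(slots_per_stream[s], demand.get(s, 0)) for s in slots_per_stream}
    wasted = sum(slots_per_stream[s] - carried[s] for s in slots_per_stream)
    return carried, wasted

def statistical_utilization(total_slots, demand):
    """Slots are pooled; idle capacity is reallocated to busy streams."""
    carried = {}
    free = total_slots
    for stream, want in sorted(demand.items()):
        carried[stream] = min(want, free)
        free -= carried[stream]
    return carried, free

slots = {"A": 4, "B": 4, "C": 4}    # fixed TDM allocation per frame
demand = {"A": 6, "B": 0, "C": 3}   # stream B is idle this frame

tdm_carried, tdm_wasted = tdm_utilization(slots, demand)
stat_carried, stat_free = statistical_utilization(sum(slots.values()), demand)

print(tdm_carried, tdm_wasted)   # TDM cannot give B's idle slots to A
print(stat_carried, stat_free)   # statistical muxing lets A use the idle capacity
```

Under TDM, stream A carries only 4 of its 6 units and 5 slots go unused; under statistical multiplexing, A absorbs B's idle capacity.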

Metro Ethernet does not divide a link's bandwidth into equal-sized segments based on clock rate.
Metro Ethernet is a WAN technology that is commonly used to connect networks in the same
metropolitan area.

For example, if a company has multiple branch offices within the same city, the company can use
Metro Ethernet to connect the branch offices to the corporate headquarters. Metro Ethernet
providers typically offer up to 1,000 Mbps of bandwidth.

"Pass Any Exam. Any Time." - www.actualtests.com 2

Wavelength division multiplexing (WDM) does not divide a link's bandwidth into equal-sized
segments based on clock rate. WDM is a leased-line WAN technology used to increase the
amount of data signals that a single fiber strand can carry. To accomplish this, WDM can transfer
data of varying light wavelengths on up to 16 channels per single fiber strand. Whereas TDM
divides the bandwidth in order to carry multiple data streams simultaneously, WDM aggregates the
data signals being carried within the fiber strand.

Dense WDM (DWDM) does not divide a link's bandwidth into equal-sized segments based on
clock rate. DWDM is a leased-line WAN technology that improves on WDM by carrying up to 160
channels on a single fiber strand. The spacing of DWDM channels is highly compressed, requiring
a more complex transceiver design and therefore making the technology very expensive to
implement.

Asynchronous Transfer Mode (ATM) uses statistical multiplexing and does not divide a link's
bandwidth into equal-sized segments based on clock rate. ATM is a shared WAN technology that
transports its payload in a series of 53-byte cells. ATM has the unique ability to transport different
types of traffic, including IP packets, traditional circuit-switched voice, and video, while still
maintaining a high quality of service for delay-sensitive traffic, such as voice and video services.
Although ATM could be categorized as a packet-switched WAN technology, it is often listed in its
own category as a cell-switched WAN technology.

Multiprotocol Label Switching (MPLS) does not divide a link's bandwidth into equal-sized
segments based on clock rate. MPLS is a shared WAN technology that makes routing decisions
based on information contained in a fixed-length label. In an MPLS virtual private network (VPN),
each customer site is provided with its own label by the service provider. This enables the
customer site to use its existing IP addressing scheme internally while allowing the service
provider to manage multiple sites that might have conflicting IP address ranges. The service
provider then forwards traffic over shared lines between the sites in the VPN according to the
routing information that is passed to each provider edge router.
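A label-switched hop can be sketched as a simple table lookup, since forwarding decisions use only the fixed-length label and never the IP header. The labels and interface names below are invented for illustration:

```python
# Sketch of an MPLS label forwarding table (LFIB): the router looks up the
# incoming label, swaps (or pops) it, and forwards out the mapped interface.
# Labels and interface names here are invented for illustration.

lfib = {
    # incoming label: (outgoing interface, outgoing label)
    100: ("ge0/1", 200),   # swap 100 -> 200, send out ge0/1
    101: ("ge0/2", None),  # None models popping the label near the egress router
}

def forward(incoming_label, payload):
    out_if, out_label = lfib[incoming_label]
    if out_label is None:
        return out_if, payload              # label popped; plain IP continues
    return out_if, (out_label, payload)     # label swapped

print(forward(100, "ip-packet"))  # ('ge0/1', (200, 'ip-packet'))
print(forward(101, "ip-packet"))  # ('ge0/2', 'ip-packet')
```

Because the lookup key is a label rather than an IP prefix, two customer sites with overlapping address ranges can be carried over the same shared infrastructure.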

Reference:

CCDA 200-310 Official Cert Guide, Chapter 6, Time-Division Multiplexing, p. 225

Cisco: ISDN Voice, Video and Data Call Switching with Router TDM Switching Features

QUESTION NO: 2 DRAG DROP

You are adding an additional LAP to your current wireless network, which uses LWAPP. The LAP
is configured with a static IP address. You want to identify the sequence in which the LAP will
connect to and register with a WLC on the network.

Select the LAP connection steps on the left, and drag them to the appropriate location on the right.
Not all steps will be used.

Answer:

Explanation:


Section: Considerations for Expanding an Existing Network

When you add a lightweight access point (LAP) to a wireless network that uses Lightweight
Access Point Protocol (LWAPP), the LAP goes through a sequence of steps to register with a
wireless LAN controller (WLC) on the network. First, if Open Systems Interconnection (OSI) Layer
2 LWAPP mode is supported, the LAP attempts to locate a WLC by broadcasting a Layer 2
LWAPP discovery request message. If a WLC does not respond to the Layer 2 broadcast, the LAP
will broadcast a Layer 3 LWAPP discovery request message.

Once a WLC receives the LWAPP discovery message, the WLC will send an LWAPP discovery
response message to the LAP; the discovery response will contain the IP address of the WLC.
The LAP compiles a list of all discovery responses it receives. The list is cross-referenced against
the LAP's internal configuration. The LAP will then send an LWAPP join request message to one
of the WLCs on its list of responses.

If the LAP has been configured with a primary, secondary, and tertiary WLC, the LAP will first send
an LWAPP join request message to the primary WLC. If no response is received from the primary
WLC, the LAP will try the secondary and tertiary WLCs in sequence. If no response is received
from either the secondary or tertiary WLCs, the LAP will examine the responses on its list for a
master controller flag. If one of the WLCs is configured as a master, the LAP will send an LWAPP
join request message to the master WLC. If there is no master configured, or if the master does
not respond, the LAP will examine its list of responses and send an LWAPP join request message
to the WLC with the greatest capacity.

When a WLC responds with an LWAPP join response message, the authentication process
begins. After the LAP and the WLC authenticate with each other, the LAP will register with the
WLC.
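The selection order described above (primary, then secondary, then tertiary, then a master controller, then the WLC reporting the greatest capacity) can be sketched as a short function. The WLC records and names below are simplified dictionaries invented for illustration:

```python
# Sketch of the WLC selection order a LAP follows when choosing where to send
# its LWAPP join request. WLC records here are invented for illustration.

def choose_wlc(responses, primary=None, secondary=None, tertiary=None):
    """responses: list of dicts like {'name': ..., 'master': bool, 'capacity': int}."""
    by_name = {r["name"]: r for r in responses}
    # 1-3. Try the configured primary, secondary, and tertiary WLCs in order.
    for configured in (primary, secondary, tertiary):
        if configured in by_name:
            return by_name[configured]
    # 4. Fall back to a WLC flagged as master controller.
    masters = [r for r in responses if r.get("master")]
    if masters:
        return masters[0]
    # 5. Otherwise pick the WLC with the greatest capacity.
    return max(responses, key=lambda r: r["capacity"])

responses = [
    {"name": "wlc2", "master": False, "capacity": 25},
    {"name": "wlc3", "master": True, "capacity": 50},
]

# The configured primary did not respond, so the master controller is chosen.
print(choose_wlc(responses, primary="wlc1")["name"])  # wlc3
```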

Reference:

Cisco: Lightweight AP (LAP) Registration to a Wireless LAN Controller (WLC): Register the LAP
with the WLC

QUESTION NO: 3

Which of the following statements best describes the purpose of CDP?

A.
CDP is a proprietary protocol used by Cisco devices to detect neighboring Cisco devices.

B.
CDP is a standard protocol used to power IP devices over Ethernet.

C.
CDP is a proprietary protocol used to power IP devices over Ethernet.

D.
CDP is a standard protocol used by Cisco devices to detect neighboring devices of any type.

Answer: A
Explanation:
Section: Design Methodologies

Cisco Discovery Protocol (CDP) is a Cisco-proprietary protocol used by Cisco devices to detect
neighboring Cisco devices. For example, Cisco switches use CDP to determine whether an
attached Voice over IP (VoIP) phone is manufactured by Cisco or by a third party. CDP is enabled
by default on Cisco devices. You can globally disable CDP by issuing the no cdp run command in
global configuration mode. You can disable CDP on a per-interface basis by issuing the no cdp
enable command in interface configuration mode.

CDP packets are broadcast from a CDP-enabled device on a multicast address. Each directly
connected CDP-enabled device receives the broadcast and uses that information to build a CDP
table. Detailed information about neighboring CDP devices can be viewed in IOS by issuing the
show cdp neighbor detail command in privileged EXEC mode. The following abbreviated
sample output shows information obtained from CDP about the IP phone named
SEP00123456789A:

Link Layer Detection Protocol (LLDP), not CDP, is a standard protocol that detects neighboring
devices of any type. Cisco devices also support LLDP. LLDP can be used in a heterogeneous
network to enable Cisco devices to detect non-Cisco devices and vice versa. LLDP, which is
enabled by default, can be disabled globally by issuing the no lldp run command. You can
reenable LLDP by issuing the lldp run command.

CDP is not a protocol used to power IP devices over Ethernet, although an IP phone can provide
Power over Ethernet (PoE) requirements to a switch by using CDP. A Catalyst switch can provide
power to both Cisco and non-Cisco IP phones that support either the 802.3af standard method or
the Cisco prestandard method of PoE. For a Catalyst switch to successfully power an IP phone,
both the switch and the IP phone must support the same PoE method. After a common PoE
method is determined, CDP messages sent between Catalyst switches and Cisco IP phones can
further refine the amount of power allocated to each device.

Reference:

CCDA 200-310 Official Cert Guide, Chapter 15, CDP, p. 629

Cisco: Catalyst 3750 Switch Software Configuration Guide, 12.2(40)SE: Configuring CDP

QUESTION NO: 4

Which of the following statements are true regarding standard IP ACLs? (Choose two.)

A.
Standard ACLs should be placed as close to the source as possible.

B.
Standard ACLs can filter traffic based on source and destination address.

C.
Standard ACLs can be numbered in the range from 1 through 99 or from 1300 through 1999.

D.
Standard ACLs can filter traffic based on port number.

E.
Standard ACLs can filter traffic from a specific host or a specific network.

Answer: C,E
Explanation:
Section: Considerations for Expanding an Existing Network

Standard IP access control lists (ACLs) can be numbered in the range from 1 through 99 or from
1300 through 1999 and can filter traffic from a specific host or a specific network. ACLs are used
to control packet flow across a network. For example, you could use an ACL on a router to restrict
a specific type of traffic, such as Telnet sessions, from passing through a corporate network.
There are two types of IP ACLs: standard and extended. Standard IP ACLs can be used to filter
based only on source IP addresses; standard IP ACLs cannot be used to filter based on source
and destination address. Standard ACLs should be placed as close to the destination as possible
so that other traffic originating from the source address is not affected by the ACL.

Extended IP ACLs enable you to permit or deny packets based on not only source IP address but
destination network, protocol, or destination port. In contrast to standard IP ACLs, extended IP
ACLs should be placed as close to the source as possible. This ensures that traffic being denied
by the ACL does not unnecessarily traverse the network. Extended ACLs have access list
numbers from 100 through 199 and from 2000 through 2699.
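The numbering ranges above are simple enough to express as a classification function. This is a sketch using only the ranges stated in the explanation:

```python
# Sketch: classify an IOS ACL number as standard or extended using the
# ranges above (standard: 1-99 and 1300-1999; extended: 100-199 and 2000-2699).

def acl_type(number):
    if 1 <= number <= 99 or 1300 <= number <= 1999:
        return "standard"
    if 100 <= number <= 199 or 2000 <= number <= 2699:
        return "extended"
    return "invalid"

print(acl_type(10))    # standard
print(acl_type(1500))  # standard
print(acl_type(101))   # extended
print(acl_type(2700))  # invalid
```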

Reference:

CCDA 200-310 Official Cert Guide, Chapter 13, Identity and Access Control Deployments, pp.
532-533 Cisco: Configuring IP Access Lists

QUESTION NO: 5

In which of the following situations would static routing be the most appropriate routing
mechanism?

A.
when the router has a single link to a router within the same AS

B.
when the router has redundant links to a router within the same AS

C.
when the router has a single link to a router within a different AS

D.
when the router has redundant links to a router within a different AS

Answer: C
Explanation:
Section: Addressing and Routing Protocols in an Existing Network

Static routing would be the most appropriate routing mechanism for a router that has a single link
to a router within a different autonomous system (AS). An AS is defined as the collection of all
areas that are managed by a single organization. Because an interdomain routing protocol, such
as Border Gateway Protocol (BGP), can be complicated to configure and uses a large portion of a
router's resources, static routing is recommended if dynamic routing information is not exchanged
between routers that reside in different ASes. For example, if you connect a router to the Internet
through a single Internet service provider (ISP), it is not necessary for the router to run BGP,
because the router will use this single connection to the Internet for all traffic that is not destined to
the internal network.

External BGP (eBGP), not static routing, would be the most appropriate routing protocol for a
router that has redundant links to a router within a different AS. BGP is typically used to exchange
routing information between ASes, between a company and an ISP, or between ISPs. BGP
routers within the same AS communicate by using internal BGP (iBGP), and BGP routers in
different ASes communicate by using eBGP.

An intradomain routing protocol, such as Enhanced Interior Gateway Routing Protocol (EIGRP) or
Open Shortest Path First (OSPF), would be the most appropriate routing protocol for a router that
has a single link or redundant links to a router within the same AS.

Reference:

CCDA 200-310 Official Cert Guide, Chapter 10, Static Versus Dynamic Route Assignment, pp.
380-381

QUESTION NO: 6

You are installing a 4U device in a data center.

Which of the following are you installing?

A.
cabling at the demarc

B.
an environmental control

C.
a network device in a 7-inch space

D.
a lock for rack security

Answer: C
Explanation:
Section: Considerations for Expanding an Existing Network

You are installing a network device in a 7-inch (18-centimeter) space if you are installing a 4-unit
(4U) device in a data center. Although most racks adhere to a standard width of 19 inches (about
48 centimeters), a certain number of rack units (U) of height must be available within a rack to
allow the installation of your equipment and to allow space between your equipment and other
equipment that is contained within the rack. A U is equivalent to 1.75 inches (about 4.5
centimeters) of height. Therefore, if the device you want to install is a 2U device, the rack should
have at least 3.5 inches (about 9 centimeters) of available space to accommodate the device and
more to allow for space above and below the device. A 4U device will fit into a 7-inch
(18-centimeter) rack space.
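The rack-unit arithmetic above reduces to a one-line conversion (1U = 1.75 inches, 1 inch = 2.54 centimeters):

```python
# Sketch: rack-unit arithmetic from the explanation above.
# One rack unit (1U) is 1.75 inches of height; 1 inch is 2.54 centimeters.

INCHES_PER_U = 1.75
CM_PER_INCH = 2.54

def rack_height(units):
    inches = units * INCHES_PER_U
    return inches, inches * CM_PER_INCH

print(rack_height(2))  # 2U -> 3.5 inches (about 9 cm)
print(rack_height(4))  # 4U -> 7.0 inches (about 18 cm)
```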

You are not installing a lock for rack security. However, rack security is likely to be a concern when
installing a server in a third-party data center. Commercial data centers house devices for multiple
customers within the same physical area. Although many data centers are physically secured
against intruders who might steal or modify equipment, the data center's other customers have the
same access to the physical area that you do. Therefore, you should install physical security
mechanisms, such as a lock, at the rack level to ensure that your company’s devices cannot be
accessed by others.

You are not installing an environmental control. However, an environmental control such as
airflow, which helps prevent devices from overheating, is likely to be a concern when installing a
server in a third-party data center. You should choose a data center that provides environmental
controls. For example, a hot and cold aisle layout is a data center design that attempts to control
the airflow within the room in order to mitigate problems that can result from overheated servers; it
essentially prevents hot air from mixing with cold air. A raised floor layout is a data center design
that puts the heating, ventilation, and air conditioning (HVAC) ductwork below the floor tiles. The
tiles, which are typically located in the aisles between the server racks in this type of environment,
are perforated so that airflow can be directed and concentrated in the exact locations desired.

You are not installing cabling at the demarc, or demarcation point. The demarc is the termination
point between a physical location and its service provider. In other words, it is the point where the
responsibility of the physical location ends and the responsibility of the service provider begins. At
a third-party data center, the demarc is the responsibility of the data center provider and its service
provider, not the data center's customers.

Reference:

CCDA 200-310 Official Cert Guide, Chapter 4, Data Center Facility Aspects, pp. 136-138

Cisco: Cabinet and Rack Installation

QUESTION NO: 7

Which of the following is true of the core layer of a hierarchical design?

A.
It provides address summarization.

B.
It aggregates LAN wiring closets.

C.
It aggregates WAN connections.

D.
It isolates the access and distribution layers.

E.
It is also known as the backbone layer.

F.
It performs Layer 2 switching.

G.
It performs NAC for end users.

Answer: E

Explanation:
Section: Enterprise Network Design

The core layer of a hierarchical design is also known as the backbone layer. The core layer is
used to provide connectivity to devices connected through the distribution layer. In addition, it is
the layer that is typically connected to enterprise edge modules. Cisco recommends that the core
layer provide fast transport, high reliability, redundancy, fault tolerance, low latency, limited
diameter, and Quality of Service (QoS). However, the core layer should not include features that
could inhibit CPU performance. For example, packet manipulation that results from some security,
QoS, classification, or inspection features can be a drain on resources.

The distribution layer of a hierarchical design, not the core layer, provides address summarization,
aggregates LAN wiring closets, and aggregates WAN connections. The distribution layer is used
to connect the devices at the access layer to those in the core layer. Therefore, the distribution
layer isolates the access layer from the core layer. In addition to these features, the distribution
layer can also be used to provide policy-based routing, security filtering, redundancy, load
balancing, QoS, virtual LAN (VLAN) segregation of departments, inter-VLAN routing, translation
between types of network media, routing protocol redistribution, and more.

The access layer, not the core layer, typically performs Layer 2 switching and Network Admission
Control (NAC) for end users. The access layer is the network hierarchical layer where end-user
devices connect to the network. For example, port security and Spanning Tree Protocol (STP)
toolkit features like PortFast are typically implemented in the access layer.

Reference:

CCDA 200-310 Official Cert Guide, Chapter 2, Core Layer, pp. 42-43

Cisco: High Availability Campus Network DesignRouted Access Layer using EIGRP or OSPF:
Hierarchical Design

QUESTION NO: 8

In which of the following locations can you not deploy an IPS appliance?

A.
between two Layer 2 devices on the same VLAN

B.
between two Layer 2 devices on different VLANs

C.
between two Layer 3 devices on the same IP subnet

D.
between two Layer 3 devices on different IP subnets

Answer: D
Explanation:
Section: Considerations for Expanding an Existing Network

You cannot deploy an Intrusion Prevention System (IPS) appliance between two Layer 3 devices
on different IP subnets. An IPS appliance is a standalone, dedicated device that actively monitors
network traffic. An IPS appliance functions similarly to a Layer 2 bridge; a packet entering an
interface on the IPS is directed to the appropriate outbound interface without regard to the
packet's Layer 3 information. Instead, the IPS uses interface or virtual LAN (VLAN) pairs to
determine where to send the packet. This enables an IPS to be inserted into an existing network
topology without requiring any disruptive addressing changes.

For example, an IPS could be inserted on the outside of a firewall to examine all traffic that enters
or exits an organization, as shown in the following diagram:

Because the IPS in this example is configured to operate in inline mode, it functions similarly to a
Layer 2 bridge in that it passes traffic through to destinations on the same subnet. Because all
monitored traffic passes through the IPS, it can block malicious traffic, such as an atomic or single-
packet attack, before it passes onto the network. However, an inline IPS also adds latency to
traffic flows on the network because it must analyze each packet before passing it to its
destination.

An IPS can be deployed between two Layer 2 devices on the same VLAN or between two Layer 2
devices on different VLANs if the VLANs are on the same IP subnet. In addition, the interface on
each Layer 2 device can be configured as an access port or as a trunk port. A trunk port tags each
frame with VLAN information before it transmits the frame; tagging a frame preserves its VLAN
membership as the frame passes across the trunk link.

Reference:

CCDA 200-310 Official Cert Guide, Chapter 13, IPS/IDS Fundamentals, pp. 534-535


QUESTION NO: 9

Which of the following is used by both NetFlow and NBAR to identify a traffic flow?

A.
Network layer information

B.
Transport layer information

C.
Session layer information

D.
Application layer information

Answer: B
Explanation:
Section: Design Methodologies

NetFlow and Network-Based Application Recognition (NBAR) both use Transport layer information
to identify a traffic flow. NetFlow is a Cisco IOS feature that can be used to gather flow-based
statistics such as packet counts, byte counts, and protocol distribution. A device configured with
NetFlow examines packets for select Open Systems Interconnection (OSI) Network layer and
Transport layer attributes that uniquely identify each traffic flow. The data gathered by NetFlow is
typically exported to management software. You can then analyze the data to facilitate network
planning, customer billing, and traffic engineering. For example, NetFlow can be used to obtain
information about the types of applications generating traffic flows through a router.

A traffic flow can be identified based on the unique combination of the following seven attributes:
source IP address, destination IP address, source port number, destination port number, Layer 3
protocol type, Type of Service (ToS) byte, and input logical interface.

Although NetFlow does not use Data Link layer information, such as a source Media Access
Control (MAC) address, to identify a traffic flow, the input interface on a switch will be considered
when identifying a traffic flow.
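Flow identification amounts to keying packets on that seven-tuple and counting what accumulates under each key. A minimal sketch, with sample packets invented for illustration:

```python
# Sketch: aggregating packets into flows keyed on the NetFlow-style
# seven-tuple (source/destination IP, source/destination port, protocol,
# ToS byte, input interface). The sample packets are invented.
from collections import Counter

def flow_key(pkt):
    return (pkt["src_ip"], pkt["dst_ip"], pkt["src_port"], pkt["dst_port"],
            pkt["proto"], pkt["tos"], pkt["in_if"])

packets = [
    {"src_ip": "10.0.0.1", "dst_ip": "10.0.0.2", "src_port": 40000,
     "dst_port": 80, "proto": "tcp", "tos": 0, "in_if": "ge0/1"},
    {"src_ip": "10.0.0.1", "dst_ip": "10.0.0.2", "src_port": 40000,
     "dst_port": 80, "proto": "tcp", "tos": 0, "in_if": "ge0/1"},
    {"src_ip": "10.0.0.3", "dst_ip": "10.0.0.2", "src_port": 40001,
     "dst_port": 80, "proto": "tcp", "tos": 0, "in_if": "ge0/2"},
]

flows = Counter(flow_key(p) for p in packets)
print(len(flows))           # 2 distinct flows
print(max(flows.values()))  # the repeated packet belongs to a 2-packet flow
```

Any packet that differs in even one of the seven attributes starts a new flow record, which is why the first two packets collapse into one flow and the third does not.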

NBAR is a Quality of Service (QoS) feature that classifies application traffic that flows through a
router interface. NBAR enables a router to perform deep packet inspection for all packets that
pass through an NBAR-enabled interface. With deep packet inspection, an NBAR-enabled router
can classify traffic based on the content of a Transmission Control Protocol (TCP) or a User
Datagram Protocol (UDP) packet, instead of just the network header information. In addition,
NBAR provides statistical reporting relative to each recognized application.

Reference:

CCDA 200-310 Official Cert Guide, Chapter 6, Classification, p. 233

CCDA 200-310 Official Cert Guide, Chapter 15, NetFlow, pp. 626-628

Cisco: Cisco IOS Switching Services Configuration Guide, Release 12.2: Capturing Traffic Data

QUESTION NO: 10

Which of the following are required when configuring a VSS? (Choose two.)

A.
HSRP

B.
GLBP

C.
VRRP

D.
identical supervisor types

E.
identical IOS versions

Answer: D,E
Explanation:
Section: Considerations for Expanding an Existing Network

Identical supervisor types and identical IOS versions are required when configuring a Virtual
Switching System (VSS). VSS is a Cisco physical device virtualization feature that can enable a
pair of chassis-based switches, such as the Cisco Catalyst 6500, to function as a single logical
device. There are two identical supervisors in a VSS, one on each physical device, and one
control plane. One of the supervisors is active, and the other is designated as hot-standby; the
active supervisor manages the control plane. If the active supervisor in a VSS goes down, the hot-
standby will automatically take over as the new active supervisor. The supervisors in a VSS are
connected through the Virtual Switch Link (VSL).

Hot Standby Router Protocol (HSRP), Gateway Load Balancing Protocol (GLBP), and Virtual
Router Redundancy Protocol (VRRP) are First Hop Redundancy Protocols (FHRPs) and are not
required when configuring a VSS. Conversely, one of the benefits of using VSS is that the need for
HSRP, GLBP, and VRRP is removed.

HSRP is a Cisco-proprietary protocol that enables multiple routers to act as a single gateway for
the network. Each router is configured with a priority value that ranges from 0 through 255, with
100 being the default priority value and 255 being the highest priority value.

GLBP is a Cisco-proprietary protocol used to provide router redundancy and load balancing.
GLBP enables you to configure multiple routers into a GLBP group; the routers in the group
receive traffic sent to a virtual IP address that is configured for the group.

Like GLBP and HSRP, VRRP provides router redundancy. However, similar to HSRP, only one
router is active at any time. If the master router becomes unavailable, one of the backup routers
becomes the master router.

Reference:

CCDA 200-310 Official Cert Guide, Chapter 4, Virtualization Technologies, pp. 153-157

Cisco: Campus 3.0 Virtual Switching System Design Guide: VSL link Initialization and Operational
Characteristics

QUESTION NO: 11

What AD is assigned to iBGP routes by default?

A.
1

B.
20

C.
0

D.
100

E.
200

Answer: E

Explanation:
Section: Addressing and Routing Protocols in an Existing Network

Internal Border Gateway Protocol (iBGP) routes are assigned an administrative distance (AD) of
200 by default. AD values are used to determine the routing protocol that should be preferred
when multiple routes to a destination network exist. A route from a source with a lower AD will be
preferred over a route from a source with a higher AD. The following list contains the most
commonly used default ADs: connected interface, 0; static route, 1; eBGP, 20; internal EIGRP, 90;
IGRP, 100; OSPF, 110; IS-IS, 115; RIP, 120; external EIGRP, 170; iBGP, 200.

External BGP (eBGP) routes are assigned an AD of 20 by default. Routes to networks in other
autonomous systems (ASes) are called interdomain routes. Interdomain routes typically exist only
on routers that border an AS. Therefore, if multiple routes to an external network exist in the
routing table, the router should use the route advertised by an interdomain routing protocol, such
as eBGP, over the route advertised by an intradomain routing protocol, such as Open Shortest
Path First (OSPF).

Directly connected routes have an AD of 0. Therefore, directly connected routes are trusted over
routes from any other source. Static routes have an AD of 1. Like directly connected routes, static
routes are more trusted than routes from any routing protocol. Static routes are optimal for routing
networks that do not change often.

Routes that are learned by Interior Gateway Routing Protocol (IGRP) have an AD of 100 by
default. Routes advertised by an interior routing protocol, such as IGRP, are typically intradomain
routes.
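Route-source selection by AD is simply a minimum over the candidate routes. A sketch using the default Cisco AD values (the candidate routes are invented for illustration):

```python
# Sketch: route-source preference by administrative distance. The AD values
# are the Cisco defaults; the candidate routes are invented for illustration.

DEFAULT_AD = {
    "connected": 0, "static": 1, "ebgp": 20, "eigrp": 90,
    "igrp": 100, "ospf": 110, "isis": 115, "rip": 120, "ibgp": 200,
}

def preferred_route(candidates):
    """candidates: list of (source, next_hop) tuples for the same prefix."""
    return min(candidates, key=lambda c: DEFAULT_AD[c[0]])

routes = [("ospf", "10.1.1.1"), ("ebgp", "10.2.2.2"), ("ibgp", "10.3.3.3")]
print(preferred_route(routes))  # ('ebgp', '10.2.2.2') -- lowest AD wins
```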

Reference:
CCDA 200-310 Official Cert Guide, Chapter 10, Administrative Distance, pp. 386-387

Cisco: What Is Administrative Distance?

QUESTION NO: 12

In which of the following ways are Cisco IPS and IDS devices similar? (Choose two.)

A.
They both sit in the path of network traffic.

B.
Neither sits in the path of network traffic.

C.
They both prevent malicious traffic from infiltrating the network.

D.
They both provide real-time monitoring of malicious traffic.

E.
They can both use signatures to detect malicious traffic.

Answer: D,E
Explanation:
Section: Considerations for Expanding an Existing Network

Cisco Intrusion Prevention System (IPS) and Intrusion Detection System (IDS) devices are similar
in that they both provide real-time monitoring of malicious traffic and they can both use signatures
to detect malicious traffic. Signature-based detection methods use specific strings of text to detect
malicious traffic. Protocols and port numbers can be checked to further specify malicious traffic
patterns that match a signature. The benefit of signature-based detection methods is that the
number of false positives generated is typically low.
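Signature matching as described above reduces to scanning a payload for known byte patterns, optionally narrowed by protocol and port. A minimal sketch; the signatures and packet are invented for illustration:

```python
# Sketch: signature-based detection -- match a payload byte pattern,
# optionally narrowed by protocol and port. Signatures here are invented.

signatures = [
    {"id": 1, "pattern": b"/etc/passwd", "proto": "tcp", "port": 80},
    {"id": 2, "pattern": b"\x90\x90\x90\x90", "proto": None, "port": None},
]

def match(packet):
    hits = []
    for sig in signatures:
        if sig["proto"] and sig["proto"] != packet["proto"]:
            continue  # signature restricted to a different protocol
        if sig["port"] and sig["port"] != packet["port"]:
            continue  # signature restricted to a different port
        if sig["pattern"] in packet["payload"]:
            hits.append(sig["id"])
    return hits

pkt = {"proto": "tcp", "port": 80, "payload": b"GET /etc/passwd HTTP/1.1"}
print(match(pkt))  # [1]
```

Because a hit requires an exact pattern (plus any protocol/port constraint), false positives stay low, at the cost of missing traffic that no signature describes.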

Other detection methods employed by IPS and IDS devices include policy-based detection and
anomaly-based detection. Policy-based detection methods use algorithms to detect patterns in
network traffic. Anomaly-based detection methods are used to detect abnormal behavior on a
network based on traffic that is classified as normal or abnormal.

IPS devices sit in the path of network traffic; however, IDS devices do not. Because traffic flows
through an IPS, an IPS can detect malicious traffic as it enters the IPS device and can prevent the
malicious traffic from infiltrating the network. An IPS is typically installed inline on the inside
interface of a firewall. Placing the IPS behind the firewall ensures that the IPS does not waste its
resources processing traffic that will ultimately be discarded by the firewall; however, this
placement will prevent the IPS from having visibility into traffic that is not destined to pass through
the firewall. The following diagram illustrates an IPS operating in inline mode:

By contrast, an IDS device merely sniffs the network traffic by using a promiscuous network
interface, which is typically connected to a Remote Switched Port Analyzer (RSPAN) port on a
switch. Because network traffic does not flow through an IDS device, the IDS device functions as a
passive sensor and cannot directly prevent malicious traffic from infiltrating the network. However,
when an IDS detects malicious traffic, it can alert other network devices in the traffic path so that
further traffic can be blocked. In addition, an IDS can be configured to send a Transmission
Control Protocol (TCP) reset notification to the source and destination addresses. The following
diagram illustrates an IDS operating in promiscuous mode:

An IPS can be configured to operate in monitor-only mode, which effectively makes the IPS
function as an IDS. When operating in monitor-only mode, an IPS does not sit in line with the flow
of traffic and must rely on a passive connection to an RSPAN port in order to have the most
visibility into the internal network.

Reference:

CCDA 200-310 Official Cert Guide, Chapter 13, IPS/IDS Fundamentals, pp. 534-535

Cisco: Managed Security Services Partnering for Network Security: Managed Intrusion Detection
and Prevention Systems

QUESTION NO: 13

View the Exhibit.



Which of the following terms most accurately defines the type of NAT that is used on the network
above?

A.
NAT overloading

B.
NAT overlapping

C.
static NAT

D.
dynamic NAT

Answer: A
Explanation:
Section: Addressing and Routing Protocols in an Existing Network

The network in this scenario uses Network Address Translation (NAT) overloading. NAT
overloading uses ports to translate inside local addresses to one or more inside global addresses.
The NAT router uses port numbers to keep track of which packets belong to each host. NAT
overloading is also called Port Address Translation (PAT).

NAT translates between public and private IP addresses to enable hosts on a privately addressed
network to access the Internet. Public addresses are routable on the Internet, and private
addresses are routable only on internal networks. Request for Comments (RFC) 1918 defines
several IP address ranges that are reserved for private, internal use: 10.0.0.0 through 10.255.255.255 (10.0.0.0/8), 172.16.0.0 through 172.31.255.255 (172.16.0.0/12), and 192.168.0.0 through 192.168.255.255 (192.168.0.0/16).

Because NAT performs address translation between private and public addresses, NAT effectively
hides the address scheme used by the internal network, which can increase security. NAT also
reduces the number of public IP addresses that a company needs to allow its devices to access
Internet resources, thereby conserving IP version 4 (IPv4) address space.
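As an illustration, the reserved ranges above can be checked programmatically with Python's standard ipaddress module. Note that is_private also covers additional IANA special-use ranges beyond the three RFC 1918 blocks, so this is a sketch rather than a strict RFC 1918 test:

```python
# Sketch: checking addresses against the private ranges using Python's
# standard ipaddress module. is_private covers the RFC 1918 blocks as
# well as other IANA special-use ranges.
import ipaddress

for addr in ["10.1.2.3", "172.16.0.1", "192.168.1.10", "8.8.8.8"]:
    ip = ipaddress.ip_address(addr)
    print(addr, ip.is_private)
# 10.1.2.3 True
# 172.16.0.1 True
# 192.168.1.10 True
# 8.8.8.8 False
```

The three RFC 1918 addresses are flagged as private, whereas a publicly routable address is not.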

An inside local address is typically an RFC 1918-compliant IP address that represents an internal
host to the internal network. An inside global address is used to represent an internal host to an
external network.

Static NAT translates a single inside local address to a single inside global address or a single
outside local address to a single outside global address. You can configure a static inside local-to-
inside global IP address translation by issuing the ip nat inside source static inside-local inside-
global command. To configure a static outside local-to-outside global IP address translation, you
should issue the ip nat outside source static outside-global outside-local command.

Dynamic NAT translates local addresses to global addresses that are allocated from a pool. To create a NAT pool, you should issue the ip nat pool nat-pool start-ip end-ip {netmask mask | prefix-length prefix} command. To enable translation of inside local addresses, you should issue the ip nat inside source list access-list pool nat-pool [overload] command.

When a NAT router receives an Internet-bound packet from a local host, the NAT router performs the following tasks: it checks the translation table for an existing mapping for the inside local address; if no mapping exists, it allocates an available inside global address from the pool and creates one; it then rewrites the packet's source address and forwards the packet toward its destination.

When all the inside global addresses in the NAT pool are mapped, no other inside local hosts will
be able to communicate on the Internet. This is why NAT overloading is useful. When NAT
overloading is configured, an inside local address, along with a port number, is mapped to an
inside global address. The NAT router uses port numbers to keep track of which packets belong to
each host:

You can issue the ip nat inside source list access-list interface outside-interface overload
command to configure NAT overload with a single inside global address, or you can issue the ip
nat inside source list access-list pool nat-pool overload command to configure NAT overload
with a NAT pool.
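The port-tracking behavior of PAT can be sketched in a few lines of Python. This is a simplified illustration: the addresses and port numbers are hypothetical, and a real PAT implementation also tracks the protocol and destination of each flow:

```python
# Minimal sketch of a PAT (NAT overload) translation table.
# Addresses and ports are hypothetical examples, not a real configuration.

class PatTable:
    def __init__(self, inside_global, first_port=1024):
        self.inside_global = inside_global   # single public IP shared by all hosts
        self.next_port = first_port          # next free translated source port
        self.map = {}                        # (inside_local, local_port) -> global_port

    def translate(self, inside_local, local_port):
        """Return the (inside_global, port) pair used on the outside."""
        key = (inside_local, local_port)
        if key not in self.map:              # allocate a port for a new flow
            self.map[key] = self.next_port
            self.next_port += 1
        return (self.inside_global, self.map[key])

pat = PatTable("203.0.113.1")
print(pat.translate("192.168.1.10", 51000))  # ('203.0.113.1', 1024)
print(pat.translate("192.168.1.11", 51000))  # ('203.0.113.1', 1025)
print(pat.translate("192.168.1.10", 51000))  # same flow reuses its mapping
```

Because each inside host is distinguished by its translated port rather than by a dedicated public address, many hosts can share a single inside global address.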

You should use NAT overlapping when the addresses on an internal network conflict with the addresses on another network. The internal addresses must be translated to unique addresses on the external network, and addresses on the external network must be translated to unique addresses on the internal network; the translation can be performed through either static or dynamic NAT.

Reference:

CCDA 200-310 Official Cert Guide, Chapter 8, NAT, pp. 300-302

Cisco: Configuring Network Address Translation: Getting Started: Example: Allowing Internal
Users to Access the Internet

QUESTION NO: 14

Which of the following is not required of a collapsed core layer in a three-tier hierarchical network
design?

A.
Layer 2 aggregation

B.
high-speed physical and logical paths

C.
intelligent network services

D.
end user, group, and endpoint isolation

E.
routing and network access policies

Answer: D
Explanation:
Section: Enterprise Network Design

End user, group, and endpoint isolation is not typically required of a collapsed core layer in a
three-tier hierarchical network design. That function is typically provided by the devices in the
access layer. The hierarchical model divides the network into three distinct components: the access layer, the distribution layer, and the core layer.

The access layer provides Network Admission Control (NAC). NAC is a Cisco feature that

prevents hosts from accessing the network if they do not comply with organizational requirements,
such as having an updated antivirus definition file. NAC Profiler automates NAC by automatically
discovering and inventorying devices attached to the LAN. The access layer serves as a media
termination point for endpoints, such as servers and hosts. Because access layer devices provide
access to the network, the access layer is the ideal place to perform user authentication.

Layer 2 aggregation, high-speed physical and logical paths, intelligent network services, and
routing and network access policies are typically provided by the core and distribution layers. The
core layer typically provides the fastest switching path in the network. As the network backbone,
the core layer is primarily associated with low latency and high reliability. The functionality of the
core layer can be collapsed into the distribution layer if the distribution layer infrastructure is
sufficient to meet the design requirements. It is Cisco best practice to ensure that a collapsed core
design can meet resource utilization requirements for the network.

The distribution layer serves as an aggregation point for access layer network links. Because the
distribution layer is the intermediary between the access layer and the core layer, the distribution
layer is the ideal place to enforce security policies, to provide Quality of Service (QoS), and to
perform tasks that involve packet manipulation, such as routing. Summarization and next-hop
redundancy are also performed in the distribution layer.

Reference:

CCDA 200-310 Official Cert Guide, Chapter 2, Collapsed Core Design, p. 49

Cisco: Small Enterprise Design Profile Reference Guide: Collapsed Core Network Design

QUESTION NO: 15

Which of the following statements are true regarding a Cisco IPS device? (Choose two.)

A.
It can block all packets from an attacking device.

B.
It can block packets associated with a particular traffic flow.

C.
It cannot send a TCP reset to an attacking device.

D.
It cannot send an alarm to a management device.

E.
It does not sit in line with the network traffic flow.

Answer: A,B
Explanation:
Section: Considerations for Expanding an Existing Network

A Cisco Intrusion Prevention System (IPS) device sits in line with the network traffic flow; as a
result, it can block malicious traffic before the traffic enters the network. When an IPS device
detects malicious traffic, it can perform the following actions: it can drop the offending packets, block all packets from the attacking device, block packets associated with a particular traffic flow, and send an alarm to a management device.

By contrast, a Cisco Intrusion Detection System (IDS) device can detect an attack, but it cannot
block the packets. IDS devices use a single, promiscuous interface to monitor traffic and do not sit
in line with the traffic flow; as a result, they cannot block malicious traffic before it enters the
network. When an IDS device detects malicious traffic, it can perform the following actions: it can send an alarm to a management device, and it can send a TCP reset notification to the source and destination addresses.

Reference:

CCDA 200-310 Official Cert Guide, Chapter 13, IPS/IDS Fundamentals, pp. 534-535

Cisco: Managed Security Services Partnering for Network Security: Managed Intrusion Detection
and Prevention Systems

QUESTION NO: 16

Which of the following statements is true regarding Cisco best practices when implementing QoS
in a campus LAN?

A.
Traffic should be classified as close to its destination as possible.

B.
Traffic should be marked as close to its source as possible.

C.
Traffic should be policed as close to the destination as possible.

D.
QoS mechanisms should always be implemented in software when possible.

Answer: B

Explanation:
Section: Considerations for Expanding an Existing Network

According to Cisco best practices for implementing Quality of Service (QoS) in a campus LAN,
traffic should be marked as close to its source as possible. Because the access layer provides
direct connectivity to network endpoints, QoS classification and marking are typically performed in
the access layer. Although classification and marking are typically performed in the access layer,
QoS mechanisms must also be implemented throughout the network for QoS to be effective. In
addition, best practice dictates that QoS mechanisms should be implemented in hardware
whenever alternatives to software implementations are available.

Traffic policing is used to limit the rate of traffic that passes through an interface. With traffic
policing, packet flows that exceed the configured thresholds are typically dropped. Alternatively,
traffic can be remarked with a lower priority before being transmitted. Best practice dictates
policing excess traffic as close to its source as possible so that no resources are wasted in
transmitting packets that will ultimately be dropped.
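Traffic policing is commonly modeled as a token bucket. The following Python sketch, with hypothetical rate and burst values, shows how a single-rate policer decides whether a packet conforms (and is transmitted) or exceeds (and is dropped or remarked):

```python
# Minimal single-rate token-bucket policer sketch (illustrative only;
# the rate and burst values are hypothetical, not Cisco defaults).

class Policer:
    def __init__(self, rate_bps, burst_bytes):
        self.rate = rate_bps / 8.0       # token refill rate in bytes/second
        self.burst = burst_bytes         # bucket depth
        self.tokens = burst_bytes        # bucket starts full
        self.last = 0.0                  # timestamp of the last packet

    def conform(self, now, pkt_bytes):
        """True if the packet conforms (transmit); False if it exceeds (drop/remark)."""
        # refill tokens for the elapsed time, capped at the bucket depth
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if pkt_bytes <= self.tokens:
            self.tokens -= pkt_bytes
            return True
        return False

p = Policer(rate_bps=8000, burst_bytes=1500)   # 1000 bytes/s, 1500-byte bucket
print(p.conform(0.0, 1500))   # True: the bucket starts full
print(p.conform(0.1, 1500))   # False: only ~100 bytes have been refilled
```

A burst of back-to-back packets quickly empties the bucket, after which traffic conforms only at the configured average rate.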

Reference:

CCDA 200-310 Official Cert Guide, Chapter 3, Campus LAN QoS Considerations, pp. 111-112

Cisco: Enterprise Campus 3.0 Architecture: Overview and Framework: Principles of Campus QoS
Design

QUESTION NO: 17

Which of the following statements is true regarding the Cisco ACI architecture?

A.
The scalability of the fabric is limited by the available number of leaf ports.

B.
Spine nodes must be fully meshed.

C.
All nonlocal traffic traverses only a single spine node.

D.
Each spine node requires at least two ports per leaf node.

Answer: C

Explanation:
Section: Enterprise Network Design

In the Cisco Application Centric Infrastructure (ACI), all nonlocal traffic traverses only a single
spine node. Cisco ACI is a data center technology that uses switches, categorized as spine and
leaf nodes, to dynamically implement network application policies in response to application level
requirements. Network application policies are defined on a Cisco Application Policy Infrastructure
Controller (APIC) and are implemented by the spine and leaf nodes.

The spine and leaf nodes create a scalable network fabric that is optimized for east-west data
transfer, which in a data center is typically traffic between an application server and its supporting
data services, such as database or file servers. Each spine node requires a connection to each
leaf node; however, spine nodes do not interconnect, nor do leaf nodes interconnect. Despite its
lack of fully meshed connections, this physical topology enables nonlocal traffic to pass from any
ingress leaf interface to any egress leaf interface through a single, dynamically selected spine
node. By contrast, local traffic is passed directly from an ingress interface on a leaf node to the
appropriate egress interface on the same leaf node.

Because a spine node has a connection to every leaf node, the scalability of the fabric is limited by
the number of ports on the spine node, not by the number of ports on the leaf node. In addition,
redundant connections between a spine and leaf pair are unnecessary because the nature of the
topology ensures that each leaf has multiple connections to the network fabric. Therefore, each
spine node requires only a single connection to each leaf node.
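As a rough numerical illustration of this scaling property, the sketch below (with hypothetical port counts) computes the fabric size. Because each spine connects once to every leaf, the number of leaf switches is bounded by the port count of a single spine:

```python
# Spine-leaf fabric sizing sketch. Port counts are hypothetical examples.
# Each spine connects exactly once to every leaf, so leaf capacity is
# bounded by the ports available on one spine node.

def fabric_size(spine_ports, num_spines, num_leaves):
    """Return the total number of fabric links, or raise if the design does not fit."""
    if num_leaves > spine_ports:
        raise ValueError("leaf count exceeds available spine ports")
    return num_spines * num_leaves   # one link per spine-leaf pair

print(fabric_size(spine_ports=36, num_spines=4, num_leaves=20))  # 80 links
```

Adding spine nodes increases aggregate bandwidth and redundancy, but only a spine with more ports allows the fabric to hold more leaves.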

Redundancy is also provided by the presence of multiple APICs, which are typically deployed as a
cluster of three controllers. APICs are not directly involved in forwarding traffic and are therefore
not required to connect to every spine or leaf node. Instead, the APIC cluster is connected to one
or more leaf nodes in much the same manner that other endpoint groups (EPGs), such as
application servers, are connected.

Reference:

CCDA 200-310 Official Cert Guide, Chapter 4, ACI, p. 135

Cisco: Application Centric Infrastructure Overview: Implement a Robust Transport Network for
Dynamic Workloads

QUESTION NO: 18

Which of the following statements regarding VPNs are true? (Choose two.)

A.
Data is transmitted in clear text.

B.
VPNs route traffic over dedicated leased lines.

C.
An ISDN terminal adapter can be used as an endpoint device.

D.
VPNs typically cost less to implement than a traditional WAN.

E.
Workstations do not typically need client software to use a site-to-site VPN.

Answer: D,E
Explanation:
Section: Enterprise Network Design

Virtual private networks (VPNs) typically cost less to implement than a traditional WAN. In
addition, workstations do not typically need client software to use a site-to-site VPN. A VPN
securely connects remote offices or users to a central network by tunneling encrypted traffic
through the Internet. By implementing a VPN solution rather than a point-to-point WAN between
branch offices, a company can benefit from all of the following: reduced WAN costs, encrypted transmissions over the public Internet, and the flexibility to add new sites without provisioning dedicated circuits.

There are two general types of VPN: site-to-site and remote access. A site-to-site VPN is used to
create a tunnel between two remote VPN gateways. Devices on the networks connected to the
gateways do not require additional software to use the VPN; instead, all transmissions are handled
by the gateway device, such as an ASA device. Conversely, a remote access VPN is used to
connect individual clients through the Internet to a central network. Remote access VPN clients
must use either VPN client software or an SSL-based VPN to establish a connection to the VPN
gateway.

An Integrated Services Digital Network (ISDN) terminal adapter cannot be used as a VPN
endpoint device. An ISDN terminal adapter is a device used to connect a computer to an ISDN
network. ISDN is an unencrypted international communications standard for sending voice, video,
and data over digital telephone lines or normal telephone wires.

Reference:

CCDA 200-310 Official Cert Guide, Chapter 7, VPN Benefits, p. 263

Cisco: Virtual Private Network (VPN)


QUESTION NO: 19

View the Exhibit.

You administer the network shown above. All the routers run EIGRP. Automatic summarization is
enabled throughout the network.

On which router or routers should you disable automatic summarization?

A.
RouterE

B.
RouterA and RouterD

C.
RouterB and RouterC

D.
RouterC and RouterE

E.
RouterA, RouterB, and RouterD

F.
all the routers on the network

Answer: B
Explanation:
Section: Addressing and Routing Protocols in an Existing Network

You should disable automatic summarization on RouterA and RouterD. A summary route is used
to advertise a group of contiguous networks as a single route, thus reducing the size of the routing
table. Some routing protocols, such as Enhanced Interior Gateway Routing Protocol (EIGRP) and
Routing Information Protocol version 2 (RIPv2), automatically summarize routes on classful
network boundaries.

Automatic summarization can cause problems when classful networks are discontiguous within a
network topology. A discontiguous subnet exists when a summarized route advertises one or more
subnets that should not be reachable through that route. Therefore, when discontiguous networks
in the same subnet exist in a topology, you should disable automatic summarization with the no auto-summary command. When you disable automatic summarization, the routing protocol can
advertise the actual networks instead of the classful summary. The network diagram shows that
both RouterA and RouterD are configured with different parts of the 172.16.0.0/16 Class B
address space. Because automatic summarization is enabled, RouterA and RouterD will advertise
the 172.16.0.0/16 summary routes to RouterE.
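The overlap can be demonstrated with Python's standard ipaddress module. The subnets below come from the scenario: with auto-summarization enabled, RouterA and RouterD each advertise the same Class B summary, so RouterE cannot distinguish the subnet behind one router from the subnet behind the other:

```python
# Sketch of why classful auto-summarization breaks discontiguous subnets:
# both /24 networks fall inside the same classful /16 summary.
import ipaddress

classful = ipaddress.ip_network("172.16.0.0/16")     # Class B boundary
router_a = ipaddress.ip_network("172.16.1.0/24")     # subnet behind RouterA
router_d = ipaddress.ip_network("172.16.2.0/24")     # subnet behind RouterD

# Both routers would advertise 172.16.0.0/16, even though the two
# subnets themselves do not overlap.
print(router_a.subnet_of(classful))   # True
print(router_d.subnet_of(classful))   # True
print(router_a.overlaps(router_d))    # False
```

Disabling auto-summarization lets each router advertise its actual /24 prefix instead of the ambiguous classful summary.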

You do not need to disable automatic summarization on RouterE. When RouterE receives the 172.16.1.0/24 route from RouterA and the 172.16.2.0/24 route from RouterD, RouterE will
advertise a summarized 172.16.0.0/16 route to RouterB and RouterC. Because RouterB and
RouterC do not contain any part of the 172.16.0.0/16 address space, they will send all traffic
destined for the 172.16.0.0/16 network to RouterE. RouterE will then route the traffic to the
appropriate next-hop router.

You do not need to disable automatic summarization on RouterB. RouterB will advertise a
10.0.0.0/8 summary route to RouterE, and RouterE will advertise the same summary route to the
other routers on the network. Because no other router on the network contains any part of the
10.0.0.0/8 Class A address space, all other routers will send all traffic destined for the 10.0.0.0/8
network to RouterE, which will route the traffic to RouterB.

You do not need to disable automatic summarization on RouterC. RouterC will advertise the
192.168.0.0/24 network to RouterE. Because the other routers on the network do not contain any
part of the 192.168.0.0/24 Class C address space, they will send all traffic destined for the
192.168.0.0/24 network to RouterE, which will route the traffic to RouterC. The point-to-point links
between routers belong to address spaces that do not overlap with each other or with the
192.168.0.0/24 network.

Reference:

CCDA 200-310 Official Cert Guide, Chapter 10, EIGRP Design, p. 404

CCDA 200-310 Official Cert Guide, Chapter 11, Route Summarization, pp. 455-458
Cisco: EIGRP Commands: auto-summary (EIGRP)

QUESTION NO: 20

Which of the following are you least likely to implement in the building access layer of an
enterprise campus network design? (Choose two.)

A.
redundancy

B.
access layer aggregation

C.
STP

D.
PoE

E.
VLAN access

Answer: A,B
Explanation:
Section: Enterprise Network Design

Of the available choices, you are least likely to implement redundancy and access layer
aggregation in the building access layer of an enterprise campus network design. Redundancy is
more likely to be implemented between the core and distribution layers. Access layer aggregation
is more likely to be implemented in the distribution layer.

The enterprise campus module consists of the following submodules: building access layer,
building distribution layer, campus core layer, edge distribution, and data center. The campus core
layer of the enterprise campus module provides fast transport services between buildings and the
data center. Because the campus core layer acts as the network's backbone, it is essential that
every building distribution layer device have multiple paths to the campus core layer. Multiple
paths between the campus core and building distribution layer devices ensure that network
connectivity is maintained if a link or device fails in either layer. Layer 3 switching typically takes
place in the campus core layer.

The building distribution layer of the enterprise campus module provides link aggregation between
layers. Because the building distribution layer is the intermediary between the building access
layer and the campus core layer, the building distribution layer is the ideal place to enforce
security policies, provide load balancing, provide Quality of Service (QoS), and perform tasks that
involve packet manipulation, such as routing. Because the building distribution layer connects to
both the building access layer and the campus core layer, it often comprises multilayer switches
that can perform both Layer 3 routing functions and Layer 2 switching functions.

The access layer, which typically comprises Layer 2 switches, serves as a media termination point
for devices, such as servers and workstations. Because building access layer devices provide
access to the network, the building access layer is the ideal place to perform user authentication
and to institute port security. Because the access layer connects directly to end devices, features
such as Spanning Tree Protocol (STP), Power over Ethernet (PoE), and virtual LAN (VLAN)
access are most likely to be implemented in this layer. High availability, broadcast suppression,
and rate limiting are also characteristics of access layer devices.

Reference:

CCDA 200-310 Official Cert Guide, Chapter 2, Enterprise Campus Module, pp. 50-51

QUESTION NO: 21

On which of the following layers of the hierarchical network design model should you implement
PortFast, BPDU guard, and root guard?

A.
only on core layer ports

B.
only on distribution layer ports

C.
only on access layer ports

D.
only on core and distribution layer ports

E.
on core, distribution, and access layer ports

Answer: C

Explanation:
Section: Enterprise Network Design

You should implement PortFast, BPDU guard, and root guard only on access layer ports.
PortFast, BPDU guard, and root guard are enhancements to Spanning Tree Protocol (STP). The
access layer is the network hierarchical layer where end-user devices connect to the network. The
distribution layer is used to connect the devices at the access layer to those in the core layer. The
core layer, which is also referred to as the backbone, is used to provide connectivity to devices
connected through the distribution layer.

PortFast reduces convergence time by immediately placing user access ports into a forwarding
state.

PortFast is recommended only for ports that connect to end-user devices, such as desktop
computers. Therefore, you would not enable PortFast on ports that connect to other switches,
including distribution layer ports and core layer ports. To enable PortFast, issue the spanning-tree portfast command from interface configuration mode. Configuring BPDU filtering on a port that is
also configured for PortFast causes the port to ignore any bridge protocol data units (BPDUs) it
receives, effectively disabling STP.

BPDU guard disables ports that erroneously receive BPDUs. User access ports should never
receive BPDUs, because user access ports should be connected only to end-user devices, not to
other switches. When BPDU guard is applied, the receipt of a BPDU on a port with BPDU guard
enabled will result in the port being placed into a disabled state, which prevents loops from
occurring. To enable BPDU guard, issue the spanning-tree bpduguard enable command from
interface configuration mode.

Root guard is used to prevent newly introduced switches from being elected as the root. The
device with the lowest bridge priority is elected the root. If an additional device is added to the
network with a lower priority than the current root, it will become the new root. However, this could
cause the network to reconfigure in unintended ways, particularly if an access layer switch were to
become the root. To prevent this, root guard can be applied to ports that connect to other switches
in order to maintain control over which switch is the root. Root guard is applied on a per-port basis with the spanning-tree guard root command.

Reference:

CCDA 200-310 Official Cert Guide, Chapter 3, Cisco STP Toolkit, pp. 103-105

Cisco: Campus Network for High Availability Design Guide: Spanning Tree Protocol Versions

Cisco: Campus Network for High Availability Design Guide: Best Practices for Optimal
Convergence


QUESTION NO: 22

Which of the following AP modes do not provide client connectivity? (Choose three.)

A.
monitor mode

B.
HREAP mode

C.
local mode

D.
bridge mode

E.
sniffer mode

F.
rogue detector mode

Answer: A,E,F
Explanation:
Section: Considerations for Expanding an Existing Network

Access point (AP) monitor mode, sniffer mode, and rogue detector mode do not provide client
connectivity. After adding an AP to a wireless LAN controller (WLC), you can configure the AP to
operate in one of six modes depending on your needs and the capability of the AP. Three of the
modes, monitor mode, sniffer mode, and rogue detector mode, are used primarily for monitoring
and administrative purposes and, thus, do not provide client connectivity. For example, you can
place an AP in monitor mode to configure the AP to scan wireless traffic. You can place an AP in
sniffer mode to configure the AP to scan for wireless traffic and to send the traffic to a
management station running a tool such as Wireshark. You can configure an AP to operate in
rogue detector mode to configure the AP to scan traffic on the wired connection in search of
unauthorized APs and unauthorized clients on the wired network.

Hybrid remote edge access point (HREAP) mode, which is also known as FlexConnect, allows for
client connectivity. An AP operating in HREAP mode can be remotely managed from a central
location via a WAN link, which is useful when deploying APs to remote offices. This enables
administrators to deploy an AP in the remote office without also needing to deploy a WLC to the
office. Furthermore, APs operating in HREAP mode can provide client connectivity even if the
connection to the remote WLC is lost.

An AP is in local mode by default. In this mode, the AP can connect to a WLC and can provide
client connectivity. In addition, an AP operating in local mode monitors all wireless channels.

An AP operating in bridge mode can provide client connectivity. You can configure an AP for
bridge mode in order to use the AP to connect two networks together. In addition, you can use this
mode to configure a mesh network.

Reference:

CCDA 200-310 Official Cert Guide, Chapter 5, AP Modes, pp. 180-181

CCDA 200-310 Official Cert Guide, Chapter 5, Hybrid REAP, p. 200

Cisco: Cisco Wireless Control System Configuration Guide, Release 6.0: Configuring Access
Points

QUESTION NO: 23

You want to implement a protocol to provide secure communications between a web browser and
a web server.

Which of the following protocols should you use?

A.
HTTP

B.
TLS

C.
EAP

D.
GRE

E.
IPSec

Answer: B
Explanation:
Section: Enterprise Network Design

You should use Transport Layer Security (TLS) to secure communication between a web browser
and a web server. TLS is a protocol derived from Secure Sockets Layer version 3 (SSLv3) and is
commonly used to protect traffic between a web browser and a web server. Secure Sockets Layer
(SSL), which was designed by Netscape in the early 1990s, uses signed digital certificates to
provide web traffic authentication, encryption, and nonrepudiation. In 1995, the first public version
of SSL version 2 (SSLv2) was released, but shortly thereafter it was found to have cryptographic
flaws. The flaws led the Netscape team to completely redesign the protocol and resulted in the
release of SSLv3 the following year. Since then, SSL has been used all over the world to secure
web traffic. In 1999, the Internet Engineering Task Force (IETF) published Request for Comments
(RFC) 2246, which defined a new protocol called TLS, which is based on SSLv3. Although TLS is
based on SSLv3, the two protocols are not directly compatible. In fact, the latest version of TLS,
TLS version 1.2 (TLSv1.2), explicitly prevents TLS from actively negotiating SSLv2 sessions
because of the known weaknesses with the older ciphers. The IETF recommends running SSLv3
or TLSv1 (or higher) and disabling all previous versions to mitigate the risk of compromised
sessions.
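As a minimal illustration, Python's standard ssl module can open a TLS session to a web server. The host name below is a placeholder, and the minimum-version setting reflects the general recommendation to refuse legacy SSL/TLS versions:

```python
# Minimal sketch of establishing a TLS connection with Python's standard
# library. The host name is a placeholder; substitute a real server.
import ssl
import socket

def tls_handshake(host, port=443):
    """Open a TLS connection and return the negotiated protocol version."""
    context = ssl.create_default_context()            # validates server certificates
    context.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse legacy SSL/TLS versions
    with socket.create_connection((host, port)) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            return tls.version()                      # e.g. 'TLSv1.3'

# print(tls_handshake("example.com"))   # requires network access
```

The default context performs certificate validation and host-name checking, which provide the authentication and nonrepudiation properties described above.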

You should not use Generic Routing Encapsulation (GRE). In addition, you should not use IP
Security (IPSec). GRE over IPSec provides support for IP multicast and dynamic routing protocol
traffic. In addition, it provides support for non-IP protocols. Because the focus of GRE is to
transport many different protocols, it has very limited security features. Therefore, GRE relies on
IPSec to provide data confidentiality and data integrity. Although GRE was developed by Cisco,
GRE works on Cisco and non-Cisco routers.

You should not use Hypertext Transfer Protocol (HTTP). HTTP is typically used to request a
resource from another computer, such as a web server, on the Internet. For example, when a web
browser is used to visit a website, the Uniform Resource Locator (URL) typically begins with http://
because the browser is using HTTP to request a resource. HTTP is not used to secure the
communication between a web browser and a web server.

You should not use Extensible Authentication Protocol (EAP). EAP is an authentication technology
that is typically used on wireless networks. There are many different types of EAP that are
supported by a wide array of products. For example, Lightweight EAP (LEAP) is a type of EAP
developed by Cisco that uses dynamic Wired Equivalent Privacy (WEP) keys for mutual
authentication between wireless devices and a Remote Authentication Dial-In User Service
(RADIUS) device. However, other types of EAP can be used by authentication, authorization, and
accounting (AAA) protocols, such as RADIUS and DIAMETER. DIAMETER was originally
designed to be a more secure replacement for RADIUS.

Reference:

Cisco: SSL: Foundation for Web Security The Internet Protocol Journal Volume 1, No. 1

QUESTION NO: 24

You are planning a network by using the top-down design method. You are using structured
design principles to generate a model of the completed system.

Which of the following are you most likely to ignore?

A.
business goals

B.
future network services

C.
network protocols

D.
technical objectives

E.
applications

Answer: C
Explanation:
Section: Design Methodologies

Most likely, you will ignore network protocols if you are using structured design principles to
generate a model of the completed system if that system is being planned by using the top-down
design method. The top-down network design approach is typically used to ensure that the
eventual network build will properly support the needs of the network's use cases. In other words,
a top-down design approach typically begins at the Application layer, or Layer 7, of the Open
Systems Interconnection (OSI) reference model and works down the model to the Physical layer,
or Layer 1. Because a top-down design model of the completed system is intended to provide an
overview of how the system functions, lower OSI layer specifics such as network protocols should
not be included in the model.

Part of the top-down design process is using structured design principles to create a complete
model of the system that includes the business goals, existing and future network services,
technical objectives, and applications. In order for the designer and the organization to obtain a
complete picture of the design, the designer should create models that represent the logical
functionality of the system, the physical functionality of the system, and the hierarchical layered
functionality of the system.

Reference:

Cisco: Using the Top-Down Approach to Network Design: Structured Design Principles (Flash)

QUESTION NO: 25

Which of the following features are provided by IPSec? (Choose three.)

A.
broadcast packet encapsulation

B.
data confidentiality

C.
data integrity

D.
multicast packet encapsulation

E.
data origin authentication

Answer: B,C,E
Explanation:
Section: Enterprise Network Design

IP Security (IPSec) can provide data confidentiality, data integrity, and data origin authentication.
IPSec is an open standard protocol that uses Encapsulating Security Payload (ESP) to provide
data confidentiality. ESP encrypts an entire IP packet and encapsulates it as the payload of a new
IP packet. Because the entire IP packet is encrypted, the data payload and header information
remain confidential. In addition, IPSec uses Authentication Header (AH) to ensure the integrity of a
packet and to authenticate the origin of a packet. AH does not authenticate the identity of an
IPSec peer, instead, AH verifies only that the source address in the packet has not been modified
during transit. IPSec is commonly used in virtual private networks (VPNs).
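
The integrity and origin-authentication guarantees described above rest on a keyed hash carried with each protected packet. The following Python sketch is a conceptual illustration of that idea, not a real IPSec or AH implementation; the key and packet values are invented:

```python
import hashlib
import hmac

def compute_icv(shared_key: bytes, packet: bytes) -> bytes:
    """Compute an integrity check value (ICV) over the packet contents,
    similar in spirit to the keyed hash carried by AH."""
    return hmac.new(shared_key, packet, hashlib.sha256).digest()

def verify_icv(shared_key: bytes, packet: bytes, icv: bytes) -> bool:
    """A receiver holding the same key recomputes the ICV; a match shows the
    packet was not modified in transit and came from a holder of the key."""
    return hmac.compare_digest(compute_icv(shared_key, packet), icv)

key = b"pre-shared-key"
packet = b"src=10.0.0.1 payload=hello"
icv = compute_icv(key, packet)

print(verify_icv(key, packet, icv))         # unmodified packet verifies
print(verify_icv(key, packet + b"x", icv))  # tampered packet fails
```

Note that, as in AH, this proves only that the sender holds the shared key; it does not by itself authenticate a peer's identity.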

Generic Routing Encapsulation (GRE), not IPSec, provides broadcast and multicast packet
encapsulation. GRE is a Cisco proprietary protocol that can tunnel traffic from one network to
another without requiring the transport network to support the network protocols in use at the
tunnel source or tunnel destination. For example, a GRE tunnel can be used to connect two
AppleTalk networks through an IP only network. Because the focus of GRE is to transport many
different protocols, it has very limited security features. By contrast, IPSec has strong data
confidentiality and data integrity features but it can transport only IP traffic. GRE over IPSec

combines the best features of both protocols to securely transport any protocol over an IP
network.

Reference:

CCDA 200-310 Official Cert Guide, Chapter 7, Enterprise Managed VPN: IPsec, pp. 255-259
Cisco: Configuring Security for VPNs with IPsec: IPsec Functionality Overview

QUESTION NO: 26

Which of the following is a BGP attribute that represents the external metric of a route?

A.
community

B.
confederation

C.
route reflector

D.
MED

E.
local preference

Answer: D
Explanation:
Section: Addressing and Routing Protocols in an Existing Network

The multi-exit discriminator (MED) is the Border Gateway Protocol (BGP) attribute that represents
the external metric of a route. A MED value is basically the external metric of a route. BGP uses a
complex method of selecting the best path to the destination. The following list displays the criteria
used by BGP for path selection:

When determining the best path, a BGP router first chooses the route with the highest weight. The
weight value is significant only to the local router; it is not advertised to neighbor routers.

When weight values are equal, a BGP router chooses the route with the highest local preference.
The local preference value is advertised to iBGP neighbor routers to influence routing decisions

made by those routers.

When local preferences are equal, a BGP router chooses locally originated paths over externally
originated paths. Locally originated paths that have been created by issuing the network command
or redistribute command are preferred over locally originated paths that have been created by
issuing the aggregate-address command.

If multiple paths to a destination still exist, a BGP router chooses the route with the shortest AS
path attribute. The AS path attribute contains a list of the AS numbers (ASNs) that a route passes
through.

If multiple paths have the same AS path length, a BGP router chooses the lowest origin type. An
origin type of i, which is used for IGPs, is preferred over an origin type of e, which is used for
Exterior Gateway Protocols (EGPs). Both of these origin types are preferred over an origin type of
?, which is used for incomplete routes where the origin is unknown or the route was redistributed
into BGP.

If origin types are equal, a BGP router chooses the route with the lowest MED. If MED values are
equal, a BGP router chooses eBGP routes over iBGP routes. If there are multiple eBGP paths, or
multiple iBGP paths if no eBGP paths are available, a BGP router chooses the route with the
lowest IGP metric to the next hop router. If IGP metrics are equal, a BGP router chooses the
oldest eBGP path, which is typically the most stable path.

Finally, if route ages are equal, a BGP router chooses the path that comes from the router with the
lowest RID. The RID can be manually configured by issuing the bgp router-id command. If the RID
is not manually configured, the RID is the highest loopback IP address on the router. If no
loopback address is configured, the RID is the highest IP address from among a router's available
interfaces.
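
The ordered tie-breakers above can be expressed as a single comparison. The following Python sketch is a simplified illustration with invented routes, not a real BGP implementation; it applies only the first few criteria (highest weight, then highest local preference, then shortest AS path, then lowest origin rank, then lowest MED):

```python
# Origin ranking: IGP (i) preferred over EGP (e), both preferred over incomplete (?).
ORIGIN_RANK = {"i": 0, "e": 1, "?": 2}

def best_path(routes):
    """Pick the best route using the first few BGP tie-breakers in order."""
    return min(
        routes,
        key=lambda r: (
            -r["weight"],          # highest weight wins
            -r["local_pref"],      # then highest local preference
            len(r["as_path"]),     # then shortest AS path
            ORIGIN_RANK[r["origin"]],  # then lowest origin type
            r["med"],              # then lowest MED
        ),
    )

routes = [
    {"weight": 0, "local_pref": 100, "as_path": [65001, 65002], "origin": "i", "med": 10},
    {"weight": 0, "local_pref": 200, "as_path": [65001, 65002, 65003], "origin": "i", "med": 50},
]

# The second route wins despite its longer AS path, because local preference
# is evaluated before AS path length.
print(best_path(routes)["local_pref"])
```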

Neither a confederation nor a route reflector is a BGP attribute. Confederations and route
reflectors are both a means of mitigating performance issues that arise from large, full-mesh iBGP
configurations. A full-mesh configuration enables each router to learn each iBGP route
independently without passing through a neighbor. However, a full-mesh configuration requires the
most administrative effort to configure. A confederation enables an AS to be divided into discrete
units, each of which acts like a separate AS. Within each confederation, the routers must be fully
meshed unless a route reflector is established. A route reflector can be used to pass iBGP routes
between iBGP routers, eliminating the need for a full-mesh configuration. However, it is important
to note that route reflectors advertise best paths only to route reflector clients. In addition, if
multiple paths exist, a route reflector will always advertise the exit point that is closest to the route
reflector.

Reference:

CCDA 200-310 Official Cert Guide, Chapter 11, BGP Attributes, Weight, and the BGP Decision
Process, pp. 449-455

CCDA 200-310 Official Cert Guide, Chapter 11, Route Reflectors, pp. 446-448
CCDA 200-310 Official Cert Guide, Chapter 11, Confederations, pp. 448-449

Cisco: BGP Best Path Selection Algorithm

Cisco: Integrity Checks: IBGP Neighbors Not Fully Meshed

QUESTION NO: 27

Which of the following is a characteristic of the bottom-up design approach?

A.
It requires a complete analysis of the organization's needs.

B.
It relies on previous experience.

C.
It focuses on applications and services.

D.
It accounts for projected infrastructure growth.

Answer: B
Explanation:
Section: Design Methodologies

The bottom-up design approach relies on previous experience rather than on a thorough analysis
of organizational requirements or projected growth. The bottom-up design approach takes its
name from the methodology of starting with the lower layers of the Open Systems Interconnection
(OSI) model, such as Physical, Data Link, Network, and Transport layers, and working upward
toward the higher layers. The bottom-up approach focuses on the devices and technologies that
should be implemented in a design, instead of focusing on the applications and services that will
be used on the network. Because the bottom-up approach does not use a detailed analysis of an
organization's requirements, the bottom-up approach can be much less time consuming than the
top-down design approach. However, the bottom-up design approach can often lead to network
redesigns because the design does not provide a "big picture" overview of the current network or
its future requirements.

By contrast, the top-down design approach takes its name from the methodology of starting with
the higher layers of the OSI model, such as the Application, Presentation, and Session layers, and
working downward toward the lower layers. The top-down design approach requires a thorough
analysis of the organization's requirements. As a result, the top-down design approach is a more
time consuming process than the bottom-up design approach. With the top-down approach, the
designer obtains a complete overview of the existing network and the organization's needs. With
this "big picture" overview, the designer can then focus on the applications and services that meet
the organization's current requirements. By focusing on the applications and services required in
the design, the designer can work in a modular fashion that will ultimately facilitate the
implementation of the actual design. In addition, the flexibility of the resulting design is typically
much improved over that of the bottom-up approach because the designer can account for the
organization's projected needs.

Reference:

CCDA 200-310 Official Cert Guide, Chapter 1, Top-Down Approach, pp. 24-25

Cisco: Using the Top-Down Approach to Network Design: 4. Top-Down and Bottom-Up Approach
Comparison (Flash)

QUESTION NO: 28

Which of the following WLC interfaces is used for Layer 2 discovery?

A.
the management interface

B.
the service port interface

C.
the AP manager interface

D.
the dynamic interface

E.
the virtual interface

Answer: A
Explanation:
Section: Considerations for Expanding an Existing Network

The management interface on a Cisco wireless LAN controller (WLC) is used for Layer 2
discovery. A WLC interface is a logical interface that can be mapped to at least one physical port.
The port mapping is typically implemented as a virtual LAN (VLAN) on an 802.1Q trunk. A WLC
has five interface types:

The management interface is used for in-band management, for Layer 2 discovery operations, and
for enterprise services such as authentication, authorization, and accounting (AAA). The service
port interface is statically mapped to the service port on the WLC and is used for out-of-band
management. The AP manager interface is used for Layer 3 discovery operations and handles all
Layer 3 communications between the WLC and an associated AP.

The virtual interface is a special interface used to support wireless client mobility. The virtual
interface acts as a Dynamic Host Configuration Protocol (DHCP) server placeholder and supports
DHCP relay functionality. In addition, the virtual interface is used to implement Layer 3 security,
such as redirects for a web authentication login page.

The dynamic interface type is used to map VLANs on the WLC for wireless client data transfer. A
WLC can support up to 512 dynamic interfaces mapped onto an 802.1Q trunk on a physical port
or onto multiple ports configured as a single port group using link aggregation (LAG).
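
As a study aid, the five interface types can be collected into a small lookup table. The following Python sketch paraphrases the role descriptions given above; it is an illustration only, not a WLC API:

```python
# Summary of the five WLC interface types and their roles (paraphrased).
WLC_INTERFACES = {
    "management": "in-band management, Layer 2 discovery, AAA services",
    "service port": "out-of-band management via the physical service port",
    "AP manager": "Layer 3 discovery and WLC-to-AP communication",
    "virtual": "client mobility, DHCP relay, Layer 3 web-auth redirects",
    "dynamic": "VLAN-mapped wireless client data transfer",
}

def interfaces_for(task_keyword):
    """Look up which interface types mention a given task keyword."""
    return [name for name, role in WLC_INTERFACES.items() if task_keyword in role]

print(interfaces_for("Layer 2"))  # ['management']
print(interfaces_for("Layer 3"))  # ['AP manager', 'virtual']
```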

Reference:

CCDA 200-310 Official Cert Guide, Chapter 4, WLC Interface Types, pp. 184-185

Cisco: Cisco Wireless LAN Controller Configuration Guide, Release 7.4: Information About
Interfaces

QUESTION NO: 29

View the Exhibit.


Refer to the exhibit above. PVST+ has been configured. DSW1 is the root bridge for all VLANs in
the topology. Devices on VLAN 10 use the HSRP 10 Active router as a default gateway. Devices
on VLAN 20 use the HSRP 20 Active router as a default gateway. All traffic from VLAN 20 is being
forwarded across the Po1 interface to the default gateway.

You want to ensure that devices on each VLAN send traffic to the appropriate default gateway by
using the most direct path. In addition, you want to preserve the maximum amount of redundancy
in the design.

Which of the following could you do to fix the problem?

A.
Configure DSW2 as the root bridge for all VLANs.

B.
Configure DSW2 as the root bridge for VLAN 10.

C.
Configure DSW2 as the root bridge for VLAN 20.

D.
Disable the link between ASW3 and DSW1.

E.
Disable the Po1 interface on DSW2.

F.
Disable the link between ASW3 and DSW2.

Answer: C
Explanation:
Section: Enterprise Network Design

You could configure DSW2 as the root bridge for virtual LAN (VLAN) 20 to fix the problem and
maintain the maximum amount of redundancy in the design. In this scenario, the switches are
configured to use Per-VLAN Spanning Tree Plus (PVST+). PVST+ is a revision of the Cisco-
proprietary Per-VLAN Spanning Tree (PVST), which enables a separate spanning tree to be
established for each VLAN. Therefore, a per-VLAN implementation of STP, such as PVST+,
enables the location of a root switch to be optimized on a per-VLAN basis.
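
The per-VLAN root election itself follows the standard STP rule: the switch with the numerically lowest bridge ID (priority first, then MAC address) becomes the root. The following Python sketch uses invented priorities and MAC addresses to show how lowering a switch's priority on one VLAN makes it that VLAN's root:

```python
def elect_root(bridge_ids):
    """The switch with the numerically lowest bridge ID (priority, then MAC)
    becomes the root bridge for that VLAN."""
    return min(bridge_ids, key=lambda b: (b["priority"], b["mac"]))

# Hypothetical per-VLAN priorities: DSW1's priority is lowered on VLAN 10 and
# DSW2's on VLAN 20, so each VLAN roots on its own default-gateway switch.
vlan10 = [
    {"name": "DSW1", "priority": 24576, "mac": "0000.1111.aaaa"},
    {"name": "DSW2", "priority": 32768, "mac": "0000.2222.bbbb"},
]
vlan20 = [
    {"name": "DSW1", "priority": 32768, "mac": "0000.1111.aaaa"},
    {"name": "DSW2", "priority": 24576, "mac": "0000.2222.bbbb"},
]

print(elect_root(vlan10)["name"])  # DSW1
print(elect_root(vlan20)["name"])  # DSW2
```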

Because all of the VLANs use DSW1 as the root bridge in this scenario, all traffic from the access
layer switches, regardless of VLAN, flows first to DSW1. Traffic from VLAN 10 is therefore already
optimized because VLAN 10 uses DSW1 as its default gateway. However, VLAN 20 uses DSW2
as its default gateway. Therefore, traffic from VLAN 20 will most likely flow first to DSW1 and then
across the PortChannel 1 EtherChannel interface to DSW2 for forwarding.

Although configuring DSW2 as the root bridge for all VLANs would cause traffic from VLAN 20
devices to flow directly to DSW2, traffic from VLAN 10 devices would no longer flow in an
optimized fashion.

Configuring DSW2 as the root bridge for all VLANs would cause traffic from VLAN 10 devices to
flow first to DSW2, then across the PortChannel 1 EtherChannel interface to DSW1 for forwarding.
Because PVST+ is configured in this scenario, it is not necessary and not optimal to deploy a
single root bridge for all VLANs.

You should not configure DSW2 as the root bridge for VLAN 10, because traffic from VLAN 10 is
already flowing in an optimized fashion. Configuring DSW2 as the root bridge for VLAN 10 would
cause traffic from VLAN 10 devices to flow first to DSW2 and then across the PortChannel 1
EtherChannel interface to DSW1 for forwarding.

You should not disable any links in the topology. Disabling links would not preserve the maximum
amount of redundancy in the design. Like Spanning Tree Protocol (STP), PVST+ blocks
redundant ports until a configuration change occurs that would allow the redundant port to enter
the forwarding state without creating a switching loop.

Reference:

CCDA 200-310 Official Cert Guide, Chapter 3, STP Design Considerations, pp. 101-103

Cisco: InterSwitch Link and IEEE 802.1Q Frame Format: Background Theory

Cisco: Catalyst 3750X and 3560X Switch Software Configuration Guide, Release 12.2(55)SE:
Configuring the Switch Priority of a VLAN


QUESTION NO: 30

View the Exhibit.

You administer the network shown above. You want to implement the subnets displayed in the
exhibit.

Which of the following routing protocols should you not use? (Choose two.)

A.
RIPv1

B.
RIPv2

C.
OSPF

D.
EIGRP

E.
IGRP

Answer: A,E
Explanation:
Section: Enterprise Network Design

You should not use Routing Information Protocol version 1 (RIPv1) or Interior Gateway Routing
Protocol (IGRP) to implement the subnets displayed in the exhibit. The network shown in the
exhibit uses variable-length subnet masking (VLSM). VLSM provides the ability to efficiently
allocate addresses in an assigned address space by creating a hierarchy of subnets for a single
network number.

In this scenario, the Class B network, 172.16.0.0, has been subnetted into three subnetworks:

172.16.127.0/17, 172.16.192.0/18, and 172.16.144.0/20. VLSM relies on the routing protocol to


include the subnet mask in routing table updates. RIPv1 and IGRP do not support VLSM, because
they do not carry subnet mask information within routing updates. Therefore, when RIPv1 or IGRP
is used on a network, you must use the same subnet mask throughout the network. Routing
protocols that do not support VLSM are called classful routing protocols.

RIP version 2 (RIPv2), Open Shortest Path First (OSPF), and Enhanced IGRP (EIGRP) support
VLSM.

These routing protocols support VLSM because they send subnet mask information within routing
updates. VLSM enables you to use different subnet masks within the same network. Routing
protocols that support VLSM are called classless routing protocols.
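
Python's standard ipaddress module can illustrate the idea. The sketch below takes two of the valid subnets from this scenario, which carry different mask lengths yet both fall inside the same classful network; the prefix length alongside each subnet is exactly the information a classless routing protocol must advertise:

```python
import ipaddress

parent = ipaddress.ip_network("172.16.0.0/16")

# Two of the VLSM subnets from the scenario, with different mask lengths:
subnets = [
    ipaddress.ip_network("172.16.192.0/18"),
    ipaddress.ip_network("172.16.144.0/20"),
]

for s in subnets:
    # A classless protocol carries the prefix together with its mask, so
    # subnets of different sizes can coexist under one network number.
    print(s, s.netmask, s.subnet_of(parent))
```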

Reference:

CCDA 200-310 Official Cert Guide, Chapter 8, VariableLength Subnet Masks, pp. 305-307

CCDA 200-310 Official Cert Guide, Chapter 10, Classless Versus Classful Routing Protocols, pp.
385-386 Cisco: Why Don't RIPv1 and IGRP Support VariableLength Subnet Mask?

QUESTION NO: 31

Which of the following statements are true when VSS is implemented on a distribution layer switch
pair in a campus network? (Choose two.)

A.
VSS eliminates the need to use an FHRP for convergence.

B.
Each access layer switch uses a single, active physical uplink to the distribution switch pair.

C.
MEC trunk links should not be configured using the auto-desirable option.

D.
Loop Guard should be disabled throughout the VSS-enabled network.

E.
Aggressive mode UDLD should be used to monitor link integrity.

Answer: A,D
Explanation:
Section: Enterprise Network Design

When Virtual Switching System (VSS) is implemented on a distribution layer switch pair in a
campus network, it eliminates the need to use a First Hop Redundancy Protocol (FHRP) for
convergence and Loop Guard should be disabled throughout the VSS-enabled network. VSS is a
Cisco device virtualization feature that can enable a pair of switches to function as a single logical
switch. The switch pair is connected by an EtherChannel bundle known as a Virtual Switch Link
(VSL). The trunk links in a VSL bundle should be configured as auto-desirable or desirable-
desirable in order to ensure a consistent trunk state across the link.

With VSS, access layer devices can connect to the switch pair using several active, physical
uplinks that are bundled together into a single logical link using Multi-chassis EtherChannel
(MEC). Because all of the links in the bundle to the distribution switch pair are active, each access
layer device is reduced to having a single logical link to the virtual distribution layer switch;
therefore, Spanning Tree Protocol (STP) is no longer required to prevent loops. In addition, since
there is only a single logical link to the virtual distribution layer switch, the access layer device can
load balance traffic across all of its active links and the device does not need to rely on an FHRP
for convergence if a link in the MEC bundle fails.

Cisco recommends disabling the Loop Guard feature in a VSS-enabled campus network to
mitigate the possibility that active links in an EtherChannel bundle are incorrectly placed into a
root-inconsistent state. In addition, Cisco recommends configuring MEC trunk links as auto-
desirable or desirable-desirable to mitigate the potential for configuration errors that might occur
during cycles of change management.

In a Layer 2 switched hierarchical design, only the access layer of the enterprise campus module
uses Layer 2 switching exclusively. The access layer of the enterprise campus module provides
end users with physical access to the network. In addition to using VSS in place of FHRPs for
redundancy, a Layer 2 switching design requires that inter-VLAN traffic be routed in the
distribution layer of the hierarchy. Also, STP in the access layer will prevent more than one
connection between an access layer switch and the distribution layer from becoming active at a
given time.

In a Layer 3 switching design, the distribution and campus core layers of the enterprise campus
module use Layer 3 switching exclusively. Thus a Layer 3 switching design relies on FHRPs for
high availability. In addition, a Layer 3 switching design typically uses route filtering on links that
face the access layer of the design.

Because access layer devices provide hosts and other devices with access to the network, the
access layer is the ideal place to perform user authentication and to institute port security. High
availability, broadcast suppression, and rate limiting are also characteristics of access layer
devices.

Aggressive mode UniDirectional Link Detection (UDLD) should not be used to monitor MEC link
integrity. Aggressive mode UDLD can cause false positives when CPU utilization is particularly
high or while a line card is initializing. These false positives could place MEC links into an error-
disabled state, disrupting the link on both switches. Cisco recommends using normal mode UDLD
to monitor MEC links because its default timer values are much less likely to produce false
positives when checking link viability.

Reference:

CCDA 200-310 Official Cert Guide, Chapter 3, Distribution Layer Best Practices, pp. 97-99

Cisco: Campus 3.0 Virtual Switching System Design Guide: VSS Enabled Campus Design

QUESTION NO: 32

Which of the following is not a reason to choose a VPN solution instead of a traditional WAN?

A.
The cost is lower.

B.
Network expansion is easy.

C.
Traffic is encrypted.

D.
Traffic is sent over dedicated lines.

Answer: D
Explanation:
Section: Enterprise Network Design

Virtual private networks (VPNs) typically send traffic through a tunnel over the Internet, which is a
public WAN, not over dedicated lines. Therefore, you might choose a point-to-point WAN that uses
dedicated leased lines instead of a VPN solution if you wanted to prevent traffic from being
tunneled through a public network. A VPN securely connects remote offices or users to a central
network by tunneling encrypted traffic through the Internet. By implementing a VPN solution rather
than a point-to-point WAN between branch offices, a company can benefit from lower costs, easier
network expansion, and encrypted traffic.

There are two general types of VPN: site-to-site and remote access. A site-to-site VPN is used to
create a tunnel between two remote VPN gateways. Devices on the networks connected to the
gateways do not require additional software to use the VPN; instead, all transmissions are
handled by the gateway device, such as an ASA device. Conversely, a remote access VPN is
used to connect individual clients through the Internet to a central network. Remote access VPN

clients must use either VPN client software or an SSL-based VPN to establish a connection to the
VPN gateway.

Reference:

CCDA 200-310 Official Cert Guide, Chapter 6, WAN and Enterprise Edge Overview, p. 218

Cisco: Virtual Private Network (VPN)

QUESTION NO: 33

In which of the following modules of the Cisco enterprise architecture would you expect to find a
DHCP server? (Choose two.)

A.
campus core

B.
data center

C.
building distribution

D.
enterprise edge

E.
building access

Answer: B,D
Explanation:
Section: Enterprise Network Design

You would expect to find a Dynamic Host Configuration Protocol (DHCP) server in the data center
module or enterprise edge module of the Cisco enterprise architecture. The enterprise architecture
model is a modular framework that is used for the design and implementation of large networks.
The enterprise architecture model includes the following modules: enterprise campus, enterprise
edge, service provider (SP) edge, and remote modules that utilize resources that are located away
from the main enterprise campus.

The campus core layer, building distribution layer, and building access layer are all part of the

enterprise campus module. These submodules of the enterprise campus module rely on a resilient
multilayer design to support the day-to-day operations of the enterprise. Also found within the
enterprise campus module is the data center submodule, which is also referred to as the server
farm submodule. The data center submodule provides file and print services to the enterprise
campus. In addition, the data center submodule typically hosts internal Domain Name System
(DNS), email, DHCP, and database services.

The enterprise edge module represents the boundary between the enterprise campus module and
the outside world. In addition, the enterprise edge module aggregates voice, video, and data traffic
to ensure a particular level of Quality of Service (QoS) between the enterprise campus and
external users located in remote submodules. Enterprise WAN, Internet connectivity, ecommerce
servers, and remote access & virtual private network (VPN) are all submodules of the enterprise
edge module.

Enterprise data center, enterprise branch, and teleworkers are examples of remote submodules
that are found within the enterprise architecture model. These submodules represent enterprise
resources that are located outside the main enterprise campus. These submodules typically
connect to the enterprise campus through the use of the SP edge and enterprise edge modules.
Because many Cisco routers commonly used at the edge of the network are capable of providing
DHCP and DNS services to the network edge, devices in the remote submodules do not need to
rely on the DHCP and DNS servers located in the enterprise campus.

The SP edge module consists of submodules that represent third-party network service providers.
For example, most enterprise entities rely on Internet service providers (ISPs) for Internet
connectivity and on public switched telephone network (PSTN) providers for telephone service. In
addition, the third-party infrastructure found in the SP edge is often used to provide connectivity
between the enterprise campus and remote resources.

Reference:

CCDA 200-310 Official Cert Guide, Chapter 8, DNS, pp. 319-321

QUESTION NO: 34

Which of the following actions are you most likely to perform first when using the bottom-up
network design approach?

A.
Configure switches with VLANs.

B.

License OSes to host network services.

C.
Analyze the company application requirements.

D.
Route cables from a server room to workstations.

Answer: D
Explanation:
Section: Design Methodologies

Most likely, you would route cables from a server room to workstations first when using the
bottom-up network design approach. This action would fall into the Physical layer of the Open
Systems Interconnection (OSI) reference model. In contrast to the top-down approach, the bottom-
up approach begins at the bottom of the OSI reference model. Decisions about network
infrastructure are made first, and application requirements are considered last. This approach to
network design can often lead to frequent network redesigns to account for requirements that have
not been met by the initial infrastructure.

You would analyze the company application requirements during the initial phase of the top-down
network design approach, not the bottom-up approach. The top-down network design approach is
typically used to ensure that the eventual network build will properly support the needs of the
network's use cases. For example, a dedicated customer service call center might first evaluate
communications and knowledgebase requirements prior to designing and building out the call
center's network infrastructure. In other words, a top-down design approach typically begins at the
Application layer, or Layer 7, of the OSI reference model and works down the model to the
Physical layer, or Layer 1.

The top three layers of the OSI reference model, which are the Application layer, Presentation
layer, and Session layer, should be analyzed in order to determine the design requirements. The
infrastructure that is needed for the Transport layer, Network layer, Data Link layer, and Physical
layer is thus determined by this analysis.

You would not license operating systems (OSes) to host network services or configure switches
with virtual LANs (VLANs) first when using the bottom-up design approach. Licensing OSes to
host network services would most likely be performed in the Transport layer, or Layer 4, of the OSI
reference model. Configuring switches with VLANs would most likely fall in the Network layer, or
Layer 3, of the OSI reference model.

Reference:

CCDA 200-310 Official Cert Guide, Chapter 1, Top-Down Approach, pp. 24-25

Cisco: Using the Top-Down Approach to Network Design (Flash)



QUESTION NO: 35

View the Exhibit.

Refer to the diagram shown above. Which of the following is a problem with this network design?

A.
A fully meshed topology is required in the core layer.

B.
Layer 3 switching should be implemented in the core layer.

C.
There are not enough devices in the core layer.

D.
Not every device in the distribution layer has multiple paths to the core layer.

E.
A fully meshed topology is required in the access layer.

Answer: D
Explanation:
Section: Enterprise Network Design

Of the choices available, the problem with the network design in the diagram is that not every

distribution layer device has multiple paths to the core layer. Hierarchical design separates
functionality into distinct layers, providing for more efficient performance and greater scalability.
The hierarchical design model divides the network into three distinct layers: the core layer, the
distribution layer, and the access layer.

The core layer provides fast transport services and redundant connectivity to the distribution layer.
Because the core layer acts as the network's backbone, it is essential that every distribution layer
device have multiple paths to the core layer. Multiple paths between the core and distribution layer
devices ensure that network connectivity is maintained if a link or device fails in either layer. The
network design shown in the diagram above can be improved with the addition of a second link
from the central distribution layer switch to one of the alternate core layer switches, as shown by
the green line in the diagram below:

Although a fully meshed topology can be implemented in the core layer, a fully meshed topology is
not required if multiple paths exist between core layer and distribution layer devices. In addition,
you should not implement a fully meshed topology in the access layer because the access layer
should remain highly scalable. Because a fully meshed topology can add unnecessary cost and
complexity to the design and operation of the network, a partially meshed topology is often
implemented in the core layer. In this scenario, creating a fully meshed topology in the core layer
would not improve the network design, because the central distribution layer switch would still
have only a single connection into the core layer.

The network diagram in this scenario indicates that Layer 3 switching is implemented in the core
and distribution layers. Although Layer 3 switching is the preferred switching mechanism in the
core layer, Layer 3 switching is not required. Layer 3 switching is preferred in the core layer
because it can provide faster convergence times than Layer 2 switching can provide after a link
failure or a device failure. In addition, Layer 3 switching in the core layer facilitates the
implementation of load balancing and path optimization.

There is not enough information in the diagram to determine whether the core layer contains a
sufficient number of devices. The core layer should contain enough devices to provide wire-speed
transport services to the distribution layer devices. In addition, the core layer should contain
enough devices to support redundant paths to each distribution layer device.
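
The redundancy requirement can be expressed as a simple check: every distribution layer switch should have at least two distinct uplinks into the core. The following Python sketch, with an invented switch-to-uplink map, flags any single-homed distribution switch:

```python
# Hypothetical uplink map: each distribution switch and its core uplinks.
uplinks = {
    "DistA": ["Core1", "Core2"],
    "DistB": ["Core1", "Core2"],
    "DistC": ["Core1"],  # single-homed: one core link failure isolates it
}

def single_homed(uplink_map):
    """Return the distribution switches that lack redundant core paths."""
    return [sw for sw, cores in uplink_map.items() if len(set(cores)) < 2]

print(single_homed(uplinks))  # ['DistC']
```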

Reference:

Cisco: Campus Network for High Availability Design Guide: Core Layer

QUESTION NO: 36

Which of the following best describes a hypervisor?

A.
It is used to route packets between VMs and a physical network.

B.
It is used to run a guest OS within a host OS.

C.
It is used to allow multiple VMs to communicate within a host system.

D.
It used to send documents to a web browser.

Answer: B
Explanation:
Section: Considerations for Expanding an Existing Network Explanation

A hypervisor is a software layer used to run a guest operating system (OS) within a host OS. The
hypervisor is responsible for presenting the physical resources of the host machine as virtualized
resources to the guest OS running in a virtual machine (VM), and it allocates hardware resources,
such as hard drive space and random access memory (RAM), to each VM. Though VMs share
hardware resources with the host OS, they are otherwise isolated from one another. VMs can be
used for a variety of purposes, such as software testing or hosting specific network services.

A virtual switch is a logical device used to allow multiple VMs to communicate within a host
system. A VM needs a virtual network interface card (NIC) in order to communicate with other
devices. Each virtual NIC is assigned a unique Media Access Control (MAC) address. Similar to a
hardware-based switch, a virtual switch maintains a table of MAC-to-port associations. When data
is sent from one VM to another, the virtual switch will use this table to determine which port to use
to forward the received data. If a virtual switch does not exist on a host, the VM must send data via
the physical NIC to a hardware-based switch, where the data can be forwarded to the intended
VM.
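As a rough sketch, the learn-then-forward behavior of a virtual switch's MAC table can be modeled in a few lines of Python. The MAC addresses and port numbers below are hypothetical, chosen only for illustration:

```python
# Minimal sketch of a virtual switch's MAC-to-port forwarding logic.
# The MAC addresses and port numbers are hypothetical.

class VirtualSwitch:
    def __init__(self):
        self.mac_table = {}  # maps source MAC address -> ingress port

    def receive(self, src_mac, dst_mac, ingress_port):
        """Learn the source MAC, then decide where to forward the frame."""
        self.mac_table[src_mac] = ingress_port
        if dst_mac in self.mac_table:
            return self.mac_table[dst_mac]  # known destination: forward out that port
        return "flood"                      # unknown destination: flood all other ports

switch = VirtualSwitch()
# VM1 (port 1) sends to VM2 before VM2 has been learned: the frame is flooded.
print(switch.receive("00:0c:01:01:01:01", "00:0c:02:02:02:02", 1))  # flood
# VM2 (port 2) replies; VM1's MAC is now known, so the frame goes out port 1.
print(switch.receive("00:0c:02:02:02:02", "00:0c:01:01:01:01", 2))  # 1
```

A hardware-based switch follows the same learn-then-forward logic; the virtual switch simply performs it in software on the host.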

A virtual router is a logical device used to route packets between VMs and a physical network. A
virtual router is a VM dedicated to routing and forwarding packets. For example, the Cisco CSR
1000V is a cloud-based virtual router that is deployed as a VM and managed by a hypervisor.
Cisco virtual routers support many of the same features as physical routers and have the added
benefit of being able to share resources with other VMs on the host machine.

Hypertext Transfer Protocol (HTTP), not a hypervisor, is used to send documents to a web
browser. HTTP is an Open Systems Interconnection (OSI) Application layer protocol that is used
throughout the Internet to deliver content from web servers to web browsers, such as Google
Chrome or Mozilla Firefox.

Reference:

CCDA 200-310 Official Cert Guide, Chapter 4, Server Virtualization, p. 155

QUESTION NO: 37

Which of the following is defined in the IEEE 802.3ad standard?

A.
HSRP

B.
LACP

C.
PAgP

D.
VRRP

Answer: B
Explanation:
Section: Enterprise Network Design Explanation

Link Aggregation Control Protocol (LACP) is defined in the Institute of Electrical and Electronics
Engineers (IEEE) 802.3ad standard. LACP is a link-bundling protocol. Configuring multiple
physical ports into a bundle, which is also known as a port group or an EtherChannel group,
enables a switch to use the multiple physical ports as a single connection between a switch and
another device. Because bundled links function as a single logical port, Spanning Tree Protocol
(STP) is automatically disabled on the physical ports in the bundle; however, spanning tree must
be running on the associated port channel virtual interface to prevent bridging loops.

Typically, a link bundle is configured for high-bandwidth transmissions between switches and
servers. When a link bundle is configured, traffic is load balanced across all links in the port group,
which provides fault tolerance. If a link in the port group goes down, that link's traffic load is
redistributed across the remaining links. Because LACP is a standards-based protocol, it can be
used between Cisco and non-Cisco switches.

Port Aggregation Protocol (PAgP) is a Cisco-proprietary link-bundling protocol. PAgP cannot be
used to create an EtherChannel on non-Cisco switches.
create an EtherChannel link between a Cisco switch and a non-Cisco switch, because the
EtherChannel protocol must match on each side of the EtherChannel link.

Both PAgP and LACP work by dynamically grouping physical interfaces into a single logical link.
However, LACP is newer than PAgP and offers somewhat different functionality. Like PAgP, LACP
identifies neighboring ports and their group capabilities; however, LACP goes further by assigning
roles to the link bundle's endpoints. LACP enables a switch to determine which ports are actively
participating in the bundle at any given time and to make operational decisions based on those
determinations.

Neither Hot Standby Router Protocol (HSRP) nor Virtual Router Redundancy Protocol (VRRP) is a
link-bundling protocol. HSRP is a Cisco-proprietary first-hop redundancy protocol (FHRP). VRRP
is an Internet Engineering Task Force (IETF)-standard FHRP. Both HSRP and VRRP can be used
to configure failover in case a primary default gateway goes down.

Reference:

Cisco: IEEE 802.3ad Link Bundling: Benefits of IEEE 802.3ad Link Bundling

QUESTION NO: 38 DRAG DROP

Select the characteristics from the left, and drag them under the corresponding design approach
on the right. Use all characteristics. Each characteristic can be used only once.


Answer:

Explanation:

Section: Design Methodologies Explanation

The bottom-up design approach is a design methodology that focuses on the devices and
technologies that should be implemented in a design, instead of focusing on the applications and
services that will be used on the network. The bottom-up design approach takes its name from the
methodology of starting with the lower layers of the Open Systems Interconnection (OSI) model,
such as the Physical, Data Link, Network, and Transport layers, and working upward toward the
higher layers. Because the bottom-up approach does not use a detailed analysis of an
organization's requirements, the bottom-up approach can be much less time-consuming than the
top-down design approach. However, the bottom-up design approach can often lead to costly
network redesigns because the design does not provide a "big picture" overview of the current
network or its future requirements. In addition, the bottom-up approach relies on previous
experience rather than on a thorough analysis of organizational requirements or projected growth.

By contrast, the top-down design approach takes its name from the methodology of starting with
the higher layers of the OSI model, such as the Application, Presentation, and Session layers, and
working downward toward the lower layers. The top-down design approach requires a thorough
analysis of the organization's requirements. As a result, the top-down design approach is a more
time-consuming process than the bottom-up design approach. With the top-down approach, the
designer obtains a complete overview of the existing network and the organization's needs. With
this "big picture" overview, the designer can then focus on the applications and services that meet
the organization's current requirements. By focusing on the applications and services required in
the design, the designer can work in a modular fashion that will ultimately facilitate the
implementation of the actual design. In addition, the flexibility of the resulting design is typically
much improved over that of the bottom-up approach because the designer can account for the
organization's projected needs.

Reference:

CCDA 200-310 Official Cert Guide, Chapter 1, Top-Down Approach, pp. 24-25

Cisco: Using the Top-Down Approach to Network Design: 4. Top-Down and Bottom-Up Approach
Comparison (Flash)

QUESTION NO: 39

Which of the following can you configure on a Cisco switch to facilitate a NIC teaming
implementation on Microsoft Windows Server 2012?

A.
HSRP

B.
LACP

C.
PAgP

D.
VRRP

Answer: B
Explanation:
Section: Considerations for Expanding an Existing Network Explanation

You can configure Link Aggregation Control Protocol (LACP) on a Cisco switch to facilitate a
network interface card (NIC) teaming implementation on Microsoft Windows Server 2012. NIC
teaming is a feature that uses multiple NICs on a server to function as a single logical link to the
network infrastructure. Microsoft Windows Server 2012 supports several different teaming modes;
however, LACP or static teaming configurations are recommended when a physical server has
heavy inbound and outbound traffic flows or a virtual machine (VM) has a traffic load that often
exceeds the bandwidth available on a single physical link. NIC teaming relies on sharing a single
Layer 3 address across a bundle of linked Layer 2 interfaces; therefore, NIC teaming is supported
only in a Layer 2 access model, not in a Layer 3 access model.

On Cisco switches, aggregating physical links into a logical link is referred to as EtherChannel.
EtherChannel bundles can be configured manually or by using a link-bundling protocol. LACP is a
link-bundling protocol that is defined in the Institute of Electrical and Electronics Engineers (IEEE)
802.3ad standard and can be used between Cisco and non-Cisco switches. By contrast, Port
Aggregation Protocol (PAgP) is a Cisco-proprietary link-bundling protocol and cannot be used to
create an EtherChannel on non-Cisco switches. In addition, PAgP cannot be used to create an
EtherChannel link between a Cisco switch and a non-Cisco device, such as a Microsoft Windows
Server, because the EtherChannel protocol must match on each side of the EtherChannel link.

Both PAgP and LACP work by dynamically grouping physical interfaces into a single logical link.
However, LACP is newer than PAgP and offers somewhat different functionality. Like PAgP, LACP
identifies neighboring ports and their group capabilities; however, LACP goes further by assigning
roles to the link bundle's endpoints. LACP enables a switch to determine which ports are actively
participating in the bundle at any given time and to make operational decisions based on those
determinations.

Neither Hot Standby Router Protocol (HSRP) nor Virtual Router Redundancy Protocol (VRRP) can
be used to facilitate a NIC teaming implementation on Microsoft Windows Server 2012. HSRP is a
Cisco-proprietary First Hop Redundancy Protocol (FHRP). VRRP is an Internet Engineering Task
Force (IETF)-standard FHRP. Both HSRP and VRRP can be used to configure failover in case a
primary default gateway goes down.

Reference:

CCDA 200-310 Official Cert Guide, Chapter 3, EtherChannel, p. 88

Microsoft: TechNet: NIC Teaming in Windows Server 2012 -Do I Need to Configure My Switch?

QUESTION NO: 40

RouterA has interfaces connected to the 172.16.0.0/24 through 172.16.7.0/24 networks shown in
the exhibit.

You want to configure static routing on RouterB so that RouterB will forward traffic destined to any
of these networks to RouterA.

Which of the following commands should you issue on RouterB?

A.
ip route 172.16.0.0 255.255.0.0 172.16.12.1

B.
ip route 172.16.0.0 255.255.248.0 172.16.12.1

C.
ip route 172.16.0.0 255.255.240.0 172.16.12.1

D.
ip route 172.16.0.0 255.255.224.0 172.16.12.1

E.
ip route 172.16.0.0 255.255.255.0 172.16.12.1

Answer: B

Explanation:
Section: Addressing and Routing Protocols in an Existing Network Explanation

You should issue the ip route 172.16.0.0 255.255.248.0 172.16.12.1 command on RouterB to
configure static routing appropriately in this scenario. The ip route command uses the syntax ip
route net-address mask next-hop, where net-address is the network address of the destination
network, mask is the subnet mask of the destination network, and next-hop is the IP address of a
neighboring router that can reach the destination network. In this scenario, you should configure a
single static route that summarizes the eight displayed networks. Static routes are well suited to
networks that do not change often. Although you could issue an ip route command for each of the
eight networks connected to RouterA, this is not the most desirable approach because the routing
table on RouterB would become unnecessarily large. This in turn would cause the routing table to
consume excessive random access memory (RAM) and CPU resources on RouterB. The correct
approach is to use a summary address and subnet mask that encompass all of the desired
networks on RouterA in a single ip route command.

The diagram shows that the first of the eight networks on RouterA is 172.16.0.0/24. Eight networks
can be summarized in 3 bits (2^3 = 8). You can write a network statement that encompasses the
eight networks that begin with 172.16.0.0/24 by taking 3 bits away from the subnet mask. Moving
the 24-bit mask 3 bits to the left yields a 21-bit mask, which is 255.255.248.0. Thus the network and
subnet mask combination of 172.16.0.0 255.255.248.0 encompasses all eight networks. The
process of taking bits away from the subnet mask to more broadly encompass additional networks
is called supernetting. This is the opposite of subnetting, which is a technique used to divide a
network into smaller subnetworks.
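The arithmetic above can be checked with Python's standard ipaddress module. This sketch simply confirms that the /21 summary (mask 255.255.248.0) covers exactly the eight /24 networks in the scenario and nothing more is assumed:

```python
import ipaddress

# Verify that the summary 172.16.0.0/21 (mask 255.255.248.0) covers the
# eight /24 networks 172.16.0.0/24 through 172.16.7.0/24 behind RouterA.
supernet = ipaddress.ip_network("172.16.0.0/21")
subnets = [ipaddress.ip_network(f"172.16.{i}.0/24") for i in range(8)]

print(supernet.netmask)                              # 255.255.248.0
print(all(s.subnet_of(supernet) for s in subnets))   # True
# 172.16.8.0/24 falls outside the /21, so it is not summarized:
print(ipaddress.ip_network("172.16.8.0/24").subnet_of(supernet))  # False
```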

You should not issue the ip route 172.16.0.0 255.255.0.0 172.16.12.1 command on RouterB. This
would cause RouterB to attempt to forward traffic to all networks from 172.16.0.0/24 through
172.16.255.0/24 to RouterA. The eight desired networks on RouterA would be forwarded, but
many additional networks that you may not want to forward to RouterA would also be included.

You should not issue the ip route 172.16.0.0 255.255.240.0 172.16.12.1 command on RouterB.
This would cause RouterB to attempt to forward traffic to all networks from 172.16.0.0/24 through
172.16.15.0/24 to RouterA. The eight desired networks on RouterA would be forwarded, but eight
additional networks from 172.16.8.0/24 through 172.16.15.0/24 that you may not want to forward
to RouterA would also be included.

You should not issue the ip route 172.16.0.0 255.255.224.0 172.16.12.1 command on RouterB.
This would cause RouterB to attempt to forward traffic to all networks from 172.16.0.0/24 through
172.16.31.0/24 to RouterA. The eight desired networks on RouterA would be forwarded, but 24
additional networks from 172.16.8.0/24 through 172.16.31.0/24 that you may not want to forward
to RouterA would also be included.

You should not issue the ip route 172.16.0.0 255.255.255.0 172.16.12.1 command on RouterB.
This would cause RouterB to attempt to forward traffic for only the 172.16.0.0/24 network and
would not forward traffic for the other seven networks that are connected to RouterA.

Reference:

CCDA 200-310 Official Cert Guide, Chapter 8, IPv4 Address Subnets, pp. 302-310

Cisco: IP Addressing and Subnetting for New Users

QUESTION NO: 41

On a Cisco router, which of the following commands relies on ICMP TEMs to map the path that a
packet takes as it passes through a network?

A.
ping

B.
traceroute

C.
arp

D.
netconf

Answer: B
Explanation:
Section: Design Methodologies Explanation

On a Cisco router, the traceroute command relies on Internet Control Message Protocol (ICMP)
Time Exceeded Messages (TEMs) to map the path that a packet takes as it passes through a
network. The traceroute command works by sending a sequence of messages, usually User
Datagram Protocol (UDP) packets, to a destination address. The Time-to-Live (TTL) value in the
IP header of each series of packets is incremented as the traceroute command discovers the IP
address of each router in the path to the destination address. The first series of packets, which
have a TTL value of one, make it to the first hop router, where their TTL value is decremented by
one as part of the forwarding process. Because the new TTL value of each of these packets will
be zero, the first hop router will discard the packets and send an ICMP TEM to the source of each
discarded packet. The traceroute command will record the IP address of the source of the ICMP
TEM and will then send a new series of messages with a higher TTL. The next series of messages
is sent with a TTL value of two and arrives at the second hop before generating ICMP TEMs and
thus identifying the second hop. This process continues until the destination is reached and every
hop in the path to the destination is identified. In this manner, the traceroute command can be
used to manually build a topology map of an existing network; however, more effective
mechanisms, such as Link Layer Discovery Protocol (LLDP) or Cisco Discovery Protocol (CDP),
are typically used instead when available.
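The TTL mechanism can be illustrated with a toy simulation. The hop addresses below are hypothetical, and a real traceroute of course sends actual UDP probes over the network rather than walking a list:

```python
# Toy simulation of traceroute's TTL mechanism. Each router along the path
# decrements the TTL; the router that decrements it to zero answers with an
# ICMP Time Exceeded Message, revealing its address. (Hypothetical hops.)

path = ["10.0.0.1", "10.0.1.1", "10.0.2.1", "198.51.100.10"]  # last entry = destination

def send_probe(ttl):
    """Return the address of the device that answers a probe sent with this TTL."""
    for router in path:
        ttl -= 1
        if ttl == 0:
            return router  # TEM from this router (or a reply at the destination)
    return path[-1]

discovered = []
ttl = 1
while True:
    responder = send_probe(ttl)  # probes are sent with TTL 1, 2, 3, ... in turn
    discovered.append(responder)
    if responder == path[-1]:    # destination reached: the path is fully mapped
        break
    ttl += 1

print(discovered)  # ['10.0.0.1', '10.0.1.1', '10.0.2.1', '198.51.100.10']
```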

The ping command does not rely on TEMs to map the path that a packet takes as it passes
through a network. The ping command sends ICMP Echo Request messages to a destination
address and waits for the corresponding ICMP Echo Reply messages to be sent back. Most
implementations of ping report additional statistics, such as the number of packets dropped and
the length of time that lapsed between a request and the reply. Although the ping command can
be used to determine connectivity to a destination, it cannot be used to trace the route taken by a
packet as it moves toward its destination.

The arp command does not rely on TEMs to map the path that a packet takes as it passes through
a network. On a Cisco router, the arp command can be used in global configuration mode to
create a static entry in the local Address Resolution Protocol (ARP) cache. For example, the arp
198.51.1.11 000c.0101.0101 arpa command creates a manual entry in the ARP table to map the IP
address 198.51.1.11 to an Ethernet Media Access Control (MAC) address of 000c.0101.0101.

The netconf command does not rely on TEMs to map the path that a packet takes as it passes
through a network. The netconf command can be used to configure Network Configuration
Protocol (NETCONF) parameters on a Cisco device. NETCONF is an Internet Engineering Task
Force (IETF) standard that uses Extensible Markup Language (XML) messages to manage the
configuration of network devices. With NETCONF, a client can lock the configuration data on a
device and then retrieve the entire configuration in a standardized, parsable format without fear of
the configuration changing during the session. The client can then modify the configuration as
needed and send the updated configuration data to the device for implementation before releasing
the session lock.

Reference:

Cisco: Understanding the Ping and Traceroute Commands

QUESTION NO: 42

Which of the following actions are you most likely to perform during the initial phase of the top-
down network design approach?

A.
Select switches to build out the LAN.

B.
License OSes to host network services.

C.
Analyze the company application requirements.

D.
Route cables from a server room to workstations.

Answer: C
Explanation:
Section: Design Methodologies Explanation

Most likely, you will analyze the company application requirements during the initial phase of the
top-down network design approach. The top-down network design approach is typically used to
ensure that the eventual network build will properly support the needs of the network's use cases.
For example, a dedicated customer service call center might first evaluate communications and
knowledgebase requirements prior to designing and building out the call center's network
infrastructure. In other words, a top-down design approach typically begins at the Application
layer, or Layer 7, of the Open Systems Interconnection (OSI) reference model and works down the
model to the Physical layer, or Layer 1.

The top three layers of the OSI reference model, which are the Application layer, Presentation
layer, and Session layer, should be analyzed in order to determine the design requirements. The
infrastructure that is needed for the Transport layer, Network layer, Data Link layer, and Physical
layer is thus determined by this analysis.

You would not route cables from a server room to workstations. This action would fall into the
Physical layer of the OSI reference model. Therefore, you might start with this action if you were
following a bottom-up network design approach. In contrast to the top-down approach, the bottom-
up approach begins at the bottom of the OSI reference model. Decisions about network
infrastructure are made first, and application requirements are considered last. This approach to
network design can often lead to frequent network redesigns to account for requirements that have
not been met by the initial infrastructure.

You would not license operating systems (OSes) to host network services during the initial phase
of the top-down network design approach. Licensing OSes to host network services would most
likely be performed in the Transport layer. In addition, you would not select switches to build out
the LAN. This action would fall within the lower three layers of the OSI reference model.

Reference:

CCDA 200-310 Official Cert Guide, Chapter 1, Top-Down Approach, pp. 24-25
Cisco: Using the Top-Down Approach to Network Design (Flash)

QUESTION NO: 43

Which of the following IPv6 prefixes is used for multicast addresses?

A.
2000::/3

B.
FC00::/8

C.
FD00::/8

D.
FE80::/10

E.
FF00::/8

Answer: E
Explanation:
Section: Addressing and Routing Protocols in an Existing Network Explanation

The IP version 6 (IPv6) prefix FF00::/8 is used for multicast addresses, which are used for one-to-
many communication. IPv6 addresses in the FF00::/8 range begin with the characters FF00
through FFFF. However, certain address ranges are used to indicate the scope of the multicast
address. The following IPv6 multicast scopes are defined:

Interface-local (FF01::/16) - traffic is restricted to the local node
Link-local (FF02::/16) - traffic is restricted to the local link
Site-local (FF05::/16) - traffic is restricted to the local site
Organization-local (FF08::/16) - traffic is restricted to the local organization
Global (FF0E::/16) - traffic can be routed globally

IPv6 hosts use the multicasting capabilities of the Neighbor Discovery (ND) protocol to discover
the link layer addresses of neighbor hosts. The Hop Limit field is typically set to 255 in ND packets
that are sent to neighbors. Routers decrement the Hop Limit value as a packet is forwarded from
hop to hop. Therefore, a router that receives an ND packet with a Hop Limit value of 255 considers
the source of the ND packet to be a neighbor. If a router receives an ND packet with a Hop Limit
that is less than 255, the packet is ignored, thereby protecting the router from threats that could
result from the ND protocol's lack of neighbor authentication.

The IPv6 prefix 2000::/3 is used for global aggregatable unicast addresses. IPv6 addresses in the
2000::/3 range begin with the characters 2000 through 3FFF. Global aggregatable unicast address
prefixes are distributed by the Internet Assigned Numbers Authority (IANA) and are globally
routable over the Internet. Because there is an inherent hierarchy in the aggregatable global
address scheme, these addresses lend themselves to simple consolidation, which greatly reduces
the complexity of Internet routing tables.

The IPv6 prefix FE80::/10 is used for unicast link-local addresses. IPv6 addresses in the FE80::/10
range begin with the characters FE80 through FEBF. Unicast packets are used for one-to-one
communication. Link-local addresses are unique only on the local segment. Therefore, link-local
addresses are not routable. An IPv6-capable host typically creates a unicast link-local address
automatically at startup. Unicast link-local addresses are used for neighbor discovery and for
environments in which no router is present to provide a routable IPv6 prefix.

The IPv6 prefixes FC00::/8 and FD00::/8, which together form the FC00::/7 block, are used for
unique local unicast addresses. IPv6 addresses in these ranges begin with the characters FC00
through FDFF. Unique local addresses are not globally routable, but they are routable within an
organization.
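These prefix ranges can be checked with Python's standard ipaddress module, which classifies addresses without any manual prefix matching:

```python
import ipaddress

# Classify sample IPv6 addresses against the ranges discussed above.
print(ipaddress.ip_address("FF02::1").is_multicast)      # True: FF00::/8 multicast
print(ipaddress.ip_address("FE80::1").is_link_local)     # True: FE80::/10 link-local
print(ipaddress.ip_address("FD00::1").is_private)        # True: FC00::/7 unique local
print(ipaddress.ip_address("2001:db8::1").is_multicast)  # False: unicast address
```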

Reference:

CCDA 200-310 Official Cert Guide, Chapter 11, IPv6 Multicast Addresses, pp. 470-471

Cisco: Cisco Security Appliance Command Line Configuration Guide, Version 8.0: Multicast
Address

QUESTION NO: 44

View the Exhibit.

You administer the network shown above. RouterA and RouterB are configured to use EIGRP.
Automatic summarization is enabled.

What network or networks will RouterA advertise to RouterB?

A.
172.16.0.0/16

B.
172.16.0.0/22

C.
172.16.0.0/23 and 172.16.3.0/24

D.
172.16.0.0/24, 172.16.1.0/24, and 172.16.3.0/24

Answer: A
Explanation:
Section: Addressing and Routing Protocols in an Existing Network Explanation

RouterA will advertise the 172.16.0.0/16 network to RouterB. When the auto-summary command
has been used to enable automatic summarization on a router, Enhanced Interior Gateway
Routing Protocol (EIGRP) automatically summarizes networks on classful boundaries.
Summarization minimizes the size of routing tables and advertisements and reduces a router's
processor and memory requirements. Summarization is also useful in limiting the scope of EIGRP
queries.

The 172.16.0.0/24, 172.16.1.0/24, and 172.16.3.0/24 networks in this scenario use Class B
addresses. Therefore, these network ranges are summarized to the Class B boundary, which is
/16. Subnetting a contiguous address range in structured, hierarchical fashion enables routers to
maintain smaller routing tables and eases administrative burden when troubleshooting.
Conversely, a discontiguous IP version 4 (IPv4) addressing scheme can cause routing tables to
bloat because the subnets cannot be summarized.

To disable automatic summarization in an EIGRP configuration, you should issue the no auto-
summary command. The no auto-summary command enables EIGRP to advertise the actual
networks, not the classful summary. You should use the no auto-summary command when a
classful network is divided and portions of the same classful network exist in different parts of the
network topology. If you were to issue the no auto-summary command on RouterA, RouterA would
advertise the individual network ranges and subnet mask information to RouterB.
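The classful boundary that automatic summarization applies is determined solely by the first octet of the address (1-126 is Class A, 128-191 is Class B, 192-223 is Class C). This helper function is a sketch of that rule, not an EIGRP implementation:

```python
import ipaddress

# Sketch of the classful-boundary rule applied by EIGRP auto-summary:
# first octet 1-126 -> Class A (/8), 128-191 -> Class B (/16),
# 192-223 -> Class C (/24).

def classful_summary(address):
    first_octet = int(address.split(".")[0])
    if first_octet < 128:
        prefix = 8    # Class A
    elif first_octet < 192:
        prefix = 16   # Class B
    else:
        prefix = 24   # Class C
    # strict=False masks off the host bits to yield the classful network
    return ipaddress.ip_network(f"{address}/{prefix}", strict=False)

# The three subnets in this scenario all collapse to the same Class B network:
for net in ["172.16.0.0", "172.16.1.0", "172.16.3.0"]:
    print(classful_summary(net))  # 172.16.0.0/16 each time
```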

You can issue the ip summary-address eigrp command to enable manual summarization. Manual
summarization is configured on a per-interface basis. The syntax of the ip summary-address eigrp
command is ip summary-address eigrp as-number address mask, where as-number is the EIGRP
autonomous system (AS) number, address is the summary address, and mask is the subnet mask
in dotted decimal notation.

Reference:

CCDA 200-310 Official Cert Guide, Chapter 8, Plan for a Hierarchical IP Address Network, pp.
311-312

Cisco: Cisco IOS IP Routing: EIGRP Command Reference: auto-summary (EIGRP)

QUESTION NO: 45

Which of the following network virtualization technologies can be used to create redundancy and
manage loops between multiple devices?

A.
VDC

B.
VLAN

C.
VRF

D.
vPC

Answer: D
Explanation:
Section: Considerations for Expanding an Existing Network Explanation


Virtual PortChannel (vPC) can be used to create redundancy and manage loops between multiple
devices.

PortChannel is used to bundle multiple physical links into a single logical link between two network
devices. A logical link that is built from multiple physical device links creates redundancy between
the two devices, which ensures that network traffic can continue to flow in the event of a failure on
one of the physical links. However, a single PortChannel can be configured between two devices
only. vPC expands upon the capabilities provided by PortChannel by enabling the creation of a
logical link between more than two devices, which further increases redundancy by providing
additional pathway possibilities.

Virtual routing and forwarding (VRF) is a network virtualization mechanism that is used to maintain
multiple independent routing tables on a single router. When VRF is used, the same IP addressing
scheme can be applied to different routing tables on the same router. As the fundamental basis of
a virtual network, VRF can be used to build numerous virtual private networks (VPNs), which all
use the same infrastructure and the same addressing scheme without each instance interfering
with each other.

A Virtual Device Context (VDC) is a device virtualization mechanism that enables an administrator
to divide a single physical switch into up to four virtual switches. Each virtual switch, which is also
referred to as a VDC, has its own control and data plane instances, configuration files, physical
ports, and memory-related processes. For example, the active spanning tree configuration on one
virtual switch would run in a completely separate memory space as the spanning tree
configuration on another virtual switch, even though both virtual switches share the same physical
memory.

A virtual LAN (VLAN) is a network virtualization mechanism that enables multiple LANs to pass
traffic on a single physical interface. VLANs are configured on network switches to create two or
more separate LANs on the same switch. VLANs are used to simplify network administration and
to reduce the size of broadcast domains. Hosts on one VLAN cannot communicate with hosts on
other VLANs unless a router or an Open Systems Interconnection (OSI) Layer 3 switch is used to
provide inter-VLAN routing.

Reference:

CCDA 200-310 Official Cert Guide, Chapter 4, vPC, p. 154

Cisco: Virtual PortChannel Quick Configuration Guide

QUESTION NO: 46 DRAG DROP

You are implementing a new two-office network. The Administration and Marketing staff will reside
at the first office. The Research department will have the sole use of the second office. Each
department's computers will reside on separate subnets.

The company does not intend to hire any new employees in the near future, so your supervisor
has instructed you to conserve IP address space. The Marketing department requires 25 hosts,
and the Administration department requires 55 hosts The Research department must
accommodate 110 hosts.

In the network diagram displayed below, drag the appropriate network addresses into their correct
locations. Not all addresses will be used.


Answer:

Explanation:

Section: Addressing and Routing Protocols in an Existing Network Explanation

You have been instructed to conserve IP address space, so you must create subnets containing a
minimal number of IP addresses and still accommodate all necessary hosts. A total of 192 host
addresses are required on the network: 25 for Marketing, 55 for Administration, 110 for Research,
and two for the serial link connecting the offices. You can create a sufficient number of IP
addresses by supernetting a single Class C subnet.

You should begin allocating address ranges starting with the largest group of hosts to ensure that
the entire group has a large, contiguous address range available. Subnetting a contiguous
address range in structured, hierarchical fashion enables routers to maintain smaller routing tables
and eases administrative burden when troubleshooting.

The Research department on Network D requires 110 hosts. A 25-bit subnet mask is sufficient to
handle 126 host addresses. Two address ranges with a 25-bit subnet mask are available: the
192.168.1.0/25 and 192.168.1.128/25 networks. Although the 192.168.1.0/25 network would work
for Network D, the 192.168.1.128/25 address range is the optimal range for Network D because
the 192.168.1.0/25 network range overlaps most of the address ranges of the remaining choices.
When addressing the E0/0 interface of Router2 and hosts in the Research department, you should
use host addresses from the 192.168.1.128/25 range, which includes addresses from
192.168.1.129 through 192.168.1.254.

The next-largest department, Administration, requires 55 host addresses for Network B. A 26-bit
subnet mask is sufficient to handle 62 host addresses. Of the remaining options, you should use
the

192.168.1.64/26 network. When addressing the E1/1 interface of Router1 and hosts in the
Administration department, you should use host addresses from the 192.168.1.64/26 range, which
includes addresses from 192.168.1.65 through 192.168.1.126.

The smallest department, Marketing, requires 25 host addresses for Network A. A 27-bit subnet
mask is sufficient to handle 30 host addresses. Of the remaining options, you should use the
192.168.1.32/27 network. When addressing the E0/0 interface of Router1 and hosts in the
Marketing department, you should use host addresses from the 192.168.1.32/27 range, which
includes addresses from 192.168.1.33 through 192.168.1.62.

The point-to-point link between routers requires two host addresses. A 30-bit subnet mask is
sufficient to handle exactly two host addresses. Of the remaining options, you should use the
192.168.1.0/30 network. When addressing the S0/0 interfaces of Router1 and Router2, you should
choose host addresses from the 192.168.1.0/30 range, which includes the 192.168.1.1 and
192.168.1.2 addresses.
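
As a sanity check, the subnet choices above can be verified with Python's standard ipaddress module. The dictionary labels below are simply this scenario's department names, not anything defined by the exam:

```python
import ipaddress

# Scenario subnets mapped to the departments they serve
plan = {
    "Research (needs 110 hosts)": ipaddress.ip_network("192.168.1.128/25"),
    "Administration (needs 55 hosts)": ipaddress.ip_network("192.168.1.64/26"),
    "Marketing (needs 25 hosts)": ipaddress.ip_network("192.168.1.32/27"),
    "Serial link (needs 2 hosts)": ipaddress.ip_network("192.168.1.0/30"),
}

for name, net in plan.items():
    # num_addresses counts every address, including the network and
    # broadcast addresses, so subtract 2 to get usable host addresses
    usable = net.num_addresses - 2
    print(f"{name}: {net} provides {usable} usable hosts")
```

Each subnet provides at least as many usable host addresses as its department requires, and none of the four ranges overlap.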

The overall IP addressing scheme is summarized in the following table:


You should not use the 192.168.2.0/24 network for any of the subnets, because doing so would
not conserve IP addresses. A 24-bit subnet mask can support 254 hosts in a single broadcast
domain and would therefore not conserve address space. A 25-bit subnet mask is sufficient to
handle the largest subnet in the scenario.

You should not use either the 192.168.1.0/28 or 192.168.1.0/29 networks for any of the subnets. A
28-bit subnet mask supports 14 hosts, and a 29-bit subnet mask supports six hosts, neither of which
is a sufficient number of hosts for the Administration, Marketing, or Research departments. The
28-bit and 29-bit subnet masks are not appropriate for Network C either, because the number of
host addresses created by these masks far exceeds the number of actual hosts on the point-to-
point link between Router1 and Router2.

Finally, you should not use the 192.168.1.0/25 network for any of the subnets, because it allows
for a single subnet of 126 hosts. In relation to the other network choices available, this subnet
mask would not fit the overall addressing schema. If you used this option, the remaining available
choices would not meet the design requirement and would not allow for an efficient distribution of
addresses.

Reference:

CCDA 200-310 Official Cert Guide, Chapter 8, IPv4 Address Subnets, pp. 302-310

CCDA 200-310 Official Cert Guide, Chapter 8, Plan for a Hierarchical IP Address Network, pp.
311-312

Cisco: IP Addressing and Subnetting for New Users

QUESTION NO: 47

View the Exhibit.


Which of the following designs is represented by the diagram shown above?

A.
looped triangle access design

B.
looped square access design

C.
loop-free U access design

D.
Layer 3 access design

E.
loop-free inverted U access design

Answer: B
Explanation:

Section: Enterprise Network Design Explanation

The topology diagram in this scenario represents the looped square access design. The looped
square access design and the looped triangle access design are Layer 2, looped access designs.
Both of these designs use Layer 2 trunk links between aggregation layer switches and rely on
Spanning Tree Protocol (STP) to resolve physical loops in the network. In the looped square
access design, each access layer switch has a single uplink to the aggregation layer. Additionally,
access layer switches also share a Layer 2 link between them that remains in a blocking state until
an uplink to the aggregation layer fails. In the event of an uplink failure, the shared link provides a
redundant path for access layer traffic to the aggregation layer. The Layer 2 topology of a looped
square access design resembles a square, as shown by the black, dotted lines in the diagram
below:

By contrast, the access layer switches in the looped triangle access design do not share a Layer 2
trunk link. Additionally, each access layer switch in this design has two uplinks to the aggregation
layer. These uplinks form a Layer 2 looped triangle, as shown by the black, dotted lines in the
diagram below:


Because the uplinks in a looped triangle access design form a Layer 2 loop, one of the uplinks
must remain in a blocking state until the active uplink fails. The blocking uplink provides a
redundant path for access layer traffic in the event of a failure of the active uplink. The looped
triangle access design is the most commonly implemented design in data centers today.

The topology diagram in this scenario does not represent the loop-free U access design. A loop-
free design is a design that contains no Layer 2 loops between the access layer and the
aggregation layer. Because there are no Layer 2 loops in a loop-free design, STP blocking is not
in effect for any of the uplinks between access layer and aggregation layer switches. In the loop-
free U access design, the Layer 2 topology resembles the letter U, as indicated by the dotted,
black lines in the diagram below:


Each access layer switch in this design provides a single Layer 2 uplink to the aggregation layer
and shares a Layer 2 link to an adjacent access layer switch. The shared link is typically an
802.1Q trunk link and enables each access layer switch to share virtual LAN (VLAN) information.
Additionally, the trunk link provides a redundant path for access layer traffic if an uplink to the
aggregation layer fails. The link between the aggregation layer switches in this design is a Layer 3
link. Because this link is not a Layer 2 link, services that rely on Layer 2 adjacency for state
awareness, such as Hot Standby Router Protocol (HSRP), are not supported.

The topology diagram in this scenario does not represent the loop-free inverted U access design.
Like the loop-free U access design, the loop-free inverted U access design contains no Layer 2
loops between the access layer and the aggregation layer. However, unlike the loop-free U access
design, the loop-free inverted U access design does not contain Layer 2 trunk links between
access layer switches. Instead, the aggregation layer switches are interconnected by Layer 2 trunk
links. These Layer 2 trunk links enable access layer VLANs to span the aggregation layer and also
to serve as redundant paths for access layer traffic in the event of an access layer uplink failure.
However, because the access layer switches are not interconnected by Layer 2 trunk links, single-
attached devices at the access layer can be cut off from the network if their access layer switch
suffers an uplink failure. The Layer 2 topology of a loop-free inverted U access design resembles
an inverted U, as indicated by the dotted, black lines in the diagram below:


The topology diagram in this scenario does not represent the Layer 3 access design. In the Layer
3 access design, the uplinks between the access layer and aggregation layer switches are Layer 3
connections. Because the Layer 2 topology in this design is effectively reduced to the trunk link
between the access layer switches, Layer 2 loops are eliminated and all uplinks are in a
forwarding state. STP is no longer necessary in this design; however, Cisco recommends
configuring STP on ports that connect to access layer devices to prevent user-side loops from
entering the network. The Layer 3 uplinks in this design enable the access layer switches to use
routing information to implement load balancing across all available uplinks. It is important to
consider the performance limitations and capabilities of the access layer and aggregation layer
switches when implementing a routing solution in the Layer 3 access design. If performance is an
issue, static routes and stub routing can reduce processing load for the access layer and
aggregation layer switches while route summarization can reduce processing load for core
switches. The Layer 3 access design is represented by the diagram below:


Reference:

CCDA 200-310 Official Cert Guide, Chapter 3, Access Layer Best Practices, pp. 94-97

Cisco: Data Center Multi-Tier Model Design: Data Center Access Layer

QUESTION NO: 48

Which of the following IPv6 address spaces contains an inherent hierarchy that simplifies Internet
routing tables?

A.
anycast

B.
global unicast

C.
link-local

D.
broadcast

Answer: B
Explanation:
Section: Addressing and Routing Protocols in an Existing Network Explanation

The global unicast IP version 6 (IPv6) address space contains an inherent hierarchy that simplifies
Internet routing tables. A global unicast address, which is also referred to as an aggregatable
global address, is designed to minimize the size of Internet routing tables. A global unicast
address contains three distinct parts:

The Global Routing Prefix is a 48-bit field that is defined by the Internet service provider (ISP). The
SLA, or site-level aggregation identifier, is a 16-bit field that identifies a site and is analogous to a subnet in IPv4. The Interface ID is a
64-bit field that must be globally unique; therefore, it typically contains the Media Access Control
(MAC) address of the originating device in extended universal identifier (EUI-64) format. Because
there is an inherent hierarchy in the aggregatable global address scheme, these addresses lend
themselves to simple consolidation, which greatly reduces the complexity of Internet routing
tables.
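
The three-part structure described above can be illustrated with a short Python sketch. The address used is from the 2001:db8::/32 documentation range, chosen purely for illustration:

```python
import ipaddress

# Documentation-range address standing in for an aggregatable global address
addr = ipaddress.IPv6Address("2001:db8:acad:10::1")
value = int(addr)

# Split the 128-bit address into its three hierarchical parts
global_routing_prefix = value >> 80      # top 48 bits, assigned by the ISP
subnet_id = (value >> 64) & 0xFFFF       # next 16 bits, the site's subnet (SLA)
interface_id = value & ((1 << 64) - 1)   # low 64 bits, often EUI-64 from the MAC

print(f"Global routing prefix: {global_routing_prefix:012x}")  # 20010db8acad
print(f"Subnet ID:             {subnet_id:04x}")               # 0010
print(f"Interface ID:          {interface_id:016x}")           # 0000000000000001
```

Because every ISP-assigned prefix shares this fixed layout, Internet routers can aggregate all of a provider's customer routes under the 48-bit prefix alone.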

Anycast addresses are used to send packets to the closest device that is configured with the
anycast address. Therefore, an anycast address can be described as a one-to-nearest address.
The closest device is selected by the routing protocol that is used by the router. Because anycast
addresses use the same address for multiple devices in a group, anycast addresses are ideal for
load balancing.

Link-local addresses are unicast addresses used for communication over a single link. Routers do
not forward traffic sent to a link-local address; the traffic stays on the local link. These addresses
always begin with FE8, FE9, FEA, or FEB. Because link-local addresses are used to form
neighbor adjacencies, they should always be considered when creating access control lists (ACLs)
to filter traffic. If the ACL explicitly denies the relevant link-local addresses on a router, neighbor
relationships can fail after an ACL is applied to an interface.

IPv6 does not use broadcast addresses. A broadcast address is an IPv4 address that is used to
communicate with all devices in a broadcast domain at once.

Reference:

CCDA 200-310 Official Cert Guide, Chapter 9, Global Unicast Addresses, p. 342

Cisco: IPv6 Configuration Guide, Cisco IOS Release 15.2S: Aggregatable Global Address


QUESTION NO: 49

View the Exhibit.

Refer to the exhibit above. The Layer 3 switch on the right, DSW2, is the root bridge for all VLANs
in the topology. Devices on VLAN 10 use the Layer 3 switch on the left, DSW1, as a default
gateway. Devices on VLAN 20 use DSW2 as a default gateway. A device that is operating in
VLAN 10 and is connected to ASW3 transmits a packet that is destined beyond Router1. RPVST+
is enabled on all switches.

What path will the packet most likely take through the network?

A.
ASW3 > DSW2 > Router1

B.
ASW3 > DSW1 > Router1

C.
ASW3 > DSW2 > DSW1 > Router1

D.
ASW3 > DSW1 > DSW2 > Router1

Answer: C

Explanation:
Section: Enterprise Network Design Explanation

Most likely, the packet will travel from ASW3 to DSW2, to DSW1, and then to Router1. Because all
of the virtual LANs (VLANs) use DSW2 as the root bridge in this scenario, all traffic from the
access layer switches, regardless of VLAN, flows first to DSW2. Traffic from VLAN 20 is therefore
already optimized because VLAN 20 uses DSW2 as its default gateway. However, VLAN 10 uses
DSW1 as its default gateway. Therefore, traffic from VLAN 10 will most likely flow first to DSW2
and then across the PortChannel 1 EtherChannel interface to DSW1 for forwarding.

Because every access layer switch has been physically connected to every distribution layer
switch in this scenario, the topology contains a high level of redundancy. In addition, the use of
Rapid Per-VLAN Spanning Tree Plus (RPVST+) ensures that the network will converge quickly.
However, if you were to configure the root bridge on a per-VLAN basis, the network could be
further optimized because the location of the root switch could be optimized on a per-VLAN basis.
For example, configuring DSW1 as the preferred root bridge for devices that operate on VLAN 10
would cause VLAN 10 traffic from ASW1, ASW2, or ASW3 to flow directly to DSW1 for forwarding
to Router1. VLAN 20 traffic would remain optimized to flow directly to DSW2 from ASW1 or ASW3.

Reference:

CCDA 200-310 Official Cert Guide, Chapter 3, STP Design Considerations, pp. 101-103

Cisco: Inter-Switch Link and IEEE 802.1Q Frame Format: Background Theory

Cisco: Catalyst 3750X and 3560X Switch Software Configuration Guide, Release 12.2(55)SE:
Configuring the Switch Priority of a VLAN

QUESTION NO: 50

Which of the following subnet masks contains 4,094 valid host addresses?

A.
/19

B.
/20

C.
/21

D.
/22

E.
/23

Answer: B
Explanation:
Section: Addressing and Routing Protocols in an Existing Network Explanation

A /20 subnet mask contains 4,094 valid host addresses. A subnet mask specifies how many bits
belong to the network portion of a 32-bit IP address. The remaining bits in the IP address belong to
the host portion of the IP address. To determine how many host addresses are defined by a
subnet mask, use the formula 2^n - 2, where n is the number of bits in the host portion of the address.
A /20 subnet mask uses 12 bits for host addresses, so 2^12 - 2 equals 4,094 valid host addresses.
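
The 2^n - 2 calculation generalizes into a small helper; the function name below is arbitrary:

```python
def usable_hosts(prefix_length: int) -> int:
    """Return the number of valid host addresses for an IPv4 prefix length."""
    host_bits = 32 - prefix_length          # bits left over for the host portion
    return 2 ** host_bits - 2               # exclude network and broadcast addresses

# The answer choices from this question
for prefix in (19, 20, 21, 22, 23):
    print(f"/{prefix}: {usable_hosts(prefix)} valid host addresses")
# /20 yields 2^12 - 2 = 4094, matching the correct answer
```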

Although it is important to learn the formula for calculating valid host addresses, the following list
demonstrates the relationship between subnet masks and valid host addresses:

Reference:

CCDA 200-310 Official Cert Guide, Chapter 8, IPv4 Address Subnets, pp. 302-310

Cisco: IP Addressing and Subnetting for New Users


QUESTION NO: 51

In a campus network hierarchy, which of the following security functions does not typically occur at
the campus access layer?

A.
NAC

B.
packet filtering

C.
DHCP snooping

D.
DAI

Answer: B
Explanation:
Section: Considerations for Expanding an Existing Network Explanation

Packet filtering is typically implemented in the campus distribution layer, not the campus access
layer. The distribution layer of the campus network hierarchy is where access control lists (ACLs)
and inter-VLAN routing are typically implemented. The distribution layer serves as an aggregation
point for access layer network links. Because the distribution layer is the intermediary between the
access layer and the core layer, the distribution layer is the ideal place to enforce security policies,
provide load balancing, provide Quality of Service (QoS), and perform tasks that involve packet
manipulation, such as routing and packet filtering. Because the distribution layer connects to both
the access and core layers, it is often comprised of multilayer switches that can perform both
Layer 3 routing functions and Layer 2 switching.

Network Admission Control (NAC), Dynamic ARP Inspection (DAI), and Dynamic Host
Configuration Protocol (DHCP) snooping are performed at the campus access layer. The access
layer serves as a media termination point for devices, such as servers and hosts. Because access
layer devices provide access to the network, the access layer is the ideal place to classify traffic
and perform network admission control. NAC is a Cisco feature that prevents hosts from
accessing the network if they do not comply with organizational requirements, such as having an
updated antivirus definition file. DHCP snooping is a feature used to mitigate DHCP spoofing
attacks. In a DHCP spoofing attack, an attacker installs a rogue DHCP server on the network in an
attempt to intercept DHCP requests. The rogue DHCP server can then respond to the DHCP
requests with its own IP address as the default gateway address; hence, all traffic is routed
through the rogue DHCP server. DAI is a feature that can help mitigate Address Resolution
Protocol (ARP) poisoning attacks. In an ARP poisoning attack, which is also known as an ARP
spoofing attack, the attacker sends a gratuitous ARP (GARP) message to a host. The message
associates the attacker's MAC address with the IP address of a valid host on the network.
Subsequently, traffic sent to the valid host address will go through the attacker's computer rather
than directly to the intended recipient.

Reference:

Cisco: Campus Network for High Availability Design Guide: Access Layer

QUESTION NO: 52

Which of the following is a network architecture principle that represents the structured manner in
which the logical and physical functions of the network are arranged?

A.
modularity

B.
hierarchy

C.
top-down

D.
bottom-up

Answer: B
Explanation:
Section: Design Objectives Explanation

The hierarchy principle is the structured manner in which both the physical and logical functions of
the network are arranged. A typical hierarchical network consists of three layers: the core layer,
the distribution layer, and the access layer. The modules between these layers are connected to
each other in a fashion that facilitates high availability. However, each layer is responsible for
specific network functions that are independent from the other layers.

The core layer provides fast transport services between buildings and the data center. The
distribution layer provides link aggregation between layers. Because the distribution layer is the
intermediary between the access layer and the campus core layer, the distribution layer is the
ideal place to enforce security policies, provide load balancing, provide Quality of Service (QoS),
and perform tasks that involve packet manipulation, such as routing. The access layer, which
typically comprises Open Systems Interconnection (OSI) Layer 2 switches, serves as a media
termination point for devices, such as servers and workstations. Because access layer devices
provide access to the network, the access layer is the ideal place to perform user authentication
and to institute port security. High availability, broadcast suppression, and rate limiting are also
characteristics of access layer devices.

The modularity network architecture principle is most likely to facilitate troubleshooting. The
modularity and hierarchy principles are complementary components of network architecture. The
modularity principle is used to implement a degree of isolation among network components. This
ensures that changes to any given component have little to no effect on the rest of the network. Modularity also simplifies the
troubleshooting process by limiting the task of isolating the problem to the affected module.

The modularity principle typically consists of two building blocks: the access distribution block and
the services block. The access distribution block contains the bottom two layers of a three-tier
hierarchical network design. The services block, which is a newer building block, typically contains
services like routing policies, wireless access, tunnel termination, and Cisco Unified
Communications services.

Top-down and bottom-up are both network design models, not network architecture principles.
The top-down network design approach is typically used to ensure that the eventual network build
will properly support the needs of the network's use cases. For example, a dedicated customer
service call center might first evaluate communications and knowledgebase requirements prior to
designing and building out the call center's network infrastructure. In other words, a top-down
design approach typically begins at the Application layer, or Layer 7, of the OSI reference model and works down the model to the
Physical layer, or Layer 1.

In contrast to the top-down approach, the bottom-up approach begins at the bottom of the OSI
reference model. Decisions about network infrastructure are made first, and application
requirements are considered last. This approach to network design can often lead to frequent
network redesigns to account for requirements that have not been met by the initial infrastructure.

Reference:

CCDA 200-310 Official Cert Guide, Chapter 2, Cisco Enterprise Architecture Model, pp. 49-50

Cisco: Enterprise Campus 3.0 Architecture: Overview and Framework: Hierarchy

QUESTION NO: 53 DRAG DROP

From the left, select the characteristics that apply to a small branch office, and drag them to the
right.

Answer:

Explanation:


Section: Enterprise Network Design Explanation

A small branch office typically uses a single Integrated Services Router (ISR), combines LAN and
WAN termination, and does not include a distribution layer. Cisco defines a small branch office as
an office that contains up to 50 users and that implements a one-tier design. A single-tier design
combines LAN and WAN termination into a single ISR. A redundant link to the access layer
can be created if the ISR uses an EtherChannel topology rather than a trunked topology, which offers
no link redundancy. Because a small branch office uses a single ISR, such as the ISR G2, to
provide LAN and WAN services, an external access switch, such as the Cisco 2960, is not
necessary. In addition, Rapid Per-VLAN Spanning Tree Plus (RPVST+) is not supported on most
ISR platforms.

Medium and large branch offices typically use RPVST+ and external access switches. RPVST+ is
an advanced spanning tree algorithm that can prevent loops on a switch that handles multiple
virtual LANs (VLANs). RPVST+ is typically supported only on external switches and advanced
routing platforms. External access switches provide high-density LAN connectivity to individual
hosts and typically aggregate links on distribution layer switches.

Cisco defines a medium branch office as an office that contains between 50 and 100 users and
that implements a two-tier design. A dual-tier design separates LAN and WAN termination into
multiple devices. A medium branch office typically uses two ISRs, with one ISR serving as a
connection to the headquarters location and the second serving as a connection to the Internet. In
addition, the two ISRs are typically connected by at least one external switch that also serves as
an access layer switch for the branch users.

Cisco defines a large branch office as an office that contains between 100 and 200 users and that
implements a three-tier design. Similar to a dual-tier design, a triple-tier design separates LAN and
WAN termination into multiple devices. However, a triple-tier design separates additional services,
such as firewall functionality and intrusion detection. A large branch office typically uses at least
one dedicated device for each network service. Whereas small and medium branch offices consist
of only an edge layer and an access layer, the large branch office also includes a distribution
layer.

Reference:

CCDA 200-310 Official Cert Guide, Chapter 7, Enterprise Branch Profiles, pp. 275-279

Cisco: LAN Baseline Architecture Branch Office Network Reference Design Guide: Small Office
Design (PDF)

Cisco: LAN Baseline Architecture Branch Office Network Reference Design Guide: Branch LAN
Design Options (PDF)

QUESTION NO: 54

Which of the following statements is true regarding route summarization?

A.
Summarization increases routing protocol convergence times.

B.
Summarization must be performed on classless network boundaries.

C.
Summarization causes a router to advertise more routes to its peers.

D.
Summarization can reduce the amount of bandwidth used by a routing protocol.

E.
Summarization cannot be performed on a group of contiguous networks.

Answer: D
Explanation:
Section: Addressing and Routing Protocols in an Existing Network Explanation

Route summarization can reduce the amount of bandwidth used by a routing protocol.
Summarization is the process of advertising a group of contiguous networks as a single route.
When a router performs summarization, the router advertises a summary route rather than routes
to each individual subnetwork. Summarization can cause a routing protocol to converge faster and
can reduce the consumption of network bandwidth, because only a single summary route will be
advertised by the routing protocol. For example, summarizing routes from the distribution layer to
the core layer of a hierarchical network enables the distribution layer devices to limit the number of
routing advertisements that are sent to the core layer devices. Because fewer advertisements are
sent, the routing tables of core layer devices are kept small and access layer topology changes
are not advertised into the core layer.
You can configure a router to summarize its networks on either classful or classless network
boundaries. When combining routes to multiple subnetworks into a single summarized route, you
must take bits away from the subnet mask. For example, consider a router that has interfaces
connected to the 16 contiguous networks from 10.10.0.0/24 through 10.10.15.0/24. The routing
table would contain a route to each of the 16 networks. The 16 contiguous networks can be
summarized in 4 bits (2^4 = 16). Taking 4 bits away from the 24-bit subnet mask yields a 20-bit
mask, which is 255.255.240.0. Thus the network and subnet mask combination of 10.10.0.0
255.255.240.0 encompasses all 16 networks. The process of taking bits away from the subnet
mask to more broadly encompass multiple subnetworks is called supernetting. This is the opposite
of subnetting, which divides a network into smaller subnetworks.
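
The 10.10.0.0/24 through 10.10.15.0/24 example above can be reproduced with Python's standard ipaddress module, whose collapse_addresses function performs exactly this kind of summarization:

```python
import ipaddress

# The 16 contiguous /24 networks from 10.10.0.0/24 through 10.10.15.0/24
networks = [ipaddress.ip_network(f"10.10.{i}.0/24") for i in range(16)]

# collapse_addresses merges contiguous prefixes into the fewest covering routes
summary = list(ipaddress.collapse_addresses(networks))
print(summary)  # [IPv4Network('10.10.0.0/20')]

# The /20 mask is 255.255.240.0, as described above
print(summary[0].netmask)  # 255.255.240.0
```

A router advertising this single /20 summary in place of 16 individual /24 routes sends fewer updates, which is how summarization reduces routing protocol bandwidth consumption.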

Reference:

CCDA 200-310 Official Cert Guide, Chapter 11, Route Summarization, pp. 455-458

Cisco: IP Routing Frequently Asked Questions: What does route summarization mean?

Cisco: IP Addressing and Subnetting for New Users

QUESTION NO: 55

Which of the following queuing methods is the most appropriate for handling voice, video, mission-
critical, and lower-priority traffic?

A.
FIFO

B.
WFQ

C.
LLQ

D.
CBWFQ

Answer: C
Explanation:
Section: Considerations for Expanding an Existing Network Explanation

Of the choices provided, low-latency queuing (LLQ) is the most appropriate queuing method for
handling voice, video, mission-critical, and lower-priority traffic. LLQ supports the creation of up to
64 user-defined traffic classes as well as one or more strict-priority queues that can be used to
guarantee bandwidth for delay-sensitive traffic, such as voice and video traffic. Each strict-priority
queue can use as much bandwidth as possible but can use only the guaranteed bandwidth when
other queues have traffic to send, thereby avoiding bandwidth starvation. Cisco recommends
limiting the strict-priority queues to a total of 33 percent of the link capacity.

Class-based weighted fair queuing (CBWFQ) provides bandwidth guarantees, so it can be used
for voice, video, mission-critical, and lower-priority traffic. However, CBWFQ does not provide the
delay guarantees provided by LLQ, because CBWFQ does not provide support for strict-priority
queues. CBWFQ improves upon weighted fair queuing (WFQ) by enabling the creation of up to 64
custom traffic classes, each with a guaranteed minimum bandwidth.

Although WFQ can be used for voice, video, mission-critical, and lower-priority traffic, it does not
provide the bandwidth guarantees or the strict-priority queues that are provided by LLQ. WFQ is
used by default on Cisco routers for serial interfaces at 2.048 Mbps or lower. Traffic flows are
identified by WFQ based on source and destination IP address, port number, protocol number,
and Type of Service (ToS). Although WFQ is easy to configure, it is not supported on high-speed
links.

First-in-first-out (FIFO) queuing is the least appropriate for voice, video, mission-critical, and lower-
priority traffic. By default, Cisco uses FIFO queuing for interfaces faster than 2.048 Mbps. FIFO
queuing requires no configuration because all packets are arranged into a single queue. As the
name implies, the first packet received is the first packet transmitted, without regard for packet
type, protocol, or priority.

Reference:

CCDA 200-310 Official Cert Guide, Chapter 6, Low-Latency Queuing, p. 235

Cisco: Enterprise QoS Solution Reference Network Design Guide: Queuing and Dropping
Principles

Cisco: Signalling Overview: RSVP Support for Low Latency Queueing

QUESTION NO: 56

To which of the following high-availability resiliency levels do duplicate power supplies belong?

A.
management

B.
monitoring
C.
network

D.
system

Answer: D
Explanation:
Section: Enterprise Network Design Explanation

Duplicate power supplies are a system-level resiliency component of a high-availability solution.


High-availability solutions feature redundant components that provide protection in the event that a
primary component fails. Cisco defines three components of a high-availability solution: network-
level resiliency, system-level resiliency, and management and monitoring. System-level resiliency
components provide failover protection for system hardware components. Duplicate power
supplies ensure that critical system components can maintain power in the event of a failure of the
primary power supply.

Duplicate power supplies are not an example of management and monitoring resiliency
components. Management and monitoring is a resiliency component used to quickly detect
changes to various components of a high-availability solution. Examples of the monitoring
component include Syslog. Syslog is used to gather information about the state of network
components and to compile them in a centralized location. This allows administrators to gain
information regarding the state of network or system components without having to log on to each
device on the network.

Duplicate power supplies are not an example of network-level resiliency components. Network-
level resiliency features redundant network devices, such as backup switches. In addition, network
resiliency features duplicate links that can be used to maintain communication between network
devices if the primary link fails. When you increase network resiliency by adding redundant links to
a network design, you should also configure link management protocols, such as Spanning Tree
Protocol (STP), to ensure that the redundant links do not generate loops within the network.

Reference:

Cisco: Deploying High Availability in the Wiring Closet Q&A

QUESTION NO: 57

Which of the following statements are true about OSPF and EIGRP? (Choose two.)

A.
Both use a DR and a BDR.

B.
Both use a DIS.

C.
Both can operate on an NBMA point-to-multipoint network.

D.
Both can operate on an NBMA point-to-point network.

E.
Both perform automatic route summarization.

F.
Both use areas to limit the flooding of database updates.

Answer: C,D
Explanation:
Section: Addressing and Routing Protocols in an Existing Network

Enhanced Interior Gateway Routing Protocol (EIGRP) and Open Shortest Path First (OSPF) can
operate on non-broadcast multi-access (NBMA) point-to-point networks and NBMA point-to-
multipoint networks. Because NBMA networks, such as Frame Relay and Asynchronous Transfer
Mode (ATM), do not support Data Link layer broadcasts, routing protocols that operate on NBMA
networks must support methods of neighbor discovery and route advertisement that do not rely on
multicast or broadcast transmission methods. Although subinterfaces can be used to treat an
NBMA point-to-multipoint network as a series of point-to-point connections, you are not required to
configure subinterfaces for NBMA point-to-multipoint networks with EIGRP and OSPF.

EIGRP, not OSPF, performs automatic route summarization. Summarization is a method that can
be used to advertise a group of contiguous networks as a single route. You can configure a router
to summarize its networks on either classful or classless network boundaries. When a router
performs summarization, the router advertises a summary route rather than routes to each
individual subnetwork, which can cause a routing protocol to converge faster. This can also reduce
unnecessary consumption of network bandwidth, because only a single summary route will be
advertised by the routing protocol. EIGRP is capable of performing summarization on any EIGRP
interface. By contrast, OSPF supports summarization at border routers and redistribution
summarization.

OSPF, not EIGRP, uses a designated router (DR) and a backup designated router (BDR) as focal
points for routing information. Only the DR distributes link-state advertisements (LSAs) that
contain OSPF routing information to all the OSPF routers in the area. A DR and a BDR are elected
only on multiaccess networks; they are not elected on point-to-point networks. If the DR fails or is
powered off, the BDR takes over for the DR and a new BDR is elected.
Intermediate System-to-Intermediate System (ISIS), not EIGRP or OSPF, uses a designated
intermediate system (DIS). A DIS is functionally equivalent to an OSPF DR. The DIS serves as a
focal point for the distribution of routing information. Once elected, the DIS must relinquish its
duties if another router with a higher priority joins the network. If the DIS is no longer detected on
the network, a new DIS is elected based on the priority of the remaining routers on the network
segment.

OSPF, not EIGRP, uses areas to limit the flooding of database updates, thereby keeping routing
tables small and update traffic low within each area. By contrast, EIGRP uses stub routers to limit
EIGRP queries.

An EIGRP stub router advertises only a specified set of routes.

Reference:

CCDA 200-310 Official Cert Guide, Chapter 11, OSPFv2 Summary, p. 439

CCDA 200-310 Official Cert Guide, Chapter 10, EIGRP for IPv4 Summary, p. 406

Cisco: Configuration Notes for the Implementation of EIGRP over Frame Relay and Low Speed
Links: NBMA Interfaces (Frame Relay, X.25, ATM)

Cisco: OSPF Design Guide: Adjacencies on Non-Broadcast Multi-Access (NBMA) Networks

QUESTION NO: 58

STP is disabled by default in which of the following Layer 2 access designs?

A.
Flex Link

B.
loop-free U

C.
looped triangle

D.
loop-free inverted U

E.
looped square

Answer: A
Explanation:
Section: Enterprise Network Design

Spanning Tree Protocol (STP) is disabled by default in Flex Link designs. STP prevents switching
loops on a network. Switching loops can occur when there is more than one switched path to a
destination. The spanning tree algorithm determines the best path through a switched network,
and any ports that create redundant paths are blocked. If the best path becomes unavailable, the
network topology is recalculated and the port connected to the next best path is unblocked. There
are no loops in a Flex Link design, and STP is disabled when a device is configured to participate
in a Flex Link. Interface uplinks in this topology are configured in active/standby pairs, and each
device can only belong to a single Flex Link pair. In the event of an uplink failure, the standby link
becomes active and takes over, thereby offering redundancy when an access layer uplink fails.
Possible disadvantages of the Flex Link design include its inability to return to the original state
after a failed link is recovered, its increased convergence time over other designs, and its inability
to run STP in order to block redundant paths that might be created by inadvertent errors in cabling
or configuration.

STP is not disabled by default in loop-free inverted U designs. Loop-free inverted U designs offer
redundancy at the aggregation layer, not the access layer; therefore, traffic will black-hole upon
failure of an access switch uplink. All uplinks are active with no looping, thus there is no STP
blocking by default. However, STP is still essential so that redundant paths that might be created
by any inadvertent errors in cabling or configuration are blocked.

STP is not disabled by default in loop-free U designs. This topology offers a redundant link
between access layer switches as well as a redundant link at the aggregation layer. Because of
the redundant path in both layers, extending a virtual LAN (VLAN) beyond an individual access
layer pair would create a loop; therefore, loop-free U designs cannot support VLAN extensions.
Like loop-free inverted U designs, loop-free U designs also run STP and have issues with traffic
being black-holed upon failure of an access switch uplink.

STP is not disabled by default in looped triangle designs. A looped triangle design can provide
deterministic convergence in the event of a link failure. In a triangle design, each access layer
device has direct paths to redundant aggregation layer devices. The ability to recover from a failed
link in this design is granted by redundant physical connections that are blocked by Rapid STP
(RSTP) until the primary connection fails. RSTP is an evolution of STP that provides faster
convergence. RSTP achieves this by merging the disabled, blocking, and listening states into a
single state, called the discarding port state. With fewer port states to transition through,
convergence is faster. A looped triangle topology is currently the most common design in
enterprise data centers.

STP is not disabled by default in looped square designs. Like a looped triangle, a looped square
design can provide deterministic convergence through redundant connections. However, the
difference between the two is that in a looped square the redundant link exists between the access
layer devices themselves, whereas in a looped triangle the redundant link exists between the
access layer devices and the aggregation layer devices. In a looped square, the connection
between the access layer devices is blocked by STP until a primary link failure occurs.

Reference:

Cisco: Data Center Access Layer Design: FlexLinks Access Model

QUESTION NO: 59


Your company is opening a branch office that will contain 29 host computers. Your company has
been allocated the 192.168.10.0/24 address range, and you have been asked to conserve IP
address space when creating a subnet for the new branch office.

Which of the following network addresses should you use for the new branch office?

A.
192.168.10.0/25

B.
192.168.10.32/26

C.
192.168.10.64/26

D.
192.168.10.64/27

Answer: D
Explanation:
Section: Addressing and Routing Protocols in an Existing Network

You should use the 192.168.10.64/27 network address for the new branch office. The /27 notation
indicates that 27 bits are used for the network portion of the address and that five bits remain for
the host portion of the address, which allows for 32 (2^5) addresses. The first address is the
network address, the last address is the broadcast address, and the other 30 (2^5 - 2) addresses
are usable host addresses. Therefore, this address range is large enough to handle a subnet
containing 29 host computers.
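As a quick sanity check of the /27 arithmetic above, here is a short sketch using Python's
standard ipaddress module:

```python
import ipaddress

# Verify the network address, broadcast address, and usable host count
# for the 192.168.10.64/27 subnet described above.
net = ipaddress.ip_network("192.168.10.64/27")

print(net.network_address)    # 192.168.10.64 (network address)
print(net.broadcast_address)  # 192.168.10.95 (broadcast address)
print(net.num_addresses - 2)  # 30 usable host addresses
```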

You should always begin allocating address ranges starting with the largest group of hosts to
ensure that the entire group has a large, contiguous address range available. Subnetting a
contiguous address range in structured, hierarchical fashion enables routers to maintain smaller
routing tables and eases administrative burden when troubleshooting.

You should not use the 192.168.10.0/25 network address for the new branch office. The /25
notation indicates that 25 bits are used for the network portion of the address and that 7 bits
remain for the host portion of the address, which allows for 126 (2^7 - 2) usable host addresses.
Although this address range is large enough to handle the new branch office subnet, it does not
conserve IP address space, because a smaller range can successfully be used.

You should not use the 192.168.10.32/26 network address for the new branch office. Although a
26-bit mask is large enough for 62 usable host addresses, the 192.168.10.32 address is not a valid
network address for a 26-bit mask. The 192.168.10.0/24 address range can be divided into four
ranges, each with 64 addresses, by using a 26-bit mask:

192.168.10.0/26

192.168.10.64/26

192.168.10.128/26

192.168.10.192/26
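The four /26 ranges above can be enumerated programmatically; a short sketch using Python's
standard ipaddress module:

```python
import ipaddress

# Enumerate the four /26 subnets of 192.168.10.0/24 listed above.
parent = ipaddress.ip_network("192.168.10.0/24")
for subnet in parent.subnets(new_prefix=26):
    print(subnet)
# -> 192.168.10.0/26, 192.168.10.64/26, 192.168.10.128/26, 192.168.10.192/26
```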

You should not use the 192.168.10.64/26 network address for the new branch office. The /26
notation indicates that 26 bits are used for the network portion of the address and that six bits
remain for the host portion of the address, which allows for 62 (2^6 - 2) usable host addresses. Although
this address range is large enough to handle the new branch office subnet, it does not conserve IP
address space, because a smaller range can successfully be used.

Although it is important to learn the formula for calculating valid host addresses, the following list
demonstrates the relationship between common subnet masks and valid host addresses:

/25 (255.255.255.128) - 126 usable hosts
/26 (255.255.255.192) - 62 usable hosts
/27 (255.255.255.224) - 30 usable hosts
/28 (255.255.255.240) - 14 usable hosts
/29 (255.255.255.248) - 6 usable hosts
/30 (255.255.255.252) - 2 usable hosts
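These counts all follow the same formula: with h host bits, a subnet has 2^h - 2 usable
addresses. A minimal Python sketch that reproduces the figures for common prefix lengths:

```python
# Usable host addresses per prefix length: 2**(32 - prefix) - 2
# (two addresses are reserved for the network and broadcast addresses).
for prefix in range(25, 31):
    print(f"/{prefix}: {2 ** (32 - prefix) - 2} usable hosts")
```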
Reference:

CCDA 200-310 Official Cert Guide, Chapter 8, IPv4 Address Subnets, pp. 302-310

CCDA 200-310 Official Cert Guide, Chapter 8, Plan for a Hierarchical IP Address Network, pp.
311-312

Cisco: IP Addressing and Subnetting for New Users

QUESTION NO: 60

You are planning a network by using the top-down design method. You are using structured
design principles to generate a model of the completed system.

Which of the following are you most likely to consider when creating the model? (Choose four.)

A.
business goals

B.
future network services

C.
network protocols
D.
technical objectives

E.
applications

F.
network topologies

G.
network components

Answer: A,B,D,E
Explanation:
Section: Design Methodologies

Most likely, you will consider business goals, existing and future network services, technical
objectives, and applications when using structured design principles to generate a model of a
completed system that is being planned by using the top-down design method. The
top-down network design approach is typically used to ensure that the eventual network build will
properly support the needs of the network's use cases. In other words, a top-down design
approach typically begins at the Application layer, or Layer 7, of the Open Systems
Interconnection (OSI) reference model and works down the model to the Physical layer, or Layer
1. In order for the designer and the organization to obtain a complete picture of the design, the
designer should create models that represent the logical functionality of the system, the physical
functionality of the system, and the hierarchical layered functionality of the system.

Because a top-down design model of the completed system is intended to provide an overview of
how the system functions, lower OSI-layer specifics such as network protocols should not be
included in the model. Therefore, you should not consider the network protocols that will be
implemented. Nor should you consider the network topologies or network hardware components.
Those components of the design should be assessed in more specific detail in the lower layers of
the OSI reference model.

Reference:

Cisco: Using the Top-Down Approach to Network Design: Structured Design Principles (Flash)

QUESTION NO: 61

Which of the following best describes route summarization?

A.
It increases the scalability of a network design by facilitating the coexistence of multiple routing
protocols.

B.
It protects the network from unnecessary vulnerabilities or downtime that might be caused by
having a single point of failure.

C.
It enables a router to advertise multiple contiguous subnets as a single, larger subnet.

D.
It can be used in conjunction with redistribution to block route advertisements that could create
routing loops.

Answer: C
Explanation:
Section: Enterprise Network Design

Route summarization is an advanced routing feature that enables a router to advertise multiple
contiguous subnets as a single, larger subnet. Summarization, which is also known as
supernetting, combines several smaller subnets into one larger subnet. This enables routers on
the network to maintain a single summarized route in their routing tables. Therefore, fewer routes
are advertised by the routers, which reduces the amount of bandwidth required for routing update
traffic. Route summarization is most efficient when the subnets can be summarized within a single
subnet boundary and are contiguous, meaning that all of the subnets are consecutive.
Summarization is typically performed between the enterprise campus core and the enterprise
edge. Advanced routing features, such as summarization, route filtering, and redistribution, can
greatly impact the functionality and scalability of a network and, thus, should be carefully
considered during the network design process.
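As an illustration of summarization, Python's standard ipaddress module can collapse contiguous
subnets into their single summary route; the subnets used here are hypothetical:

```python
import ipaddress

# Four contiguous /26 subnets can be advertised as one /24 summary route
# (supernet), reducing the number of routes in neighboring routing tables.
subnets = [ipaddress.ip_network(f"192.168.10.{i}/26") for i in (0, 64, 128, 192)]
summary = list(ipaddress.collapse_addresses(subnets))
print(summary[0])  # 192.168.10.0/24
```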

Redundancy, not route summarization, is the repetition built into a network design to protect the
network from unnecessary vulnerabilities or downtime that might be caused by having a single
point of failure. Simply put, redundancy is having a backup plan in place that can be used in the
event that the primary plan becomes unavailable. For example, multiple physical links between
two switches could be used to promote redundancy.

Redistribution, not route summarization, is an advanced routing feature that increases the
scalability of a network design by facilitating the coexistence of multiple routing protocols. For
example, to join networks at multiple locations where one is running Enhanced Interior Gateway
Routing Protocol (EIGRP) and the other is running Open Shortest Path First (OSPF), Cisco
recommends that you configure two-way redistribution with route map filters at each location.
Redistribution is typically performed by routers between the enterprise campus core and the
enterprise edge.

Route filtering, not route summarization, is an advanced routing feature that can be used in
conjunction with redistribution to block route advertisements that could create routing loops.
Routing loops occur when a topology change or a delayed routing update results in two routers
pointing to each other as the next hop to a destination. For example, Router1 has a path to
Router2 that begins with Router3, and Router3 has a path to Router2 that begins with Router1.
Since both Router1 and Router3 send data to each other that is intended for Router2, they will
continuously bounce the data back and forth between them, thus forming a loop. In order to
prevent this loop, a route filter could be used to stop the path from Router1 to Router2 from being
advertised to Router3. Consequently, when Router3 receives data from Router1 that is intended
for Router2, the only route available is its own path directly to Router2. Because route filtering is
often used in conjunction with redistribution, route filtering is typically performed by routers
between the enterprise campus core and the enterprise edge.

Reference:

Cisco: OSPF Design Guide: OSPF and Route Summarization

QUESTION NO: 62

Which of the following is least likely to be a concern when installing a server in a third-party data
center?

A.
airflow

B.
the demarcation point

C.
rack security

D.
vertical rack space

Answer: B
Explanation:
Section: Considerations for Expanding an Existing Network

Of the choices provided, the least likely concern when installing a server in a third-party data
center is the demarcation point, or demarc. The demarc is the termination point between a
physical location and its service provider. In other words, it is the point where the responsibility of
the physical location ends and the responsibility of the service provider begins. At a third-party
data center, the demarc is the responsibility of the data center provider and its service provider, not
of the data center's customers.

Rack security is likely to be a concern when installing a server in a third-party data center.
Commercial data centers house devices for multiple customers within the same physical area.
Although many data centers are physically secured against intruders who might steal or modify
equipment, the data center's other customers have the same access to the physical area that you
do. Therefore, you should install physical security mechanisms, such as a lock, at the rack level to
ensure that your company's devices cannot be accessed by others.

Rack space is likely to be a concern when installing a server in a third-party data center. Although
most racks adhere to a standard width of 19 inches (about 48 centimeters), a certain number of
units (U), or height, of space must be available within a rack to allow the installation of your
equipment and to allow space between your equipment and other equipment that is contained
within the rack. A U is equivalent to 1.75 inches (about 4.5 centimeters) of height. Therefore, if the
device you want to install is a 2U device, the rack should have at least 3.5 inches (about 9
centimeters) of available space to accommodate the device and more to allow for space above
and below the device.
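The rack-unit arithmetic above is straightforward; a minimal sketch:

```python
# Rack-unit (U) arithmetic from the paragraph above: 1U = 1.75 inches.
U_INCHES = 1.75
device_height_u = 2  # a 2U device, as in the example
print(device_height_u * U_INCHES)  # 3.5 inches of vertical space, minimum
```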

Airflow is likely to be a concern when installing a server in a third-party data center. When
configuring devices in a rack, it is important to allow proper airflow around the devices so that they
do not overheat. However, you should also choose a data center that provides environmental
controls. For example, a hot and cold aisle layout is a data center design that attempts to control
the airflow within the room in order to mitigate problems that can result from overheated servers; it
essentially prevents hot air from mixing with cold air. A raised floor layout is a data center design
that puts the heating, ventilation, and air conditioning (HVAC) ductwork below the floor tiles. The
tiles, which are typically located in the aisles between the server racks in this type of environment,
are perforated so that airflow can be directed and concentrated in the exact locations desired.

Reference:

CCDA 200-310 Official Cert Guide, Chapter 4, Data Center Facility Aspects, pp. 136-138

QUESTION NO: 63

Which of the following network issues are not likely to be mitigated by using a modular
architecture? (Choose two.)

A.
hardware failures

B.
physical link failures

C.
application failures

D.
poor scalability

E.
poor redundancy

Answer: C,E
Explanation:
Section: Design Objectives

Application failures and poor redundancy are not likely to be mitigated by using a modular
architecture. Poor redundancy and resiliency are more likely to be mitigated by a full-mesh
topology. However, full-mesh topologies restrict scalability. Application failures can be mitigated by
server redundancy.

Most likely, hardware failures, physical link failures, and poor scalability can be mitigated by using
a modular architecture. The modularity and hierarchy principles are complementary components
of network architecture. The modularity principle is used to implement an amount of isolation
among network components. This ensures that changes to any given component have little to no
effect on the rest of the network. Thus hardware failures and physical link failures, which are
detrimental to network stability and reliability, are less likely to cause system-wide issues.
Modularity facilitates scalability because it allows changes or growth to occur without system-wide
outages.

The hierarchy principle is the structured manner in which both the physical functions and the
logical functions of the network are arranged. A typical hierarchical network consists of three
layers: the core layer, the distribution layer, and the access layer. The modules between these
layers are connected to each other in a fashion that facilitates high availability. However, each
layer is responsible for specific network functions that are independent from the other layers.

The core layer provides fast transport services between buildings and the data center. The
distribution layer provides link aggregation between layers. Because the distribution layer is the
intermediary between the access layer and the campus core layer, the distribution layer is the
ideal place to enforce security policies, provide load balancing, provide Quality of Service (QoS),
and perform tasks that involve packet manipulation, such as routing. The access layer, which
typically comprises Open Systems Interconnection (OSI) Layer 2 switches, serves as a media
termination point for devices, such as servers and workstations. Because access layer devices
provide access to the network, the access layer is the ideal place to perform user authentication
and to institute port security. High availability, broadcast suppression, and rate limiting are also
characteristics of access layer devices.

Reference:

Cisco: Enterprise Campus 3.0 Architecture: Overview and Framework: Modularity

QUESTION NO: 64

View the Exhibit.

You have been asked to analyze the router configuration for the network shown in the exhibit.
Examine the following show command output for RouterA and RouterC:


You want to ensure that the routing protocol converges as quickly as possible, and you want to
eliminate the unnecessary consumption of network bandwidth.

Which of the following should you do?

A.
Use physical Ethernet interfaces instead of logical subinterfaces on RouterA.

B.
Use physical Ethernet interfaces instead of logical subinterfaces on RouterC.

C.
Remove the RIP routing protocol from the configuration of RouterA, and use EIGRP instead.

D.
Remove the RIP routing protocol from the configuration of RouterC, and use EIGRP instead.

E.
Issue the EIGRP auto-summary command on RouterA.

F.
Issue the EIGRP auto-summary command on RouterC.

Answer: E
Explanation:
Section: Addressing and Routing Protocols in an Existing Network

You should issue the Enhanced Interior Gateway Routing Protocol (EIGRP) auto-summary
command on RouterA. A summary route is used to advertise a group of contiguous networks as a
single route, thus reducing the size of the routing table. Examination of the show ip route output
from RouterC indicates that RouterC has learned nine different routes via its Serial 1 interface.
The letter D that appears in the first column of each entry in the routing table indicates that these
routes were learned via EIGRP. Further examination of these nine routes indicates that eight of
these routes fall within the 172.16.0.0 classful network.

By comparing the show ip route output from RouterC with the show ip interface brief output from
RouterA, you can see that all of the 172.16.0.0 subnetworks known by RouterC are directly
connected networks on RouterA. Therefore, RouterC does not need to know about each individual
172.16.0.0 subnetwork on RouterA. RouterC only needs to know that traffic destined to any
172.16.0.0 subnetwork should be sent to RouterA. Issuing the EIGRP auto-summary command on
RouterA will cause RouterA to summarize its networks on classful boundaries and advertise only a
summary route rather than routes to each individual subnetwork. The advertisement of a single
summary route instead of routes to individual subnetworks will cause EIGRP to converge faster.
This will also reduce the unnecessary consumption of network bandwidth, because only a single
summary route will be advertised by EIGRP.
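Because the exhibit is not reproduced here, the following sketch uses hypothetical 172.16.x.0/24
subnets to illustrate why a single classful summary covers all of RouterA's subnetworks:

```python
import ipaddress

# Hypothetical RouterA subnetworks: eight /24 subnets that all fall within
# the 172.16.0.0/16 classful network, so advertising the single classful
# summary route reaches every one of them.
subnets = [ipaddress.ip_network(f"172.16.{i}.0/24") for i in range(8)]
classful_summary = ipaddress.ip_network("172.16.0.0/16")

print(all(s.subnet_of(classful_summary) for s in subnets))  # True
```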

Automatic summarization can cause problems when classful networks are discontiguous within a
network topology. A discontiguous subnet exists when a summarized route advertises one or more
subnets that should not be reachable through that route. Therefore, when discontiguous networks
in the same subnet exist in a topology, you should disable automatic summarization with the no
auto-summary command. In this scenario, you can deduce that the no auto-summary command
has been issued previously. In order to meet the requirements, set forth in the scenario, you need
to turn automatic summarization back on by issuing the auto-summary command on RouterA.

Issuing the auto-summary command on RouterC would not meet the objectives set forth in the
scenario. EIGRP will only summarize routes to directly connected networks when the auto-
summary command is issued. Issuing the auto-summary command on RouterC would not cause
RouterC to summarize routes that it learns from RouterA.

The use of physical interfaces instead of logical subinterfaces has no effect on the speed at which
a routing protocol converges or the amount of bandwidth consumed. The use of physical
interfaces or logical subinterfaces is most often dictated by the network design or the type of
network media used. Typically, arbitrary choices between physical interfaces and logical
subinterfaces cannot be made.

There is nothing in the scenario or in the show output provided that indicates that the Routing
Information Protocol (RIP) routing protocol should be removed and replaced with EIGRP. On the
contrary, the letter D that appears in the first column of each route in the routing table indicates
that EIGRP is already being used. EIGRP routes are coded with the letter D, whereas the letter E
is used to denote Exterior Gateway Protocol (EGP) routes; EGP predates EIGRP. If any RIP routes
existed, they would be labeled with the letter R.

Reference:

CCDA 200-310 Official Cert Guide, Chapter 10, EIGRP Design, p. 404

CCDA 200-310 Official Cert Guide, Chapter 11, Route Summarization, pp. 455-458

Cisco: EIGRP Commands: auto-summary (EIGRP)

QUESTION NO: 65

Confidentiality, integrity, and authentication are features of which of the following protocols?

A.
GRE

B.
PPP

C.
IPSec

D.
PPPoE

Answer: C
Explanation:
Section: Enterprise Network Design

IP Security (IPSec) provides confidentiality, integrity, and authentication. IPSec is a framework of


protocols that can be used to provide security for virtual private network (VPN) connections. VPNs
provide secure communications over an unsecure network, such as the Internet. IPSec provides
data confidentiality by encrypting the data before it is sent over the connection. Because the data
is encrypted, an attacker who intercepts the data will be unable to read it. IPSec provides data
integrity by using checksums on each end of the connection. If the data generates the same
checksum value on each end of the connection, the data was not modified in transit. IPSec also
provides data authentication through various methods, including user name/password
combinations, preshared keys, digital certificates, and one-time passwords (OTPs).
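As a conceptual illustration of integrity checking (not IPSec's actual AH/ESP processing),
Python's standard hmac module can show how matching digests on both ends confirm that data was
not modified in transit; the key and payload here are hypothetical:

```python
import hashlib
import hmac

# Both ends compute a keyed digest (HMAC) over the payload with a preshared
# key; if the digests match, the data was not modified in transit.
key = b"preshared-key"        # hypothetical preshared key
payload = b"example payload"  # hypothetical data

sender_digest = hmac.new(key, payload, hashlib.sha256).digest()
receiver_digest = hmac.new(key, payload, hashlib.sha256).digest()
print(hmac.compare_digest(sender_digest, receiver_digest))  # True
```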

Generic Routing Encapsulation (GRE) is a protocol designed to tunnel any Layer 3 protocol
through an IP transport network. Because the focus of GRE is to transport many different
protocols, it has very limited security features. By contrast, IPSec has strong data confidentiality
and data integrity features, but it can transport only IP traffic. GRE over IPSec combines the best
features of both protocols to securely transport any protocol over an IP network. However, GRE
itself does not provide confidentiality, integrity, and authentication.

Point-to-Point Protocol (PPP) is a WAN protocol that can be used on point-to-point serial links.
PPP relies upon other protocols to provide authentication and security for the link. PPP itself does
not provide confidentiality, integrity, and authentication.

PPP over Ethernet (PPPoE) is typically used to initiate a session with a Digital Subscriber Line
(DSL) service provider. With PPPoE, PPP frames are encapsulated into Ethernet frames for
transmission to the service provider. Because PPP frames are not encrypted, PPPoE cannot
provide a secure connection.

PPPoE does not provide confidentiality, integrity, and authentication.

Reference:

CCDA 200-310 Official Cert Guide, Chapter 7, Enterprise Managed VPN: IPsec, pp. 255-259

Cisco: Configuring Security for VPNs with IPsec: IPsec Functionality Overview

QUESTION NO: 66

View the Exhibit.


You are designing an IP addressing scheme for the network in the exhibit above.

Each switch represents hosts that reside in separate VLANs. The subnets should be allocated to
match the following host capacities:

Router subnet: two hosts

SwitchA subnet: four hosts

SwitchB subnet: 10 hosts

SwitchC subnet: 20 hosts

SwitchD subnet: 50 hosts

You have chosen to subnet the 192.168.51.0/24 network.

Which of the following are you least likely to allocate?

A.
a /25 subnet

B.
a /26 subnet

C.
a /27 subnet

D.
a /28 subnet

E.
a /29 subnet

F.
a /30 subnet

Answer: A
Explanation:
Section: Addressing and Routing Protocols in an Existing Network

Of the available choices, you are least likely to allocate a /25 subnet. The largest broadcast
domain in this scenario contains 50 hosts. A /25 subnet can contain up to 126 assignable hosts. In
this scenario, allocating a /25 subnet would reserve half the 192.168.51.0/24 network for a single
virtual LAN (VLAN). The total number of hosts for which you need addresses in this scenario is 86.
Therefore, you would only need to use half the /24 subnet if all 86 hosts were residing in the same
VLAN.

You should begin allocating address ranges starting with the largest group of hosts to ensure that the entire group has a large, contiguous address range available. Subnetting a contiguous address range in a structured, hierarchical fashion enables routers to maintain smaller routing tables and eases the administrative burden when troubleshooting.

You are likely to use a /26 subnet. In this scenario, the largest VLAN contains 50 hosts. If you
were to divide the 192.168.51.0/25 subnet into two /26 subnets, the result would be two new
subnets capable of supporting up to 62 assignable hosts: the 192.168.51.0/26 subnet and the
192.168.51.64/26 subnet. Therefore, you should start subnetting with a /26 network. To maintain a
logical, hierarchical IP structure, you could then allocate the 192.168.51.64/26 subnet to SwitchD's
VLAN.

You are likely to use a /27 subnet. The next-largest broadcast domain in this scenario is the
SwitchC subnet, which contains 20 hosts. If you were to divide the 192.168.51.0/26 subnet into
two /27 subnets, the result would be two new subnets capable of supporting up to 30 assignable
hosts: the 192.168.51.0/27 subnet and the 192.168.51.32/27 subnet. To maintain a logical,
hierarchical IP structure, you could then allocate the 192.168.51.32/27 subnet to SwitchC's VLAN.

You are likely to use a /28 subnet. The next-largest broadcast domain in this scenario is the
SwitchB subnet, which contains 10 hosts. If you were to divide the 192.168.51.0/27 subnet into
two /28 subnets, the result would be two new subnets capable of supporting up to 14 assignable
hosts: the 192.168.51.0/28 subnet and the 192.168.51.16/28 subnet. To maintain a logical,
hierarchical IP structure, you could then allocate the 192.168.51.16/28 subnet to SwitchB's VLAN.

You are likely to use a /29 subnet. The next-largest broadcast domain in this scenario is the
SwitchA subnet, which contains four hosts. If you were to divide the 192.168.51.0/28 subnet into

"Pass Any Exam. Any Time." - www.actualtests.com 109


Cisco 200-310 Exam
two /29 subnets, the result would be two new subnets capable of supporting up to six assignable hosts: the 192.168.51.0/29 subnet and the 192.168.51.8/29 subnet. To maintain a logical, hierarchical IP structure, you could then allocate the 192.168.51.8/29 subnet to SwitchA's VLAN.

You are likely to use a /30 subnet. The final subnet in this scenario is the link between RouterA
and RouterB, which contains two hosts. If you were to divide the 192.168.51.0/29 subnet into two
/30 subnets, the result would be two new subnets capable of supporting two assignable hosts
each: the 192.168.51.0/30 subnet and the 192.168.51.4/30 subnet. To maintain a logical,
hierarchical IP structure, you could then allocate the 192.168.51.4/30 subnet to the link between
RouterA and RouterB. This would leave the 192.168.51.0/30 subnet unallocated. However, you
could further divide the 192.168.51.0/30 subnet into single /32 host addresses that could then be
used for loopback IP addressing on the routers.
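The allocation walk-through above can be checked with a little arithmetic. The sketch below is a Python illustration, not part of the exam material; the helper name prefix_for is ours. It finds the smallest prefix length whose assignable host count, 2^(32 - prefix) - 2, fits each group in the scenario:

```python
# Smallest prefix length whose subnet holds the required hosts.
# Assignable hosts in a /p subnet = 2^(32 - p) - 2, because the
# network and broadcast addresses are not assignable.
def prefix_for(hosts):
    p = 32
    while (2 ** (32 - p)) - 2 < hosts:
        p -= 1
    return p

requirements = [("SwitchD", 50), ("SwitchC", 20), ("SwitchB", 10),
                ("SwitchA", 4), ("Router link", 2)]

for name, hosts in requirements:
    p = prefix_for(hosts)
    print(f"{name}: /{p} ({(2 ** (32 - p)) - 2} assignable hosts)")
```

Running the sketch reproduces the allocations above, /26, /27, /28, /29, and /30, which is why a /25 subnet is never needed.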

Reference:

CCDA 200-310 Official Cert Guide, Chapter 8, IPv4 Address Subnets, pp. 302-310

CCDA 200-310 Official Cert Guide, Chapter 8, Plan for a Hierarchical IP Address Network, pp.
311-312

Cisco: IP Addressing and Subnetting for New Users

QUESTION NO: 67

Which of the following is a type of attack that can be mitigated by enabling DAI on campus access
layer switches?

A.
ARP poisoning

B.
VLAN hopping

C.
DHCP spoofing

D.
MAC flooding

Answer: A
Explanation:
Section: Considerations for Expanding an Existing Network

"Pass Any Exam. Any Time." - www.actualtests.com 110


Cisco 200-310 Exam
Dynamic ARP Inspection (DAI) can be enabled on campus access layer switches to mitigate
Address Resolution Protocol (ARP) poisoning attacks. In an ARP poisoning attack, which is also
known as an ARP spoofing attack, the attacker sends a gratuitous ARP (GARP) message to a
host. The message associates the attacker's media access control (MAC) address with the IP
address of a valid host on the network. Subsequently, traffic sent to the valid host address will go
through the attacker's computer rather than directly to the intended recipient. DAI protects against
ARP poisoning attacks by inspecting all ARP packets that are received on untrusted ports.

Dynamic Host Configuration Protocol (DHCP) spoofing attacks can be mitigated by enabling
DHCP snooping on campus access layer switches, not by enabling DAI. In a DHCP spoofing
attack, an attacker installs a rogue DHCP server on the network in an attempt to intercept DHCP
requests. The rogue DHCP server can then respond to the DHCP requests with its own IP
address as the default gateway address; hence all traffic is routed through the rogue DHCP
server. DHCP snooping is a feature of Cisco Catalyst switches that helps prevent rogue DHCP
servers from providing incorrect IP address information to hosts on the network. When DHCP
snooping is enabled, DHCP servers are placed onto trusted switch ports and other hosts are
placed onto untrusted switch ports. If a DHCP reply originates from an untrusted port, the port is
disabled and the reply is discarded.

Virtual LAN (VLAN) hopping attacks can be mitigated by disabling Dynamic Trunking Protocol
(DTP) on campus access layer switches, not by enabling DAI. A VLAN hopping attack occurs
when a malicious user sends frames over a VLAN trunk link; the frames are tagged with two
different 802.1Q tags, with the goal of sending the frame to a different VLAN. In a VLAN hopping
attack, a malicious user connects to a switch by using an access VLAN that is the same as the
native VLAN on the switch. If the native VLAN on a switch were VLAN 1, the attacker would
connect to the switch by using VLAN 1 as the access VLAN. The attacker would transmit packets
containing 802.1Q tags for the native VLAN and tags spoofing another VLAN. Each packet would
be forwarded out the trunk link on the switch, and the native VLAN tag would be removed from the
packet, leaving the spoofed tag in the packet. The switch on the other end of the trunk link would
receive the packet, examine the 802.1Q tag information, and forward the packet to the destination
VLAN, thus allowing the malicious user to inject packets into the destination VLAN even though
the user is not connected to that VLAN.

To mitigate VLAN hopping attacks, you should configure the native VLAN on a switch to an
unused value, remove the native VLAN from each end of the trunk link, place any unused ports
into a common unrouted VLAN, and disable DTP for unused and nontrunk ports. DTP is a Cisco-proprietary protocol that eases administration by automating the trunk configuration process.
However, for nontrunk links and for unused ports, a malicious user who has gained access to the
port could use DTP to gain access to the switch through the exchange of DTP messages. By
disabling DTP, you can prevent a user from using DTP messages to gain access to the switch.

MAC flooding attacks can be mitigated by enabling port security on campus access layer switches,
not by enabling DAI. In a MAC flooding attack, an attacker generates thousands of forged frames
every minute with the intention of overwhelming the switch's MAC address table. Once this table is
flooded, the switch can no longer make intelligent forwarding decisions and all traffic is flooded.
This allows the attacker to view all data sent through the switch because all traffic will be sent out

"Pass Any Exam. Any Time." - www.actualtests.com 111


Cisco 200-310 Exam
each port. Implementing port security can help mitigate MAC flooding attacks by limiting the number of MAC addresses that can be learned on each interface to a configured maximum. A MAC flooding attack is also known as a Content Addressable Memory (CAM) table overflow attack.

Reference:

CCDA 200-310 Official Cert Guide, Chapter 12, Loss of Availability, pp. 495-496

Cisco: Layer 2 Security Features on Cisco Catalyst Layer 3 Fixed Configuration Switches
Configuration Example: Background Information

Cisco: Enterprise Data Center Topology: Preventing VLAN Hopping

QUESTION NO: 68

You issue the following commands on RouterA:

Packets sent to which of the following destination IP addresses will be forwarded to the 10.1.1.3
next-hop IP address? (Choose two.)

A.
172.16.0.1

B.
192.168.0.1

C.
192.168.0.14

D.
192.168.0.17

E.
192.168.0.26

F.
192.168.1.1

Answer: D,E
"Pass Any Exam. Any Time." - www.actualtests.com 112
Cisco 200-310 Exam
Explanation:
Section: Addressing and Routing Protocols in an Existing Network

Of the choices available, packets sent to 192.168.0.17 and 192.168.0.26 will be forwarded to the
10.1.1.3 next-hop IP address. When a packet is sent to a router, the router checks the routing
table to see if the next-hop address for the destination network is known. The routing table can be
filled dynamically by a routing protocol, or you can configure the routing table manually by issuing
the ip route command to add static routes. The ip route command consists of the syntax ip route
net-address mask next-hop, where net-address is the network address of the destination network,
mask is the subnet mask of the destination network, and next-hop is the IP address of a
neighboring router that can reach the destination network.

A default route is used to send packets that are destined for a location that is not listed elsewhere
in the routing table. For example, the ip route 0.0.0.0 0.0.0.0 10.1.1.1 command specifies that
packets destined for addresses not otherwise specified in the routing table are sent to the default
next-hop address of 10.1.1.1. A net-address and mask combination of 0.0.0.0 0.0.0.0 specifies
any packet destined for any network.

If multiple static routes to a destination are known, the most specific route is used; the most
specific route is the route with the longest network mask. For example, a route to 192.168.0.0/28
would be used before a route to 192.168.0.0/24. Therefore, the following rules apply on RouterA:

The 192.168.0.17 and 192.168.0.26 addresses are within the range of addresses from
192.168.0.16 to 192.168.0.255. Therefore, packets sent to these addresses are forwarded to the
next-hop address of 10.1.1.3.

The 192.168.0.1 and 192.168.0.14 addresses are within the range of addresses from 192.168.0.0
through 192.168.0.15. Therefore, packets sent to these addresses are forwarded to the next-hop
address of 10.1.1.4.

The 192.168.1.1 IP address is within the range of addresses from 192.168.1.0 through
192.168.255.255. Therefore, packets sent to 192.168.1.1 are forwarded to the next-hop address of
10.1.1.2.

RouterA does not have a specific static route to the 172.16.0.1 network. Therefore, packets sent to 172.16.0.1 are forwarded to the default route's next-hop address of 10.1.1.1.
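The longest-match behavior described above can be illustrated in a few lines of Python. Because the exhibit's actual ip route commands are not shown, the routing table below is a hypothetical reconstruction inferred from the address ranges in the explanation:

```python
import ipaddress

# Hypothetical reconstruction of RouterA's static routes, inferred
# from the explanation (the exhibit itself is not shown).
routes = [
    (ipaddress.ip_network("192.168.0.0/28"), "10.1.1.4"),
    (ipaddress.ip_network("192.168.0.0/24"), "10.1.1.3"),
    (ipaddress.ip_network("192.168.0.0/16"), "10.1.1.2"),
    (ipaddress.ip_network("0.0.0.0/0"), "10.1.1.1"),   # default route
]

def next_hop(dest):
    # Of all routes that contain the destination, pick the one with
    # the longest (most specific) network mask.
    addr = ipaddress.ip_address(dest)
    matches = [(net, nh) for net, nh in routes if addr in net]
    return max(matches, key=lambda m: m[0].prefixlen)[1]

for dest in ("192.168.0.17", "192.168.0.26", "192.168.0.14", "172.16.0.1"):
    print(dest, "->", next_hop(dest))
```

With this table, 192.168.0.17 and 192.168.0.26 match the /24 route (the /28 covers only .0 through .15), so both are forwarded to 10.1.1.3.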

Reference:

Boson ICND2 Curriculum, Module 2: Implementing VLSMs and Summarization, Choosing a Route

Cisco: IP Routing Protocol-Independent Commands: ip route

Cisco: Specifying a Next Hop IP Address for Static Routes

"Pass Any Exam. Any Time." - www.actualtests.com 113


Cisco 200-310 Exam

QUESTION NO: 69 DRAG DROP

Select the protocols and port numbers from the left, and drag them to the corresponding traffic
types on the right. Not all protocols and port numbers will be used.

Answer:

"Pass Any Exam. Any Time." - www.actualtests.com 114


Cisco 200-310 Exam

Explanation:

Section: Considerations for Expanding an Existing Network

Lightweight Access Point Protocol (LWAPP) uses User Datagram Protocol (UDP) port 12222 for
data traffic and UDP port 12223 for control traffic. LWAPP is a protocol developed by Cisco and is
used as part of the Cisco Unified Wireless Network architecture. LWAPP creates a tunnel between
a lightweight access point (LAP) and a wireless LAN controller (WLC); in LWAPP operations, both
a LAP and a WLC are required. The WLC handles many of the management functions for the link,
such as user authentication and security policy management, whereas the LAP handles real-time
operations, such as sending and receiving 802.11 frames, wireless encryption, access point (AP)
beacons, and probe messages. Cisco WLC devices prior to software version 5.2 use LWAPP.

"Pass Any Exam. Any Time." - www.actualtests.com 115


Cisco 200-310 Exam
Control and Provisioning of Wireless Access Points (CAPWAP) uses UDP port 5246 for control
traffic and UDP port 5247 for data traffic. CAPWAP is a standards-based version of LWAPP. Cisco
WLC devices that run software version 5.2 and later use CAPWAP instead of LWAPP.

Neither LWAPP nor CAPWAP uses Transmission Control Protocol (TCP) for communication. TCP is a connection-oriented protocol. Because UDP is a connectionless protocol, it does not have the additional connection overhead that TCP has; therefore, UDP is faster but less reliable.

Reference:

Cisco: LWAPP Traffic Study

IETF: RFC 5415: Control And Provisioning of Wireless Access Points (CAPWAP) Protocol
Specification

QUESTION NO: 70

Which of the following should not be implemented in the core layer? (Choose two.)

A.
ACLs

B.
QoS

C.
load balancing

D.
inter-VLAN routing

E.
a partially meshed topology

Answer: A,D
Explanation:
Section: Enterprise Network Design

Access control lists (ACLs) and inter-VLAN routing should not be implemented in the core layer.
Because the core layer focuses on low latency and fast transport services, you should not
implement mechanisms that can introduce unnecessary latency into the core layer. For example,

"Pass Any Exam. Any Time." - www.actualtests.com 116


Cisco 200-310 Exam
mechanisms such as process-based switching, packet manipulation, and packet filtering introduce
latency and should be avoided in the core.

The hierarchical network model divides the operation of the network into three categories: the core layer, which provides fast transport; the distribution layer, which enforces policy; and the access layer, which connects end devices to the network.

ACLs and inter-VLAN routing are typically implemented in the distribution layer. Because the
distribution layer is focused on policy enforcement, the distribution layer provides the ideal location
to implement mechanisms such as packet filtering and packet manipulation. In addition, because
the distribution layer acts as an intermediary between the access layer devices and the core layer,
the distribution layer is also the recommended location for route summarization and redistribution.

Because a fully meshed topology can add unnecessary cost and complexity to the design and
operation of the network, a partially meshed topology is often implemented in the core layer. A
fully meshed topology is not required if multiple paths exist between core layer and distribution
layer devices. The core layer is particularly suited to a mesh topology because it typically contains
the least number of network devices. Fully meshed topologies restrict the scalability of a design.
Hierarchical designs are intended to aid scalability, particularly in the access layer.

Quality of Service (QoS) is often implemented in all three layers of the hierarchical model.
However, because the access layer provides direct connectivity to network endpoints, QoS
classification and marking are typically performed in the access layer. Cisco recommends
classifying and marking packets as close to the source of traffic as possible. Although
classification and marking can be performed in the access layer, QoS mechanisms must be
implemented in each of the higher layers for QoS to be effective.

Load balancing is often implemented in all three layers of the hierarchical model. Load balancing
offers redundant paths for network traffic; the redundant paths can be used to provide bandwidth
optimization and network resilience. Typically, the core and distribution layers offer a greater
number of redundant paths than the access layer does. Because some devices, such as network
hosts, often use only a single connection to the access layer, Cisco recommends redundant links
for mission-critical endpoints, such as servers.

Reference:

Cisco: Internetwork Design Guide Internetwork Design Basics

QUESTION NO: 71

You issue the show ip bgp neighbors command on RouterA and receive the following output:

"Pass Any Exam. Any Time." - www.actualtests.com 117


Cisco 200-310 Exam

Which of the following is most likely true?

A.
RouterA is operating in AS 64496.

B.
RouterA has been assigned a BGP RID of 1.1.1.2.

C.
RouterA has been unable to establish a BGP session with the remote router.

D.
RouterA is configured with the neighbor 203.0.113.1 remote-as 64496 command.

Answer: D
Explanation:
Section: Addressing and Routing Protocols in an Existing Network

Most likely, RouterA is configured with the neighbor 203.0.113.1 remote-as 64496 command. In
this scenario, the output of the show ip bgp neighbors command reports that RouterA's Border
Gateway Protocol (BGP) neighbor has an IP address of 203.0.113.1 and is operating within the
remote autonomous system number (ASN) of 64496. The syntax of the neighbor remote-as command is neighbor ip-address remote-as as-number, where ip-address and as-number are the IP address and ASN of the neighbor router. For example, the following command configures a
peering relationship with a router that has an IP address of 203.0.113.1 in autonomous system
(AS) 64496:

router(config-router)#neighbor 203.0.113.1 remote-as 64496

Because BGP does not use a neighbor discovery process like many other routing protocols, it is
essential that every peer is manually configured and reachable through Transmission Control
Protocol (TCP) port 179. Once a peer has been configured with the neighbor remote-as command,
the local BGP speaker will attempt to transmit an OPEN message to the remote peer. If the OPEN
message is not blocked by existing firewall rules or other security mechanisms, the remote peer
"Pass Any Exam. Any Time." - www.actualtests.com 118
Cisco 200-310 Exam
will respond with a KEEPALIVE message and will continue to periodically exchange KEEPALIVE
messages with the local peer. A BGP speaker will consider a peer dead if a KEEPALIVE message
is not received within a period of time specified by a hold timer. Routing information is then
exchanged between peers by using UPDATE messages. UPDATE messages can include
advertised routes and withdrawn routes. Withdrawn routes are those that are no longer considered
feasible. Statistics regarding the number of BGP messages, such as UPDATE messages, can be
viewed in the output of the show ip bgp neighbors command.

The output of the show ip bgp neighbors command in this scenario does not indicate that RouterA
is operating in AS 64496. Nor does the output indicate that RouterA has been assigned a BGP
router ID (RID) of 1.1.1.2. Among other things, the partial command output from the show ip bgp
neighbors command indicates that the remote peer has an IP address of 203.0.113.1, an ASN of
64496, a RID of 1.1.1.2, an external BGP (eBGP) session that is in an Established state, and a hold
time of 180 seconds.

Reference:

CCDA 200-310 Official Cert Guide, Chapter 11, BGP Neighbors, pp. 444-445

Cisco: Cisco IOS IP Routing: BGP Command Reference: neighbor remote-as

Cisco: Cisco IOS IP Routing: BGP Command Reference: show ip bgp neighbors

QUESTION NO: 72

View the Exhibit.

"Pass Any Exam. Any Time." - www.actualtests.com 119


Cisco 200-310 Exam

Refer to the exhibit above. PVST+ is enabled on all the switches. The Layer 3 switch on the right,
DSW2, is the root bridge for VLAN 20. The Layer 3 switch on the left, DSW1, is the root bridge for
VLAN 10. Devices on VLAN 10 use DSW1 as a default gateway. Devices on VLAN 20 use DSW2
as a default gateway. You want to ensure that the network provides high redundancy and fast
convergence.

Which of the following are you most likely to do?

A.
physically connect ASW1 to ASW2

B.
physically connect ASW2 to ASW3

C.
physically connect ASW1 to both ASW2 and ASW3

D.
replace PVST+ with RSTP

E.
replace PVST+ with RPVST+

Answer: E
Explanation:
Section: Enterprise Network Design

"Pass Any Exam. Any Time." - www.actualtests.com 120


Cisco 200-310 Exam
Most likely, you would replace Per-VLAN Spanning Tree Plus (PVST+) with Rapid PVST+ (RPVST+) in order to ensure that the network provides fast convergence. PVST+ is a revision of the Cisco-proprietary Per-VLAN Spanning Tree (PVST), which enables a separate spanning tree to be established for each virtual LAN (VLAN). Therefore, a per-VLAN implementation of STP, such as PVST+, enables the location of a root switch to be optimized on a per-VLAN basis. However, PVST+ progresses through the same spanning tree states as the 802.1D-based Spanning Tree Protocol (STP); thus it can take up to 30 seconds for a PVST+ link to begin forwarding traffic. Rapid PVST+ passes through the same states as the 802.1w-based Rapid STP (RSTP). Therefore, RPVST+ provides faster convergence than PVST+.

The network in this scenario is already provisioned with high redundancy. Every access layer
switch in this scenario is connected to every distribution layer switch. In addition, the two
distribution layer switches are connected by using an EtherChannel bundle. This configuration
creates multiple paths to the root bridge for each VLAN. Connecting any of the access layer
switches to any of the other access layer switches might add another layer of redundancy, but this
would not provide as much benefit as replacing PVST+ with RPVST+ in this scenario.

Reference:

CCDA 200-310 Official Cert Guide, Chapter 3, STP Design Considerations, pp. 101-103

Cisco: Spanning Tree from PVST+ to RapidPVST Migration Configuration Example: Background
Information

QUESTION NO: 73

Which of the following VPN tunnels support encapsulation of dynamic routing protocol traffic?
(Choose three.)

A.
IPSec

B.
IPSec VTI

C.
GRE over IPSec

D.
DMVPN hub-and-spoke

E.

"Pass Any Exam. Any Time." - www.actualtests.com 121


Cisco 200-310 Exam
DMVPN spoke-to-spoke

Answer: B,C,D
Explanation:
Section: Enterprise Network Design


IP Security (IPSec) Virtual Tunnel Interface (VTI), Generic Routing Encapsulation (GRE) over
IPSec, and Dynamic Multipoint Virtual Private Network (DMVPN) hub-and-spoke virtual private
network (VPN) tunnels support encapsulation of dynamic routing protocol traffic, such as Open
Shortest Path First (OSPF) and Enhanced Interior Gateway Routing Protocol (EIGRP) traffic. A
VPN tunnel provides secure, private network connectivity over an untrusted medium, such as the
Internet.

IPSec VTI provides support for IP multicast and dynamic routing protocol traffic. However, it does
not support non-IP protocols, and it has limited interoperability with non-Cisco routers.

GRE over IPSec provides support for IP multicast and dynamic routing protocol traffic. In addition,
it provides support for non-IP protocols. Because the focus of GRE is to transport many different
protocols, it has very limited security features. Therefore, GRE relies on IPSec to provide data
confidentiality and data integrity. Although GRE was developed by Cisco, GRE works on Cisco
and non-Cisco routers.

DMVPN hub-and-spoke VPN tunnels provide support for IP multicast and dynamic routing protocol
traffic. However, they support only IP traffic and operate only on Cisco routers.

DMVPN spoke-to-spoke VPN tunnels do not provide support for IP multicast or dynamic routing
protocol traffic. In addition, they support only IP traffic and operate only on Cisco routers.

IPSec VPN tunnels do not provide support for IP multicast or dynamic routing protocol traffic.
Although IPSec can be used on Cisco and non-Cisco routers, IPSec can be used only for IP
traffic; it provides no support for non-IP protocols.

Reference:

CCDA 200-310 Official Cert Guide, Chapter 7, Enterprise VPN vs. Service Provider VPN, pp. 255-263

Cisco: IPSec VPN WAN Design Overview: Design Selection

QUESTION NO: 74
"Pass Any Exam. Any Time." - www.actualtests.com 122
Cisco 200-310 Exam
HostA is a computer on your company's network. RouterA is a NAT router. HostA sends a packet
to HostB, and HostB sends a packet back to HostA.

Which of the following addresses is an outside local address?

A.
15.16.17.18

B.
22.23.24.25

C.
192.168.1.22

D.
192.168.1.30

Answer: D
Explanation:
Section: Addressing and Routing Protocols in an Existing Network

The 192.168.1.30 address is an outside local address. An outside local address is an IP address
that represents an outside host to the local network. Network Address Translation (NAT) translates
between public and private IP addresses to enable hosts on a privately addressed network to
access the Internet. Public addresses are routable on the Internet, and private addresses are
routable only on internal networks. Several IP address ranges are reserved for private, internal use; these addresses, defined in Request for Comments (RFC) 1918, are 10.0.0.0/8, 172.16.0.0/12, and 192.168.0.0/16.

The outside local address is often the same as the outside global address, particularly when inside
hosts attempt to access resources on the Internet. However, in some configurations, it is
necessary to configure a NAT translation that allows a local address on the internal network to
identify an outside host. When RouterA receives a packet destined for 192.168.1.30, RouterA
translates the 192.168.1.30 outside local address to the 15.16.17.18 outside global address and
forwards the packet to its destination. To configure a static outside local-to-outside global IP
address translation, you should issue the ip nat outside source static outside-global outside-local
command.
"Pass Any Exam. Any Time." - www.actualtests.com 123
Cisco 200-310 Exam
In this scenario, 15.16.17.18 is an outside global address. An outside global address is an IP
address that represents an outside host to the global network. Outside global addresses are public
IP addresses assigned to an Internet host by the host's operator. The outside global address is
usually the address registered with the Domain Name System (DNS) server to map a host's public
IP address to a friendly name such as www.mycompany.com.

In this scenario, 192.168.1.22 is an inside local address. An inside local address is an IP address
that represents an inside host to the local network. Inside local addresses are typically private IP
addresses defined by RFC 1918.

In this scenario, 22.23.24.25 is an inside global address. An inside global address is a publicly
routable IP address that is used to represent an inside host to the global network. Inside global IP
addresses are typically assigned from a NAT pool on the router. You can issue the ip nat pool
command to define a NAT pool. For example, the ip nat pool natpool 22.23.24.11 22.23.24.30
netmask 255.255.255.224 command allocates the IP addresses 22.23.24.11 through 22.23.24.30
to be used as inside global IP addresses. When a NAT router receives a packet destined for the
Internet from a local host, it changes the inside local address to an inside global address and
forwards the packet to its destination.

In addition to configuring a NAT pool to dynamically translate addresses, you can configure static
inside local-to-inside global IP address translations by issuing the ip nat inside source static inside-local inside-global command. This command maps a single inside local address on the local
network to a single inside global address on the outside network.

It is important to specify the inside and outside interfaces when you configure a NAT router. To
specify an inside interface, you should issue the ip nat inside command from interface
configuration mode. To specify an outside interface, you should issue the ip nat outside command
from interface configuration mode.
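As a quick check, Python's ipaddress module can confirm which of the scenario's addresses fall within the RFC 1918 private ranges (the role labels simply restate the explanation above):

```python
import ipaddress

# Addresses from the scenario, labeled with their NAT roles.
scenario = [
    ("192.168.1.22", "inside local"),
    ("22.23.24.25", "inside global"),
    ("192.168.1.30", "outside local"),
    ("15.16.17.18", "outside global"),
]

for addr, role in scenario:
    kind = "RFC 1918 private" if ipaddress.ip_address(addr).is_private else "public"
    print(f"{addr} ({role}): {kind}")
```

As expected, the two local addresses are RFC 1918 private, and the two global addresses are publicly routable.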

The following graphic depicts the relationship between inside local, inside global, outside local,
and outside global addresses:

Reference:

CCDA 200-310 Official Cert Guide, Chapter 8, IPv4 Private Addresses, pp. 299-300

"Pass Any Exam. Any Time." - www.actualtests.com 124


Cisco 200-310 Exam
CCDA 200-310 Official Cert Guide, Chapter 8, NAT, pp. 300-302

Cisco: NAT: Local and Global Definitions

QUESTION NO: 75

Which of the following OSPF areas accept all LSAs? (Choose two.)

A.
stub

B.
not-so-stubby

C.
totally stubby

D.
backbone

E.
standard

Answer: D,E
Explanation:
Section: Addressing and Routing Protocols in an Existing Network

Standard areas and backbone areas accept all link-state advertisements (LSAs). Every router in a
standard area contains the same Open Shortest Path First (OSPF) database. If the standard
area's ID number is 0, the area is a backbone area. The backbone area must be contiguous, and
all OSPF areas must connect to the backbone area. If a direct connection to the backbone area is
not possible, you must create a virtual link to connect to the backbone area through a
nonbackbone area.

Stub areas, totally stubby areas, and not-so-stubby areas (NSSAs) flood only certain types of LSAs. For example, none of these areas floods Type 5 LSAs, which originate from OSPF autonomous system boundary routers (ASBRs). Instead, stub areas and totally stubby areas are injected with a single default route from an area border router (ABR). Routers inside a stub area or a totally stubby area will send all packets destined for another area to the ABR. In addition, a totally stubby area does not accept Type 3 or Type 4 summary LSAs, which advertise inter-area routes, or Type 5 external LSAs; these LSAs are replaced by a default route at the ABR. As a result, routing tables are kept small within the totally stubby area.
"Pass Any Exam. Any Time." - www.actualtests.com 125
Cisco 200-310 Exam
An NSSA floods Type 7 LSAs within its own area, but does not accept or flood Type 5 LSAs.
Therefore, an NSSA does not accept all LSAs. Similar to Type 5 LSAs, a Type 7 LSA is an
external LSA that originates from an ASBR. However, Type 7 LSAs are only flooded to an NSSA.

Reference:

CCDA 200-310 Official Cert Guide, Chapter 11, OSPF Stub Area Types, pp. 437-438

Cisco: What Are OSPF Areas and Virtual Links?: Normal, Stub, Totally Stub and NSSA Area
Differences

QUESTION NO: 76

In a switched hierarchical design, which enterprise campus module layer or layers exclusively use
Layer 2 switching?

A.
only the campus core layer

B.
the distribution and campus core layers

C.
only the distribution layer

D.
the distribution and access layers

E.
only the access layer

Answer: E
Explanation:
Section: Enterprise Network Design

In a switched hierarchical design, only the access layer of the enterprise campus module uses
Layer 2 switching exclusively. The access layer of the enterprise campus module provides end
users with physical access to the network. In addition to using Virtual Switching System (VSS) in
place of First Hop Redundancy Protocols (FHRPs) for redundancy, a Layer 2 switching design
requires that inter-VLAN traffic be routed in the distribution layer of the hierarchy. Also, Spanning
Tree Protocol (STP) in the access layer will prevent more than one connection between an access
layer switch and the distribution layer from becoming active at a given time.
"Pass Any Exam. Any Time." - www.actualtests.com 126
Cisco 200-310 Exam
In a Layer 3 switching design, the distribution and campus core layers of the enterprise campus
module use Layer 3 switching exclusively. Thus a Layer 3 switching design relies on FHRPs for
high availability. In addition, a Layer 3 switching design typically uses route filtering on links that
face the access layer of the design.

The distribution layer of the enterprise campus module provides link aggregation between layers.
Because the distribution layer is the intermediary between the access layer and the campus core
layer, the distribution layer is the ideal place to enforce security policies, provide load balancing,
provide Quality of Service (QoS), and perform tasks that involve packet manipulation, such as
routing. In a switched hierarchical design, the switches in the distribution layer use Layer 2
switching on ports connected to the access layer and Layer 3 switching on ports connected to the
campus core layer.

The campus core layer of the enterprise campus module provides fast transport services between
the modules of the enterprise architecture module, such as the enterprise edge and the intranet
data center. Because the campus core layer acts as the network's backbone, it is essential that
every distribution layer device have multiple paths to the campus core layer. Multiple paths
between the campus core and distribution layer devices ensure that network connectivity is
maintained if a link or device fails in either layer. In a switched hierarchical design, the campus
core layer switches use Layer 3 switching exclusively.

Reference:

CCDA 200-310 Official Cert Guide, Chapter 3, Distribution Layer Best Practices, pp. 97-99

Cisco: Cisco SAFE Reference Guide: Enterprise Campus

QUESTION NO: 77

Which of the following best describes PAT?

A.
It translates a single inside local address to a single inside global address.

B.
It translates a single outside local address to a single outside global address.

C.
It translates inside local addresses to inside global addresses that are allocated from a pool.

D.
It uses ports to translate inside local addresses to one or more inside global addresses.

Answer: D
Explanation:
Section: Addressing and Routing Protocols in an Existing Network

Port Address Translation (PAT) uses ports to translate inside local addresses to one or more
inside global addresses. The Network Address Translation (NAT) router uses port numbers to
keep track of which packets belong to each host. PAT is also called NAT overloading.

NAT translates between public and private IP addresses to enable hosts on a privately addressed
network to access the Internet. Public addresses are routable on the Internet, and private
addresses are routable only on internal networks. Request for Comments (RFC) 1918 defines
three IP address ranges that are reserved for private, internal use: 10.0.0.0/8, 172.16.0.0/12, and
192.168.0.0/16.
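As a quick illustration, the RFC 1918 ranges (10.0.0.0/8, 172.16.0.0/12, and 192.168.0.0/16) can be checked programmatically. The sketch below uses Python's standard ipaddress module; the helper function name is invented for this example:

```python
import ipaddress

def is_rfc1918(addr: str) -> bool:
    """Return True if addr falls within one of the RFC 1918 private ranges."""
    rfc1918 = [
        ipaddress.ip_network("10.0.0.0/8"),
        ipaddress.ip_network("172.16.0.0/12"),
        ipaddress.ip_network("192.168.0.0/16"),
    ]
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in rfc1918)

print(is_rfc1918("172.16.5.9"))  # True: inside 172.16.0.0/12
print(is_rfc1918("8.8.8.8"))     # False: publicly routable
```

Note that 172.32.0.1, for example, is not private: the 172.16.0.0/12 block covers only 172.16.0.0 through 172.31.255.255.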

Because NAT performs address translation between private and public addresses, NAT effectively
hides the address scheme used by the internal network, which can increase security. NAT also
reduces the number of public IP addresses that a company needs to allow its devices to access
Internet resources, thereby conserving IP version 4 (IPv4) address space.

An inside local address is typically an RFC 1918-compliant IP address that represents an internal
host to the internal network. An inside global address is used to represent an internal host to an
external network.

Static NAT translates a single inside local address to a single inside global address or a single
outside local address to a single outside global address. You can configure a static inside local-to-
inside global IP address translation by issuing the ip nat inside source static inside-local inside-
global command. To configure a static outside local-to-outside global address translation, you
should issue the ip nat outside source static outside-global outside-local command.

Dynamic NAT translates local addresses to global addresses that are allocated from a pool. To
create a NAT pool, you should issue the ip nat pool nat-pool start-ip end-ip {netmask mask |
prefix-length prefix} command. To enable translation of inside local addresses, you should issue
the ip nat inside source list access-list pool nat-pool [overload] command.

When a NAT router receives an Internet-bound packet from a local host, the NAT router checks
its translation table for an existing entry, allocates an available inside global address if no entry
exists, records the new mapping in the translation table, and rewrites the packet's source address
before forwarding the packet.


When all the inside global addresses in the NAT pool are mapped, no other inside local hosts will
be able to communicate on the Internet. This is why NAT overloading is useful. When NAT
overloading is configured, an inside local address, along with a port number, is mapped to an
inside global address. The NAT router uses port numbers to keep track of which packets belong to
each host:

You can issue the ip nat inside source list access-list interface outside-interface overload
command to configure NAT overload with a single inside global address, or you can issue the ip
nat inside source list access-list pool nat-pool overload command to configure NAT overloading
with a NAT pool.
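The port-based bookkeeping that PAT performs can be sketched in a few lines of Python. This is an illustrative model only, not Cisco's implementation; the class name, table layout, and starting port are invented for the example:

```python
import itertools

class PatTable:
    """Toy model of a PAT (NAT overload) translation table."""

    def __init__(self, inside_global: str):
        self.inside_global = inside_global      # the single shared public IP
        self.next_port = itertools.count(1024)  # next free global-side port
        self.mappings = {}                      # (local_ip, local_port) -> global port

    def translate(self, local_ip: str, local_port: int):
        """Map an inside local socket onto the shared inside global address."""
        key = (local_ip, local_port)
        if key not in self.mappings:
            self.mappings[key] = next(self.next_port)
        return (self.inside_global, self.mappings[key])

pat = PatTable("203.0.113.1")
print(pat.translate("192.168.1.10", 51000))  # ('203.0.113.1', 1024)
print(pat.translate("192.168.1.11", 51000))  # same global IP, different port
```

Both hosts share one inside global address; the distinct global-side ports are what let the router sort returning packets back to the correct inside local host.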

Reference:

CCDA 200-310 Official Cert Guide, Chapter 8, NAT, pp. 300-302

Cisco: Configuring Network Address Translation: Getting Started: Example: Allowing Internal
Users to Access the Internet

QUESTION NO: 78

Which of the following statements are true regarding the function of the LAP in the Cisco Unified
Wireless Network architecture? (Choose three.)

A.
The LAP determines which RF channel should be used to transmit 802.11 frames.
B.
The LAP supports 802.11 encryption.

C.
The LAP must be located on the same subnet as a WLC.

D.
The LAP maintains associations with client computers.

E.
The LAP can function without a WLC.

F.
The LAP should be connected to an access port on a switch.

Answer: B,D,F
Explanation:
Section: Considerations for Expanding an Existing Network

In the Cisco Unified Wireless Network architecture, a lightweight access point (LAP) supports
802.11 encryption, maintains associations with client computers, and should be connected to an
access port on a switch. A LAP creates a Lightweight Access Point Protocol (LWAPP) tunnel
between itself and a wireless LAN controller (WLC); in LWAPP operations, both a LAP and a
WLC are required. The WLC handles many of the management functions for the link, such as user
authentication and security policy management, while the LAP handles real-time operations, such
as sending and receiving 802.11 frames, wireless encryption, access point (AP) beacons, and
probe messages.

When connecting a LAP to a network, you should connect the LAP to an access port on a switch,
not to a trunk port. Because the WLC handles the management functions for LWAPP operations,
the LAP cannot begin associating with client computers unless a WLC is available on the network.
Therefore, the LAP must associate with a WLC after it is connected to the network. After
connecting to a WLC and obtaining its configuration information, the LAP can begin associating
with clients. The LAP can receive encrypted or unencrypted 802.11 frames. The WLC, however,
does not support 802.11 encryption; as the data passes through the LAP, it is decrypted and then
sent to the WLC for further forwarding.

It is not necessary for the LAP to be located on the same subnet or even in the same geographic
area as a WLC. As long as a WLC is available on the network and the LAP is configured with the
address of the WLC, the LAP will be able to connect to the WLC. DHCP option 43 can be used to
automatically configure a LAP with the IP address of one or more WLCs, even if those WLCs
reside on a different IP subnet.

A LAP requires a WLC in order to function. If the WLC becomes unavailable, the LAP will reboot
and drop all client associations until the WLC becomes available or until another WLC is found on
the network.

The WLC, not the LAP, determines which radio frequency (RF) channel should be used to transmit
802.11 frames in LWAPP operations. The WLC is responsible for selecting the RF channel to use,
determining the output power for each LAP, authenticating users, managing security policies, and
determining the least used LAP to associate with clients.

Reference:

Cisco: Lightweight AP (LAP) Registration to a Wireless LAN Controller (WLC): Background Information

Cisco: Lightweight Access Point FAQ

Cisco: Wireless LAN Controller and Lightweight Access Point Basic Configuration Example:
Configure the Switch for the APs

QUESTION NO: 79

View the Exhibit.

You administer the network shown above. You want to summarize the networks connected to
RouterA so that a single route is inserted into RouterB's routing table.

Which of the following is the smallest summarization for the three networks?

A.
172.16.1.0/16

B.
172.16.1.0/18
C.
172.16.1.0/22

D.
172.16.1.0/23

E.
172.16.1.0/25

Answer: C
Explanation:
Section: Addressing and Routing Protocols in an Existing Network

The smallest summarization for the three networks connected to RouterA is 172.16.1.0/22, which
is equivalent to a network address of 172.16.1.0 and a subnet mask of 255.255.252.0. In this
scenario, the Class B 172.16.0.0/16 network has been divided into 256 /24 subnets. Three of the
first four subnets in the Class B range have been assigned to network interfaces on RouterA:
172.16.0.0/24, 172.16.1.0/24, and 172.16.3.0/24. Absent from the network assignments is the
172.16.2.0/24 subnet. However, there is no way to summarize the address range without including
the 172.16.2.0/24 subnet. Therefore, the smallest summarization you can create would summarize
four subnets into a single /22 subnet.
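This summarization can be verified with Python's standard ipaddress module; the snippet below is a sanity check of the reasoning above, not part of the exam material:

```python
import ipaddress

# The three subnets connected to RouterA in this scenario
connected = [
    ipaddress.ip_network("172.16.0.0/24"),
    ipaddress.ip_network("172.16.1.0/24"),
    ipaddress.ip_network("172.16.3.0/24"),
]

# The proposed /22 summary (network address 172.16.0.0, mask 255.255.252.0)
summary = ipaddress.ip_network("172.16.0.0/22")

# Every connected subnet fits inside the summary...
print(all(net.subnet_of(summary) for net in connected))  # True

# ...but the summary necessarily covers the unused 172.16.2.0/24 as well
print(ipaddress.ip_network("172.16.2.0/24").subnet_of(summary))  # True
```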

A /22 subnet creates 64 subnetworks capable of supporting 1,022 assignable host IP addresses
each. The assignable address range of the 172.16.0.0/22 subnet begins with 172.16.0.1 and ends
with 172.16.3.254. This range includes all possible assignable IP addresses in the /24 subnets
that are directly connected to RouterA. It also includes all possible assignable IP addresses in the
172.16.2.0/24 subnet.

Subnetting a contiguous address range in structured, hierarchical fashion enables routers to
maintain smaller routing tables and eases administrative burden when troubleshooting.
Conversely, a discontiguous IP version 4 (IPv4) addressing scheme can cause routing tables to
bloat because the subnets cannot be summarized. Summarization minimizes the size of routing
tables and advertisements and reduces a router's processor and memory requirements.

Summarizing the three /24 networks with a /16 subnet would create too large of a summarization,
because the /16 subnet contains the entire Class B range of 172.16.0.0 IP addresses. The first
assignable IP address in the 172.16.0.0/16 range is 172.16.0.1. The last assignable IP address is
172.16.255.254. The range would therefore summarize 256 /24 subnets, not four.

Summarizing the three /24 networks with a /18 subnet would create too large of a summarization.
A /18 subnet creates four possible subnets containing 16,382 assignable host IP addresses each.
The first assignable IP address in the 172.16.0.0/18 range is 172.16.0.1. The last assignable IP
address is 172.16.63.254. The range would therefore summarize 64 /24 subnets, not four.

Summarizing the three /24 networks with a /23 subnet would create too small of a summarization.
A /23 subnet creates 128 possible subnets containing 510 assignable host IP addresses each.
The first assignable IP address in the 172.16.0.0/23 range is 172.16.0.1. The last assignable IP
address is 172.16.1.254. This range would therefore exclude the 172.16.3.0/24 subnet connected
to RouterA.

Summarizing the three /24 networks with a /25 subnet would not work, because a /25 subnet
divides the 172.16.0.0/24 subnet instead of summarizing. A /25 subnet creates 512 possible
subnets containing 126 assignable host IP addresses each. The first assignable IP address in the
172.16.0.0/25 range is 172.16.0.1. The last assignable IP address is 172.16.0.126. This subnet
would therefore contain only half of one of the subnets directly connected to RouterA.

Reference:

CCDA 200-310 Official Cert Guide, Chapter 8, Plan for a Hierarchical IP Address Network, pp.
311-312

Cisco: IP Addressing and Subnetting for New Users

QUESTION NO: 80

Which of the following statements are true regarding an IDS? (Choose two.)

A.
None of its physical interfaces can be in promiscuous mode.

B.
It must have two or more monitoring interfaces.

C.
It does not have an IP address assigned to its monitoring port.

D.
It does not have a MAC address assigned to its monitoring port.

E.
It cannot mitigate single-packet attacks.

Answer: C,E
Explanation:
Section: Considerations for Expanding an Existing Network

An Intrusion Detection System (IDS) cannot mitigate single-packet attacks and does not have an
IP address assigned to its monitoring port. An IDS is a network monitoring device that passively
monitors a copy of network traffic, not the actual packets. Typically, an IDS has a management
interface and at least one monitoring interface for each monitored network. Each monitoring
interface operates in promiscuous mode and cannot be assigned an IP address; however, the
monitoring interface does have a Media Access Control (MAC) address assigned to its monitoring
port. Because an IDS does not reside in the path of network traffic, traffic does not flow through
the IDS; therefore, the IDS cannot directly block malicious traffic before it passes into the network.
However, an IDS can send alerts to a management station when it detects malicious traffic. For
example, the IDS in the following diagram is connected to a Switch Port Analyzer (SPAN) interface
on a switch outside the firewall:

This deployment enables the IDS to monitor all traffic flowing between the LAN and the Internet.
However, the IDS will have insight only into LAN traffic that passes through the firewall and will be
unable to monitor LAN traffic that flows between virtual LANs (VLANs) on the internal switch. If the
IDS in this example were to detect malicious traffic, it would be unable to directly block the traffic
but it would be able to send an alert to a management station on the LAN.

By contrast, an Intrusion Prevention System (IPS) is a network monitoring device that can mitigate
single-packet attacks. An IPS requires at least two interfaces for each monitored network: one
interface monitors traffic entering the IPS, and the other monitors traffic leaving the IPS. Like an
IDS, an IPS does not have an IP address assigned to its monitoring ports. Because all monitored
traffic must flow through an IPS, an IPS can directly block malicious traffic before it passes into the
network. The IPS in the following diagram is deployed outside the firewall and can directly act on
any malicious traffic between the LAN and the Internet:

Alternatively, an IPS can be deployed in promiscuous mode, which is also referred to as monitor-
only mode. When operating in promiscuous mode, an IPS is connected to a SPAN port and
effectively functions as an IDS.

Reference:

CCDA 200-310 Official Cert Guide, Chapter 13, IPS/IDS Fundamentals, pp. 534-535

Cisco: Cisco IPS AIM

QUESTION NO: 81

Which of the following statements are true regarding the distribution layer of the hierarchical
network model? (Choose two.)

A.
The distribution layer provides load balancing.

B.
The distribution layer provides redundant paths to the default gateway.

C.
The distribution layer provides fast convergence.

D.
The distribution layer provides NAC.

Answer: A,B
Explanation:
Section: Enterprise Network Design

The distribution layer provides load balancing and redundant paths to the default gateway. The
hierarchical model divides the network into three distinct components: the core layer, the distribution layer, and the access layer.

The core layer of the hierarchical model provides fast convergence. The core layer typically
provides the fastest switching path in the network. As the network backbone, the core layer is
primarily associated with low latency and high reliability. The functionality of the core layer can be
collapsed into the distribution layer if the distribution layer infrastructure is sufficient to meet the
design requirements. Thus the core layer does not contain physically connected hosts. For
example, in a small enterprise campus implementation, a distinct core layer may not be required,
because the network services normally provided by the core layer are provided by a collapsed
core layer instead.

The distribution layer serves as an aggregation point for access layer network links. Because the
distribution layer is the intermediary between the access layer and the core layer, the distribution
layer is the ideal place to enforce security policies, to provide Quality of Service (QoS), and to
perform tasks that involve packet manipulation, such as routing. Summarization and next-hop
redundancy are also performed in the distribution layer.

The access layer provides Network Admission Control (NAC). NAC is a Cisco feature that
prevents hosts from accessing the network if they do not comply with organizational requirements,
such as having an updated antivirus definition file. NAC Profiler automates NAC by automatically
discovering and inventorying devices attached to the LAN. The access layer serves as a media
termination point for endpoints, such as servers and hosts. Because access layer devices provide
access to the network, the access layer is the ideal place to perform user authentication.

Reference:

CCDA 200-310 Official Cert Guide, Chapter 2, Distribution Layer, pp. 43-44

Cisco: Campus Network for High Availability Design Guide: Distribution Layer

QUESTION NO: 82

Which of the following is a routing protocol that requires a router that operates in the same AS in
order to establish a neighbor relationship?

A.
BGP

B.
EIGRP

C.
HSRP

D.
static routes

Answer: B
Explanation:
Section: Addressing and Routing Protocols in an Existing Network

Enhanced Interior Gateway Routing Protocol (EIGRP) requires a router that operates in the same
autonomous system (AS) in order to establish a neighbor relationship, which is also known as an
EIGRP adjacency. EIGRP routers establish adjacencies by sending Hello packets to the multicast
address 224.0.0.10. EIGRP for IP version 6 (IPv6) routers can use IPv6 link-local addresses to
reach neighbors.

Hello packets verify that two-way communication exists between routers. As soon as a router
receives an EIGRP Hello packet, the router will attempt to establish an adjacency with the router
that sent the packet. Unlike OSPF, EIGRP does not go through neighbor states; a neighbor
relationship is established upon receipt of an EIGRP Hello packet.

An EIGRP router can form an adjacency with another router only if the following values match: the autonomous system (AS) number and the K values, which are the weights that EIGRP uses in its composite metric calculation.

In addition, if the routers are using IP, the primary IP addresses for the routers' connected
interfaces must be on the same IP subnet.
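These adjacency checks, matching AS number, matching K values, and a shared IP subnet, can be sketched in Python. The dictionary layout and function name are invented for illustration and do not model Cisco's actual implementation:

```python
import ipaddress

def can_form_adjacency(r1: dict, r2: dict) -> bool:
    """Check the values two EIGRP routers must agree on to become neighbors."""
    same_as = r1["as_number"] == r2["as_number"]
    same_k = r1["k_values"] == r2["k_values"]
    # ip_interface derives the connected network from address/prefix
    same_subnet = (
        ipaddress.ip_interface(r1["interface"]).network
        == ipaddress.ip_interface(r2["interface"]).network
    )
    return same_as and same_k and same_subnet

r1 = {"as_number": 100, "k_values": (1, 0, 1, 0, 0), "interface": "10.1.1.1/24"}
r2 = {"as_number": 100, "k_values": (1, 0, 1, 0, 0), "interface": "10.1.1.2/24"}
r3 = {"as_number": 200, "k_values": (1, 0, 1, 0, 0), "interface": "10.1.1.3/24"}

print(can_form_adjacency(r1, r2))  # True: AS, K values, and subnet all match
print(can_form_adjacency(r1, r3))  # False: AS numbers differ
```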

Border Gateway Protocol (BGP) does not require a router that operates in the same AS in order to
establish a neighbor relationship. Because BGP does not use a neighbor discovery process like
many other routing protocols, every peer is manually configured and must be reachable through
Transmission Control Protocol (TCP) port 179. Once a peer has been configured with the neighbor
remote-as command, the local BGP speaker will attempt to transmit an OPEN message to the
remote peer. If the OPEN message is not blocked by existing firewall rules or other security
mechanisms, the remote peer will respond with a KEEPALIVE message and will continue to
periodically exchange KEEPALIVE messages with the local peer. A BGP speaker will consider a
peer dead if a KEEPALIVE message is not received within a period of time specified by a hold
timer. Routing information is then exchanged between peers by using UPDATE messages.
UPDATE messages can include advertised routes and withdrawn routes. Withdrawn routes are
those that are no longer considered feasible. Statistics regarding the number of BGP messages,
such as UPDATE messages, can be viewed in the output of the show ip bgp neighbors command.

Hot Standby Router Protocol (HSRP) is a First Hop Redundancy Protocol (FHRP), not a routing
protocol. Therefore, an HSRP router does not establish a neighbor relationship with another HSRP
router. The active and standby routers in an HSRP configuration do send Hello packets to
establish roles and determine availability. Typically, HSRP routers are connected together on the
same LAN and are therefore operating in the same AS.

Static routes are manually configured on individual routers and remain in the routing table even if
the path is not valid. Therefore, static routes do not establish neighbor relationships with other
routers. A static route can exist regardless of the AS in which the routers are operating.

Reference:

Cisco: Cisco IOS IP Configuration Guide, Release 12.2: Configuring EIGRP

QUESTION NO: 83

Which of the following can you use to hide the IP addresses of hosts on an internal network when
transmitting packets to an external network, such as the Internet?

A.
a DMZ

B.
WPA

C.
an ACL

D.
NAT

Answer: D
Explanation:
Section: Considerations for Expanding an Existing Network

You can use Network Address Translation (NAT) to hide the IP addresses of hosts on an internal
network when transmitting packets to an external network, such as the Internet. NAT is used to
translate private IP addresses to public IP addresses. Private-to-public address translation
enables hosts on a privately addressed internal network to communicate with hosts on a public
network, such as the Internet. Typically, internal networks use private IP addresses, which are not
globally routable. In order to enable communication with hosts on the Internet, which use public IP
addresses, NAT translates the private IP addresses to a public IP address. Port Address
Translation (PAT) can further refine what type of communication is allowed between an externally
facing resource and an internally facing resource by designating the port numbers to be used
during communication. PAT can create multiple unique connections between the same external
and internal resources.

You cannot use a demilitarized zone (DMZ) to hide the IP addresses of hosts on an internal
network when transmitting packets to an external network. A DMZ is a network segment that is
used as a boundary between an internal network and an external network, such as the Internet. A
DMZ network segment is typically used with an access control method to permit external users to
access specific externally facing servers, such as web servers and proxy servers, without
providing access to the rest of the internal network. This helps limit the attack surface of a
network.

You cannot use Wi-Fi Protected Access (WPA) to hide the IP addresses of hosts on an internal
network when transmitting packets to an external network. WPA is a wireless standard that is used
to encrypt data transmitted over a wireless network. WPA was designed to address weaknesses in
Wired Equivalent Privacy (WEP) by using a more advanced encryption method called Temporal
Key Integrity Protocol (TKIP). TKIP provides 128-bit encryption, key hashing, and message
integrity checks. TKIP can be configured to change keys dynamically, which increases wireless
network security.

You cannot use an access control list (ACL) to hide the IP addresses of hosts on an internal
network when transmitting packets to an external network. ACLs are used to control packet flow
across a network. They can either permit or deny packets based on source network, destination
network, protocol, or destination port. Each ACL can only be applied to a single protocol per
interface and per direction. Multiple ACLs can be used to accomplish more complex packet flow
throughout an organization. For example, you could use an ACL on a router to restrict a specific
type of traffic, such as Telnet sessions, from passing through a corporate network.

Reference:

CCDA 200-310 Official Cert Guide, Chapter 8, NAT, pp. 300-302

QUESTION NO: 84

Which of the following statements is true regarding the service-port interface on a Cisco WLC?

A.
It is used for client data transfer.

B.
It is used for in-band management.

C.
It is used for out-of-band management.

D.
It is used for Layer 3 discovery operations.

E.
It is used for Layer 2 discovery operations.

Answer: C
Explanation:
Section: Considerations for Expanding an Existing Network

The service-port interface on a Cisco wireless LAN controller (WLC) is used for out-of-band
management. A WLC interface is a logical interface that can be mapped to at least one physical
port. The port mapping is typically implemented as a virtual LAN (VLAN) on an 802.1Q trunk. A
WLC has five interface types: the management interface, the AP manager interface, the virtual interface, the dynamic interface, and the service-port interface.

The management interface is used for in-band management, for Layer 2 discovery operations, and
for enterprise services such as authentication, authorization, and accounting (AAA). The AP

"Pass Any Exam. Any Time." - www.actualtests.com 139


Cisco 200-310 Exam
manager interface is used for Layer 3 discovery operations and handles all Layer 3
communications between the WLC and an associated AP.

The virtual interface is a special interface used to support wireless client mobility. The virtual
interface acts as a Dynamic Host Configuration Protocol (DHCP) server placeholder and supports
DHCP relay functionality. In addition, the virtual interface is used to implement Layer 3 security,
such as redirects for a web authentication login page.

The dynamic interface type is used to map VLANs on the WLC for wireless client data transfer. A
WLC can support up to 512 dynamic interfaces mapped onto an 802.1Q trunk on a physical port
or onto multiple ports configured as a single port group using link aggregation (LAG).

Reference:

CCDA 200-310 Official Cert Guide, Chapter 4, WLC Interface Types, pp. 184-185

Cisco: Cisco Wireless LAN Controller Configuration Guide, Release 7.4: Information About
Interfaces

QUESTION NO: 85

Which of the following statements regarding WMM is true?

A.
Voice traffic is assigned to the Gold access category.

B.
Unassigned traffic is treated as though it were assigned to the Silver access category.

C.
Best-effort traffic is assigned to the Bronze access category.

D.
WMM is not compatible with the 802.11e standard.

Answer: B
Explanation:
Section: Considerations for Expanding an Existing Network

Wi-Fi Multimedia (WMM) treats unassigned traffic as though it were assigned to the Silver access
category. WMM is a subset of the 802.11e wireless standard, which adds Quality of Service (QoS)
features to the existing wireless standards. WMM was initially created by the Wi-Fi Alliance while
the 802.11e proposal was awaiting approval by the Institute of Electrical and Electronics
Engineers (IEEE).

The 802.11e standard defines eight priority levels for traffic, numbered from 0 through 7. WMM
reduces the eight 802.11e priority levels into four access categories, which are Voice (Platinum),
Video (Gold), Best-Effort (Silver), and Background (Bronze). On WMM-enabled networks, these
categories are used to prioritize traffic. Packets tagged as Voice (Platinum) packets are typically
given priority over packets tagged with lower-level priorities. Packets that have not been assigned
to a category are treated as though they had been assigned to the Best-Effort (Silver) category.
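The priority-to-category mapping can be expressed as a small lookup table. The pairs below follow the commonly documented 802.11e user-priority to WMM access-category mapping and should be treated as illustrative:

```python
# 802.11e user priority -> WMM access category (Cisco color name)
UP_TO_AC = {
    1: "Background (Bronze)",  2: "Background (Bronze)",
    0: "Best-Effort (Silver)", 3: "Best-Effort (Silver)",
    4: "Video (Gold)",         5: "Video (Gold)",
    6: "Voice (Platinum)",     7: "Voice (Platinum)",
}

def wmm_category(priority=None):
    """Unassigned traffic falls back to the Best-Effort (Silver) category."""
    return UP_TO_AC.get(priority, "Best-Effort (Silver)")

print(wmm_category(6))     # Voice (Platinum)
print(wmm_category(None))  # Best-Effort (Silver)
```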

When a lightweight access point (LAP) receives a frame with an 802.11e priority value from a
WMM-enabled client, the LAP ensures that the 802.11e priority value is within the acceptable
limits provided by the QoS policy assigned to the wireless client. After the LAP polices the 802.11e
priority value, it maps the 802.11e priority value to the corresponding Differentiated Services Code
Point (DSCP) value and forwards the frame to the wireless LAN controller (WLC). The WLC will
then forward the frame with its DSCP value to the wired network.

Reference:

CCDA 200-310 Official Cert Guide, Chapter 5, Wireless and Quality of Service (QoS), pp. 197-199

Cisco: Cisco Unified Wireless QoS

QUESTION NO: 86

The network you administer contains the following network addresses:

10.0.4.0/24

10.0.5.0/24

10.0.6.0/24

10.0.7.0/24

You want to summarize these network addresses with a single summary address.

Which of the following addresses should you use?

A.
10.0.0.0/21

B.
10.0.4.0/22

C.
10.0.4.0/23

D.
10.0.4.0/24

E.
10.0.4.0/25

F.
10.0.4.0/26

Answer: B
Explanation:
Section: Addressing and Routing Protocols in an Existing Network

You should use the 10.0.4.0/22 address to summarize the network addresses 10.0.4.0/24,
10.0.5.0/24, 10.0.6.0/24, and 10.0.7.0/24. The /22 notation indicates that a 22-bit subnet mask
(255.255.252.0) is used, which can summarize two /23 networks, four /24 networks, eight /25
networks, and so on. The process of summarizing multiple subnets with a single address is called
supernetting.
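The same result can be checked with Python's standard ipaddress module, whose collapse_addresses() merges contiguous networks into the smallest covering set:

```python
import ipaddress

# The four /24 networks from the question
subnets = [
    ipaddress.ip_network("10.0.4.0/24"),
    ipaddress.ip_network("10.0.5.0/24"),
    ipaddress.ip_network("10.0.6.0/24"),
    ipaddress.ip_network("10.0.7.0/24"),
]

# Contiguous siblings collapse pairwise: two /24s -> a /23, two /23s -> a /22
summary = list(ipaddress.collapse_addresses(subnets))
print(summary)  # [IPv4Network('10.0.4.0/22')]
```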

You should not use the 10.0.0.0/21 address to summarize the network addresses. The /21
notation indicates that a 21-bit subnet mask (255.255.248.0) is used, which can summarize two
/22 networks, four /23 networks, eight /24 networks, and so on. Although the 10.0.0.0/21 address
does include the four network addresses on your network, it also includes the 10.0.0.0/24,
10.0.1.0/24, 10.0.2.0/24, and 10.0.3.0/24 networks. Whenever possible, you should summarize
addresses to the smallest possible bit boundary.

You cannot use the 10.0.4.0/23 address to summarize the network addresses. The /23 notation
indicates that a 23-bit subnet mask (255.255.254.0) is used, which can summarize two /24
networks, four /25 networks, eight /26 networks, and so on. Therefore, the 10.0.4.0/23 address
only summarizes the 10.0.4.0/24 and 10.0.5.0/24 networks. The 10.0.6.0/23 address would be
required to summarize the remaining 10.0.6.0/24 and 10.0.7.0/24 networks.

You cannot use the 10.0.4.0/24 address to summarize the network addresses. The /24 notation
indicates that a 24-bit subnet mask (255.255.255.0) is used, which can summarize two /25
networks, four /26 networks, eight /27 networks, and so on. However, a 24-bit summary address
cannot summarize multiple /24 networks.

You cannot use the 10.0.4.0/25 address to summarize the network addresses. A 25-bit mask is
used to subnet a /24 network into two subnets; it cannot be used to supernet multiple /24
networks.

You cannot use the 10.0.4.0/26 address to summarize the network addresses. A 26-bit mask is
used to subnet a /24 network into four subnets; it cannot be used to supernet multiple /24
networks.

Reference:

CCDA 200-310 Official Cert Guide, Chapter 8, IPv4 Address Subnets, pp. 302-310

Cisco: IP Routing Frequently Asked Questions: Q. What does route summarization mean?

Cisco: IP Addressing and Subnetting for New Users

QUESTION NO: 87

You want to implement a WAN link between two sites.

Which of the following WAN solutions would not offer a guaranteed level of service?

A.
GRE tunnel through the Internet

B.
ATM virtual circuit

C.
Frame Relay virtual circuit

D.
MPLS overlay VPN

Answer: A
Explanation:
Section: Enterprise Network Design

A Generic Routing Encapsulation (GRE) tunnel through the Internet would not offer a guaranteed
level of service. GRE is a tunneling protocol designed to encapsulate any Layer 3 protocol for
transport through an IP network. Although a GRE tunnel can be used to connect to sites across a
public network, such as the Internet, GRE does not have any inherent Quality of Service (QoS)
mechanisms that can guarantee a level of service to any of the packets that flow through the
tunnel. Because any traffic that flows through the Internet is delivered on a best-effort basis, WAN
solutions that use the Internet, such as GRE tunnels, are better suited as backup strategies for
WAN links that can guarantee a level of service.

Asynchronous Transfer Mode (ATM) and Frame Relay virtual circuits can provide a guaranteed
level of service. Because ATM and Frame Relay virtual circuits pass through a network that has
inherent QoS capabilities, each virtual circuit can guarantee a level of service to its endpoints. The
service provider network is responsible for ensuring that the service level agreement (SLA) for
each circuit is maintained at all times.

Similarly, a Multiprotocol Label Switching (MPLS) overlay virtual private network (VPN) can
provide a guaranteed level of service. MPLS overlay VPNs are provided by a service provider and
are established on an infrastructure that can ensure a level of service for all traffic that passes
through the service provider network.

Reference:

CCDA 200-310 Official Cert Guide, Chapter 7, WAN Backup over the Internet, pp. 263-264

QUESTION NO: 88

Which of the following standard or standards natively include PortFast, UplinkFast, and
BackboneFast?

A.
802.1s

B.
802.1w

C.
802.1D

D.
802.1D and 802.1s

E.
802.1D and 802.1w

Answer: B
Explanation:
Section: Enterprise Network Design
The 802.1w Rapid Spanning Tree Protocol (RSTP) standard natively includes PortFast,
UplinkFast, and BackboneFast. PortFast enables a port to immediately access the network by
transitioning the port into the Spanning Tree Protocol (STP) forwarding state without passing
through the listening and learning states. Configuring BPDU filtering on a port that is also
configured for PortFast causes the port to ignore any bridge protocol data units (BPDUs) it
receives, effectively disabling STP.

UplinkFast increases convergence speed for an access layer switch that detects a failure on the
root port with backup root port selection by immediately replacing the root port with an alternative
root port. BackboneFast increases convergence speed for switches that detect a failure on links
that are not directly connected to the switch.

802.1D is the traditional STP implementation to prevent switching loops on a network. Traditional
STP, which Cisco training and reference materials refer to simply as 802.1D, is more formally
known as the 802.1D-1998 standard. Although PortFast, UplinkFast, and BackboneFast can be
used with 802.1D, it does not contain those features natively. Traditional STP converges slowly,
so the 802.1w RSTP standard was developed by the Institute of Electrical and Electronics
Engineers (IEEE) to address the slow transition of an 802.1D port to the forwarding state. RSTP is
backward compatible with STP, but the convergence benefits provided by RSTP are lost when
RSTP interacts with STP devices. The features of 802.1w, including PortFast, UplinkFast, and
BackboneFast, were integrated into the 802.1D-2004 standard, and the traditional STP algorithm
was replaced with RSTP.

The 802.1s Multiple Spanning Tree (MST) standard is used to create multiple spanning tree
instances on a network. Implementing MST on a switch also implements RSTP. However, the
802.1s standard does not natively include PortFast, UplinkFast, and BackboneFast within the
specification.

Reference:

CCDA 200-310 Official Cert Guide, Chapter 3, Cisco STP Toolkit, pp. 103-105

Cisco: Understanding Rapid Spanning Tree Protocol (802.1w): Conclusion

QUESTION NO: 89

Which of the following network virtualization techniques does Cisco recommend for any-to-any
connectivity in large networks?

A.
VRF-Lite

B.
Multi-VRF

C.
EVN

D.
MPLS

Answer: D
Explanation:
Section: Considerations for Expanding an Existing Network

Cisco recommends Multiprotocol Label Switching (MPLS) as a network virtualization technique for
any-to-any connectivity in large networks. MPLS is typically implemented in an end-to-end fashion
at the network edge and requires the edge and core devices to be MPLS-capable. MPLS can
support thousands of virtual networks (VNETs) over a full-mesh topology to provide any-to-any
connectivity without requiring excessive operational complexity or management resources.
Although MPLS is best suited for large networks, integrating MPLS into an existing design and
infrastructure can be disruptive, particularly if MPLS-incapable devices must be replaced with
MPLS-capable devices at the network edge or in the core.

The Multi-virtual routing and forwarding (Multi-VRF) network virtualization technique, which Cisco
also refers to as VRF-Lite, is best suited for small or medium networks. Multi-VRF uses virtual
routing and forwarding (VRF) instances to segregate a Layer 3 network. Multi-VRF is typically
used to support one-to-one, end-to-end connections; however, multipoint Generic Routing
Encapsulation (mGRE) tunnels could be used to create any-to-any connectivity in small networks.
Cisco considers a full mesh of mGRE tunnels in larger networks impractical because of the
increased operational complexity and management load. On Cisco platforms, Multi-VRF network
virtualization supports up to eight VNETs before operational complexity and management become
problematic. The VNETs created by Multi-VRF mirror the physical infrastructure upon which they
are built, and most Cisco platforms support Multi-VRF; therefore, the general network design and
overall infrastructure do not require disruptive changes in order to support a Multi-VRF overlay
topology.

Newer Cisco platforms support Easy Virtual Networking (EVN), which is a network virtualization
that also uses VRFs to segregate Layer 3 networks. EVN supports up to 32 VNETs before
operational complexity and management become problematic. Cisco recommends using EVN
instead of Multi-VRF in small and medium networks. Although EVN is backward-compatible with
Multi-VRF, implementing a homogeneous EVN topology would require replacing unsupported
hardware with EVN-capable devices. Replacing infrastructure is typically disruptive and may
require additional modifications to the existing network design.

Reference:

CCDA 200-310 Official Cert Guide, Chapter 4, VRF, p. 154

Cisco: Borderless Campus Network Virtualization-Path Isolation Design Fundamentals: Path


Isolation

QUESTION NO: 90 DRAG DROP

Drag the event action on the left to the IPS mode that supports it on the right. Use all event
actions. Some boxes will not be filled.

Answer:


Explanation:

Section: Considerations for Expanding an Existing Network


Promiscuous mode enables Cisco Intrusion Prevention System (IPS) to examine traffic on ports
from multiple network segments without being directly connected to those segments. Copies of
traffic are forwarded to IPS for analysis instead of flowing through IPS directly. Therefore,
promiscuous mode increases latency because the amount of time IPS takes to determine whether
a network attack is in progress can be greater in promiscuous mode than when IPS is operating in
inline mode. The greater latency means that an attack has a greater chance at success prior to
detection.

In promiscuous mode, IPS can mitigate a network attack with actions such as producing alerts, logging attacker, victim, or pair packets, resetting TCP connections, requesting SNMP traps, and requesting that another device block the offending host or connection.

IPS in promiscuous mode requires a traffic-copying mechanism such as Switched Port Analyzer (SPAN) or Remote SPAN (RSPAN). RSPAN enables the
monitoring of traffic on a network by capturing and sending traffic from a source port on one device
to a destination port on a different device on a non-routed network. Inline mode enables IPS to
examine traffic as it flows through the IPS device. Therefore, the IPS device must be directly
connected to the network segment that it is intended to protect. Any traffic that should be analyzed
by IPS must be to a destination that is separated from the source by the IPS device.

In inline mode, IPS supports the promiscuous-mode actions and adds inline-only actions, such as denying the attacker, the connection, or individual packets inline and modifying packets inline.

IPS in inline mode mitigates attacks for 60 minutes by default. IPS in promiscuous mode mitigates
attacks for 30 minutes by default. However, the mitigation effect time for both inline mode and
promiscuous mode can be configured by an IPS administrator.

Reference:

CCDA 200-310 Official Cert Guide, Chapter 13, IPS/IDS Fundamentals, pp. 534-535

Cisco: Cisco IPS Mitigation Capabilities: Event Actions

QUESTION NO: 91

Which of the following statements are correct regarding network design approaches? (Choose
two.)

A.
The top-down approach is recommended over the bottom-up approach.

B.
The top-down approach is more time-consuming than the bottom-up approach.

C.

The top-down approach can lead to costly redesigns.

D.
The bottom-up approach focuses on applications and services.

E.
The bottom-up approach provides a "big picture" overview.

F.
The bottom-up approach incorporates organizational requirements.

Answer: A,B
Explanation:
Section: Design Methodologies

The top-down approach to network design is recommended over the bottom-up approach, and the
top-down approach is more time-consuming than the bottom-up approach. The top-down design
approach takes its name from the methodology of starting with the higher layers of the Open
Systems Interconnection (OSI) model, such as the Application, Presentation, and Session layers,
and working downward toward the lower layers. The top-down design approach is more time-
consuming than the bottom-up design approach because the top-down approach requires a
thorough analysis of the organization's requirements. Once the designer has obtained a complete
overview of the existing network and the organization's needs, in terms of applications and
services, the designer can provide a design that meets the organization's current requirements
and that can adapt to the organization's projected future needs. Because the resulting design
includes room for future growth, costly redesigns are typically not necessary with the top-down
approach to network design.

By contrast, the bottom-up approach can be much less time-consuming than the top-down design
approach. The bottom-up design approach takes its name from the methodology of starting with
the lower layers of OSI model, such as the Physical, Data Link, Network, and Transport layers,
and working upward toward the higher layers. The bottom-up approach relies on previous
experience rather than on a thorough analysis of organizational requirements or projected growth.
In addition, the bottom-up approach focuses on the devices and technologies that should be
implemented in a design, instead of focusing on the applications and services that will actually use
the network. Because the bottom-up approach does not use a detailed analysis of an
organization's requirements, the bottom-up design approach can often lead to costly network
redesigns. Cisco does not recommend the bottom-up design approach, because the design does
not provide a "big picture" overview of the current network or its future requirements.

Reference:

CCDA 200-310 Official Cert Guide, Chapter 1, Top-Down Approach, pp. 24-25

Cisco: Using the Top-Down Approach to Network Design: 4. Top-Down and Bottom-Up Approach
Comparison (Flash)

QUESTION NO: 92

View the Exhibit.

You have been asked to use CDP to document the network shown in the diagram above. You are
working from HostA, which is connected to the console port of SwitchA. You connect to SwitchA
and issue the show cdp neighbors and show cdp neighbors detail commands.

Which of the following statements are correct? (Choose two.)

A.
The show cdp neighbors detail command will show all of the host IP addresses in use on HostA's
LAN.

B.
The show cdp neighbors command will show which port on SwitchB connects to SwitchA.

C.
The show cdp neighbors command will show two devices connected to SwitchA.

D.
The show cdp neighbors detail command will show information for all Cisco devices on the
network.

E.
The show cdp neighbors detail command will display all of RouterA's IP addresses.

Answer: B,C
Explanation:
Section: Design Methodologies

The show cdp neighbors command will display the directly connected Cisco devices that are
sending Cisco Discovery Protocol (CDP) updates; the directly connected devices in this case are
RouterA and SwitchB. The port ID of the sending device will be displayed by the show cdp
neighbors command. Therefore, the show cdp neighbors command will show which port on
SwitchB and which interface on RouterA connect to SwitchA. CDP is used to collect information
about neighboring Cisco devices and is enabled by default. Because CDP operates at the Data
Link layer, which is Layer 2 of the Open Systems Interconnection (OSI) model, CDP is not
dependent on any particular Layer 3 protocol addressing, such as IP addressing. Therefore, if
CDP information is not being exchanged between devices, you should check for Physical layer
and Data Link layer connectivity problems. CDP is enabled by default on Cisco devices. You can
globally disable CDP by issuing the no cdp run command in global configuration mode. You can
disable CDP on a per-interface basis by issuing the no cdp enable command in interface
configuration mode.

The show cdp neighbors detail command will not show information for all of the Cisco devices on
the network. The only devices that will send CDP information are the directly connected devices.

The show cdp neighbors detail command will not display all of RouterA's IP addresses. Updates
sent from RouterA and received by SwitchA will include only the IP address of the port that sent
the update.

The show cdp neighbors detail command will not show all of the IP addresses of hosts on the
LAN. Hosts do not send CDP information; only directly connected Cisco devices send CDP
updates.

Reference:

CCDA 200-310 Official Cert Guide, Chapter 15, CDP, p. 629

Cisco: Cisco IOS Configuration Fundamentals Command Reference, Release 12.2: show cdp
neighbors

QUESTION NO: 93

Which of the following prefixes will an IPv6-enabled computer use to automatically configure an
IPv6 address for itself?

A.
2000::/3

B.
FC00::/7

C.
FE80::/10

D.
FF00::/8

Answer: C
Explanation:
Section: Addressing and Routing Protocols in an Existing Network

An IP version 6 (IPv6)-enabled computer will use the prefix FE80::/10 to automatically configure an
IPv6 address for itself. The IPv6 prefix FE80::/10 is used for unicast link-local addresses. IPv6
addresses in the FE80::/10 range begin with the characters FE80 through FEBF. Unicast packets
are used for one-to-one communication. Link-local addresses are unique only on the local
segment. Therefore, link-local addresses are not routable. Unicast link-local addresses are used
for neighbor discovery and for environments in which no router is present to provide a routable
IPv6 prefix.

IPv6 was developed to address the lack of available address space with IPv4. An IPv6 address is
a 128-bit (16-byte) address that is typically written as eight groups of four hexadecimal characters,
including numbers from 0 through 9 and letters from A through F. Each group of four characters is
separated by colons. Leading zeroes in each group can be dropped. A double colon can be used
at the beginning, middle, or end of an IPv6 address in place of one or more contiguous
four-character groups consisting of all zeroes. However, only one double colon can be used in an
IPv6 address. For example, 2001:0DB8:0000:0000:0000:0000:0000:0001, 2001:DB8:0:0:0:0:0:1,
and 2001:DB8::1 all represent the same address.
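These abbreviation rules, and the link-local range discussed above, can be illustrated with Python's standard ipaddress module; a small sketch using the 2001:DB8::/32 documentation prefix:

```python
import ipaddress

# Leading zeroes drop, and one run of all-zero groups collapses to "::".
full = ipaddress.ip_address("2001:0db8:0000:0000:0000:0000:0000:0001")
print(full.compressed)                               # 2001:db8::1
print(full == ipaddress.ip_address("2001:db8::1"))   # True

# Unicast link-local addresses fall in FE80::/10 (FE80 through FEBF).
print(ipaddress.ip_address("fe80::1").is_link_local)  # True
print(ipaddress.ip_address("fec0::1").is_link_local)  # False
```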

An IPv6-enabled computer will not use the prefix 2000::/3 to automatically configure an IPv6
address for itself. The IPv6 prefix 2000::/3 is used for global aggregatable unicast addresses. IPv6
addresses in the 2000::/3 range begin with the characters 2000 through 3FFF. Global
aggregatable unicast address prefixes are distributed by the Internet Assigned Numbers Authority
(IANA) and are globally routable over the Internet. Because there is an inherent hierarchy in the
aggregatable global address scheme, these addresses lend themselves to simple consolidation,
which greatly reduces the complexity of Internet routing tables.

An IPv6-enabled computer will not use the prefix FC00::/7 to automatically configure an IPv6
address for itself. The IPv6 prefix FC00::/7 is used for unicast unique-local addresses. IPv6
addresses in this range begin with the characters FC00 through FDFF. Unique-local addresses
are not globally routable, but they are routable within an organization.

An IPv6-enabled computer will not use the prefix FF00::/8 to automatically configure an IPv6
address for itself. The IPv6 prefix FF00::/8 is used for multicast addresses, which are used for
one-to-many communication. IPv6 addresses in the FF00::/8 range begin with the characters
FF00 through FFFF. However, certain address ranges are used to indicate the scope of the
multicast address. Defined scopes include interface-local (FF01), link-local (FF02), site-local
(FF05), organization-local (FF08), and global (FF0E).

Reference:

CCDA 200-310 Official Cert Guide, Chapter 9, Link-Local Addresses, p. 343

CCDA 200-310 Official Cert Guide, Chapter 9, SLAAC of Link-Local Address, p. 350

Cisco: IPv6: A Primer for Physical Security Professionals

QUESTION NO: 94

Which of the following does NetFlow use to identify a traffic flow?

A.
only Layer 2 information

B.
only Layer 3 information

C.
only Layer 4 information

D.
Layer 2 and Layer 3 information

E.
Layer 3 and Layer 4 information

F.
Layer 4 through 7 information

Answer: E
Explanation:
Section: Design Methodologies

NetFlow uses Open Systems Interconnection (OSI) Layer 3 and Layer 4 information to identify a
traffic flow. NetFlow is a Cisco IOS feature that can be used to gather flow-based statistics, such

as packet counts, byte counts, and protocol distribution. A device configured with NetFlow
examines packets for select Layer 3 and Layer 4 attributes that uniquely identify each traffic flow.
A traffic flow can be identified based on the unique combination of the following seven attributes: source IP address, destination IP address, source port number, destination port number, Layer 3 protocol type, Type of Service (ToS) byte, and input logical interface.

The data gathered by NetFlow is typically exported to management software. You can then
analyze the data to facilitate network planning, customer billing, and traffic engineering. For
example, NetFlow can be used to obtain information about the types of applications generating
traffic flows through a router.
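As a toy illustration of flow-based aggregation, the sketch below groups hypothetical packet records by a NetFlow-style key; the packet records and field names are invented for this example:

```python
from collections import Counter

# A simplified flow key; real NetFlow also keys on ToS and input interface.
FLOW_KEY = ("src_ip", "dst_ip", "src_port", "dst_port", "proto")

packets = [  # hypothetical packet records
    {"src_ip": "10.0.0.5", "dst_ip": "192.0.2.9", "src_port": 51000,
     "dst_port": 443, "proto": 6, "bytes": 1500},
    {"src_ip": "10.0.0.5", "dst_ip": "192.0.2.9", "src_port": 51000,
     "dst_port": 443, "proto": 6, "bytes": 600},
    {"src_ip": "10.0.0.7", "dst_ip": "192.0.2.9", "src_port": 40000,
     "dst_port": 53, "proto": 17, "bytes": 90},
]

byte_counts = Counter()
for pkt in packets:
    key = tuple(pkt[f] for f in FLOW_KEY)  # identical key -> same flow
    byte_counts[key] += pkt["bytes"]

print(len(byte_counts))  # 2 distinct flows
```

The first two packets share every key field, so they accumulate into a single flow record; this per-flow aggregation is what makes the exported statistics compact.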

NetFlow does not use Layer 2 information, such as a packet's source Media Access Control
(MAC) address, to identify a traffic flow. Although the input interface is considered when
identifying a traffic flow, the MAC address of the interface is not.

Network-Based Application Recognition (NBAR), not NetFlow, uses Layer 4 through 7 information
to classify application traffic. NBAR is a Quality of Service (QoS) feature that enables a device to
perform deep packet inspection for all packets that pass through an NBAR-enabled interface. With
deep packet inspection, an NBAR-enabled device can classify traffic based on the content of a
Transmission Control Protocol (TCP) or a User Datagram Protocol (UDP) packet, instead of just
the network header information. In addition, NBAR can provide statistical reporting relative to each
recognized application.

Reference:

CCDA 200-310 Official Cert Guide, Chapter 15, NetFlow, pp. 626-628

Cisco: Cisco IOS Switching Services Configuration Guide, Release 12.2: Capturing Traffic Data

QUESTION NO: 95

Which of the following is a Layer 2 high-availability feature?

A.
NSF

B.
UDLD

C.
SPF

D.
FHRP

Answer: B
Explanation:
Section: Enterprise Network Design

UniDirectional Link Detection (UDLD) is a Layer 2 high-availability (HA) feature. UDLD monitors a
link to verify that both ends of the link are functioning. UDLD operates by sending messages
across the link. When a port receives a UDLD message, the port responds by sending an echo
message to verify that the link is bidirectional. Layer 2 HA features, such as UDLD, Spanning Tree
Protocol (STP), and IEEE 802.3ad link aggregation, increase network resiliency and are often
integral components in redundant topology designs.

Shortest Path First (SPF), First-Hop Redundancy Protocol (FHRP), and nonstop forwarding (NSF)
are Layer 3 HA features, not Layer 2 HA features. SPF uses an efficient algorithm to determine
the optimal Layer 3 path to a destination within a routing domain. FHRP provides gateway
resiliency for hosts. NSF provides graceful restart provisions for common routing protocols to
ensure fast convergence and uninterrupted Layer 3 forwarding during failure events, such as
supervisor module failure and switchover.

Reference:

CCDA 200-310 Official Cert Guide, Chapter 4, Virtualization Technologies, pp. 153-157

Cisco: Campus 3.0 Virtual Switching System Design Guide: VSS Architecture and Operation

QUESTION NO: 96

Which of the following statements is true regarding VMs?

A.
VMs running on a host computer must run the same version of an OS as the host computer.

B.
Multiple VMs can be running simultaneously on a single host computer.

C.
Installing virus protection on the host computer automatically protects any VMs running on that
host computer.

D.
All software is shared among the host computer and the VMs.

Answer: B
Explanation:
Section: Considerations for Expanding an Existing Network

Multiple virtual machines (VMs) can be running simultaneously on a single host computer. A VM is
an isolated environment running a separate operating system (OS) while sharing hardware
resources with a host machine's OS. For example, you can configure a Windows 7 VM that can
run within Windows 8; both OSs can run at the same time if virtualization software, such as
Microsoft Hyper-V, is used. The Windows 7 VM could then be used as a testing environment for
patch or application deployment.

Depending on a computer's hardware capabilities, multiple VMs can be installed on a single
computer, which can help provide more efficient utilization of hardware resources. For example,
VMWare ESXi Server provides a hypervisor that runs on bare metal, meaning without a host OS,
and that can efficiently manage multiple VMs on a single server. A VM can access the physical
network through a network adapter shared by the host computer. Alternatively, a VM could access
virtualized networking devices on the host, such as routers or switches, to access network
resources.

Before a VM is installed, it is important to ensure that the hardware on the host in which you are
configuring the VM has enough CPU process availability and random access memory (RAM) to
support the simultaneous use of multiple OSs and to ensure that the client you are accessing the
VM from has sufficient network bandwidth.

The VMs on a host computer can, but are not required to, run the same version of an OS as the
host computer. For example, you can install Windows 8 on a VM that is hosted on a Windows 8
computer. Alternatively, as in the example given previously, you can configure a Windows 7 VM
that can run within Windows 8.

Installing virus protection on the host computer will not automatically protect any VMs running on
that host computer. Securing the host computer does not secure all virtual computers running on
that host computer. You must manually manage the security of each VM installed on a host
computer. For example, installing patches and security software on the host computer will not also
configure the patches and software to be installed on the VMs.

Although a VM shares the hardware resources of the host computer, the software remains
separate. Software installed on the host is not accessible from within the VM. For example,
Microsoft Office might be installed on the host computer, but in order to access Microsoft Office
from within a VM you must also install Microsoft Office on the VM. Separate instances of software
on the host computer and on each VM can help protect the host computer from potentially harmful
changes made within a VM. For example, if a VM user accidentally deletes a system file or installs
malicious software, the host computer will not be affected. This applies to drivers as well; if the
network adapter driver is removed from the VM, the host computer and the other VMs will not be
affected.

Reference:

CCDA 200-310 Official Cert Guide, Chapter 4, Server Virtualization, p. 155

QUESTION NO: 97

Which of the following are true of the access layer of a hierarchical design? (Choose two.)

A.
It provides address summarization.

B.
It aggregates LAN wiring closets.

C.
It aggregates WAN connections.

D.
It isolates the distribution and core layers.

E.
It is also known as the backbone layer.

F.
It performs Layer 2 switching.

G.
It performs NAC for end users.

Answer: F,G
Explanation:
Section: Enterprise Network Design

The access layer typically performs Layer 2 switching and Network Admission Control (NAC) for
end users. The access layer is the network hierarchical layer where end-user devices connect to
the network. Port security and Spanning Tree Protocol (STP) toolkit features like PortFast are
typically implemented in the access layer.

The distribution layer of a hierarchical design, not the access layer, provides address

summarization, aggregates LAN wiring closets, and aggregates WAN connections. The
distribution layer is used to connect the devices at the access layer to those in the core layer.
Therefore, the distribution layer isolates the access layer from the core layer. In addition to these
features, the distribution layer can also be used to provide policy-based routing, security filtering,
redundancy, load balancing, Quality of Service (QoS), virtual LAN (VLAN) segregation of
departments, inter-VLAN routing, translation between types of network media, routing protocol
redistribution, and more.

The core layer of a hierarchical design, not the access layer, is also known as the backbone layer.
The core layer is used to provide connectivity to devices connected through the distribution layer.
In addition, it is the layer that is typically connected to enterprise edge modules. Cisco
recommends that the core layer provide fast transport, high reliability, redundancy, fault tolerance,
low latency, limited diameter, and QoS. However, the core layer should not include features that
could inhibit CPU performance. For example, packet manipulation that results from some security,
QoS, classification, or inspection features can be a drain on resources.

Reference:

CCDA 200-310 Official Cert Guide, Chapter 2, Access Layer, pp. 44-46

Cisco: High Availability Campus Network Design-Routed Access Layer using EIGRP or OSPF:
Hierarchical Design

QUESTION NO: 98

In which of the following modules of the Cisco enterprise architecture would you expect to find a
DNS server? (Choose two.)

A.
campus core

B.
data center

C.
building distribution

D.
enterprise edge

E.
building access

Answer: B,D
Explanation:
Section: Enterprise Network Design

You would expect to find a Domain Name System (DNS) server in the data center or enterprise
edge modules of the Cisco enterprise architecture. The enterprise architecture model is a modular
framework that is used for the design and implementation of large networks. The enterprise
architecture model includes the following modules: enterprise campus, enterprise edge, service
provider (SP) edge, and remote modules that utilize resources that are located away from the
main enterprise campus.

The campus core layer, building distribution layer, and building access layer are all part of the
enterprise campus module. These submodules of the enterprise campus module rely on a resilient
multilayer design to support the day-to-day operations of the enterprise. Also found within the
enterprise campus module is the data center submodule, which is also referred to as the server
farm submodule. The data center submodule provides file and print services to the enterprise
campus. In addition, the data center submodule typically hosts internal DNS, email, Dynamic Host
Configuration Protocol (DHCP), and database services.

The enterprise edge module represents the boundary between the enterprise campus module and
the outside world. In addition, the enterprise edge module aggregates voice, video, and data traffic
to ensure a particular level of Quality of Service (QoS) between the enterprise campus and
external users located in remote submodules. Enterprise WAN, Internet connectivity, ecommerce
servers, and remote access & virtual private network (VPN) are all submodules of the enterprise
edge module.

Enterprise data center, enterprise branch, and teleworkers are examples of remote submodules
that are found within the enterprise architecture model. These submodules represent enterprise
resources that are located outside the main enterprise campus. These submodules typically
connect to the enterprise campus through the use of the SP edge and the enterprise edge
modules. Because many Cisco routers commonly used at the edge of the network are capable of
providing DHCP and DNS services to the network edge, devices in the remote submodules do not
need to rely on the DHCP and DNS servers located in the enterprise campus.

The SP edge module consists of submodules that represent third-party network service providers.
For example, most enterprise entities rely on Internet service providers (ISPs) for Internet
connectivity and on public switched telephone network (PSTN) providers for telephone service. In
addition, the third-party infrastructure found in the SP edge is often used to provide connectivity
between the enterprise campus and remote resources.

Reference:

CCDA 200-310 Official Cert Guide, Chapter 8, DNS, pp. 319-321


QUESTION NO: 99 DRAG DROP

Select the subnet masks on the left, and place them over the number of host addresses that the
subnet mask can support. Not all subnet masks will be used.

Answer:

Explanation:


Section: Addressing and Routing Protocols in an Existing Network

A subnet mask specifies how many bits belong to the network portion of a 32bit IP address. The
remaining bits in the IP address belong to the host portion of the IP address. To determine how
many host addresses are defined by a subnet mask, use the formula 2n-2, where n is the number
of bits in the host portion of the address.

A /19 subnet mask uses 13 bits for host addresses. Therefore, 2^13 - 2 equals 8,190 valid host
addresses.

A /20 subnet mask uses 12 bits for host addresses. Therefore, 2^12 - 2 equals 4,094 valid host
addresses.

A /22 subnet mask uses 10 bits for host addresses. Therefore, 2^10 - 2 equals 1,022 valid host
addresses.

A /23 subnet mask uses nine bits for host addresses. Therefore, 2^9 - 2 equals 510 valid host
addresses.

A /25 subnet mask uses seven bits for host addresses. Therefore, 2^7 - 2 equals 126 valid host
addresses.
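The 2^n - 2 calculation above can be sketched in a few lines of Python. This is purely an illustration of the arithmetic; the function name is hypothetical and not part of any exam material:

```python
def usable_hosts(prefix_length: int) -> int:
    """Return the number of usable IPv4 host addresses for a prefix.

    32 minus the prefix length gives the host bits; 2 is subtracted
    to exclude the network and broadcast addresses.
    """
    host_bits = 32 - prefix_length
    return 2 ** host_bits - 2

# The prefix lengths discussed above:
for prefix in (19, 20, 22, 23, 25):
    print(f"/{prefix}: {usable_hosts(prefix)} hosts")
```

Running the loop reproduces the values in the explanation: 8,190; 4,094; 1,022; 510; and 126.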

Although it is important to learn the formula for calculating valid host addresses, the following list
demonstrates the relationship between subnet masks and valid host addresses:

"Pass Any Exam. Any Time." - www.actualtests.com 162


Cisco 200-310 Exam

Reference:

CCDA 200-310 Official Cert Guide, Chapter 8, IPv4 Address Subnets, pp. 302-310

Cisco: IP Addressing and Subnetting for New Users

QUESTION NO: 100

Which of the following statements is true regarding NetFlow?

A.
NetFlow can collect timestamps of traffic flowing between a particular source and destination.

B.
Data collected by NetFlow cannot be exported.

C.
Many configuration changes to existing network devices are required in order to accommodate
NetFlow.

D.
For audit purposes, NetFlow must run on every router in a network.

"Pass Any Exam. Any Time." - www.actualtests.com 163


Cisco 200-310 Exam
Answer: A
Explanation:
Section: Design Methodologies Explanation

NetFlow is a Cisco IOS feature that can collect timestamps of traffic flowing between a particular
source and destination. NetFlow can be used to gather flow-based statistics, such as packet
counts, byte counts, and protocol distribution. A device configured with NetFlow examines packets
for select Layer 3 and Layer 4 attributes that uniquely identify each traffic flow. The data gathered
by NetFlow is typically exported to management software. You can then analyze the data to
facilitate network planning, customer billing, and traffic engineering. A traffic flow is defined as a
series of packets with the same source IP address, destination IP address, protocol, and Layer 4
information. Although NetFlow does not use Layer 2 information, such as a source Media Access
Control (MAC) address, to identify a traffic flow, the input interface on a switch will be considered
when identifying a traffic flow. Each NetFlow-enabled device gathers statistics independently of any
other device; NetFlow does not have to run on every router in a network in order to produce
valuable data for an audit. In addition, NetFlow is transparent to the existing network infrastructure
and does not require any network configuration changes in order to function.
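The flow-key idea described above can be illustrated with a short Python sketch that groups packets by the attributes NetFlow uses to identify a flow. The dictionary field names are hypothetical, chosen only for this example; a real NetFlow cache is built inside IOS, not in Python:

```python
from collections import defaultdict

def flow_key(pkt: dict) -> tuple:
    """Build a NetFlow-style flow key: packets sharing source and
    destination IP, protocol, Layer 4 ports, and input interface
    belong to the same flow."""
    return (pkt["src_ip"], pkt["dst_ip"], pkt["protocol"],
            pkt["src_port"], pkt["dst_port"], pkt["input_if"])

def build_flow_cache(packets):
    """Aggregate packet and byte counts per flow, as a flow cache
    conceptually does before the data is exported to a collector."""
    cache = defaultdict(lambda: {"packets": 0, "bytes": 0})
    for pkt in packets:
        entry = cache[flow_key(pkt)]
        entry["packets"] += 1
        entry["bytes"] += pkt["length"]
    return dict(cache)
```

Two packets with identical key fields land in one cache entry with summed counters; a packet differing in any key field starts a new flow entry.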

Reference:

Cisco: Cisco IOS Switching Services Configuration Guide, Release 12.2: NetFlow Overview

QUESTION NO: 101

On a Cisco router, which of the following message types does the traceroute command use to
map the path that a packet takes through a network?

A.
ICMP Echo

B.
ICMP TEM

C.
LLDP TLV

D.
CDP TLV

Answer: B

"Pass Any Exam. Any Time." - www.actualtests.com 164


Cisco 200-310 Exam
Explanation:
Section: Design Methodologies Explanation

On a Cisco router, the traceroute command uses Internet Control Message Protocol (ICMP) Time
Exceeded Message (TEM) messages to map the path that a packet takes through a network. The
traceroute command works by sending a sequence of messages, usually User Datagram Protocol
(UDP) packets, to a destination address. The Time-to-Live (TTL) value in the IP header of each
series of packets is incremented as the traceroute command discovers the IP address of each
router in the path to the destination address. The first series of packets, which have a TTL value of
one, make it to the first hop router, where their TTL value is decremented by one as part of the
forwarding process. Because the new TTL value of each of these packets will be zero, the first hop
router will discard the packets and send an ICMP TEM to the source of each discarded packet.
The traceroute command will record the IP address of the source of the ICMP TEM and will then
send a new series of messages with a higher TTL. The next series of messages is sent with a TTL
value of two and arrives at the second hop before generating ICMP TEMs and thus identifying the
second hop. This process continues until the destination is reached and every hop in the path to
the destination is identified. In this manner, the traceroute command can be used to manually build
a topology map of an existing network; however, more effective mechanisms, such as Link Layer
Discovery Protocol (LLDP) or Cisco Discovery Protocol (CDP), are typically used instead when
available.

Some network trace implementations similar to the IOS traceroute command send ICMP Echo
messages or Transmission Control Protocol (TCP) synchronization (SYN) packets by default. For
example, the tracert command on Microsoft Windows platforms uses ICMP Echo messages by
default, instead of ICMP TEMs, to map the path a packet takes through a network. Some
implementations offer configuration options to specify the message types used to map the network
path of a series of packets. Being able to specify the message type is useful in environments
where firewalls or other filtering mechanisms restrict the flow of certain types of packets, such as
ICMP Echo messages.
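The TTL-stepping loop described above can be modeled with a short Python sketch. This is a simplified simulation against an already-known path, not a real traceroute implementation, which sends actual UDP probes and listens for ICMP TEMs:

```python
def simulated_traceroute(path):
    """Simulate traceroute discovery against a known router path.

    'path' is the ordered list of router IP addresses ending at the
    destination. A probe sent with TTL n expires at hop n, whose
    router answers with an ICMP TEM; the destination answers the
    final probe itself, ending the trace.
    """
    discovered = []
    ttl = 1
    while True:
        hop = path[ttl - 1]      # the probe with this TTL expires here
        discovered.append(hop)   # record the source of the reply
        if hop == path[-1]:      # destination reached
            return discovered
        ttl += 1                 # next probe uses a larger TTL
```

Each iteration mirrors one round of probes: TTL 1 reveals the first hop, TTL 2 the second, and so on until the destination replies.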

CDP is a Cisco-proprietary network discovery protocol that uses Type-Length-Value (TLV) fields to
share data with neighboring Cisco devices. A TLV is a data structure that defines a type of data,
its maximum length, and a value. For example, the CDP Device-ID TLV contains a string of
characters identifying the name assigned to the device. Each CDP message contains a series of
TLV fields, which collectively describe a Cisco device, its configuration, and its capabilities. CDP-
enabled devices listen for CDP packets and parse the TLVs to build a table with information about
each neighboring Cisco device. The information in the CDP table can be used by other processes
on the device. For example, native virtual LAN (VLAN) mismatches are commonly identified based
on the information from the CDP table.

Likewise, LLDP uses TLV fields to share data with neighboring network devices. LLDP is an open-
standard network discovery protocol specified as part of the Institute of Electrical and Electronics
Engineers (IEEE) 802.1AB standard. Because LLDP is designed to operate in a multivendor
environment, it specifies a number of mandatory TLVs that must be included at the beginning of
each LLDP message. Any optional TLVs follow the mandatory TLVs, and an empty TLV specifies
"Pass Any Exam. Any Time." - www.actualtests.com 165
Cisco 200-310 Exam
the end of the series. Most Cisco platforms support both CDP and LLDP.

Reference:

Cisco: Understanding the Ping and Traceroute Commands

QUESTION NO: 102

Which of the following is a hierarchical routing protocol that can summarize routes at border
routers and by using redistribution?

A.
RIPv1

B.
RIPv2

C.
OSPF

D.
EIGRP

Answer: C
Explanation:
Section: Addressing and Routing Protocols in an Existing Network Explanation

Open Shortest Path First (OSPF) is a hierarchical, link-state routing protocol that can summarize
routes at border routers or by using redistribution summarization. OSPF divides an autonomous
system (AS) into areas. These areas can be used to limit routing updates to one portion of the
network, thereby keeping routing tables small and update traffic low. Only OSPF routers in the
same hierarchical area form adjacencies. Hierarchical design provides for efficient performance
and scalability. Although OSPF is more difficult to configure, it converges more quickly than most
other routing protocols.

Enhanced Interior Gateway Routing Protocol (EIGRP) is a hybrid routing protocol that combines
the best features of distance-vector and link-state routing protocols. Unlike OSPF, EIGRP
supports automatic summarization and can summarize routes on any EIGRP interface. However,
both OSPF and EIGRP converge faster than other routing protocols and support manual
configuration of summary routes.

"Pass Any Exam. Any Time." - www.actualtests.com 166


Cisco 200-310 Exam
Routing Information Protocol version 1 (RIPv1) and RIPv2 are not hierarchical routing protocols
that divide an AS into areas. RIPv1 and RIPv2 are distance-vector routing protocols that use hop
count as a metric. By default, RIP sends out routing updates every 30 seconds, and the routing
updates are propagated to all RIP routers on the network.

Reference:

CCDA 200-310 Official Cert Guide, Chapter 11, OSPFv2 Summary, p. 439

Cisco: Open Shortest Path First

QUESTION NO: 103

View the Exhibit:

Refer to the exhibit. Which of the following statements are true regarding the deployment of the
IPS in the exhibit? (Choose two.)

A.
It increases response latency.

B.
It increases the risk of successful attacks.

C.
It can directly block all communication from an attacking host.

D.
It can reset TCP connections.

E.
It does not require RSPAN on switch ports.

Answer: C,E
Explanation:
Section: Considerations for Expanding an Existing Network Explanation

"Pass Any Exam. Any Time." - www.actualtests.com 167


Cisco 200-310 Exam
When Cisco Intrusion Prevention System (IPS) is configured in inline mode, IPS can directly block
all communication from an attacking host. In addition, an IPS in inline mode does not require that
Remote Switched Port Analyzer (RSPAN) be enabled on switch ports.

Inline mode enables IPS to examine traffic as it flows through the IPS device. Therefore, any traffic
that should be analyzed by IPS must be to a destination that is separated from the source by the
IPS device. By contrast, promiscuous mode enables IPS to examine traffic on ports from multiple
network segments without being directly connected to those segments. Promiscuous mode, which
is also referred to as monitor-only operation, enables an IPS to passively examine network traffic
without impacting the original flow of traffic. This passive connection enables the IPS to have the
most visibility into the networks on the switch to which it is connected. However, promiscuous
mode operation increases latency and increases the risk of successful attacks.

IPS can use all of the following actions to mitigate a network attack in inline mode:

IPS in promiscuous mode, not inline mode, requires RSPAN. RSPAN enables the monitoring of
traffic on a network by capturing and sending traffic from a source port on one device to a
destination port on a different device on a non-routed network. Because copies of traffic from the
RSPAN port are forwarded to a monitor-only IPS for analysis instead of flowing through IPS
directly, the amount of time IPS takes to determine whether a network attack is in progress can be
greater in promiscuous mode than when IPS is operating in inline mode. The increased response
latency means that an attack has a greater chance at success prior to detection.

IPS in promiscuous mode, not inline mode, can reset TCP connections. Promiscuous mode
supports three actions to mitigate attacks: Request block host, Request block connection, and
Reset TCP connection. The Request block host action causes IPS to send a request to the Attack
Response Controller (ARC) to block all communication from the attacking host for a given period
of time. The Request block connection action causes IPS to send a request to the ARC to block
the specific connection from the attacking host for a given period of time. The Reset TCP
connection action clears TCP resources so that normal TCP network activity can be established.
However, resetting TCP connections is effective only for TCP-based attacks and against only
some types of those attacks.

Reference:

CCDA 200-310 Official Cert Guide, Chapter 13, IPS/IDS Fundamentals, pp. 534-535

Cisco: Cisco IPS Mitigation Capabilities: Inline Mode Event Actions

QUESTION NO: 104

"Pass Any Exam. Any Time." - www.actualtests.com 168


Cisco 200-310 Exam
Which of the following statements are correct regarding wireless signals in a VoWLAN? (Choose
two.)

A.
High data rate signals require higher SNRs than low data rate signals.

B.
VoWLANs require lower SNRs than data-only WLANs.

C.
Signals from adjacent cells on nonoverlapping channels should have an overlap of between 15
and 20 percent to ensure smooth roaming.

D.
VoWLANs require lower signal strengths than data-only WLANs.

E.
Increasing the strength of a signal cannot increase its SNR.

Answer: A,C
Explanation:
Section: Considerations for Expanding an Existing Network Explanation

In a Voice over wireless LAN (VoWLAN), high data rate signals require higher signal-to-noise
ratios (SNRs) than low data rate signals. In addition, signals from adjacent cells on nonoverlapping
channels should have an overlap between 15 and 20 percent to ensure smooth roaming. The
sensitivity of an 802.11 radio decreases as the data rate goes up. Thus the separation of valid
802.11 signals from background noise must be greater at higher data rates than at lower data
rates. Otherwise, the 802.11 radio will be unable to distinguish the valid signals from the
surrounding noise. For example, an 802.11 radio might register a 1 Mbps signal at -45 decibel
milliwatts (dBm) with -96 dBm of noise. These values produce an SNR of 51 decibels (dB).
However, if the data rate is increased to 11 Mbps, the radio might register a signal of -63 dBm with
-82 dBm of noise, thereby bringing the SNR to 19 dB. Because the sensitivity of the radio is
diminished at the higher data rate, the radio might not be able to distinguish parts of the signal
from the surrounding noise, which might result in packet loss. Therefore, the optimal cell size is
determined by the configured data rate and the transmitter power of the access point (AP).
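Because dBm values are logarithmic, the SNR figures in the example above are simply the difference between the signal level and the noise floor. A minimal sketch (the function name is hypothetical):

```python
def snr_db(signal_dbm: float, noise_dbm: float) -> float:
    """SNR in dB is the received signal level minus the noise floor,
    both expressed in dBm."""
    return signal_dbm - noise_dbm

# Values from the explanation above:
print(snr_db(-45, -96))  # 1 Mbps example -> 51 dB
print(snr_db(-63, -82))  # 11 Mbps example -> 19 dB
```

Against Cisco's recommended 25 dB minimum for a VoWLAN, the first reading is comfortable while the second falls short.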

Packet loss can also be mitigated by maintaining an overlap between 15 and 20 percent on
nonoverlapping channels for all adjacent cells in a VoWLAN. By providing at least 15 percent
overlap between adjacent cells, a wireless client has a greater chance of completing the roaming
process without incurring too much delay or packet loss. If the overlap is less than 15 percent, the
client might drop its connection with one AP before it has completed associating with the next AP.
This can result in degraded voice quality and disconnected calls.

VoWLANs require higher signal strengths than data-only wireless LANs (WLANs). Data traffic can
tolerate delayed or dropped packets because its associated applications typically do not operate in
real time. If a wireless client breaks its connection with an AP and packets are delayed or lost, the
client can retransmit the missing packets when it reconnects. By contrast, real-time data, such as
voice traffic, is particularly sensitive to delay, variations in delay, and packet loss. If packets are
delayed too long or lost because a client breaks its connection with an AP, the quality of the
client's voice stream is degraded. If there is enough delay or packet loss, the call will be
disconnected by the client device.

Likewise, VoWLANs require higher SNRs than data-only WLANs. A high SNR indicates that a
device can easily distinguish valid wireless signals from the surrounding noise. The greater the
separation between signal and noise, the higher the likelihood that wireless clients will not
experience packet loss due to signal interference. Cisco recommends maintaining a minimum
signal strength of -67 dBm and a minimum SNR of 25 dB throughout the coverage area of a
VoWLAN to help mitigate packet loss.

Increasing the strength of a signal can increase its SNR. By increasing the strength of a
transmitted signal, the difference between the signal and any associated noise can be increased
at the receiving station. A wireless LAN controller (WLC) can be configured to adjust the signal
strength of a lightweight AP (LAP) if it registers a low SNR value from one of the LAP's associated
devices.

Reference:

Cisco: Site Survey Guide: Deploying Cisco 7920 IP Phones: Getting started

QUESTION NO: 105

Which of the following is a circuit-switched WAN technology that offers less than 2 Mbps of
bandwidth?

A.
ATM

B.
Frame Relay

C.
ISDN

D.
SONET

E.
"Pass Any Exam. Any Time." - www.actualtests.com 170
Cisco 200-310 Exam
SMDS

F.
Metro Ethernet

Answer: C
Explanation:
Section: Enterprise Network Design Explanation

Integrated Services Digital Network (ISDN) is a circuit-switched WAN technology that offers less
than 2 Mbps of bandwidth. Circuit-switched WAN technologies rely on dedicated physical paths
between nodes in a network. For example, when RouterA needs to contact RouterB, a dedicated
path is established between the routers and then data is transmitted. While the circuit is
established, RouterA cannot use the WAN link to transmit any data that is not destined for
networks accessible through RouterB. When RouterA no longer has data for RouterB, the circuit is
torn down until it is needed again.

Because circuit-switched links rely on dedicated physical paths, they are considered leased WAN
technologies. Other examples of leased WAN technologies are time division multiplexing (TDM)
and Synchronous Optical Network (SONET).

Metro Ethernet is a WAN technology that is commonly used to connect networks in the same
metropolitan area. However, Metro Ethernet providers typically provide up to 1,000 Mbps of
bandwidth. A company that has multiple branch offices within the same city can use Metro
Ethernet to connect the branch offices to the corporate headquarters.

Packet-switched networks do not rely on dedicated physical paths between nodes in a network. In
a packet-switched network, a node establishes a single physical circuit to a service provider.
Multiple virtual circuits can share this physical circuit, allowing a single device to send data to
several destinations. Because packet-switched links do not rely on dedicated physical paths, they
are considered shared WAN links. Frame Relay, X.25, Multiprotocol Label Switching (MPLS), and
Switched Multimegabit Data Service (SMDS) are examples of packet-switched, shared WAN
technologies.

Asynchronous Transfer Mode (ATM) is a shared WAN technology that transports its payload in a
series of fixed-size 53-byte cells. ATM has the unique ability to transport different types of traffic,
including IP packets, traditional circuit-switched voice, and video, while still maintaining a high
quality of service for delay-sensitive traffic such as voice and video services. Although ATM could
be categorized as a packet-switched WAN technology, it is often listed in its own category as a
cell-switched WAN technology instead.

Reference:

CCDA 200-310 Official Cert Guide, Chapter 6, ISDN, pp. 221-222


"Pass Any Exam. Any Time." - www.actualtests.com 171
Cisco 200-310 Exam
Cisco: Introduction to WAN Technologies: Circuit Switching

Cisco: Asynchronous Transfer Mode Switching: ATM Devices and the Network Environment

QUESTION NO: 106

You administer a router that contains five routes to the same network: a static route, a RIPv2
route, an IGRP route, an OSPF route, and an internal EIGRP route. The default ADs are used.
The link to the static route has just failed.

Which route or routes will be used?

A.
the RIPv2 route

B.
the IGRP route

C.
the OSPF route

D.
the EIGRP route

E.
both the RIPv2 route and the EIGRP route

Answer: D
Explanation:
Section: Addressing and Routing Protocols in an Existing Network Explanation

The Enhanced Interior Gateway Routing Protocol (EIGRP) route is used when the link to the static
route goes down. EIGRP is a Cisco-proprietary routing protocol. When multiple routes to a
network exist and each route uses a different routing protocol, a router prefers the routing protocol
with the lowest administrative distance (AD). The following list contains the most commonly used
ADs:

"Pass Any Exam. Any Time." - www.actualtests.com 172


Cisco 200-310 Exam

In this scenario, the static route has the lowest AD. Therefore, the static route is used instead of
the other routes. When the static route fails, the EIGRP route is preferred, because internal EIGRP
has an AD of 90.

If the EIGRP route were to fail, the Interior Gateway Routing Protocol (IGRP) route would be
preferred, because IGRP has an AD of 100. If the IGRP route were also to fail, the Open Shortest
Path First (OSPF) route would be preferred, because OSPF has an AD of 110. The Routing
Information Protocol version 2 (RIPv2) route would not be used unless all of the other links were to
fail, because RIPv2 has an AD of 120. ADs for a routing protocol can be manually configured by
issuing the distance command in router configuration mode. For example, to change the AD of
OSPF from 110 to 80, you should issue the following commands:

RouterA(config)#router ospf 1

RouterA(config-router)#distance 80
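The AD-based preference described above can be sketched as a simple selection function. The distances come from the defaults cited in this explanation; the function itself is a conceptual illustration, not router code:

```python
# Default administrative distances cited in the explanation above.
DEFAULT_AD = {
    "static": 1,
    "eigrp_internal": 90,
    "igrp": 100,
    "ospf": 110,
    "ripv2": 120,
}

def preferred_route(available_sources):
    """Pick the routing source with the lowest administrative
    distance among those that currently offer a route."""
    return min(available_sources, key=DEFAULT_AD.__getitem__)

# With the static route down, internal EIGRP (AD 90) wins:
print(preferred_route(["ripv2", "igrp", "ospf", "eigrp_internal"]))
# prints "eigrp_internal"
```

Removing sources from the list models successive link failures: drop internal EIGRP and IGRP (AD 100) is chosen, then OSPF (110), then RIPv2 (120).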

You can view the AD of the best route to a network by issuing the show ip route command. The AD
is the first number inside the brackets in the output. For example, the following router output
shows an OSPF route with an AD of 80:

Router#show ip route

Gateway of last resort is 10.19.54.20 to network 10.140.0.0

E2 172.150.0.0 [80/5] via 10.19.54.6, 0:01:00, Ethernet2

"Pass Any Exam. Any Time." - www.actualtests.com 173


Cisco 200-310 Exam
The number 5 in the brackets above is the OSPF metric, which is based on cost. OSPF calculates
cost based on the bandwidth of an interface: the higher the bandwidth, the lower the cost. When
two OSPF paths exist to the same destination, the router will choose the OSPF path with the
lowest cost.

Reference:

CCDA 200-310 Official Cert Guide, Chapter 10, Administrative Distance, pp. 386-387

Cisco: What Is Administrative Distance?

QUESTION NO: 107

Which of the following statements is true regarding physical connections in the Cisco ACI
architecture?

A.
Spine nodes must be fully meshed.

B.
Leaf nodes must be fully meshed.

C.
Each leaf node must connect to each spine node.

D.
Each APIC must connect to each leaf node.

Answer: C
Explanation:
Section: Enterprise Network Design Explanation

In the Cisco Application Centric Infrastructure (ACI), each leaf node must connect to each spine
node. Cisco ACI is a data center technology that uses switches, categorized as spine and leaf
nodes, to dynamically implement network application policies in response to application-level
requirements. Network application policies are defined on a Cisco Application Policy Infrastructure
Controller (APIC) and are implemented by the spine and leaf nodes.

The spine and leaf nodes create a scalable network fabric that is optimized for east-west data
transfer, which in a data center is typically traffic between an application server and its supporting
data services, such as database or file servers. Each spine node requires a connection to each
leaf node; however, spine nodes do not interconnect, nor do leaf nodes interconnect. Despite its
lack of fully meshed connections, this physical topology enables nonlocal traffic to pass from any
ingress leaf interface to any egress leaf interface through a single, dynamically selected spine
node. By contrast, local traffic is passed directly from an ingress interface on a leaf node to the
appropriate egress interface on the same leaf node.

Because a spine node has a connection to every leaf node, the scalability of the fabric is limited by
the number of ports on the spine node, not by the number of ports on the leaf node. In addition,
redundant connections between a spine and leaf pair are unnecessary because the nature of the
topology ensures that each leaf has multiple connections to the network fabric. Therefore, each
spine node requires only a single connection to each leaf node.
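Because every leaf connects to every spine and nothing else interconnects, the fabric's physical link count is just the product of the two node counts. A small sketch of that relationship (the function is purely illustrative):

```python
def aci_fabric_links(spines: int, leaves: int) -> int:
    """Each leaf node has exactly one link to each spine node;
    spines do not interconnect and leaves do not interconnect,
    so total fabric links = spines x leaves."""
    return spines * leaves

print(aci_fabric_links(2, 4))  # 2 spines x 4 leaves = 8 links
```

The formula also shows why fabric scale is bounded by spine port count: adding a leaf consumes one port on every spine.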

Redundancy is also provided by the presence of multiple APICs, which are typically deployed as a
cluster of three controllers. APICs are not directly involved in forwarding traffic and are therefore
not required to connect to every spine or leaf node. Instead, the APIC cluster is connected to one
or more leaf nodes in much the same manner that other endpoint groups (EPGs), such as
application servers, are connected.

Reference:

CCDA 200-310 Official Cert Guide, Chapter 4, ACI, p. 135

Cisco: Application Centric Infrastructure Overview: Implement a Robust Transport Network for
Dynamic Workloads

QUESTION NO: 108

Which of the following are not supported by GET VPN? (Choose two.)

A.
centralized key management

B.
dynamic NAT

C.
voice traffic

D.
static NAT

E.
native multicast traffic
"Pass Any Exam. Any Time." - www.actualtests.com 175
Cisco 200-310 Exam
Answer: B,D
Explanation:
Section: Enterprise Network Design Explanation

Group Encrypted Transport (GET) virtual private network (VPN) supports neither static nor
dynamic Network Address Translation (NAT). GET VPN is a Cisco-proprietary technology that
provides tunnel-less, end-to-end security for both unicast and multicast traffic. GET VPN uses IP
Security (IPSec) tunnel mode with address preservation to preserve the inner IP header of each
encrypted packet; the IP source address and various IP header fields are unaffected by the
encryption process. Because NAT changes information in the IP header, such as the IP source
address, NAT is not supported by GET VPN and must be performed either before a packet is
encrypted or after a packet is decrypted. Cisco recommends GET VPN for environments needing
highly scalable, any-to-any encrypted connectivity for unicast and multicast traffic, such as a large
financial network using a Multiprotocol Label Switching (MPLS) WAN.

In a GET VPN, trusted group member routers receive security policy and authentication keys from
a central key server. Although group member routers obtain keying information from a central key
server, the key server is not involved in the flow of traffic as in a hub-and-spoke design. Instead,
group member routers can use the keying information from the key server to dynamically form
direct connections with one another for data transmission. This enables group member routers to
form security associations with sufficient speed to minimize transmission delay and to support the
Quality of Service (QoS) levels necessary for voice traffic.

Reference:

CCDA 200-310 Official Cert Guide, Chapter 7, GETVPN, pp. 258-259

Cisco: Cisco Group Encrypted Transport VPN

QUESTION NO: 109

View the Exhibit.

"Pass Any Exam. Any Time." - www.actualtests.com 176


Cisco 200-310 Exam

Refer to the exhibit above. The Layer 3 switch on the left, DSW1, is the root bridge for all VLANs
in the topology. Devices on VLAN 10 use DSW1 as a default gateway. Devices on VLAN 20 use
the Layer 3 switch on the right, DSW2, as a default gateway. A device that is operating in VLAN
20 and is connected to ASW3 transmits a packet that is destined beyond Router1.

What path will the packet most likely take through the network?

A.
ASW3 > DSW2 > Router1

B.
ASW3 > DSW1 > Router1

C.
ASW3 > DSW2 > DSW1 > Router1

D.
ASW3 > DSW1 > DSW2 > Router1

Answer: D
Explanation:
Section: Enterprise Network Design Explanation

Most likely, the packet will travel from ASW3 to DSW1, to DSW2, and then to Router1. Because all
of the virtual LANs (VLANs) use DSW1 as the root bridge in this scenario, all traffic from the
access layer switches, regardless of VLAN, flows first to DSW1. Traffic from VLAN 10 is therefore
already optimized because VLAN 10 uses DSW1 as its default gateway. However, VLAN 20 uses
DSW2 as its default gateway. Therefore, traffic from VLAN 20 will most likely flow first to DSW1
and then across the PortChannel 1 EtherChannel interface to DSW2 for forwarding.

In this scenario, if you were to configure a separate spanning tree to be established for each
VLAN, the location of the root switch could be optimized on a per-VLAN basis. For example,
configuring DSW2 as the preferred root bridge for devices that operate on VLAN 20 would cause
VLAN 20 traffic from both ASW1 and ASW3 to flow directly to DSW2 for forwarding to Router1.
VLAN 10 traffic would remain optimized to flow directly to DSW1 from ASW1, ASW2, or ASW3.

Reference:

CCDA 200-310 Official Cert Guide, Chapter 3, STP Design Considerations, pp. 101-103

Cisco: InterSwitch Link and IEEE 802.1Q Frame Format: Background Theory

Cisco: Catalyst 3750X and 3560X Switch Software Configuration Guide, Release 12.2(55)SE:
Configuring the Switch Priority of a VLAN

QUESTION NO: 110

Which of the following address blocks is typically used for IPv4 link-local addressing?

A.
192.168.0.0/16

B.
172.16.0.0/12

C.
169.254.0.0/16

D.
10.0.0.0/8

E.
127.0.0.0/8

Answer: C
Explanation:
Section: Addressing and Routing Protocols in an Existing Network Explanation

Of the available choices, only the 169.254.0.0/16 address block is typically used for IP version 4
"Pass Any Exam. Any Time." - www.actualtests.com 178
Cisco 200-310 Exam
(IPv4) link-local addressing. The IP addresses in the 169.254.0.0/16 address block, which includes
the IP addresses from 169.254.0.0 through 169.254.255.255, are defined by Request for
Comments (RFC) 3927. This address block is reserved for the dynamic configuration of IPv4 link-
local addresses. On Microsoft Windows computers, addresses in these ranges are known as
Automatic Private IP Addressing (APIPA) addresses.

Addresses in the 192.168.0.0/16, 172.16.0.0/12, and 10.0.0.0/8 ranges are private IP addresses
that are defined by RFC 1918. The following are the valid IP address blocks in each of the classes
available for commercial use as defined by RFC 1918:

Class A - 10.0.0.0 through 10.255.255.255, or 10.0.0.0/8

Class B - 172.16.0.0 through 172.31.255.255, or 172.16.0.0/12

Class C - 192.168.0.0 through 192.168.255.255, or 192.168.0.0/16

The 127.0.0.0/8 IP address block is a special-use IPv4 address block that is defined by the
Internet Engineering Task Force (IETF) in RFC 1122 and in RFC 6890, which obsoletes RFC
5735. The 127.0.0.1/32 IP address is typically used as a loopback address for devices on a
network.
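The reserved ranges above can be checked with Python's standard ipaddress module, which classifies 169.254.0.0/16 addresses as link-local and the RFC 1918 blocks as private. A quick sketch:

```python
import ipaddress

# One sample address from each block discussed above.
for addr in ("169.254.10.1", "192.168.1.1", "10.0.0.5", "127.0.0.1"):
    ip = ipaddress.ip_address(addr)
    print(addr, "link-local:", ip.is_link_local,
          "loopback:", ip.is_loopback)
```

Only 169.254.10.1 reports as link-local, and only 127.0.0.1 reports as loopback, matching the RFC assignments described in the explanation.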

Reference:

IETF: RFC 3927: Dynamic Configuration of IPv4 Link-Local Addresses

QUESTION NO: 111

Which of the following protocols can provide Application layer management information?

A.
RMON

B.
RMON2

C.
SNMPv1

D.
SNMPv2

E.
SNMPv3

Answer: B
"Pass Any Exam. Any Time." - www.actualtests.com 179
Cisco 200-310 Exam
Explanation:
Section: Design Methodologies Explanation

Remote Monitoring version 2 (RMON2) can provide Open Systems Interconnection (OSI)
Application layer management information. RMON2 builds on the framework of Simple Network
Management Protocol (SNMP) and extends the Management Information Base (MIB) to provide
network flow statistics. The statistics that RMON2 provides are divided into groups based on the
type of information they contain. For example, RMON2 groups contain information about Network
layer address mappings, Application layer traffic statistics, and per-protocol traffic distribution. In
addition, RMON2 provides a managed device with the ability to locally store historical data that
can then be used to analyze trends in network utilization and to determine whether a managed
device requires optimization. By contrast, the Cisco NetFlow feature can provide similar data for
analysis; however, very little NetFlow data is typically stored locally. Instead, NetFlow data is
typically exported to a collector where it can be analyzed to determine whether a managed device
requires optimization.

Remote Monitoring version 1, commonly referred to as RMON, provides Physical and Data Link
layer management information. Like RMON2, RMON divides the management data it provides into
distinct groups; however, RMON's groups contain information about the physical network, such as
Ethernet interface statistics, host addresses based on Media Access Control (MAC) addresses,
and Data Link layer traffic statistics. RMON information can also be maintained on the managed
device to provide historical data. Although RMON data is limited to only Physical and Data Link
layer information, it can still be a valuable resource for determining whether a managed device
requires optimization.

Simple Network Management Protocol (SNMP) provides a framework for obtaining basic
information about a managed device. Like RMON and RMON2, SNMP uses the MIB to store
information about a managed device; however, SNMP does not have the capability to locally store
historical data. Therefore, SNMP requires a network management station (NMS) to periodically
poll a managed device to accumulate historical data that can then be used to determine whether the
managed device requires optimization. Three versions of SNMP currently exist: SNMP version 1
(SNMPv1), SNMPv2, and SNMPv3. SNMPv1 and SNMPv2 do not provide authentication,
encryption, or message integrity. Thus, access to management information is based on a simple
password known as a community string; the password is sent as plain text with each SNMP
message. If an attacker intercepts a message, the attacker can view the password information.
SNMPv3 improves upon SNMPv1 and SNMPv2 by providing encryption, authentication, and
message integrity to ensure that the messages are not viewed or tampered with during
transmission.
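The polling behavior described above can be sketched in a few lines of Python. This is a minimal illustration, not real SNMP: the `snmp_get_ifInOctets` stub is hypothetical and stands in for an actual SNMP GET of an interface counter OID.

```python
import time
from collections import defaultdict

def snmp_get_ifInOctets(device):
    """Stub for an SNMP GET of the ifInOctets counter (hypothetical)."""
    return {"r1": 1000}[device]

history = defaultdict(list)

def poll(devices, interval=60, samples=3):
    # Because SNMPv1/v2 agents keep no history, the NMS must poll
    # periodically and accumulate the samples itself (RMON2, by
    # contrast, stores history on the managed device).
    for _ in range(samples):
        for dev in devices:
            history[dev].append((time.time(), snmp_get_ifInOctets(dev)))
        time.sleep(interval)
```

The accumulated `(timestamp, value)` pairs are what the NMS would later analyze for utilization trends.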

Reference:

CCDA 200-310 Official Cert Guide, Chapter 15, RMON, pp. 624-626

IETF: RFC 2021: Remote Network Monitoring Management Information Base Version 2 using
SMIv2: 2. Overview

IETF: RFC 3577: Introduction to the Remote Monitoring (RMON) Family of MIB Modules: 4.
RMON Documents

QUESTION NO: 112

Which of the following is not true regarding the MPLS WAN deployment model for branch
connectivity?

A.
It provides the highest SLA guarantees for QoS capabilities.

B.
It provides the highest SLA guarantees for network availability.

C.
It is the most expensive deployment model.

D.
It supports only dual-router configurations.

Answer: D
Explanation:
Section: Enterprise Network Design Explanation


The Multiprotocol Label Switching (MPLS) WAN deployment model for branch connectivity
supports both single-router and dual-router configurations. Cisco defines three general
deployment models for branch connectivity:

The MPLS WAN deployment model can use a single-router configuration with connections to
multiple MPLS service providers or a dual-router configuration where each router has a connection
to one or more MPLS service providers. Service provider diversity ensures that an outage at the
service provider level will not cause an interruption of service at the branch. The MPLS WAN
deployment model can provide service-level agreement (SLA) guarantees for Quality of Service
(QoS) and network availability through service-provider provisioning and routing protocol
optimization. Although using multiple MPLS services provides increased network resilience and
bandwidth, it also increases the complexity and cost of the deployment when compared to other
deployment models.

The Hybrid WAN deployment model can use a single or dual-router configuration and relies on an
MPLS service provider for its primary WAN connection and on an Internet-based virtual private
network (VPN) connection as a backup circuit. Unlike the MPLS WAN deployment model, the
Hybrid WAN deployment model cannot ensure QoS capabilities for traffic that does not pass to the
MPLS service provider. In the Hybrid WAN deployment model, low-priority traffic is often routed
through the lower cost Internet VPN circuit, which can reduce the bandwidth requirements for the
MPLS circuit, further lowering the overall cost without sacrificing network resilience.

The Internet WAN deployment model can use a single or dual-router configuration and relies on
an Internet-based VPN solution for primary and backup circuits. Internet service provider (ISP)
diversity ensures that carrier level outages do not affect connectivity between the branch and the
central site. Because the Internet WAN deployment model uses the public Internet, its QoS
capabilities are limited. However, the Internet WAN deployment model is the most cost effective of
the three models defined by Cisco.

Reference:

CCDA 200-310 Official Cert Guide, Chapter 7, Branch Connectivity, p. 271

QUESTION NO: 113

Which of the following queuing methods provides bandwidth and delay guarantees?

A.
FIFO

B.
LLQ

C.
WFQ

D.
CBWFQ

Answer: B
Explanation:
Section: Considerations for Expanding an Existing Network Explanation

Low-latency queuing (LLQ) provides bandwidth and delay guarantees through the creation of one
or more strict-priority queues that can be used specifically for delay-sensitive traffic, such as voice
and video traffic. In addition, LLQ supports the creation of up to 64 user-defined traffic classes.
Each strict-priority queue can use as much bandwidth as possible but can only use its guaranteed
minimum bandwidth when other queues have traffic to send, thereby avoiding bandwidth
starvation for the user-defined queues. Cisco recommends limiting the strict-priority queues to a
total of 33 percent of the link capacity.

Class-based weighted fair queuing (CBWFQ) provides bandwidth guarantees, so it can be used
for voice, video, and mission-critical traffic. However, CBWFQ does not provide the delay
guarantees provided by LLQ, because CBWFQ does not provide support for strict-priority queues.
CBWFQ improves upon weighted fair queuing (WFQ) by enabling the creation of up to 64 custom
traffic classes, each with a guaranteed minimum bandwidth.

Although WFQ can be used for voice, video, and mission-critical traffic, it does not provide the
bandwidth or delay guarantees provided by LLQ, because WFQ does not support the creation of
strict-priority queues. Traffic flows are identified by WFQ based on source and destination IP
address, port number, protocol number, and Type of Service (ToS). Although WFQ is easy to
configure, it is not supported on high-speed links. WFQ is used by default on Cisco routers for
serial interfaces at 2.048 Mbps or lower.

First-in-first-out (FIFO) queuing does not provide any traffic guarantees of any sort. FIFO queuing
requires no configuration, because all packets are arranged into a single queue. As the name
implies, the first packet received is the first packet transmitted, without regard for packet type,
protocol, or priority. Therefore, FIFO queuing is not appropriate for voice, video, or mission-critical
traffic. By default, Cisco uses FIFO queuing for interfaces faster than 2.048 Mbps.
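The queuing behaviors compared above can be illustrated with a toy scheduler: a strict-priority queue that is always drained first (the LLQ delay guarantee) plus weighted class queues (the CBWFQ-style bandwidth guarantee). This is a conceptual sketch only, not Cisco's implementation; real LLQ also polices the priority queue during congestion.

```python
from collections import deque

class LLQScheduler:
    """Toy LLQ model: the strict-priority queue is always served first;
    remaining class queues are served by a simplified weighted rule."""

    def __init__(self, class_weights):
        self.priority = deque()                       # delay-sensitive traffic
        self.classes = {c: deque() for c in class_weights}
        self.weights = class_weights

    def enqueue(self, pkt, cls=None):
        queue = self.priority if cls is None else self.classes[cls]
        queue.append(pkt)

    def dequeue(self):
        if self.priority:             # voice/video never waits behind data
            return self.priority.popleft()
        backlogged = [c for c in self.classes if self.classes[c]]
        if not backlogged:
            return None
        # crude stand-in for weighted fair service among user-defined classes
        best = max(backlogged, key=lambda c: self.weights[c] * len(self.classes[c]))
        return self.classes[best].popleft()
```

For example, with classes `{"gold": 3, "best_effort": 1}`, a voice packet enqueued after two data packets is still dequeued first, which is the delay guarantee that distinguishes LLQ from CBWFQ.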

Reference:

CCDA 200-310 Official Cert Guide, Chapter 6, Low-Latency Queuing, p. 235

Cisco: Enterprise QoS Solution Reference Network Design Guide: Queuing and Dropping
Principles

Cisco: Signalling Overview: RSVP Support for Low Latency Queuing

QUESTION NO: 114 DRAG DROP

Select the processes from the left, and place them in the appropriate corresponding Cisco PBM
Design Lifecycle phase column on the right. All processes will be used.


Answer:

Explanation:

Section: Design Objectives Explanation

The Cisco Plan, Build, Manage (PBM) Design Lifecycle is a newer methodology designed to
streamline the concepts from Cisco's older design philosophy: the Prepare, Plan, Design,
Implement, Operate, and Optimize (PPDIOO) Design Lifecycle. As the name implies, the PBM
Design Lifecycle is divided into three distinct phases: Plan, Build, and Manage.

The Plan phase of the PBM Design Lifecycle consists of the following three processes:

The purpose of the strategy and analysis process is to generate proposed improvements to an
existing network infrastructure with the overall goal of increasing an organization's return on
investment (ROI) from the network and its support staff. The assessment process then examines
the proposed improvements from the strategy and analysis process and determines whether the
improvements comply with organizational goals and industry best practices. In addition, the
assessment process identifies potential deficiencies that infrastructure changes might cause in
operational and support facilities. Finally, the design process produces a network design that
meets current organizational objectives while maintaining resiliency and scalability.

The Build phase of the PBM Design Lifecycle consists of the following three processes:

The purpose of the validation process is to implement the infrastructure changes outlined in the
design process of the Plan phase and to verify that the implementation meets the organizational
needs as specified by the network design. The validation process implements the network design
in a controlled environment such as in a lab or staging environment. Once the network design has
been validated, the purpose of the deployment process is to implement the network design in a
full-scale production environment. Finally, the purpose of the migration process is to incrementally
transition users, devices, and services to the new infrastructure as necessary.

The Manage phase of the PBM Design Lifecycle consists of the following four processes:

The product support process addresses support for specific hardware, software, or network
products. Cisco Smart Net is an example of a component of the product support process. By
contrast, solution support is focused on the solutions that hardware, software, and network
products provide for an organization. Cisco Solution Support is the primary component of the
solution support process. Cisco Solution Support serves as the primary point of contact for Cisco
solutions, leverages solution-focused expertise, coordinates between multiple vendors for complex
solutions, and manages each case from inception to resolution. The optimization process is
concerned with improving the performance, availability, and resiliency of a network
implementation. It also addresses foreseeable changes and upgrades, which reduces operating
costs, mitigates risk, and improves ROI. The operations management process addresses the
ongoing management of the network infrastructure. It includes managed solutions for
collaboration, data center, security, and general network services.

Reference:

CCDA 200-310 Official Cert Guide, Chapter 1, Cisco Design Lifecycle: Plan, Build, Manage, pp. 9-12

Cisco: Services: Portfolio


QUESTION NO: 115

In which of the following situations would eBGP be the most appropriate routing protocol?

A.
when the router has a single link to a router within the same AS

B.
when the router has redundant links to a router within the same AS

C.
when the router has a single link to a router within a different AS

D.
when the router has redundant links to a router within a different AS

Answer: D
Explanation:
Section: Addressing and Routing Protocols in an Existing Network Explanation

External Border Gateway Protocol (eBGP) would be the most appropriate routing protocol for a
router that has redundant links to a router within a different autonomous system (AS). An AS is
defined as the collection of all areas that are managed by a single organization. Routing protocols
that dynamically share routing information within an AS are called interior gateway protocols
(IGPs), and routing protocols that dynamically share routing information between multiple ASes
are called exterior gateway protocols (EGPs). Border Gateway Protocol (BGP) routers within the
same AS communicate by using internal BGP (iBGP), and BGP routers in different ASes
communicate by using eBGP. BGP is typically used to exchange routing information between
ASes, between a company and an Internet service provider (ISP), or between ISPs.

Static routing, not BGP, would be the most appropriate routing method for a router that has a
single link to a router within a different AS. Because BGP can be complicated to configure and can
use large amounts of processor and memory resources, static routing is recommended if dynamic
routing information does not need to be exchanged between routers that reside in different ASes.
For example, if you connect a router to the Internet through a single ISP, it is not necessary for the
router to run BGP, because the router will use a single, static default route to the ISP for all traffic
that is not destined to the internal network.

An IGP would be the most appropriate routing protocol for a router that has a single link or
redundant links to a router within the same AS. Enhanced Interior Gateway Routing Protocol
(EIGRP), Open Shortest Path First (OSPF), and Routing Information Protocol (RIP) are examples
of IGPs.

Reference:

CCDA 200-310 Official Cert Guide, Chapter 11, BGP Neighbors, pp. 444-446

Cisco: Sample Configuration for iBGP and eBGP With or Without a Loopback Address:
Introduction

QUESTION NO: 116

In which of the following layer or layers should you implement QoS?

A.
in only the core layer

B.
in only the distribution layer

C.
in only the access layer

D.
in only the core and distribution layers

E.
in only the access and distribution layers

F.
in the core, distribution, and access layers

Answer: F
Explanation:
Section: Considerations for Expanding an Existing Network Explanation

You should implement Quality of Service (QoS) in the core, distribution, and access layers. A
network can become congested due to the aggregation of multiple links or a drop in bandwidth
from one link to another. When many packets are sent on a congested network, a delay in
transmission time can occur. Lack of bandwidth, end-to-end delay, jitter, and packet loss can be
mitigated by implementing QoS. QoS facilitates the optimization of network bandwidth by
prioritizing network traffic based on its type. Prioritizing packets enables time-sensitive traffic, such
as voice traffic, to be sent before other packets. Packets are queued based on traffic type, and
packets with a higher priority are sent before packets with a lower priority.

Because the access layer provides direct connectivity to network endpoints, QoS classification
and marking are typically performed in the access layer. Cisco recommends classifying and
marking packets as close to the source of traffic as possible and using hardware-based QoS
functions whenever possible. Although classification and marking are typically performed in the
access layer, QoS mechanisms must be implemented in each of the higher layers for QoS to be
effective.

Reference:

CCDA 200-310 Official Cert Guide, Chapter 3, Campus LAN QoS Considerations, pp. 111-112

Cisco: Campus Network for High Availability Design Guide: General Design Considerations

QUESTION NO: 117 DRAG DROP

Select the attributes from the left, and place them under the corresponding Layer 2 access design
on the right. Attributes can be selected more than once, and some attributes might not be used.

Answer:


Explanation:

Section: Enterprise Network Design Explanation

Loop-free inverted U designs support all service module installations, have all uplinks active, and
support virtual LAN (VLAN) extensions. A service module is a piece of hardware that extends the
functionality of a Cisco device, for example, the Secure Sockets Layer (SSL) Service Module for
Catalyst 6500 series switches and Cisco 7600 series routers performs the majority of the CPU-
intensive SSL processing so that the switch's processor or router's processor is not burdened by
large numbers of SSL connections. Loop-free inverted U designs offer redundancy at the
aggregation layer, not the access layer; therefore, traffic will black-hole upon failure of an access
switch uplink. All uplinks are active with no looping, thus there is no Spanning Tree Protocol (STP)
blocking by default. However, STP is still essential so that redundant paths that might be created
by any inadvertent errors in cabling or configuration are blocked.

Loop-free U designs do not support VLAN extensions, have all uplinks active, and support all
service module implementations. Loop-free U designs offer a redundant link between access layer
switches as well as a redundant link at the aggregation layer. Because of the redundant path in
both layers, extending a VLAN beyond an individual access layer pair would create a loop. Like
loop-free inverted U designs, loop-free U designs also run STP and have issues with traffic being
black-holed upon failure of an access switch uplink.

Flex Link designs have a single active uplink, support VLAN extensions and all service modules,
and disable STP by default. There are no loops in a Flex Link design, and STP is disabled when a
device is configured to participate in a Flex Link. Interface uplinks in this topology are configured in
active/standby pairs, and each device can only belong to a single Flex Link pair. In the event of an
uplink failure, the standby link becomes active and takes over, thereby offering redundancy when
an access layer uplink fails. Possible disadvantages of the Flex Link design include its increased
convergence time over other designs and its inability to run STP in order to block redundant paths
that might be created by inadvertent errors in cabling or configuration.

Reference:

Cisco: Data Center Access Layer Design

QUESTION NO: 118

Which of the following are most likely to be provided by a collapsed core? (Choose four.)

A.
Layer 2 aggregation

B.
high-speed physical and logical paths

C.
intelligent network services

D.
end user, group, and endpoint isolation

E.
routing and network access policies

Answer: A,B,C,E
Explanation:
Section: Enterprise Network Design Explanation

Layer 2 aggregation, high-speed physical and logical paths, intelligent network services, and
routing and network access policies are typically provided by the core and distribution layers. A
collapsed core is a two-tier variation of the three-tier hierarchical design in which the core and
distribution layers have been combined. The hierarchical model divides the network into three
distinct components:

The core layer typically provides the fastest switching path in the network. As the network
backbone, the core layer is primarily associated with low latency and high reliability. The
functionality of the core layer can be collapsed into the distribution layer if the distribution layer
infrastructure is sufficient to meet the design requirements. It is Cisco best practice to ensure that
a collapsed core design can meet resource utilization requirements for the network.

The distribution layer serves as an aggregation point for access layer network links. Because the
distribution layer is the intermediary between the access layer and the core layer, the distribution
layer is the ideal place to enforce security policies, to provide Quality of Service (QoS), and to
perform tasks that involve packet manipulation, such as routing. Summarization and next-hop
redundancy are also performed in the distribution layer.

The access layer provides Network Admission Control (NAC). NAC is a Cisco feature that
prevents hosts from accessing the network if they do not comply with organizational requirements,
such as having an updated antivirus definition file. NAC Profiler automates NAC by automatically
discovering and inventorying devices attached to the LAN. The access layer serves as a media
termination point for endpoints, such as servers and hosts. Because access layer devices provide
access to the network, the access layer is the ideal place to perform user authentication.

End user, group, and endpoint isolation is not typically required of a collapsed core layer in a
three-tier hierarchical network design. That function is typically provided by the devices in the
access layer.

Reference:

CCDA 200-310 Official Cert Guide, Chapter 2, Collapsed Core Design, p. 49

Cisco: Small Enterprise Design Profile Reference Guide: Collapsed Core Network Design

QUESTION NO: 119

Which of the following are recommended campus network design practices? (Choose two.)

A.
use a redundant triangle topology

B.

use a redundant square topology

C.
avoid equal-cost links between redundant devices

D.
summarize routes from the distribution layer to the core layer

E.
create routing protocol peer relationships on all links

Answer: A,D
Explanation:
Section: Addressing and Routing Protocols in an Existing Network Explanation

When designing a campus network, Cisco recommends that you use a redundant triangle
topology and summarize routes from the distribution layer to the core layer. In a redundant triangle
topology, each core layer device has direct paths to redundant distribution layer devices, as shown
in the diagram below:

This topology ensures that a link or device failure in the distribution layer can be detected
immediately in hardware. Otherwise, a core layer device could detect only link or device failures
through a software-based mechanism such as expired routing protocol timers. Additionally, the
use of equal-cost redundant links enables a core layer device to enter both paths into its routing
table. Because both equal-cost paths are active in the routing table, the core layer device can
perform load balancing between the paths when both paths are up. When one of the equal-cost
redundant links fails, the routing protocol does not need to reconverge, because the remaining
redundant link is still active in the routing table. Thus traffic flows can be immediately rerouted
around the failed link or device.

You should summarize routes from the distribution layer to the core layer. With route
summarization, contiguous network addresses are advertised as a single network. This process
enables the distribution layer devices to limit the number of routing advertisements that are sent to
the core layer devices. Because fewer advertisements are sent, the routing tables of core layer
devices are kept small and access layer topology changes are not advertised into the core layer.
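The summarization described above (advertising contiguous networks as a single prefix) can be sketched with Python's standard `ipaddress` module; `summarize` is a hypothetical helper, not a tool from any Cisco product, and note that the computed summary may cover addresses outside the input networks.

```python
import ipaddress

def summarize(networks):
    """Smallest single prefix that covers all given networks (sketch)."""
    nets = [ipaddress.ip_network(n) for n in networks]
    first = min(int(n.network_address) for n in nets)
    last = max(int(n.broadcast_address) for n in nets)
    for prefix in range(32, -1, -1):   # try the longest prefix first
        candidate = ipaddress.ip_network((first, prefix), strict=False)
        if int(candidate.broadcast_address) >= last:
            return candidate

print(summarize(["10.1.0.0/24", "10.1.1.0/24", "10.1.2.0/24", "10.1.3.0/24"]))
# 10.1.0.0/22 -- four advertisements collapse into one
```

A distribution switch advertising only 10.1.0.0/22 toward the core keeps the core routing tables small and hides access layer topology changes, exactly as described above.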

Cisco does not recommend that you use a redundant square topology. In a redundant square
topology, not every core layer device has redundant direct paths to distribution layer devices, as
shown below:

Because a redundant square topology does not provide a core layer device with redundant direct
paths to the distribution layer, the device will enter only the path with the lowest cost into its routing
table. If the lowest cost path fails, the routing protocol must converge in order to select an
alternate path from the remaining available paths. No traffic can be forwarded around the failed
link or device until the routing protocol converges.

You should create routing protocol peer relationships on only the transit links of Layer 3 devices. A
transit link is a link that directly connects two or more Layer 3 devices, such as a multilayer switch
or a router. By default, a Layer 3 device sends routing protocol updates out of every Layer 3
interface that participates in the routing protocol. These routing updates can cause unnecessary
network overhead on devices that directly connect to a large number of networks, such as
distribution layer switches. Therefore, Cisco recommends filtering routing protocol updates from
interfaces that are not directly connected to Layer 3 devices.

Reference:

Cisco: Campus Network for High Availability Design Guide: Using Triangle Topologies

QUESTION NO: 120

The IP address 169.254.173.233 is an example of which of the following types of IP addresses?

A.
a Class A address

B.
a public address

C.
a DHCP address

D.
an APIPA address

Answer: D
Explanation:
Section: Addressing and Routing Protocols in an Existing Network Explanation

The IP address 169.254.173.233 is an example of an Automatic Private IP Addressing (APIPA)
address. On networks that utilize IP, each computer requires a unique IP address in order to
access network resources. If an APIPA-capable computer, which must be running Windows 2000
or later, is configured to use Dynamic Host Configuration Protocol (DHCP) and is unable to obtain
an IP address from a DHCP server, it will assign itself an APIPA address. An APIPA IP address is
in the range of 169.254.0.0 to 169.254.255.255.

The computer with this address will most likely not be able to access other computers on the
network unless those computers are also using APIPA addresses. A computer that has an APIPA
address continually checks the network for a DHCP server. When a DHCP server becomes
available, the computer releases its APIPA address and leases an IP address from the DHCP
server.
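The address ranges discussed in this explanation can be checked with Python's standard `ipaddress` module. The `classify` helper below is illustrative only; link-local is tested before the private ranges because Python also counts 169.254.0.0/16 among the non-public networks.

```python
import ipaddress

def classify(ip: str) -> str:
    """Rough classification of an IPv4 address (illustrative helper)."""
    addr = ipaddress.ip_address(ip)
    if addr.is_link_local:      # 169.254.0.0/16 -- APIPA (RFC 3927)
        return "APIPA/link-local"
    if addr.is_private:         # RFC 1918 private ranges
        return "private"
    if addr.is_multicast:       # Class D, 224.0.0.0/4
        return "multicast"
    return "public"

print(classify("169.254.173.233"))  # APIPA/link-local
print(classify("10.0.0.1"))         # private
print(classify("8.8.8.8"))          # public
```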

IP version 4 (IPv4) addresses are 32-bit (four-byte) addresses, typically written in dotted-decimal
format, where each byte is written as a decimal value from 0 to 255 and separated by dots. All
IPv4 addresses fall into one of several classes. Class A IP addresses range from 1.0.0.0 through
126.255.255.255, Class B IP addresses range from 128.0.0.0 through 191.255.255.255, and
Class C addresses range from 192.0.0.0 through 223.255.255.255. Two other classes of IP
addresses exist: Class D and Class E. Class D addresses are reserved for multicast use, and
Class E addresses are reserved for experimental use.

Neither Class D addresses nor Class E addresses can be used on the Internet. The table below
shows the classes of IPv4 addresses and their ranges:

Class A: 1.0.0.0 through 126.255.255.255
Class B: 128.0.0.0 through 191.255.255.255
Class C: 192.0.0.0 through 223.255.255.255
Class D: 224.0.0.0 through 239.255.255.255 (multicast)
Class E: 240.0.0.0 through 255.255.255.255 (experimental)


IPv4 addresses can be either public or private. A public IP address is an address that has been
assigned by the Internet Assigned Numbers Authority (IANA) for use on the Internet. IANA has
also designated several ranges of IPv4 addresses for use on internal private networks that will not
directly connect to the Internet.

The table below shows the IPv4 addresses that IANA designated for private use:

10.0.0.0 through 10.255.255.255 (10.0.0.0/8)
172.16.0.0 through 172.31.255.255 (172.16.0.0/12)
192.168.0.0 through 192.168.255.255 (192.168.0.0/16)

Reference:

CCDA 200-310 Official Cert Guide, Chapter 8, IPv4 Private Addresses, pp. 299-300

CCDA 200-310 Official Cert Guide, Chapter 8, NAT, pp. 300-302

QUESTION NO: 121

View the Exhibit.


Refer to the exhibit. Which of the following statements are true about the deployment of the IPS in
the exhibit? (Choose two.)

A.
It increases response latency.

B.
It decreases the risk of successful attacks.

C.
It can directly block all communication from an attacking host.

D.
It can reset TCP connections.

E.
It does not require RSPAN on switch ports.

Answer: B,D
Explanation:
Section: Considerations for Expanding an Existing Network Explanation

When Cisco Intrusion Prevention System (IPS) is configured in promiscuous mode, IPS response
latency is increased, thereby increasing the risk of a successful attack. In addition, IPS in
promiscuous mode supports the Reset TCP connection action, which mitigates Transmission
Control Protocol (TCP) attacks by resetting TCP connections.

Promiscuous mode, which is also referred to as monitor-only operation, enables an IPS to
passively examine network traffic without impacting the original flow of traffic. This passive
connection enables the IPS to have the most visibility into the networks on the switch to which it is
connected. However, promiscuous mode operation increases response latency and increases the
risk of successful attacks because copies of traffic are forwarded to IPS for analysis instead of
flowing through IPS directly, thereby increasing the amount of time IPS takes to determine
whether a network attack is in progress. This increased response latency means that an attack
has a greater chance at success prior to detection than it would if the IPS were deployed inline
with network traffic.

Remote Switched Port Analyzer (RSPAN) must be enabled on switch ports so that IPS can
analyze the traffic on those ports. RSPAN enables the monitoring of traffic on a network by
capturing and sending traffic from a source port on one device to a destination port on a different
device on a nonrouted network.

IPS in promiscuous mode supports three actions to mitigate attacks: Request block host, Request
block connection, and Reset TCP connection. The Request block host action causes IPS to send
a request to the Attack Response Controller (ARC) to block all communication from the attacking
host for a given period of time. The Request block connection action causes IPS to send a request
to the ARC to block the specific connection from the attacking host for a given period of time. The
Reset TCP connection action clears TCP resources so that normal TCP network activity can be
established. However, resetting TCP connections is effective only for TCP-based attacks and
against only some types of those attacks.

IPS in promiscuous mode does not directly block all communication from an attacking host. In
promiscuous mode, IPS can send a request to block the host to the ARC but does not directly
block the host. One advantage of sending block requests to the ARC is that attacking hosts can be
blocked from multiple locations within the network. IPS can directly deny all communication from
an attacking host when operating in inline mode by using the Deny attacker inline action.

Reference:

CCDA 200-310 Official Cert Guide, Chapter 13, IPS/IDS Fundamentals, pp. 534-535

Cisco: Cisco IPS Mitigation Capabilities: Promiscuous Mode Event Actions

QUESTION NO: 122

Which of the following is the QoS model that is primarily used on the Internet?

A.
best-effort

B.
IntServ

C.
DiffServ

D.
AutoQoS

Answer: A
Explanation:
Section: Enterprise Network Design Explanation

The best-effort model is the Quality of Service (QoS) model that is primarily used on the Internet.
No QoS mechanisms are used when the best-effort model is implemented; all packets are treated
with equal priority. The best-effort model is very scalable and easy to implement. However, since
bandwidth is not guaranteed for any packet type, the best-effort model can be a key limitation
when considering an Internet circuit as a backup connection for an enterprise wide area network
(WAN).

The Integrated Services (IntServ) model is not the QoS model primarily used on the Internet.
IntServ, which was the first QoS model, provides end-to-end reliability guarantees for bandwidth,
delay, and packet loss. However, IntServ is not very scalable, since its signaling overhead can
consume a lot of bandwidth. IntServ uses Resource Reservation Protocol (RSVP) as the signaling
protocol.

The Differentiated Services (DiffServ) model is also not the QoS model primarily used on the
Internet. DiffServ does not provide end-to-end reliability guarantees. Instead, it provides per-hop
QoS mechanisms. Because end-to-end signaling is not required, bandwidth is not consumed by
signaling overhead; therefore, DiffServ is more scalable than IntServ. However, the QoS
mechanisms employed by DiffServ must be configured consistently at each hop.

AutoQoS is not a QoS model. AutoQoS automates the configuration of QoS on Cisco devices,
enabling consistent configurations throughout a large network.

Reference:

CCDA 200-310 Official Cert Guide, Chapter 7, WAN Backup over the Internet, pp. 263-264

Cisco: QoS Fact or Fiction

QUESTION NO: 123

Which of the following protocols can IPSec use to provide the integrity component of the CIA
triad? (Choose two.)

A.
GRE

B.
AH

C.
AES

D.
ESP

E.
DES

Answer: B,D
Explanation:
Section: Enterprise Network Design

IP Security (IPSec) can use either Authentication Header (AH) or Encapsulating Security Payload
(ESP) to provide the integrity component of the confidentiality, integrity, and availability (CIA) triad.
The integrity component of the CIA triad ensures that data is not modified in transit by
unauthorized parties. AH and ESP are integral parts of the IPSec protocol suite and can be used
to ensure the integrity of a packet. Data integrity is provided by using checksums on each end of
the connection. If the data generates the same checksum value on each end of the connection,
the data was not modified in transit. In addition, AH and ESP can authenticate the origin of
transmitted data. Data authentication is provided through various methods, including user
name/password combinations, preshared keys (PSKs), digital certificates, and one-time passwords
(OTPs). Although AH and ESP perform similar functions, ESP provides additional security by
encrypting the contents of the packet. AH does not encrypt the contents of the packet.
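The checksum-based integrity check described above can be sketched with a keyed hash. The following Python example illustrates the concept only, not the actual AH/ESP wire format; the key and payload values are hypothetical.

```python
import hashlib
import hmac

# Conceptual sketch of the AH/ESP integrity check: both peers share a key
# (here a hypothetical preshared key) and compute a keyed hash over the data.
key = b"hypothetical-preshared-key"
payload = b"original packet payload"

# The sender computes an integrity check value (ICV) and sends it with the data.
sender_icv = hmac.new(key, payload, hashlib.sha256).digest()

# The receiver recomputes the ICV over the received data; a mismatch means
# the data was modified in transit.
received = b"original packet payload"
receiver_icv = hmac.new(key, received, hashlib.sha256).digest()
print(hmac.compare_digest(sender_icv, receiver_icv))  # True: data unmodified

# If the payload is altered in transit, the recomputed value no longer matches.
tampered_icv = hmac.new(key, b"tampered payload", hashlib.sha256).digest()
print(hmac.compare_digest(sender_icv, tampered_icv))  # False: integrity lost
```

Note that this sketch covers only the integrity component; the confidentiality component would additionally encrypt the payload, which ESP supports and AH does not.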

In addition to data authentication and data integrity, IPSec can provide confidentiality, which is
another component of the CIA triad. IPSec uses encryption protocols, such as Advanced
Encryption Standard (AES) or Data Encryption Standard (DES), to provide data confidentiality.
Because the data is encrypted, an attacker cannot read the data if he or she intercepts the data
before it reaches the destination. IPSec does not use either AES or DES for data authentication or
data integrity.

Generic Routing Encapsulation (GRE) is a protocol designed to tunnel any Open Systems
Interconnection (OSI) Layer 3 protocol through an IP transport network. Because the focus of GRE
is to transport many different protocols, it has very limited security features. By contrast, IPSec has
strong data confidentiality and data integrity features, but it can transport only IP traffic. GRE over
IPSec combines the best features of both protocols to securely transport any protocol over an IP
network. However, GRE itself does not provide data integrity or data authentication.

Reference:

CCDA 200-310 Official Cert Guide, Chapter 7, Enterprise Managed VPN: IPsec, pp. 255-259

IETF: RFC 4301: Security Architecture for the Internet Protocol: 3.2. How IPsec Works

QUESTION NO: 124

How many valid host IP addresses are available on a /21 subnet?

A.
32,766

B.
4,094

C.
2,046

D.
510

Answer: C
Explanation:
Section: Addressing and Routing Protocols in an Existing Network

A /21 subnet contains 2,046 valid host addresses. A subnet mask specifies how many bits belong to the network portion of a 32-bit IP address. The remaining bits in the IP address belong to the host portion of the IP address. To determine how many host addresses are defined by a subnet mask, use the formula 2^n - 2, where n is the number of bits in the host portion of the address. You must subtract 2 from the number of available addresses, because the first address is the subnetwork address and the last address is the broadcast address.

To determine the number of bits in the host portion of the address, you should convert /21 to
dotted-decimal notation. To convert /21 from Classless Inter-Domain Routing (CIDR) notation to
dotted-decimal notation, begin at the left and set the first 21 bits to a value of 1. These bits identify
the network portion of the IP address. The remaining 11 bits will be set to 0. These are the host
bits.

/21 = 11111111.11111111.11111000.00000000

There are 11 host bits left over by the /21 subnet mask. Applying the 2^n - 2 formula, where n = 11, yields 2,048 - 2 = 2,046. Therefore, 2,046 host addresses are available for each subnetwork when a subnet mask of /21 is applied.
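The 2^n - 2 calculation above can be verified with a short Python sketch, cross-checked against the standard library's ipaddress module:

```python
import ipaddress

def usable_hosts(prefix_length):
    """Return the number of valid host addresses for an IPv4 prefix length."""
    host_bits = 32 - prefix_length
    # Subtract 2 for the subnetwork address and the broadcast address.
    return 2 ** host_bits - 2

print(usable_hosts(21))  # 2046

# Cross-check with the standard library: count the usable hosts in 10.0.0.0/21.
network = ipaddress.ip_network("10.0.0.0/21")
print(sum(1 for _ in network.hosts()))  # 2046
```

The `hosts()` iterator excludes the subnetwork and broadcast addresses automatically, so both methods agree.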

Although it is important to learn the formula for calculating valid host addresses, the following list demonstrates the relationship between common subnet masks and valid host addresses:

/17 (255.255.128.0) = 32,766 valid host addresses
/20 (255.255.240.0) = 4,094 valid host addresses
/21 (255.255.248.0) = 2,046 valid host addresses
/23 (255.255.254.0) = 510 valid host addresses
/24 (255.255.255.0) = 254 valid host addresses

Subnetting a contiguous address range in a structured, hierarchical fashion enables routers to maintain smaller routing tables and eases the administrative burden when troubleshooting. Conversely, a discontiguous IP version 4 (IPv4) addressing scheme can cause routing tables to bloat because the subnets cannot be summarized. Summarization minimizes the size of routing tables and advertisements and reduces a router's processor and memory requirements.

Reference:

CCDA 200-310 Official Cert Guide, Chapter 8, Plan for a Hierarchical IP Address Network, pp.
311-312

Cisco: IP Addressing and Subnetting for New Users

QUESTION NO: 125

Which of the following is true regarding the Hybrid WAN deployment model for branch
connectivity?

A.
It can provide QoS capabilities for essential traffic.

B.
It sacrifices network availability to reduce costs.

C.
It is the least expensive deployment model.

D.
It supports only single-router configurations.

Answer: A
Explanation:
Section: Enterprise Network Design

The Hybrid WAN deployment model for branch connectivity can provide Quality of Service (QoS)
capabilities for essential traffic. Cisco defines three general deployment models for branch
connectivity:

The Multiprotocol Label Switching (MPLS) WAN deployment model can use a single-router
configuration with connections to multiple MPLS service providers or a dual-router configuration
where each router has a connection to one or more MPLS service providers. Service provider
diversity ensures that an outage at the service provider level will not cause an interruption of
service at the branch. The MPLS WAN deployment model can provide service-level agreement
(SLA) guarantees for QoS and network availability through service-provider provisioning and
routing protocol optimization. Although using multiple MPLS services provides increased network
resilience and bandwidth, it also increases the complexity and cost of the deployment when
compared to other deployment models.

The Hybrid WAN deployment model can use a single or dual-router configuration and relies on an
MPLS service provider for its primary WAN connection and on an Internet-based virtual private
network (VPN) connection as a backup circuit. Unlike the MPLS WAN deployment model, the
Hybrid WAN deployment model cannot ensure QoS capabilities for traffic that passes over the
backup circuit. Because of this QoS limitation, low-priority traffic is often routed through the lower-cost Internet VPN circuit, whereas high-priority traffic is routed through the MPLS circuit. This can
reduce the bandwidth requirements of the MPLS circuit and lower the overall cost of the
deployment without sacrificing network resilience or QoS capabilities for essential traffic.

The Internet WAN deployment model can use a single or dual-router configuration and relies on
an Internet-based VPN solution for primary and backup circuits. Internet service provider (ISP)
diversity ensures that carrier level outages do not affect connectivity between the branch and the
central site. Because the Internet WAN deployment model uses the public Internet, its QoS
capabilities are limited. However, the Internet WAN deployment model is the most cost-effective of
the three models defined by Cisco.

Reference:

CCDA 200-310 Official Cert Guide, Chapter 7, Branch Connectivity, p. 271

QUESTION NO: 126

View the Exhibit.

Refer to the exhibit. Which of the following traffic flows will the IPS be unable to monitor? (Choose
two.)

A.
traffic from the DMZ to the Internet

B.
traffic from the DMZ to the LAN

C.
traffic from the Internet to the DMZ

D.
traffic from the Internet to the LAN

E.
traffic from the LAN to the DMZ

F.
traffic from the LAN to the Internet

Answer: B,E
Explanation:
Section: Considerations for Expanding an Existing Network

The Intrusion Prevention System (IPS) in this scenario will be unable to monitor traffic flows from
the demilitarized zone (DMZ) to the LAN and from the LAN to the DMZ. An IPS provides real-time
monitoring of malicious traffic and can prevent malicious traffic from infiltrating the network. An IPS
functions similarly to a Layer 2 bridge; a packet entering an interface on the IPS is directed to the
appropriate outbound interface without regard to the packet's Layer 3 information. Instead, the IPS
uses interface or virtual LAN (VLAN) pairs to determine where to send the packet. This enables an
IPS to be inserted into an existing network topology without requiring any disruptive addressing
changes. Because traffic flows through an IPS, an IPS can detect malicious traffic as it enters the
IPS device and can prevent the malicious traffic from infiltrating the network.

In this scenario, the IPS is deployed inline between the firewall and the edge router. Because
traffic flows between the LAN and the DMZ do not pass through the IPS, the IPS will be unable to monitor them. However, the IPS will be able to monitor traffic flows between the LAN and the
Internet and between the DMZ and the Internet. In addition, because the IPS is deployed on the
outside of the firewall, it will have visibility into traffic flows that will ultimately be dropped by the
firewall. This insight can be useful during an active attack; however, it comes at the cost of
additional resource utilization since the IPS will be processing more traffic than will ultimately be
passing through the firewall.

Reference:

CCDA 200-310 Official Cert Guide, Chapter 13, IPS/IDS Fundamentals, pp. 534-535

QUESTION NO: 127

When using the bottom-up design approach, which layer of the OSI model is used as a starting
point?

A.
Application layer

B.
Session layer

C.
Network layer

D.
Data Link layer

E.
Physical layer

Answer: E
Explanation:
Section: Design Methodologies

The Physical layer of the Open Systems Interconnection (OSI) model is used as a starting point
when using the bottom-up design approach. The bottom-up design approach takes its name from
the methodology of starting with the lower layers of the OSI model, such as the Physical, Data
Link, Network, and Transport layers, and working upward toward the higher layers. The bottom-up
approach focuses on the devices and technologies that should be implemented in a design,
instead of focusing on the applications and services that will be used on the network. In addition,
the bottom-up approach relies on previous experience rather than on a thorough analysis of
organizational requirements or projected growth. Because the bottom-up approach does not use a
detailed analysis of an organization's requirements, the bottom-up approach can be much less
time-consuming than the top-down design approach. However, the bottom-up design approach
can often lead to network redesigns because the design does not provide a "big picture" overview
of the current network or its future requirements.

By contrast, the top-down design approach takes its name from the methodology of starting with
the higher layers of the OSI model, such as the Application, Presentation, and Session layers, and
working downward toward the lower layers. The top-down design approach requires a thorough
analysis of the organization's requirements. As a result, the top-down design approach is a more
time-consuming process than the bottom-up design approach. With the top-down approach, the
designer obtains a complete overview of the existing network and the organization's needs. With
this "big picture" overview, the designer can then focus on the applications and services that meet
the organization's current requirements. By focusing on the applications and services required in
the design, the designer can work in a modular fashion that will ultimately facilitate the
implementation of the actual design. In addition, the flexibility of the resulting design is typically
much improved over that of the bottom-up approach because the designer can account for the
organization's projected needs.

Reference:

CCDA 200-310 Official Cert Guide, Chapter 1, Top-Down Approach, pp. 24-25

Cisco: Using the Top-Down Approach to Network Design: 4. Top-Down and Bottom-Up Approach
Comparison (Flash)

QUESTION NO: 128

You issue the following commands on RouterA:


RouterA receives a packet destined for 10.0.0.24.

To which next-hop IP address will RouterA forward the packet?

A.
10.0.0.4

B.
192.168.1.1

C.
192.168.1.2

D.
192.168.1.3

E.
192.168.1.4

Answer: B
Explanation:
Section: Addressing and Routing Protocols in an Existing Network

RouterA will forward the packet to the next-hop IP address of 192.168.1.1. When a packet is sent
to a router, the router checks the routing table to see if the next-hop address for the destination
network is known. The routing table can be filled dynamically by a routing protocol, or you can
configure the routing table manually by issuing the ip route command to add static routes. The ip
route command uses the syntax ip route net-address mask next-hop, where net-address is the
network address of the destination network, mask is the subnet mask of the destination network,
and next-hop is the IP address of a neighboring router that can reach the destination network.

A default route is used to send packets that are destined for a location that is not listed elsewhere
in the routing table. For example, the ip route 0.0.0.0 0.0.0.0 192.168.1.4 command specifies that
packets destined for addresses not otherwise specified in the routing table are sent to the default
next-hop address of 192.168.1.4. A net-address and mask combination of 0.0.0.0 0.0.0.0 specifies
any packet destined for any network.

If multiple static routes to a destination are known, the most specific route is used. Therefore, the
following rules apply on RouterA:

Because the most specific route to 10.0.0.24 is the route toward the 10.0.0.0 255.255.255.224
network, RouterA will forward a packet destined for 10.0.0.24 to the next-hop address of
192.168.1.1.
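Longest-prefix-match selection can be sketched in Python with the standard ipaddress module. Since the exhibit is not reproduced here, the routing table below is a plausible reconstruction: the /27 route via 192.168.1.1 and the default route via 192.168.1.4 come from the explanation, while the /8 and /24 routes and their next hops are hypothetical entries added for illustration.

```python
import ipaddress

# Hypothetical static routing table (destination network, next-hop address).
# Only the /27 and default entries are confirmed by the explanation above.
routes = [
    (ipaddress.ip_network("0.0.0.0/0"), "192.168.1.4"),    # default route
    (ipaddress.ip_network("10.0.0.0/8"), "192.168.1.2"),   # assumed entry
    (ipaddress.ip_network("10.0.0.0/24"), "192.168.1.3"),  # assumed entry
    (ipaddress.ip_network("10.0.0.0/27"), "192.168.1.1"),  # /27 = 255.255.255.224
]

def next_hop(destination):
    """Return the next hop of the longest (most specific) matching prefix."""
    dest = ipaddress.ip_address(destination)
    matches = [(net, hop) for net, hop in routes if dest in net]
    best_net, best_hop = max(matches, key=lambda m: m[0].prefixlen)
    return best_hop

# 10.0.0.24 falls inside 10.0.0.0/27, the most specific match.
print(next_hop("10.0.0.24"))   # 192.168.1.1
# An address matching no specific route falls through to the default route.
print(next_hop("172.16.0.1"))  # 192.168.1.4
```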

Reference:

Cisco: IP Routing ProtocolIndependent Commands: ip route

Cisco: Specifying a Next Hop IP Address for Static Routes

QUESTION NO: 129

In a Layer 3 hierarchical design, which enterprise campus module layer or layers exclusively use
Layer 3 switching?

A.
only the campus core layer

B.
the distribution and campus core layers

C.
only the distribution layer

D.
the distribution and access layers

E.
only the access layer

Answer: B
Explanation:
Section: Enterprise Network Design

In a Layer 3 hierarchical design, the distribution and campus core layers of the enterprise campus
module use Layer 3 switching exclusively. Thus a Layer 3 switching design relies on First Hop
Redundancy Protocols (FHRPs) for high availability. In addition, a Layer 3 switching design
typically uses route filtering on links that face the access layer of the design.

In a Layer 2, or switched, hierarchical design, only the access layer of the enterprise campus
module uses Layer 2 switching exclusively. The access layer of the enterprise campus module

provides end users with physical access to the network. In addition to using Virtual Switching
System (VSS) in place of FHRPs for redundancy, a Layer 2 switching design requires that inter-
VLAN traffic be routed in the distribution layer of the hierarchy. Also, Spanning Tree Protocol
(STP) in the access layer will prevent more than one connection between an access layer switch
and the distribution layer from becoming active at a given time.

The distribution layer of the enterprise campus module provides link aggregation between layers.
Because the distribution layer is the intermediary between the access layer and the campus core
layer, the distribution layer is the ideal place to enforce security policies, provide load balancing,
provide Quality of Service (QoS), and perform tasks that involve packet manipulation, such as
routing. In a switched hierarchical design, the switches in the distribution layer use Layer 2
switching on ports connected to the access layer and Layer 3 switching on ports connected to the
campus core layer.

The campus core layer of the enterprise campus module provides fast transport services between
the modules of the enterprise architecture module, such as the enterprise edge and the intranet
data center. Because the campus core layer acts as the network's backbone, it is essential that
every distribution layer device have multiple paths to the campus core layer. Multiple paths
between the campus core and distribution layer devices ensure that network connectivity is
maintained if a link or device fails in either layer. In a switched hierarchical design, the campus
core layer switches use Layer 3 switching exclusively.

Reference:

CCDA 200-310 Official Cert Guide, Chapter 2, Hierarchical Model Examples, pp. 46-48

Cisco: Cisco SAFE Reference Guide: Enterprise Campus

QUESTION NO: 130

Which of the following noise values would provide the weakest connection between an AP and a
wireless client with an RSSI of -67 dBm?

A.
-19 dBm

B.
-38 dBm

C.
-67 dBm

D.
-83 dBm

E.
-91 dBm

Answer: A
Explanation:
Section: Considerations for Expanding an Existing Network

A noise value of -19 decibel milliwatts (dBm) would provide the weakest connection between an
AP and a wireless client with a Received Signal Strength Indicator (RSSI) of -67 dBm. In a Voice
over wireless LAN (VoWLAN), signal strength is measured in dBm, which is a measure of power
normalized to 1 milliwatt (mW). Zero dBm corresponds to a signal strength of 1 mW, and negative values indicate signal strengths below 1 mW. For example, -19 dBm is a stronger signal than -38 dBm. The signal-to-noise ratio (SNR) describes the separation between a valid radio signal and any ambient
noise. A wireless client with an RSSI of -67 dBm would require a noise value less than -67 dBm in
order to separate signal from noise. A high SNR indicates that a device can easily distinguish valid
signals from the surrounding noise. The greater the separation between signal and noise, the
higher the likelihood that the wireless client will not experience packet loss due to signal
interference.

Conversely, a lower SNR increases the likelihood that the wireless client will be unable to discern
the signal. If the SNR is too low, the wireless client might not be able to distinguish some parts of
the signal from the surrounding noise, which might result in packet loss. In this case, an RSSI of -67 dBm with a noise value of -19 dBm produces an SNR of -48 decibels (dB). A negative SNR
value indicates that the strength of the noise is greater than the strength of the received signal,
resulting in 100 percent packet loss. Cisco recommends maintaining a minimum signal strength of
-67 dBm and a minimum SNR of 25 dB throughout the coverage area of a VoWLAN to help
mitigate packet loss.

The sensitivity of an 802.11 radio decreases as the data rate goes up. Thus the separation of valid
802.11 signals from background noise must be greater at higher data rates than at lower data
rates. Otherwise, the 802.11 radio will be unable to distinguish the valid signals from the
surrounding noise. For example, an 802.11 radio might register a 1-Mbps signal at -45 dBm with -96 dBm of noise. These values produce an SNR of 51 dB. However, if the data rate is increased to
11 Mbps, the radio might register a signal of -63 dBm with -82 dBm of noise, thereby bringing the
SNR to 19 dB. Because the sensitivity of the radio is diminished at the higher data rate, the radio
might not be able to distinguish parts of the signal from the surrounding noise, which might result
in packet loss. Therefore, the optimal cell size is determined by the configured data rate and the
transmitter power of the access point (AP).

Noise values of -38 dBm, -67 dBm, -83 dBm, and -91 dBm would not provide a connection weaker
than the noise value of -19 dBm. Each of these values produces an SNR higher than the SNR
obtained with a noise value of -19 dBm. Because the strength of a connection can be determined
by its SNR, a noise value that produces the lowest SNR would provide the weakest connection.

Reference:
Cisco: Site Survey Guide: Deploying Cisco 7920 IP Phones: Getting started

QUESTION NO: 131

Which of the following operating modes enables a WLC to manage a remote LAP from a central
location?

A.
local

B.
H-REAP

C.
monitor

D.
rogue detector

E.
sniffer

Answer: B
Explanation:
Section: Considerations for Expanding an Existing Network

Hybrid remote edge access point (H-REAP) mode is an operating mode that enables a wireless
LAN controller (WLC) to manage a remote lightweight access point (LAP) from a central location.
After adding a LAP to a WLC, you can configure the mode of the LAP depending on your needs
and the capability of the LAP. H-REAP mode, which is also known as FlexConnect, enables
administrators to deploy a LAP in a remote location without also needing to deploy a WLC to the
location. A LAP operating in H-REAP mode can connect over a WAN link to a WLC that is located
in a different location. This enables administrators to manage the LAP from a central location
without having to deploy WLCs to each remote office. Furthermore, LAPs operating in H-REAP
mode can provide client connectivity even if the connection to the remote WLC is lost. That is, a
LAP operating in H-REAP mode can authenticate clients locally even if the AP cannot reach the
WLC.

Local mode is the default mode of operation for a LAP and does not enable a WLC to manage a
remote LAP from a central location. A LAP operating in local mode uses timers that are tuned for
WLCs on a LAN. Therefore, connectivity to a remote WLC can fail if the round-trip time between

the LAP and the WLC is too high, such as on a WAN link. Cisco recommends using H-REAP to
centrally manage remote LAPs across a WAN link.

Rogue detector mode is a LAP operating mode used to detect unauthorized clients on a wired
network. You can configure an AP to operate in rogue detector mode to configure the AP to scan
traffic on the wired connection in search of unauthorized APs and unauthorized clients on the
wired network.

Sniffer mode is a LAP operating mode used to capture network traffic, which is then forwarded to a
designated host for analysis. The host must be running network analyzer software, such as
AiroPeek, to decode the packets sent from the LAP. A LAP operating in sniffer mode does not
process normal client data and essentially becomes a dedicated wireless packet sniffer.

Monitor mode is a LAP operating mode used to provide data for location-based services. A LAP
operating in monitor mode functions as a dedicated sensor, which continuously scans all
configured channels and provides data that can be used by location-based services and intrusion
detection systems.

Reference:

CCDA 200-310 Official Cert Guide, Chapter 5, AP Modes, pp. 180-181

CCDA 200-310 Official Cert Guide, Chapter 5, Hybrid REAP, p. 200

QUESTION NO: 132

You want to load share traffic from two VLANs across two FHRP-capable default gateways.

Which technologies are you most likely to configure? (Choose two.)

A.
HSRP

B.
floating static routes

C.
RPVST+

D.
RSTP

E.
STP

Answer: A,C
Explanation:
Section: Enterprise Network Design

Most likely, you will configure Hot Standby Router Protocol (HSRP) and Rapid Per-VLAN
Spanning Tree Plus (RPVST+) if you want to load share traffic from two virtual LANs (VLANs)
across two First Hop Redundancy Protocol (FHRP)-enabled gateways. HSRP is an FHRP that
provides redundancy by enabling the automatic configuration of active and standby routers.
RPVST+ is the Rapid Spanning Tree Protocol (RSTP) implementation of Per VLAN Spanning
Tree Plus (PVST+), which enables the configuration of a separate Spanning Tree Protocol (STP)
instance per VLAN. This means that each VLAN in an organization can be
configured to use a different switch as its root.

Although HSRP does not support load balancing, you can configure an HSRP load sharing
scenario by assigning different HSRP routers as root bridges for different VLANs. Next, you could
configure a separate HSRP group for each VLAN. Finally, configure each VLAN's root bridge as
the active HSRP router for that VLAN's HSRP group. Using this configuration, each VLAN in an
organization will by default send traffic over a different default gateway. The VLANs will only share
a default gateway if one of the HSRP routers goes down.

You would be more likely to use HSRP than floating static routes in this scenario. Floating static
routes are manually configured paths. Typically, one path is assigned a higher administrative
distance (AD) so that it is not inserted into the routing table unless the first path becomes
unavailable. Therefore, floating static routes would not make sense in a scenario in which an
FHRP can be used to provide both redundancy and availability.

You would be more likely to use RPVST+ than STP or RSTP in this scenario. Neither STP nor
RSTP supports separate STP instances per VLAN.

Reference:

CCDA 200-310 Official Cert Guide, Chapter 3, STP Design Considerations, pp. 101-103

Cisco: Inter-Switch Link and IEEE 802.1Q Frame Format: Background Theory

Cisco: Catalyst 3750X and 3560X Switch Software Configuration Guide, Release 12.2(55)SE:
Configuring the Switch Priority of a VLAN

QUESTION NO: 133

Which of the following is a network architecture principle that is used to facilitate troubleshooting in
large, scalable networks?

A.
modularity

B.
hierarchy

C.
top-down

D.
bottom-up

Answer: A
Explanation:
Section: Design Objectives

Of the available choices, the modularity network architecture principle is most likely to facilitate
troubleshooting in large, scalable networks. The modularity and hierarchy principles are
complementary components of network architecture. The modularity principle is used to implement
an amount of isolation among network components. This ensures that changes to any given
component have little to no effect on the rest of the network. Modularity also simplifies the
troubleshooting process by limiting the task of isolating the problem to the affected module.

The modularity principle typically consists of two building blocks: the access-distribution block and
the services block. The access-distribution block contains the bottom two layers of a three-tier
hierarchical network design. The services block, which is a newer building block, typically contains
services like routing policies, wireless access, tunnel termination, and Cisco Unified
Communications services.

The hierarchy principle is the structured manner in which both the physical and logical functions of
the network are arranged. A typical hierarchical network consists of three layers: the core layer,
the distribution layer, and the access layer. The modules between these layers are connected to
each other in a fashion that facilitates high availability. However, each layer is responsible for
specific network functions that are independent from the other layers.

The core layer provides fast transport services between buildings and the data center. The
distribution layer provides link aggregation between layers. Because the distribution layer is the
intermediary between the access layer and the campus core layer, the distribution layer is the
ideal place to enforce security policies, provide load balancing, provide Quality of Service (QoS),
and perform tasks that involve packet manipulation, such as routing. The access layer, which

typically comprises Open Systems Interconnection (OSI) Layer 2 switches, serves as a media
termination point for devices, such as servers and workstations. Because access layer devices
provide access to the network, the access layer is the ideal place to perform user authentication
and to institute port security. High availability, broadcast suppression, and rate limiting are also
characteristics of access layer devices.

Top-down and bottom-up are both network design models, not network architecture principles.
The top-down network design approach is typically used to ensure that the eventual network build
will properly support the needs of the network's use cases. For example, a dedicated customer
service call center might first evaluate communications and knowledgebase requirements prior to
designing and building out the call center's network infrastructure. In other words, a top-down
design approach typically begins at the Application layer, or Layer 7, of the OSI reference model
and works down the model to the Physical layer, or Layer 1.

In contrast to the top-down approach, the bottom-up approach begins at the bottom of the OSI
reference model. Decisions about network infrastructure are made first, and application
requirements are considered last. This approach to network design can often lead to frequent
network redesigns to account for requirements that have not been met by the initial infrastructure.

Reference:

CCDA 200-310 Official Cert Guide, Chapter 2, Cisco Enterprise Architecture Model, pp. 49-50

Cisco: Enterprise Campus 3.0 Architecture: Overview and Framework: Modularity

QUESTION NO: 134

Which of the following WMM access categories maps to the WLC Gold QoS profile?

A.
Voice

B.
Video

C.
Background

D.
Best-Effort

Answer: B
Explanation:
Section: Considerations for Expanding an Existing Network

The Video WiFi Multimedia (WMM) access category maps to the wireless LAN controller (WLC)
Gold profile. WMM is a subset of the 802.11e wireless standard, which adds Quality of Service
(QoS) features to the existing wireless standards. WMM was initially created by the Wi-Fi Alliance
while the 802.11e proposal was awaiting approval by the Institute of Electrical and Electronics
Engineers (IEEE).

The 802.11e standard defines eight priority levels for traffic, numbered from 0 through 7. WMM
reduces the eight 802.11e priority levels into four access categories, which are Voice (Platinum),
Video (Gold), Best-Effort (Silver), and Background (Bronze). On WMM-enabled networks, these
categories are used by WLCs to prioritize traffic. Packets tagged as Voice (Platinum) packets are
typically given priority over packets tagged with lower-level priorities. Packets that have not been
assigned to a category are treated as though they had been assigned to the Best-Effort (Silver)
category.
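The category-to-profile mapping described above can be expressed as a simple lookup. This sketch assumes the Best-Effort (Silver) default for untagged traffic, as stated above:

```python
# WMM access category -> WLC QoS profile, per the mapping described above.
WMM_TO_WLC_PROFILE = {
    "Voice": "Platinum",
    "Video": "Gold",
    "Best-Effort": "Silver",
    "Background": "Bronze",
}

def wlc_profile(access_category):
    # Traffic without an assigned category is treated as Best-Effort (Silver).
    return WMM_TO_WLC_PROFILE.get(access_category, "Silver")

print(wlc_profile("Video"))  # Gold
print(wlc_profile(None))     # Silver
```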

When a lightweight access point (LAP) receives a frame with an 802.11e priority value from a
WMM-enabled client, the LAP ensures that the 802.11e priority value is within the acceptable
limits provided by the QoS policy assigned to the wireless client. After the LAP polices the 802.11e
priority value, it maps the 802.11e priority value to the corresponding Differentiated Services Code
Point (DSCP) value and forwards the frame to the wireless LAN controller (WLC). The WLC will
then forward the frame with its DSCP value to the wired network.
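The category-to-profile mapping above can be sketched as a small lookup table. Note that the 802.11e user-priority ranges in the comment are common defaults and an assumption on our part, since the text does not enumerate them:

```python
# WMM access category -> WLC QoS profile mapping described above.
# Typical 802.11e user priorities (an assumption, not stated in the
# text): 6-7 Voice, 4-5 Video, 0 and 3 Best-Effort, 1-2 Background.
WMM_TO_WLC = {
    "Voice": "Platinum",
    "Video": "Gold",
    "Best-Effort": "Silver",
    "Background": "Bronze",
}

def wlc_profile(access_category):
    """Return the WLC QoS profile for a WMM access category.

    Untagged traffic is treated as Best-Effort, as the text notes.
    """
    return WMM_TO_WLC.get(access_category, WMM_TO_WLC["Best-Effort"])
```

For example, `wlc_profile("Video")` returns `"Gold"`, and any unrecognized or untagged category falls back to `"Silver"`.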

Reference:

CCDA 200-310 Official Cert Guide, Chapter 5, Wireless and Quality of Service (QoS), pp. 197-199

Cisco: Enterprise Mobility 4.1 Design Guide: Cisco Unified Wireless QoS

QUESTION NO: 135

Which of the following OSPF areas does not accept Type 3, 4, and 5 summary LSAs?

A.
stub area

B.
ordinary area

C.
"Pass Any Exam. Any Time." - www.actualtests.com 215
Cisco 200-310 Exam
backbone area

D.
not-so-stubby area

E.
totally stubby area

Answer: E
Explanation:
Section: Addressing and Routing Protocols in an Existing Network

An Open Shortest Path First (OSPF) totally stubby area does not accept Type 3, 4, and 5
summary link-state advertisements (LSAs), which advertise routes outside the area. These LSAs
are replaced by a default route at the area border router (ABR). As a result, routing tables are kept
small within the totally stubby area. To create a totally stubby area, you should issue the area
area-id stub no-summary command in router configuration mode.

The backbone area, Area 0, accepts all LSAs. All OSPF areas must directly connect to the
backbone area or must traverse a virtual link to the backbone area. To configure a router to be
part of the backbone area, you should issue the area 0 command in router configuration mode.

An ordinary area, which is also called a standard area, accepts all LSAs. Every router in an
ordinary area contains the same OSPF routing database. To configure an ordinary area, you
should issue the area area-id command in router configuration mode.

A stub area does not accept Type 5 LSAs, which advertise external summary routes. Routers
inside the stub area will send all packets destined for another area to the ABR. To configure a stub
area, you should issue the area area-id stub command in router configuration mode.

A not-so-stubby area (NSSA) is basically a stub area that contains one or more autonomous
system boundary routers (ASBRs). Like stub areas, NSSAs do not accept Type 5 LSAs. External
routes from the ASBR are converted to Type 7 LSAs and tunneled through the NSSA to the ABR,
where they are converted back to Type 5 LSAs. To configure an NSSA, you should issue the area
area-id nssa command in router configuration mode. To configure a totally stubby NSSA, which
does not accept summary routes, you should issue the area area-id nssa no-summary command
in router configuration mode.
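The area types above differ mainly in which inter-area and external LSA types they accept, which can be summarized as a small model. This is a deliberate simplification for illustration (for example, real NSSA handling of Type 4 LSAs has more nuance than shown):

```python
# Which Type 3/4/5/7 LSAs each OSPF area type accepts, per the
# paragraphs above. Intra-area Type 1 and 2 LSAs are accepted
# everywhere and are omitted. Simplified model, not a full OSPF spec.
AREA_ACCEPTS = {
    "backbone": {3, 4, 5},
    "ordinary": {3, 4, 5},
    "stub": {3, 4},            # Type 5 blocked; ABR injects a default route
    "nssa": {3, 4, 7},         # external routes carried as Type 7, not Type 5
    "totally-stubby": set(),   # Types 3, 4, and 5 all replaced by a default
}

def accepts(area_type, lsa_type):
    """Return True if the given area type accepts the given LSA type."""
    return lsa_type in AREA_ACCEPTS[area_type]
```

The table makes the answer visible at a glance: only the totally stubby area rejects all of Types 3, 4, and 5.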

Reference:

CCDA 200-310 Official Cert Guide, Chapter 11, OSPF Stub Area Types, pp. 437-438

Cisco: What Are OSPF Areas and Virtual Links?

"Pass Any Exam. Any Time." - www.actualtests.com 216


Cisco 200-310 Exam

QUESTION NO: 136

Which of the following statements best describes NetFlow?

A.
NetFlow is a Cisco IOS feature that can collect timestamps of traffic sent between a particular
source and destination for the purpose of reviewing in an audit.

B.
NetFlow is a protocol that extends the standard MIB data structure and enables a managed device
to store statistical data locally.

C.
NetFlow is a security appliance that serves as the focal point for security events on a network.

D.
NetFlow is used to monitor and manage network devices by collecting data about those devices.

Answer: A
Explanation:
Section: Design Methodologies

NetFlow is a Cisco IOS feature that can collect timestamps of traffic flowing between a particular
source and destination for the purpose of reviewing in an audit. NetFlow can be used to gather
flow-based statistics, such as packet counts, byte counts, and protocol distribution. A device
configured with NetFlow examines packets for select Layer 3 and Layer 4 attributes that uniquely
identify each traffic flow. The data gathered by NetFlow is typically exported to management
software. You can then analyze the data to facilitate network planning, customer billing, and traffic
engineering. A traffic flow is defined as a series of packets with the same source IP address,
destination IP address, protocol, and Layer 4 information. Although NetFlow does not use Layer 2
information, such as a source Media Access Control (MAC) address, to identify a traffic flow, the
input interface on a switch will be considered when identifying a traffic flow. Each NetFlow-enabled
device gathers statistics independently of any other device; NetFlow does not have to run on every
router in a network in order to produce valuable data for an audit. In addition, NetFlow is
transparent to the existing network infrastructure and does not require any configuration changes
in order to function.
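A NetFlow-style cache keyed on the attributes named above can be sketched in a few lines. The packet fields here are a hypothetical dictionary layout for illustration, not a real capture format:

```python
from collections import Counter

def flow_key(pkt):
    """Key a packet by the flow attributes the text names: source and
    destination IP, protocol, Layer 4 ports, and the input interface."""
    return (pkt["src_ip"], pkt["dst_ip"], pkt["proto"],
            pkt["src_port"], pkt["dst_port"], pkt["in_iface"])

def flow_stats(packets):
    """Aggregate per-flow packet and byte counts, as a NetFlow cache would."""
    pkt_counts, byte_counts = Counter(), Counter()
    for p in packets:
        key = flow_key(p)
        pkt_counts[key] += 1
        byte_counts[key] += p["length"]
    return pkt_counts, byte_counts
```

Two packets that agree on all six key fields land in the same flow record; changing any one field (even just the input interface) creates a new flow.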

Simple Network Management Protocol (SNMP) is used to monitor and manage network devices
by collecting data about those devices. The data is stored on each managed device in a data
structure known as a Management Information Base (MIB). Three versions of SNMP currently
exist: SNMPv1, SNMPv2, and SNMPv3. SNMPv1 and SNMPv2 do not provide authentication,
encryption, or message integrity. Thus access to management information is based on a simple
password known as a community string; the password is sent as plain text with each SNMP message. If an attacker intercepts a message, the attacker can view the password information.
SNMPv3 improves upon SNMPv1 and SNMPv2 by providing encryption, authentication, and
message integrity to ensure that the messages are not viewed or tampered with during
transmission.

Remote Monitoring (RMON) and RMON2 are protocols that extend the standard MIB data
structure and enable a managed device to store statistical data locally. Because an RMON-
capable device can store its own statistical data, the number of queries by a management station
is reduced. RMON agents use SNMP to communicate with management stations. Therefore,
RMON does not need to implement authentication, encryption, or message integrity methods.

Cisco Security Monitoring, Analysis, and Response System (CS-MARS) is a security appliance
that serves as the focal point for security events on a network. CS-MARS can discover the
topology of the network and the configurations of key network devices, such as Cisco security
devices, third-party network devices, and applications. Because CS-MARS has a more
comprehensive view of the network than individual network security devices have, CS-MARS can
identify false positives and facilitate the mitigation of some types of security issues. For example,
once CS-MARS has identified a new Intrusion Prevention System (IPS) signature, it can distribute
this signature to all of the relevant IPS devices on the network.

Reference:

Cisco: Cisco IOS Switching Services Configuration Guide, Release 12.2: NetFlow Overview

QUESTION NO: 137

Which of the following processes is a component of the Manage phase in the Cisco PBM Design
Lifecycle?

A.
assessment

B.
validation

C.
deployment

D.
optimization

E.
migration
"Pass Any Exam. Any Time." - www.actualtests.com 218
Cisco 200-310 Exam
Answer: D
Explanation:
Section: Design Methodologies

The optimization process is a component of the Manage phase in the Cisco Plan, Build, Manage
(PBM) Design Lifecycle. The PBM Design Lifecycle is a newer methodology designed to
streamline the concepts from Cisco's older design philosophy: the Prepare, Plan, Design,
Implement, Operate, and Optimize (PPDIOO) Design Lifecycle. As the name implies, the PBM
Design Lifecycle is divided into three distinct phases: Plan, Build, and Manage.

The Plan phase of the PBM Design Lifecycle consists of three processes: strategy and analysis, assessment, and design.

The purpose of the strategy and analysis process is to generate proposed improvements to an
existing network infrastructure with the overall goal of increasing an organization's return on
investment (ROI) from the network and its support staff. The assessment process then examines
the proposed improvements from the strategy and analysis process and determines whether the
improvements comply with organizational goals and industry best practices. In addition, the
assessment process identifies potential deficiencies that infrastructure changes might cause in
operational and support facilities. Finally, the design process produces a network design that
meets current organizational objectives while maintaining resiliency and scalability.

The Build phase of the PBM Design Lifecycle consists of three processes: validation, deployment, and migration.

The purpose of the validation process is to implement the infrastructure changes outlined in the
design process of the Plan phase and to verify that the implementation meets the organizational
needs as specified by the network design. The validation process implements the network design
in a controlled environment such as in a lab or staging environment. Once the network design has
been validated, the purpose of the deployment process is to implement the network design in a
full-scale production environment. Finally, the purpose of the migration process is to incrementally
transition users, devices, and services to the new infrastructure as necessary.

The Manage phase of the PBM Design Lifecycle consists of four processes: product support, solution support, optimization, and operations management.

The product support process addresses support for specific hardware, software, or network
products. Cisco Smart Net is an example of a component of the product support process. By
contrast, solution support is focused on the solutions that hardware, software, and network
products provide for an organization. Cisco Solution Support is the primary component of the
solution support process. Cisco Solution Support serves as the primary point of contact for Cisco
solutions, leverages solution-focused expertise, coordinates between multiple vendors for complex
solutions, and manages each case from inception to resolution. The optimization process is
concerned with improving the performance, availability, and resiliency of a network
implementation. It also addresses foreseeable changes and upgrades, which reduces operating
costs, mitigates risk, and improves return on investment (ROI). The operations management process addresses the ongoing management of the network infrastructure. It includes managed
solutions for collaboration, data center, security, and general network services.

Reference:

CCDA 200-310 Official Cert Guide, Chapter 1, Cisco Design Lifecycle: Plan, Build, Manage, pp. 9-
12

Cisco: Services: Portfolio

QUESTION NO: 138

You are designing a routed access layer for a high availability campus network that will be
deployed using only Cisco devices.

Which of the following are most likely to result in fast and deterministic recovery when a path to a
destination becomes invalid? (Choose two.)

A.
EIGRP

B.
OSPF

C.
RIP

D.
redundant equal-cost paths to the destination

E.
redundant unequal-cost paths to the destination

F.
a single path to the destination

Answer: A,D
Explanation:
Section: Addressing and Routing Protocols in an Existing Network

Of the available choices, Enhanced Interior Gateway Routing Protocol (EIGRP) and redundant
equal-cost paths to the destination are most likely to result in fast and deterministic recovery when a path to a destination becomes invalid. Both Open Shortest Path First (OSPF) and the Cisco-
developed EIGRP are dynamic routing protocols and are capable of fast convergence. However,
on a network that contains only Cisco routers, EIGRP is typically simpler to deploy than OSPF and
can converge faster than OSPF because of the feasible successors stored in the EIGRP topology
database.
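The feasibility condition behind those feasible successors can be illustrated with a short sketch. The neighbor names and distances below are invented for the example:

```python
def feasible_successors(neighbors, feasible_distance):
    """Return neighbors satisfying EIGRP's feasibility condition.

    A neighbor whose reported distance to the destination is lower than
    the local router's current feasible distance is guaranteed loop-free
    and can be promoted immediately, without querying other routers.
    `neighbors` maps neighbor name -> reported distance.
    """
    return {n for n, rd in neighbors.items() if rd < feasible_distance}
```

For instance, with a current feasible distance of 100, `feasible_successors({"R2": 90, "R3": 120}, 100)` yields only `{"R2"}`: the local router can fail over to R2 instantly, which is exactly why EIGRP converges faster when feasible successors exist.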

One means of optimizing a routing design is to create redundant equal-cost paths between
devices because such a design promotes fast and deterministic recovery when a path becomes
invalid. When either EIGRP or OSPF has a redundant equal-cost path to a destination, all of the
new path calculation occurs on the local device if one of the paths becomes unavailable. If the
device has no redundant equal-cost paths, the routing protocol must rely on information from
neighboring devices and calculate a new path.

Routing Information Protocol (RIP) does not offer fast recovery. RIP sends out routing updates
every 30 seconds, so convergence is relatively slow. In addition, RIP relies on hold-down timers,
which further slow down convergence.

The time a routing protocol takes to detect the loss of a forwarding path and the time it takes to calculate a new best path both affect convergence. In addition, the amount of time it
takes for the Cisco Express Forwarding (CEF) table to populate with routing updates can affect the
speed of convergence. It is therefore important when designing a network to ensure that the
routing design is an optimized design.

Reference:

Cisco: High Availability Campus Network Design-Routed Access Layer using EIGRP or OSPF:
Route Convergence

QUESTION NO: 139

Which of the following queuing methods provides strict-priority queues and prevents bandwidth
starvation?

A.
CQ

B.
PQ

C.
LLQ

"Pass Any Exam. Any Time." - www.actualtests.com 221


Cisco 200-310 Exam
D.
WFQ

E.
FIFO

F.
CBWFQ

Answer: C
Explanation:
Section: Considerations for Expanding an Existing Network

Low-latency queuing (LLQ) provides strict-priority queues and prevents bandwidth starvation. LLQ
supports the creation of up to 64 user-defined traffic classes as well as one or more strict-priority
queues that can be used specifically for delay-sensitive traffic, such as voice and video traffic.
Each strict-priority queue can use any available bandwidth when the link is otherwise idle but is policed to its guaranteed bandwidth when other queues have traffic to send, thereby avoiding bandwidth starvation. Cisco recommends limiting the strict-priority queues to a total of 33 percent
of the link capacity. Because LLQ can provide guaranteed bandwidth to delay-sensitive packets,
such as Voice over IP (VoIP) packets, without monopolizing the available bandwidth on a link, LLQ
is recommended for handling voice, video, and mission-critical traffic.
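The policing behavior that prevents starvation can be shown with a toy scheduler. The 33 percent cap follows the recommendation above; everything else (slot-based transmission, single default queue) is a simplification invented for the sketch:

```python
def llq_round(priority_q, default_q, link_slots, priority_cap=0.33):
    """Toy LLQ transmit round.

    The strict-priority queue is served first, but when other queues
    have traffic it is policed to `priority_cap` of the link, so
    lower-priority classes are never starved. Queues are lists of
    packet labels; returns the transmit order for one round.
    """
    sent = []
    # Police the priority queue only when other traffic is waiting.
    cap = int(link_slots * priority_cap) if default_q else link_slots
    while priority_q and len(sent) < cap:
        sent.append(priority_q.pop(0))
    while default_q and len(sent) < link_slots:
        sent.append(default_q.pop(0))
    return sent
```

A round of 10 slots with both queues full transmits 3 priority packets and 7 default packets; a plain PQ scheduler would have sent 10 priority packets and starved the default queue entirely.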

First-in-first-out (FIFO) queuing does not provide strict-priority queues or prevent bandwidth
starvation. By default, Cisco uses FIFO queuing for interfaces faster than 2.048 Mbps. FIFO
queuing requires no configuration, because all packets are arranged into a single queue. As the
name implies, the first packet received is the first packet transmitted without regard for packet
type, protocol, or priority.

Although you can implement priority queuing (PQ) on an interface to prioritize voice, video, and
mission-critical traffic, you should not use it when lower-priority traffic must be sent on that
interface. PQ arranges packets into four queues: high priority, medium priority, normal priority, and
low priority. Queues are processed in order of priority. As long as the high-priority queue contains
packets, no packets are sent from other queues. This can cause bandwidth starvation.

Custom queuing (CQ) is appropriate for voice, video, and mission-critical traffic, but it can be
difficult to balance the queues to avoid bandwidth starvation of lower-priority queues. CQ is a form
of weighted round robin (WRR) queuing. With round robin (RR) queuing, you configure multiple
queues of equal priority and you assign traffic to each queue. Because each queue has equal
priority, each queue takes turns sending traffic over the interface. With WRR queuing, you can
assign a weight value to each queue whereby each queue can send a number of packets relative
to their weight values. CQ allows you to configure each queue with a specific byte value whereby
each queue can send that many bytes before the next queue can send traffic.

"Pass Any Exam. Any Time." - www.actualtests.com 222


Cisco 200-310 Exam
Although weighted fair queuing (WFQ) can be used for voice, video, and mission-critical traffic, it
does not provide the bandwidth guarantees or the strict-priority queues provided by LLQ. WFQ is
used by default on Cisco routers for serial interfaces at 2.048 Mbps or lower. WFQ addresses the
jitter and delay problems inherent with FIFO queuing, and it addresses the bandwidth starvation
problem inherent with PQ. Traffic flows are identified by WFQ based on source and destination IP
addresses, port number, protocol number, and Type of Service (ToS). Although WFQ is easy to
configure, it is not supported on high-speed links.

Class-based WFQ (CBWFQ) can be used for voice, video, and mission-critical traffic; however, it
does not provide the delay guarantees provided by LLQ, because CBWFQ does not provide
support for strict-priority queues. CBWFQ improves upon WFQ by enabling the creation of up to
64 custom traffic classes, each with a guaranteed minimum bandwidth. Bandwidth can be
allocated as a value in Kbps, by a percentage of bandwidth, or by a percentage of the remaining
bandwidth. Unlike with PQ, bandwidth starvation does not occur with CBWFQ.

Reference:

CCDA 200-310 Official Cert Guide, Chapter 6, Low-Latency Queuing, p. 235

Cisco: Enterprise QoS Solution Reference Network Design Guide: Queuing and Dropping
Principles

Cisco: Congestion Management Overview: Low Latency Queueing

QUESTION NO: 140

Which of the following are not true of the access layer of a hierarchical design? (Choose three.)

A.
It provides address summarization.

B.
It aggregates LAN wiring closets.

C.
It isolates the distribution and core layers.

D.
It performs Layer 2 switching.

E.
It performs NAC for end users.

"Pass Any Exam. Any Time." - www.actualtests.com 223


Cisco 200-310 Exam
Answer: A,B,C
Explanation:
Section: Enterprise Network Design

The access layer typically performs Open Systems Interconnection (OSI) Layer 2 switching and
Network Admission Control (NAC) for end users. The access layer is the network hierarchical
layer where end-user devices connect to the network. Port security and Spanning Tree Protocol
(STP) toolkit features like PortFast are typically implemented in the access layer.

The distribution layer of a hierarchical design, not the access layer, provides address
summarization, aggregates LAN wiring closets, and aggregates WAN connections. The
distribution layer is used to connect the devices at the access layer to those in the core layer.
Therefore, the distribution layer isolates the access layer from the core layer. In addition to these
features, the distribution layer can also be used to provide policy-based routing, security filtering,
redundancy, load balancing, Quality of Service (QoS), virtual LAN (VLAN) segregation of
departments, inter-VLAN routing, translation between types of network media, routing protocol
redistribution, and more.

The core layer of a hierarchical design, not the access layer, is also known as the backbone layer.
The core layer is used to provide connectivity to devices connected through the distribution layer.
In addition, it is the layer that is typically connected to enterprise edge modules. Cisco
recommends that the core layer provide fast transport, high reliability, redundancy, fault tolerance,
low latency, limited diameter, and QoS. However, the core layer should not include features that
could inhibit CPU performance. For example, packet manipulation that results from some security,
QoS, classification, or inspection features can be a drain on resources.

Reference:

CCDA 200-310 Official Cert Guide, Chapter 2, Access Layer, pp. 44-46

Cisco: High Availability Campus Network Design-Routed Access Layer using EIGRP or OSPF:
Hierarchical Design

QUESTION NO: 141

View the Exhibit.

"Pass Any Exam. Any Time." - www.actualtests.com 224


Cisco 200-310 Exam

You administer the network shown above. All the routers run EIGRP. Automatic summarization is
disabled throughout the network. You want to optimize the routing tables where possible.

On which routers should you enable automatic summarization? (Choose three.)

A.
RouterA

B.
RouterB

C.
RouterC

D.
RouterD

E.
RouterE

Answer: B,C,E
Explanation:
Section: Addressing and Routing Protocols in an Existing Network

You should enable automatic summarization on RouterB, RouterC, and RouterE. A summary
route is used to advertise a group of contiguous networks as a single route, thus reducing the size
of the routing table. Some routing protocols, such as Enhanced Interior Gateway Routing Protocol
(EIGRP) and Routing Information Protocol version 2 (RIPv2), automatically summarize routes on
classful network boundaries.

"Pass Any Exam. Any Time." - www.actualtests.com 225


Cisco 200-310 Exam
RouterB will advertise a 10.0.0.0/8 summary route to RouterE, and RouterE will advertise the
same summary route to the other routers on the network. Because no other router on the network
contains any part of the 10.0.0.0/8 Class A address space, all other routers will send all traffic
destined for the 10.0.0.0/8 network to RouterE, which will route the traffic to RouterB.

RouterC will advertise the 192.168.0.0/24 network to RouterE. Because the other routers on the
network do not contain any part of the 192.168.0.0/24 Class C address space, they will send all
traffic destined for the 192.168.0.0/24 network to RouterE, which will route the traffic to RouterC.
The point-to-point links between routers belong to address spaces that do not overlap with each
other or with the 192.168.0.0/24 network.

When RouterE receives the 172.16.1.0/24 route from RouterA and the 172.16.2.0/24 route from
RouterD, RouterE will advertise a summarized 172.16.0.0/16 route to RouterB and RouterC.
Because RouterB and RouterC do not contain any part of the 172.16.0.0/16 address space, they
will send all traffic destined for the 172.16.0.0/16 network to RouterE. RouterE will then route the
traffic to the appropriate next-hop router.

You should not enable automatic summarization on RouterA and RouterD. Automatic
summarization can cause problems when classful networks are discontiguous within a network
topology. A discontiguous subnet exists when a summarized route advertises one or more subnets
that should not be reachable through that route. Therefore, when discontiguous networks in the
same subnet exist in a topology, you should disable automatic summarization with the no auto-
summary command. When you disable automatic summarization, the routing protocol can
advertise the actual networks instead of the classful summary. The network diagram shows that
both RouterA and RouterD are configured with different parts of the 172.16.0.0/16 Class B
address space. If automatic summarization were enabled on them, RouterA and RouterD would both advertise an identical 172.16.0.0/16 summary route to RouterE, recreating the discontiguous-subnet problem.
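The classful-boundary behavior described above can be checked with Python's `ipaddress` module. The helper name is ours, not part of any Cisco tooling:

```python
import ipaddress

def classful_summary(network):
    """Return the classful network that auto-summary would advertise
    for a subnet: /8 for Class A, /16 for Class B, /24 for Class C."""
    net = ipaddress.ip_network(network)
    first_octet = int(str(net.network_address).split(".")[0])
    prefix = 8 if first_octet < 128 else 16 if first_octet < 192 else 24
    return net.supernet(new_prefix=prefix)

# RouterB's 10.0.0.0/8 space summarizes safely, but RouterA's and
# RouterD's subnets collapse to the same Class B summary -- the
# discontiguous-subnet problem:
print(classful_summary("172.16.1.0/24"))  # 172.16.0.0/16
print(classful_summary("172.16.2.0/24"))  # 172.16.0.0/16
```

Because both routers would emit the identical 172.16.0.0/16 route, RouterE could not tell which router owns which subnet, which is why automatic summarization must stay disabled on RouterA and RouterD.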

Reference:

CCDA 200-310 Official Cert Guide, Chapter 10, EIGRP Design, p. 404

CCDA 200-310 Official Cert Guide, Chapter 11, Route Summarization, pp. 455-458

Cisco: EIGRP Commands: auto-summary (EIGRP)

QUESTION NO: 142

Which of the following is a hierarchical routing protocol that does not support automatic
summarization?

A.
"Pass Any Exam. Any Time." - www.actualtests.com 226
Cisco 200-310 Exam
RIPv1

B.
RIPv2

C.
OSPF

D.
EIGRP

Answer: C
Explanation:
Section: Addressing and Routing Protocols in an Existing Network

Open Shortest Path First (OSPF) is a hierarchical, link-state routing protocol that does not support
automatic summarization. However, OSPF can be configured to summarize routes at border
routers or by using redistribution summarization. OSPF divides an autonomous system (AS) into
areas. These areas can be used to limit routing updates to one portion of the network, thereby
keeping routing tables small and update traffic low. Only OSPF routers in the same hierarchical
area form adjacencies. Hierarchical design provides for efficient performance and scalability.
Although OSPF is more difficult to configure, it converges more quickly than most other routing
protocols.

Enhanced Interior Gateway Routing Protocol (EIGRP) is a hybrid routing protocol that combines
the best features of distance-vector and link-state routing protocols. Unlike OSPF, EIGRP
supports automatic summarization and can summarize routes on any EIGRP interface. However,
both OSPF and EIGRP converge faster than other routing protocols and support manual
configuration of summary routes.

Routing Information Protocol version 1 (RIPv1) and RIPv2 are not hierarchical routing protocols.
RIPv1 and RIPv2 are distance-vector routing protocols that use hop count as a metric. By default,
RIP sends out routing updates every 30 seconds, and the routing updates are propagated to all
RIP routers on the network.

Reference:

CCDA 200-310 Official Cert Guide, Chapter 11, OSPFv2 Summary, p. 439

Cisco: Open Shortest Path First

"Pass Any Exam. Any Time." - www.actualtests.com 227


Cisco 200-310 Exam
QUESTION NO: 143

Which of the following statements is not true?

A.
The access layer should not contain physically connected hosts.

B.
The access layer provides NAC.

C.
The core layer should provide fast convergence.

D.
The core layer should provide high resiliency.

E.
The distribution layer provides inter-VLAN routing.

F.
The distribution layer provides route filtering.

Answer: A
Explanation:
Section: Enterprise Network Design

The access layer should contain physically connected hosts because it is the tier at which end
users connect to the network. The access layer serves as a media termination point for endpoints
such as servers and hosts. Because access layer devices provide access to the network, the
access layer is the ideal place to perform user authentication.

The hierarchical model divides the network into three distinct components: the access layer, the distribution layer, and the core layer.

The access layer provides Network Admission Control (NAC). NAC is a Cisco feature that
prevents hosts from accessing the network if they do not comply with organizational requirements,
such as having an updated antivirus definition file. NAC Profiler automates NAC by automatically
discovering and inventorying devices attached to the LAN.

The core layer of the hierarchical model is primarily associated with low latency and high reliability.
It is the only layer of the model that should not contain physically connected hosts. As the network
backbone, the core layer provides fast convergence and typically provides the fastest switching
path in the network. The functionality of the core layer can be collapsed into the distribution layer if
the distribution layer infrastructure is sufficient to meet the design requirements. Thus the core
layer does not contain physically connected hosts. For example, in a small enterprise campus
implementation, a distinct core layer may not be required, because the network services normally provided by the core layer are provided by a collapsed core layer instead.

The distribution layer provides route filtering and inter-VLAN routing. The distribution layer serves
as an aggregation point for access layer network links. In addition, the distribution layer can
contain connections to physical hosts. Because the distribution layer is the intermediary between
the access layer and the core layer, the distribution layer is the ideal place to enforce security
policies, to provide Quality of Service (QoS), and to perform tasks that involve packet
manipulation, such as routing. Summarization and next-hop redundancy are also performed in the
distribution layer.

Reference:

CCDA 200-310 Official Cert Guide, Chapter 2, Access Layer, pp. 44-46

Cisco: Campus Network for High Availability Design Guide: Access Layer

QUESTION NO: 144

At which of the following layers of the OSI model does CDP operate?

A.
Application layer

B.
Transport layer

C.
Network Layer

D.
Data Link layer

E.
Physical Layer

Answer: D
Explanation:
Section: Design Methodologies

Cisco Discovery Protocol (CDP) operates at the Data Link layer, or Layer 2, of the Open Systems Interconnection (OSI) model. CDP is a proprietary protocol used by Cisco devices to detect neighboring Cisco devices. For example, Cisco switches use CDP to determine whether a directly
connected Voice over IP (VoIP) phone is manufactured by Cisco or by a third party. CDP packets
are broadcast from a CDP-enabled device to a multicast address. Each directly connected CDP-
enabled device receives the broadcast and uses that information to build a CDP table. The CDP
table contains information such as the neighbor's device ID, platform, capabilities, and port ID.

Although CDP does not operate at the Physical layer, or Layer 1, it relies on a fully operational
Physical layer. CDP packets are encapsulated by the CDP process on a Cisco device and then
passed to the Physical layer for transmission onto the physical medium, typically as electrical or optical pulses that represent the bits of data. If CDP information is not being exchanged
between directly connected devices, you should first check for Physical layer connectivity issues
before moving on to troubleshoot potential Data Link layer connectivity issues.

CDP does not operate at any OSI layer above the Data Link layer, such as the Network layer
(Layer 3), Transport layer (Layer 4), or Application layer (Layer 7). One of the strengths of CDP is
that its operation is network-protocol agnostic, meaning that CDP is not dependent on any
particular Network layer protocol addressing scheme, such as IP addressing. For example, two
directly connected devices with misconfigured IP addressing can still communicate and share
information.

Reference:

CCDA 200-310 Official Cert Guide, Chapter 15, CDP, p. 629

Cisco: Configuring Cisco Discovery Protocol

QUESTION NO: 145

Which of the following address and subnet mask combinations summarizes the smallest network?

A.
172.16.1.0/8

B.
172.16.2.0/16

C.
172.20.3.0/21

D.
172.31.148.0/24

"Pass Any Exam. Any Time." - www.actualtests.com 230


Cisco 200-310 Exam
Answer: D
Explanation:
Section: Addressing and Routing Protocols in an Existing Network

Of the available choices, the 172.31.148.0/24 network address and subnet mask combination
summarizes the smallest network. The /24 notation indicates that a 24-bit subnet mask
(255.255.255.0) is used. A 24-bit mask is typically used to represent a single Class C-sized block of 256 addresses (254 usable hosts). In this scenario, the 172.31.148.0/24 network is a 256-address subnet of the Class B 172.31.0.0 network. This subnet's network address is 172.31.148.0. Its broadcast
address is 172.31.148.255. In its classful form, this network would be represented by a /16 subnet
mask (255.255.0.0) and contain a total of 65,534 hosts. The Class B range network address would
be 172.31.0.0. The broadcast address would be 172.31.255.255.

The 172.20.3.0/21 network address and subnet mask combination does not summarize the
smallest network. The /21 notation indicates that a 21-bit subnet mask (255.255.248.0) is used,
which can summarize two /22 networks, four /23 networks, eight /24 networks, and so on. In this
scenario, the 172.20.3.0/21 notation yields a subnet that can support 2,046 hosts. This subnet's
network address is 172.20.0.0. Its broadcast address is 172.20.7.255. The next subnet of
addresses from the Class B range would thus have a network address of 172.20.8.0. If this subnet
were also a /21, it would have a broadcast address of 172.20.15.255.

The 172.16.2.0/16 network address and subnet mask combination does not summarize the
smallest network. The /16 notation indicates that a 16-bit subnet mask (255.255.0.0) is used. This
subnet mask in fact encompasses the entire classful 172.16.0.0 network range. Therefore, the
network address of the 172.16.2.0/16 network is 172.16.0.0. Its broadcast address is
172.16.255.255, for a total of 65,534 hosts.

The 172.16.1.0/8 network address and subnet mask combination does not summarize the
smallest network. The /8 notation indicates that an eight-bit subnet mask (255.0.0.0) is used. This subnet mask encompasses a range of 16,777,216 addresses (16,777,214 usable hosts). For example, the Class A
10.0.0.0 network is a /8 range of IP addresses. In this scenario, the 172.16.1.0/8 network would
include every address in the range from 172.0.0.0 through 172.255.255.255.
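The network, broadcast, and host-count arithmetic described above can be checked with Python's standard ipaddress module; the following is a minimal sketch using the addresses from the answer choices:

```python
import ipaddress

# /24: a 256-address subnet of the Class B 172.31.0.0 network
n24 = ipaddress.ip_network("172.31.148.0/24")
print(n24.network_address, n24.broadcast_address, n24.num_addresses)
# 172.31.148.0 172.31.148.255 256

# /21: 172.20.3.0 is not itself a valid /21 network address, so
# strict=False derives the enclosing subnet, 172.20.0.0/21
n21 = ipaddress.ip_network("172.20.3.0/21", strict=False)
print(n21.network_address, n21.broadcast_address, n21.num_addresses - 2)
# 172.20.0.0 172.20.7.255 2046
```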

Reference:

CCDA 200-310 Official Cert Guide, Chapter 8, IPv4 Address Subnets, pp. 302-310

Cisco: IP Routing Frequently Asked Questions: Q. What does route summarization mean?

Cisco: IP Addressing and Subnetting for New Users

QUESTION NO: 146 DRAG DROP

From the left, select the characteristics that apply to a large branch office, and drag them to the
right.

Answer:

Explanation:


Section: Enterprise Network Design Explanation

A large branch office is an office that contains between 100 and 200 users, typically uses Rapid
Per-VLAN Spanning Tree Plus (RPVST+) and external access switches, includes a distribution
layer, and uses both redundant links and redundant devices. Cisco defines a large branch office
as an office that contains between 100 and 200 users and that implements a three-tier design. A
triple-tier design separates LAN and WAN termination into multiple devices. In addition, a triple-tier
design separates services, such as firewall functionality and intrusion detection. A large branch
office typically uses at least one dedicated device for each network service. Whereas small and
medium branch offices consist of only an edge layer and an access layer, the large branch office
also includes a distribution layer. RPVST+ is an advanced spanning tree algorithm that can
prevent loops on a switch that handles multiple virtual LANs (VLANs). RPVST+ is typically
supported only on external switches and advanced routing platforms. External access switches
provide high-density LAN connectivity to individual hosts. External access switches typically
aggregate their links on distribution layer switches.

Cisco defines a medium branch office as an office that contains between 50 and 100 users and
that implements a two-tier design. A dual-tier design separates LAN and WAN termination into
multiple devices. A medium branch office typically uses two Integrated Services Routers (ISRs),
such as the ISR G2, with one ISR serving as a connection to the headquarters location and the
second serving as a connection to the Internet. In addition, the two ISRs are typically connected
by at least one external switch that also serves as an access layer switch for the branch users.

Cisco defines a small branch office as an office that contains up to 50 users and that implements a
one-tier design. A single-tier design combines LAN and WAN termination into a single ISR. A redundant link to the access layer can be created if the ISR uses an EtherChannel topology; a trunked topology offers no link redundancy. Because a small branch office uses a
single ISR to provide LAN and WAN services, an external access switch, such as the Cisco 2960,
is not necessary. In addition, RPVST+ is not supported on most ISR platforms. Similar to a medium
branch office, a small branch office contains no Layer 2 loops in its topology.
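The user-count thresholds above can be summarized as a simple lookup. The function below is only an illustration of the sizing rules described in the text, not a Cisco tool:

```python
# Branch profile sizing per the Cisco guidelines described above:
# up to 50 users = small (single-tier), 50-100 = medium (dual-tier),
# 100-200 = large (three-tier).
def branch_profile(users: int) -> str:
    if users <= 50:
        return "small (single-tier)"
    if users <= 100:
        return "medium (dual-tier)"
    if users <= 200:
        return "large (three-tier)"
    return "beyond branch profile sizing"

print(branch_profile(40))   # small (single-tier)
print(branch_profile(150))  # large (three-tier)
```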

Reference:

CCDA 200-310 Official Cert Guide, Chapter 7, Enterprise Branch Profiles, pp. 275-279

Cisco: LAN Baseline Architecture Branch Office Network Reference Design Guide: Large Office
Design (PDF)

Cisco: LAN Baseline Architecture Branch Office Network Reference Design Guide: Branch LAN
Design Options (PDF)

QUESTION NO: 147

Which of the following is a Cisco-proprietary link-bundling protocol?

A.
HSRP

B.
LACP

C.
PAgP

D.
VRRP

Answer: C
Explanation:
Section: Enterprise Network Design Explanation

Port Aggregation Protocol (PAgP) is a Cisco-proprietary link-bundling protocol. Configuring multiple physical ports into a bundle, which is also known as a port group or an EtherChannel
group, enables a switch to use the multiple physical ports as a single connection between a switch
and another device. Because bundled links function as a single logical port, Spanning Tree
Protocol (STP) is automatically disabled on the physical ports in the bundle; however, spanning
tree must be running on the associated port channel virtual interface to prevent bridging loops.

Typically, a link bundle is configured for high-bandwidth transmissions between switches and
servers. When a link bundle is configured, traffic is load balanced across all links in the port group,
which provides fault tolerance. If a link in the port group goes down, that link's traffic load is
redistributed across the remaining links.

PAgP cannot be used to create an EtherChannel on non-Cisco switches. In addition, PAgP cannot
be used to create an EtherChannel link between a Cisco switch and a non-Cisco switch, because
the EtherChannel protocol must match on each side of the EtherChannel link.

Link Aggregation Control Protocol (LACP) is a link-bundling protocol that is defined in the Institute
of Electrical and Electronics Engineers (IEEE) 802.3ad standard, not by Cisco. Because LACP is a
standards-based protocol, it can be used between Cisco and non-Cisco switches.

Both PAgP and LACP work by dynamically grouping physical interfaces into a single logical link.
However, LACP is newer than PAgP and offers somewhat different functionality. Like PAgP, LACP
identifies neighboring ports and their group capabilities; however, LACP goes further by assigning
roles to the link bundle's endpoints. LACP enables a switch to determine which ports are actively
participating in the bundle at any given time and to make operational decisions based on those
determinations.

Neither Hot Standby Router Protocol (HSRP) nor Virtual Router Redundancy Protocol (VRRP) is a
link-bundling protocol. HSRP is a Cisco-proprietary first-hop redundancy protocol (FHRP). VRRP
is an Internet Engineering Task Force (IETF) standard FHRP. Both HSRP and VRRP can be used
to configure failover in case a primary default gateway goes down.

Reference:

Cisco: IEEE 802.3ad Link Bundling: Benefits of IEEE 802.3ad Link Bundling

QUESTION NO: 148

Which of the following methods is always used by a new LAP to discover a WLC?

A.
broadcast

B.
OTAP

C.
DHCP

D.
DNS

E.
NVRAM

Answer: C
Explanation:
Section: Considerations for Expanding an Existing Network Explanation

When you add a lightweight access point (LAP) to a wireless network that uses Lightweight
Access Point Protocol (LWAPP), the LAP goes through a sequence of steps to discover and
register with a wireless LAN controller (WLC) on the network. Because a new LAP has not been
configured with a static IP address, the LAP will first attempt to obtain an address from a Dynamic
Host Configuration Protocol (DHCP) server. When the LAP receives an IP address, the LAP scans
the DHCP server response for option 43, which identifies the address of a WLC. Although this
method is always the first action taken by a new LAP when it attempts to discover a WLC, the LAP
will also use other methods.

When the LAP receives an IP address from the DHCP server, the LAP can also receive other
configuration parameters, such as the IP address of a Domain Name System (DNS) server. If a
DNS server is configured, the LAP will attempt to resolve the host name CISCO-LWAPP-
CONTROLLER.localdomain, where localdomain is the fully qualified domain name (FQDN) in use.
Once the LAP has resolved the name to one or more IP addresses, the LAP will send an LWAPP
discovery message to all of the IP addresses simultaneously.

Alternatively, a LAP can use Over-the-Air-Provisioning (OTAP) to discover a WLC. OTAP is enabled by default on a new LAP. With OTAP, LAPs periodically transmit neighbor messages that
contain the IP address of a WLC. A new LAP that has OTAP enabled can scan the wireless
network for neighbor messages until the LAP locates the IP address of a local WLC. Once the
LAP has discovered the IP address of a WLC, the LAP will send a Layer 3 LWAPP discovery
request directly to the WLC.

If Layer 2 LWAPP mode is supported, a new LAP can attempt to locate a WLC by broadcasting a
Layer 2 LWAPP discovery request message. If there are no WLCs on that network segment or if a
WLC does not respond to the Layer 2 broadcast, the LAP will then broadcast a Layer 3 LWAPP
discovery request message.

A new LAP will not have the address of a WLC stored in nonvolatile random access memory
(NVRAM) by default. However, you can configure a LAP with the IP address of a WLC to facilitate
the discovery of a WLC when the LAP is installed. In addition, if a LAP has ever joined with a
WLC, it may store the previously discovered WLC IP address as a primary, secondary, or tertiary
WLC.
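As a rough illustration of the DHCP option 43 step, the sketch below parses WLC addresses from an option 43 payload. Cisco documents the lightweight AP encoding as a TLV with sub-option type 0xF1 followed by a list of 4-byte controller IP addresses; the sample payload here is hypothetical:

```python
def parse_wlc_addresses(option43: bytes) -> list:
    """Parse WLC management IPs from a DHCP option 43 payload.

    Cisco encodes the controller list for lightweight APs as a TLV:
    sub-option type 0xF1, a length of 4 * <number of controllers>,
    then the 4-byte IPv4 address of each controller.
    """
    addresses = []
    i = 0
    while i + 2 <= len(option43):
        sub_type, length = option43[i], option43[i + 1]
        value = option43[i + 2:i + 2 + length]
        if sub_type == 0xF1:  # WLC address list
            for j in range(0, len(value) - len(value) % 4, 4):
                addresses.append(".".join(str(b) for b in value[j:j + 4]))
        i += 2 + length
    return addresses

# Hypothetical payload: two controllers, 10.0.0.5 and 10.0.0.6,
# encoded as f1 08 0a000005 0a000006
payload = bytes.fromhex("f1080a0000050a000006")
print(parse_wlc_addresses(payload))  # ['10.0.0.5', '10.0.0.6']
```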

Reference:

Cisco: Lightweight AP (LAP) Registration to a Wireless LAN Controller (WLC): Register the LAP
with the WLC


QUESTION NO: 149

Which of the following are BGP attributes that are used to determine best path? (Choose three.)

A.
confederation

B.
local preference

C.
route reflector

D.
MED

E.
weight

Answer: B,D,E
Explanation:
Section: Addressing and Routing Protocols in an Existing Network Explanation

Local preference, multi-exit discriminator (MED), and weight are all Border Gateway Protocol
(BGP) attributes that are used to determine the best path to a destination. BGP evaluates the following criteria, in order, during path selection:

When determining the best path, a BGP router first chooses the route with the highest weight. The
weight value is significant only to the local router; it is not advertised to neighbor routers.

When weight values are equal, a BGP router chooses the route with the highest local preference.
The local preference value is advertised to internal iBGP neighbor routers to influence routing
decisions made by those routers.

When local preferences are equal, a BGP router chooses locally originated paths over externally
originated paths. Locally originated paths that have been created by issuing the network or
redistribute command are preferred over locally originated paths that have been created by issuing
the aggregate-address command.

If multiple paths to a destination still exist, a BGP router chooses the route with the shortest AS
path attribute. The AS path attribute contains a list of the AS numbers (ASNs) that a route passes
through.

If multiple paths have the same AS path length, a BGP router chooses the lowest origin type. An
origin type of i, which is used for IGPs, is preferred over an origin type of e, which is used for
Exterior Gateway Protocols (EGPs). Both of these origin types are preferred over an origin type of ?, which
is used for incomplete routes where the origin is unknown or the route was redistributed into BGP.

If origin types are equal, a BGP router chooses the route with the lowest MED. If MED values are
equal, a BGP router chooses eBGP routes over iBGP routes. If there are multiple eBGP paths, or
multiple iBGP paths if no eBGP paths are available, a BGP router chooses the route with the
lowest IGP metric to the next-hop router. If IGP metrics are equal, a BGP router chooses the
oldest eBGP path, which is typically the most stable path.

Finally, if route ages are equal, a BGP router chooses the path that comes from the router with the lowest router ID (RID). The RID can be manually configured by issuing the bgp router-id command. If the RID
is not manually configured, the RID is the highest loopback IP address on the router. If no
loopback address is configured, the RID is the highest IP address from among a router's available
interfaces.

Neither a confederation nor a route reflector is a BGP attribute. Confederations and route
reflectors are both a means of mitigating performance issues that arise from large, full-mesh iBGP
configurations. A full-mesh configuration enables each router to learn each iBGP route
independently without passing through a neighbor. However, a full-mesh configuration requires the
most administrative effort to configure. A confederation enables an AS to be divided into discrete
units, each of which acts like a separate AS. Within each confederation, the routers must be fully
meshed unless a route reflector is established. A route reflector can be used to pass iBGP routes
between iBGP routers, eliminating the need for a full-mesh configuration. However, it is important
to note that route reflectors advertise best paths only to route reflector clients. In addition, if
multiple paths exist, a route reflector will always advertise the exit point that is closest to the route
reflector.
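The first few tie-breakers in the decision process above can be modeled as a Python sort key. This is a simplified sketch; the attribute names and example paths are illustrative, and real BGP evaluates additional criteria (locally originated paths, eBGP versus iBGP, IGP metric to the next hop, and so on):

```python
# IGP origin (i) preferred over EGP (e), which is preferred over incomplete (?)
ORIGIN_RANK = {"i": 0, "e": 1, "?": 2}

def bgp_sort_key(path):
    """Return a tuple so that min() picks the best path."""
    return (
        -path["weight"],             # highest weight wins
        -path["local_pref"],         # then highest local preference
        len(path["as_path"]),        # then shortest AS path
        ORIGIN_RANK[path["origin"]], # then lowest origin type
        path["med"],                 # then lowest MED
    )

# Hypothetical candidate paths to the same prefix
paths = [
    {"weight": 0, "local_pref": 100, "as_path": [65001, 65002], "origin": "i", "med": 10},
    {"weight": 0, "local_pref": 200, "as_path": [65001, 65002, 65003], "origin": "i", "med": 50},
    {"weight": 0, "local_pref": 100, "as_path": [65001], "origin": "e", "med": 0},
]
best = min(paths, key=bgp_sort_key)
print(best["local_pref"])  # 200: local preference decides before AS-path length
```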

Reference:

CCDA 200-310 Official Cert Guide, Chapter 11, BGP Attributes, Weight, and the BGP Decision
Process, pp. 449-455

CCDA 200-310 Official Cert Guide, Chapter 11, Route Reflectors, pp. 446-448

CCDA 200-310 Official Cert Guide, Chapter 11, Confederations, pp. 448-449

Cisco: BGP Best Path Selection Algorithm

Cisco: Integrity Checks: IBGP Neighbors Not Fully Meshed

QUESTION NO: 150

Which of the following is an advantage of SSL VPNs over IPSec VPNs?

A.
SSL VPNs are encrypted.

B.
SSL VPNs are authenticated.

C.
SSL VPNs do not require a preinstalled VPN client.

D.
SSL VPNs provide direct access to the network.

Answer: C
Explanation:
Section: Enterprise Network Design Explanation

Secure Sockets Layer (SSL) virtual private networks (VPNs) do not require a preinstalled VPN
client. To connect to internal network resources over an SSL VPN, a user must connect to an SSL
VPN device by using a web browser. After a user provides valid authentication credentials to the
SSL VPN device, an encrypted connection is established and the user is granted access to
network resources.

SSL VPNs are encrypted, but so are IP Security (IPSec) VPNs. SSL VPNs use Transport Layer
Security (TLS) for encryption. IPSec VPNs use Encapsulating Security Payload (ESP) for
encryption.

SSL VPNs are authenticated, but so are IPSec VPNs. SSL VPNs can be configured to use a
variety of authentication mechanisms, including local authentication and Remote Authentication
Dial-In User Service (RADIUS) authentication. IPSec VPNs can also use a variety of
authentication mechanisms, including Kerberos, preshared keys (PSKs), and digital certificates.

SSL VPNs do not provide direct access to the network; however, they do provide access to
network resources. By contrast, IPSec VPNs do provide direct network access. The primary
concern with both SSL VPNs and IPSec VPNs is the lack of administrative control over desktop
computers that connect to the network. A user who has installed the proper VPN client software or
knows the IP address of an SSL VPN server can authenticate and connect from any computer,
including those that do not fully comply with company policies. For example, if a user were to
connect from a computer that does not have adequate antivirus or firewall protection, the network
could be exposed to any malware threats that exist on the computer.

Reference:

CCDA 200-310 Official Cert Guide, Chapter 7, VPN Benefits, p. 263

Cisco: Virtual Private Network (VPN)

QUESTION NO: 151

Which of the following branch office WAN connectivity methods is the most expensive?

A.
DWDM

B.
ISDN

C.
MPLS

D.
Metro Ethernet

Answer: A
Explanation:
Section: Enterprise Network Design Explanation

Of the branch office WAN connectivity methods provided, dense wavelength division multiplexing
(DWDM) is the most expensive. DWDM is a leased-line WAN technology used to increase the
amount of data signals that a single fiber strand can carry. To accomplish this, DWDM can transfer
data of varying light wavelengths on up to 160 channels per single fiber strand. The spacing of
DWDM channels is highly compressed, requiring a complex transceiver design and therefore
making the technology very expensive to implement.

Metro Ethernet is not as expensive as DWDM. Metro Ethernet is a WAN technology that is
commonly used to connect networks in the same metropolitan area. For example, if a company
has multiple branch offices within the same city, the company can use Metro Ethernet to connect
the branch offices to the corporate headquarters. Metro Ethernet providers typically provide up to
1,000 Mbps of bandwidth.

Multiprotocol Label Switching (MPLS) is not as expensive as DWDM. MPLS is a shared WAN
technology that makes routing decisions based on information contained in a fixed-length label. In an MPLS virtual private network (VPN), each customer site is provided with its own label by the
service provider. This enables the customer site to use its existing IP addressing scheme internally
while allowing the service provider to manage multiple sites that might have conflicting IP address
ranges. The service provider then forwards traffic over shared lines between the sites in the VPN
according to the routing information that is passed to each provider edge router.

Integrated Services Digital Network (ISDN) is an inexpensive circuit-switched WAN technology. However, ISDN offers less than 2 Mbps of bandwidth, so it is typically used only for backup WAN
connectivity, not for branch office connectivity. Circuit-switched WAN technologies rely on
dedicated physical paths between nodes in a network. For example, when RouterA needs to
contact RouterB, a dedicated path is established between the routers and then data is transmitted.
While the circuit is established, RouterA cannot use the WAN link to transmit any data that is not
destined for networks accessible through RouterB. When RouterA no longer has data for RouterB,
the circuit is torn down until it is needed again.

Reference:

Cisco: Introduction to DWDM Technology (PDF)

QUESTION NO: 152

Your supervisor wants you to reduce the range of IP addresses available from a NAT pool that is
configured on the department router, RouterA. You plan to do this by enabling internal hosts to share a single external IP address.

Which of the following should you configure on RouterA?

A.
NAT overloading

B.
NAT overlapping

C.
static NAT

D.
dynamic NAT

Answer: A
Explanation:
Section: Addressing and Routing Protocols in an Existing Network Explanation
You should configure Network Address Translation (NAT) overloading on RouterA to enable
internal hosts to share a single external IP address. NAT overloading appends port numbers to IP
addresses to enable multiple internal hosts to share the same external IP address. You can issue
the ip nat inside source list access-list interface outside-interface overload command to configure
NAT overloading with a single inside global address, or you can issue the ip nat inside source list
access-list pool nat-pool overload command to configure NAT overloading with a NAT pool.

You do not need to configure dynamic NAT, because the network in this scenario already uses
dynamic NAT. Dynamic NAT translates inside local addresses to inside global addresses; inside
global addresses are allocated from a pool. To create a NAT pool, you should issue the ip nat pool
nat-pool start-ip end-ip {netmask mask | prefix-length prefix} command. To enable translation of
inside local addresses, you should issue the ip nat inside source-list access-list pool nat-pool
[overload] command.

You do not need to configure static NAT. Static NAT translates a single inside local address to a
single inside global address, or a single outside local address to a single outside global address.
You can configure a static inside local-to-inside global IP address translation by issuing the ip nat
inside source static inside-local inside-global command. To configure a static outside local-to-
outside global IP address translation, you should issue the ip nat outside source static outside-
global outside-local command. Unlike dynamic NAT configurations, which are created in the NAT
table when traffic is generated, static NAT configurations are always contained in the NAT table.

When a NAT router receives an Internet-bound packet from a local host, the NAT router performs the following tasks: it checks the NAT table for an existing translation for the packet's inside local address; if no translation exists, it allocates an available inside global address from the NAT pool and records the mapping in the NAT table; it then rewrites the packet's source address as the inside global address and forwards the packet. Return traffic sent to the inside global address is translated back to the inside local address.

When all the inside global addresses in the NAT pool are mapped, no other inside local hosts will
be able to communicate on the Internet unless NAT overloading, also known as Port Address
Translation (PAT), is configured. When NAT overloading is configured, an inside local address,
along with a port number, is mapped to an inside global address. The NAT router uses port
numbers to keep track of which packets belong to each host.


You do not need to configure NAT overlapping. You should use NAT overlapping when the
addresses on the internal network conflict with the addresses on another network. The internal
addresses must be translated to unique addresses on the external network, and addresses on the
external network must be translated to unique addresses on the internal network; the translation
can be performed either through static or dynamic NAT. Nothing in this scenario indicates that you
are configuring the router for NAT overlapping.
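A toy model of NAT overloading may help illustrate how port numbers let many inside local addresses share one inside global address. The class below is only a sketch, not Cisco's implementation; the addresses and starting port are made up:

```python
class PatTable:
    """Minimal sketch of a PAT (NAT overload) translation table."""

    def __init__(self, inside_global: str, first_port: int = 1024):
        self.inside_global = inside_global
        self.next_port = first_port
        # (inside local IP, source port) -> (inside global IP, translated port)
        self.mappings = {}

    def translate(self, inside_local_ip: str, port: int):
        key = (inside_local_ip, port)
        if key not in self.mappings:
            # New flow: map it to the shared global address and a unique port
            self.mappings[key] = (self.inside_global, self.next_port)
            self.next_port += 1
        return self.mappings[key]

pat = PatTable("203.0.113.10")
print(pat.translate("192.168.1.10", 51000))  # ('203.0.113.10', 1024)
print(pat.translate("192.168.1.11", 51000))  # ('203.0.113.10', 1025)
print(pat.translate("192.168.1.10", 51000))  # reuses ('203.0.113.10', 1024)
```

Two hosts using the same source port still receive distinct translated ports, which is how the router keeps track of which return packets belong to each host.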

Reference:

CCDA 200-310 Official Cert Guide, Chapter 8, NAT, pp. 300-302

Cisco: Configuring Network Address Translation: Getting Started: Example: Allowing Internal
Users to Access the Internet

Cisco: Configuring Network Address Translation: Getting Started

QUESTION NO: 153

View the Exhibit.


You administer the network shown in the topology diagram above. RouterA and HostD are
non-Cisco devices. RouterMain, SwitchB, PhoneC, and RouterE are Cisco devices.

Information about which of the following devices will be displayed in the output of the show cdp
neighbors command on RouterMain? (Choose two.)

A.
RouterA

B.
SwitchB

C.
PhoneC

D.
HostD

E.
RouterE

Answer: B,C
Explanation:
Section: Design Objectives Explanation

Information about SwitchB and PhoneC will be displayed in the output of the show cdp neighbors
command on RouterMain. Cisco Discovery Protocol (CDP) is used to advertise and discover Cisco
devices on a local network. CDP is enabled by default on many Cisco devices, but it can be
disabled for security purposes. When CDP is enabled, devices periodically send advertisements
out each CDP-enabled interface. These advertisements contain information about the capabilities
of the device and are sent to a hardware multicast address. Each directly connected CDP-enabled
device receives the advertisement and uses that information to build a CDP table. The CDP table
information can be used in conjunction with data gleaned from command-line interface (CLI)
output, such as that from the traceroute command, to build a topology map of an existing network.

CDP operates at Layer 2, which is the Data Link layer, of the Open Systems Interconnection (OSI)
model. Because it operates at Layer 2, even devices using different Layer 3 protocols can
communicate and share information. The type of information collected by CDP includes the host
name, IP address, port information, device type, and IOS version of neighboring devices. The
following fields appear in the output of the show cdp neighbors command:

The Device ID field indicates the host name, Media Access Control (MAC) address, or serial
number of the neighboring device. The Local Intrfce field indicates the interface on the local
device. The Holdtme field indicates the amount of time remaining before the CDP advertisement is
discarded. The Capability field indicates the type of device, using codes such as R for router, S for switch, H for host, and P for phone.

The Platform field indicates the product number of the neighboring device. The Port ID field
indicates the connected interface on the neighboring device.

Information about RouterA and HostD will not be displayed in the output of the show cdp
neighbors command on RouterMain, because RouterA and HostD are not Cisco devices. CDP will
not display information about non-Cisco devices. However, if Link Layer Discovery Protocol
(LLDP) had been used to build a topology map in this scenario and all of the devices supported
LLDP, RouterA and HostD might have appeared in the LLDP topology table along with SwitchB
and PhoneC. LLDP is an open-standard network discovery protocol described in the Institute of
Electrical and Electronics Engineers (IEEE) 802.1AB standard. LLDP is designed to operate in a
multivendor environment and operates in a manner similar to CDP. Most Cisco platforms support
both CDP and LLDP.

Information about RouterE will not be displayed in the output of the show cdp neighbors command
on RouterMain, because RouterE is not directly connected to RouterMain. Although RouterE is a
Cisco device, RouterE cannot share information over CDP unless it is directly connected to
RouterMain.

Reference:

CCDA 200-310 Official Cert Guide, Chapter 15, CDP, p. 629

Cisco: Cisco IOS Configuration Fundamentals Command Reference, Release 12.2: show cdp
neighbors

QUESTION NO: 154

Which of the following is an advantage of implementing QoS features in hardware, instead of in software?

A.
consistent cross-platform QoS feature support

B.
consistent cross-platform QoS configuration syntax

C.
increased QoS feature support

D.
reduced CPU processing load

Answer: D
Explanation:
Section: Considerations for Expanding an Existing Network Explanation

Reduced CPU processing load is an advantage of implementing Quality of Service (QoS) in hardware instead of in software. Cisco platforms implement QoS features in either hardware or
software, depending on the specific platform. For example, most Cisco switches implement QoS
features only in hardware, whereas most Cisco routers implement QoS features in software. A
limited number of Cisco platforms can implement QoS features in either hardware or software.
Best practice dictates that hardware-based QoS features should be preferred over software-based
QoS features when feasible.

When QoS features are implemented in software, the CPU must dedicate a portion of available
processing power to the configured QoS policies. Depending on the volume of traffic and the
number and complexity of configured QoS policies, it is possible for QoS features to be
responsible for a significant portion of the CPU processing load on a software-based QoS
platform. If the CPU processing load becomes too high, it could impact other operations on the
affected platform, such as responding to network events. In order to ensure that the CPU
processing load does not become too high, Cisco recommends using hardware-based QoS
features when available. Hardware-based QoS features reduce the CPU processing load by
offloading QoS-related processing from the CPU to dedicated hardware, such as QoS-oriented
application-specific integrated circuits (ASICs).

Reference:

CCDA 200-310 Official Cert Guide, Chapter 3, Campus LAN QoS Considerations, pp. 111-112

Cisco: Medianet WAN Aggregation QoS Design 4.0: Hardware versus Software QoS

QUESTION NO: 155

Which of the following tasks can best be achieved with SNMP?

A.
identifying the nearest interface on an adjacent Cisco device

B.
verifying IPv4 connectivity to an adjacent network device

C.
determining the number of bytes transmitted by an interface on a network device

D.
monitoring IPv4 traffic flows through an interface on a Cisco device

Answer: C
Explanation:
Section: Design Methodologies Explanation

Of the choices available, determining the number of bytes transmitted by an interface on a network
device is best achieved with Simple Network Management Protocol (SNMP). SNMP is used to
monitor and manage network devices by collecting data about those devices. The data is stored
on each managed device in a data structure known as a Management Information Base (MIB). A
network management station (NMS) can periodically poll a managed device to accumulate
historical data that can then be analyzed to determine whether the managed device requires
optimization. Because SNMP cannot store historical data locally and requires polling by an NMS to
accrue historical data, it is considered a pull-based network management protocol. By contrast, a
push-based network management protocol can periodically send data to an NMS. The Cisco
NetFlow feature is an example of a push-based protocol.
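As an example of the kind of analysis an NMS performs on polled SNMP data, the sketch below computes interface utilization from two samples of an octet counter (such as ifInOctets). The counter values and polling interval are invented for illustration:

```python
def utilization_percent(octets_t0: int, octets_t1: int,
                        interval_s: int, if_speed_bps: int) -> float:
    """Percent utilization of one traffic direction between two polls."""
    delta_bits = (octets_t1 - octets_t0) * 8  # octets -> bits
    return delta_bits / (interval_s * if_speed_bps) * 100

# 7,500,000 octets transferred in a 300-second polling interval on a
# 10 Mbps interface works out to 2% utilization
print(utilization_percent(1_000_000, 8_500_000, 300, 10_000_000))  # 2.0
```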

Monitoring IP version 4 (IPv4) traffic flows through an interface on a Cisco device is best achieved
with NetFlow, not SNMP. NetFlow is a Cisco IOS feature that can be used to monitor traffic flows.
A traffic flow is defined as a series of packets with the same source IP address, destination IP
address, protocol, and Open Systems Interconnection (OSI) Layer 4 information. NetFlow gathers
flow-based statistics such as packet counts, byte counts, and protocol distribution. The data
gathered by NetFlow is typically exported to management software. You can then analyze the data
to facilitate network planning, customer billing, and traffic engineering.

Identifying the nearest interface on an adjacent Cisco device is best achieved with Cisco
Discovery Protocol (CDP), not SNMP. CDP is a Cisco-proprietary Layer 2 protocol that is
supported on all Cisco-manufactured hardware. CDP-enabled devices periodically send
advertisements out of each CDP-enabled interface. These CDP advertisements contain
information about the sending device and its capabilities. When a CDP-enabled device on the
network segment receives a CDP advertisement, the device updates its internal list of neighboring
CDP-enabled devices.

Verifying IPv4 connectivity to an adjacent network device is best achieved with Internet Control
Message Protocol (ICMP), not SNMP. ICMP uses several message types to provide information
regarding network connectivity between devices. The following types of packets are sent by ICMP:

Reference:

CCDA 200-310 Official Cert Guide, Chapter 15, Simple Network Management Protocol, pp. 619-
624

Cisco: How To Calculate Bandwidth Utilization Using SNMP

Cisco: Performance Management: Best Practices White Paper: Measure Utilization

QUESTION NO: 156

You have configured NIC teaming on a Microsoft Windows Server. The server's NICs are
connected to a Cisco Catalyst 3750 switch that has been configured to use LACP.

Which of the following characteristics will the switch use to load balance traffic?

A.
the source MAC address

B.

the destination MAC address

C.
the source and destination MAC addresses

D.
the source IP address

E.
the destination IP address

F.
the source and destination IP addresses

Answer: A
Explanation:
Section: Considerations for Expanding an Existing Network Explanation

By default, a Catalyst 3750 switch that has been configured to use Link Aggregation Control
Protocol (LACP) will load balance traffic based on the source Media Access Control (MAC)
address of each packet. LACP is a link-bundling protocol that is defined in the Institute of Electrical
and Electronics Engineers (IEEE) 802.3ad standard and can be used to form an EtherChannel link
between a Cisco switch and a third-party device, such as a Microsoft Windows Server configured
to support network interface card (NIC) teaming. NIC teaming is a feature that uses multiple NICs
on a server to function as a single logical link to the network infrastructure. NIC teaming relies on
sharing a single Layer 3 address across a bundle of linked Layer 2 interfaces; therefore, NIC teaming is supported only in a Layer 2 access model, not in a Layer 3 access model.

EtherChannel, which is Cisco's name for Ethernet link aggregation, bundles multiple, individual
Ethernet links into a single logical link on a switch or router. Packets from the same host device
are sent over the same port, and packets from different devices can be sent over different ports.
The port-channel load-balance command can be used to modify the load distribution criteria used
by EtherChannel. The basic syntax of the port-channel load-balance command is port-channel load-balance {dst-ip | dst-mac | src-dst-ip | src-dst-mac | src-ip | src-mac}. Issuing the port-channel load-balance src-mac command is the same as issuing the no port-channel load-balance command because src-mac is the default load distribution method.
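As a sketch, the load distribution method could be changed and then verified as follows (the hostname is hypothetical):

```
Switch(config)# port-channel load-balance src-dst-ip
Switch(config)# end
Switch# show etherchannel load-balance
```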

EtherChannel can aggregate Ethernet links of any speed as long as all of the links in a bundle are
the same speed. For example, with Fast EtherChannel, you can configure two FastEthernet links
on a switch to connect to two FastEthernet links on a Microsoft Windows Server. The two
individual links are treated as a single logical connection by the switch and the server. This
doubles the available bandwidth between the switch and the server. Traffic is evenly distributed
across both links until one of the physical links fails. If one of the physical links fails, the logical link
remains intact and continues to forward traffic.
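A minimal LACP bundle on the switch side of such a NIC team might be configured as follows, assuming the server connects to hypothetical ports Fa1/0/1 and Fa1/0/2 in VLAN 10:

```
Switch(config)# interface range fastethernet 1/0/1 - 2
Switch(config-if-range)# switchport mode access
Switch(config-if-range)# switchport access vlan 10
Switch(config-if-range)# channel-group 1 mode active
```

channel-group 1 mode active enables LACP unconditionally; mode passive would instead wait for the server to initiate negotiation.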

Reference:

CCDA 200-310 Official Cert Guide, Chapter 3, EtherChannel, p. 88

Cisco: Catalyst 3750 Command Reference: port-channel load-balance

QUESTION NO: 157

On a Cisco PE router, you have implemented a VRF instance with the name of boson. In addition,
you have associated an interface on the PE router with the boson VRF instance and assigned that
interface the IP address 123.45.67.89.

You are currently logged in to a Cisco PE router and want to examine the path that packets take to
reach the CE router. The CE router interface that connects to the PE router has been assigned the
IP address 123.45.67.90.

Which of the following commands should you issue?

A.
traceroute 123.45.67.89

B.
traceroute boson

C.
traceroute vrf 123.45.67.90

D.
traceroute vrf boson 123.45.67.90

Answer: D
Explanation:
Section: Considerations for Expanding an Existing Network Explanation

You should issue the traceroute vrf boson 123.45.67.90 command to examine the path that
packets take to reach the customer edge (CE) router from the provider edge (PE) router in this
scenario. A virtual routing and forwarding (VRF) instance is the fundamental component of many
virtual networking technologies and provides a mechanism for a single network device to maintain
multiple, isolated routing tables. Because a VRF instance on a network device is isolated from other VRF instances on the same device, multiple VRF instances can be configured with the same
IP address information. PE routers are typically configured with multiple VRF instances to isolate
customer traffic and to mitigate IP addressing conflicts without requiring changes in the customer
topology.

CE routers are typically not configured with VRF instances and are not aware of the VRF
instances configured on PE routers. The PE interface associated with the VRF instance to which a
CE router is connected is indistinguishable from a normal interface from the perspective of the CE
router. VRF-aware network commands, such as ping and traceroute, on a device configured
without any VRF instances, such as the CE router in this scenario, do not require any special
parameters to identify the associated VRF instance. For example, when you use the traceroute
command to examine the path that packets take from a CE router to a PE router, it is not
necessary to specify the name of a VRF instance configured on the PE router.

By contrast, a network command on a device configured with multiple VRF instances, such as the
PE router in this scenario, must specify the VRF instance to which the command should be
applied. You can specify a VRF instance name by using the vrf vrf-name parameter with the
traceroute command on a Cisco router. For example, you could issue the traceroute vrf boson
123.45.67.90 command to test connectivity from the PE router to the CE router in this scenario.
Without a specified VRF instance name, the PE router in this example would attempt to reach the
CE router by using the global routing table and not the routing table specific to the VRF named
boson.

When using the vrf keyword, the appropriate VRF instance name must be used in place of the vrf-
name variable; otherwise, the command will produce an error message. For example, you could
not issue the traceroute vrf 123.45.67.90 command to test the connectivity from the PE router to
the CE router in this scenario, because that command is missing the VRF instance name.
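The scenario's PE-side configuration might be sketched as follows (the interface, subnet mask, and device prompts are assumptions, not given in the scenario):

```
PE(config)# ip vrf boson
PE(config-vrf)# exit
PE(config)# interface gigabitethernet 0/1
PE(config-if)# ip vrf forwarding boson
PE(config-if)# ip address 123.45.67.89 255.255.255.252
PE(config-if)# end
PE# traceroute vrf boson 123.45.67.90
```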

You should not issue the traceroute boson command. The traceroute boson command would
attempt to connect to a host named boson, not to a VRF instance named boson. The traceroute
boson command would only work in this scenario if the host name boson had been assigned to the IP address 123.45.67.89 in Domain Name System (DNS) and if the IP
address were reachable through a route in the global routing table. There is nothing in this
scenario to indicate that a host name of boson resolves to the IP address 123.45.67.89.

Reference:

CCDA 200-310 Official Cert Guide, Chapter 4, VRF, p. 154

Cisco: Network Virtualization-Path Isolation Design Guide: Diagnostic Tools (Ping, Traceroute)

QUESTION NO: 158

Which of the following network issues can most likely be mitigated by using a modular
architecture? (Choose three.)

A.
hardware failures

B.
physical link failures

C.
application failures

D.
poor scalability

E.
poor redundancy

Answer: A,B,D
Explanation:
Section: Design Objectives Explanation

Most likely, hardware failures, physical link failures, and poor scalability can be mitigated by using
a modular architecture. The modularity and hierarchy principles are complementary components
of network architecture. The modularity principle is used to implement an amount of isolation
among network components. This ensures that changes to any given component have little to no
effect on the rest of the network. Thus hardware failures and physical link failures, which are
detrimental to network stability and reliability, are less likely to cause system-wide issues.
Modularity facilitates scalability because it allows changes or growth to occur without system-wide
outages.

The hierarchy principle is the structured manner in which both the physical functions and the
logical functions of the network are arranged. A typical hierarchical network consists of three
layers: the core layer, the distribution layer, and the access layer. The modules between these
layers are connected to each other in a fashion that facilitates high availability. However, each
layer is responsible for specific network functions that are independent from the other layers.

The core layer provides fast transport services between buildings and the data center. The
distribution layer provides link aggregation between layers. Because the distribution layer is the
intermediary between the access layer and the campus core layer, the distribution layer is the
ideal place to enforce security policies, provide load balancing, provide Quality of Service (QoS),
and perform tasks that involve packet manipulation, such as routing. The access layer, which
typically comprises Open Systems Interconnection (OSI) Layer 2 switches, serves as a media
termination point for devices, such as servers and workstations. Because access layer devices
provide access to the network, the access layer is the ideal place to perform user authentication
and to institute port security. High availability, broadcast suppression, and rate limiting are also
characteristics of access layer devices.

Application failures and poor redundancy are less likely to be mitigated by using a modular
architecture. Poor redundancy and resiliency are more likely to be mitigated by a full-mesh
topology. However, full-mesh topologies restrict scalability. Application failures can be mitigated by
server redundancy.

Reference:

Cisco: Enterprise Campus 3.0 Architecture: Overview and Framework: Modularity

QUESTION NO: 159

Which of the following are reasons to choose a VPN solution instead of a traditional WAN?
(Choose three.)

A.
The cost is lower.

B.
Network expansion is easy.

C.
Traffic is encrypted.

D.
Traffic is sent over dedicated lines.

E.
VPNs are faster because there is no encryption.

Answer: A,B,C
Explanation:
Section: Enterprise Network Design Explanation

Virtual private networks (VPNs) typically have a lower cost to implement than traditional WANs,
send encrypted traffic, and provide for easy network expansion. VPNs send traffic through a tunnel
over the Internet, which is a public WAN, not over dedicated lines. Therefore, you might choose a
point-to-point WAN that uses dedicated leased lines instead of a VPN solution if you wanted to
prevent traffic from being tunneled through a public network. A VPN securely connects remote
offices or users to a central network by tunneling encrypted traffic through the Internet. By
implementing a VPN solution rather than a point-to-point WAN between branch offices, a company
can benefit from all of the following: lower implementation and operating costs, easier network expansion, and encryption of the traffic that traverses the public network.

There are two general types of VPN: site-to-site and remote access. A site-to-site VPN is used to
create a tunnel between two remote VPN gateways. Devices on the networks connected to the
gateways do not require additional software to use the VPN; instead, all transmissions are
handled by the gateway device, such as an ASA device. Conversely, a remote access VPN is
used to connect individual clients through the Internet to a central network. Remote access VPN
clients must use either VPN client software or an SSL-based VPN to establish a connection to the
VPN gateway.

Reference:

CCDA 200-310 Official Cert Guide, Chapter 6, WAN and Enterprise Edge Overview, p. 218

Cisco: Virtual Private Network (VPN)

QUESTION NO: 160

Which of the following features are provided by IPSec? (Choose three.)

A.
broadcast packet encapsulation

B.
data confidentiality

C.
data integrity

D.
multicast packet encapsulation

E.
data origin authentication

Answer: B,C,E
Explanation:
Section: Enterprise Network Design Explanation

IP Security (IPSec) can provide data confidentiality, data integrity, and data origin authentication.
IPSec is an open standard protocol that uses Encapsulating Security Payload (ESP) to provide
data confidentiality. ESP encrypts an entire IP packet and encapsulates it as the payload of a new
IP packet. Because the entire IP packet is encrypted, the data payload and header information
remain confidential. In addition, IPSec uses Authentication Header (AH) to ensure the integrity of a
packet and to authenticate the origin of a packet. AH does not authenticate the identity of an
IPSec peer; instead, AH verifies only that the source address in the packet has not been modified
during transit. IPSec is commonly used in virtual private networks (VPNs).

Generic Routing Encapsulation (GRE), not IPSec, provides broadcast and multicast packet
encapsulation. GRE is a tunneling protocol originally developed by Cisco that can tunnel traffic from one network to
another without requiring the transport network to support the network protocols in use at the
tunnel source or tunnel destination. For example, a GRE tunnel can be used to connect two
AppleTalk networks through an IP-only network. Because the focus of GRE is to transport many
different protocols, it has very limited security features. By contrast, IPSec has strong data
confidentiality and data integrity features but it can transport only IP traffic. GRE over IPSec
combines the best features of both protocols to securely transport any protocol over an IP
network.
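A hedged sketch of GRE over IPSec on IOS follows, assuming an IPSec profile named VPNPROF has already been defined elsewhere and using example addresses:

```
R1(config)# interface tunnel 0
R1(config-if)# ip address 10.0.0.1 255.255.255.252
R1(config-if)# tunnel source gigabitethernet 0/0
R1(config-if)# tunnel destination 203.0.113.2
R1(config-if)# tunnel protection ipsec profile VPNPROF
```

Any protocol carried over the tunnel interface is first encapsulated in GRE and then protected by the IPSec profile.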

Reference:

Cisco: Configuring Security for VPNs with IPsec: IPsec Functionality Overview

QUESTION NO: 161

Which of the following network virtualization techniques does Cisco recommend for extending
fewer than 32 VNETs across a WAN?

A.
Multi-VRF with 802.1Q tagging

B.
EVN with 802.1Q tagging

C.
Multi-VRF with GRE tunnels

D.
EVN with GRE tunnels

E.
MPLS

Answer: D
Explanation:
Section: Considerations for Expanding an Existing Network Explanation

Cisco recommends Easy Virtual Networking (EVN) with Generic Routing Encapsulation (GRE)
tunnels to extend fewer than 32 virtual networks (VNETs) across a WAN. EVN is a network
virtualization technique that uses virtual routing and forwarding (VRF) instances to segregate
Layer 3 networks. EVN typically uses 802.1Q tagging to extend VNETs across an infrastructure
that is within an organization's administrative control, such as a campus LAN. GRE and Multicast
GRE (mGRE) tunnels can extend VNETs across an infrastructure that is outside an organization's
administrative control, such as a WAN.

EVN supports up to 32 VNETs before operational complexity and management become problematic. Cisco recommends using EVN in small and medium networks; however, implementing a homogeneous EVN topology could require replacing unsupported hardware with EVN-capable devices. Replacing infrastructure is typically disruptive and may require additional modifications to the existing network design.
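For illustration, an EVN VNET and trunk interface could be sketched as follows (the VRF name, tag value, and interface are hypothetical):

```
Router(config)# vrf definition red
Router(config-vrf)# vnet tag 101
Router(config-vrf)# address-family ipv4
Router(config-vrf-af)# exit-address-family
Router(config-vrf)# exit
Router(config)# interface gigabitethernet 0/1
Router(config-if)# vnet trunk
```

The vnet trunk command creates a subinterface per VNET and carries each one across the link with its 802.1Q tag.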

For small and medium networks needing to extend eight or fewer VNETs across a WAN, Cisco
recommends using Multi-VRF, which Cisco also refers to as VRF-Lite. On Cisco platforms, Multi-
VRF network virtualization supports up to eight VNETs before operational complexity and
management become problematic. Like EVN, Multi-VRF uses VRF instances to segregate a Layer
3 network. The VNETs created by Multi-VRF mirror the physical infrastructure upon which they are
built, and most Cisco platforms support Multi-VRF; therefore, the general network design and
overall infrastructure do not require disruptive changes in order to support a Multi-VRF overlay
topology.

For large networks needing to extend more than 32 VNETs across a WAN, Cisco recommends Multiprotocol Label Switching (MPLS). MPLS is typically implemented in an end-to-end fashion at the network edge and requires the edge and core devices to be MPLS-capable. Integrating MPLS
into an existing design and infrastructure can be disruptive, particularly if MPLS-incapable devices
must be replaced with MPLS-capable devices at the network edge or in the core. MPLS relies on
Multiprotocol Border Gateway Protocol (MPBGP) and Label Distribution Protocol (LDP) to manage
the dynamic distribution of VRF information at each Layer 3 device. MPLS is best suited for large
networks and can support thousands of VNETs without operational complexity or management
becoming problematic.
Reference:

CCDA 200-310 Official Cert Guide, Chapter 4, VRF, p. 154

Cisco: Borderless Campus Network Virtualization-Path Isolation Design Fundamentals: Path Isolation

QUESTION NO: 162 DRAG DROP

Select the network layers from the left, and drag them under the appropriate protocol on the right.
Network layers can be used more than once. Not all network layers will be used.

Answer:

Explanation:


Section: Design Objectives Explanation

NetFlow is a Cisco IOS feature that can be used to gather flow-based statistics, such as packet
counts, byte counts, and protocol distribution. A device configured with NetFlow examines packets
for select Open Systems Interconnection (OSI) Layer 3 and Layer 4 attributes that uniquely
identify each traffic flow. The data gathered by NetFlow is typically exported to management
software. You can then analyze the data to facilitate network planning, customer billing, and traffic
engineering. For example, NetFlow can be used to obtain information about the types of
applications generating traffic flows through a router.

A traffic flow can be identified based on the unique combination of the following seven attributes: source IP address, destination IP address, source port number, destination port number, Layer 3 protocol type, Type of Service (ToS) byte, and input logical interface.

Although NetFlow does not use Layer 2 information, such as a source Media Access Control
(MAC) address, to identify a traffic flow, the input interface on a switch will be considered when
identifying a traffic flow.
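As a sketch, traditional NetFlow could be enabled on an interface and exported to a collector (the collector address and port are hypothetical):

```
Router(config)# interface gigabitethernet 0/0
Router(config-if)# ip flow ingress
Router(config-if)# exit
Router(config)# ip flow-export destination 192.0.2.10 9996
Router(config)# ip flow-export version 9
```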

By contrast, Network-Based Application Recognition (NBAR) is a Quality of Service (QoS) feature that classifies application traffic that flows through a router interface. NBAR enables a router to
perform deep packet inspection for all packets that pass through an NBAR-enabled interface. With
deep packet inspection, an NBAR-enabled router can classify traffic based on the content of a
Transmission Control Protocol (TCP) or a User Datagram Protocol (UDP) packet, instead of just
the network header information.

In addition, NBAR provides statistical reporting relative to each recognized application.
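NBAR protocol discovery, for example, can be enabled per interface and its per-application statistics then displayed:

```
Router(config)# interface gigabitethernet 0/1
Router(config-if)# ip nbar protocol-discovery
Router(config-if)# end
Router# show ip nbar protocol-discovery
```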

Reference:

CCDA 200-310 Official Cert Guide, Chapter 6, Classification, p. 233

CCDA 200-310 Official Cert Guide, Chapter 15, NetFlow, pp. 626-628

Cisco: Cisco IOS Switching Services Configuration Guide, Release 12.2: Capturing Traffic Data

QUESTION NO: 163

View the Exhibit.

Which of the following data center designs is represented by the diagram shown above?

A.
loop-free U access design

B.
loop-free inverted U access design

C.
looped triangle access design

D.
looped square access design

E.
Layer 3 access design

Answer: A
Explanation:
Section: Enterprise Network Design Explanation

A loop-free U access design is represented by the diagram shown below:

A loop-free design is a design that contains no Layer 2 loops between the access layer and the
aggregation layer. The aggregation layer is the data center equivalent to the distribution layer in
campus designs. Because there are no Layer 2 loops in a loop-free design, Spanning Tree
Protocol (STP) blocking is not in effect for any of the uplinks between access layer and
aggregation layer switches. In the loop-free U access design, the Layer 2 topology resembles the
letter U, as indicated by the dotted, black lines in the diagram above. Each access layer switch in
this design provides a single Layer 2 uplink to the aggregation layer and shares a Layer 2 link to
an adjacent access layer switch. The shared link is typically an 802.1Q trunk link and enables
each access layer switch to share virtual LAN (VLAN) information. Additionally, the trunk link
provides a redundant path for access layer traffic if an uplink to the aggregation layer fails. The link
between the aggregation layer switches in this design is a Layer 3 link. Layer 3 links are not
considered part of the Layer 2 topology and should be ignored when evaluating a design for Layer
2 loops.

The topology diagram in this scenario does not represent the Layer 3 access design. In the Layer
3 access design, the uplinks between the access layer and aggregation layer switches are Layer 3
connections. Because the Layer 2 topology in this design is effectively reduced to the trunk link
between the access layer switches, Layer 2 loops are eliminated and all uplinks are in a
forwarding state. STP is no longer necessary in this design; however, Cisco recommends
configuring STP on ports that connect to access layer devices to prevent user-side loops from
entering the network. The Layer 3 uplinks in this design enable the access layer switches to use
routing information to implement load balancing across all available uplinks. It is important to
consider the performance limitations and capabilities of the access layer and aggregation layer
switches when implementing a routing solution in the Layer 3 access design. If performance is an
issue, static routes and stub routing can reduce processing load for the access layer and
aggregation layer switches while route summarization can reduce processing load for core
switches. The Layer 3 access design is represented by the diagram below:


The topology diagram in this scenario does not represent the loop-free inverted U access design.
Like the loop-free U access design, the loop-free inverted U access design contains no Layer 2
loops between the access layer and the aggregation layer. However, unlike the loop-free U access
design, the loop-free inverted U access design does not contain Layer 2 trunk links between
access layer switches. Instead, the aggregation layer switches are interconnected by Layer 2 trunk
links. These Layer 2 trunk links enable access layer VLANs to span the aggregation layer and also
to serve as redundant paths for access layer traffic in the event of an access layer uplink failure.
However, because the access layer switches are not interconnected by Layer 2 trunk links, single-
attached devices at the access layer can be cut off from the network if their access layer switch
suffers an uplink failure. The Layer 2 topology of a loop-free inverted U access design resembles
an inverted U, as indicated by the dotted, black lines in the diagram below:


The topology diagram in this scenario does not represent the looped triangle access design, nor
does it represent the looped square access design. The looped triangle access design and the
looped square access design are Layer 2, looped access designs. Both of these designs use
Layer 2 trunk links between aggregation layer switches and rely on STP to resolve physical loops
in the network. In the looped triangle access design, each access layer switch has two uplinks to
the aggregation layer. These uplinks form a Layer 2 looped triangle, as shown by the black, dotted
lines in the diagram below:


Because the uplinks in a looped triangle access design form a Layer 2 loop, one of the uplinks
must remain in a blocking state until the active uplink fails. The blocking uplink provides a
redundant path for access layer traffic in the event of a failure of the active uplink. By contrast,
each access layer switch in the looped square access design has a single uplink to the
aggregation layer. Additionally, access layer switches also share a Layer 2 link between them that
remains in a blocking state until an uplink to the aggregation layer fails. In the event of an uplink
failure, the shared link provides a redundant path for access layer traffic to the aggregation layer.
The Layer 2 topology of a looped square access design resembles a square, as shown by the
black, dotted lines in the diagram below:


Reference:

CCDA 200-310 Official Cert Guide, Chapter 3, Access Layer Best Practices, pp. 94-97

Cisco: Data Center Multi-Tier Model Design: Data Center Access Layer

QUESTION NO: 164

Once an STP-enabled network has converged, which of the following are valid port states for
participating switch ports? (Choose two.)

A.
initializing

B.
blocking

C.
listening

D.
learning

E.
forwarding

F.
disabled

Answer: B,E
Explanation:
Section: Enterprise Network Design Explanation

Forwarding and blocking are valid port states for participating switch ports once a Spanning Tree
Protocol (STP)-enabled network has converged. All ports in an STP-enabled network will transition
through a series of states. There are five possible port states on an STP-enabled switch: blocking,
listening, learning, forwarding, and disabled. The complete cycle of STP port transitions takes
approximately 30 to 50 seconds. The time necessary to complete the STP process is known as
convergence.

After a port is initialized, it enters the blocking state. A port in the blocking state discards data
frames and does not populate the Media Access Control (MAC) address table. However, a port in
the blocking state can receive bridge protocol data units (BPDUs) and send them to the system
module. If a topology change occurs or if the max age timer expires without the switch receiving a
BPDU, the switch will enter the listening state.

In the listening state, a port becomes able to send and receive BPDUs. However, a port in the
listening state discards data frames and does not populate the MAC address table. It is during the
listening state that the election of the root switch, as well as the selection of root ports and
designated ports, occurs. After the forward delay timer expires, ports will transition from the
listening state to the learning state unless the port receives STP information indicating that it
should transition to the blocking state.

After a port transitions to the learning state, it begins to populate the MAC address table with the
MAC addresses of other devices on the network based on the frames that it receives. In addition,
a port in the learning state will continue to send and receive BPDUs; however, it will continue to
discard data frames.

After the forward delay timer expires, root ports and designated ports will transition to the
forwarding state. All other ports will transition back to the blocking state.

Ports in the forwarding state send and receive BPDUs, populate the MAC address table, and
forward data frames. A topology change could cause a port to transition from the forwarding state
to the blocking state.

Ports that are not connected or that are administratively shut down are in the disabled state.
Administrators, not STP, place ports into the disabled state. Therefore, ports in the disabled state
do not participate in STP, do not populate the MAC address table, and do not forward frames. STP
will never transition a port out of the disabled state.
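The current state of each participating port can be checked with show commands such as the following (the VLAN and interface numbers are examples):

```
Switch# show spanning-tree vlan 10
Switch# show spanning-tree interface gigabitethernet 1/0/1
```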

Reference:

CCDA 200-310 Official Cert Guide, Chapter 3, STP Design Considerations, pp. 101-103

Cisco: Configuring Spanning Tree: STP Port States

Cisco: Ethernet Card Software Feature and Configuration Guide for the Cisco ONS 15454 SDH,
ONS 15454, and ONS 15327, Release 6.2: Port Roles and the Active Topology

QUESTION NO: 165

Which of the following technologies can you use to configure up to nine switches to act as a single
switch?

A.
EtherChannel

B.
HSRP

C.
VTP

D.
StackWise

Answer: D
Explanation:
Section: Enterprise Network Design Explanation

You can use StackWise to configure up to nine switches to act as a single switch. StackWise is a
Cisco-proprietary Virtual Switching System (VSS) technology that is used to provide Open
Systems Interconnection (OSI) Layer 2 or Layer 3 connectivity between switches so that the stack
of switches acts as a single device. When a StackWise configuration is used, the failure of a single
switch will not result in an outage. Instead, the other switches in the stack will compensate for the
failed switch. This method of switch stacking enables the network to take advantage of many
access ports on multiple interconnected physical switches, which reduces the administrative
burden of managing multiple switches. In addition, StackWise enables you to add or remove
physical switches without significant downtime, thereby preserving network availability and
performance.

StackWise switches are connected sequentially by stack cables: the first switch is connected to
the second, the second switch is connected to the third, and so on until the last switch is
connected to the first. If a stack cable is broken, the bandwidth of the stack will be reduced by 50
percent until the cable is fixed.
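Stack membership and stack-cable status can be verified, and a member's priority adjusted, with commands such as these (the member number and priority value are examples):

```
Switch# show switch
Switch# show switch stack-ports
Switch(config)# switch 1 priority 15
```

A higher priority value makes that member more likely to be elected stack master.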

EtherChannel is not used to configure up to nine switches to act as a single switch. You can use EtherChannel to bundle up to eight redundant links to form a single robust link. EtherChannel can
be used as an alternative to purchasing new hardware for the purpose of increasing bandwidth
between two devices. For example, you could combine eight FastEthernet ports for 800 Mbps of
bandwidth, eight GigabitEthernet ports for 8 Gbps of bandwidth, or eight TenGigabitEthernet ports
for 80 Gbps of bandwidth. An EtherChannel link can operate at Layer 2 or Layer 3 of the OSI
model. Layer 2 EtherChannel is used to connect switching ports, whereas Layer 3 EtherChannel is
used to connect routing ports. When configuring Layer 2 EtherChannel, you must ensure that each
link is configured to be a part of the same virtual LAN (VLAN) or as a trunk link that comprises the
same range of VLANs. In addition, each link must be configured to use the same port-filtering
protocol. When configuring Layer 3 EtherChannel, you must assign the same Layer 3 address to
the logical interface shared by the links. When multilayer switches are used, the logical interface
will often be a switched virtual interface (SVI). In addition, links that comprise an EtherChannel are
required to have matching duplex and speed settings. Improperly configured EtherChannel links
are disabled.
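A Layer 3 EtherChannel between multilayer switches might be sketched as follows (the interface numbers and addressing are hypothetical):

```
Switch(config)# interface port-channel 1
Switch(config-if)# no switchport
Switch(config-if)# ip address 10.1.1.1 255.255.255.0
Switch(config-if)# exit
Switch(config)# interface range gigabitethernet 1/0/1 - 2
Switch(config-if-range)# no switchport
Switch(config-if-range)# channel-group 1 mode active
```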

Hot Standby Router Protocol (HSRP) is not used to configure up to nine switches to act as a
single switch. HSRP is a Cisco-proprietary router redundancy protocol. HSRP allows several
Layer 3 devices to function as a single gateway to clients; if one router fails, another router is
available to forward traffic sent from the clients to the gateway IP address. Devices in an HSRP
group are assigned one virtual IP address. Clients on an HSRP-enabled network send data to the
virtual IP address; the data is forwarded to the active device in the group. If the active device fails,
a standby device provides failover. Any additional devices in the HSRP group that are not
currently designated as an active or standby router remain in listening mode until they are needed
because of a failure of the active or standby device.
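A basic two-router HSRP group could be sketched as follows on the active router (the addresses, group number, and priority are examples):

```
Router(config)# interface gigabitethernet 0/0
Router(config-if)# ip address 10.1.1.2 255.255.255.0
Router(config-if)# standby 1 ip 10.1.1.1
Router(config-if)# standby 1 priority 110
Router(config-if)# standby 1 preempt
```

The standby router would use the same standby 1 ip 10.1.1.1 virtual address with a lower priority.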

VLAN Trunking Protocol (VTP) is not used to configure up to nine switches to act as a single
switch. VTP is a protocol that is used to manage VLAN changes and to propagate those changes
over trunk ports. VTP reduces the administrative overhead of maintaining VLANs. When VTP is
used, changes regarding VLAN information can be centrally configured and then propagated by
VTP over trunk ports instead of manually configured on each device on the network.
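A minimal VTP server configuration might look like this (the domain name is hypothetical):

```
Switch(config)# vtp domain BOSON
Switch(config)# vtp version 2
Switch(config)# vtp mode server
Switch(config)# end
Switch# show vtp status
```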
Reference:

CCDA 200-310 Official Cert Guide, Chapter 4, Virtualization Technologies, pp. 153-157

Cisco: Cisco StackWise and StackWise Plus Technology

QUESTION NO: 166 DRAG DROP

Select each feature from the left, and drag it to its corresponding description on the right.

Answer:

Explanation:


Section: Enterprise Network Design

Redistribution is an advanced routing feature that increases the scalability of a network design by
facilitating the coexistence of multiple routing protocols. Redistribution is typically performed by
routers between the enterprise campus core and the enterprise edge. For example, to join a
campus network running Enhanced Interior Gateway Routing Protocol (EIGRP) with a branch
network running Open Shortest Path First (OSPF), Cisco recommends that you configure two-way
redistribution with route map filters at each location. Advanced routing features, such as
redistribution, route filtering, and summarization, can greatly impact the functionality and scalability
of a network and, thus, should be carefully considered during the network design process.
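
A rough sketch of such two-way redistribution with route-map filters follows; the process numbers, seed metric values, and route-map names are illustrative assumptions:

```
! Hypothetical example: mutual EIGRP/OSPF redistribution with filters
router eigrp 100
 redistribute ospf 1 metric 10000 100 255 1 1500 route-map OSPF-TO-EIGRP
!
router ospf 1
 redistribute eigrp 100 subnets route-map EIGRP-TO-OSPF
```

The route maps referenced here would contain match clauses that permit only the intended prefixes, preventing routes from looping back into their protocol of origin.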

Route filtering is an advanced routing feature that can be used to block route advertisements that
could create routing loops. Routing loops occur when a topology change or a delayed routing
update results in two routers pointing to each other as the next hop to a destination. For example,
Router1 has a path to Router2 that begins with Router3, and Router3 has a path to Router2 that
begins with Router1. Since both Router1 and Router3 send data to each other that is intended for
Router2, they will continuously bounce the data back and forth between them, thus forming a loop.
In order to prevent this loop, a route filter could be used to stop the path from Router1 to Router2
from being advertised to Router3. Consequently, when Router3 receives data from Router1 that is
intended for Router2, the only route available is its own path directly to Router2. Because route
filtering is often used in conjunction with redistribution, route filtering is typically performed by
routers between the enterprise campus core and the enterprise edge.
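
One common way to implement such a filter is a distribute list; the network, ACL number, and interface below are hypothetical:

```
! Hypothetical example: suppress advertisement of 10.2.0.0/16 out one interface
access-list 10 deny 10.2.0.0 0.0.255.255
access-list 10 permit any
!
router eigrp 100
 distribute-list 10 out GigabitEthernet0/1
```

Routes matching the deny entry are removed from updates sent out the specified interface, while all other routes are advertised normally.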

Route summarization, which is also known as supernetting, is an advanced routing feature that
enables a router to advertise multiple contiguous subnets as a single, larger subnet.
Summarization combines several smaller subnets into one larger subnet. This enables routers on
the network to maintain a single summarized route in their routing tables. Therefore, fewer routes
are advertised by the routers, which reduces the amount of bandwidth required for routing update
traffic. Route summarization is most efficient when the subnets can be summarized within a single
subnet boundary and are contiguous, meaning that all of the subnets are consecutive.
Summarization is typically performed between the enterprise campus core and the enterprise
edge.
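
The mechanics of summarization can be sketched with Python's standard ipaddress module; the four subnets below are arbitrary examples chosen to fall on a single /22 boundary:

```python
import ipaddress

# Four contiguous /24 subnets: 172.16.0.0/24 through 172.16.3.0/24
subnets = [ipaddress.ip_network(f"172.16.{i}.0/24") for i in range(4)]

# collapse_addresses merges contiguous networks into the fewest summary routes
summary = list(ipaddress.collapse_addresses(subnets))
print(summary)  # [IPv4Network('172.16.0.0/22')]
```

Because the four /24s are consecutive and aligned, a router can advertise the single /22 in place of four individual routes, shrinking routing tables and update traffic downstream.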

Redundancy is the repetition built into a network design to protect the network from unnecessary
vulnerabilities or downtime that might be caused by having a single point of failure. Simply put,
redundancy is having a backup plan in place that can be used in the event that the primary plan
becomes unavailable.

For example, multiple physical links between two switches could be used to promote redundancy.

Reference:

CCDA 200-310 Official Cert Guide, Chapter 2, Route Redundancy, pp. 63-64

CCDA 200-310 Official Cert Guide, Chapter 11, Route Summarization, pp. 455-458

CCDA 200-310 Official Cert Guide, Chapter 11, Route Redistribution, pp. 458-461

CCDA 200-310 Official Cert Guide, Chapter 11, Route Filtering, pp. 461-462

Cisco: Redistributing Routing Protocols: Introduction

Cisco: OSPF Design Guide: OSPF and Route Summarization

Cisco: Filtering Routing Updates on Distance Vector IP Routing Protocols: Introduction

Cisco: Cisco Unified Communications System IP Telephony for Enterprise and Midmarket 7.0(1):
Redundancy

QUESTION NO: 167

Which of the following best describes a DMZ?

A.
decentralized computer resources that can be accessed over the Internet

B.
a network zone between the Internet and a private or trusted network

C.
a portion of a private or trusted network that can be accessed by a business partner

D.
websites available to only users inside a private network

Answer: B
Explanation:
Section: Considerations for Expanding an Existing Network

A demilitarized zone (DMZ) is best described as a network zone between the Internet and a
private or trusted network. A DMZ is typically used with an access control method to permit
external users to access specific externally facing servers, such as web servers and proxy
servers, without providing access to the rest of the internal network. This helps limit the attack
surface of a network. DMZs are typically bordered by two firewalls: one that allows information to
flow between the DMZ and the Internet, and one that allows information to flow between the DMZ
and the private, or trusted, network.

A portion of a private or trusted network that can be accessed by a business partner best
describes an extranet, not a DMZ. An extranet is a portion of a company's internal network that is
accessible to specific people outside of the company, such as business partners, suppliers, or
customers. By creating an extranet, a company can provide a location for sharing information with
external users. For example, a consulting company could create an extranet for external
customers to view and comment on the consulting company's progress on various projects. In
many extranet implementations, the external customer network shares a bilateral connection with
the company's internal network. This bilateral connection not only enables the external customer
to access portions of the company's internal network, but it also enables portions of the company's
internal network to access the portions of the external customer's network.

Decentralized computer resources that can be accessed over the Internet describes an external
cloud, not a DMZ. An external cloud allows for computer processes that are typically hosted
internally to be moved to an external provider, which can reduce the burden on system and
network resources. In cloud computing, there are two accepted types of cloud infrastructure:
external and internal. External clouds are managed by a service provider and are further broken
down into two categories: public and private. With public clouds, the service provider controls the
cloud and its infrastructure, whereas with private clouds, the service provider controls only the
infrastructure. Internal clouds are similar to private clouds, except that the cloud is owned and
managed by the organization that uses it and not by a third-party service provider.

Websites available to only users inside a private network best describe an intranet, not a DMZ. An
intranet can be created to provide internal users with their own website. An intranet provides a
location for sharing information among members of the company. Unlike an extranet, which is a
portion of the company's network that is accessible by people outside the company, an intranet is
typically available only to internal users.

Reference:

CCDA 200-310 Official Cert Guide, Chapter 6, DMZ Connectivity, pp. 236-238


QUESTION NO: 168

Which of the following is the maximum number of chassis that may be connected to form a VSS?

A.
two

B.
three

C.
four

D.
nine

Answer: A
Explanation:
Section: Considerations for Expanding an Existing Network

A maximum of two chassis may be connected to form a Virtual Switching System (VSS). VSS is a
Cisco physical device virtualization feature that can enable a pair of chassis-based switches, such
as the Cisco Catalyst 6500, to function as a single logical device. The switch pair is connected by
an EtherChannel bundle known as a Virtual Switch Link (VSL). Using VSS enables an
administrator to increase link utilization without adding physical connections or hardware. There
are two identical supervisors in a VSS, one on each physical device, and one control plane. One
of the supervisors is active, and the other is designated as hot-standby; the active supervisor
manages the control plane. If the active supervisor in a VSS goes down, the hot-standby will
automatically take over as the new active supervisor.

With VSS, access layer devices can connect to the switch pair by using several active, physical
uplinks that are bundled together into a single logical link using Multi-chassis EtherChannel
(MEC). Because all of the links in the bundle to the distribution switch pair are active, each access
layer device is reduced to having a single logical link to the virtual distribution layer switch;
therefore, Spanning Tree Protocol (STP) is no longer required to prevent loops. In addition, since
there is only a single logical link to the virtual distribution layer switch, the access layer device can
load balance traffic across all of its active links and the device does not need to rely on a First Hop
Redundancy Protocol (FHRP) for convergence if a link in the MEC bundle fails.

You cannot connect nine physical chassis to form a VSS; however, you can use StackWise to link
up to nine individual switches to function as a single switch. StackWise is a Cisco-proprietary
technology that is used to provide Open Systems Interconnection (OSI) Layer 2 or Layer 3
connectivity between switches so that a stack of workgroup switches acts as a single device.
StackWise switches are connected sequentially by stack cables: the first switch is connected to
the second, the second switch is connected to the third, and so on until the last switch is
connected to the first. If a stack cable is broken, the bandwidth of the stack will be reduced by 50
percent until the cable is fixed.

Reference:

CCDA 200-310 Official Cert Guide, Chapter 4, Virtualization Technologies, pp. 153-157

Cisco: Campus 3.0 Virtual Switching System Design Guide: VSS Architecture and Operation

QUESTION NO: 169

You are routing traffic between company departments that reside in the same physical location.

Which of the following are you least likely to use?

A.
BGP

B.
EIGRP

C.
OSPF

D.
static routes

Answer: A
Explanation:
Section: Addressing and Routing Protocols in an Existing Network

Of the available choices, you are least likely to use Border Gateway Protocol (BGP) if you are
routing traffic between company departments that reside in the same physical location. BGP is
more likely to be used to route traffic between a network in one autonomous system (AS) and a
network in another AS. Unlike some other routing protocols, BGP does not use a neighbor discovery process.
Therefore, neighbor relationships between a BGP router and a router in either the local AS or a

remote AS must be manually configured.

Most likely, you would use Open Shortest Path First (OSPF), Enhanced Interior Gateway Routing
Protocol (EIGRP), or static routes to route traffic between company departments that reside in the
same physical location. Both OSPF and EIGRP support neighbor discovery. However, the
neighbors must reside in the same EIGRP AS or OSPF area for those processes to work without
extra configuration, such as manually configuring a remote neighbor or a virtual link.

The use of static routes in this scenario would depend on the level of complexity and the desired
level of scalability in the network topology. If you were simply connecting two branches in different
virtual LANs (VLANs) and never planning to expand, static routes would keep router CPU
overhead and configuration complexity to a minimum.

Reference:

CCDA 200-310 Official Cert Guide, Chapter 11, BGP Neighbors, pp. 444-445

Cisco: Cisco IOS IP Routing: BGP Command Reference: neighbor remote-as

QUESTION NO: 170

What AD is assigned to static routes by default?

A.
0

B.
1

C.
20

D.
100

E.
200

Answer: B
Explanation:
Section: Addressing and Routing Protocols in an Existing Network

Static routes have an administrative distance (AD) of 1. Like directly connected routes, static
routes are more trusted than routes from any routing protocol. Static routes are optimal for routing
networks that do not change often. AD values are used to determine which route source should be
preferred when multiple routes to a destination network exist. A route with a
lower AD is preferred over a route with a higher AD. The following list contains the most
commonly used ADs:

Directly connected routes have an AD of 0. Therefore, directly connected routes are trusted over
routes from any other source.

Routes that are learned by Interior Gateway Routing Protocol (IGRP) have an AD of 100 by
default. Routes advertised by an interior routing protocol, such as IGRP, are typically intradomain
routes.

Internal Border Gateway Protocol (iBGP) routes are assigned an AD of 200 by default. External
BGP (eBGP) routes are assigned an AD of 20 by default. Routes to networks in other autonomous
systems (ASes) are called interdomain routes. Interdomain routes typically exist only on routers
that border an AS. Therefore, if multiple routes to an external network exist in the routing table, the
router should use the route advertised by an interdomain routing protocol, such as eBGP, over the
route advertised by an intradomain routing protocol, such as Open Shortest Path First (OSPF).
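
The selection logic reduces to choosing the candidate with the lowest AD. A minimal sketch in Python, using the default AD values listed above (the function and dictionary names are our own):

```python
# Default administrative distances for common route sources (abridged)
AD = {"connected": 0, "static": 1, "ebgp": 20, "igrp": 100, "ibgp": 200}

def best_source(candidates):
    """Return the route source with the lowest (most trusted) AD."""
    return min(candidates, key=AD.get)

print(best_source(["ibgp", "static", "igrp"]))  # static
print(best_source(["ebgp", "ibgp"]))            # ebgp
```

Given competing routes to the same prefix from iBGP, a static entry, and IGRP, the router installs the static route, since its AD of 1 is lower than the others.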

Reference:

CCDA 200-310 Official Cert Guide, Chapter 10, Administrative Distance, pp. 386-388

Cisco: What Is Administrative Distance?


QUESTION NO: 171

You want to install antivirus software on a host that is configured with multiple VMs.

Which of the following statements is true?

A.
Antivirus software must be installed only on the host and cannot be installed on the VMs.

B.
Antivirus software can be installed on only one of the VMs and will protect the host and other VMs.

C.
Antivirus software installed on the host will prevent damage to any VM.

D.
Antivirus software should be installed on the host and on each VM in order to fully protect the
system.

Answer: D
Explanation:
Section: Considerations for Expanding an Existing Network

Antivirus software should be installed on the host and on each virtual machine (VM) in order to
fully protect the system. A VM is an isolated environment running a separate operating system
(OS) while sharing hardware resources with a host machine's OS. For example, you can configure
a Windows 7 VM that can run within Windows 8; both OSs can run at the same time if
virtualization software, such as Microsoft Hyper-V, is used. Depending on a computer's hardware
capabilities, multiple VMs can be installed on a single computer, which can help provide more
efficient utilization of hardware resources. For example, VMWare ESXi Server provides a
hypervisor that runs on bare metal, meaning without a host OS, and that can efficiently manage
multiple VMs on a single server.

Although a VM shares the hardware resources of the host computer, the OSs do not share
software resources. Therefore, software installed on the host is not accessible from within the VM.
For example, Microsoft Office might be installed on the host computer, but in order to access
Microsoft Office from within a VM, you must also install Microsoft Office on the VM. Separate
instances of software on the host computer and on each VM can help protect the host computer
from potentially harmful changes made within a VM. For example, if a VM user accidentally
deletes a system file or installs malicious software, the host computer will not be affected.
Therefore, the user can safely shut down the affected VM without compromising the host.

Installing virus protection on the host computer will not automatically protect any VMs running on
that host computer. In addition, installing virus protection on any one of the VMs will not protect the
host or any other VM on the host. You must manually manage the security of the host and of each
VM that is installed on a host computer. For example, installing patches and security software on
the host computer will not apply those same patches and software to the VMs. This applies to
drivers as well; if the network adapter driver is updated on one VM, the host computer and the
other VMs are not similarly updated.

Reference:

CCDA 200-310 Official Cert Guide, Chapter 4, Server Virtualization, p. 155

QUESTION NO: 172

Which of the following are characteristics of the top-down design approach? (Choose two.)

A.
It considers projected growth as a design factor.

B.
It is less time-consuming than the bottom-up approach.

C.
It can lead to inefficient designs.

D.
It focuses on devices and technologies.

E.
It provides a "big picture" overview.

Answer: A,E
Explanation:
Section: Design Methodologies

The top-down approach to network design considers projected growth as a design factor and
provides a "big picture" overview. The top-down design approach takes its name from the
methodology of starting with the higher layers of the Open Systems Interconnection (OSI) model,
such as the Application, Presentation, and Session layers, and working downward toward the
lower layers. The top-down design approach is more time-consuming than the bottom-up design
approach because the top-down approach requires a thorough analysis of the organization's
requirements. Once the designer has obtained a complete overview of the existing network and
the organization's needs, in terms of applications and services, the designer can provide a design
that meets the organization's current requirements and that can adapt to the organization's
projected future needs. Because the resulting design includes room for future growth, it is typically
very efficient and mitigates the need for costly redesigns.

By contrast, the bottom-up approach can be much less time-consuming than the top-down design
approach. The bottom-up design approach takes its name from the methodology of starting with
the lower layers of the OSI model, such as the Physical, Data Link, Network, and Transport layers,
and working upward toward the higher layers. The bottom-up approach relies on previous
experience rather than on a thorough analysis of organizational requirements or projected growth.
In addition, the bottom-up approach focuses on the devices and technologies that should be
implemented in a design, instead of focusing on the applications and services that will actually use
the network. Because the bottom-up approach does not use a detailed analysis of an
organization's requirements, the bottom-up design approach can often lead to costly network
redesigns. Cisco does not recommend the bottom-up design approach, because the design does
not provide a "big picture" overview of the current network or its future requirements.

Reference:

CCDA 200-310 Official Cert Guide, Chapter 1, Top-Down Approach, pp. 24-25

Cisco: Using the Top-Down Approach to Network Design: 4. Top-Down and Bottom-Up Approach
Comparison (Flash)

QUESTION NO: 173

Which of the following statements regarding Cisco GET VPN are true? (Choose two.)

A.
All encrypted traffic must pass through a central hub router.

B.
A packet's original source and destination are not protected during transmission.

C.
Tunnel setup delay introduces jitter and delay that make GET unsuitable for voice traffic.

D.
Key management is not scalable, because there is no centralized key server.

E.
Dynamic routing protocols that rely on multicast traffic can be used between peers.

Answer: B,E
Explanation:
Section: Enterprise Network Design

In a Cisco Group Encrypted Transport (GET) virtual private network (VPN), a packet's original
source and destination are not protected during transmission and dynamic routing protocols that
rely on multicast traffic can be used between peers. GET VPN is a tunnel-less technology that
provides end-to-end security for both unicast and multicast traffic. GET VPN also provides support
for advanced Quality of Service (QoS) features, such as low latency connections and direct
connections between sites. In a GET VPN, trusted group member routers receive security policy
and authentication keys from a central key server. These group member routers also remove the
need for traditional IP Security (IPSec) overlay routing tunnels by applying the encryption to the
packet, which preserves the original packet structure, including the source and destination IP
addresses that are placed in the outer IP header. Removing the dependency on tunnels to protect
traffic and utilizing existing routing infrastructure allows GET VPN to be highly scalable as
compared to native IPSec VPNs.

Native IPSec VPNs establish a secure tunnel between two sites that are separated by an
untrusted network. This tunneling is also known as overlay routing. IPSec is a security framework
that can guarantee the confidentiality and integrity of data as it passes through an untrusted
network. IPSec uses Encapsulating Security Protocol (ESP) to provide data confidentiality. ESP
encrypts an entire IP packet and encapsulates it as the payload of a new IP packet. Because the
entire IP packet is encrypted, the data payload and header information remain confidential. IPSec
VPNs are not very scalable, because site-to-site peering is required. With site-to-site peering,
each virtual circuit must be provisioned and, if a full mesh of circuits is not created, redundancy is
sacrificed. In contrast to GET VPNs, IPSec VPN devices authenticate themselves by using a
preshared key or digital certificate, not by using a centralized key management server.

Reference:

CCDA 200-310 Official Cert Guide, Chapter 7, GETVPN, pp. 258-259

Cisco: Cisco Group Encrypted Transport VPN

QUESTION NO: 174

In which of the following situations would iBGP be the most appropriate routing mechanism?
(Choose two.)

A.
when the router has a single link to a router within the same AS

B.
when the router has redundant links to a router within the same AS

C.
when the router has a single link to a router within a different AS

D.
when the router has redundant links to a router within a different AS

Answer: A,B
Explanation:
Section: Addressing and Routing Protocols in an Existing Network

Internal Border Gateway Protocol (iBGP) would be the most appropriate routing protocol for a
router that has either a single link or redundant links to a router within the same autonomous
system (AS). An AS is defined as the collection of all areas that are managed by a single
organization. BGP is typically used to exchange routing information between ASes, between a
company and an Internet service provider (ISP), or between ISPs. BGP routers within the same
AS communicate by using internal iBGP, and BGP routers in different ASes communicate by using
external BGP (eBGP).

Using eBGP would be appropriate for a router that has redundant links to a router within a different
AS. For example, redundant links between a company and an ISP, or between ISPs, would most
likely use eBGP.
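
The distinction is visible in the neighbor configuration itself; a sketch with arbitrary AS numbers and addresses follows:

```
! Hypothetical example: router in AS 65001 with one iBGP and one eBGP peer
router bgp 65001
 neighbor 10.0.0.2 remote-as 65001    ! same AS number -> iBGP session
 neighbor 192.0.2.1 remote-as 65002   ! different AS number -> eBGP session
```

Whether a session is iBGP or eBGP is determined solely by comparing the AS number in the neighbor remote-as statement with the local AS number.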

Static routing would be the most appropriate routing mechanism for a router that has a single link
to a router within a different AS. Because an interdomain routing protocol, such as BGP, can be
complicated to configure and uses a large portion of a router's resources, static routing is
recommended if dynamic routing information is not exchanged between routers that reside in
different ASes. For example, if you connect a router to the Internet through a single ISP, it is not
necessary for the router to run BGP, because the router will use this single connection to the
Internet for all traffic that is not destined to the internal network.
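
In that single-homed case, the entire routing configuration toward the ISP can collapse to one line; the next-hop address below is a hypothetical example:

```
! Hypothetical example: default route toward a single ISP-facing next hop
ip route 0.0.0.0 0.0.0.0 203.0.113.1
```

All traffic that does not match a more specific internal route is sent to the ISP, with none of the CPU and memory cost of running BGP.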

Reference:

CCDA 200-310 Official Cert Guide, Chapter 10, Static Versus Dynamic Route Assignment, pp.
380-381
