CCNA-v3-Sem3

Chapter 1
Introduction to Classless Routing
Overview
A network administrator must anticipate and manage the physical growth of a network,
perhaps by buying or leasing another floor of the building to house new networking
equipment such as racks, patch panels, switches, and routers. The network designer must
choose an addressing scheme that allows for growth. Variable-Length Subnet Masking
(VLSM) is a technique that allows for the creation of efficient, scalable addressing schemes.
With the phenomenal growth of the Internet and TCP/IP, virtually every enterprise must
now implement an IP addressing scheme. Many organizations select TCP/IP as the only
routed protocol to run on their network. Unfortunately, the architects of TCP/IP could not
have predicted that their protocol would eventually sustain a global network of information,
commerce, and entertainment.
Twenty years ago, IP version 4 (IPv4) offered an addressing strategy that, although
scalable for a time, resulted in an inefficient allocation of addresses. IP version 6 (IPv6), with
virtually unlimited address space, is slowly being implemented in select networks and may
replace IPv4 as the dominant protocol of the Internet. Over the past two decades, engineers
have successfully modified IPv4 so that it can survive the exponential growth of the Internet.
VLSM is one of the modifications that has helped to bridge the gap between IPv4 and IPv6.
Networks must be scalable in order to meet the changing needs of users. When a
network is scalable it is able to grow in a logical, efficient, and cost-effective way. The
routing protocol used in a network does much to determine the scalability of the network.
Therefore, it is important that the routing protocol be chosen wisely. Routing Information
Protocol (RIP) is still considered suitable for small networks, but is not scalable to large
networks because of inherent limitations. To overcome these limitations yet maintain the
simplicity of RIP version 1 (RIP v1), RIP version 2 (RIP v2) was developed.
Students completing this module should be able to:

Define VLSM and briefly describe the reasons for its use

Divide a major network into subnets of different sizes using VLSM

Define route aggregation and summarization as they relate to VLSM

Configure a router using VLSM

Identify the key features of RIP v1 and RIP v2

Identify the important differences between RIP v1 and RIP v2

Configure RIP v2
Cisco Academy – CCNA 3.0 Semester 3

Verify and troubleshoot RIP v2 operation

Configure default routes using the ip route and ip default-network commands
1.1 VLSM
1.1.1 What is VLSM and why is it used?
As IP subnets have grown, administrators have looked for ways to use their address
space more efficiently. One technique is called Variable-Length Subnet Masks (VLSM). With
VLSM, a network administrator can use a long mask on networks with few hosts, and a short
mask on subnets with many hosts.
In order to use VLSM, a network administrator must use a routing protocol that supports
it. Cisco routers support VLSM with Open Shortest Path First (OSPF), Integrated
Intermediate System to Intermediate System (Integrated IS-IS), Enhanced Interior Gateway
Routing Protocol (EIGRP), RIP v2, and static routing.
VLSM allows an organization to use more than one subnet mask within the same
network address space. Implementing VLSM is often referred to as "subnetting a subnet", and
can be used to maximize addressing efficiency.
Classful routing protocols require that a single network use the same subnet mask.
Therefore, network 192.168.187.0 must use just one subnet mask such as 255.255.255.0.
VLSM is simply a feature that allows a single autonomous system to have networks
with different subnet masks. If a routing protocol allows VLSM, use a 30-bit subnet mask on
network connections, 255.255.255.252, a 24-bit mask for user networks, 255.255.255.0, or
even a 22-bit mask, 255.255.252.0, for networks with up to 1000 users.
1.1.2 A waste of space
In the past, it has been recommended that the first and last subnet not be used. Use of the
first subnet, known as subnet zero, for host addressing was discouraged because of the
confusion that can occur when a network and a subnet have the same addresses. The same
was true with the use of the last subnet, known as the all-ones subnet. It has always been true
that these subnets could be used. However, it was not a recommended practice. As networking
technologies have evolved, and IP address depletion has become of real concern, it has
become acceptable practice to use the first and last subnets in a subnetted network in
conjunction with VLSM.
In this network, the network management team has decided to borrow three bits from the
host portion of the Class C address that has been selected for this addressing scheme.
If management decides to use subnet zero, it has eight usable subnets, each of which may
support 30 hosts. If management decides to use the no ip subnet-zero command, it has
seven usable subnets with 30 hosts in each subnet. Remember that, from Cisco IOS Release
12.0, Cisco routers use subnet zero by default. Therefore the Sydney, Brisbane, Perth, and
Melbourne remote offices may each have 30 hosts. The team realizes that it has to address
the three point-to-point WAN links between Sydney, Brisbane, Perth, and Melbourne. If the
team uses the three remaining subnets for the WAN links, it will have used all of the available
addresses and have no room for growth. The team will also have wasted the 28 host addresses
from each subnet simply to address three point-to-point networks. Using this addressing
scheme, one third of the potential address space will have been wasted.
Such an addressing scheme is fine for a small LAN. However, it is extremely wasteful when
point-to-point connections must also be addressed.
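The arithmetic behind that claim can be checked with a short calculation. This is an illustrative sketch of the figures given above, not part of the original curriculum:

```python
# Borrowing 3 bits from a Class C (/24) network yields 8 subnets (/27),
# each with 2^5 - 2 = 30 usable host addresses.
subnets = 2 ** 3                       # 8 subnets
hosts_per_subnet = 2 ** 5 - 2          # 30 usable hosts each
total_hosts = subnets * hosts_per_subnet

# Three /27 subnets assigned to point-to-point WAN links use only
# 2 of their 30 addresses, wasting 28 addresses apiece.
wasted = 3 * (hosts_per_subnet - 2)

print(total_hosts)                     # 240
print(wasted)                          # 84
print(round(wasted / total_hosts, 2))  # 0.35, roughly one third
```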
1.1.3 When to use VLSM?
It is important to design an addressing scheme that allows for growth and does not
involve wasting addresses. This section examines how VLSM can be used to prevent waste of
addresses on point-to-point links.
This time the networking team decided to avoid their wasteful use of the /27 mask on
the point-to-point links. The team decided to apply VLSM to the addressing problem.
To apply VLSM to the addressing problem, the team will break the Class C address into
subnets of variable sizes. Large subnets are created for addressing LANs. Very small subnets
are created for WAN links and other special cases. A 30-bit mask is used to create subnets
with only two valid host addresses. In this case this is the best solution for the point-to-point
connections. The team will take one of the three subnets they had previously decided to assign
to the WAN links, and subnet it again with a 30-bit mask.
In the example, the team has taken one of the last three subnets, subnet 6, and subnetted
it again, this time with a 30-bit mask. The figures illustrate that after using VLSM, the
team has eight ranges of addresses to be used for the point-to-point links.
1.1.4 Calculating subnets with VLSM
VLSM helps to manage IP addresses. VLSM allows the setting of a subnet mask that suits the
link or segment requirements. An addressing scheme can then satisfy the requirements of a
LAN with one subnet mask and the requirements of a point-to-point WAN with another.
Look at the example in the figure, which illustrates how to calculate subnets with VLSM.
The example contains a Class B address of 172.16.0.0 and two LANs that require at
least 250 hosts each. If the routers are using a classful routing protocol the WAN link would
need to be a subnet of the same Class B network, assuming that the administrator is not using
IP unnumbered. Classful routing protocols such as RIP v1, IGRP, and EGP are not capable of
supporting VLSM. Without VLSM, the WAN link would have to have the same subnet mask
as the LAN segments. A 24-bit mask (255.255.255.0) supports up to 254 hosts, enough for the
250 required on each LAN.
The WAN link only needs two addresses, one for each router. Therefore there would be
252 addresses wasted.
If VLSM were used in this example, a 24-bit mask would still work on the LAN
segments for the 250 hosts. A 30-bit mask could be used for the WAN link because only two
host addresses are needed.
In the figure, the subnet addresses used are those generated from subdividing the
172.16.32.0/20 subnet into multiple /26 subnets. The figure illustrates where the subnet
addresses can be applied, depending on the number of host requirements. For example, the
WAN links use subnet addresses with a prefix of /30. This prefix allows for only two hosts,
just enough hosts for a point-to-point connection between a pair of routers.
To calculate the subnet addresses used on the WAN links, further subnet one of the
unused /26 subnets. In this example, 172.16.33.0/26 is further subnetted with a prefix of /30.
This provides four more subnet bits and therefore 2^4 = 16 subnets for the WANs. The figure
illustrates how to work through a VLSM masking system.
VLSM allows the subnetting of an already subnetted address. For example, consider the
subnet address 172.16.32.0/20 and a network needing ten host addresses. With this subnet
address, there are over 4000 (2^12 – 2 = 4094) host addresses, most of which will be wasted.
With VLSM it is possible to further subnet the address 172.16.32.0/20 to give more network
addresses and fewer hosts per network. For example, by subnetting 172.16.32.0/20 to
172.16.32.0/26, there is a gain of 2^6 = 64 subnets, each of which could support 2^6 – 2 = 62
hosts.
Use this procedure to further subnet 172.16.32.0/20 to 172.16.32.0/26:

Step 1 Write 172.16.32.0 in binary form.

Step 2 Draw a vertical line between the 20th and 21st bits, as shown in the figure. /20
was the original subnet boundary.

Step 3 Draw a vertical line between the 26th and 27th bits, as shown in the figure.
The original /20 subnet boundary is extended six bits to the right, becoming /26.
Step 4 Calculate the 64 subnet addresses using the bits between the two vertical
lines, from lowest to highest in value. The figure shows the first five subnets
available.
It is important to remember that only unused subnets can be further subnetted. If any
address from a subnet is used, that subnet cannot be further subnetted. In the example, four
subnet numbers are used on the LANs. Another unused subnet, 172.16.33.0/26, is further
subnetted for use on the WANs.
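The four steps above can be checked with Python's standard ipaddress module. This is an illustrative sketch, not part of the curriculum:

```python
import ipaddress

# Steps 1-4: further subnet 172.16.32.0/20 with a /26 mask.
block = ipaddress.ip_network("172.16.32.0/20")
lan_subnets = list(block.subnets(new_prefix=26))
print(len(lan_subnets))                  # 64 subnets
print(lan_subnets[0].num_addresses - 2)  # 62 usable hosts each
for net in lan_subnets[:5]:              # the first five subnets, as in the figure
    print(net)

# Take the unused 172.16.33.0/26 and subnet it again with a /30 mask.
wan_subnets = list(ipaddress.ip_network("172.16.33.0/26").subnets(new_prefix=30))
print(len(wan_subnets))                  # 16 WAN subnets
print(wan_subnets[0])                    # 172.16.33.0/30
```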
Lab Activity
Lab Exercise: Calculating VLSM Subnets
In this lab, students will use variable-length subnet mask (VLSM) to support more
efficient use of the assigned IP addresses and to reduce the amount of routing information at
the top level.
1.1.5 Route aggregation with VLSM
When using VLSM, try to keep the subnetwork numbers grouped together in the
network to allow for aggregation. This means keeping networks like 172.16.14.0 and
172.16.15.0 near one another so that the routers need only carry a route for 172.16.14.0/23.
The use of Classless InterDomain Routing (CIDR) and VLSM not only prevents address
waste, but also promotes route aggregation, or summarization. Without route summarization,
Internet backbone routing would likely have collapsed sometime before 1997.
The figure illustrates how route summarization reduces the burden on upstream routers.
This complex hierarchy of variable-sized networks and subnetworks is summarized at various
points, using a prefix address, until the entire network is advertised as a single aggregate route,
200.199.48.0/22. Route summarization, or supernetting, is only possible if the routers of a
network run a classless routing protocol, such as OSPF or EIGRP. Classless routing protocols
carry a prefix that consists of a 32-bit IP address and a bit mask in the routing updates. In
the figure, the summary route that eventually reaches the provider contains a 22-bit prefix
common to all of the addresses in the organization, 200.199.48.0/22 or
11001000.11000111.001100. For
summarization to work properly, carefully assign addresses in a hierarchical fashion so that
summarized addresses will share the same high-order bits.
Remember the following rules:

A router must know in detail the subnet numbers attached to it.

A router does not need to tell other routers about each individual subnet if it can
send one aggregate route for a set of networks.

A router using aggregate routes would have fewer entries in its routing table.
VLSM allows for the summarization of routes and increases flexibility by basing the
summarization entirely on the higher-order bits shared on the left, even if the networks are not
contiguous.
The graphic shows that the addresses, or routes, share each bit up to and including the
20th bit. These bits are colored red. The 21st bit is not the same for all the routes. Therefore
the prefix for the summary route will be 20 bits long. This is used to calculate the network
number of the summary route.
The figure shows that the addresses, or routes, share each bit up to and including the 21st
bit. These bits are colored red. The 22nd bit is not the same for all the routes. Therefore the
prefix for the summary route will be 21 bits long. This is used to calculate the network
number of the summary route.
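The aggregation of contiguous networks such as 172.16.14.0 and 172.16.15.0 can be sketched with the standard ipaddress module (an illustration, not part of the curriculum):

```python
import ipaddress

# Two contiguous /24 networks share their first 23 bits, so a router
# can advertise them as the single aggregate route 172.16.14.0/23.
routes = [ipaddress.ip_network("172.16.14.0/24"),
          ipaddress.ip_network("172.16.15.0/24")]
summary = list(ipaddress.collapse_addresses(routes))
print(summary[0])  # 172.16.14.0/23
```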
1.1.6 Configuring VLSM
If VLSM is the scheme chosen, it must then be calculated and configured correctly.
In this example allow for the following:
Network address: 192.168.10.0
The Perth router has to support 60 hosts. In this case, a minimum of six bits are needed
in the host portion of the address. Six bits will yield 2^6 – 2 = 62 possible host addresses,
so the division is 192.168.10.0/26.
The Sydney and Singapore routers have to support 12 hosts each. In these cases, a
minimum of four bits are needed in the host portion of the address. Four bits will yield
2^4 – 2 = 14 possible host addresses, so the division is 192.168.10.96/28 for Sydney and
192.168.10.112/28 for Singapore.
The Kuala Lumpur router requires 28 hosts. In this case, a minimum of five bits are
needed in the host portion of the address. Five bits will yield 2^5 – 2 = 30 possible host
addresses, so the division here is 192.168.10.64/27.
The following are the point-to-point connections:

Perth to Kuala Lumpur 192.168.10.128/30 – Since only two addresses are required,
a minimum of two bits are needed in the host portion of the address. Two bits will
yield 2^2 – 2 = 2 possible host addresses, so the division here is 192.168.10.128/30.

Sydney to Kuala Lumpur 192.168.10.132/30 – Since only two addresses are
required, a minimum of two bits are needed in the host portion of the address. Two
bits will yield 2^2 – 2 = 2 possible host addresses, so the division here is
192.168.10.132/30.

Singapore to Kuala Lumpur 192.168.10.136/30 – Since only two addresses are
required, a minimum of two bits are needed in the host portion of the address. Two
bits will yield 2^2 – 2 = 2 possible host addresses, so the division here is
192.168.10.136/30.
There is sufficient host address space for two host endpoints on a point-to-point serial
link. The example for Singapore to Kuala Lumpur is configured as follows:
Singapore(config)#interface serial 0
Singapore(config-if)#ip address 192.168.10.137 255.255.255.252
KualaLumpur(config)#interface serial 1
KualaLumpur(config-if)#ip address 192.168.10.138 255.255.255.252
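The /30 chosen for this link can be verified with a quick check using Python's ipaddress module (illustrative only):

```python
import ipaddress

# 192.168.10.136/30 leaves exactly two usable addresses, one for each
# serial endpoint configured above (.137 and .138).
link = ipaddress.ip_network("192.168.10.136/30")
print(link.netmask)                    # 255.255.255.252
print([str(h) for h in link.hosts()])  # ['192.168.10.137', '192.168.10.138']
```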
1.2 RIP Version 2
1.2.1 RIP history
The Internet is a collection of autonomous systems (AS). Each AS is generally
administered by a single entity. Each AS will have its own routing technology, which may
differ from other autonomous systems. The routing protocol used within an AS is referred to
as an Interior Gateway Protocol (IGP). A separate protocol, called an Exterior Gateway
Protocol (EGP), is used to transfer routing information between autonomous systems. RIP
was designed to work as an IGP in a moderate-sized AS. It is not intended for use in more
complex environments.
RIP v1 is considered an interior gateway protocol that is classful. RIP v1 is a distance
vector protocol that broadcasts its entire routing table to each neighbor router at
predetermined intervals. The default interval is 30 seconds. RIP uses hop count as a metric,
with 15 as the maximum number of hops.
If the router receives information about a network, and the receiving interface belongs to
the same major network but is on a different subnet, the router applies the subnet mask that is
configured on the receiving interface. If the receiving interface belongs to a different major
network, the router applies the default classful mask by address class:

For Class A addresses, the default classful mask is 255.0.0.0.

For Class B addresses, the default classful mask is 255.255.0.0.

For Class C addresses, the default classful mask is 255.255.255.0.
RIP v1 is a popular routing protocol because virtually all IP routers support it. The
popularity of RIP v1 is based on the simplicity and the universal compatibility it demonstrates.
RIP v1 is capable of load balancing over as many as six equal-cost paths, with four paths as
the default.
RIP v1 has the following limitations:

It does not send subnet mask information in its updates.

It sends updates as broadcasts on 255.255.255.255.

It does not support authentication.

It is not able to support VLSM or classless interdomain routing (CIDR).
RIP v1 is simple to configure, as shown in the figure.
1.2.2 RIP v2 features
RIP v2 is an improved version of RIP v1 and shares the following features:

It is a distance vector protocol that uses a hop count metric.

It uses holddown timers to prevent routing loops – default is 180 seconds.

It uses split horizon to prevent routing loops.

It uses 16 hops as a metric for infinite distance.
RIP v2 provides prefix routing, which allows it to send out subnet mask information
with the route update. Therefore, RIP v2 supports the use of classless routing in which
different subnets within the same network can use different subnet masks, as in VLSM.
RIP v2 provides for authentication in its updates. A set of keys can be used on an
interface as an authentication check. RIP v2 allows for a choice of the type of authentication
to be used in RIP v2 packets. The choice can be either clear text or Message Digest 5 (MD5)
authentication. Clear text is the default. MD5 can be used to authenticate the source of a
routing update. MD5 is the same hash that protects enable secret passwords, and it has no
known reversal.
RIP v2 multicasts routing updates using the Class D address 224.0.0.9, which provides
for better efficiency.
1.2.3 Comparing RIP v1 and v2
RIP uses distance vector algorithms to determine the direction and distance to any link
in the internetwork. If there are multiple paths to a destination, RIP selects the path with the
least number of hops. However, because hop count is the only routing metric used by RIP, it
does not necessarily select the fastest path to a destination.
RIP v1 allows routers to update their routing tables at programmable intervals. The
default interval is 30 seconds. The continual sending of routing updates by RIP v1 means that
network traffic builds up quickly. To prevent a packet from looping infinitely, RIP allows a
maximum hop count of 15. If the destination network is more than 15 routers away, the
network is considered unreachable and the packet is dropped. This situation creates a
scalability issue when routing in large heterogeneous networks. RIP v1 uses split horizon to
prevent loops. This means that RIP v1 advertises routes out an interface only if the routes
were not learned from updates entering that interface. It uses holddown timers to prevent
routing loops. Holddowns ignore any new information about a subnet indicating a poorer
metric for a time equal to the holddown timer.
The figure summarizes the behavior of RIP v1 when used by a router.
RIP v2 is an improved version of RIP v1. It has many of the same features as RIP v1.
RIP v2 is also a distance vector protocol that uses hop count, holddown timers, and split
horizon. The figure compares and contrasts RIP v1 and RIP v2.
Lab Activity
Lab Exercise: Review of Basic Router Configuration with RIP
In this lab, the students will set up an IP addressing scheme using Class B networks and
configure Routing Information Protocol (RIP) on routers.
Lab Activity
e-Lab Activity: Review of Basic Router Configuration including RIP
In this lab, the students will review the basic configuration of routers.
Interactive Media Activity
Checkbox: RIP v1 and RIP v2 Comparison
When the student has completed this activity, the student will be able to identify the
difference between RIP v1 and RIP v2.
1.2.4 Configuring RIP v2
RIP v2 is a dynamic routing protocol that is configured by naming the routing protocol
RIP Version 2, and then assigning IP network numbers without specifying subnet values. This
section describes the basic commands used to configure RIP v2 on a Cisco router.
To enable a dynamic routing protocol, the following tasks must be completed:

Select a routing protocol, such as RIP v2.

Assign the IP network numbers without specifying the subnet values.

Assign the network or subnet addresses and the appropriate subnet mask to the
interfaces.
RIP v2 uses multicasts to communicate with other routers. The routing metric helps the
routers find the best path to each network or subnet.
The router command starts the routing process. The network command causes the
implementation of the following three functions:

The routing updates are multicast out an interface.

The routing updates are processed if they enter that same interface.

The subnet that is directly connected to that interface is advertised.
The network command is required because it allows the routing process to determine
which interfaces will participate in the sending and receiving of routing updates. The network
command starts up the routing protocol on all interfaces that the router has in the specified
network. The network command also allows the router to advertise that network.
The router rip command starts the RIP process and the version 2 subcommand specifies
RIP v2 as the version to run, while the network command identifies a participating attached
network.
In this example, the configuration of Router A includes the following:

router rip – Enables the RIP routing process.

version 2 – Specifies RIP v2 as the version to use.

network 172.16.0.0 – Specifies a directly connected network.

network 10.0.0.0 – Specifies a directly connected network.
The interfaces on Router A connected to networks 172.16.0.0 and 10.0.0.0, or their
subnets, will send and receive RIP v2 updates. These routing updates allow the router to learn
the network topology. Routers B and C have similar RIP configurations but with different
network numbers specified.
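The classful matching that the network command performs can be sketched in Python. The interface names and addresses below are hypothetical, and the helper classful_network is an illustration, not IOS behaviour:

```python
import ipaddress

def classful_network(addr: str) -> ipaddress.IPv4Network:
    """Return the classful major network (A, B, or C) containing addr."""
    first_octet = int(addr.split(".")[0])
    if first_octet < 128:
        prefix = 8    # Class A
    elif first_octet < 192:
        prefix = 16   # Class B
    else:
        prefix = 24   # Class C
    return ipaddress.ip_network(f"{addr}/{prefix}", strict=False)

# Hypothetical interface addresses on Router A.
interfaces = {"Fa0/0": "172.16.1.1", "S0/0": "10.1.1.1", "S0/1": "192.168.50.1"}
statements = ["172.16.0.0", "10.0.0.0"]   # the two network commands

for name, addr in interfaces.items():
    participates = any(ipaddress.ip_address(addr) in classful_network(net)
                       for net in statements)
    print(name, "sends/receives updates" if participates else "does not participate")
```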
The figure shows another example of a RIP v2 configuration.
Lab Activity
Lab Exercise: Converting RIP v1 to RIP v2
In this lab, the students will configure RIP v1 on the routers and then convert to RIP v2.
Lab Activity
e-Lab Activity: Converting RIP v1 to RIP v2
In this lab, the student will configure RIP v1 and then convert to RIP v2.
1.2.5 Verifying RIP v2
The show ip protocols and show ip route commands display information about routing
protocols and the routing table. This section describes how to use show commands to verify
the RIP configuration.
The show ip protocols command displays values about routing protocols and routing
protocol timer information associated with the router. In the example, the router is configured
with RIP and sends updated routing table information every 30 seconds. This interval is
configurable. If a router running RIP does not receive an update from another router for 180
seconds or more, the first router marks the routes served by the non-updating router as being
invalid. In the figure, the holddown timer is set to 180 seconds. Therefore, an update to a route
that was down and is now up could stay in the holddown state until the full 180 seconds have
passed.
If there is still no update after 240 seconds the router removes the routing table entries.
In the figure, it has been 18 seconds since Router A received an update from Router B. The
router is injecting routes for the networks listed following the Routing for Networks line. The
router is receiving routes from the neighboring RIP routers listed following the Routing
Information Sources line. The distance default of 120 refers to the administrative distance for
a RIP route.
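The interaction of the invalid (180 second) and flush (240 second) timers can be modeled with a few lines of Python. This is a simplified illustration of the thresholds described above, not actual IOS code:

```python
# Simplified model of RIP's invalid (180 s) and flush (240 s) timers,
# keyed on the time since the last update was heard for a route.
INVALID_AFTER = 180   # route marked invalid / placed in holddown
FLUSH_AFTER = 240     # route removed from the routing table

def route_state(seconds_since_update: int) -> str:
    if seconds_since_update >= FLUSH_AFTER:
        return "removed"
    if seconds_since_update >= INVALID_AFTER:
        return "invalid"
    return "valid"

for age in (18, 185, 250):
    print(age, route_state(age))   # 18 valid, 185 invalid, 250 removed
```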
The show ip interface brief command can also be used to list a summary of the
information and status of an interface.
The show ip route command displays the contents of the IP routing table. The routing
table contains entries for all known networks and subnetworks, and contains a code that
indicates how that information was learned. The output of key fields from this command and
their function is explained in the table.
Examine the output to see if the routing table is populated with routing information. If
entries are missing, routing information is not being exchanged. Use the show
running-config or show ip protocols privileged EXEC commands on the router to check for
a possible misconfigured routing protocol.
Lab Activity
Lab Exercise: Verifying RIP v2 Configuration
In this lab, the students will configure RIP v1 and v2 on routers and use show
commands to verify RIP v2 operation.
1.2.6 Troubleshooting RIP v2
This section explains the use of the debug ip rip command.
Use the debug ip rip command to display RIP routing updates as they are sent and
received. The no debug all or undebug all commands will turn off all debugging.
The example shows that the router being debugged has received updates from one router
at source address 10.1.1.2. The router at source address 10.1.1.2 sent information about two
destinations in the routing table update. The router being debugged also sent updates, in both
cases to broadcast address 255.255.255.255 as the destination. The number in parentheses is
the source address encapsulated into the IP header.
Other output sometimes seen from the debug ip rip command includes entries such as
the following:
RIP: broadcasting general request on Ethernet0
RIP: broadcasting general request on Ethernet1
These outputs appear at startup or when an event occurs, such as an interface transition
or a user manually clearing the routing table.
An entry, such as the following, is most likely caused by a malformed packet from the
transmitter:
RIP: bad version 128 from 160.89.80.43
Examples of debug ip rip output and their meanings are shown in the figure.
Lab Activity
Lab Exercise: Troubleshooting RIP v2 using Debug
In this lab, the students will use debug commands to verify proper RIP operation and
analyze data transmitted between routers.
Lab Activity
e-Lab Activity: RIP v2 using Debug
In this lab, the students will enable routing on the router, save the configuration, and
ping interfaces on routers.
1.2.7 Default routes
By default, routers learn paths to destinations three different ways:

Static routes – The system administrator manually defines a static route as the
next hop to a destination. Static routes are useful for security and traffic reduction,
since only the explicitly defined path is used.

Default routes – The system administrator also manually defines default routes as
the path to take when there is no known route to the destination. Default routes
keep routing tables shorter. When an entry for a destination network does not exist
in a routing table, the packet is sent to the default network.

Dynamic routes – Dynamic routing means that the router learns of paths to
destinations by receiving periodic updates from other routers.
In the figure, a static route is configured with the following command:
Router(config)#ip route 172.16.1.0 255.255.255.0 172.16.2.1
The ip default-network command establishes a default route in networks using
dynamic routing protocols:
Router(config)#ip default-network 192.168.20.0
Generally, after the routing table has been set up to handle all the networks that must be
configured, it is often useful to ensure that all other packets go to a specific location. One
example is a router that connects to the Internet. This is called the default route for the router.
All packets with destinations not found in the routing table are sent out the nominated
interface of the default route.
The ip default-network command is usually configured on the routers that connect to a
router with a static default route.
In Figure , Hong Kong 2 and Hong Kong 3 would use Hong Kong 4 as the default
gateway. Hong Kong 4 would use interface 192.168.19.2 as its default gateway. Hong Kong 1
would route packets to the Internet for all internal hosts. To allow Hong Kong 1 to route these
packets it is necessary to configure a default route as:
HongKong1(config)#ip route 0.0.0.0 0.0.0.0 192.168.20.1
The zeros represent any destination network with any mask. Default routes are referred
to as quad zero routes. In the diagram, the only way Hong Kong 1 can go to the Internet is
through the interface 192.168.20.1.
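A quad zero route matches any destination because its prefix length is zero, so it loses to any more specific entry in a longest-prefix lookup. The sketch below illustrates this with a toy routing table; the table contents are illustrative, using addresses from the example:

```python
import ipaddress

# Longest-prefix lookup with a quad zero (0.0.0.0/0) entry as fallback.
table = {
    ipaddress.ip_network("192.168.19.0/24"): "192.168.19.2",
    ipaddress.ip_network("0.0.0.0/0"): "192.168.20.1",   # default route
}

def next_hop(dest: str) -> str:
    addr = ipaddress.ip_address(dest)
    matches = [net for net in table if addr in net]      # /0 matches anything
    best = max(matches, key=lambda net: net.prefixlen)   # longest prefix wins
    return table[best]

print(next_hop("192.168.19.77"))  # 192.168.19.2, via the specific route
print(next_hop("8.8.8.8"))        # 192.168.20.1, via the default route
```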
Summary
An understanding of the following key points should have been achieved:

VLSM and the reasons for its use

Subnetting networks of different sizes using VLSM

Route aggregation and summarization as they relate to VLSM

Router configuration using VLSM

Key features of RIP v1 and RIP v2

Important differences between RIP v1 and RIP v2

Configuration of RIP v2

Verifying and troubleshooting RIP v2 operation

Configuring default routes using the ip route and ip default-network commands
Chapter 2
Single Area OSPF
Overview
The two main classes of interior gateway routing protocols (IGP) are distance vector and
link-state. Both types of routing protocols are concerned with finding routes through
autonomous systems. Distance vector and link-state routing protocols use different methods to
accomplish the same tasks.
Link-state routing algorithms, also known as shortest path first (SPF) algorithms,
maintain a complex database of topology information. A link-state routing algorithm
maintains full knowledge of distant routers and how they interconnect. In contrast, distance
vector algorithms provide nonspecific information about distant networks and no knowledge
of distant routers.
Understanding the operation of link-state routing protocols is critical in understanding
how to enable, verify, and troubleshoot their operation. This module explains how link-state
routing protocols work, outlines their features, describes the algorithm they use, and points
out the advantages and disadvantages of link-state routing.
Early routing protocols like RIP were all distance vector protocols. Many of the
important protocols in use today are also distance vector protocols, including RIP v2, IGRP,
and EIGRP. However, as networks grew in size and complexity, some of the limitations of
distance vector routing protocols became apparent. Routers in a network using a distance
vector scheme could only guess at the network topology based on the full routing tables
received from neighboring routers. Bandwidth usage is high because of the periodic exchange
of routing updates, and network convergence is slow, resulting in poor routing decisions.
Link-state routing protocols differ from distance vector protocols. Link-state protocols
flood routing information allowing every router to have a complete view of the network
topology. Triggered updates allow efficient use of bandwidth and faster convergence. Changes
in the state of a link are sent to all routers in the network as soon as the change occurs.
One of the most important link-state protocols is Open Shortest Path First (OSPF).
OSPF is based on open standards, which means it can be developed and improved by multiple
vendors. It is a complex protocol that is a challenge to implement in a large network. The
basics of OSPF are covered in this module.
OSPF configuration on a Cisco router is similar to the configuration of other routing
protocols. As with other routing protocols, the OSPF routing process must be enabled and the
networks that it will announce must be identified. However, OSPF has a number
of features and configuration procedures that are unique. These features make OSPF a
powerful choice for a routing protocol and make OSPF configuration a very challenging
process.
In complex large networks, OSPF can be configured to span many areas and several
different area types. The ability to design and implement large OSPF networks begins with
the ability to configure OSPF in a single area. This module also discusses the configuration of
single area OSPF.
Students completing this module should be able to:
- Identify the key features of link-state routing
- Explain how link-state routing information is maintained
- Discuss the link-state routing algorithm
- Examine the advantages and disadvantages of link-state routing
- Compare and contrast link-state routing with distance vector routing
- Enable OSPF on a router
- Configure a loopback address to set router priority
- Change OSPF route preference by modifying the cost metric
- Configure OSPF authentication
- Change OSPF timers
- Describe the steps to create and propagate a default route
- Use show commands to verify OSPF operation
- Configure the OSPF routing process
- Define key OSPF terms
- Describe the OSPF network types
- Describe the OSPF Hello protocol
- Identify the basic steps in the operation of OSPF
2.1 Link-State Routing Protocol
2.1.1 Overview of link-state routing
Link-state routing protocols perform in a very different way from distance vector
protocols. Understanding the difference between distance vector and link-state protocols is
vital for network administrators. One essential difference is that distance vector protocols use
a simpler method of exchanging routing information. The figure outlines the characteristics of
both distance vector and link-state routing protocols.
Link-state routing algorithms maintain a complex database of topology information.
While the distance vector algorithm has nonspecific information about distant networks and
no knowledge of distant routers, a link-state routing algorithm maintains full knowledge of
distant routers and how they interconnect.
Interactive Media Activity
Drag and Drop: Link-State Routing Overview
When the student has completed this activity, the student will be able to identify the
differences between distance vector and link-state routing protocols.
2.1.2 Link-state routing protocol features
Link-state routing protocols collect routing information from all other routers in the
network or within a defined area of the network. Once all of the information is collected, each
router, independently of the other routers, calculates its best paths to all destinations in the
network. Because each router maintains its own view of the network, it is less likely to
propagate incorrect information provided by any of its neighboring routers.
Link-state routing protocols perform the following functions:
- Respond quickly to network changes
- Send triggered updates only when a network change has occurred
- Send periodic updates known as link-state refreshes
- Use a hello mechanism to determine the reachability of neighbors
Each router keeps track of the state or condition of its directly connected neighbors by
multicasting hello packets. Each router also keeps track of all the routers in its network or
area of the network by using link-state advertisements (LSAs). The hello packets contain
information about the networks that are attached to the router. In the figure, P4 knows about its
neighbors, P1 and P3, on the Perth3 network. The LSAs provide updates on the state of links that
are interfaces on other routers in the network.
A router running a link-state protocol has the following features:
- Uses the hello information and LSAs it receives from other routers to build a database about the network
- Uses the shortest path first (SPF) algorithm to calculate the shortest route to each network
- Stores this route information in its routing table
2.1.3 How routing information is maintained
Link-state routing uses the following features:
- Link-state advertisements (LSAs)
- A topological database
- The shortest path first (SPF) algorithm
- The resulting SPF tree
- A routing table of paths and ports to each network to determine the best paths for packets
Link-state routing protocols were designed to overcome the limitations of distance
vector routing protocols. For example, distance vector protocols only exchange routing
updates with immediate neighbors while link-state routing protocols exchange routing
information across a much larger area.
When a failure occurs in the network, such as when a neighbor becomes unreachable,
link-state protocols flood LSAs using a special multicast address throughout an area. Each
link-state router takes a copy of the LSA and updates its link-state, or topological database.
The link-state router will then forward the LSA to all neighboring devices. LSAs cause every
router within the area to recalculate routes. Because LSAs need to be flooded throughout an
area, and all routers within that area need to recalculate their routing tables, the number of
link-state routers that can be in an area should be limited.
A link is the same as an interface on a router. The state of the link is a description of an
interface and the relationship to its neighboring routers. For example, a description of the
interface would include the IP address of the interface, the subnet mask, the type of network
to which it is connected, the routers connected to that network, and so on. The collection of
link-states forms a link-state database, sometimes called a topological database. The link-state
database is used to calculate the best paths through the network. Link-state routers find the
best paths to destinations. Link-state routers do this by applying the Dijkstra shortest path first
(SPF) algorithm against the link-state database to build the shortest path first tree, with the
local router as the root. The best paths are then selected from the SPF tree and placed in the
routing table.
2.1.4 Link-state routing algorithms
Link-state routing algorithms maintain a complex database of the network topology by
exchanging link-state advertisements (LSAs) with other routers in a network. This section
describes the link-state routing algorithm.
Link-state routing algorithms have the following characteristics:
- They are known collectively as shortest path first (SPF) protocols.
- They maintain a complex database of the network topology.
- They are based on the Dijkstra algorithm.
Unlike distance vector protocols, link-state protocols develop and maintain full
knowledge of the network routers as well as how they interconnect. This is achieved through
the exchange of link-state advertisements (LSAs) with other routers in a network.
Each router that exchanges LSAs constructs a topological database using all received
LSAs. An SPF algorithm is then used to compute reachability to networked destinations. This
information is used to update the routing table. This process can discover changes in the
network topology caused by component failure or network growth.
LSA exchange is triggered by an event in the network instead of periodic updates. This
can greatly speed up the convergence process because there is no need to wait for a series of
timers to expire before the networked routers can begin to converge.
If the network shown in the figure uses a link-state routing protocol, there would be no
concern about connectivity between routers A and B. Depending on the actual protocol
employed and the metrics selected, it is highly likely that the routing protocol could
discriminate between the two paths to the same destination and try to use the best one.
Shown in the figure are the routing entries in the table for Router A, to Router D. In this
example, a link-state protocol would remember both routes. Some link-state protocols provide
a way to assess the performance capabilities of the two routes and choose the best one. If the
route through Router C was the more preferred path and experienced operational difficulties,
such as congestion or component failure, the link-state routing protocol would detect this
change and begin forwarding packets through Router B.
2.1.5 Advantages and disadvantages of link-state routing
The following list contains many of the advantages that link-state routing protocols have
over the traditional distance vector algorithms, such as Routing Information Protocol (RIP v1)
or Interior Gateway Routing Protocol (IGRP):
- Link-state protocols use cost metrics to choose paths through the network. The cost metric reflects the capacity of the links on those paths.
- Link-state protocols use triggered, flooded updates and can immediately report changes in the network topology to all routers in the network. This immediate reporting generally leads to fast convergence times.
- Each router has a complete and synchronized picture of the network. Therefore, it is very difficult for routing loops to occur.
- Routers always use the latest set of information on which to base their routing decisions because LSAs are sequenced and aged.
- The link-state database sizes can be minimized with careful network design. This leads to smaller Dijkstra calculations and faster convergence.
- Every router is capable of mapping a copy of the entire network architecture, at least of its own area of the network. This attribute can greatly assist troubleshooting.
- Classless interdomain routing (CIDR) and variable-length subnet masking (VLSM) are supported.
The following are some disadvantages of link-state routing protocols:
- They require more memory and processing power than distance vector routers, which can make link-state routing cost-prohibitive for organizations with small budgets and legacy hardware.
- They require strict hierarchical network design, so that a network can be broken into smaller areas to reduce the size of the topology tables.
- They require an administrator with a good understanding of link-state routing.
- They flood the network with LSAs during the initial discovery process, which can significantly decrease the capability of the network to transport data. This flooding process can noticeably degrade the network performance depending on the available bandwidth and the number of routers exchanging information.
2.1.6 Compare and contrast distance vector and link-state routing
All distance vector protocols learn routes and then send these routes to directly
connected neighbors. However, link-state routers advertise the states of their links to all other
routers in the area so that each router can build a complete link-state database. These
advertisements are called link-state advertisements (LSAs). Unlike distance vector routers,
link-state routers can form special relationships with their neighbors and other link-state
routers. This is to ensure that the LSA information is properly and efficiently exchanged.
The initial flood of LSAs provides routers with the information that they need to build a
link-state database. Routing updates occur only when the network changes. If there are no
changes, the routing updates occur after a specific interval. If the network changes, a partial
update is sent immediately. The partial update contains only information about links
that have changed, not a complete routing table. An administrator concerned about WAN link
utilization will find these partial and infrequent updates an efficient alternative to distance
vector routing, which sends out a complete routing table every 30 seconds. When a change
occurs, link-state routers are all notified simultaneously by the partial update. Distance vector
routers wait for neighbors to note the change, implement the change, and then pass it to the
neighboring routers.
The benefits of link-state routing over distance vector protocols include faster
convergence and improved bandwidth utilization. Link-state protocols support classless
interdomain routing (CIDR) and variable-length subnet mask (VLSM). This makes them a
good choice for complex, scalable networks. In fact, link-state protocols generally outperform
distance vector protocols on any size network. Link-state protocols are not implemented on
every network because they require more memory and processing power than distance vector
protocols and can overwhelm slower equipment. Another reason they are not more widely
implemented is the fact that link-state protocols are quite complex. This would require
well-trained administrators to correctly configure and maintain them.
2.2 Single Area OSPF Concepts
2.2.1 OSPF overview
Open Shortest Path First (OSPF) is a link-state routing protocol based on open standards.
It is described in several standards of the Internet Engineering Task Force (IETF). The most
recent description is RFC 2328. The Open in OSPF means that it is open to the public and is
non-proprietary.
OSPF is becoming the preferred IGP when compared with RIP v1 and RIP v2 because it is
scalable. RIP is limited to 15 hops, converges slowly, and sometimes chooses slow routes
because it ignores critical factors such as bandwidth in route determination.
OSPF overcomes these limitations and proves to be a robust and scalable routing protocol
suitable for the networks of today. OSPF can be used and configured as a single area for small
networks.
It can also be used for large networks. OSPF routing scales to large networks if
hierarchical network design principles are used.
Large OSPF networks use a hierarchical design. Multiple areas connect to a distribution
area, area 0, also called the backbone. This design approach allows for extensive control of
routing updates. Defining areas reduces routing overhead, speeds up convergence, confines
network instability to an area and improves performance.
2.2.2 OSPF terminology
As a link-state protocol, OSPF operates differently from distance vector routing
protocols. Link-state routers identify neighboring routers and then communicate with the
identified neighbors. OSPF has its own terminology. The new terms are shown in the figure.
Information is gathered from OSPF neighbors about the status, or links, of each OSPF
router. This information is flooded to all its neighbors. Flooding is a process that sends
information out all ports, with the exception of the port on which the information was
received. An OSPF router advertises its own link states and passes on received link states.
The routers process the information about link-states and build a link-state database.
Every router in the OSPF area will have the same link-state database, so every router has the
same information about the state of the links and the neighbors of every other router.
Then each router runs the SPF algorithm on its own copy of the database. This
calculation determines the best route to a destination. The SPF algorithm adds up the cost,
which is a value that is usually based on bandwidth. The lowest cost path is added to the
routing table, which is also known as the forwarding database.
OSPF routers record information about their neighbors in the adjacency database.
To reduce the number of exchanges of routing information among several neighbors on
the same network, OSPF routers elect a Designated Router (DR) and a Backup Designated
Router (BDR) that serve as focal points for routing information exchange.
Interactive Media Activity
Crossword Puzzle: OSPF Terminology
When the student has completed this activity, the student will understand the different
OSPF terminology.
2.2.3 Comparing OSPF with distance vector routing protocols
OSPF uses link-state technology, compared with distance vector technology such as RIP.
Link-state routers maintain a common picture of the network and exchange link information
upon initial discovery or network changes. Link-state routers do not broadcast their routing
tables periodically as distance vector protocols do. Therefore, link-state routers use less
bandwidth for routing table maintenance.
RIP is appropriate for small networks, and the best path is based on the lowest number
of hops. OSPF is appropriate for the needs of large scalable internetworks, and the best path is
determined by speed. RIP and other distance vector protocols use simple algorithms to
compute best paths. The SPF algorithm is complex. Routers implementing distance vector
routing may need less memory and less powerful processors than those running OSPF.
OSPF selects routes based on cost, which is related to speed. The higher the speed, the
lower the OSPF cost of the link.
OSPF selects the fastest loop-free path from the shortest-path first tree as the best path in
the network.
OSPF guarantees loop-free routing. Distance vector protocols may cause routing loops.
If links are unstable, flooding of link-state information can lead to unsynchronized
link-state advertisements and inconsistent decisions among routers.
OSPF addresses the following issues:
- Speed of convergence
- Support for Variable Length Subnet Mask (VLSM)
- Network size
- Path selection
- Grouping of members
In large networks RIP convergence can take several minutes since the routing table of
each router is copied and shared with directly connected routers. After initial OSPF
convergence, maintaining a converged state is faster because only the changes in the network
are flooded to other routers in an area.
OSPF supports VLSM and is therefore referred to as a classless protocol. RIP v1 does not
support VLSM; however, RIP v2 does.
RIP considers a network that is more than 15 routers away to be unreachable because the
number of hops is limited to 15. This limits RIP to small topologies. OSPF has no hop-count
limit and is suitable for intermediate to large networks.
RIP selects a path to a network by adding one to the hop count reported by a neighbor. It
compares the hop counts to a destination and selects the path with the smallest distance or
hops. This algorithm is simple and does not require a powerful router or a lot of memory. RIP
does not take into account the available bandwidth in best path determination.
OSPF selects a path using cost, a metric based on bandwidth. All OSPF routers must
obtain complete information about the networks of every router to calculate the shortest path.
This is a complex algorithm. Therefore, OSPF requires more powerful routers and more
memory than RIP.
RIP uses a flat topology. Routers in a RIP region exchange information with all routers.
OSPF uses the concept of areas. A network can be subdivided into groups of routers. In this
way OSPF can limit traffic to these areas. Changes in one area do not affect performance in
other areas. This hierarchical approach allows a network to scale efficiently.
Interactive Media Activity
Checkbox: Link-State and Distance Vector Comparison
When the student has completed this activity, the student will be able to identify the
difference between link-state and distance vector routing protocols.
2.2.4 Shortest path algorithm
The shortest path algorithm is used by OSPF to determine the best path to a destination.
In this algorithm, the best path is the lowest cost path. The algorithm was devised by Edsger
Dijkstra, a Dutch computer scientist, and published in 1959. The algorithm considers a
network to be a set of nodes connected by point-to-point links.
Each link has a cost. Each
node has a name. Each node has a complete database of all the links and so complete
information about the physical topology is known. All router link-state databases are identical.
The table in the figure shows the information that node D has received. For example, D
received information that it was connected to node C with a link cost of 4 and to node E with
a link cost of 1.
The shortest path algorithm then calculates a loop-free topology, using the node as the
starting point and examining in turn the information it has about adjacent nodes. In the figure,
node B has calculated that its best path to D runs through node E, at a total cost of 4. The
route entry installed in B forwards traffic to C, the next hop along that path, so packets from
B to D flow from B to C to E, and then to D in this OSPF network.
In the example, node B determined that to get to node F the shortest path has a cost of 5,
via node C. All other possible topologies will either have loops or higher-cost paths.
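The SPF calculation in this example can be sketched in a few lines of Python. The topology and link costs below are assumptions chosen to reproduce the costs discussed above (B reaches D at cost 4 through C and E, and F at cost 5 through C); they are not taken from the actual figure.

```python
import heapq

def spf(graph, root):
    """Dijkstra shortest path first: lowest total cost from root to every node."""
    costs = {root: 0}
    pq = [(0, root)]  # priority queue of (cost-so-far, node)
    while pq:
        cost, node = heapq.heappop(pq)
        if cost > costs.get(node, float("inf")):
            continue  # stale queue entry: a cheaper path was already found
        for neighbor, link_cost in graph[node].items():
            new_cost = cost + link_cost
            if new_cost < costs.get(neighbor, float("inf")):
                costs[neighbor] = new_cost
                heapq.heappush(pq, (new_cost, neighbor))
    return costs

# Hypothetical symmetric link costs: B-C=2, C-E=1, E-D=1, C-F=3
graph = {
    "B": {"C": 2},
    "C": {"B": 2, "E": 1, "F": 3},
    "E": {"C": 1, "D": 1},
    "D": {"E": 1},
    "F": {"C": 3},
}
print(spf(graph, "B"))  # B reaches D at cost 4 and F at cost 5
```

Each OSPF router runs this calculation against its own copy of the link-state database, with itself as the root of the SPF tree.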
2.2.5 OSPF network types
A neighbor relationship is required for OSPF routers to share routing information. A
router will try to become adjacent to, or a neighbor of, at least one other router on each IP network
to which it is connected. Some routers may try to become adjacent to all their neighbor
routers. Other routers may try to become adjacent to only one or two neighbor routers. OSPF
routers determine which routers to become adjacent to based on the type of network they are
connected to. Once an adjacency is formed between neighbors, link-state information is
exchanged.
OSPF interfaces recognize three types of networks:
- Broadcast multi-access, such as Ethernet
- Point-to-point networks
- Nonbroadcast multi-access (NBMA), such as Frame Relay
A fourth type, point-to-multipoint, can be configured on an interface by an administrator.
In a multi-access network, the number of routers that will be connected is not known in
advance. In point-to-point networks, only two routers can be connected.
In a broadcast multi-access network segment, many routers may be connected. If every
router had to establish full adjacency with every other router and exchange link-state
information with every neighbor, there would be too much overhead. If there are 5 routers, 10
adjacency relationships would be needed and 10 link states sent. If there are 10 routers then
45 adjacencies would be needed. In general, for n routers, n*(n-1)/2 adjacencies would need
to be formed.
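The n*(n-1)/2 growth can be verified with a quick calculation (an illustrative sketch only):

```python
def full_mesh_adjacencies(n):
    """Adjacencies needed if every one of n routers pairs with every other: n*(n-1)/2."""
    return n * (n - 1) // 2

for routers in (5, 10, 50):
    print(routers, "routers ->", full_mesh_adjacencies(routers), "adjacencies")
```

With a designated router, each of the other routers on the segment forms an adjacency with the DR instead, so the adjacency count grows linearly with the number of routers rather than quadratically.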
The solution to this overhead is to hold an election for a designated router (DR). This
router becomes adjacent to all other routers in the broadcast segment. All other routers on the
segment send their link-state information to the DR. The DR in turn acts as the spokesperson
for the segment. Using the example numbers above, only 5 and 10 sets of link states need be
sent respectively. The DR sends link-state information to all other routers on the segment
using the multicast address of 224.0.0.5 for all OSPF routers.
Despite the gain in efficiency that electing a DR provides, there is a disadvantage. The
DR represents a single point of failure. A second router is elected as a backup designated
router (BDR) to take over the duties of the DR if it should fail. To ensure that both the DR
and the BDR see the link states all routers send on the segment, the multicast address for all
designated routers, 224.0.0.6, is used.
On point-to-point networks only two nodes exist and no DR or BDR is elected. Both
routers become fully adjacent with each other.
Interactive Media Activity
Drag and Drop: OSPF Network Types
When the student has completed this activity, the student will be able to identify the
different OSPF network types.
2.2.6 OSPF Hello protocol
When a router starts an OSPF routing process on an interface, it sends a hello packet and
continues to send hellos at regular intervals. The rules that govern the exchange of OSPF
hello packets are called the Hello protocol.
At Layer 3 of the OSI model, the hello packets are addressed to the multicast address
224.0.0.5. This address is “all OSPF routers”. OSPF routers use hello packets to initiate new
adjacencies and to ensure that neighbor routers are still functioning. Hellos are sent every 10
seconds by default on broadcast multi-access and point-to-point networks. On interfaces that
connect to NBMA networks, such as Frame Relay, the default time is 30 seconds.
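The relationship between the hello timer and neighbor failure detection can be sketched as a simple timer check. The sketch assumes the common default in which the dead interval is four times the hello interval; verify the actual values on the platform in use.

```python
def neighbor_is_down(last_hello_seen_s, now_s, hello_interval_s=10):
    """Declare a neighbor dead if no hello has arrived within the dead
    interval, assumed here to be four times the hello interval."""
    dead_interval_s = 4 * hello_interval_s
    return (now_s - last_hello_seen_s) > dead_interval_s

print(neighbor_is_down(0, 25))   # False: still within the 40 s dead interval
print(neighbor_is_down(0, 45))   # True: the 40 s dead interval has expired
print(neighbor_is_down(0, 100, hello_interval_s=30))  # False: NBMA, 120 s window
```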
On multi-access networks the Hello protocol elects a designated router (DR) and a
backup designated router (BDR).
Although the hello packet is small, it includes the OSPF packet header. For the hello
packet, the type field is set to 1.
The hello packet carries information that all neighbors must agree upon before an
adjacency is formed, and link-state information is exchanged.
Interactive Media Activity
Drag and Drop: OSPF Packet Header
When the student has completed this activity, the student will be able to identify the
different fields in an OSPF packet header.
2.2.7 Steps in the operation of OSPF
OSPF routers send Hello packets on OSPF enabled interfaces. If all parameters in the
OSPF Hello packets are agreed upon, the routers become neighbors. On multi-access
networks, the routers elect a DR and BDR. On these networks other routers become adjacent
to the DR.
Adjacent routers go through a sequence of states. Adjacent routers must be in the full
state before routing tables are created and traffic routed. Each router sends link-state
advertisements (LSAs) in link-state update (LSU) packets. These LSAs describe all of the
router's links. Each router that receives an LSA from its neighbor records the LSA in the
link-state database. This process is repeated for all routers in the OSPF network.
When the databases are complete, each router uses the SPF algorithm to calculate a
loop-free logical topology to every known network. The shortest path with the lowest cost is
used in building this topology, and therefore the best route is selected.
Routing information is now maintained. When there is a change in a link state, routers
use a flooding process to notify other routers on the network about the change. The Hello
protocol dead interval provides a simple mechanism for determining that an adjacent neighbor
is down.
Interactive Media Activity
Drag and Drop: OSPF State Flowchart
When the student has completed this activity, the student will be able to identify the
different OSPF neighbor states.
2.3 Single Area OSPF Configuration
2.3.1 Configuring OSPF routing process
OSPF routing uses the concept of areas. Each router contains a complete database of
link-states in a specific area. An area in an OSPF network may be assigned any number
from 0 to 65,535. However, a single-area network is assigned the number 0 and is known as area 0. In
multi-area OSPF networks, all areas are required to connect to area 0. Area 0 is also called the
backbone area.
OSPF configuration requires that the routing process be enabled on the router and that
network addresses and area information be specified. Network addresses are configured with a wildcard
mask and not a subnet mask. The wildcard mask represents the links or host addresses that
can be present in this segment. Area IDs can be written as a whole number or dotted decimal
notation.
To enable OSPF routing, use the global configuration command syntax:
Router(config)#router ospf process-id
The process ID is a number that is used to identify an OSPF routing process on the
router. Multiple OSPF processes can be started on the same router. The number can be any
value between 1 and 65,535. Most network administrators keep the same process ID
throughout an autonomous system, but this is not a requirement. It is rarely necessary to run
more than one OSPF process on a router. IP networks are advertised as follows in OSPF:
Router(config-router)#network address wildcard-mask area area-id
Each network must be identified with the area to which it belongs. The network address
can be a whole network, a subnet, or the address of the interface. The wildcard mask
represents the set of host addresses that the segment supports. This is different from a subnet
mask, which is used when configuring IP addresses on interfaces.
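Because the wildcard mask is the bitwise inverse of the corresponding subnet mask, each octet can be derived by subtracting the subnet mask octet from 255. A small sketch, for illustration only:

```python
def wildcard_mask(subnet_mask):
    """Derive an OSPF wildcard mask as the bitwise inverse of a subnet mask."""
    return ".".join(str(255 - int(octet)) for octet in subnet_mask.split("."))

print(wildcard_mask("255.255.255.0"))    # 0.0.0.255 - a /24 segment
print(wildcard_mask("255.255.255.252"))  # 0.0.0.3   - a /30 serial link
```

For example, advertising the subnet 192.168.1.0/24 would use the wildcard mask 0.0.0.255 in the network statement.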
Lab Activity
Lab Exercise: Configuring the OSPF Routing Process
In this lab, the student will set up an IP addressing scheme for OSPF area 0 and configure
and verify OSPF routing.
Lab Activity
e-Lab Activity: Configuring OSPF
In this lab, the students will configure and verify OSPF routing.
2.3.2 Configuring OSPF loopback address and router priority
When the OSPF process starts, the Cisco IOS uses the highest local active IP address as
its OSPF router ID. If there is no active interface, the OSPF process will not start. If the active
interface goes down, the OSPF process has no router ID and therefore ceases to function until
the interface comes up again.
To ensure OSPF stability there should be an active interface for the OSPF process at all
times. A loopback interface, which is a logical interface, can be configured for this purpose.
When a loopback interface is configured, OSPF uses this address as the router ID, regardless
of the value. On a router that has more than one loopback interface, OSPF takes the highest
loopback IP address as its router ID.
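The selection rule described above, where the highest loopback address wins and the highest active interface address is used only as a fallback, can be sketched as follows. The addresses are hypothetical:

```python
def ospf_router_id(physical_ips, loopback_ips):
    """Pick the OSPF router ID: the highest loopback IP wins; otherwise the
    highest active physical interface IP; None if no interface is up."""
    def ip_value(ip):
        a, b, c, d = (int(octet) for octet in ip.split("."))
        return (a << 24) | (b << 16) | (c << 8) | d
    candidates = loopback_ips if loopback_ips else physical_ips
    return max(candidates, key=ip_value) if candidates else None

# A loopback takes precedence even over a numerically higher physical address
print(ospf_router_id(["203.0.113.9"], ["10.1.1.1"]))       # 10.1.1.1
print(ospf_router_id(["192.168.1.1", "203.0.113.9"], []))  # 203.0.113.9
```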
To create and assign an IP address to a loopback interface use the following commands:
Router(config)#interface loopback number
Router(config-if)#ip address ip-address subnet-mask
It is considered good practice to use loopback interfaces for all routers running OSPF.
This loopback interface should be configured with an address using a 32-bit subnet mask of
255.255.255.255. A 32-bit subnet mask is called a host mask because the subnet mask
specifies a network of one host. When OSPF is requested to advertise a loopback network,
OSPF always advertises the loopback as a host route with a 32-bit mask.
In broadcast multi-access networks there may be more than two routers. OSPF elects a
designated router (DR) to be the focal point of all link-state updates and link-state
advertisements. Because the DR role is critical, a backup designated router (BDR) is elected
to take over if the DR fails.
If the network type of an interface is broadcast, the default OSPF priority is 1. When
OSPF priorities are the same, the OSPF election for DR is decided on the router ID. The
highest router ID is selected.
The election result can be influenced by ensuring that the ballots, the hello packets,
carry a priority for that router interface. The interface reporting the highest priority for a
router will ensure that it becomes the DR.
The priorities can be set to any value from 0 to 255. A value of 0 prevents that router
from being elected. A router with the highest OSPF priority will be selected as the DR. A
router with the second highest priority will be the BDR. After the election process, the DR
and BDR retain their roles even if routers are added to the network with higher OSPF priority
values.
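The initial election can be modeled as a sort on (priority, router ID). Note that this sketch deliberately ignores the non-preemptive behavior described above, in which an established DR and BDR keep their roles; it models only the first election on the segment. The priorities and router IDs are hypothetical:

```python
def elect_dr_bdr(routers):
    """routers: list of (priority, router_id) tuples, router_id as a tuple of
    four octets. Priority 0 makes a router ineligible. The highest priority
    wins; the higher router ID breaks ties."""
    eligible = sorted((r for r in routers if r[0] > 0), reverse=True)
    dr = eligible[0] if eligible else None
    bdr = eligible[1] if len(eligible) > 1 else None
    return dr, bdr

routers = [
    (1, (10, 0, 0, 3)),
    (1, (10, 0, 0, 9)),    # wins a priority tie on router ID
    (100, (10, 0, 0, 1)),  # highest priority: becomes DR
    (0, (10, 0, 0, 200)),  # priority 0: never elected
]
dr, bdr = elect_dr_bdr(routers)
print("DR:", dr, "BDR:", bdr)
```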
Modify the OSPF priority by entering the ip ospf priority interface configuration
command on an interface that is participating in OSPF. The show ip ospf interface
command will display the interface priority value as well as other key information.
Router(config-if)#ip ospf priority number
Router#show ip ospf interface type number
Lab Activity
Lab Exercise: Configuring OSPF with Loopback Addresses
In this lab, the student will configure routers with a Class C IP addressing scheme.
Lab Activity
e-Lab Activity: Configuring OSPF with Loopback Addresses
In this lab, the student will observe the election process for designated routers, DR, and
BDR.
2.3.3 Modifying OSPF cost metric
OSPF uses cost as the metric for determining the best route. Cost is calculated using the
formula 10^8/bandwidth, where bandwidth is expressed in bps. The Cisco IOS automatically
determines cost based on the bandwidth of the interface. It is essential for proper OSPF
operation that the correct interface bandwidth is set.
Router(config)#interface serial 0/0
Router(config-if)#bandwidth 64
The default bandwidth for Cisco serial interfaces is 1.544 Mbps, or 1544 kbps.
Cost can be changed to influence the outcome of the OSPF cost calculation. A common
situation requiring a cost change is in a multi-vendor routing environment. A cost change
would ensure that one vendor’s cost value would match another vendor’s cost value. Another
situation is when Gigabit Ethernet is being used. The default cost assigns the lowest cost
value of 1 to a 100 Mbps link. In a 100-Mbps and Gigabit Ethernet situation, the default cost
values could cause routing to take a less desirable path unless they are adjusted. The cost
number can be between 1 and 65,535.
Use the following interface configuration command to set the link cost:
Router(config-if)#ip ospf cost number
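The default cost formula can be checked with a short calculation. The helper below is a hypothetical sketch, not IOS code; it shows why Fast Ethernet and Gigabit Ethernet both end up with cost 1 by default, which is the situation that motivates setting the cost manually.

```python
# A quick check of the default OSPF cost formula described above:
# cost = 10**8 / bandwidth in bps, floored, with a minimum value of 1.
def ospf_cost(bandwidth_bps):
    return max(1, 10**8 // bandwidth_bps)

print(ospf_cost(64_000))         # 64-kbps serial link -> 1562
print(ospf_cost(1_544_000))      # default T1 serial -> 64
print(ospf_cost(100_000_000))    # 100-Mbps Fast Ethernet -> 1
print(ospf_cost(1_000_000_000))  # Gigabit Ethernet also -> 1 (the floor)
```

Because 100-Mbps and faster links all floor to the minimum cost of 1, OSPF cannot distinguish them without a manual ip ospf cost setting.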
Lab Activity
Lab Exercise: Modifying OSPF Cost Metric
In this lab, an Open Shortest Path First (OSPF) area is set up.
Lab Activity
e-Lab Activity: Modifying OSPF Cost Metric
In this lab, the student will modify the OSPF cost metric.
2.3.4 Configuring OSPF authentication
By default, a router trusts that routing information is coming from a router that should
be sending the information. A router also trusts that the information has not been tampered
with along the route.
To guarantee this trust, routers in a specific area can be configured to authenticate each
other.
Each OSPF interface can present an authentication key for use by routers sending OSPF
information to other routers on the segment. The authentication key, known as a password, is
a shared secret between the routers. This key is used to generate the authentication data in the
OSPF packet header.
The password can be up to eight characters. Use the following
command syntax to configure OSPF authentication:
Router(config-if)#ip ospf authentication-key password
After the password is configured, authentication must be enabled:
Router(config-router)#area area-number authentication
With simple authentication, the password is sent as plain text. This means that it can be
easily decoded if a packet sniffer captures an OSPF packet.
It is recommended that authentication information be encrypted. To send encrypted
authentication information and to ensure greater security, the message-digest keyword is used.
The MD5 keyword specifies the type of message-digest hashing algorithm to use, and the
encryption type field refers to the type of encryption, where 0 means none and 7 means
proprietary.
Use the interface configuration command mode syntax:
Router(config-if)#ip ospf message-digest-key key-id md5 encryption-type key
The key-id is an identifier and takes the value in the range of 1 through 255. The key is
an alphanumeric password up to sixteen characters. Neighbor routers must use the same key
identifier with the same key value.
The following is configured in router configuration mode:
Router(config-router)#area area-id authentication message-digest
MD5 authentication creates a message digest. A message digest is scrambled data that is
based on the password and the packet contents. The receiving router uses the shared password
and the packet to re-calculate the digest. If the digests match, the router believes that the
source and contents of the packet have not been tampered with. The authentication type
identifies which authentication, if any, is being used. In the case of message-digest
authentication, the authentication data field contains the key-id and the length of the message
digest that is appended to the packet. The message digest is like a watermark that cannot be
counterfeited.
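The shared-secret idea above can be illustrated with a toy digest function. This is a sketch only: real OSPF MD5 authentication (RFC 2328, Appendix D) has a precise packet layout that this example does not reproduce, and the packet bytes below are invented.

```python
import hashlib

# Illustrative sketch of the shared-secret mechanism described above: the
# digest depends on both the packet contents and the key, so only a router
# holding the same key can produce or verify a matching digest.
def ospf_md5_digest(packet: bytes, key: str) -> bytes:
    padded_key = key.encode().ljust(16, b"\x00")   # keys are at most 16 chars
    return hashlib.md5(packet + padded_key).digest()

packet = b"OSPF update: route 10.0.0.0/8"          # stand-in packet contents
sent_digest = ospf_md5_digest(packet, "s3cret")

# The receiver recomputes the digest with the shared key and compares.
print(ospf_md5_digest(packet, "s3cret") == sent_digest)   # True: accepted
print(ospf_md5_digest(packet, "wrong") == sent_digest)    # False: rejected
```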
Lab Activity
Lab Exercise: Configuring OSPF Authentication
In this lab, an IP addressing scheme is set up for an Open Shortest Path First (OSPF) area.
Lab Activity
e-Lab Activity: Configuring OSPF Authentication
In this lab, the student will set up an IP addressing scheme for an OSPF area, configure and
verify OSPF routing, and introduce OSPF authentication into the area.
2.3.5 Configuring OSPF timers
OSPF routers must have the same hello intervals and the same dead intervals to
exchange information. By default, the dead interval is four times the value of the hello
interval. This means that a router has four chances to send a hello packet before being
declared dead.
On broadcast OSPF networks, the default hello interval is 10 seconds and the default
dead interval is 40 seconds. On nonbroadcast networks, the default hello interval is 30
seconds and the default dead interval is 120 seconds. These default values result in efficient
OSPF operation and seldom need to be modified.
A network administrator may choose other timer values, but should have a clear justification,
such as improved OSPF network performance, before changing them. These timers must be
configured to match those of any neighboring router.
To configure the hello and dead intervals on an interface, use the following commands:
Router(config-if)#ip ospf hello-interval seconds
Router(config-if)#ip ospf dead-interval seconds
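The adjacency rule above can be sketched as a simple check (hypothetical helper, not IOS code), using the stated defaults of 10/40 seconds for broadcast networks and 30/120 seconds for nonbroadcast networks:

```python
# OSPF neighbors must agree on both the hello and dead intervals, and the
# dead interval defaults to 4 * hello.
def timers_compatible(hello_a, dead_a, hello_b, dead_b):
    return hello_a == hello_b and dead_a == dead_b

print(timers_compatible(10, 40, 10, 40))    # True: matching broadcast defaults
print(timers_compatible(10, 40, 30, 120))   # False: adjacency will not form
print(40 == 4 * 10, 120 == 4 * 30)          # dead = 4 * hello on both types
```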
Lab Activity
Lab Exercise: Configuring OSPF Timers
In this lab, OSPF timers are adjusted.
Lab Activity
e-Lab Activity: Configuring OSPF Timers
In this lab, the student will adjust OSPF timers to maximize efficiency of the network.
2.3.6 OSPF, propagating a default route
OSPF routing ensures loop-free paths to every network in the domain. To reach
networks outside the domain, either OSPF must know about the network or OSPF must have
a default route. To have an entry for every network in the world would require enormous
resources for each router.
A practical alternative is to add a default route to the OSPF router connected to the
outside network. This route can be redistributed to each router in the AS through normal
OSPF updates.
A configured default route is used by a router to generate a gateway of last resort. The
static default route configuration syntax uses the network 0.0.0.0 address and a subnet mask
0.0.0.0:
Router(config)#ip route 0.0.0.0 0.0.0.0 [interface | next-hop address]
This is referred to as the quad-zero route, and it matches any destination address. The
matching network is determined by ANDing the packet destination with the subnet mask, and
ANDing any destination with the mask 0.0.0.0 always yields the network 0.0.0.0.
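The ANDing rule can be demonstrated with a short sketch (hypothetical helper, not IOS code):

```python
# ANDing any destination with the 0.0.0.0 mask yields 0.0.0.0, so the
# quad-zero route matches every packet.
def matches(destination: str, network: str, mask: str) -> bool:
    to_int = lambda addr: int.from_bytes(
        bytes(int(octet) for octet in addr.split(".")), "big")
    return to_int(destination) & to_int(mask) == to_int(network)

print(matches("192.168.1.77", "0.0.0.0", "0.0.0.0"))          # True
print(matches("10.5.5.5", "0.0.0.0", "0.0.0.0"))              # True
print(matches("10.5.5.5", "192.168.1.0", "255.255.255.0"))    # False
```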
The following configuration statement will propagate this route to all the routers in a
normal OSPF area:
Router(config-router)#default-information originate
All routers in the OSPF area will learn a default route provided that the interface of the
border router to the default gateway is active.
Lab Activity
Lab Exercise: Propagating Default Routes in an OSPF Domain
In this lab, an IP addressing scheme is set up for an OSPF area.
Lab Activity
e-Lab Activity: Propagate Default Route Information in an OSPF Domain
In this lab, the student will configure the OSPF network so that all hosts in the OSPF
area can connect to outside networks.
2.3.7 Common OSPF configuration issues
An OSPF router must establish a neighbor or adjacency relationship with another OSPF
router to exchange routing information. Failure to establish a neighbor relationship is caused
by any of the following reasons:
- Hellos are not sent from both neighbors.
- Hello and dead interval timers are not the same.
- Interfaces are on different network types.
- Authentication passwords or keys are different.
In OSPF routing it is also important to ensure the following:
- All interfaces have the correct addresses and subnet mask.
- network area statements have the correct wildcard masks.
- network area statements put interfaces into the correct area.
2.3.8 Verifying the OSPF configuration
To verify the OSPF configuration, a number of show commands are available. Figure
lists these commands, and Figure shows commands useful for troubleshooting OSPF.
Summary
An understanding of the following key points should have been achieved:
- The features of link-state routing
- How link-state routing information is maintained
- The link-state routing algorithm
- The advantages and disadvantages of link-state routing
- Link-state routing compared with distance vector routing
- OSPF terminology
- The differences between distance vector and link-state routing protocols
- OSPF network types
- The operation of the shortest path first (SPF) algorithm
- The OSPF Hello protocol
- The basic steps in the operation of OSPF
- Enabling OSPF on a router
- Configuring a loopback address to set router priority
- Changing OSPF route preference by modifying the cost metric
- Configuring OSPF authentication
- Changing OSPF timers
- Creating and propagating a default route
- Using show commands to verify OSPF operation
Chapter 3
EIGRP
Overview
Enhanced Interior Gateway Routing Protocol (EIGRP) is a Cisco-proprietary routing
protocol based on Interior Gateway Routing Protocol (IGRP).
Unlike IGRP, which is a classful routing protocol, EIGRP supports classless interdomain
routing (CIDR), allowing network designers to maximize address space by using CIDR and
variable-length subnet mask (VLSM). Compared to IGRP, EIGRP boasts faster convergence
times, improved scalability, and superior handling of routing loops.
Furthermore, EIGRP can replace Novell Routing Information Protocol (RIP) and
AppleTalk Routing Table Maintenance Protocol (RTMP), serving both IPX and AppleTalk
networks with powerful efficiency.
EIGRP is often described as a hybrid routing protocol, offering the best of distance
vector and link-state algorithms.
EIGRP is an advanced routing protocol that relies on features commonly associated with
link-state protocols. Some of the best features of OSPF, such as partial updates and neighbor
discovery, are similarly put to use by EIGRP. However, EIGRP is easier to configure than
OSPF.
EIGRP is an ideal choice for large, multi-protocol networks built primarily on Cisco
routers.
This module covers common EIGRP configuration tasks. Particular attention is paid to
the ways in which EIGRP establishes relationships with adjacent routers, calculates primary
and backup routes, and when necessary, responds to failures in known routes to a particular
destination.
A network is made up of many devices, protocols, and media that allow data
communication to happen. When one piece of the network does not work properly, one or two
users may be unable to communicate, or the entire network may fail. In either case, the
network administrator must quickly identify and troubleshoot problems when they arise.
Network problems commonly result from the following:
- Mistyped commands
- Incorrectly constructed or incorrectly placed access lists
- Misconfigured routers, switches, or other network devices
- Bad physical connections
A network administrator should approach troubleshooting in a methodical manner, using
a general problem-solving model. It is often useful to check for physical layer problems first
and then move up the layers in an organized manner. Although this module will focus on
troubleshooting the operation of routing protocols, which work at Layer 3, it is important to
eliminate any problems that may exist at lower layers.
Students completing this module should be able to:
- Describe the differences between EIGRP and IGRP
- Describe the key concepts, technologies, and data structures of EIGRP
- Understand EIGRP convergence and the basic operation of the Diffusing Update Algorithm (DUAL)
- Perform a basic EIGRP configuration
- Configure EIGRP route summarization
- Describe the processes used by EIGRP to build and maintain routing tables
- Verify EIGRP operations
- Describe the eight-step process for general troubleshooting
- Apply a logical process to routing troubleshooting
- Troubleshoot a RIP routing process using show and debug commands
- Troubleshoot an IGRP routing process using show and debug commands
- Troubleshoot an EIGRP routing process using show and debug commands
- Troubleshoot an OSPF routing process using show and debug commands
3.1 EIGRP Concepts
3.1.1 Comparing EIGRP with IGRP
Cisco released EIGRP in 1994 as a scalable, improved version of its proprietary distance
vector routing protocol, IGRP. The same distance vector technology found in IGRP is used in
EIGRP, and the underlying distance information remains the same.
EIGRP improves the convergence properties and the operating efficiency significantly
over IGRP. This allows for an improved architecture while retaining the existing investment
in IGRP.
Comparisons between EIGRP and IGRP fall into the following major categories:
- Compatibility mode
- Metric calculation
- Hop count
- Automatic protocol redistribution
- Route tagging
IGRP and EIGRP are compatible with each other. This compatibility provides seamless
interoperability with IGRP routers. This is important so users can take advantage of the
benefits of both protocols. EIGRP offers multiprotocol support, but IGRP does not.
EIGRP and IGRP use different metric calculations. EIGRP scales the metric of IGRP by
a factor of 256. That is because EIGRP uses a metric that is 32 bits long, and IGRP uses a
24-bit metric. By multiplying or dividing by 256, EIGRP can easily exchange information
with IGRP.
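The scaling relationship above is simple enough to verify directly. The sketch below uses 8576, a metric value of the kind that appears in EIGRP examples later in this chapter:

```python
# An EIGRP metric is the equivalent IGRP metric multiplied by 256
# (EIGRP metrics are 32 bits wide, IGRP metrics 24 bits).
def igrp_to_eigrp(metric):
    return metric * 256

def eigrp_to_igrp(metric):
    return metric // 256

print(igrp_to_eigrp(8576))      # -> 2195456
print(eigrp_to_igrp(2195456))   # -> 8576, recovered exactly
```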
IGRP has a maximum hop count of 255. EIGRP has a maximum hop count limit of 224.
This is more than adequate to support the largest, properly designed internetworks.
Enabling dissimilar routing protocols such as OSPF and RIP to share information
requires advanced configuration. Redistribution, the sharing of routes, is automatic between
IGRP and EIGRP as long as both processes use the same autonomous system (AS) number. In
Figure , RTB automatically redistributes EIGRP-learned routes to the IGRP AS, and vice
versa.
EIGRP will tag routes learned from IGRP or any outside source as external because they
did not originate from EIGRP routers. IGRP cannot differentiate between internal and external
routes.
Notice that in the show ip route command output for the routers in Figure , EIGRP
routes are flagged with D, and external routes are denoted by EX. RTA identifies the
difference between the network learned via EIGRP (172.16.0.0) and the network that was
redistributed from IGRP (192.168.1.0). In the RTC table, the IGRP protocol makes no such
distinction. RTC, which is running IGRP only, just sees IGRP routes, despite the fact that both
10.1.1.0 and 172.16.0.0 were redistributed from EIGRP.
Interactive Media Activity
Checkbox: IGRP and EIGRP Comparison
When the student has completed this activity, the student will be able to identify the
difference between IGRP and EIGRP.
3.1.2 EIGRP concepts and terminology
EIGRP routers keep route and topology information readily available in RAM, so they
can react quickly to changes. Like OSPF, EIGRP saves this information in several tables and
databases.
EIGRP saves routes that are learned in specific ways. Routes are given a particular
status and can be tagged to provide additional useful information.
EIGRP maintains three tables:
- Neighbor table
- Topology table
- Routing table
The neighbor table is the most important table in EIGRP. Each EIGRP router maintains
a neighbor table that lists adjacent routers. This table is comparable to the adjacency database
used by OSPF. There is a neighbor table for each protocol that EIGRP supports.
When a new neighbor is discovered, its address and interface are recorded in the
neighbor data structure. When a neighbor sends a
hello packet, it advertises a hold time. The hold time is the amount of time a router treats a
neighbor as reachable and operational. In other words, if a hello packet is not heard within the
hold time, then the hold time expires. When the hold time expires, the Diffusing Update
Algorithm (DUAL), which is the EIGRP distance vector algorithm, is informed of the
topology change and must recalculate the new topology.
The topology table is built from the routes advertised by all EIGRP neighbors in the autonomous system.
DUAL takes the information supplied in the neighbor table and the topology table and
calculates the lowest cost routes to each destination. By tracking this information, EIGRP
routers can identify and switch to alternate routes quickly. The information that the router
learns from the DUAL is used to determine the successor route, which is the term used to
identify the primary or best route. A copy is also placed in the topology table.
Every EIGRP router maintains a topology table for each configured network protocol.
All learned routes to a destination are maintained in the topology table.
The topology table includes the following fields:
- Feasible distance (FD is 2195456) – The feasible distance (FD) is the lowest
calculated metric to each destination. For example, the feasible distance to
32.0.0.0 is 2195456, as indicated by "FD is 2195456".
- Route source (via 200.10.10.10) – The source of the route is the identification
number of the router that originally advertised that route. This field is populated
only for routes learned externally from the EIGRP network. Route tagging can be
particularly useful with policy-based routing. For example, the route source to
32.0.0.0 is 200.10.10.10.
- Reported distance (FD/RD) – The reported distance (RD) of the path is the
distance reported by an adjacent neighbor to a specific destination. It appears as
the second value in the (FD/RD) pair shown after the route source.
- Interface information – The interface through which the destination is reachable.
- Route status – Routes are identified as being either passive (P), which means that
the route is stable and ready for use, or active (A), which means that the route is in
the process of being recomputed by DUAL.
The EIGRP routing table holds the best routes to a destination. This information is
retrieved from the topology table. Each EIGRP router maintains a routing table for each
network protocol.
A successor is a route selected as the primary route to use to reach a destination.
DUAL identifies this route from the information contained in the neighbor and topology
tables and places it in the routing table. There can be up to four successor routes for any
particular route. These can be of equal or unequal cost and are identified as the best loop-free
paths to a given destination. A copy of the successor routes is also placed in the topology
table.
A feasible successor (FS) is a backup route. These routes are identified at the same
time the successors are identified, but they are only kept in the topology table. Multiple
feasible successors for a destination can be retained in the topology table although it is not
mandatory.
A router views its feasible successors as downstream neighbors, closer to the
destination than it is. Feasible successor cost is computed from the advertised cost of the
neighbor router to the destination. If a successor route goes down, the router will look for an
identified feasible successor. This route will be promoted to successor status. To qualify, a
feasible successor must have an advertised (reported) distance that is lower than the feasible
distance of the existing successor route.
If a feasible successor is not identified from the existing information, the router places an
Active status on a route and sends out query packets to all neighbors in order to recompute the
current topology. The router can identify any new successor or feasible successor routes from
the new data that is received from the reply packets that answer the query requests. The router
will then place a Passive status on the route.
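The selection rules above can be sketched in Python. The routes below are hypothetical, invented for illustration; the point is the feasibility condition: a neighbor's path is kept as a feasible successor only if its reported distance is lower than the feasible distance of the current successor.

```python
# A sketch of the DUAL successor / feasible-successor rules described above.
def successor_and_fs(paths):
    """paths: list of (neighbor, reported_distance, total_cost) tuples."""
    successor = min(paths, key=lambda p: p[2])         # lowest total cost
    fd = successor[2]                                  # feasible distance
    candidates = [p for p in paths if p is not successor and p[1] < fd]
    fs = min(candidates, key=lambda p: p[2]) if candidates else None
    return successor, fs

# Neighbor B offers the lowest total cost; neighbor D's RD (1) is below the
# FD (2), so D is kept in the topology table as the feasible successor.
print(successor_and_fs([("B", 1, 2), ("D", 1, 3)]))
# Here D's RD (3) is not below the FD (2), so there is no FS; losing B would
# force the route into the Active state and trigger queries.
print(successor_and_fs([("B", 1, 2), ("D", 3, 4)]))
```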
The topology table can record additional information about each route. EIGRP classifies
routes as either internal or external. EIGRP adds a route tag to each route to identify this
classification. Internal routes originate from within the EIGRP autonomous system (AS).
External routes originate outside the EIGRP AS. Routes learned or redistributed from
other routing protocols, such as Routing Information Protocol (RIP), OSPF, and IGRP, are
external. Static routes originating outside the EIGRP AS are external. The tag can be
configured as a number between 0 and 255 to customize the tag.
3.1.3 EIGRP design features
EIGRP operates quite differently from IGRP. EIGRP is an advanced distance vector
routing protocol and acts as a link-state protocol when updating neighbors and maintaining
routing information. The advantages of EIGRP over simple distance vector protocols include
the following:
- Rapid convergence
- Efficient use of bandwidth
- Support for variable-length subnet mask (VLSM) and classless interdomain
routing (CIDR). Unlike IGRP, EIGRP offers full support for classless IP by
exchanging subnet masks in routing updates.
- Multiple network-layer support
- Independence from routed protocols. Protocol-dependent modules (PDMs) protect
EIGRP from lengthy revision. Evolving routed protocols, such as IP, may require a
new protocol module but not necessarily a reworking of EIGRP itself.
EIGRP routers converge quickly because they rely on DUAL. DUAL guarantees
loop-free operation at every instant throughout a route computation, allowing all routers
involved in a topology change to synchronize at the same time.
EIGRP makes efficient use of bandwidth by sending partial, bounded updates and by
consuming minimal bandwidth when the network is stable. EIGRP routers make partial,
incremental updates rather than sending their complete tables. This is similar to OSPF
operation, but unlike OSPF routers, EIGRP routers send these partial updates only to the
routers that need the information, not to all routers in an area. For this reason, they are called
bounded updates. Instead of using timed routing updates, EIGRP routers keep in touch with
each other using small hello packets. Though exchanged regularly, hello packets do not use up
a significant amount of bandwidth.
EIGRP supports IP, IPX, and AppleTalk through protocol-dependent modules (PDMs).
EIGRP can redistribute IPX RIP and SAP information to improve overall performance. In
effect, EIGRP can take over for these two protocols. An EIGRP router will receive routing
and service updates, updating other routers only when changes in the SAP or routing tables
occur. Routing updates occur as they would in any EIGRP network, using partial updates.
EIGRP can also take over for the AppleTalk Routing Table Maintenance Protocol
(RTMP). As a distance vector routing protocol, RTMP relies on periodic and complete
exchanges of routing information. To reduce overhead, EIGRP redistributes AppleTalk
routing information using event-driven updates. EIGRP also uses a configurable composite
metric to determine the best route to an AppleTalk network. RTMP uses hop count, which can
result in suboptimal routing. AppleTalk clients expect RTMP information from local routers,
so EIGRP for AppleTalk should be run only on a clientless network, such as a wide-area
network (WAN) link.
3.1.4 EIGRP technologies
EIGRP includes many new technologies, each of which represents an improvement in
operating efficiency, speed of convergence, or functionality relative to IGRP and other routing
protocols. These technologies fall into one of the following four categories:
- Neighbor discovery and recovery
- Reliable Transport Protocol
- DUAL finite-state machine algorithm
- Protocol-dependent modules
Simple distance vector routers do not establish any relationship with their neighbors.
RIP and IGRP routers merely broadcast or multicast updates on configured interfaces. In
contrast, EIGRP routers actively establish relationships with their neighbors, much the same
way that OSPF routers do.
EIGRP routers establish adjacencies as described in Figure . EIGRP routers establish
adjacencies with neighbor routers by using small hello packets. Hellos are sent by default
every five seconds. An EIGRP router assumes that as long as it is receiving hello packets from
known neighbors, those neighbors and their routes remain viable or passive. By forming
adjacencies, EIGRP routers do the following:
- Dynamically learn of new routes that join their network
- Identify routers that become either unreachable or inoperable
- Rediscover routers that had previously been unreachable
Reliable Transport Protocol (RTP) is a transport-layer protocol that can guarantee
ordered delivery of EIGRP packets to all neighbors. On an IP network, hosts use TCP to
sequence packets and ensure their timely delivery. However, EIGRP is protocol-independent.
This means it does not rely on TCP/IP to exchange routing information the way that RIP,
IGRP, and OSPF do. To stay independent of IP, EIGRP uses RTP as its own proprietary
transport-layer protocol to guarantee delivery of routing information.
EIGRP can call on RTP to provide reliable or unreliable service as the situation warrants.
For example, hello packets do not require the overhead of reliable delivery because they are
frequent and should be kept small. Nevertheless, the reliable delivery of other routing
information can actually speed convergence, because EIGRP routers are not waiting for a
timer to expire before they retransmit.
With RTP, EIGRP can multicast and unicast to different peers simultaneously, which
allows for maximum efficiency.
The centerpiece of EIGRP is the Diffusing Update Algorithm (DUAL), which is the
EIGRP route-calculation engine. The full name of this technology is DUAL finite-state
machine (FSM). An FSM is an algorithm machine, not a mechanical device with moving parts.
FSMs define a set of possible states that something can go through, what events cause those
states, and what events result from those states. Designers use FSMs to describe how a device,
computer program, or routing algorithm will react to a set of input events. The DUAL FSM
contains all the logic used to calculate and compare routes in an EIGRP network.
DUAL tracks all the routes advertised by neighbors. Composite metrics of each route are
used to compare them.
DUAL also guarantees that each path is loop free. DUAL inserts
lowest cost paths into the routing table. These primary routes are known as successor routes.
A copy of the successor routes is also placed in the topology table.
EIGRP keeps important route and topology information readily available in a neighbor
table and a topology table. These tables supply DUAL with comprehensive route information
in case of network disruption. DUAL selects alternate routes quickly by using the information
in these tables. If a link goes down, DUAL looks for an alternative route path, or feasible
successor, in the topology table.
One of the best features of EIGRP is its modular design. Modular, layered designs prove
to be the most scalable and adaptable. Support for routed protocols, such as IP, IPX, and
AppleTalk, is included in EIGRP through PDMs. In theory, EIGRP can easily adapt to new or
revised routed protocols, such as IPv6, by adding protocol-dependent modules.
Each PDM is responsible for all functions related to its specific routed protocol. The
IP-EIGRP module is responsible for the following:
- Sending and receiving EIGRP packets that bear IP data
- Notifying DUAL of new IP routing information that is received
- Maintaining the results of DUAL routing decisions in the IP routing table
- Redistributing routing information that was learned by other IP-capable routing
protocols
3.1.5 EIGRP data structure
Like OSPF, EIGRP relies on different types of packets to maintain its various tables and
establish complex relationships with neighbor routers.
The five EIGRP packet types are:
- Hello
- Acknowledgment
- Update
- Query
- Reply
EIGRP relies on hello packets to discover, verify, and rediscover neighbor routers.
Rediscovery occurs if EIGRP routers do not receive hellos from each other for a hold time
interval but then re-establish communication.
EIGRP routers send hellos at a fixed but configurable interval, called the hello interval.
The default hello interval depends on the bandwidth of the interface. On IP networks,
EIGRP routers send hellos to the multicast IP address 224.0.0.10.
An EIGRP router stores information about neighbors in the neighbor table. The neighbor
table includes the Sequence Number (Seq No) field to record the number of the last received
EIGRP packet that each neighbor sent. The neighbor table also includes a Hold Time field
which records the time the last packet was received. Packets should be received within the
Hold Time interval period to maintain a Passive state. The Passive state is a reachable and
operational status.
If a neighbor is not heard from for the duration of the hold time, EIGRP considers that
neighbor down, and DUAL must step in to re-evaluate the routing table. By default, the hold
time is three times the hello interval, but an administrator can configure both timers as
desired.
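The liveness rule above can be sketched as follows. The helper is hypothetical, not IOS code, and uses the 5-second default hello interval that applies on most interface types:

```python
# Hold time defaults to three times the hello interval; a neighbor stays
# Passive only while hellos keep arriving within the hold time.
HELLO_INTERVAL = 5                    # seconds
HOLD_TIME = 3 * HELLO_INTERVAL        # 15 seconds by default

def neighbor_state(seconds_since_last_hello, hold_time=HOLD_TIME):
    return "Passive" if seconds_since_last_hello < hold_time else "Down"

print(neighbor_state(4))    # Passive: a hello was heard recently
print(neighbor_state(16))   # Down: hold time expired, DUAL must recompute
```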
OSPF requires neighbor routers to have the same hello and dead intervals to
communicate. EIGRP has no such restriction. Neighbor routers learn each other's timers
through the exchange of hello packets and use that information to forge a stable relationship
despite unlike timers.
Hello packets are always sent unreliably. This means that no acknowledgment is
transmitted.
An EIGRP router uses acknowledgment packets to indicate receipt of any EIGRP packet
during a reliable exchange. Reliable Transport Protocol (RTP) can provide reliable
communication between EIGRP hosts. To be reliable, a sender's message must be
acknowledged by the recipient. Acknowledgment packets, which are hello packets without
data, are used for this purpose. Unlike multicast hellos, acknowledgment packets are unicast.
Acknowledgments can be made by attaching them to other kinds of EIGRP packets, such as
reply packets.
Update packets are used when a router discovers a new neighbor. An EIGRP router
sends unicast update packets to that new neighbor so that it can add to its topology table.
More than one update packet may be needed to convey all the topology information to the
newly discovered neighbor.
Update packets are also used when a router detects a topology change. In this case, the
EIGRP router sends a multicast update packet to all neighbors, which alerts them to the
change. All update packets are sent reliably.
An EIGRP router uses query packets whenever it needs specific information from one or
all of its neighbors. A reply packet is used to respond to a query.
If an EIGRP router loses its successor and cannot find a feasible successor for a route,
DUAL places the route in the Active state. A query is then multicasted to all neighbors in an
attempt to locate a successor to the destination network. Neighbors must send replies that
either provide information on successors or indicate that no information is available. Queries
can be multicast or unicast, while replies are always unicast. Both packet types are sent
reliably.
3.1.6 EIGRP algorithm
The sophisticated DUAL algorithm results in the exceptionally fast convergence of
EIGRP. To better understand convergence with DUAL, consider the example in Figure . Each
router has constructed a topology table that contains information about how to route to
destination Network A.
Each topology table identifies the following:
- The routing protocol (EIGRP)
- The lowest cost of the route, which is called feasible distance (FD)
- The cost of the route as advertised by the neighboring router, which is called
reported distance (RD)
The Topology heading identifies the preferred primary route, called the successor route
(Successor), and, where identified, the backup route, called the feasible successor (FS). Note
that it is not necessary to have an identified feasible successor.
The EIGRP network will follow a sequence of actions to bring about convergence
between the routers, which currently have the following topology information:
Router C has one successor route by way of Router B.
Router C has one feasible successor route by way of Router D.
Router D has one successor route by way of Router B.
Router D has no feasible successor route.
Router E has one successor route by way of Router D.
Router E has no feasible successor.
The feasible successor route selection rules are specified in Figure .
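These selection rules hinge on the feasibility condition: a route can serve as a feasible successor only if its Reported Distance is less than the current Feasible Distance. The following Python sketch is an illustrative model, not Cisco's implementation; the sample numbers come from the Router E walkthrough below.

```python
# Sketch of the DUAL feasibility condition (illustrative model).
def is_feasible_successor(reported_distance: int, feasible_distance: int) -> bool:
    """A neighbor's route qualifies as a feasible successor only if its
    Reported Distance (RD) is strictly less than the current Feasible
    Distance (FD); this guarantees the alternate path is loop-free."""
    return reported_distance < feasible_distance

# Router E before the failure: FD 3 via its successor (Router D); the
# alternative via Router C advertises RD 3, so no feasible successor exists.
print(is_feasible_successor(3, 3))   # False
# Had Router C advertised RD 2, the route would have qualified:
print(is_feasible_successor(2, 3))   # True
```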
The following example demonstrates how each router in the topology will carry out the
feasible successor selection rules when the route from Router D to Router B goes down:
In Router D:
Route by way of Router B is removed from the topology table.
This is the successor route. Router D has no feasible successor identified.
Router D must complete a new route computation.
In Router C:
Route to Network A by way of Router D is down.
Route by way of Router D is removed from the table.
This is the feasible successor route for Router C.
In Router D:
Router D has no feasible successor. It cannot switch to an identified alternative backup
route.
Router D must recompute the topology of the network. The path to destination Network
A is set to Active.
Router D sends a query packet to all connected neighbors, Router C and Router E,
requesting topology information.
Cisco Academy – CCNA 3.0 Semester 3
Router C does have a previous entry for Router D.
Router D does not have a previous entry for Router E.
In Router E:
Route to Network A through Router D is down.
The route by way of Router D is removed from the table.
This is the successor route for Router E.
Router E does not have a feasible successor identified.
Note that the Reported Distance cost of routing by way of Router C is 3, the same cost
as the successor route by way of Router D.
In Router C:
Router E sends a query packet to Router C.
Router C removes Router E from the table.
Router C replies to Router D with a new route to Network A.
In Router D:
Route status to destination Network A is still marked as Active. The computation has not
yet been completed.
Router C has replied to Router D to confirm that a route to destination Network A is
available with a cost of 5.
Router D is still waiting for a reply from Router E.
In Router E:
Router E has no feasible successor to reach destination Network A.
Router E, therefore, tags the status of the route to destination network as Active.
Router E will have to recompute the network topology.
Router E removes the route by way of Router D from the table.
Router E sends a query to Router C, requesting topology information.
Router E already has an entry by way of Router C. It is at a cost of 3, the same as the
successor route.
In Router E:
Router C replies with an RD of 3.
Router E can now set the route by way of Router C as the new successor with an FD of
4 and an RD of 3.
Router E replaces the “Active” status of the route to destination Network A with a
“Passive” status. Note that a route has a “Passive” status by default, as long as hello
packets are being received. In this example, only “Active” routes are flagged.
In Router E:
Router E sends a reply to Router D with its topology information.
In Router D:
Router D receives the reply packet from Router E, containing Router E's topology
information.
Router D enters this data for the route to destination Network A by way of Router E.
This route becomes an additional successor route as the cost is the same as routing by
way of Router C and the RD is less than the FD cost of 5.
Convergence has occurred among all EIGRP routers using the DUAL algorithm.
3.2 EIGRP Configuration
3.2.1 Configuring EIGRP
Despite the complexity of DUAL, configuring EIGRP can be relatively simple. EIGRP
configuration commands vary depending on the protocol that is to be routed. Some examples
of these protocols are IP, IPX, and AppleTalk. This section covers EIGRP configuration for
the IP protocol.
Perform the following steps to configure EIGRP for IP:
1. Use the following command to enable EIGRP and define the autonomous system:
router(config)#router eigrp autonomous-system-number
The autonomous system number is used to identify all routers that belong within
the internetwork. This value must match on all routers within the internetwork.
2. Indicate which networks belong to the EIGRP autonomous system on the local
router by using the following command:
router(config-router)#network network-number
The network-number argument determines which interfaces of the router
participate in EIGRP and which networks are advertised by the router.
The network command configures only connected networks. For example,
network 3.1.0.0, which is on the far left of the main Figure, is not directly
connected to Router A. Consequently, that network is not part of the configuration
of Router A.
3. When configuring serial links using EIGRP, it is important to configure the
bandwidth setting on the interface. If the bandwidth for these interfaces is not
changed, EIGRP assumes the default bandwidth on the link instead of the true
bandwidth. If the link is slower, the router may not be able to converge, routing
updates might be lost, or suboptimal path selection may result. To set the
interface bandwidth, use the following syntax:
router(config-if)#bandwidth kilobits
The bandwidth command is only used by the routing process and should be set to
match the line speed of the interface.
4. Cisco also recommends adding the following command to all EIGRP
configurations:
router(config-router)#eigrp log-neighbor-changes
This command enables the logging of neighbor adjacency changes to monitor the
stability of the routing system and to help detect problems.
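The impact of the configured bandwidth can be seen in EIGRP's default composite metric. The sketch below assumes the commonly published simplified formula with default K values (K1 = K3 = 1, K2 = K4 = K5 = 0); real IOS arithmetic differs slightly in rounding.

```python
# Simplified default EIGRP composite metric (assumed formula, default K values):
# metric = 256 * (10^7 / min_bandwidth_kbps + total_delay_usec / 10)
def eigrp_metric(min_bandwidth_kbps: int, total_delay_usec: int) -> int:
    scaled_bw = 10_000_000 // min_bandwidth_kbps   # inverse-bandwidth term
    scaled_delay = total_delay_usec // 10          # delay in tens of microseconds
    return 256 * (scaled_bw + scaled_delay)

# A T1 serial link (1544 kbps, default delay 20 000 microseconds):
print(eigrp_metric(1544, 20_000))      # 2169856
# Fast Ethernet (100 000 kbps, default delay 100 microseconds):
print(eigrp_metric(100_000, 100))      # 28160
```

A mis-set bandwidth value feeds directly into this calculation, which is why suboptimal path selection can result.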
Lab Activity
Lab Exercise: Configuring EIGRP Routing
In this lab, the student will set up an IP addressing scheme for the network.
Lab Activity
e-Lab Activity: Configuring EIGRP
In this lab, the student will configure EIGRP routing.
3.2.2 Configuring EIGRP summarization
EIGRP automatically summarizes routes at the classful boundary. This is the boundary
where the network address ends, as defined by class-based addressing. This means that even
though RTC is connected only to the subnet 2.1.1.0, it will advertise that it is connected to the
entire Class A network, 2.0.0.0. In most cases, auto-summarization is beneficial because it
keeps routing tables as compact as possible.
However, automatic summarization may not be the preferred option in certain instances.
For example, if there are discontiguous subnetworks, auto-summarization must be disabled for
routing to work properly. To turn off auto-summarization, use the following command:
router(config-router)#no auto-summary
With EIGRP, a summary address can be manually configured by configuring a prefix
network. Manual summary routes are configured on a per-interface basis, so the interface that
will propagate the route summary must be selected first. Then the summary address can be
defined with the ip summary-address eigrp command:
router(config-if)#ip summary-address eigrp autonomous-system-number ip-address mask administrative-distance
EIGRP summary routes have an administrative distance of 5 by default. Optionally, they
can be configured for a value between 1 and 255.
In Figure , RTC can be configured using the commands shown:
RTC(config)#router eigrp 2446
RTC(config-router)#no auto-summary
RTC(config-router)#exit
RTC(config)#interface serial 0/0
RTC(config-if)#ip summary-address eigrp 2446 2.1.0.0 255.255.0.0
Therefore, RTC will add a route to its table as follows:
D 2.1.0.0/16 is a summary, 00:00:22, Null0
Notice that the summary route is sourced from Null0 and not from an actual interface.
This is because this route is used for advertisement purposes and does not represent a path
that RTC can take to reach that network. On RTC, this route has an administrative distance of
5.
RTD is not aware of the summarization but accepts the route. The route is assigned the
administrative distance of a normal EIGRP route, which is 90 by default.
In the configuration for RTC, auto-summarization is turned off with the no
auto-summary command. If auto-summarization was not turned off, RTD would receive two
routes, the manual summary address, which is 2.1.0.0 /16, and the automatic, classful
summary address, which is 2.0.0.0 /8.
In most cases when manually summarizing, the no auto-summary command should be
issued.
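As a quick illustration of what each summary covers, Python's standard ipaddress module can verify that RTC's connected subnet falls inside both the manual /16 and the classful /8; the subnet values mirror the example above.

```python
# Illustrative containment check for the summarization example (stdlib only).
import ipaddress

manual = ipaddress.ip_network("2.1.0.0/16")      # manual summary on RTC
classful = ipaddress.ip_network("2.0.0.0/8")     # automatic classful summary
connected = ipaddress.ip_network("2.1.1.0/24")   # RTC's connected subnet

print(connected.subnet_of(manual))     # True
print(connected.subnet_of(classful))   # True
# The classful summary covers 256 times as much address space:
print(manual.num_addresses, classful.num_addresses)   # 65536 16777216
```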
3.2.3 Verifying basic EIGRP
EIGRP operation is verified using various show commands. The Figure
lists the key EIGRP show commands and briefly describes their functions.
The Cisco IOS debug feature also provides useful EIGRP monitoring commands.
Lab Activity
Lab Exercise: Verifying Basic EIGRP Configuration
In this lab, the student will set up an IP addressing scheme for the network and verify the
EIGRP configuration.
Lab Activity
e-Lab Activity: Verifying Basic EIGRP
In this lab, the student will configure and verify EIGRP Routing.
3.2.4 Building neighbor tables
Simple distance vector routers do not establish any relationship with their neighbors.
RIP and IGRP routers merely broadcast or multicast updates on configured interfaces. In
contrast, EIGRP routers actively establish relationships with their neighbors as do OSPF
routers.
The neighbor table is the most important table in EIGRP. Each EIGRP router maintains
a neighbor table that lists adjacent routers. This table is comparable to the adjacency database
used by OSPF. There is a neighbor table for each protocol that EIGRP supports.
EIGRP routers establish adjacencies with neighbor routers by using small hello packets.
Hellos are sent by default every five seconds.
An EIGRP router assumes that, as long as it is
receiving hello packets from known neighbors, those neighbors and their routes remain viable
or passive. By forming adjacencies, EIGRP routers do the following:
• Dynamically learn of new routes that join their network
• Identify routers that become either unreachable or inoperable
• Rediscover routers that had previously been unreachable
The following fields are found in a neighbor table:
• Neighbor address – The network layer address of the neighbor router.
• Hold time – The interval to wait without receiving anything from a neighbor before considering the link unavailable. Originally, the expected packet was a hello packet, but in current Cisco IOS software releases, any EIGRP packet received after the first hello will reset the timer.
• Smooth Round-Trip Timer (SRTT) – The average time that it takes to send and receive packets from a neighbor. This timer is used to determine the retransmission timeout (RTO).
• Queue count (Q Cnt) – The number of packets waiting in a queue to be sent. If this value is constantly higher than zero, there may be a congestion problem at the router. A zero means that there are no EIGRP packets in the queue.
• Sequence Number (Seq No) – The number of the last packet received from that neighbor. EIGRP uses this field to acknowledge a neighbor's transmission and to identify packets that are out of sequence. The neighbor table is used to support reliable, sequenced delivery of packets and can be regarded as analogous to the TCP protocol used in the reliable delivery of IP packets.
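The fields above can be pictured as a single table entry. The following is a hypothetical Python model, not IOS internals; it illustrates the hold-timer behavior in which any EIGRP packet from the neighbor resets the timer.

```python
# Hypothetical model of one EIGRP neighbor-table entry (illustrative only).
import time
from dataclasses import dataclass, field

@dataclass
class NeighborEntry:
    address: str          # network layer address of the neighbor
    hold_time: float      # seconds to wait without hearing any EIGRP packet
    srtt_ms: float        # smooth round-trip time, in milliseconds
    q_cnt: int            # packets queued for transmission to this neighbor
    seq_no: int           # sequence number of the last packet received
    last_heard: float = field(default_factory=time.monotonic)

    def packet_received(self, seq_no: int) -> None:
        """Any EIGRP packet from the neighbor resets the hold timer."""
        self.seq_no = seq_no
        self.last_heard = time.monotonic()

    def is_alive(self) -> bool:
        """Neighbor is considered viable while the hold timer has not expired."""
        return time.monotonic() - self.last_heard < self.hold_time

n = NeighborEntry("10.1.1.2", hold_time=15.0, srtt_ms=28.0, q_cnt=0, seq_no=10)
print(n.is_alive())   # True (entry was just created)
```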
Interactive Media Activity
Crossword Puzzle: EIGRP Concepts and Terminology
When the student has completed this activity, the student will understand the different
EIGRP concepts and terminology.
3.2.5 Discover routes
EIGRP routers keep route and topology information readily available in RAM so that they
can react quickly to changes. Like OSPF, EIGRP keeps this information in several tables or
databases.
The EIGRP distance vector algorithm, DUAL, uses the information gathered in the
neighbor and topology tables and calculates the lowest cost route to the destination. The
primary route is called the successor route. When calculated, DUAL places the successor
route in the routing table and a copy in the topology table.
DUAL also attempts to calculate a backup route in case the successor route fails. This is
called the feasible successor route. When calculated, DUAL places the feasible route in the
topology table. This route can be called upon if the successor route to a destination becomes
unreachable or unreliable.
3.2.6 Select routes
If a link goes down, DUAL looks for an alternative route path, or feasible successor, in
the topology table.
If a feasible successor is not found, the route is flagged as Active, or
unusable at present. Query packets are sent to neighboring routers requesting topology
information. DUAL uses this information to recalculate successor and feasible successor
routes to the destination.
Once DUAL has completed these calculations, the successor route is placed in the
routing table. Then both the successor route and feasible successor route are placed in the
topology table. The route to the final destination will now pass from an Active status to a
Passive status. This means that the route is now operational and reliable.
The sophisticated algorithm of DUAL results in EIGRP having exceptionally fast
convergence. To better understand convergence using DUAL, consider the example in Figure .
All routers have built a topology table that contains information about how to route to
destination network Z.
Each table identifies the following:
• The routing protocol (EIGRP)
• The lowest cost of the route, or Feasible Distance (FD)
• The cost of the route as advertised by the neighboring router, or Reported Distance (RD)
The Topology heading identifies the preferred primary route, which is called the
successor route (Successor). If it is identified, the Topology heading will also identify the
backup route, which is called the feasible successor (FS). Note that it is not necessary to have
an identified feasible successor.
3.2.7 Maintaining routing tables
DUAL tracks all routes advertised by neighbors using the composite metric of each
route to compare them. DUAL also guarantees that each path is loop-free.
Lowest-cost paths are then inserted by the DUAL algorithm into the routing table. These
primary routes are known as successor routes. A copy of the successor paths is placed in the
topology table.
EIGRP keeps important route and topology information readily available in a neighbor
table and a topology table. These tables supply DUAL with comprehensive route information
in case of network disruption. DUAL selects alternate routes quickly by using the information
in these tables.
If a link goes down, DUAL looks for an alternative route path, or feasible successor, in
the topology table. If a feasible successor is not found, the route is flagged as active, or
unusable at present. Query packets are sent to neighboring routers requesting topology
information. DUAL uses this information to recalculate successor and feasible successor
routes to the destination.
Once DUAL has completed these calculations, the successor route is placed in the
routing table. Then both the successor route and feasible successor route are placed in the
topology table. The route to the final destination will now pass from an active status to a
passive status. This means that the route is now operational and reliable.
EIGRP routers establish and maintain adjacencies with neighbor routers by using small
hello packets. Hellos are sent by default every five seconds. An EIGRP router assumes that, as
long as it is receiving hello packets from known neighbors, those neighbors and their routes
remain viable, or passive.
When new neighbors are discovered, the address and interface of each neighbor
are recorded. This information is stored in the neighbor data structure. When a neighbor sends a
hello packet, it advertises a hold time. The hold time is the amount of time a router treats a
neighbor as reachable and operational. In other words, if a hello packet is not heard from the
neighbor within the hold time, the hold time expires. When the hold time expires, DUAL is
informed of the topology change and must recalculate the new topology.
In the example in the Figures, DUAL must reconstruct the topology following the
discovery of a broken link between Router D and Router B.
The new successor routes will be placed in the updated routing table.
3.3 Troubleshooting Routing Protocols
3.3.1 Routing protocol troubleshooting process
All routing protocol troubleshooting should begin with a logical sequence, or process
flow. This process flow is not a rigid outline for troubleshooting an internetwork. However, it
is a foundation from which a network administrator can build a problem-solving process to
suit a particular environment.
When analyzing a network failure, work through the following steps:
1. Make a clear problem statement.
2. Gather the facts needed to help isolate possible causes.
3. Consider possible problems based on the facts that have been gathered.
4. Create an action plan based on the remaining potential problems.
5. Implement the action plan, performing each step carefully while testing to see whether the symptom disappears.
6. Analyze the results to determine whether the problem has been resolved. If it has, then the process is complete.
7. If the problem has not been resolved, create an action plan based on the next most likely problem in the list. Return to Step 4, change one variable at a time, and repeat the process until the problem is solved.
8. Once the actual cause of the problem is identified, try to solve it.
Cisco routers provide numerous integrated commands to assist in monitoring and
troubleshooting an internetwork:
• show commands help monitor installation behavior and normal network behavior, as well as isolate problem areas
• debug commands assist in the isolation of protocol and configuration problems
• TCP/IP network tools such as ping, traceroute, and telnet
Cisco IOS show commands are among the most important tools for understanding the
status of a router, detecting neighboring routers, monitoring the network in general, and
isolating problems in the network.
EXEC debug commands can provide a wealth of information about interface traffic,
internal error messages, protocol-specific diagnostic packets, and other useful troubleshooting
data. Use debug commands to isolate problems, not to monitor normal network operation.
Only use debug commands to look for specific types of traffic or problems. Before using the
debug command, narrow the problems to a likely subset of causes. Use the show debugging
command to view which debugging features are enabled.
3.3.2 Troubleshooting RIP configuration
The most common problem in Routing Information Protocol (RIP) that prevents
RIP routes from being advertised is a variable-length subnet mask (VLSM), because
RIP Version 1 does not support VLSM. If RIP routes are not being advertised, check whether:
• Layer 1 or Layer 2 connectivity issues exist.
• VLSM subnetting is configured. VLSM subnetting cannot be used with RIP v1.
• Mismatched RIP v1 and RIP v2 routing configurations exist.
• Network statements are missing or incorrectly assigned.
• The outgoing interface is down.
• The advertised network interface is down.
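The underlying limitation is that a RIPv1 update carries no subnet mask, so the receiver must infer a classful mask from the address itself. The helper below is an illustrative sketch (it ignores reserved ranges such as 127.0.0.0/8 and Classes D and E).

```python
# Sketch of classful mask inference, the behavior that makes RIPv1
# incompatible with VLSM (illustrative helper, not a RIP implementation).
def classful_prefix_len(ip: str) -> int:
    first_octet = int(ip.split(".")[0])
    if first_octet < 128:
        return 8    # Class A -> 255.0.0.0
    if first_octet < 192:
        return 16   # Class B -> 255.255.0.0
    return 24       # Class C -> 255.255.255.0

# Two differently sized subnets of 172.16.0.0 collapse to the same
# classful /16, so the VLSM distinction is lost in a RIPv1 update.
print(classful_prefix_len("172.16.1.0"))    # 16
print(classful_prefix_len("172.16.2.64"))   # 16
print(classful_prefix_len("10.1.1.1"))      # 8
print(classful_prefix_len("192.168.3.1"))   # 24
```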
The show ip protocols command provides information about the parameters and current
state of the active routing protocol process. RIP sends updates to the interfaces in the
specified networks.
If interface FastEthernet 0/1 were configured but its network were not
added to the RIP routing process, no updates would be sent out of or received on that interface.
Use the debug ip rip EXEC command to display information on RIP routing
transactions. The no debug ip rip, no debug all, or undebug all commands will turn off all
debugging.
The Figure shows that the router being debugged has received an update from another
router at source address 192.168.3.1. That router sent information about two destinations in
the routing table update. The router being debugged also sent updates. Both routers used the
broadcast address 255.255.255.255 as the destination. The number in parentheses is the
source address encapsulated into the IP header.
An entry most likely caused by a malformed packet from the transmitter is shown in the
following output:
RIP: bad version 128 from 160.89.80.43
3.3.3 Troubleshooting IGRP configuration
Interior Gateway Routing Protocol (IGRP) is an advanced distance vector routing
protocol developed by Cisco in the mid-1980s. IGRP has several features that differentiate
it from other distance vector routing protocols, such as RIP.
Use the router igrp autonomous-system command to enable the IGRP routing process:
R1(config)#router igrp 100
Use the router configuration network network-number command to enable interfaces to
participate in the IGRP update process:
R1(config-router)#network 172.30.0.0
R1(config-router)#network 192.168.3.0
Verify the IGRP configuration with the show running-config and show ip
protocols commands:
R1#show ip protocols
Verify IGRP operation with the show ip route command:
R1#show ip route
If IGRP does not appear to be working correctly, check the following:
• Layer 1 or Layer 2 connectivity issues exist.
• Autonomous system numbers on IGRP routers are mismatched.
• Network statements are missing or incorrectly assigned.
• The outgoing interface is down.
• The advertised network interface is down.
To view IGRP debugging information, use the following commands:
• debug ip igrp transactions [host ip address] to view IGRP transaction information
• debug ip igrp events [host ip address] to view routing update information
To turn off debugging, use the no debug ip igrp command.
If a network becomes inaccessible, routers running IGRP send triggered updates to
neighbors to inform them. A neighbor router will then respond with poison reverse updates
and keep the suspect network in a holddown state for 280 seconds.
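The 280-second holddown follows from the commonly documented relationship among IGRP's default timers, shown here as a quick check; the relationship is the published rule of thumb, not vendor source code.

```python
# IGRP default timers in seconds (assumed published relationships):
# invalid = 3 * update, holddown = 3 * update + 10, flush = 7 * update.
UPDATE = 90                   # IGRP sends periodic updates every 90 s
INVALID = 3 * UPDATE          # route declared invalid after 270 s
HOLDDOWN = 3 * UPDATE + 10    # suspect network held down for 280 s
FLUSH = 7 * UPDATE            # route removed from the table after 630 s

print(INVALID, HOLDDOWN, FLUSH)   # 270 280 630
```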
3.3.4 Troubleshooting EIGRP configuration
Normal EIGRP operation is stable, efficient in bandwidth utilization, and relatively
simple to monitor and troubleshoot.
Use the router eigrp autonomous-system command to enable the EIGRP routing
process:
R1(config)#router eigrp 100
To exchange routing updates, each router in the EIGRP network must be configured
with the same autonomous system number.
Use the router configuration network network-number command to enable interfaces to
participate in the EIGRP update process:
R1(config-router)#network 172.30.0.0
R1(config-router)#network 192.168.3.0
Verify the EIGRP configuration with the show running-config and show ip
protocols commands:
R1#show ip protocols
Some possible reasons why EIGRP may not be working correctly are:
• Layer 1 or Layer 2 connectivity issues exist.
• Autonomous system numbers on EIGRP routers are mismatched.
• The link may be congested or down.
• The outgoing interface is down.
• The advertised network interface is down.
• Auto-summarization is enabled on routers with discontiguous subnets. Use no auto-summary to disable automatic network summarization.
One of the most common reasons for a missing neighbor is a failure on the actual link.
Another possible cause of missing neighbors is an expired hold timer. Since hellos are
sent every 5 seconds on most networks, the hold-time value in the show ip eigrp neighbors
command output should normally be a value between 10 and 15.
To effectively monitor and troubleshoot an EIGRP network, use the commands
described in the Figures.
3.3.5 Troubleshooting OSPF configuration
Open Shortest Path First (OSPF) is a link-state protocol. A link is an interface on a router.
The state of the link is a description of that interface and of its relationship to its neighboring
routers. For example, a description of the interface would include the IP address, the mask,
the type of network to which it is connected, the routers connected to that network, and so on.
This information forms a link-state database.
The majority of problems encountered with OSPF relate to the formation of adjacencies
and the synchronization of the link-state databases. The show ip ospf neighbor command is
useful for troubleshooting adjacency formation. OSPF configuration commands are shown in
Figure .
Use the debug ip ospf events privileged EXEC command to display the following
information about OSPF-related events:
• Adjacencies
• Flooding information
• Designated router selection
• Shortest path first (SPF) calculation
If a router configured for OSPF routing is not seeing an OSPF neighbor on an attached
network, perform the following tasks:
• Verify that both routers have been configured with the same IP mask, OSPF hello interval, and OSPF dead interval.
• Verify that both neighbors are part of the same area.
To display information about each Open Shortest Path First (OSPF) packet received, use
the debug ip ospf packet privileged EXEC command. The no form of this command disables
debugging output.
The debug ip ospf packet command produces one set of information for each packet
received. The output varies slightly, depending on which authentication is used.
Summary
An understanding of the following key points should have been achieved:
• Differences between EIGRP and IGRP
• Key concepts, technologies, and data structures of EIGRP
• EIGRP convergence and the basic operation of the Diffusing Update Algorithm, or DUAL
• Basic EIGRP configuration
• Configuring EIGRP route summarization
• The processes used by EIGRP to build and maintain routing tables
• Verifying EIGRP operations
• The eight-step process for general troubleshooting
• Applying a logical process to routing troubleshooting
• Troubleshooting a RIP routing process using show and debug commands
• Troubleshooting an IGRP routing process using show and debug commands
• Troubleshooting an EIGRP routing process using show and debug commands
• Troubleshooting an OSPF routing process using show and debug commands
Chapter 4
Switching Concepts
Overview
Local-area network (LAN) design has developed and changed over time. Network
designers until very recently used hubs and bridges to build networks. Now switches and
routers are the key components in LAN design, and the capabilities and performance of these
devices are continually improving.
This module returns to some of the roots of modern Ethernet LANs with a discussion of
the evolution of Ethernet/802.3, the most commonly deployed LAN architecture. A look at the
historical context of LAN development and various networking devices that can be utilized at
Layer 1, Layer 2, and Layer 3 of the OSI model will help provide a solid understanding of the
reasons why network devices have evolved as they have.
Until recently, most Ethernet networks were built using repeaters. When the
performance of these networks began to suffer because too many devices shared the same
segment, network engineers added bridges to create multiple collision domains. As networks
grew in size and complexity, the bridge evolved into the modern switch, allowing
microsegmentation of the network. Today’s networks typically are built using switches and
routers, often with the routing and switching function in the same device.
Many modern switches are capable of performing varied and complex tasks in the
network. This module will provide an introduction to network segmentation and will describe
the basics of switch operation.
Switches and bridges perform much of the heavy work in a LAN, making nearly
instantaneous decisions when frames are received. This module describes in detail how
frames are transmitted by switches, how frames are filtered, and how switches learn the
physical addresses of all network nodes. As an introduction to the use of bridges and switches
in LAN design, the principles of LAN segmentation and collision domains are also covered.
Switches are Layer 2 devices that are used to increase available bandwidth and reduce
network congestion. A switch can segment a LAN into microsegments, which are segments
with only a single host. Microsegmentation creates multiple collision-free domains from one
larger domain. As a Layer 2 device, the LAN switch increases the number of collision
domains, but all hosts connected to the switch are still part of the same broadcast domain.
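The segmentation rules in this paragraph can be summarized with a small, hypothetical helper; one host per port is assumed, and the function name is illustrative.

```python
# Back-of-the-envelope sketch of collision vs. broadcast domain counting
# for a single device (hypothetical helper, one host per port).
def domains(device: str, ports: int) -> tuple[int, int]:
    """Return (collision_domains, broadcast_domains) created by one device."""
    if device == "hub":
        return (1, 1)          # all ports share one collision domain
    if device == "switch":
        return (ports, 1)      # one collision domain per port, one broadcast domain
    if device == "router":
        return (ports, ports)  # routers also separate broadcast domains
    raise ValueError(f"unknown device type: {device}")

print(domains("hub", 24))      # (1, 1)
print(domains("switch", 24))   # (24, 1)
print(domains("router", 4))    # (4, 4)
```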
Students completing this module should be able to:
• Describe the history and function of shared, half-duplex Ethernet
• Define collision as it relates to Ethernet networks
• Define microsegmentation
• Define CSMA/CD
• Describe some of the key elements affecting network performance
• Describe the function of repeaters
• Define network latency
• Define transmission time
• Describe the basic function of Fast Ethernet
• Define network segmentation using routers, switches, and bridges
• Describe the basic operations of a switch
• Define Ethernet switch latency
• Explain the differences between Layer 2 and Layer 3 switching
• Define symmetric and asymmetric switching
• Define memory buffering
• Compare and contrast store-and-forward and cut-through switching
• Understand the differences between hubs, bridges, and switches
• Describe the main functions of switches
• List the major switch frame transmission modes
• Describe the process by which switches learn addresses
• Identify and define forwarding modes
• Define LAN segmentation
• Define microsegmentation using switching
• Describe the frame-filtering process
• Compare and contrast collision and broadcast domains
• Identify the cables needed to connect switches to workstations
• Identify the cables needed to connect switches to switches
4.1 Introduction to Ethernet/802.3 LANs
4.1.1 Ethernet/802.3 LAN development
The earliest LAN technologies commonly used either thick Ethernet or thin Ethernet
infrastructures.
It is important to understand some of the limitations of these infrastructures
in order to see where LAN switching stands today.
Adding hubs or concentrators into the network offered an improvement on thick and thin
Ethernet technology. A hub is a Layer 1 device and is sometimes referred to as an Ethernet
concentrator or a multi-port repeater. Introducing hubs into the network allowed greater
access to the network for more users. Active hubs also allowed for the extension of networks
to greater distances. A hub does this by regenerating the data signal. A hub does not make any
decisions when receiving data signals. It simply regenerates and amplifies the data signals that
it receives to all connected devices.
Ethernet is fundamentally a shared technology where all users on a given LAN segment
compete for the same available bandwidth. This situation is analogous to a number of cars all
trying to access a one-lane road at the same time. Because the road has only one lane, only
one car can access it at a time. The introduction of hubs into a network resulted in more users
competing for the same bandwidth.
Collisions are a by-product of Ethernet networks. If two or more devices try to transmit
at the same time a collision occurs. This situation is analogous to two cars merging into a
single lane and the resulting collision. Traffic is backed up until the collision can be cleared.
When the number of collisions in a network is excessive, sluggish network response times
result. This indicates that the network has become too congested or too many users are trying
to access the network at the same time.
Layer 2 devices are more intelligent than Layer 1 devices. Layer 2 devices make
forwarding decisions based on Media Access Control (MAC) addresses contained within the
headers of transmitted data frames.
A bridge is a Layer 2 device used to divide, or segment, a network. A bridge is capable
of collecting and selectively passing data frames between two network segments. Bridges do
this by learning the MAC address of all devices on each connected segment. Using this
information, the bridge builds a bridging table and forwards or blocks traffic based on that
table. This results in smaller collision domains and greater network efficiency. Bridges do
not restrict broadcast traffic. However, they do provide greater traffic control within a
network.
A switch is also a Layer 2 device and may be referred to as a multi-port bridge. A switch
has the intelligence to make forwarding decisions based on MAC addresses contained within
transmitted data frames. The switch learns the MAC addresses of devices connected to each
port and this information is entered into a switching table.
Switches create a virtual circuit between two connected devices that want to
communicate. When the virtual circuit is created, a dedicated communication path is
established between the two devices. The implementation of a switch on the network provides
microsegmentation. In theory this creates a collision free environment between the source and
destination, which allows maximum utilization of the available bandwidth. A switch is also able to facilitate multiple, simultaneous virtual circuit connections. This is analogous to a highway being divided into multiple lanes with each car having its own dedicated lane.

Cisco Academy – CCNA 3.0 Semester 3
The disadvantage of Layer 2 devices is that they forward broadcast frames to all
connected devices on the network. When the number of broadcasts in a network is excessive,
sluggish network response times result.
A router is a Layer 3 device. The router makes decisions based on groups of network
addresses, or classes, as opposed to individual Layer 2 MAC addresses. Routers use routing
tables to record the Layer 3 addresses of the networks that are directly connected to the local
interfaces and network paths learned from neighboring routers.
The purpose of a router is to do all of the following:
- Examine incoming packets of Layer 3 data
- Choose the best path for them through the network
- Switch them to the proper outgoing port
Routers are not compelled to forward broadcasts. Therefore, routers reduce the size of
both the collision domains and the broadcast domains in a network. Routers are the most
important traffic regulating devices on large networks. They enable virtually any type of
computer to communicate with any other computer anywhere in the world.
LANs typically employ a combination of Layer 1, Layer 2, and Layer 3 devices.
Implementation of these devices depends on factors that are specific to the particular needs of
the organization.
Interactive Media Activity
Drag and Drop: Devices Function at Layers
After completing this activity, students will be able to identify the different OSI layers
where networking devices function.
4.1.2 Factors that impact network performance
Today's LANs are becoming increasingly congested and overburdened. In addition to an
ever-growing population of network users, several other factors have combined to test the
limits of the capabilities of traditional LANs:
- The multitasking environment present in current desktop operating systems such as Windows, Unix/Linux, and Mac OS allows for simultaneous network transactions. This increased capability has led to an increased demand for network resources.
- The use of network-intensive applications such as the World Wide Web is increasing. Client/server applications allow administrators to centralize information, thus making it easier to maintain and protect.
- Client/server applications free local workstations from the burden of maintaining information and the cost of providing enough hard disk space to store it. Given the cost benefit of client/server applications, such applications are likely to become even more widely used in the future.
4.1.3 Elements of Ethernet/802.3 networks
The most common LAN architecture is Ethernet. Ethernet is used to transport data
between devices on a network. These devices include computers, printers, and file servers. All
nodes on a shared Ethernet medium transmit and receive data using a data frame broadcast method. The performance of a shared-medium Ethernet/802.3 LAN can be negatively affected by several factors:
- The data frame delivery of Ethernet/802.3 LANs is of a broadcast nature.
- The carrier sense multiple access/collision detect (CSMA/CD) method allows only one station to transmit at a time.
- Multimedia applications with higher bandwidth demand, such as video and the Internet, coupled with the broadcast nature of Ethernet, can create network congestion.
- Normal latency occurs as the frames travel across the Layer 1 medium and through Layer 1, Layer 2, and Layer 3 networking devices.
- The use of Layer 1 repeaters to extend the distance of an Ethernet/802.3 LAN increases latency.
Ethernet using CSMA/CD and a shared medium can support data transmission rates of
up to 100 Mbps. CSMA/CD is an access method that allows only one station to transmit at a
time. The goal of Ethernet is to provide a best-effort delivery service and allow all devices on
the shared medium to transmit on an equal basis. A certain number of collisions are expected in the design of Ethernet and CSMA/CD. As traffic grows, however, collisions can become a major problem in a CSMA/CD network.
4.1.4 Half-duplex networks
Originally Ethernet was a half-duplex technology. Using half-duplex, a host could either
transmit or receive at one time, but not both. Each Ethernet host checks the network to see
whether data is being transmitted before it transmits additional data. If the network is already
in use, the transmission is delayed. Despite transmission deferral, two or more Ethernet hosts
could transmit at the same time. This results in a collision. When a collision occurs, the host
that first detects the collision will send out a jam signal to the other hosts. Upon receiving the
jam signal, each host will stop sending data, then wait for a random period of time before
attempting to retransmit. The back-off algorithm generates this random delay. As more hosts
are added to the network and begin transmitting, collisions are more likely to occur.
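The back-off behavior described above can be sketched in a few lines of Python. This is a simplified model of the truncated binary exponential back-off used by 802.3, not production code, and the constant and function names are our own:

```python
import random

SLOT_TIME_US = 51.2  # one slot time (512 bit times) on 10 Mbps Ethernet, in microseconds

def backoff_delay_us(collisions: int) -> float:
    """Random delay before retransmission after the given number of collisions.

    The station waits a whole number of slot times chosen at random from
    0 .. 2**k - 1, where k grows with each collision but is capped at 10.
    """
    k = min(collisions, 10)
    slots = random.randrange(2 ** k)
    return slots * SLOT_TIME_US
```

After the first collision a host therefore waits either 0 or 51.2 microseconds; with each further collision the range of possible delays doubles, spreading the retransmission attempts out in time.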
Ethernet LANs become saturated because users run network intensive software, such as
client/server applications, which cause hosts to transmit more often and for longer periods of
time. The network interface card (NIC), used by LAN devices, provides several circuits so
that communication among devices can occur.
4.1.5 Network congestion
Advances in technology are producing faster and more intelligent desktop computers
and workstations. The combination of more powerful workstations and network intensive
applications has created a need for greater network capacity, or bandwidth. The requirements
have exceeded the 10 Mbps available on shared Ethernet/802.3 LANs.
Today's networks are experiencing an increase in the transmission of many forms of
media:
- Large graphics files
- Images
- Full-motion video
- Multimedia applications
There is also an increase in the number of users on a network. All these factors place an even greater strain on the 10 Mbps of available bandwidth. As more people utilize a network
to share larger files, access file servers, and connect to the Internet, network congestion occurs.
This can result in slower response times, longer file transfers, and network users becoming
less productive. To relieve network congestion, more bandwidth is needed or the available
bandwidth must be used more efficiently.
Interactive Media Activity
Drag and Drop: Bandwidth Requirements
When the student has completed this activity, the student will be able to identify the
bandwidth requirements for different multimedia applications on a network.
4.1.6 Network latency
Latency, or delay, is the time a frame or a packet takes to travel from the source station
to the final destination. It is important to quantify the total latency of the path between the
source and the destination for LANs and WANs. In the specific case of an Ethernet LAN,
understanding latency and its effect on network timing is crucial to determining whether
CSMA/CD for detecting collisions and negotiating transmissions will work properly.
Latency has at least three sources:
- First, there is the time it takes the source NIC to place voltage pulses on the wire and the time it takes the receiving NIC to interpret these pulses. This is sometimes called NIC delay, typically around 1 microsecond for a 10BASE-T NIC.
- Second, there is the actual propagation delay as the signal takes time to travel along the cable. Typically, this is about 0.556 microseconds per 100 m for Cat 5 UTP. Longer cable and slower nominal velocity of propagation (NVP) result in more propagation delay.
- Third, latency is added according to which networking devices, whether Layer 1, Layer 2, or Layer 3, are added to the path between the two communicating computers.
Latency does not depend solely on distance and number of devices. For example, if
three properly configured switches separate two workstations, the workstations may
experience less latency than if two properly configured routers separated them. This is
because routers conduct more complex and time-consuming functions. A router must analyze
Layer 3 data.
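The three latency sources can be added up in a short calculation. The NIC and cable figures below come from the text (about 1 microsecond per NIC, about 0.556 microseconds per 100 m of Cat 5 UTP); the per-device delays are illustrative placeholders, not measured values:

```python
NIC_DELAY_US = 1.0               # approximate delay of one 10BASE-T NIC
PROP_DELAY_US_PER_100M = 0.556   # approximate Cat 5 UTP propagation delay

def path_latency_us(cable_m: float, device_delays_us: list) -> float:
    """Estimate one-way latency: sending NIC + receiving NIC + cable + devices."""
    propagation = PROP_DELAY_US_PER_100M * (cable_m / 100.0)
    return 2 * NIC_DELAY_US + propagation + sum(device_delays_us)

# 100 m of cable through two hypothetical switches that each add 10 microseconds:
print(round(path_latency_us(100, [10.0, 10.0]), 3))  # 22.556
```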
4.1.7 Ethernet 10 BASE-T transmission time
All networks have what is called bit time or slot time. Many LAN technologies, such as
Ethernet, define bit time as the basic unit of time in which ONE bit can be sent. In order for
the electronic or optical devices to recognize a binary one or zero, there must be some
minimum duration during which the bit is on or off.
Transmission time equals the number of bits being sent times the bit time for a given
technology. Another way to think about transmission time is the time it takes a frame to be
transmitted. Small frames take a shorter amount of time. Large frames take a longer amount
of time.
Each 10 Mbps Ethernet bit has a 100 ns transmission window. This is the bit time. A
byte equals 8 bits. Therefore, 1 byte takes a minimum of 800 ns to transmit. A 64-byte frame,
the smallest 10BASE-T frame allowing CSMA/CD to function properly, takes 51,200 ns (51.2 microseconds). Transmission of an entire 1000-byte frame from the source station
requires 800 microseconds just to complete the frame. The time at which the frame actually
arrives at the destination station depends on the additional latency introduced by the network.
This latency can be due to a variety of delays including all of the following:
- NIC delays
- Propagation delays
- Layer 1, Layer 2, or Layer 3 device delays
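The transmission-time arithmetic in this section is easy to check with a small script (the names are ours; the 100 ns bit time is from the text):

```python
BIT_TIME_NS = 100  # one bit time on 10 Mbps Ethernet

def transmission_time_ns(frame_bytes: int) -> int:
    """Time needed to place an entire frame on the wire, excluding network latency."""
    return frame_bytes * 8 * BIT_TIME_NS

print(transmission_time_ns(1))     # 800 ns for a single byte
print(transmission_time_ns(64))    # 51200 ns (51.2 us) for the minimum frame
print(transmission_time_ns(1000))  # 800000 ns (800 us) for a 1000-byte frame
```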
Interactive Media Activity
Drag and Drop: 10BASE-T Transmission Times
After completing this activity, students will be able to identify the transmission times of
10BASE-T.
4.1.8 The benefits of using repeaters
The distance that a LAN can cover is limited due to attenuation. Attenuation means that
the signal weakens as it travels through the network. The resistance in the cable or medium
through which the signal travels causes the loss of signal strength. An Ethernet repeater is a
physical layer device on the network that boosts or regenerates the signal on an Ethernet LAN.
When a repeater is used to extend the distance of a LAN, a single network can cover a greater
distance and more users can share that same network. However, the use of repeaters and hubs
compounds problems associated with broadcasts and collisions. It also has a negative effect
on the overall performance of the shared media LAN.
Interactive Media Activity
PhotoZoom: Cisco 1503 Micro Hub
In this PhotoZoom, the student will view the Cisco 1503 Micro Hub.
4.1.9 Full-duplex transmitting
Full-duplex Ethernet allows the transmission of a packet and the reception of a different
packet at the same time. This simultaneous transmission and reception requires the use of two
pairs of wires in the cable and a switched connection between each node. This connection is
considered point-to-point and is collision free. Because both nodes can transmit and receive at
the same time, there are no negotiations for bandwidth. Full-duplex Ethernet can use an
existing cable infrastructure as long as the medium meets the minimum Ethernet standards.
To transmit and receive simultaneously, a dedicated switch port is required for each node.
Full-duplex connections can use 10BASE-T, 100BASE-TX, or 100BASE-FX media to create
point-to-point connections. The NICs on all connected devices must have full-duplex
capabilities.
The full-duplex Ethernet switch takes advantage of the two pairs of wires in the cable by
creating a direct connection between the transmit (TX) at one end of the circuit and the
receive (RX) at the other end. With the two stations connected in this manner a collision free
environment is created as the transmission and receipt of data occurs on separate
non-competitive circuits.
Ethernet usually can only use 50%-60% of the available 10 Mbps of bandwidth because
of collisions and latency. Full-duplex Ethernet offers 100% of the bandwidth in both
directions. This produces a potential 20 Mbps throughput, which results from 10 Mbps TX
and 10 Mbps RX.
Interactive Media Activity
Drag and Drop: Full Duplex Ethernet
After completing this activity, students will be able to identify the requirements for full
duplex Ethernet.
4.2 Introduction to LAN Switching
4.2.1 LAN segmentation
A network can be divided into smaller units called segments. The figure shows an example of a segmented Ethernet network. The entire network has fifteen computers. Of these fifteen
computers, six are servers and nine are workstations. Each segment uses the CSMA/CD
access method and maintains traffic between users on the segment. Each segment is its own
collision domain.
Segmentation allows network congestion to be significantly reduced within each
segment. When transmitting data within a segment, the devices within that segment share the
total available bandwidth. Data passed between segments is transmitted over the backbone of
the network using a bridge, router, or switch.
4.2.2 LAN segmentation with bridges
Bridges are Layer 2 devices that forward data frames according to the MAC address.
Bridges read the sender's MAC address of the data packets that are received on the incoming
ports to discover which devices are on each segment. The MAC addresses are then used to
build a bridging table. This will allow the bridge to block packets that do not need to be
forwarded from the local segment.
Although the operation of a bridge is transparent to other network devices, the latency
on a network is increased by ten to thirty percent when a bridge is used. This latency is a
result of the decision making process prior to the forwarding of a packet. A bridge is
considered a store-and-forward device. The bridge must examine the destination address field
and calculate the cyclic redundancy check (CRC) in the frame check sequence field before
forwarding the frame. If the destination port is busy, the bridge can temporarily store the
frame until that port is available.
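The CRC check a store-and-forward bridge performs can be sketched as follows. Real 802.3 hardware computes the frame check sequence over the full frame with specific bit ordering; this sketch simplifies the frame to a payload followed by a 4-byte CRC-32, computed with the standard-library zlib.crc32:

```python
import zlib

def append_fcs(payload: bytes) -> bytes:
    """Build a simplified frame: payload followed by a 4-byte CRC-32 check value."""
    return payload + zlib.crc32(payload).to_bytes(4, "big")

def fcs_ok(frame: bytes) -> bool:
    """Recompute the CRC over the payload and compare it with the trailing FCS."""
    payload, fcs = frame[:-4], frame[-4:]
    return zlib.crc32(payload).to_bytes(4, "big") == fcs

frame = append_fcs(b"some frame payload")
damaged = frame[:-1] + bytes([frame[-1] ^ 0xFF])  # flip bits in the last FCS byte
print(fcs_ok(frame), fcs_ok(damaged))  # True False
```

A bridge that finds the check failing simply drops the frame instead of forwarding it.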
4.2.3 LAN segmentation with routers
Routers provide segmentation of networks, adding a latency factor of 20% to 30% over
a switched network. This increased latency is because a router operates at the network layer
and uses the IP address to determine the best path to the destination node. The figure shows a Cisco router.
Bridges and switches provide segmentation within a single network or subnetwork.
Routers provide connectivity between networks and subnetworks.
Routers also do not forward broadcasts while switches and bridges must forward
broadcast frames.
Interactive Media Activity
PhotoZoom: Cisco 2621 Router
In this PhotoZoom, the student will view a Cisco 2621 router.
Interactive Media Activity
PhotoZoom: Cisco 3640 Router
In this PhotoZoom, the student will view a Cisco 3640 router.
4.2.4 LAN segmentation with switches
LAN switching decreases bandwidth shortages and network bottlenecks, such as those
between several workstations and a remote file server. The figure shows a Cisco switch. A
switch will segment a LAN into microsegments which decreases the size of collision domains.
However all hosts connected to the switch are still in the same broadcast domain.
In a pure switched Ethernet LAN, the sending and receiving nodes function as if they
are the only nodes on the network. When these two nodes establish a link, or virtual circuit,
they have access to the maximum available bandwidth. These links provide significantly more
throughput than Ethernet LANs connected by bridges or hubs. This virtual network circuit
is established within the switch and exists only when the nodes need to communicate.
4.2.5 Basic operations of a switch
Switching is a technology that decreases congestion in Ethernet, Token Ring, and Fiber
Distributed Data Interface (FDDI) LANs. Switching accomplishes this by reducing traffic and
increasing bandwidth. LAN switches are often used to replace shared hubs and are designed
to work with existing cable infrastructures.
Switching equipment performs the following two basic operations:
- Switching data frames
- Maintaining switching operations
The figures show the basic operations of a switch.
4.2.6 Ethernet switch latency
Latency is the period of time from when the beginning of a frame enters the switch until the end of the frame exits it. Latency is directly related to the configured switching process and the volume of traffic.
Latency is measured in fractions of a second. With networking devices operating at
incredibly high speeds, every additional nanosecond of latency adversely affects network
performance.
4.2.7 Layer 2 and layer 3 switching
Switching is the process of receiving an incoming frame on one interface and delivering
that frame out another interface. Routers use Layer 3 switching to route a packet. Switches
use Layer 2 switching to forward frames.
The difference between Layer 2 and Layer 3 switching is the type of information inside
the frame that is used to determine the correct output interface. Layer 2 switching is based on
MAC address information. Layer 3 switching is based on network layer addresses or IP
addresses.
Layer 2 switching looks at a destination MAC address in the frame header and forwards
the frame to the appropriate interface or port based on the MAC address in the switching table.
The switching table is contained in Content Addressable Memory (CAM). If the Layer 2 switch does not know where to send the frame, it floods the frame out all ports except the one on which it was received. When a reply is returned, the switch records the new address in the CAM.
Layer 3 switching is a function of the network layer. The Layer 3 header information is
examined and the packet is forwarded based on the IP address.
Traffic flow in a switched or flat network is inherently different from the traffic flow in a
routed or hierarchical network. Hierarchical networks offer more flexible traffic flow than flat
networks.
4.2.8 Symmetric and asymmetric switching
LAN switching may be classified as symmetric or asymmetric based on the way in
which bandwidth is allocated to the switch ports. A symmetric switch provides switched
connections between ports with the same bandwidth. An asymmetric LAN switch provides
switched connections between ports of unlike bandwidth, such as a combination of 10 Mbps
and 100 Mbps ports.
Asymmetric switching enables more bandwidth to be dedicated to the server switch port
in order to prevent a bottleneck. This allows smoother traffic flows where multiple clients are
communicating with a server at the same time. Memory buffering is required on an
asymmetric switch. The use of buffers keeps the frames contiguous between different data
rate ports.
4.2.9 Memory buffering
An Ethernet switch may use a buffering technique to store and forward frames.
Buffering may also be used when the destination port is busy. The area of memory where the
switch stores the data is called the memory buffer. This memory buffer can use two methods for forwarding frames: port-based memory buffering and shared memory buffering.
In port-based memory buffering frames are stored in queues that are linked to specific
incoming ports. A frame is transmitted to the outgoing port only when all the frames ahead of
it in the queue have been successfully transmitted. It is possible for a single frame to delay the
transmission of all the frames in memory because of a busy destination port. This delay
occurs even if the other frames could be transmitted to open destination ports.
Shared memory buffering deposits all frames into a common memory buffer which all
the ports on the switch share. The amount of buffer memory required by a port is dynamically
allocated. The frames in the buffer are linked dynamically to the transmit port. This allows the
packet to be received on one port and then transmitted on another port, without moving it to a
different queue.
The switch keeps a map of frame-to-port links showing where a packet needs to be transmitted. The map link is cleared after the frame has been successfully transmitted. The memory buffer is shared. The number of frames stored in the buffer is restricted by the size of
the entire memory buffer, and not limited to a single-port buffer. This permits larger frames to be transmitted with fewer dropped frames. This is important to asymmetric switching, where frames are being exchanged between ports of different rates.
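The practical difference between the two buffering methods is head-of-line blocking, which a toy model makes visible. The frame tuples and port numbers here are invented for illustration:

```python
from collections import deque

def drain_port_based(buffered, busy_ports):
    """Port-based buffering: a single FIFO per incoming port.  A frame whose
    destination port is busy holds up every frame queued behind it."""
    sent, queue = [], deque(buffered)
    while queue and queue[0][1] not in busy_ports:
        sent.append(queue.popleft())
    return sent

def drain_shared(buffered, busy_ports):
    """Shared buffering: frames are linked to ports dynamically, so frames
    bound for open ports can be transmitted past a blocked one."""
    return [frame for frame in buffered if frame[1] not in busy_ports]

# (frame id, destination port) pairs waiting in the buffer; port 2 is busy.
frames = [("f1", 2), ("f2", 3), ("f3", 4)]
print(drain_port_based(frames, {2}))  # [] - f1 blocks the whole queue
print(drain_shared(frames, {2}))      # [('f2', 3), ('f3', 4)]
```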
4.2.10 Two switching methods
The following two switching modes are available to forward frames:
- Store-and-forward – The entire frame is received before any forwarding takes place. The destination and source addresses are read and filters are applied before the frame is forwarded. Latency occurs while the frame is being received. Latency is greater with larger frames because the entire frame must be received before the switching process begins. The switch is able to check the entire frame for errors, which allows more error detection.
- Cut-through – The frame is forwarded through the switch before the entire frame is received. At a minimum the frame destination address must be read before the frame can be forwarded. This mode decreases the latency of the transmission, but also reduces error detection.
The following are two forms of cut-through switching:
- Fast-forward – Fast-forward switching offers the lowest level of latency. It immediately forwards a packet after reading the destination address. Because fast-forward switching starts forwarding before the entire packet is received, packets may occasionally be relayed with errors. Although this occurs infrequently, the destination network adapter discards the faulty packet upon receipt. In fast-forward mode, latency is measured from the first bit received to the first bit transmitted.
- Fragment-free – Fragment-free switching filters out collision fragments before forwarding begins. Collision fragments account for the majority of packet errors. In a properly functioning network, collision fragments must be smaller than 64 bytes. Anything 64 bytes or larger is a valid packet and is usually received without error. Fragment-free switching waits until the received portion of the packet shows that it is not a collision fragment before forwarding. In fragment-free mode, latency is also measured from the first bit received to the first bit transmitted.
The latency of each switching mode depends on how the switch forwards the frames. To
accomplish faster frame forwarding, the switch reduces the time for error checking. However,
reducing the error checking time can lead to a higher number of retransmissions.
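The latency and error-checking trade-off between the modes reduces to how many bytes must arrive before forwarding can start. The byte counts below follow the Ethernet frame layout (6-byte destination address first, 64-byte minimum frame); the function itself is a simplified model:

```python
def bytes_before_forwarding(mode: str, frame_len: int) -> int:
    """Bytes of a frame a switch must receive before forwarding begins."""
    if mode == "fast-forward":
        return 6             # only the destination MAC address
    if mode == "fragment-free":
        return 64            # enough to rule out a collision fragment
    if mode == "store-and-forward":
        return frame_len     # the whole frame, so the FCS can be verified
    raise ValueError("unknown switching mode: " + mode)

for mode in ("fast-forward", "fragment-free", "store-and-forward"):
    print(mode, bytes_before_forwarding(mode, 1518))
```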
4.3 Switch Operation
4.3.1 Functions of Ethernet switches
A switch is a network device that selects a path or circuit for sending a frame to its
destination. Both switches and bridges operate at Layer 2 of the OSI model.
Switches are sometimes called multiport bridges or switching hubs. Switches make decisions based on MAC addresses and are therefore Layer 2 devices. In contrast, hubs regenerate the Layer 1 signals out of all ports without making any decisions. Since a switch has the capacity to make path selection decisions, the LAN becomes much more efficient.
Usually, in an Ethernet network the workstations are connected directly to the switch.
Switches learn which hosts are connected to a port by reading the source MAC address in
frames. The switch opens a virtual circuit between the source and destination nodes only. This
confines communication to those two ports without affecting traffic on other ports. In contrast,
a hub forwards data out all of its ports so that all hosts see the data and must process it, even if
that data is not intended for it. High-performance LANs are usually fully switched.
- A switch concentrates connectivity, making data transmission more efficient. Frames are switched from incoming ports to outgoing ports. Each port or interface can provide the full bandwidth of the connection to the host.
- On a typical Ethernet hub, all ports connect to a common backplane or physical connection within the hub, and all devices attached to the hub share the bandwidth of the network. If two stations establish a session that uses a significant level of bandwidth, the network performance of all other stations attached to the hub is degraded.
- To reduce degradation, the switch treats each interface as an individual segment. When stations on different interfaces need to communicate, the switch forwards frames at wire speed from one interface to the other, to ensure that each session receives full bandwidth.
To efficiently switch frames between interfaces, the switch maintains an address table.
When a frame enters the switch, it associates the MAC address of the sending station with the
interface on which it was received.
The main features of Ethernet switches are:
- Isolate traffic among segments
- Achieve a greater amount of bandwidth per user by creating smaller collision domains
The first feature, isolating traffic among segments, is known as microsegmentation.
Chapter 4
Switching Concepts
73
Microsegmentation is the name given to the smaller units into which the networks are divided
by use of Ethernet switches. Each segment uses the CSMA/CD access method to maintain
data traffic flow among the users on that segment. Such segmentation allows multiple users to
send information at the same time on the different segments without slowing down the
network.
By using the segments in the network, fewer users and devices share the same bandwidth when communicating with one another. Each segment has its own collision domain. Ethernet switches filter the traffic by redirecting the datagrams to the correct port or ports based on Layer 2 MAC addresses.
The second function of an Ethernet switch is to ensure each user has more bandwidth by
creating smaller collision domains. Both Ethernet and Fast Ethernet switches allow the
segmentation of a LAN, thus creating smaller collision domains. Each segment becomes a
dedicated network link, like a highway lane functioning at up to 100 Mbps. Popular servers
can then be placed on individual 100-Mbps links. Often in networks of today, a Fast Ethernet
switch will act as the backbone of the LAN, with Ethernet hubs, Ethernet switches, or Fast
Ethernet hubs providing the desktop connections in workgroups. As demanding new
applications such as desktop multimedia or videoconferencing become more popular, certain
individual desktop computers will have dedicated 100-Mbps links to the network.
4.3.2 Frame transmission modes
There are three main frame transmission modes:
- Fast-forward – With this transmission mode, the switch reads the destination address before receiving the entire frame. The frame is then forwarded before the entire frame arrives. This mode decreases the latency of the transmission but has poor LAN switching error detection. Fast-forward is the term used to indicate a switch is in cut-through mode.
- Store-and-forward – The entire frame is received before any forwarding takes place. The destination and source addresses are read and filters are applied before the frame is forwarded. Latency occurs while the frame is being received. Latency is greater with larger frames because the entire frame must be received before the switching process begins. The switch has time available to check for errors, which allows more error detection.
- Fragment-free – This mode of switching reads the first 64 bytes of an Ethernet frame and then begins forwarding it to the appropriate port or ports. Fragment-free is a term used to indicate the switch is using modified cut-through switching.
Another transmission mode is a combination of cut-through and store-and-forward. This
hybrid mode is called adaptive cut-through. In this mode, the switch uses cut-through until it
detects a given number of errors. Once the error threshold is reached, the switch changes to
store-and-forward mode.
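Adaptive cut-through can be modeled as a mode flag driven by an error counter. The class name and the threshold value below are arbitrary choices for the example; real switches use vendor-specific thresholds:

```python
class AdaptiveCutThroughPort:
    """Toy model: run in cut-through mode until the error count reaches a
    threshold, then fall back to store-and-forward."""

    def __init__(self, error_threshold: int = 3):
        self.mode = "cut-through"
        self.errors = 0
        self.error_threshold = error_threshold

    def frame_received(self, had_error: bool) -> str:
        """Record one received frame and return the mode now in effect."""
        if had_error:
            self.errors += 1
            if self.errors >= self.error_threshold:
                self.mode = "store-and-forward"
        return self.mode

port = AdaptiveCutThroughPort(error_threshold=3)
for err in (False, True, True, True):
    current = port.frame_received(err)
print(current)  # store-and-forward
```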
Interactive Media Activity
Drag and Drop: Switching Method Trigger Points
When the student has completed this activity, the student will be able to understand the
different methods of switching.
4.3.3 How switches and bridges learn addresses
Bridges and switches forward only those frames that need to travel from one LAN segment to another. To accomplish this task, they must learn which devices are connected to which LAN segment.
A bridge is considered an intelligent device because it can make decisions based on
MAC addresses. To do this, a bridge refers to an address table. When a bridge is turned on,
broadcast messages are transmitted asking all the stations on the local segment of the network
to respond. As the stations return the broadcast message, the bridge builds a table of local
addresses. This process is called learning.
Bridges and switches learn in the following ways:
- Reading the source MAC address of each received frame or datagram
- Recording the port on which the MAC address was received
In this way, the bridge or switch learns which addresses belong to the devices connected to each port.
The learned addresses and associated port or interface are stored in the addressing table.
The bridge examines the destination address of all received frames. The bridge then scans the
address table searching for the destination address.
CAM is used in switch applications:
- To extract and process the address information from incoming data packets
- To compare the destination address with a table of addresses stored within it
The CAM stores host MAC addresses and associated port numbers. The CAM compares
the received destination MAC address against the CAM table contents. If the comparison
yields a match, the port is provided, and routing control forwards the packet to the correct port
and address.
An Ethernet switch can learn the address of each device on the network by reading the
source address of each frame transmitted and noting the port where the frame entered the
switch. The switch then adds this information to its forwarding database. Addresses are
learned dynamically. This means that as new addresses are read, they are learned and stored in
CAM. When a source address is not found in CAM, it is learned and stored for future use.
Each time an address is stored, it is time stamped. This allows for addresses to be stored
for a set period of time. Each time an address is referenced or found in CAM, it receives a
new time stamp. Addresses that are not referenced during a set period of time are removed
from the list. By removing aged or old addresses, CAM maintains an accurate and functional
forwarding database.
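The learning and aging behavior just described can be sketched in Python. This is a simplified illustration; the table layout and the 300-second aging value are assumptions for the example, not a vendor implementation.

```python
import time

AGING_TIME = 300  # seconds; an illustrative aging period, not a vendor default

class MacTable:
    """Simplified model of a bridge/switch address table with aging."""

    def __init__(self):
        self.entries = {}  # MAC address -> (port, timestamp)

    def learn(self, src_mac, port):
        # Read the source MAC of each received frame and record the
        # ingress port; the timestamp is refreshed on every frame.
        self.entries[src_mac] = (port, time.time())

    def lookup(self, dst_mac):
        # Return the port for a known address, refreshing its timestamp;
        # return None if the address is unknown or has aged out.
        self.age_out()
        if dst_mac in self.entries:
            port, _ = self.entries[dst_mac]
            self.entries[dst_mac] = (port, time.time())
            return port
        return None

    def age_out(self):
        # Remove addresses not referenced within the aging time.
        now = time.time()
        self.entries = {mac: (port, ts)
                        for mac, (port, ts) in self.entries.items()
                        if now - ts < AGING_TIME}

table = MacTable()
table.learn("00:0C:29:AA:BB:CC", port=1)
print(table.lookup("00:0C:29:AA:BB:CC"))  # 1
print(table.lookup("00:0C:29:DD:EE:FF"))  # None
```

Each lookup refreshes the entry's timestamp, which is exactly why frequently referenced addresses never age out of the table.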
Based on the result of the CAM lookup, the bridge proceeds as follows:

If the address is not found, the bridge forwards the frame out all ports except the
port on which it was received. This process is called flooding.
The address may also have been deleted by the bridge because the bridge software
was recently restarted, ran short of address entries in the address table, or deleted
the address because it was too old. Since the bridge does not know which port to
use to forward the frame, it sends the frame out all ports except the one on which
it was received. It is unnecessary to send the frame back to the segment from
which it came, since any other computers or bridges on that segment have
already received it.

If the address is found in an address table and the address is associated with the
port on which it was received, the frame is discarded. It must already have been
received by the destination.

If the address is found in an address table and the address is not associated with the
port on which it was received, the bridge forwards the frame to the port associated
with the address.
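The three lookup outcomes above (flood, filter, forward) can be expressed as a small decision function. This is an illustrative sketch, not switch firmware; the names are assumptions for the example.

```python
def forward_decision(mac_table, dst_mac, in_port, all_ports):
    """Return the ports a frame should be sent out of.

    mac_table maps learned MAC addresses to ports.
    """
    out_port = mac_table.get(dst_mac)
    if out_port is None:
        # Address not found: flood out all ports except the ingress port.
        return [p for p in all_ports if p != in_port]
    if out_port == in_port:
        # Destination is on the segment the frame arrived from: filter it.
        return []
    # Known destination on another port: forward there only.
    return [out_port]

table = {"00:0C:29:AA:BB:CC": 3}
print(forward_decision(table, "00:0C:29:AA:BB:CC", 1, [1, 2, 3, 4]))  # [3]
print(forward_decision(table, "00:0C:29:DD:EE:FF", 1, [1, 2, 3, 4]))  # [2, 3, 4]
print(forward_decision(table, "00:0C:29:AA:BB:CC", 3, [1, 2, 3, 4]))  # []
```

A real switch performs this lookup in CAM hardware rather than in software, but the decision logic is the same.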
4.3.4 How switches and bridges filter frames
Bridges are capable of filtering frames based on any Layer 2 fields. For example, a
bridge can be programmed to reject, not forward, all frames sourced from a particular network.
Because link layer information often includes a reference to an upper-layer protocol, bridges
can usually filter on this parameter. Furthermore, filters can be helpful in dealing with
unnecessary broadcast and multicast packets.
Once the bridge has built the local address table, it is ready to operate. When it receives
a frame, it examines the destination address. If the frame address is local, the bridge ignores it.
If the frame is addressed for another LAN segment, the bridge copies the frame onto the
second segment.

Ignoring a frame is called filtering.

Copying the frame is called forwarding.
Basic filtering keeps local frames local and sends remote frames to another LAN
segment.
Filtering on specific source and destination addresses performs the following actions:

Stopping one station from sending frames outside of its local LAN segment

Stopping all "outside" frames destined for a particular station, thereby restricting
the other stations with which it can communicate
Both types of filtering provide some control over internetwork traffic and can offer
improved security.
Most Ethernet bridges can filter broadcast and multicast frames. Occasionally, a device
will malfunction and continually send out broadcast frames, which are continuously copied
around the network. A broadcast storm, as it is called, can bring network performance to zero.
If a bridge can filter broadcast frames, a broadcast storm has less opportunity to spread.
Today, bridges are also able to filter according to the network-layer protocol. This blurs
the demarcation between bridges and routers. A router operates on the network layer using a
routing protocol to direct traffic around the network. A bridge that implements advanced
filtering techniques is usually called a brouter. Brouters filter by looking at network layer
information but they do not use a routing protocol.
4.3.5 LAN segmentation using bridging
Ethernet LANs that use a bridge to segment the LAN provide more bandwidth per user
because there are fewer users on each segment. In contrast, LANs that do not use bridges for
segmentation provide less bandwidth per user because there are more users on a
non-segmented LAN.
Bridges segment a network by building address tables that contain the address of each
network device and which segment to use to reach that device. Bridges are Layer 2 devices
that forward data frames based on MAC addresses of the frame. In addition, bridges are
transparent to the other devices on the network.
Bridges increase the latency in a network by 10 to 30 percent. This latency is due to the
decision-making required of the bridge or bridges in transmitting data. A bridge is considered
a store-and-forward device because it must examine the destination address field and calculate
the CRC in the frame check sequence field, before forwarding the frame. If the destination
port is busy, the bridge can temporarily store the frame until the port is available. The time it
takes to perform these tasks slows the network transmissions causing increased latency.
4.3.6 Why segment LANs?
There are two primary reasons for segmenting a LAN. The first is to isolate traffic
between segments. The second reason is to achieve more bandwidth per user by creating
smaller collision domains.
Without LAN segmentation, LANs larger than a small workgroup could quickly become
clogged with traffic and collisions.
LAN segmentation can be implemented through the utilization of bridges, switches, and
routers. Each of these devices has particular pros and cons.
With the addition of devices like bridges, switches, and routers the LAN is segmented
into a number of smaller collision domains. In the example shown, four collision domains
have been created.
By dividing large networks into self-contained units, bridges and switches provide
several advantages. Bridges and switches will diminish the traffic experienced by devices on
all connected segments, because only a certain percentage of traffic is forwarded. Bridges and
switches reduce the collision domain but not the broadcast domain.
Each interface on the router connects to a separate network. Therefore the insertion of
the router into a LAN will create smaller collision domains and smaller broadcast domains.
This occurs because routers do not forward broadcasts unless programmed to do so.
A switch employs “microsegmentation” to reduce the collision domain on a LAN. The
switch does this by creating dedicated network segments, or point-to-point connections. The
switch connects these segments in a virtual network within the switch.
This virtual network circuit exists only when two nodes need to communicate. This is
called a virtual circuit as it exists only when needed, and is established within the switch.
4.3.7 Microsegmentation implementation
LAN switches are considered multi-port bridges. Because of microsegmentation, each
port forms its own tiny collision domain, so collisions are effectively eliminated.
Data is exchanged at high speeds by switching the frame to its destination. By reading
the destination MAC address in the Layer 2 header, switches can achieve high-speed data
transfers, much like a bridge does. In cut-through operation, the frame is sent toward the
port of the receiving station before the entire frame has entered the switch. This process
leads to low latency levels and a high rate of speed for frame forwarding.
Ethernet switching increases the bandwidth available on a network. It does this by
creating dedicated network segments, or point-to-point connections, and connecting these
segments in a virtual network within the switch. This virtual network circuit exists only when
two nodes need to communicate. This is called a virtual circuit because it exists only when
needed, and is established within the switch.
Even though the LAN switch reduces the size of collision domains, all hosts connected
to the switch are still in the same broadcast domain. Therefore, a broadcast from one node
will still be seen by all the other nodes connected through the LAN switch.
Switches are data link layer devices that, like bridges, enable multiple physical LAN
segments to be interconnected into a single larger network. Similar to bridges, switches
forward and flood traffic based on MAC addresses. Because switching is performed in
hardware instead of in software, it is significantly faster. Each switch port can be considered a
micro-bridge acting as a separate bridge and gives the full bandwidth of the medium to each
host.
4.3.8 Switches and collision domains
A major disadvantage of Ethernet 802.3 networks is collisions. Collisions occur when
two hosts transmit frames simultaneously. When a collision occurs, the transmitted frames are
corrupted or destroyed in the collision. The sending hosts stop sending further transmissions
for a random period of time, based on the Ethernet 802.3 rules of CSMA/CD. Excessive
collisions cause networks to be unproductive.
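In standard Ethernet, the "random period of time" is chosen by truncated binary exponential backoff. A minimal sketch (the slot time shown is the 10-Mbps Ethernet value):

```python
import random

SLOT_TIME_US = 51.2  # slot time for 10-Mbps Ethernet, in microseconds

def backoff_delay(collision_count):
    """Truncated binary exponential backoff.

    After the nth consecutive collision, a station waits a random
    number of slot times drawn from 0 .. 2**min(n, 10) - 1.
    """
    k = min(collision_count, 10)
    slots = random.randrange(2 ** k)
    return slots * SLOT_TIME_US

# After a first collision the wait is 0 or 1 slot times:
print(backoff_delay(1) in (0.0, 51.2))  # True
```

The random draw is what lets the two colliding stations usually pick different delays and avoid colliding again immediately.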
The network area where frames originate and collide is called the collision domain. All
shared media environments are collision domains.
When a host is connected to a switch port, the switch creates a dedicated
full-bandwidth connection (10 Mbps on a 10-Mbps port). This connection is considered
to be an individual collision domain. For example, if a twelve-port switch has a device
connected to each port then twelve collision domains are created.
A switch builds a switching table by learning the MAC addresses of the hosts that are
connected to each switch port. When two connected hosts want to communicate with each
other, the switch looks up the switching table and establishes a virtual connection between the
ports. The virtual circuit is maintained until the session is terminated.
In the figure, Host B and Host C want to communicate with each other. The switch
creates the virtual connection, which is referred to as a microsegment. The microsegment
behaves as if the network has only two hosts, one sending and one receiving, providing
maximum utilization of the available bandwidth.
Switches reduce collisions and increase bandwidth on network segments because they
provide dedicated bandwidth to each network segment.
4.3.9 Switches and broadcast domains
Communication in a network occurs in three ways. The most common way of
communication is by unicast transmissions. In a unicast transmission, one transmitter tries to
reach one receiver.
Another way to communicate is known as a multicast transmission. Multicast
transmission occurs when one transmitter tries to reach only a subset, or a group, of the entire
segment.
The final way to communicate is by broadcasting. Broadcasting is when one transmitter
tries to reach all the receivers in the network. The server station sends out one message and
everyone on that segment receives the message.
When a device wants to send out a Layer 2 broadcast, the destination MAC address in
the frame is set to all ones. A MAC address of all ones is FF:FF:FF:FF:FF:FF in hexadecimal.
By setting the destination to this value, all the devices will accept and process the broadcasted
frame.
The broadcast domain at Layer 2 is referred to as the MAC broadcast domain. The
MAC broadcast domain consists of all devices on the LAN that receive frames broadcast
by a host to all other machines on the LAN.
A switch is a Layer 2 device. When a switch receives a broadcast, it forwards it to each
port on the switch except the incoming port. Each attached device must process the broadcast
frame. This leads to reduced network efficiency, because available bandwidth is used for
broadcasting purposes.
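The broadcast handling described above, detecting the all-ones destination and flooding it out every port except the ingress port, can be sketched as follows (an illustration; unicast handling is omitted):

```python
BROADCAST_MAC = "ff:ff:ff:ff:ff:ff"  # destination MAC of all ones

def handle_broadcast(dst_mac, in_port, all_ports):
    """Forward a broadcast frame out every port except the one it came in on."""
    if dst_mac.lower() == BROADCAST_MAC:
        return [p for p in all_ports if p != in_port]
    return []  # unicast handling omitted for brevity

print(handle_broadcast("FF:FF:FF:FF:FF:FF", 1, [1, 2, 3, 4]))  # [2, 3, 4]
print(handle_broadcast("00:0C:29:AA:BB:CC", 1, [1, 2, 3, 4]))  # []
```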
When two switches are connected, the broadcast domain is increased. In this example a
broadcast frame is forwarded to all connected ports on Switch 1. Switch 1 is connected to
Switch 2. The frame is propagated to all devices connected to Switch 2.
The overall result is a reduction in available bandwidth. This happens because all
devices in the broadcast domain must receive and process the broadcast frame.
Routers are Layer 3 devices. Routers do not propagate broadcasts. Routers are used to
segment both collision and broadcast domains.
4.3.10 Communication between switches and workstations
When a workstation connects to a LAN, it is unconcerned about the other devices that
are connected to the LAN media. The workstation simply transmits data frames using a NIC
to the network medium.
The workstation could be attached directly to another workstation, using a crossover
cable or attached to a network device, such as a hub, switch, or router, using a
straight-through cable.
Switches are Layer 2 devices that use intelligence to learn the MAC addresses of the
devices that are attached to the ports of the switch. This data is entered into a switching table.
Once the table is complete, the switch can read the destination MAC address of an incoming
data frame on a port and immediately forward it. Until a device transmits, the switch does
not know its MAC address.
Switches provide significant scalability on a network and may be directly connected.
The figure illustrates one scenario of frame transmission utilizing a multi-switch network.
Summary
An understanding of the following key points should have been achieved:

The history and function of shared, half-duplex Ethernet

Collisions in an Ethernet network

Microsegmentation

CSMA/CD

Elements affecting network performance

The function of repeaters

Network latency

Transmission time

The basic function of Fast Ethernet

Network segmentation using routers, switches, and bridges

The basic operations of a switch

Ethernet switch latency

The differences between Layer 2 and Layer 3 switching

Symmetric and asymmetric switching

Memory buffering

Store-and-forward and cut-through switching

The differences between hubs, bridges, and switches

The main functions of switches

Major switch frame transmission modes

The process by which switches learn addresses

The frame-filtering process

LAN segmentation

Microsegmentation using switching

Forwarding modes

Collision and broadcast domains
Chapter 5
Switches
Overview
Designing a network can be a challenging task that involves more than just connecting
computers together. A network requires many features in order to be reliable, manageable, and
scalable. To design reliable, manageable, and scalable networks, a network designer must
realize that each of the major components of a network has distinct design requirements.
Network design is becoming more difficult despite improvements in equipment
performance and media capabilities. Using multiple media types and interconnecting LANs
with other external networks makes the networking environment complex. Good network
design will improve performance and also reduce the difficulties associated with network
growth and evolution.
A LAN spans a single room, a building, or a set of buildings that are close together. A
group of buildings that are on a site and belong to a single organization are referred to as a
campus. The design of larger LANs includes identifying the following:

An access layer that connects end users into the LAN

A distribution layer that provides policy-based connectivity between end-user
LANs

A core layer that provides the fastest connection between the distribution points
Each of these LAN design layers requires switches that are best suited for specific tasks.
The features, functions, and technical specifications for each switch vary depending on the
LAN design layer for which the switch is intended. Understanding the role of each layer and
then choosing the switches best suited for that layer ensures the best network performance for
LAN users.
Students completing this module should be able to:

Describe the four major goals of LAN design

List the key considerations in LAN design

Understand the steps in systematic LAN design

Understand the design issues associated with the Layer 1, 2, and 3 LAN structure,
or topology

Describe the three-layer design model

Identify the functions of each layer of the three-layer model

List Cisco access layer switches and their features

List Cisco distribution layer switches and their features

List Cisco core layer switches and their features
5.1 LAN Design
5.1.1 LAN design goals
The first step in designing a LAN is to establish and document the goals of the design.
These goals are unique to each organization or situation. The following requirements are
usually seen in most network designs:

Functionality – The network must work. The network must allow users to meet
their job requirements. The network must provide user-to-user and
user-to-application connectivity with reasonable speed and reliability.

Scalability – The network must be able to grow. The initial design should grow
without any major changes to the overall design.

Adaptability – The network must be designed with a vision toward future
technologies. The network should include no element that would limit
implementation of new technologies as they become available.

Manageability – The network should be designed to facilitate network monitoring
and management to ensure ongoing stability of operation.
5.1.2 LAN design considerations
Many organizations have been upgrading existing LANs or planning, designing, and
implementing new LANs. This expansion in LAN design is due to the development of
high-speed technologies such as Asynchronous Transfer Mode (ATM). This expansion is also
due to complex LAN architectures that use LAN switching and virtual LANs (VLANs).
To maximize available LAN bandwidth and performance, the following LAN design
considerations must be addressed:

The function and placement of servers

Collision detection issues

Segmentation issues

Broadcast domain issues
Servers provide file sharing, printing, communication, and application services. Servers
typically do not function as workstations. Servers run specialized operating systems, such as
NetWare, Windows NT, UNIX, and Linux. Each server is usually dedicated to one function,
such as e-mail or file sharing.
Servers can be categorized into two distinct classes: enterprise servers and workgroup
servers. An enterprise server supports all the users on the network by offering services, such
as e-mail or Domain Name System (DNS). E-mail or DNS is a service that everyone in an
organization would need because it is a centralized function. However, a workgroup server
supports a specific set of users, offering services such as word processing and file sharing.
Enterprise servers should be placed in the main distribution facility (MDF). Traffic to
the enterprise servers travels only to the MDF and is not transmitted across other networks.
Ideally, workgroup servers should be placed in the intermediate
distribution facilities (IDFs) closest to the users accessing the applications on these servers.
By placing workgroup servers close to the users, traffic only has to travel the network
infrastructure to an IDF, and does not affect other users on that network segment. Layer 2
LAN switches located in the MDF and IDFs should have 100 Mbps or more allocated to these
servers.
Ethernet nodes use CSMA/CD. Each node must contend with all other nodes to access
the shared medium, or collision domain. If two nodes transmit at the same time, a collision
occurs. When this occurs, the transmitted frame is destroyed, and a jam signal is sent to all
nodes on the segment. The transmitting nodes wait a random period of time, and then resend
the data. Excessive collisions can reduce the available bandwidth of a network segment to
35% or 40% of the bandwidth available.
Segmentation is the process of splitting a single collision domain into smaller collision
domains.
Creating smaller collision domains reduces the number of collisions on a LAN
segment, and allows for greater utilization of bandwidth. Layer 2 devices such as bridges and
switches can be used to segment a LAN into smaller collision domains. Routers can achieve
this at Layer 3.
A broadcast occurs when the destination media access control (MAC) address of a data
frame is set to FF-FF-FF-FF-FF-FF. A broadcast domain refers to the set of devices that
receive a broadcast data frame originating from any device within that set. All hosts that
receive a broadcast data frame must process it. Processing the broadcast data will consume
the resources and available bandwidth of the host. Layer 2 devices such as bridges and
switches reduce the size of a collision domain. These devices do not reduce the size of the
broadcast domain. Routers reduce the size of the collision domain and the size of the
broadcast domain at Layer 3.
5.1.3 LAN design methodology
For a LAN to be effective and serve the needs of its users, it should be designed and
implemented according to a planned series of systematic steps. These steps include the
following:

Gather requirements and expectations

Analyze requirements and data

Design the Layer 1, 2, and 3 LAN structure, or topology

Document the logical and physical network implementation
The information gathering process helps clarify and identify any current network
problems. This information includes the organization's history and current status, their
projected growth, operating policies and management procedures, office systems and
procedures, and the viewpoints of the people who will be using the LAN.
The following questions should be asked when gathering information:

Who are the people who will be using the network?

What is the skill level of these people?

What are their attitudes toward computers and computer applications?

How developed are the organizational documented policies?

Has some data been declared mission critical?

Have some operations been declared mission critical?

What protocols are allowed on the network?

Are only certain desktop hosts supported?

Who is responsible for LAN addressing, naming, topology design, and
configuration?

What are the organizational human, hardware, and software resources?

How are these resources currently linked and shared?

What financial resources does the organization have available?
Documenting these requirements allows for an informed estimate of costs and timelines
for the projected LAN design implementation. It is also important to understand the
performance issues of any existing network.
Availability measures the usefulness of the network. Many things affect availability,
including the following:

Throughput

Response time

Access to resources
Every customer has a different definition of availability. For example, there may be a
need to transport voice and video over the network. These services may require more
bandwidth than is available on the network or backbone. To increase availability, more
resources can be added, but adding more resources will increase the cost of the network.
Network design tries to provide the greatest availability for the least cost.
The next step in designing a network is to analyze the requirements of the network and
its users. Network user needs constantly change. As more voice and video-based network
applications become available, the necessity to increase network bandwidth grows too.
Another component of the analysis phase is assessing the user requirements. A LAN that
is incapable of supplying prompt and accurate information to its users is useless. Steps must
be taken to ensure that the information requirements of the organization and its workers are
met.
The next step is to decide on an overall LAN topology that will satisfy the user
requirements. This curriculum concentrates on the star and extended star topologies,
which use Ethernet 802.3 CSMA/CD technology. The CSMA/CD star topology is the
dominant configuration in the industry.
LAN topology design can be broken into the following three unique categories of the
OSI reference model:

Network layer

Data link layer

Physical layer
The final step in LAN design methodology is to document the physical and logical
topology of the network. The physical topology of the network refers to the way in which
various LAN components are connected together. The logical design of the network refers to
the flow of data in a network. It also refers to the naming and addressing schemes used in the
implementation of the LAN design solution.
Important LAN design documentation includes the following:

OSI layer topology map

LAN logical map

LAN physical map

Cut sheets

VLAN logical map

Layer 3 logical map

Addressing maps
5.1.4 Layer 1 design
One of the most important components to consider when designing a network is the
physical cabling. Today, most LAN cabling is based on Fast Ethernet technology. Fast
Ethernet is Ethernet that has been upgraded from 10 Mbps to 100 Mbps, and has the ability to
utilize full-duplex functionality. Fast Ethernet uses the standard Ethernet broadcast-oriented
logical bus topology of 10BASE-T and the CSMA/CD media access control method.
Design issues at Layer 1 include the type of cabling to be used, typically copper or
fiber-optic, and the overall structure of the cabling. Layer 1 cabling media includes types
such as 10/100BASE-TX Category 5, 5e, or 6 unshielded twisted-pair (UTP), or shielded
twisted-pair (STP), 100BASE-FX fiber-optic cable, and the TIA/EIA-568-A standard for layout
and connection of wiring schemes.
Careful evaluation of the strengths and weaknesses of the topologies should be
performed. A network is only as effective as its underlying cable.
Layer 1 issues cause most
network problems. A complete cable audit should be conducted when planning any
significant changes to a network, to identify areas that require upgrades and rewiring.
Fiber-optic cable should be used in the backbone and risers in all cable design settings.
Category 5e UTP cable should be used in the horizontal runs. The cable upgrade should take
priority over any other necessary changes. Enterprises should also make certain that these
systems conform to well-defined industry standards, such as the TIA/EIA-568-A
specifications.
The TIA/EIA-568-A standard specifies that every device connected to the network
should be linked to a central location with horizontal cabling. This applies if all the hosts that
need to access the network are within the 100-meter distance limitation for Category 5e UTP
Ethernet.
In a simple star topology with only one wiring closet, the MDF includes one or more
horizontal cross-connect (HCC) patch panels.
HCC patch cables are used to connect the
Layer 1 horizontal cabling with the Layer 2 LAN switch ports. The uplink port of the LAN
switch, depending on the model, is connected to the Ethernet port of the Layer 3 router using
a patch cable. At this point, the end host has a complete physical connection to the router port.
When hosts in larger networks are outside the 100-meter limitation for Category 5e UTP,
more than one wiring closet is required. By creating multiple wiring closets, multiple
catchment areas are created. The secondary wiring closets are referred to as intermediate
distribution facilities (IDFs).
TIA/EIA-568-A standards specify that IDFs should be
connected to the MDF by using vertical cabling, also called backbone cabling. A vertical
cross-connect (VCC) is used to interconnect the various IDFs to the central MDF. Fiber-optic
cabling is normally used because the vertical cable lengths are typically longer than the
100-meter limit for Category 5e UTP cable.
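The 100-meter rule that drives the MDF/IDF split can be expressed as a simple check. The room names and distances below are hypothetical, used only to illustrate the rule:

```python
MAX_HORIZONTAL_RUN_M = 100  # Category 5e UTP horizontal limit (TIA/EIA-568-A)

def hosts_needing_idf(distance_from_mdf_m):
    """Return hosts whose cable distance from the MDF exceeds the
    100-meter horizontal limit and so must be served from an IDF."""
    return [host for host, dist in distance_from_mdf_m.items()
            if dist > MAX_HORIZONTAL_RUN_M]

# Hypothetical room distances, in meters, for illustration only:
runs = {"Room 101": 45, "Room 203": 95, "Room 310": 140}
print(hosts_needing_idf(runs))  # ['Room 310']
```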
The logical diagram is the network topology model without all the detail of the exact
installation paths of the cabling. The logical diagram is the basic road map of the LAN
including the following elements:

Specify the locations and identification of the MDF and IDF wiring closets.

Document the type and quantity of cabling used to interconnect the IDFs with the
MDF.

Document how many spare cables are available for increasing the bandwidth
between the wiring closets. For example, if the vertical cabling between IDF 1 and
the MDF is running at 80% utilization, two additional pairs could be used to
double the capacity.

Provide detailed documentation of all cable runs, the identification numbers, and
the port the run is terminated on at the HCC or VCC.
The logical diagram is essential when troubleshooting network connectivity problems. If
Room 203 loses connectivity to the network, by examining the cut sheet it can be seen that
this room is running off cable run 203-1, which is terminated on HCC 1 port 13. Using a cable
tester it can be determined whether the problem is a Layer 1 failure. If it is, one of the other
two runs can be used to reestablish connectivity and provide time to troubleshoot run 203-1.
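The Room 203 walkthrough can be modeled with a small cut-sheet lookup. The entries echo the example in the text; the data structure itself is an assumption for illustration:

```python
# Cut sheet: cable run ID -> (room served, patch panel, panel port).
# The entries echo the Room 203 example; the layout is illustrative.
cut_sheet = {
    "203-1": ("Room 203", "HCC 1", 13),
    "203-2": ("Room 203", "HCC 1", 14),
    "203-3": ("Room 203", "HCC 1", 15),
}

def runs_for_room(room):
    """List the cable runs terminating in a room, for troubleshooting."""
    return [(run, panel, port)
            for run, (r, panel, port) in cut_sheet.items() if r == room]

print(runs_for_room("Room 203"))
# [('203-1', 'HCC 1', 13), ('203-2', 'HCC 1', 14), ('203-3', 'HCC 1', 15)]
```

If run 203-1 tests bad at Layer 1, the lookup immediately shows the two spare runs that can restore connectivity.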
5.1.5 Layer 2 design
The purpose of Layer 2 devices in the network is to provide flow control, error detection,
error correction, and to reduce congestion in the network. The two most common Layer 2
networking devices are bridges and LAN switches. Devices at Layer 2 determine the size of
the collision domains.
Collisions and collision domain size are two factors that negatively affect the
performance of a network. Microsegmentation of the network reduces the size of collision
domains and reduces collisions.
Microsegmentation is implemented through the use of
bridges and switches. The goal is to boost performance for a workgroup or a backbone.
Switches can be used with hubs to provide the appropriate level of performance for different
users and servers.
Another important characteristic of a LAN switch is how it can allocate bandwidth on a
per-port basis. This will provide more bandwidth to vertical cabling, uplinks, and servers.
This type of switching is referred to as asymmetric switching. Asymmetric switching provides
switched connections between ports of unlike bandwidth, such as a combination of 10-Mbps
and 100-Mbps ports.
The desired capacity of a vertical cable run is greater than that of a horizontal cable run.
By installing a LAN switch at the MDF and IDF, the vertical cable run can manage the data
traffic from the MDF to the IDF. The horizontal runs between the IDF and the workstations
use Category 5e UTP. No horizontal cable drop should be longer than 100 meters. In a
normal environment, 10 Mbps is adequate for the horizontal drop. Use asymmetric LAN
switches to allow for mixing 10-Mbps and 100-Mbps ports on a single switch.
The next task is to determine the number of 10 Mbps and 100 Mbps ports needed in the
MDF and every IDF. This can be determined by reviewing the user requirements for the
number of horizontal cable drops per room and the number of total drops in any catchment
area. This includes the number of vertical cable runs. For example, suppose that user
requirements dictate four horizontal cable runs to be installed to each room, and the IDF
services a catchment area of 18 rooms. Four drops in each of the 18 rooms equals 72
LAN switch ports (4 × 18 = 72).
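The port-count arithmetic above generalizes to a one-line calculation:

```python
def switch_ports_needed(drops_per_room, rooms_in_catchment):
    """Horizontal drops per room times rooms served by the IDF."""
    return drops_per_room * rooms_in_catchment

print(switch_ports_needed(4, 18))  # 72, matching the example in the text
```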
The size of a collision domain is determined by how many hosts are physically
connected to any single port on the switch. This also affects how much network bandwidth is
available to any host. In an ideal situation, there is only one host connected on a LAN switch
port. The collision domain would consist only of the source host and destination host. The
size of the collision domain would be two. Because of the small size of this collision domain,
there should be virtually no collisions when any two hosts are communicating with each other.
Another way to implement LAN switching is to install shared LAN hubs on the switch ports,
and connect multiple hosts to a single switch port. All hosts connected to the shared LAN
hub share the same collision domain and bandwidth. Collisions would occur more frequently.
Some older switches, such as the Catalyst 1700, do not properly support this kind of
shared collision domain. These older switches do not maintain multiple MAC addresses
mapped to a single port. As a result, there is excessive flooding of broadcasts and ARP requests.
Shared media hubs are generally used in a LAN switch environment to create more
connection points at the end of the horizontal cable runs. This is an acceptable solution, but
care must be taken. Collision domains should be kept small and bandwidth requirements to
the host must be provided according to the specifications gathered in the requirements phase
of the network design process.
Cisco Academy – CCNA 3.0 Semester 3
5.1.6 Layer 3 design
A router is a Layer 3 device and is considered one of the most powerful devices in the
network topology.
Layer 3 devices can be used to create unique LAN segments. Layer 3 devices allow
communication between segments based on Layer 3 addressing, such as IP addressing.
Implementation of Layer 3 devices allows for segmentation of the LAN into unique physical
and logical networks. Routers also allow for connectivity to wide-area networks (WANs),
such as the Internet.
Layer 3 routing determines traffic flow between unique physical network segments
based on Layer 3 addressing. A router forwards data packets based on destination addresses. A
router does not forward LAN-based broadcasts such as ARP requests. Therefore, the router
interface is considered the entry and exit point of a broadcast domain and stops broadcasts
from reaching other LAN segments.
Routers provide scalability because they serve as firewalls for broadcasts. They can also
provide scalability by dividing networks into subnetworks, or subnets, based on Layer 3
addresses.
When deciding whether to use routers or switches, remember to ask, "What is the
problem that is to be solved?" If the problem is related to protocol rather than issues of
contention, then routers are the appropriate solution. Routers solve problems with excessive
broadcasts, protocols that do not scale well, security issues, and network layer addressing.
Routers are more expensive and more difficult to configure than switches.
Figure shows an example of an implementation that has multiple physical networks.
All data traffic from Network 1 destined for Network 2 has to go through the router. In this
implementation, there are two broadcast domains. The two networks have unique Layer 3
network addressing schemes. In a structured Layer 1 wiring scheme, multiple physical
networks are easy to create by patching the horizontal cabling and vertical cabling into the
appropriate Layer 2 switch. This can be done using patch cables. This implementation also
provides robust security, because all traffic in and out of the LAN must pass through the
router.
Once an IP addressing scheme has been developed for a client, it should be clearly
documented. A standard convention should be set for addressing important hosts on the
network. This addressing scheme should be kept consistent throughout the entire network.
Addressing maps provide a snapshot of the network. Creating physical maps of the
network helps to troubleshoot the network.
VLAN implementation combines Layer 2 switching and Layer 3 routing technologies to
limit both collision domains and broadcast domains. VLANs can also be used to provide
security by creating the VLAN groups according to function and by using routers to
communicate between VLANs.
A physical port association is used to implement VLAN assignment. Ports P1, P4, and
P6 have been assigned to VLAN 1. VLAN 2 has ports P2, P3, and P5. Communication
between VLAN 1 and VLAN 2 can occur only through the router. This limits the size of the
broadcast domains and uses the router to determine whether VLAN 1 can talk to VLAN 2.
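As a sketch, the port-to-VLAN assignment described above might be configured as follows. The syntax shown is 2950-style and the interface numbers are hypothetical; exact commands vary by switch model and IOS version.

```
! Assign access ports to VLANs (illustrative only)
Switch(config)#interface FastEthernet0/1
Switch(config-if)#switchport access vlan 1
Switch(config-if)#interface FastEthernet0/2
Switch(config-if)#switchport access vlan 2
```

Traffic between the two VLANs must still pass through the router, which keeps each broadcast domain limited to its own VLAN.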
5.2 LAN Switches
5.2.1 Switched LANs, access layer overview
The construction of a LAN that satisfies the needs of both medium and large-sized
organizations is more likely to be successful if a hierarchical design model is used. The use of
a hierarchical design model will make it easier to make changes to the network as the
organization grows. The hierarchical design model includes the following three layers:
- The access layer provides users in workgroups access to the network.
- The distribution layer provides policy-based connectivity.
- The core layer provides optimal transport between sites. The core layer is often referred to as the backbone.
This hierarchical model applies to any network design. It is important to realize that
these three layers may exist in clear and distinct physical entities. However, this is not a
requirement. These layers are defined to aid in successful network design and to represent
functionality that must exist in a network.
The access layer is the entry point for user workstations and servers to the network. In a
campus LAN the device used at the access layer can be a switch or a hub.
If a hub is used, bandwidth is shared. If a switch is used, then bandwidth is dedicated. If
a workstation or server is directly connected to a switch port, then the full bandwidth of the
connection to the switch is available to the connected computer. If a hub is connected to a
switch port, bandwidth is shared between all devices connected to the hub.
Access layer functions also include MAC layer filtering and microsegmentation. MAC
layer filtering allows switches to direct frames only to the switch port that is connected to the
destination device. The switch creates small Layer 2 segments called microsegments. The
collision domain can be as small as two devices. Layer 2 switches are used in the access layer.
5.2.2 Access layer switches
Access layer switches operate at Layer 2 of the OSI model and provide services such as
VLAN membership. The main purpose of an access layer switch is to allow end users into the
network. An access layer switch should provide this functionality with low cost and high port
density.
The following Cisco switches are commonly used at the access layer:
- Catalyst 1900 series
- Catalyst 2820 series
- Catalyst 2950 series
- Catalyst 4000 series
- Catalyst 5000 series
The Catalyst 1900 or 2820 series switch is an effective access device for small or
medium campus networks. The Catalyst 2950 series switch effectively provides access for
servers and users that require higher bandwidth. This is achieved by providing Fast Ethernet
capable switch ports. The Catalyst 4000 and 5000 series switches include Gigabit Ethernet
ports and are effective access devices for a larger number of users in large campus networks.
Interactive Media Activity
PhotoZoom: Cisco Catalyst 1912
In this PhotoZoom, the student will view a Cisco Catalyst 1912.
5.2.3 Distribution layer overview
The distribution layer of the network is between the access and core layers. It helps to
define and separate the core. The purpose of this layer is to provide a boundary definition in
which packet manipulation can take place. Networks are segmented into broadcast domains
by this layer. Policies can be applied and access control lists can filter packets. The
distribution layer isolates network problems to the workgroups in which they occur. The
distribution layer also prevents these problems from affecting the core layer. Switches in this
layer operate at Layer 2 and Layer 3. In a switched network, the distribution layer includes
several functions such as the following:
- Aggregation of the wiring closet connections
- Broadcast/multicast domain definition
- Virtual LAN (VLAN) routing
- Any media transitions that need to occur
- Security
5.2.4 Distribution layer switches
Distribution layer switches are the aggregation points for multiple access layer switches.
The switch must be able to accommodate the total amount of traffic from the access layer
devices.
The distribution layer switch must have high performance. The distribution layer switch
is a point at which a broadcast domain is delineated. The distribution layer combines VLAN
traffic and is a focal point for policy decisions about traffic flow. For these reasons
distribution layer switches operate at both Layer 2 and Layer 3 of the OSI model. Switches in
this layer are referred to as multilayer switches. These multilayer switches combine the
functions of a router and a switch in one device. They are designed to switch traffic to gain
higher performance than a standard router. If they do not have an associated router module,
then an external router is used for the Layer 3 function.
The following Cisco switches are suitable for the distribution layer:
- Catalyst 2926G
- Catalyst 5000 family
- Catalyst 6000 family
Interactive Media Activity
PhotoZoom: Cisco Catalyst 2950
In this PhotoZoom, the student will view a Cisco Catalyst 2950.
5.2.5 Core layer overview
The core layer is a high-speed switching backbone. If the core switches do not have an
associated router module, an external router is used for the Layer 3 function. This layer of the network
design should not perform any packet manipulation. Packet manipulation, such as access list
filtering, would slow down the switching of packets. Providing a core infrastructure with
redundant alternate paths gives stability to the network in the event of a single device failure.
The core can be designed to use Layer 2 or Layer 3 switching. Asynchronous Transfer
Mode (ATM) or Ethernet switches can be used.
Interactive Media Activity
Point and Click: Core Layer
Students completing this activity will be able to identify the key function of the core
layer in the three layer design model.
5.2.6 Core layer switches
The core layer is the backbone of the campus switched network. The switches in this
layer can make use of a number of Layer 2 technologies. Provided that the distance between
the core layer switches is not too great, the switches can use Ethernet technology. Other Layer
2 technologies, such as Asynchronous Transfer Mode (ATM) cell switching, can also be used.
In a network design, the core layer can be a routed, or Layer 3, core. Core layer switches are
designed to provide efficient Layer 3 functionality when needed. Factors such as need, cost,
and performance should be considered before a choice is made.
The following Cisco switches are suitable for the core layer:
- Catalyst 6500 series
- Catalyst 8500 series
- IGX 8400 series
- Lightstream 1010
Interactive Media Activity
PhotoZoom: Cisco Catalyst 4006
In this PhotoZoom, the student will view a Cisco Catalyst 4006.
Summary
An understanding of the following key points should have been achieved:
- The four major goals of LAN design
- Key considerations in LAN design
- The steps in systematic LAN design
- Design issues associated with Layers 1, 2, and 3
- The three-layer design model
- The functions of each layer in the three-layer model
- Cisco access layer switches and their features
- Cisco distribution layer switches and their features
- Cisco core layer switches and their features
Chapter 6
Switch Configuration
Overview
A switch is a Layer 2 network device that acts as the concentration point for the
connection of workstations, servers, routers, hubs, and other switches.
A hub is an older type of concentration device which also provides multiple ports.
However, hubs are inferior to switches because all devices connected to a hub share the
same bandwidth and reside in a single collision domain. Another drawback to using hubs is that they
only operate in half-duplex mode. In half-duplex mode, a device can send or receive data at
any given time, but not both at the same time. Switches can operate in full-duplex mode,
which means they can send and receive data simultaneously.
Switches are multi-port bridges. Switches are the current standard technology for
Ethernet LANs that utilize a star topology. A switch provides many dedicated, point-to-point
virtual circuits between connected networking devices, so collisions are virtually impossible.
Because of their dominant role in modern networks, the ability to understand and
configure switches is essential for network support.
A new switch will have a preset configuration with factory defaults. This configuration
will rarely meet the needs of a network administrator. Switches can be configured and
managed from a command-line interface (CLI). Increasingly, networking devices can also be
configured and managed using a web based interface and a browser.
A network administrator must be familiar with many tasks to be effective in managing a
network with switches. Some of these tasks are associated with maintaining the switch and its
Internetworking Operating System (IOS). Others are associated with managing interfaces and
tables for optimal, reliable, and secure operation. Basic switch configuration, upgrading the
IOS, and performing password recovery are essential network administrator skills.
Students completing this module should be able to:
- Identify the major components of a Catalyst switch
- Monitor switch activity and status using LED indicators
- Examine the switch bootup output using HyperTerminal
- Use the help features of the command line interface
- List the major switch command modes
- Verify the default settings of a Catalyst switch
- Set an IP address and default gateway for the switch to allow connection and management over a network
- View the switch settings with a Web browser
- Set interfaces for speed and duplex operation
- Examine and manage the switch MAC address table
- Configure port security
- Manage configuration files and IOS images
- Perform password recovery on a switch
- Upgrade the IOS of a switch
6.1 Starting the Switch
6.1.1 Physical startup of the Catalyst switch
Switches are dedicated, specialized computers, which contain a central processing unit
(CPU), random access memory (RAM), and an operating system. As shown in Figure ,
switches usually have several ports for the purpose of connecting hosts, as well as specialized
ports for the purpose of management. A switch can be managed by connecting to the console
port to view and make changes to the configuration.
Switches typically have no power switch to turn them on and off. They simply connect
or disconnect from a power source.
Several switches from the Cisco Catalyst 2950 series are shown in Figure .
6.1.2 Switch LED indicators
The front panel of a switch has several lights to help monitor system activity and
performance. These lights are called light-emitting diodes (LEDs). The front of the switch has
the following LEDs:
- System LED
- Remote Power Supply (RPS) LED
- Port Mode LED
- Port Status LEDs
The System LED shows whether the system is receiving power and functioning
correctly.
The RPS LED indicates whether or not the remote power supply is in use.
The Mode LEDs indicate the current state of the Mode button. The modes are used to
determine how the Port Status LEDs are interpreted. To select or change the port mode, press
the Mode button repeatedly until the Mode LEDs indicate the desired mode.
The Port Status LEDs have different meanings, depending on the current value of the
Mode LED.
6.1.3 Verifying port LEDs during switch POST
Once the power cable is connected, the switch initiates a series of tests called the
power-on self test (POST). POST runs automatically to verify that the switch functions
correctly. The System LED indicates the success or failure of POST. If the System LED is off
but the switch is plugged in, then POST is running. If the System LED is green, then POST
was successful. If the System LED is amber, then POST failed. POST failure is considered to
be a fatal error. Reliable operation of the switch should not be expected if POST fails.
The Port Status LEDs also change during switch POST. The Port Status LEDs turn
amber for about 30 seconds as the switch discovers the network topology and searches for
loops. If the Port Status LEDs turn green, the switch has established a link between the port
and a target, such as a computer. If the Port Status LEDs turn off, the switch has determined
that nothing is plugged into the port.
6.1.4 Viewing initial bootup output from the switch
In order to configure or check the status of a switch, connect a computer to the switch in
order to establish a communication session. Use a rollover cable to connect the console port
on the back of the switch to a COM port on the back of the computer.
Start HyperTerminal on the computer. A dialog window will be displayed. The
connection must first be named when initially configuring the HyperTerminal communication
with the switch. Select the COM port to which the switch is connected using the pull-down
menu, and click the OK button. A second dialog window will be displayed. Set up the
parameters as shown, and click the OK button.
Plug the switch into a wall outlet. The initial bootup output from the switch should be
displayed on the HyperTerminal screen. This output shows information about the switch,
details about POST status, and data about the switch hardware.
After the switch has booted and completed POST, prompts for the System Configuration
dialog are presented. The switch may be configured manually with or without the assistance
of the System Configuration dialog. The System Configuration dialog on the switch is simpler
than that on a router.
6.1.5 Examining help in the switch CLI
The command-line interface (CLI) for Cisco switches is very similar to the CLI for
Cisco routers.
The help command is issued by entering a question mark (?). When this command is
entered at the system prompt, a list of commands available for the current command mode is
displayed.
The help command is very flexible. To obtain a list of commands that begin with a
particular character sequence, enter those characters followed immediately by the question
mark (?). Do not enter a space before the question mark. This form of help is called word help,
because it completes a word.
To list keywords or arguments that are associated with a particular command, enter one
or more words associated with the command, followed by a space and then a question mark
(?). This form of help is called command syntax help, because it provides applicable
keywords or arguments based on a partial command.
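The two forms of help described above can be sketched as follows. The output depends on the IOS version, so only the command patterns are shown.

```
! Word help: list commands that begin with "sh" (no space before the ?)
Switch#sh?
! Command syntax help: list keywords that can follow "show" (space before the ?)
Switch#show ?
```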
Interactive Media Activity
Fill in the Blanks: Switches and Collision Domain
After completing this activity, the student will be able to identify the role of a switch in
preventing collisions and reducing collision domains.
6.1.6 Switch command modes
Switches have several command modes. The default mode is User EXEC mode. The
User EXEC mode is recognized by its prompt, which ends in a greater-than character (>). The
commands available in User EXEC mode are limited to those that change terminal settings,
perform basic tests, and display system information. Figure describes the show commands
that are available in User EXEC mode.
The enable command is used to change from User EXEC mode to Privileged EXEC
mode. Privileged EXEC mode is also recognized by its prompt, which ends in a pound-sign
character (#). The Privileged EXEC mode command set includes those commands allowed in
User EXEC mode, as well as the configure command. The configure command allows other
command modes to be accessed. Because these modes are used to configure the switch,
access to Privileged EXEC mode should be password protected to prevent unauthorized use.
If the system administrator has set a password, then users are prompted to enter the password
before being granted access to Privileged EXEC mode. The password does not appear on the
screen, and is case sensitive.
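Moving between the command modes described above can be sketched as follows, assuming an enable password has been set:

```
Switch>enable
Password:
Switch#configure terminal
Switch(config)#exit
Switch#disable
Switch>
```

Note how the prompt changes at each step: > for User EXEC, # for Privileged EXEC, and (config)# for global configuration mode.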
6.2 Configuring the Switch
6.2.1 Verifying the Catalyst switch default configuration
When powered up for the first time, a switch has default data in the running
configuration file. The default hostname is Switch. No passwords are set on the console or
virtual terminal (vty) lines.
A switch may be given an IP address for management purposes. This is configured on
the virtual interface, VLAN 1. By default, the switch has no IP address.
The switch ports or interfaces are set to auto mode, and all switch ports are in VLAN 1.
VLAN 1 is known as the default management VLAN.
By default, the flash directory contains a file with the IOS image, a file called
env_vars, and a sub-directory called html. It contains no VLAN database file, vlan.dat,
and no saved configuration file, config.text. After the switch has been configured, the flash
directory may contain a config.text file and a VLAN database file.
The IOS version and the configuration register settings can be verified with the show
version command.
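A minimal check of this default state might use commands such as the following (2900/2950-style syntax; output not shown):

```
Switch#show version          ! IOS version and configuration register
Switch#show running-config   ! current running configuration
Switch#dir flash:            ! contents of the flash directory
```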
In this default state, the switch has one broadcast domain and can be managed or
configured through the console port using the CLI. The Spanning-Tree Protocol is also
enabled, and allows the bridge to construct a loop-free topology across an extended LAN.
For small networks, the default configuration may be sufficient. The benefits of better
performance with microsegmentation are obtained immediately.
Lab Activity
Lab Exercise: Verifying Default Switch Configuration
In this lab, the student will investigate the default configuration of a 2900 series switch.
Lab Activity
e-Lab Activity: Basic Switch Operation
In this lab, the student will look at the configuration of a 2950 switch.
6.2.2 Configuring the Catalyst switch
A switch may already be preconfigured and only passwords may need to be entered for
the user EXEC, enable, or privileged EXEC modes. Switch configuration mode is entered
from privileged EXEC mode.
In the CLI, the default privileged EXEC mode prompt is Switch#. In User EXEC mode the
prompt is Switch>.
The following steps will ensure that a new configuration will completely overwrite any
existing configuration:
- Remove any existing VLAN information by deleting the VLAN database file vlan.dat from the flash directory
- Erase the backup configuration file startup-config
- Reload the switch
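As a sketch, the three steps above might look like this on a 2950-class switch (exact command details vary by model and IOS version, and the switch will prompt for confirmation):

```
Switch#delete flash:vlan.dat     ! remove existing VLAN information
Switch#erase startup-config      ! erase the backup configuration file
Switch#reload                    ! restart the switch with a clean configuration
```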
Security, documentation, and management are important for every internetworking
device.
A switch should be given a hostname, and passwords should be set on the console and
vty lines.
To allow the switch to be accessible by Telnet and other TCP/IP applications, IP
addresses and a default gateway should be set. By default, VLAN 1 is the management
VLAN. In a switch-based network, all internetworking devices should be in the management
VLAN. This will allow a single management workstation to access, configure, and manage all
the internetworking devices.
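These basic settings can be sketched in IOS-style syntax as follows. The hostname, passwords, and addresses here are hypothetical examples, and the exact commands vary by model and IOS version.

```
Switch(config)#hostname ALSwitch
ALSwitch(config)#enable secret class
ALSwitch(config)#line console 0
ALSwitch(config-line)#password cisco
ALSwitch(config-line)#login
ALSwitch(config-line)#line vty 0 15
ALSwitch(config-line)#password cisco
ALSwitch(config-line)#login
ALSwitch(config-line)#exit
ALSwitch(config)#interface vlan 1
ALSwitch(config-if)#ip address 192.168.1.2 255.255.255.0
ALSwitch(config-if)#no shutdown
ALSwitch(config-if)#exit
ALSwitch(config)#ip default-gateway 192.168.1.1
```

With the VLAN 1 address and default gateway set, the switch can be reached by Telnet and other TCP/IP applications from the management workstation.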
The Fast Ethernet switch ports default to auto-speed and auto-duplex. This allows the
interfaces to negotiate these settings. When a network administrator needs to ensure an
interface has particular speed and duplex values, the values can be set manually.
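Fixing the values on an interface might be sketched as follows (the interface number is hypothetical):

```
Switch(config)#interface FastEthernet0/1
Switch(config-if)#speed 100      ! fix the speed instead of auto-negotiating
Switch(config-if)#duplex full    ! fix full-duplex operation
```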
Intelligent networking devices can provide a web-based interface for configuration and
management purposes. Once a switch is configured with an IP address and gateway, it can be
accessed in this way. A web browser can access this service using the IP address and port 80,
the default port for http. The HTTP service can be turned on or off, and the port address for
the service can be chosen.
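For example, the HTTP service might be managed as follows (the alternate port number is an arbitrary illustration):

```
Switch(config)#ip http server        ! enable the web-based interface
Switch(config)#ip http port 8080     ! optionally move it off the default port 80
Switch(config)#no ip http server     ! disable the service
```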
Any additional software, such as an applet, can be downloaded to the browser from the
switch. Also, the network devices can be managed by a browser based graphical user interface
(GUI).
Lab Activity
Lab Exercise: Basic Switch Configuration
In this lab, the student will configure a switch with a name and an IP address.
Lab Activity
e-Lab Activity: Basic Switch Configuration
In this lab, the student will configure a 2950 switch.
6.2.3 Managing the MAC address table
Switches learn the MAC addresses of PCs or workstations that are connected to their
switch ports by examining the source address of frames that are received on that port. These
learned MAC addresses are then recorded in a MAC address table. Frames having a
destination MAC address that has been recorded in the table can be switched out to the correct
interface.
To examine the addresses that a switch has learned, enter the privileged EXEC
command show mac-address-table.
A switch dynamically learns and maintains thousands of MAC addresses. To preserve
memory and for optimal operation of the switch, learned entries may be discarded from the
MAC address table. Machines may have been removed from a port, turned off, or moved to
another port on the same switch or a different switch. This could cause confusion in frame
forwarding. For all these reasons, if no frames are seen with a previously learned address, the
MAC address entry is automatically discarded or aged out after 300 seconds.
Rather than wait for a dynamic entry to age out, the administrator has the option to use
the privileged EXEC command clear mac-address-table. MAC address entries that an
administrator has configured can also be removed using this command. Using this method to
clear table entries ensures that invalid addresses are removed immediately.
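The two commands described above can be sketched as follows (keywords vary slightly between platforms and IOS versions):

```
Switch#show mac-address-table     ! display learned MAC addresses
Switch#clear mac-address-table    ! remove entries without waiting for aging
```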
Lab Activity
Lab Exercise: Managing the MAC Address Table
In this lab, the student will create a basic switch configuration and manage the MAC
table.
Lab Activity
e-Lab Activity: Managing the MAC Address Tables
In this lab, the student will observe and clear the MAC address table.
6.2.4 Configuring static MAC addresses
It may be desirable to permanently assign a MAC address to an interface. The reasons
for assigning a permanent MAC address to an interface include:
- The MAC address will not be aged out automatically by the switch.
- A specific server or user workstation must be attached to the port and the MAC address is known.
- Security is enhanced.
To set a static MAC address entry for a switch:
Switch(config)#mac-address-table
static
<mac-address
of
host>
interface
FastEthernet <Ethernet numer> vlan
To remove this entry use the no form of the command:
Switch(config)#no mac-address-table static <mac-address of host> interface
FastEthernet <Ethernet number> vlan <vlan name>
Lab Activity
Lab Exercise: Configuring Static MAC Addresses
In this lab, the student will create a static address entry in the switch MAC table.
Lab Activity
e-Lab Activity: Configuring Static MAC Addresses
In this lab, the student will configure static MAC addresses.
6.2.5 Configuring port security
Securing an internetwork is an important responsibility for a network administrator.
Access layer switchports are accessible through the structured cabling at wall outlets in
offices and rooms. Anyone can plug a PC or laptop into one of these outlets. This is a
potential entry point to the network by unauthorized users. Switches provide a feature called
port security. It is possible to limit the number of addresses that can be learned on an interface.
The switch can be configured to take an action if this limit is exceeded. Secure MAC addresses
can be set statically. However, securing MAC addresses statically can be a complex task and
prone to error.
An alternative approach is to set port security on a switch interface. The number of
MAC addresses per port can be limited to 1. The first address dynamically learned by the switch
becomes the secure address.
To reverse port security on an interface use the no form of the command.
To verify port security status, the command show port security is entered.
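On a 2950-class switch, limiting an interface to one learned address might be sketched as follows. The interface number is hypothetical, and older models such as the 1900 use different syntax.

```
Switch(config)#interface FastEthernet0/4
Switch(config-if)#switchport mode access
Switch(config-if)#switchport port-security            ! enable port security
Switch(config-if)#switchport port-security maximum 1  ! first learned address becomes the secure address
Switch(config-if)#no switchport port-security         ! the no form reverses the setting
```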
Lab Activity
Lab Exercise: Configuring Port Security
In this lab, the student will create and verify a basic switch configuration.
Lab Activity
e-Lab Activity: Configuring Port Security
In this lab, the student will set port security for ports on the switch.
6.2.6 Executing adds, moves, and changes
When a new switch is added to a network, configure the following:
- Switch name
- IP address for the switch in the management VLAN
- A default gateway
- Line passwords
When a host is moved from one port or switch to another, configurations that can cause
unexpected behavior should be removed. Configuration that is required can then be added.
Lab Activity
Lab Exercise: Add, Move, and Change MAC Addresses
In this lab, the student will create and verify a basic switch configuration.
Lab Activity
e-Lab Activity: Add, Move, and Change MAC Addresses on the Switch
In this lab, the student will add a MAC address to the switch, then move the address, and
change it.
6.2.7 Managing switch operating system files
An administrator should document and maintain the operational configuration files for
networking devices. The most recent running-configuration file should be backed up on a
server or disk. This is not only essential documentation, but is very useful if a configuration
needs to be restored.
The IOS should also be backed up to a local server. The IOS can then be reloaded to
flash memory if needed.
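As a sketch, backing up these files to a TFTP server might look like this. The server address and image filename are hypothetical, and the switch will prompt for details.

```
Switch#copy running-config tftp:
Address or name of remote host []? 192.168.1.10
Destination filename [switch-confg]?
Switch#copy flash:c2950-i6q4l2-mz.121-22.EA4.bin tftp:
```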
Lab Activity
Lab Exercise: Managing Switch Operating System Files
In this lab, the student will create and verify a basic switch configuration, back up the
switch IOS to a TFTP server, and then restore it.
Lab Activity
Lab Exercise: Managing Switch Startup Configuration Files
In this lab, the student will create and verify a basic switch configuration, back up the
switch startup configuration file to a TFTP server, and then restore it.
Lab Activity
e-Lab Activity: Managing the Switch Operating System Files
In this lab, the student will move files to and from the switch using a TFTP server.
Lab Activity
e-Lab Activity: Managing the Startup Configuration Files
In this lab, the student will move files to and from the switch using a TFTP server.
6.2.8 1900/2950 password recovery
For security and management purposes, passwords must be set on the console and vty
lines. An enable password and an enable secret password must also be set. These practices
help ensure that only authorized users have access to the user and privileged EXEC modes of
the switch.
There will be circumstances where physical access to the switch can be achieved, but
access to the user or privileged EXEC mode cannot be gained because the passwords are not
known or have been forgotten.
In these circumstances, a password recovery procedure must be followed.
Lab Activity
Lab Exercise: Password Recovery Procedure on a Catalyst 2900 Series Switch
In this lab, the student will create and verify a basic switch configuration.
Lab Activity
e-Lab Activity: Password Recovery Procedure on a 2900 Series Switch
In this lab, the student will go through the procedure for password recovery.
6.2.9 1900/2900 firmware upgrade
IOS and firmware images are periodically released with bugs fixed, new features
introduced, and performance improved. If the network can be made more secure, or can
operate more efficiently with a new version of the IOS, then the IOS should be upgraded.
To upgrade the IOS, download a copy of the new image to a local server from the Cisco
Connection Online (CCO) Software Center.
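Loading the new image from the local server might then be sketched as follows (the server address and image filename are hypothetical):

```
Switch#copy tftp://192.168.1.10/c2950-i6q4l2-mz.121-22.EA4.bin flash:
```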
Lab Activity
Lab Exercise: Firmware Upgrade of a Catalyst 2900 Series Switch
In this lab, the student will create and verify a basic switch configuration, then upgrade
the IOS and HTML files from a file supplied by the instructor.
Lab Activity
e-Lab Activity: Firmware Upgrade of a Catalyst 2900 Series Switch
In this lab, the student will upgrade the firmware of the switch.
Summary
An understanding of the following key points should have been achieved:
- The major components of a Catalyst switch
- Monitoring switch activity and status using LED indicators
- Examining the switch bootup output using HyperTerminal
- Using the help features of the command line interface
- The major switch command modes
- The default settings of a Catalyst switch
- Setting an IP address and default gateway for the switch to allow connection and management over a network
- Viewing the switch settings with a Web browser
- Setting interfaces for speed and duplex operation
- Examining and managing the switch MAC address table
- Configuring port security
- Managing configuration files and IOS images
- Performing password recovery on a switch
- Upgrading the IOS of a switch
Chapter 7
Spanning-Tree Protocol
Overview
Redundancy in a network is extremely important because redundancy allows networks
to be fault tolerant. Redundant topologies protect against network downtime due to a failure
of a single link, port, or networking device. Network engineers are often required to make
difficult decisions, balancing the cost of redundancy with the need for network availability.
Redundant topologies based on switches and bridges are susceptible to broadcast storms,
multiple frame transmissions, and MAC address database instability. Therefore network
redundancy requires careful planning and monitoring to function properly.
Switched networks provide the benefits of smaller collision domains, microsegmentation, and full duplex operation. Switched networks provide better performance.
Redundancy in a network is required to protect against loss of connectivity due to the
failure of an individual component. Providing this redundancy, however, often results in
physical topologies with loops. Physical layer loops can cause serious problems in switched
networks. Broadcast storms, multiple frame transmissions, and media access control database
instability can make such networks unusable.
The Spanning-Tree Protocol is used in switched networks to create a loop free logical
topology from a physical topology that has loops. Links, ports, and switches that are not part
of the active loop free topology do not participate in the forwarding of data frames. The
Spanning-Tree Protocol is a powerful tool that gives network administrators the security of a
redundant topology without the risk of problems caused by switching loops.
Students completing this module should be able to:
- Define redundancy and its importance in networking
- Describe the key elements of a redundant networking topology
- Define broadcast storms and describe their impact on switched networks
- Define multiple frame transmissions and describe their impact on switched networks
- Identify causes and results of MAC address database instability
- Identify the benefits and risks of a redundant topology
- Describe the role of spanning tree in a redundant-path switched network
- Identify the key elements of spanning tree operation
- Describe the process for root bridge election
- List the spanning-tree states in order
- Compare Spanning-Tree Protocol and Rapid Spanning-Tree Protocol
7.1 Redundant Topologies
7.1.1 Redundancy
Many companies and organizations increasingly rely on computer networks for their
operations. Access to file servers, databases, the Internet, intranets, and extranets is critical for
successful businesses. If the network is down, productivity is lost and customers are
dissatisfied.
Companies increasingly look for 24 hour, seven day a week uptime for their
computer networks. Achieving 100% uptime is perhaps impossible, but a 99.999%, or
five nines, uptime is a goal that many organizations set. Five nines allows 0.001% downtime,
which works out to 0.00001 × 365.25 × 24 × 60 ≈ 5.25 minutes of downtime per year, or one
hour of downtime, on average, for every 4,000 days of operation.
Achieving such a goal requires extremely reliable networks. Reliability in networks is
achieved by reliable equipment and by designing networks that are tolerant to failures and
faults. The network is designed to reconverge rapidly so that the fault is bypassed.
Fault tolerance is achieved through redundancy. Redundancy means having more than
is strictly necessary. How does redundancy help achieve reliability?
Assume that the only way to get to work is by car. If the car develops a fault that
makes it unusable, going to work is impossible until the car is repaired and returned.
If the car is unusable, on average, one day in ten, then it is available 90% of the time.
Going to work is possible nine days in every ten. Reliability is therefore 90%.
Buying a second car will improve matters. Two cars are not needed just to get to
work, but the second car provides redundancy, a backup, in case the primary vehicle fails. The
ability to get to work no longer depends on a single car.
If each car is independently unavailable one day in ten, both cars will be unusable at the
same time, on average, only one day in every 100. Purchasing a second, redundant car has
therefore improved reliability to 99%.
7.1.2 Redundant topologies
A goal of redundant topologies is to eliminate network outages caused by a single point
of failure. All networks need redundancy for enhanced reliability.
A network of roads is a global example of a redundant topology. If one road is closed for
repair there is likely an alternate route to the destination.
Consider an outlying community separated by a river from the town center. If there is
only one bridge across the river there is only one way into town. The topology has no
redundancy.
If the bridge is flooded or damaged by an accident, travel to the town center across the
bridge is impossible.
Building a second bridge across the river creates a redundant topology. The suburb is not
cut off from the town center if one bridge is impassable.
7.1.3 Redundant switched topologies
Networks with redundant paths and devices allow for more network uptime. Redundant
topologies eliminate single points of failure. If a path or device fails, the redundant path or
device can take over the tasks of the failed path or device.
If Switch A fails, traffic can still flow from Segment 2 to Segment 1 and to the router
through Switch B.
If port 1 fails on Switch A then traffic can still flow through port 1 on Switch B.
Switches learn the MAC addresses of devices on their ports so that data can be properly
forwarded to the destination. Switches will flood frames for unknown destinations until they
learn the MAC addresses of the devices.
Broadcasts and multicasts are also flooded.
A redundant switched topology may cause broadcast storms, multiple frame copies, and
MAC address table instability problems.
7.1.4 Broadcast storms
Broadcasts and multicasts can cause problems in a switched network.
Multicasts are treated as broadcasts by the switches. Broadcast and multicast frames
are flooded out all ports, except the port on which the frame was received.
If Host X sends a broadcast, such as an ARP request for the Layer 2 address of the router,
then Switch A will forward the broadcast out all ports. Switch B, being on the same segment,
also forwards all broadcasts. Each switch then sees the broadcasts that the other switch has
forwarded and forwards them again.
The switches continue to propagate broadcast traffic over and over. This is called a
broadcast storm. This broadcast storm will continue until one of the switches is disconnected.
The switches and end devices will be so busy processing the broadcasts that user traffic is
unlikely to flow. The network will appear to be down or extremely slow.
7.1.5 Multiple frame transmissions
In a redundant switched network it is possible for an end device to receive multiple
frames.
Assume that the MAC address of Router Y has been timed out by both switches. Also
assume that Host X still has the MAC address of Router Y in its ARP cache and sends a
unicast frame to Router Y. The router receives the frame because it is on the same segment as
Host X.
Switch A does not have the MAC address of Router Y and will therefore flood the
frame out its other ports. Switch B also does not know which port Router Y is on, so it
floods the frame it received, causing Router Y to receive multiple copies of the same frame.
This is a cause of unnecessary processing in all devices.
7.1.6 Media access control database instability
In a redundant switched network it is possible for switches to learn the wrong
information. A switch can incorrectly learn that a MAC address is on one port, when it is
actually on a different port.
In this example the MAC address of Router Y is not in the MAC address table of either
switch.
Host X sends a frame directed to Router Y. Switches A and B learn the MAC address of
Host X on port 0.
The frame to Router Y is flooded out port 1 of both switches. Each switch then receives,
on its own port 1, the frame flooded by the other switch and incorrectly learns the MAC
address of Host X on port 1. When Router Y sends a frame to Host X, Switch A and
Switch B will also receive the frame and will send it out port 1. This is unnecessary,
because the switches have incorrectly learned that Host X is on port 1.
In this example the unicast frame from Router Y to Host X will be caught in a loop.
7.2 Spanning-Tree Protocol
7.2.1 Redundant topology and spanning tree
Redundant networking topologies are designed to ensure that networks continue to
function in the presence of single points of failure. Users have less chance of interruption to
their work, because the network continues to function. Any interruptions that are caused by a
failure should be as short as possible.
Reliability is increased by redundancy. A network that is based on switches or bridges
will introduce redundant links between those switches or bridges to overcome the failure of a
single link. These connections introduce physical loops into the network. These bridging
loops are created so if one link fails another can take over the function of forwarding traffic.
Switches operate at Layer 2 of the OSI model, where forwarding decisions are made.
Because Layer 2 forwarding provides no mechanism to expire looping frames, switched
networks must not have active loops.
Switches flood traffic out all ports when the traffic is sent to a destination that is not yet
known. Broadcast and multicast traffic is forwarded out every port, except the port on which
the traffic arrived. This traffic can be caught in a loop.
In the Layer 2 header there is no Time To Live (TTL). If a frame is sent into a Layer 2
looped topology of switches, it can loop forever. This wastes bandwidth and makes the
network unusable.
At Layer 3 the TTL is decremented and the packet is discarded when the TTL reaches 0.
This creates a dilemma. A physical topology that contains switching or bridging loops is
necessary for reliability, yet a switched network cannot have loops.
The solution is to allow physical loops, but create a loop free logical topology. For this
logical topology, traffic destined for the server farm attached to Cat-5 from any user
workstation attached to Cat-4 will travel through Cat-1 and Cat-2. This will happen even
though there is a direct physical connection between Cat-5 and Cat-4.
The loop free logical topology created is called a tree. This topology is a star or
extended star logical topology, the spanning tree of the network. It is a spanning tree because
all devices in the network are reachable or spanned.
The algorithm used to create this loop free logical topology is the spanning-tree
algorithm. This algorithm can take a relatively long time to converge. A new algorithm called
the rapid spanning-tree algorithm is being introduced to reduce the time for a network to
compute a loop free logical topology.
7.2.2 Spanning-Tree Protocol
Ethernet bridges and switches can implement the IEEE 802.1D Spanning-Tree Protocol
and use the spanning-tree algorithm to construct a loop free shortest path network.
Shortest path is based on cumulative link costs. Link costs are based on the speed of the
link.
The Spanning-Tree Protocol establishes a root node, called the root bridge. The
Spanning-Tree Protocol constructs a topology that has one path for reaching every network
node. The resulting tree originates from the root bridge. Redundant links that are not part of
the shortest path tree are blocked.
It is because certain paths are blocked that a loop free topology is possible. Data frames
received on blocked links are dropped.
The Spanning-Tree Protocol requires network devices to exchange messages to detect
bridging loops. Links that will cause a loop are put into a blocking state.
The message that a switch sends, allowing the formation of a loop free logical topology,
is called a Bridge Protocol Data Unit (BPDU). BPDUs continue to be received on blocked
ports. This ensures that if an active path or device fails, a new spanning tree can be calculated.
BPDUs contain enough information so that all switches can do the following:
- Select a single switch that will act as the root of the spanning tree
- Calculate the shortest path from itself to the root switch
- Designate, for each LAN segment, one of the switches as the closest one to the root. This bridge is called the "designated switch". The designated switch handles all communication from that LAN towards the root bridge.
- Choose, for each non-root switch, one of its ports as its root port. This is the interface that gives the best path to the root switch.
- Select the ports that are part of the spanning tree, the designated ports. Non-designated ports are blocked.
Interactive Media Activity
Crossword Puzzle: Spanning-Tree States
When the student has completed this activity, the student will be able to identify the
function of spanning-tree states.
Interactive Media Activity
Point and Click: Spanning-Tree Protocol
After completing this activity, the student will learn about the concept of Spanning-Tree
Protocol.
7.2.3 Spanning-tree operation
When the network has stabilized, it has converged and there is one spanning tree per
network.
As a result, for every switched network the following elements exist:
- One root bridge per network
- One root port per non-root bridge
- One designated port per segment
- Unused, non-designated ports
Root ports and designated ports are used for forwarding (F) data traffic.
Non-designated ports discard data traffic. These ports are called blocking (B) or
discarding ports.
7.2.4 Selecting the root bridge
The first decision that all switches in the network make is to identify the root bridge.
The position of the root bridge in a network will affect the traffic flow.
When a switch is turned on, the spanning-tree algorithm is used to identify the root
bridge. BPDUs are sent out with the Bridge ID (BID). The BID consists of a bridge priority,
which defaults to 32768, and the switch base MAC address. By default, BPDUs are sent every
two seconds.
When a switch first starts up, it assumes it is the root switch and sends out BPDUs that
contain its own MAC address in both the root and sender BID. All switches see the BIDs sent.
When a switch receives a BPDU with a lower root BID, it replaces the root BID in the BPDUs
that it sends out. All bridges see these BPDUs and agree that the bridge with the smallest BID
value is the root bridge.
A network administrator may want to influence the decision by setting the switch
priority to a smaller value than the default, which will make the BID smaller. This should only
be implemented when the traffic flow on the network is well understood.
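A sketch of this tuning, assuming a Catalyst switch running IOS where spanning-tree priority is set per VLAN; the syntax differs on some platforms, and the priority value 4096 is only an example:

```
Switch(config)#spanning-tree vlan 1 priority 4096
Switch(config)#end
Switch#show spanning-tree vlan 1
```

A lower priority makes the BID smaller, so this switch would be preferred as root bridge for VLAN 1 over switches left at the default of 32768.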
Lab Activity
Lab Exercise: Selecting the Root Bridge
In this lab, the student will create and verify a basic switch configuration, then
determine which switch is selected as the root switch with factory default settings.
Lab Activity
e-Lab Activity: Selecting the Root Bridge
In this lab, the student will verify the configuration of the hosts and switch by testing
connectivity.
7.2.5 Stages of spanning-tree port states
Time is required for protocol information to propagate throughout a switched network.
Topology changes in one part of a network are not instantly known in other parts of the
network. There is propagation delay. A switch should not change a port state from inactive to
active immediately, as this may cause data loops.
Each port on a switch that is using the Spanning-Tree Protocol has one of five states, as
shown in Figure .
In the blocking state, ports can only receive BPDUs. Data frames are discarded and no
addresses can be learned. It may take up to 20 seconds to change from this state.
Ports transition from the blocking state to the listening state. In this state, switches
determine whether there are any other paths to the root bridge. A path that is not the least-cost
path to the root bridge returns to the blocking state. The listening period is called the forward
delay and lasts for 15 seconds. In the listening state, user data is not forwarded and MAC
addresses are not learned. BPDUs are still processed.
Ports transition from the listening to the learning state. In this state user data is not
forwarded, but MAC addresses are learned from any traffic that is seen. The learning state
lasts for 15 seconds and is also called the forward delay. BPDUs are still processed.
A port goes from the learning state to the forwarding state. In this state user data is
forwarded and MAC addresses continue to be learned. BPDUs are still processed.
A port can be in a disabled state. This disabled state can occur when an administrator
shuts down the port or the port fails.
The time values given for each state are the default values. These values have been
calculated on an assumption that there will be a maximum of seven switches in any branch of
the spanning tree from the root bridge.
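These defaults can be inspected and, if the diameter assumption does not hold, adjusted. The commands below are a sketch for switches running Catalyst IOS; the exact syntax varies by platform and IOS version, and the values shown are simply the defaults:

```
Switch(config)#spanning-tree vlan 1 forward-time 15
Switch(config)#spanning-tree vlan 1 max-age 20
Switch(config)#end
Switch#show spanning-tree vlan 1
```

Timers should normally be left at their defaults; changing them without recalculating for the actual network diameter can cause loops or slow convergence.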
Interactive Media Activity
Point and Click: Spanning-Tree States
When the student has completed this activity, the student will be able to identify the
function of spanning-tree states.
7.2.6 Spanning-tree recalculation
A switched internetwork has converged when all the switch and bridge ports are in either
the forwarding or blocked state. Forwarding ports send and receive data traffic and BPDUs.
Blocked ports will only receive BPDUs.
When the network topology changes, switches and bridges recompute the spanning tree,
which disrupts user traffic.
Convergence on a new spanning-tree topology using the IEEE 802.1D standard can take
up to 50 seconds. This convergence is made up of the max-age of 20 seconds, plus the
listening forward delay of 15 seconds, and the learning forward delay of 15 seconds.
Lab Activity
Lab Exercise: Spanning-Tree Recalculation
In this lab, the student will create and verify a basic switch configuration, then observe
the behavior of the spanning-tree algorithm when the switched network topology changes.
Lab Activity
e-Lab Activity: Spanning-Tree Recalculation
In this lab, the students will create a basic switch configuration and verify it.
7.2.7 Rapid Spanning-Tree Protocol
The Rapid Spanning-Tree Protocol is defined in the IEEE 802.1w LAN standard. The
standard and protocol introduce the following:
- Clarification of port states and roles
- Definition of a set of link types that can go to the forwarding state rapidly
- The concept of allowing switches in a converged network to generate their own BPDUs rather than relaying root bridge BPDUs
The “blocked” state of a port has been renamed the “discarding” state. One role of a
discarding port is the “alternate port”. The discarding port can become the “designated port” in
the event of the failure of the designated port for the segment.
Link types have been defined as point-to-point, edge-type, and shared. These changes
allow link failures in a switched network to be learned rapidly.
Point-to-point links and edge-type links can go to the forwarding state immediately.
Network convergence does not need to be any longer than 15 seconds with these
changes.
The Rapid Spanning-Tree Protocol, IEEE 802.1w, will eventually replace the
Spanning-Tree Protocol, IEEE 802.1D.
Summary
An understanding of the following key points should have been achieved:
- Redundancy and its importance in networking
- The key elements of a redundant networking topology
- Broadcast storms and their impact on switched networks
- Multiple frame transmissions and their impact on switched networks
- Causes and results of MAC address database instability
- The benefits and risks of a redundant topology
- The role of spanning tree in a redundant-path switched network
- The key elements of spanning-tree operation
- The process for root bridge election
- Spanning-tree states
- Spanning-Tree Protocol compared to Rapid Spanning-Tree Protocol
Chapter 8
Virtual LANs
Overview
An important feature of Ethernet switching is the virtual local-area network (VLAN). A
VLAN is a logical grouping of devices or users. These devices or users can be grouped by
function, department, or application despite the physical LAN segment location. Devices on a
VLAN are restricted to communicating only with devices on their own VLAN. Just as
routers provide connectivity between different LAN segments, routers provide connectivity
between different VLAN segments. Cisco is taking a positive approach toward vendor
interoperability, but each vendor has developed its own proprietary VLAN product, and these
products may not be entirely compatible with one another.
VLANs increase overall network performance by logically grouping users and resources
together. Businesses often use VLANs as a way of ensuring that a particular set of users are
logically grouped regardless of the physical location. Therefore, users in the Marketing
department are placed in the Marketing VLAN, while users in the Engineering Department
are placed in the Engineering VLAN.
VLANs can enhance scalability, security, and network management. Routers in VLAN
topologies provide broadcast filtering, security, and traffic flow management.
VLANs are powerful tools for network administrators when properly designed and
configured. VLANs simplify tasks when additions, moves, and changes to a network are
necessary. VLANs improve network security and help control Layer 3 broadcasts. However,
improperly configured VLANs can make a network function poorly or not function at all.
Understanding how to implement VLANs on different switches is important when designing a
network.
Students completing this module should be able to:
- Define VLANs
- List the benefits of VLANs
- Explain how VLANs are used to create broadcast domains
- Explain how routers are used for communication between VLANs
- List the common VLAN types
- Define ISL and 802.1Q
- Explain the concept of geographic VLANs
- Configure static VLANs on 29xx series Catalyst switches
- Verify and save VLAN configurations
- Delete VLANs from a switch configuration
8.1 VLAN Concepts
8.1.1 VLAN introduction
A VLAN is a logical grouping of network devices or services that is not restricted to a
physical segment or LAN switch.
VLANs logically segment switched networks based on the functions, project teams, or
applications of the organization regardless of the physical location or connections to the
network. All workstations and servers used by a particular workgroup share the same VLAN,
regardless of the physical connection or location.
Configuration or reconfiguration of VLANs is done through software. Physically
connecting or moving cables and equipment is unnecessary when configuring VLANs.
A workstation in a VLAN group is restricted to communicating with file servers in the
same VLAN group. VLANs function by logically segmenting the network into different
broadcast domains so that packets are only switched between ports that are designated for the
same VLAN. VLANs consist of hosts or networking equipment connected by a single
bridging domain. The bridging domain is supported on different networking equipment. LAN
switches operate bridging protocols with a separate bridge group for each VLAN.
VLANs are created to provide segmentation services traditionally provided by physical
routers in LAN configurations. VLANs address scalability, security, and network management.
Routers in VLAN topologies provide broadcast filtering, security, and traffic flow
management. Switches may not bridge any traffic between VLANs, as this would violate the
integrity of the VLAN broadcast domain. Traffic should only be routed between VLANs.
8.1.2 Broadcast domains with VLANs and routers
A VLAN is a broadcast domain created by one or more switches. The network design
shown in the figures requires three separate broadcast domains.
One figure shows how three separate broadcast domains are created using three separate
switches. Layer 3 routing allows the router to send packets to the three different broadcast
domains.
In the next figure, VLANs are created using one router and one switch. Although there is
only one router and one switch, there are still three separate broadcast domains. The router
routes traffic between the VLANs using Layer 3 routing.
The switch in the figure forwards frames to the router interfaces in two cases:

- If the frame is a broadcast frame
- If the frame is destined for one of the MAC addresses on the router
If Workstation 1 on the Engineering VLAN wants to send frames to Workstation 2 on
the Sales VLAN, the frames are sent to the Fa0/0 MAC address of the router. Routing occurs
through the IP address on the Fa0/0 router interface for the Engineering VLAN.
If Workstation 1 on the Engineering VLAN wants to send a frame to Workstation 2 on
the same VLAN, the destination MAC address of the frame is the MAC address for
Workstation 2.
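Inter-VLAN routing of this kind is often configured with subinterfaces on a single router port, sometimes called router-on-a-stick. The following is a minimal sketch; the VLAN numbers and IP addressing are invented for illustration:

```
Router(config)#interface fastethernet 0/0.10
Router(config-subif)#encapsulation dot1q 10
Router(config-subif)#ip address 192.168.10.1 255.255.255.0
Router(config-subif)#interface fastethernet 0/0.20
Router(config-subif)#encapsulation dot1q 20
Router(config-subif)#ip address 192.168.20.1 255.255.255.0
```

Each subinterface acts as the default gateway for one VLAN; in this example, hosts in VLAN 10 would use 192.168.10.1 as their gateway.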
Implementing VLANs on a switch causes the following to occur:
- The switch maintains a separate bridging table for each VLAN.
- If a frame comes in on a port in VLAN 1, the switch searches the bridging table for VLAN 1.
- When the frame is received, the switch adds the source address to the bridging table if it is currently unknown.
- The destination is checked so a forwarding decision can be made.
- For learning and forwarding, the search is made against the address table for that VLAN only.
8.1.3 VLAN operation
Each switch port could be assigned to a different VLAN. Ports assigned to the same
VLAN share broadcasts. Ports that do not belong to that VLAN do not share these broadcasts.
This improves the overall performance of the network.
Static membership VLANs are called port-based and port-centric membership VLANs.
As a device enters the network, it automatically assumes the VLAN membership of the port to
which it is attached.
Users attached to the same shared segment share the bandwidth of that segment. Each
additional user attached to the shared medium means less bandwidth for each user and
deterioration of network performance. VLANs offer more bandwidth to users than a shared
network. The default VLAN for every port in the switch is the management VLAN. The
management VLAN is always VLAN 1 and may not be deleted. All other ports on the switch
may be reassigned to alternate VLANs.
Dynamic membership VLANs are created through network management software.
CiscoWorks 2000 or CiscoWorks for Switched Internetworks is used to create Dynamic
VLANs. Dynamic VLANs allow for membership based on the MAC address of the device
connected to the switch port. As a device enters the network, it queries a database within the
switch for a VLAN membership.
In port-based or port-centric VLAN membership, the port is assigned to a specific
VLAN membership independent of the user or system attached to the port. When using this
membership method, all users of the same port must be in the same VLAN. A single user, or
multiple users, can be attached to a port and never realize that a VLAN exists. This approach
is easy to manage because no complex lookup tables are required for VLAN segmentation.
Network administrators are responsible for configuring VLANs manually and statically.
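On a 29xx series switch, a static VLAN might be created and a port assigned to it as follows. This is a sketch; the VLAN number, name, and interface are examples, and on older 2900XL images VLANs are created in VLAN database mode rather than in global configuration:

```
Switch#vlan database
Switch(vlan)#vlan 10 name Engineering
Switch(vlan)#exit
Switch#configure terminal
Switch(config)#interface fastethernet 0/2
Switch(config-if)#switchport mode access
Switch(config-if)#switchport access vlan 10
```

Once assigned, any device plugged into FastEthernet 0/2 becomes a member of VLAN 10, regardless of who or what it is.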
Each interface on a switch behaves like a port on a bridge. Bridges filter traffic that does
not need to go to segments other than the source segment. If a frame needs to cross the bridge,
the bridge forwards the frame to the correct interface and to no others. If the bridge or switch
does not know the destination, it floods the frame to all ports in the broadcast domain or
VLAN, except the source port.
Interactive Media Activity
Drag and Drop: VLAN Operation
When the student has completed this activity, the student will learn the path packets take
in a network with VLANs. The student will predict the path a packet will take given the source
host and the destination host.
8.1.4 Benefits of VLANs
The key benefit of VLANs is that they permit the network administrator to organize the
LAN logically instead of physically. This means that an administrator is able to do all of the
following:
- Easily move workstations on the LAN
- Easily add workstations to the LAN
- Easily change the LAN configuration
- Easily control network traffic
- Improve security
8.1.5 VLAN types
There are three basic VLAN memberships for determining and controlling how a packet
gets assigned:

- Port-based VLANs
- MAC address based VLANs
- Protocol based VLANs
The frame headers are encapsulated or modified to reflect a VLAN ID before the frame
is sent over the link between switches. Before forwarding to the destination device, the frame
header is changed back to the original format.
The number of VLANs in a switch varies depending on several factors:

- Traffic patterns
- Types of applications
- Network management needs
- Group commonality
In addition, an important consideration in defining the size of the switch and the number
of VLANs is the IP addressing scheme.
For example, a network using a 24-bit mask to define a subnet allows a total of 254 host
addresses on one subnet. Because a one-to-one correspondence between VLANs and IP subnets
is strongly recommended, there can be no more than 254 devices in any one VLAN. It is
further recommended that VLANs should not extend outside of the Layer 2 domain of the
distribution switch.
There are two major methods of frame tagging, Inter-Switch Link (ISL) and 802.1Q.
ISL used to be the most common, but is now being replaced by 802.1Q frame tagging.
LAN emulation (LANE) is a way to make an Asynchronous Transfer Mode (ATM)
network simulate an Ethernet network. There is no tagging in LANE, but the virtual
connection used implies a VLAN ID. As packets are received by the switch from any attached
end-station device, a unique packet identifier is added within each header. This header
information designates the VLAN membership of each packet. The packet is then forwarded
to the appropriate switches or routers based on the VLAN identifier and MAC address. Upon
reaching the destination node, the VLAN ID is removed from the packet by the adjacent
switch and the packet is forwarded to the attached device. Packet tagging provides a
mechanism for controlling the flow of broadcasts and applications while not interfering with
the network and applications.
8.2 VLAN Configuration
8.2.1 VLAN basics
In a switched environment, a station will see only traffic destined for it. The switch
filters traffic in the network allowing the workstation to have full, dedicated bandwidth for
sending or receiving traffic. Unlike a shared-hub system where only one station can transmit
at a time, the switched network allows many concurrent transmissions within a broadcast
domain. The switched network does this without directly affecting other stations inside or
outside of the broadcast domain. Station pairs A/B, C/D, and E/F can all communicate
without affecting the other station pairs.
Each VLAN must have a unique Layer 3 network address assigned. This enables routers
to switch packets between VLANs.
VLANs can exist either as end-to-end networks or they can exist inside of geographic
boundaries.
An end-to-end VLAN network comprises the following characteristics:
• Users are grouped into VLANs independent of physical location, but dependent on group or job function.
• All users in a VLAN should have the same 80/20 traffic flow patterns.
• As a user moves around the campus, VLAN membership for that user should not change.
• Each VLAN has a common set of security requirements for all members.
Starting at the access layer, switch ports are provisioned for each user. Each color
represents a subnet. Because people have moved around over time, each switch eventually
becomes a member of all VLANs. Frame tagging is used to carry multiple VLAN information
between the access layer wiring closets and the distribution layer switches.
ISL is a Cisco proprietary protocol that maintains VLAN information as traffic flows
between switches and routers. IEEE 802.1Q is an open-standard (IEEE) VLAN tagging
mechanism in switching installations. Catalyst 2950 switches do not support ISL trunking.
Workgroup servers operate in a client/server model. For this reason, attempts have been
made to keep users in the same VLAN as their server to maximize the performance of Layer 2
switching and keep traffic localized.
In Figure , a core layer router is being used to route between subnets. The network is
engineered, based on traffic flow patterns, to have 80 percent of the traffic contained within a
VLAN. The remaining 20 percent crosses the router to the enterprise servers and to the
Internet and WAN.
8.2.2 Geographic VLANs
End-to-end VLANs allow devices to be grouped based upon resource usage. This
includes such parameters as server usage, project teams, and departments. The goal of
end-to-end VLANs is to maintain 80 percent of the traffic on the local VLAN.
As many corporate networks have moved to centralize their resources, end-to-end
VLANs have become more difficult to maintain. Users are required to use many different
resources, many of which are no longer in their VLAN. Because of this shift in placement and
usage of resources, VLANs are now more frequently being created around geographic
boundaries rather than commonality boundaries.
This geographic location can be as large as an entire building or as small as a single
switch inside a wiring closet. In a VLAN structure, it is typical to find the new 20/80 rule in
effect. 80 percent of the traffic is remote to the user and 20 percent of the traffic is local to the
user. Although this topology means that the user must cross a Layer 3 device in order to reach
80 percent of the resources, this design allows the network to provide for a deterministic,
consistent method of accessing resources.
8.2.3 Configuring static VLANs
Static VLANs are ports on a switch that are manually assigned to a VLAN by using a
VLAN management application or by working directly within the switch. These ports
maintain their assigned VLAN configuration until they are changed manually. This type of
VLAN works well in networks where the following is true:
• Moves are controlled and managed.
• There is robust VLAN management software to configure the ports.
• It is not desirable to assume the additional overhead required when maintaining end-station MAC addresses and custom filtering tables.
Dynamic VLANs do not rely on ports assigned to a specific VLAN.
The following guidelines must be followed when configuring VLANs on Cisco 29xx
switches:
• The maximum number of VLANs is switch dependent.
• VLAN 1 is one of the factory-default VLANs.
• VLAN 1 is the default Ethernet VLAN.
• Cisco Discovery Protocol (CDP) and VLAN Trunking Protocol (VTP) advertisements are sent on VLAN 1.
• The Catalyst 29xx IP address is in the VLAN 1 broadcast domain by default.
• The switch must be in VTP server mode to create, add, or delete VLANs.
The creation of a VLAN on a switch is a straightforward task. If using a
Cisco IOS command-based switch, enter the VLAN configuration mode with the privileged
EXEC level vlan database command. The steps necessary to create the VLAN are shown
below. A VLAN name may also be configured, if necessary.
Switch#vlan database
Switch(vlan)#vlan vlan_number
Switch(vlan)#exit
Upon exiting, the VLAN is applied to the switch. The next step is to assign the VLAN to
one or more interfaces:
Switch(config)#interface fastethernet 0/9
Switch(config-if)#switchport access vlan vlan_number
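For instance, with concrete values (the VLAN number, name, and interface below are hypothetical), the complete sequence might look like:

```
Switch#vlan database
Switch(vlan)#vlan 2 name marketing
Switch(vlan)#exit
Switch(config)#interface fastethernet 0/9
Switch(config-if)#switchport access vlan 2
```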
Lab Activity
Lab Exercise: Configuring Static VLANs
In this lab, the student will create a basic switch configuration, verify it, and determine
the switch firmware version.
Lab Activity
e-Lab Activity: Configuring Static VLANs
In this lab, the students will create a basic switch configuration and verify it.
8.2.4 Verifying VLAN configuration
A good practice is to verify VLAN configuration by using the show vlan, show vlan
brief, or show vlan id id_number commands.
The following facts apply to VLANs:
• A created VLAN remains unused until it is mapped to switch ports.
• All Ethernet ports are on VLAN 1 by default.
Refer to Figure for a list of applicable commands.
Figure shows the steps necessary to assign a new VLAN to a port on the Sydney switch.
Figures and list the output of the show vlan and show vlan brief commands.
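As a brief sketch of the verification step (VLAN 2 is a hypothetical example):

```
Switch#show vlan brief
Switch#show vlan id 2
```

If an earlier port assignment succeeded, the show vlan brief output should list the assigned port under its new VLAN rather than under the default VLAN 1.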
Lab Activity
Lab Exercise: Verifying VLAN Configurations
In this lab, the student will create a basic switch configuration, verify it, and determine
the switch firmware version.
Lab Activity
e-Lab Activity: Verifying VLAN Configurations
In this lab, the students will create two separate VLANs on the switch.
8.2.5 Saving VLAN configuration
It is often useful to keep a copy of the VLAN configuration as a text file for backup or
auditing purposes.
The switch configuration settings may be backed up in the usual way using the copy
running-config tftp command. Alternatively, the HyperTerminal capture text feature can be
used to store the configuration settings.
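A minimal sketch of the backup command mentioned above:

```
Switch#copy running-config tftp
```

One caveat: on many Catalyst switches configured through vlan database mode, VLAN and VTP information is stored in the vlan.dat file in flash rather than in the running configuration, so capturing the output of show vlan as text is a useful complement to this backup.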
8.2.6 Deleting VLANs
Removing a VLAN from a Cisco IOS command-based switch interface is just like
removing a command from a router. In Figure , VLAN 300 was created on Fastethernet 0/9
using the interface configuration switchport access vlan 300 command. To remove this VLAN
from the interface, simply use the no form of the command.
When a VLAN is deleted any ports assigned to that VLAN become inactive. The ports
will, however, remain associated with the deleted VLAN until assigned to a new VLAN.
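As a sketch of both operations, using the VLAN 300 example from the text (lines beginning with ! are comments):

```
! Remove the VLAN assignment from the interface
Switch(config)#interface fastethernet 0/9
Switch(config-if)#no switchport access vlan 300
! Delete the VLAN itself from the VLAN database
Switch#vlan database
Switch(vlan)#no vlan 300
Switch(vlan)#exit
```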
Lab Activity
Lab Exercise: Deleting VLAN Configurations
The purpose of this exercise is to delete VLAN settings.
Lab Activity
e-Lab Activity: Deleting VLAN Configurations
In this lab, the students will create two separate VLANs on the switch.
8.3 Troubleshooting VLANs
8.3.1 Overview
VLANs are now commonplace in campus networks. VLANs give network engineers
flexibility in designing and implementing networks. VLANs also enable broadcast
containment, security, and geographically disparate communities of interest. However, as with
basic LAN switching, problems can occur when VLANs are implemented. This lesson will
show some of the more common problems that can occur with VLANs, and it will provide
several tools and techniques for troubleshooting.
Students completing this lesson should be able to:
• Utilize a systematic approach to VLAN troubleshooting
• Demonstrate the steps for general troubleshooting in switched networks
• Describe how spanning-tree problems can lead to broadcast storms
• Use show and debug commands to troubleshoot VLANs
8.3.2 VLAN troubleshooting process
It is important to develop a systematic approach for troubleshooting switch-related
problems. The following steps can assist in isolating a problem on a switched network:
• Check the physical indications, such as LED status.
• Start with a single configuration on a switch and work outward.
• Check the Layer 1 link.
• Check the Layer 2 link.
• Troubleshoot VLANs that span several switches.
When troubleshooting, check to see if the problem is a recurring one rather than an
isolated fault. Some recurring problems are due to growth in demand for services by
workstation ports outpacing the configuration, trunking, or capacity to access server resources.
For example, the use of Web technologies and traditional applications, such as file transfer
and e-mail, is causing network traffic growth that enterprise networks must handle.
Many campus LANs face unpredictable network traffic patterns that result from the
combination of intranet traffic, fewer centralized campus server locations, and the increasing
use of multicast applications. The old 80/20 rule, which stated that only 20 percent of network
traffic went over the backbone, is obsolete. Internal Web browsing now enables users to locate
and access information anywhere on the corporate intranet. Traffic patterns are dictated by
where the servers are located and not by the physical workgroup configurations with which
they happen to be grouped.
If a network frequently experiences bottleneck symptoms, like excessive overflows,
dropped frames, and retransmissions, there may be too many ports riding on a single trunk or
too many requests for global resources and access to intranet servers.
Bottleneck symptoms may also occur because a majority of the traffic is being forced to
traverse the backbone. Another cause may be that any-to-any access is common, as users draw
upon corporate Web-based resources and multimedia applications. In this case, it may be
necessary to consider increasing the network resources to meet the growing demand.
8.3.3 Preventing broadcast storms
A broadcast storm occurs when a large number of broadcast packets are received on a
port. Forwarding these packets can cause the network to slow down or to time out. Storm
control is configured for the switch as a whole, but operates on a per-port basis. Storm control
is disabled by default.
Broadcast storms can be prevented by setting high and low threshold values that discard
excessive broadcast, multicast, or unicast MAC traffic. In addition, a rising threshold can be
configured so that the switch shuts the port down when the threshold is exceeded.
STP problems include broadcast storms, loops, and dropped BPDUs and packets. The
function of STP is to ensure that no logical loops occur in a network by designating a root
bridge. The root bridge is the central point of a spanning-tree configuration that controls how
the protocol operates.
Knowing the location of the root bridge in the extended router and switch network is
necessary for effective troubleshooting. The show commands on both the router and the
switch can display root-bridge information. Root bridge timers can be configured to set
parameters such as the forwarding delay or the maximum age of STP information. Manually
configuring a device as the root bridge is another configuration option.
If the extended router and switch network encounters a period of instability, it helps to
minimize the STP processes occurring between devices.
If it becomes necessary to reduce BPDU traffic, put the timers on the root bridge at their
maximum values. Specifically, set the forward delay parameter to the maximum of 30
seconds, and set the max_age parameter to the maximum of 40 seconds.
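On an IOS-based switch, the timer adjustments described above might be entered as follows for VLAN 1 (a sketch; exact command availability varies by platform and IOS version):

```
Switch(config)#spanning-tree vlan 1 forward-time 30
Switch(config)#spanning-tree vlan 1 max-age 40
```

Manually designating the root bridge, mentioned earlier, can be approached the same way, for example with spanning-tree vlan 1 root primary on platforms that support it.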
A physical port on a router or switch may be part of more than one spanning tree if it is a
trunk.
Note: VTP runs on Catalyst switches not routers.
It is advisable to configure a Catalyst switch neighboring a router to operate in VTP
transparent mode until Cisco supports VTP on its routers.
The Spanning-Tree Protocol (STP) is considered one of the most important Layer 2
protocols on the Catalyst switches. By preventing logical loops in a bridged network, STP
allows Layer 2 redundancy without generating broadcast storms.
Minimize spanning-tree problems by actively developing a baseline study of the
network.
8.3.4 Troubleshooting VLANs
The show and debug commands can be extremely useful when troubleshooting VLANs.
Figure illustrates the most common problems found when troubleshooting VLANs.
To troubleshoot the operation of Fast Ethernet router connections to switches, it is
necessary to make sure that the router interface configuration is complete and correct. Verify
that an IP address is not configured on the Fast Ethernet interface. IP addresses are configured
on each subinterface of a VLAN connection. Verify that the duplex configuration on the
router matches that on the appropriate port/interface on the switch.
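The paragraph above describes a router-on-a-stick arrangement, which might be sketched as follows (the interface, VLAN IDs, and addresses are hypothetical):

```
Router(config)#interface fastethernet 0/0
Router(config-if)#no ip address
Router(config-if)#no shutdown
Router(config-if)#interface fastethernet 0/0.10
Router(config-subif)#encapsulation dot1q 10
Router(config-subif)#ip address 10.1.10.1 255.255.255.0
Router(config-subif)#interface fastethernet 0/0.20
Router(config-subif)#encapsulation dot1q 20
Router(config-subif)#ip address 10.1.20.1 255.255.255.0
```

Note that the IP addresses sit on the subinterfaces, one per VLAN, and not on the physical interface itself.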
The show vlan command displays the VLAN information on the switch. Figure
displays the output from the show vlan command. The display shows the VLAN ID, name,
status, and assigned ports.
The CatOS show vlan keyword options and keyword syntax descriptions of each field
are also shown.
The show vlan command followed by the VLAN number displays specific information
about that VLAN on the router. Output from the command includes the VLAN ID, router
subinterface, and protocol information.
The show spanning-tree command displays the spanning-tree topology known to the
router. This command will show the STP settings used by the router for a spanning-tree
bridge in the router and switch network.
The first part of the show spanning-tree output lists global spanning tree configuration
parameters, followed by those that are specific to given interfaces.
Bridge Group 1 is executing the IEEE compatible Spanning-Tree Protocol.
The following lines of output show the current operating parameters of the spanning
tree:
Bridge Identifier has priority 32768, address 0008.e32e.e600
Configured hello time 2, Max age 20, forward delay 15
The following line of output shows that the router is the root of the spanning tree:
We are the root of the spanning tree.
Key information from the show spanning-tree command creates a map of the STP
network.
The debug sw-vlan packets command displays general information about VLAN packets
that the router received but is not configured to support. VLAN packets that the router is
configured to route or switch are counted and indicated when using the show sw-vlan
command.
8.3.5 VLAN troubleshooting scenarios
Proficiency at troubleshooting switched networks is achieved as the techniques are
learned and adapted to the needs of the company. Experience is the best way of improving
troubleshooting skills.
Three practical VLAN troubleshooting scenarios covering the most common problems
will be described. Each scenario contains an analysis of the problem followed by the steps
for solving it. With the appropriate commands and meaningful information gathered from
their outputs, the troubleshooting process can be carried through to completion.
Scenario 1:
A trunk link cannot be established between a switch and a router.
When having difficulty with a trunk connection between a switch and a router, be sure to
consider the following possible causes:
• Make sure that the port is connected and not receiving any physical-layer, alignment, or frame-check-sequence (FCS) errors. This can be done with the show interface command on the switch.
• Verify that the duplex and speed are set properly between the switch and the router. This can be done with the show int status command on the switch or the show interface command on the router.
• Configure the physical router interface with one subinterface for each VLAN that will route traffic. Verify this with the show interface IOS command. Also, make sure that each subinterface on the router has the proper encapsulation type, VLAN number, IP address, and subnet mask configured. This can be done with the show interface or show running-config IOS commands.
• Confirm that the router is running an IOS release that supports trunking. This can be verified with the show version command.
Scenario 2:
VTP is not correctly propagating VLAN configuration changes.
When VTP is not correctly affecting configuration updates on other switches in the VTP
domain, check the following possible causes:
• Make sure the switches are connected through trunk links. VTP updates are exchanged only over trunk links. This can be verified with the show int status command.
• Make sure the VTP domain name is the same on all switches that need to communicate with each other. VTP updates are exchanged only between switches in the same VTP domain. This scenario is one of the most common VTP problems. It can be verified with the show vtp status command on the participating switches.
• Check the VTP mode of the switch. If the switch is in VTP transparent mode, it will not update its VLAN configuration dynamically. Only switches in VTP server or VTP client mode update their VLAN configuration based on VTP updates from other switches. Again, use the show vtp status command to verify this.
• If using VTP passwords, the same password must be configured on all switches in the VTP domain. To clear an existing VTP password, use the no vtp password command in VLAN configuration mode.
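The checks above might be carried out with commands such as the following (the domain name campus is hypothetical):

```
Switch#show int status
Switch#show vtp status
Switch#vlan database
Switch(vlan)#vtp domain campus
Switch(vlan)#no vtp password
Switch(vlan)#exit
```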
Scenario 3:
Dropped packets and loops.
Spanning-tree bridges use topology change notification Bridge Protocol Data Unit
packets (BPDUs) to notify other bridges of a change in the spanning-tree topology of the
network. The bridge with the lowest identifier in the network becomes the root. Bridges send
these BPDUs any time a port makes a transition to or from a forwarding state, as long as there
are other ports in the same bridge group. These BPDUs migrate toward the root bridge.
There can be only one root bridge per bridged network. An election process determines
the root bridge. The root determines values for configuration messages, in the BPDUs, and
then sets the timers for the other bridges. Other designated bridges determine the shortest path
to the root bridge and are responsible for advertising BPDUs to other bridges through
designated ports. A bridge should have ports in the blocking state if there is a physical loop.
Problems can arise for internetworks in which both IEEE and DEC spanning-tree
algorithms are used by bridging nodes. These problems are caused by differences in the way
the bridging nodes handle spanning tree BPDU packets, or hello packets, and in the way they
handle data.
In this scenario, Switch A, Switch B, and Switch C are running the IEEE spanning-tree
algorithm. Switch D is inadvertently configured to use the DEC spanning-tree algorithm.
Switch A claims to be the IEEE root and Switch D claims to be the DEC root. Switch B
and Switch C propagate root information on all interfaces for IEEE spanning tree. However,
Switch D drops IEEE spanning-tree information. Similarly, the other switches ignore Switch
D's claim to be root.
The result is that none of the bridges believes there is a loop, and when a broadcast
packet is sent on the network, a broadcast storm results over the entire internetwork. This
broadcast storm will include Switches X and Y, and beyond.
To resolve this problem, reconfigure Switch D for IEEE. Although a configuration
change is necessary, it might not be sufficient to reestablish connectivity. There will be a
reconvergence delay as devices exchange BPDUs and recompute a spanning tree for the
network.
Summary
An understanding of the following key points should have been achieved:
• ISL and 802.1Q trunking
• Geographic VLANs
• Configuring static VLANs on 29xx series Catalyst switches
• Verifying and saving VLAN configurations
• Deleting VLANs from a switch
• Definition of VLANs
• The benefits of VLANs
• How VLANs are used to create broadcast domains
• How routers are used for communication between VLANs
• The common VLAN types
• A systematic approach to VLAN troubleshooting
• The steps for general troubleshooting in switched networks
• How spanning-tree problems can lead to broadcast storms
• Using show and debug commands to troubleshoot VLANs
Chapter 9
VLAN Trunking Protocol
Overview
Early VLANs were difficult to implement across networks. Most VLANs were defined
on each switch, which meant that defining VLANs over an extended network was a
complicated task. Every switch manufacturer had a different idea of the best ways to make
their switches VLAN capable, which further complicated matters. VLAN trunking was
developed to solve these problems.
VLAN trunking allows many VLANs to be defined throughout an organization by
adding special tags to frames to identify the VLAN to which they belong. This tagging allows
many VLANs to be carried across a common backbone, or trunk. VLAN trunking is
standards-based, with the IEEE 802.1Q trunking protocol now widely implemented. Cisco’s
Inter-Switch Link (ISL) is a proprietary trunking protocol that can be implemented in all
Cisco networks.
VLAN trunking uses tagged frames to allow multiple VLANs to be carried throughout a
large switched network over shared backbones. Manually configuring and maintaining VLAN
Trunking Protocol (VTP) on numerous switches can be challenging. The benefit of VTP is
that, once a network is configured with VTP, many of the VLAN configuration tasks are
automatic.
This module explains VTP implementation in a VLAN switched LAN environment.
VLAN technology provides network administrators with many advantages. Among
other things, VLANs help control Layer 3 broadcasts, they improve network security, and
they can help logically group network users. However, VLANs have an important limitation.
They operate at Layer 2, which means that devices on one VLAN cannot communicate with
users on another VLAN without the use of routers and network layer addresses.
Students completing this module should be able to:
• Explain the origins and functions of VLAN trunking
• Describe how trunking enables the implementation of VLANs in a large network
• Define IEEE 802.1Q
• Define Cisco ISL
• Configure and verify a VLAN trunk
• Define VTP
• Explain why VTP was developed
• Describe the contents of VTP messages
• List and define the three VTP modes
• Configure and verify VTP on an IOS-based switch
• Explain why routing is necessary for inter-VLAN communication
• Explain the difference between physical and logical interfaces
• Define subinterfaces
• Configure inter-VLAN routing using subinterfaces on a router port
9.1 Trunking
9.1.1 History of trunking
The history of trunking goes back to the origins of radio and telephony technologies. In
radio technologies, a trunk is a single communications line that carries multiple channels of
radio signals.
In the telephony industry, the trunking concept is associated with the telephone
communication path or channel between two points. One of these two points is usually the
Central Office (CO). Shared trunks may also be created for redundancy between COs.
The concept that had been used by the telephone and radio industries was then adopted
for data communications. An example of this in a communications network is a backbone link
between an MDF and an IDF. A backbone is composed of a number of trunks.
At present, the same principle of trunking is applied to network switching technologies.
A trunk is a physical and logical connection between two switches across which network
traffic travels.
9.1.2 Trunking concepts
As mentioned before, a trunk is a physical and logical connection between two switches
across which network traffic travels. It is a single transmission channel between two points.
Those points are usually switching centers.
In the context of a VLAN switching environment, a trunk is a point-to-point link that
supports several VLANs. The purpose of a trunk is to conserve ports when creating a link
between two devices implementing VLANs. Figure illustrates two VLANs shared across
two switches (Sa and Sb). Each switch is using two physical links so that each port carries
traffic for a single VLAN. This is the simplest way of implementing inter-switch VLAN
communication, but it does not scale well.
Adding a third VLAN would require using two additional ports, one on each connected
switch. This design is also inefficient in terms of load sharing. In addition, the traffic on some
VLANs may not justify a dedicated link. Trunking will bundle multiple virtual links over one
physical link by allowing the traffic for several VLANs to travel over a single cable between
the switches.
Trunking can be compared to a highway distributor. Roads with different starting and
ending points share a main national highway for a few kilometers, then divide again to reach
their particular destinations. This method is more cost-effective than building an entire road
from start to end for every existing or new destination.
9.1.3 Trunking operation
The switching tables at both ends of the trunk can be used to make port forwarding
decisions based on frame destination MAC addresses. As the number of VLANs traveling
across the trunk increases, the forwarding decisions become slower and more difficult to
manage. The decision process becomes slower because the larger switching tables take
longer to process.
Trunking protocols were developed to effectively manage the transfer of frames from
different VLANs on a single physical line. The trunking protocols establish agreement for the
distribution of frames to the associated ports at both ends of the trunk.
Currently, two types of trunking mechanisms exist: frame filtering and frame tagging.
Frame tagging has been adopted as the standard trunking mechanism by the IEEE.
Trunking protocols that use a frame tagging mechanism assign an identifier to the
frames to make their management easier and to achieve a faster delivery of the frames.
The unique physical link between the two switches is able to carry traffic for any VLAN.
In order to achieve this, each frame sent on the link is tagged to identify which VLAN it
belongs to. Different tagging schemes exist. The most common tagging schemes for Ethernet
segments are listed below:
• ISL – Cisco proprietary Inter-Switch Link protocol.
• 802.1Q – IEEE standard that will be focused on in this section.
Interactive Media Activity
Fill in the Blanks: Trunking Operation
When the student has completed this activity, the student will understand how using
trunk links can reduce the number of physical interfaces needed on a switch.
9.1.4 VLANs and trunking
Specific protocols, or rules, are used to implement trunking. Trunking provides an
effective method to distribute VLAN ID information to other switches.
Using frame tagging as the standard trunking mechanism, as opposed to frame filtering,
provides a more scalable solution to VLAN deployment. Frame tagging is the way to
implement VLANs according to IEEE 802.1Q.
VLAN frame tagging is an approach that has been specifically developed for switched
communications. Frame tagging places a unique identifier in the header of each frame as it is
forwarded throughout the network backbone. The identifier is understood and examined by
each switch before any broadcasts or transmissions are made to other switches, routers, or
end-station devices. When the frame exits the network backbone, the switch removes the
identifier before the frame is transmitted to the target end station. Frame tagging functions at
Layer 2 and requires little processing or administrative overhead.
It is important to understand that a trunk link does not belong to a specific VLAN. The
responsibility of a trunk link is to act as a conduit for VLANs between switches and routers.
ISL is a protocol that maintains VLAN information as traffic flows between the switches.
With ISL, an Ethernet frame is encapsulated with a header that contains a VLAN ID.
9.1.5 Trunking implementation
To create or configure a VLAN trunk on a Cisco IOS command-based switch, configure
the port first as a trunk and then specify the trunk encapsulation with the following
commands:
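As a rough sketch of such a configuration (the interface is hypothetical; syntax varies by Catalyst platform, and switches such as the Catalyst 2950 support only 802.1Q, so the encapsulation command does not exist there):

```
Switch(config)#interface fastethernet 0/1
Switch(config-if)#switchport trunk encapsulation dot1q
Switch(config-if)#switchport mode trunk
```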
Before attempting to configure a VLAN trunk on a port, determine what encapsulation
the port can support. This can be done using the show port capabilities command.
In the example, notice in the highlighted text that Port 2/1 will support only the IEEE 802.1Q
encapsulation.
Verify that trunking has been configured and verify the settings by using the show trunk
[mod_num/port_num] command from privileged mode on the switch.
Figure shows the trunking modes available in Fast Ethernet and Gigabit Ethernet.
Lab Activity
Lab Exercise: Trunking with ISL
This lab is to create an ISL trunk line between the two switches to allow communication
between paired VLANs.
Lab Activity
Lab Exercise: Trunking with 802.1q
This lab is to create an 802.1q trunk line between the two switches to allow
communication between paired VLANs.
Lab Activity
e-Lab Activity: Trunking with ISL
In this lab, the student will create multiple VLANs on two separate switches, name the
switches, and assign multiple member ports to the switches.
Lab Activity
e-Lab Activity: Trunking with 802.1q
In this lab, the student will create multiple VLANs on two separate switches, name the
switches, and assign multiple member ports to the switches.
9.2 VTP
9.2.1 History of VTP
VLAN Trunking Protocol (VTP) was created to solve operational problems in a
switched network with VLANs.
Consider the example of a domain with several interconnected switches that support
several VLANs. To maintain connectivity within VLANs, each VLAN must be manually
configured on each switch. As the organization grows and additional switches are added to the
network, each new switch must be manually configured with VLAN information. A single
incorrect VLAN assignment could cause two potential problems:
• Cross-connected VLANs due to VLAN configuration inconsistencies
• VLAN misconfiguration across mixed media environments such as Ethernet and Fiber Distributed Data Interface (FDDI)
With VTP, VLAN configuration is consistently maintained across a common
administrative domain. Additionally, VTP reduces the complexity of managing and
monitoring VLAN networks.
9.2.2 VTP concepts
The role of VTP is to maintain VLAN configuration consistency across a common
network administration domain. VTP is a messaging protocol that uses Layer 2 trunk frames
to manage the addition, deletion, and renaming of VLANs on a single domain. Further, VTP
allows for centralized changes that are communicated to all other switches in the network.
VTP messages are encapsulated in either Cisco proprietary Inter-Switch Link (ISL) or
IEEE 802.1Q protocol frames, and passed across trunk links to other devices. In IEEE 802.1Q
frames, a 4-byte tag field is added to the frame. Both formats carry the VLAN ID.
While switch ports are normally assigned to only a single VLAN, trunk ports by default
carry frames from all VLANs.
9.2.3 VTP operation
A VTP domain is made up of one or more interconnected devices that share the same
VTP domain name. A switch can be in one VTP domain only.
When transmitting VTP messages to other switches in the network, the VTP message is
encapsulated in a trunking protocol frame such as ISL or IEEE 802.1Q. Figure shows the
generic encapsulation for VTP within an ISL frame. The VTP header varies, depending upon
the type of VTP message, but generally, four items are found in all VTP messages:
• VTP protocol version: Either Version 1 or 2
• VTP message type: Indicates one of four types
• Management domain name length: Indicates the size of the name that follows
• Management domain name: The name configured for the management domain
VTP switches operate in one of three modes:
- Server
- Client
- Transparent
VTP servers can create, modify, and delete VLANs and VLAN configuration parameters
for the entire domain. VTP servers save VLAN configuration information in the switch
NVRAM and send VTP messages out all trunk ports.
VTP clients cannot create, modify, or delete VLAN information. This mode is useful for
switches lacking memory to store large tables of VLAN information. The only role of VTP
clients is to process VLAN changes and send VTP messages out all trunk ports.
Chapter 9
VLAN Trunking Protocol
Switches in VTP transparent mode forward VTP advertisements but ignore information
contained in the message. A transparent switch will not modify its database when updates are
received, nor will the switch send out an update indicating a change in its VLAN status.
Except for forwarding VTP advertisements, VTP is disabled on a transparent switch.
VLANs detected within the advertisements serve as notification to the switch that traffic
with the newly defined VLAN IDs may be expected.
In the figure, Switch C transmits a VTP database entry with additions or deletions to
Switch A and Switch B. The configuration database has a revision number that is incremented
by one. A higher configuration revision number indicates that the VLAN information being
sent is more current than the stored copy. Any time a switch receives an update that has a
higher configuration revision number, the switch overwrites the stored information with the
new information in the VTP update. Switch F will not process the update because it is in a
different domain. This overwrite process means that if a VLAN does not exist in the new
database, it is deleted from the switch. In addition, VTP maintains its own NVRAM. The
erase startup-config command clears the NVRAM of configuration commands, but not the
VTP database revision number. To set the configuration revision number back to zero, the
switch must be rebooted.
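The overwrite rule just described can be sketched in Python. The dictionaries and function name are illustrative only, not part of VTP itself; the point is the comparison: same domain plus a higher revision number means the stored database is replaced wholesale.

```python
# Sketch of the VTP update-acceptance rule: a switch overwrites its stored
# VLAN database only when an advertisement arrives from its own domain
# with a higher configuration revision number. (Illustrative names only;
# wrap-around handling of the revision counter is omitted.)

def accept_update(switch, adv_domain, adv_revision, adv_vlans):
    if adv_domain != switch["domain"]:
        return False                   # different domain: ignored (like Switch F)
    if adv_revision <= switch["revision"]:
        return False                   # not newer: keep the stored copy
    switch["revision"] = adv_revision
    switch["vlans"] = dict(adv_vlans)  # overwrite; absent VLANs are deleted
    return True

sw = {"domain": "cisco", "revision": 5, "vlans": {1: "default", 20: "sales"}}
accept_update(sw, "cisco", 6, {1: "default", 30: "eng"})
print(sw["revision"], sorted(sw["vlans"]))  # 6 [1, 30]
```

Note that VLAN 20 disappears after the update, mirroring the deletion behavior described above.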
By default, management domains are set to a nonsecure mode, meaning that the
switches interact without using a password. Adding a password automatically sets the
management domain to secure mode. The same password must be configured on every switch
in the management domain to use secure mode.
9.2.4 VTP implementation
With VTP, each switch advertises on its trunk ports its management domain,
configuration revision number, the VLANs that it knows about, and certain parameters for
each known VLAN. These advertisement frames are sent to a multicast address so that all
neighboring devices can receive the frames. However, the frames are not forwarded by
normal bridging procedures. All devices in the same management domain learn about any
new VLANs configured in the transmitting device. A new VLAN must be created and
configured on one device only in the management domain. All the other devices in the same
management domain automatically learn the information.
Advertisements on factory-default VLANs are based on media types. User ports should
not be configured as VTP trunks.
Each advertisement starts with configuration revision number 0. As changes are made, the
configuration revision number increases by one (n + 1). The revision number continues to
increment until it reaches 2,147,483,648. When it reaches that point, the counter resets back
to zero.
There are two types of VTP advertisements:
- Requests from clients that want information at bootup
- Responses from servers
There are three types of VTP messages:
- Advertisement requests
- Summary advertisements
- Subset advertisements
With advertisement requests, clients request VLAN information and the server responds
with summary and subset advertisements.
By default, server and client Catalyst switches issue summary advertisements every five
minutes. Servers inform neighbor switches what they believe to be the current VTP revision
number. Assuming the domain names match, the receiving server or client compares the
configuration revision number. If the revision number in the advertisement is higher than the
current revision number in the receiving switch, the receiving switch then issues an
advertisement request for new VLAN information.
Subset advertisements contain detailed information about VLANs such as VTP version
type, domain name and related fields, and the configuration revision number. The following
can trigger these advertisements:
- Creating or deleting a VLAN
- Suspending or activating a VLAN
- Changing the name of a VLAN
- Changing the maximum transmission unit (MTU) of a VLAN
Advertisements may contain some or all of the following information:
- Management domain name: Advertisements with a different name are ignored.
- Configuration revision number: A higher number indicates a more recent configuration.
- Message Digest 5 (MD5): MD5 is the key sent with the VTP update when a password has been assigned. If the key does not match, the update is ignored.
- Updater identity: The identity of the switch that is sending the VTP summary advertisement.
9.2.5 VTP configuration
The following basic tasks must be considered before configuring VTP and VLANs on
the network:
- Determine the version number of VTP that will be utilized.
- Decide if this switch is to be a member of an existing management domain or if a new domain should be created. If a management domain exists, determine the name and password of the domain.
- Choose a VTP mode for the switch.
Two versions of VTP are available, Version 1 and Version 2. The two versions are not
interoperable. If a switch in a domain is configured for VTP Version 2, all switches in the
management domain must be configured for VTP Version 2. VTP Version 1 is the default.
VTP Version 2 should be implemented only when it offers specific features that VTP
Version 1 does not. The most commonly needed such feature is Token Ring VLAN support.
To configure the VTP version on a Cisco IOS command-based switch, first enter VLAN
database mode, then use the vtp v2-mode command:
Switch#vlan database
Switch(vlan)#vtp v2-mode
If the switch being installed is the first switch in the network, create the management
domain. If the management domain has been secured, configure a password for the domain.
To create a management domain use the following command:
Switch(vlan)#vtp domain cisco
The domain name can be between 1 and 32 characters. The password must be between 8
and 64 characters long.
To add a VTP client to an existing VTP domain, always verify that its VTP configuration
revision number is lower than the configuration revision number of the other switches in the
VTP domain. Use the show vtp status command. Switches in a VTP domain always use the
VLAN configuration of the switch with the highest VTP configuration revision number. If a
switch is added that has a revision number higher than the revision number in the VTP
domain, it can erase all VLAN information from the VTP server and VTP domain.
Choose one of the three available VTP modes for the switch. If this is the first switch in
the management domain and additional switches will be added, set the mode to server. The
additional switches will be able to learn VLAN information from this switch. There should be
at least one server.
In VTP transparent mode, VLANs can be created, deleted, and renamed at will without the
switch propagating the changes to other switches. If a large number of people are configuring
devices within the network, there is a risk of overlapping VLANs, in which two VLANs with
different meanings exist in the network under the same VLAN identification.
To set the correct mode of the Cisco IOS command-based switch, use the following
command:
Switch(vlan)#vtp {client | server | transparent}
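Putting the earlier steps together, the first switch in a new domain might be configured as a server as follows. This is a sketch; the domain name and password are illustrative, and the password must satisfy the 8-to-64-character requirement noted above:

```
Switch#vlan database
Switch(vlan)#vtp domain cisco
Switch(vlan)#vtp password cisco123
Switch(vlan)#vtp server
Switch(vlan)#exit
```

Subsequent switches would join the same domain, typically as clients, and learn the VLAN database from this server.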
The figure shows the output of the show vtp status command. This command is used to
verify VTP configuration settings on a Cisco IOS command-based switch.
The figure shows an example of the show vtp counters command. This command is used
to display statistics about advertisements sent and received on the switch.
Lab Activity
Lab Exercise: VTP Client and Server Configurations
This lab is to configure the VTP protocol to establish server and client switches.
Lab Activity
e-Lab Activity: VTP Client and Server Configurations
In this lab, the student will configure the VTP protocol to establish server and client
switches.
9.3 Inter-VLAN Routing Overview
9.3.1 VLAN basics
A VLAN is a logical grouping of devices or users that can be grouped by function,
department, or application regardless of their physical location.
VLANs are configured at the switch through software. The number of competing VLAN
implementations may require the use of proprietary software from the switch vendor. Grouping
ports and users into communities of interest, referred to as VLAN organizations, may be
accomplished with a single switch or, more powerfully, among connected switches within the
enterprise. By grouping ports and users together across multiple switches, VLANs can span
single-building infrastructures or interconnected buildings. VLANs assist in the effective use
of bandwidth because the devices in a VLAN share the same broadcast domain, or Layer 3
network. Devices in a VLAN contend for the same bandwidth, although bandwidth
requirements may vary greatly by workgroup or department.
The following are some VLAN configuration issues:
- A switch creates a broadcast domain
- VLANs help manage broadcast domains
- VLANs can be defined on port groups, users, or protocols
- LAN switches and network management software provide a mechanism to create VLANs
VLANs help control the size of broadcast domains and localize traffic. VLANs are
associated with individual networks. Therefore, network devices in different VLANs cannot
directly communicate without the intervention of a Layer 3 routing device.
When a node in one VLAN needs to communicate with a node in another VLAN, a
router is necessary to route the traffic between VLANs. Without the routing device,
inter-VLAN traffic would not be possible.
9.3.2 Introducing inter-VLAN routing
When a host in one broadcast domain wishes to communicate with a host in another
broadcast domain, a router must be involved.
Port 1 on a switch is part of VLAN 1, and port 2 is part of VLAN 200. If all of the
switch ports were part of VLAN 1, the hosts connected to these ports could communicate. In
this case however, the ports are part of different VLANs, VLAN 1 and VLAN 200. A router
must be involved if hosts from the different VLANs need to communicate.
The most important benefit of routing is its proven history of facilitating networks,
particularly large networks. Although the Internet serves as the obvious example, this point is
true for any type of network, such as a large campus backbone. Because routers prevent
broadcast propagation and use more intelligent forwarding algorithms than bridges and
switches, routers provide more efficient use of bandwidth. This simultaneously results in
flexible and optimal path selection. For example, it is very easy to implement load balancing
across multiple paths in most networks when routing. On the other hand, Layer 2 load
balancing can be very difficult to design, implement, and maintain.
If a VLAN spans multiple devices, a trunk is used to interconnect the devices. A
trunk carries traffic for multiple VLANs. For example, a trunk can connect a switch to another
switch, a switch to the inter-VLAN router, or a switch to a server with a special NIC installed
that supports trunking.
Remember that when a host on one VLAN wants to communicate with a host on another,
a router must be involved.
Interactive Media Activity
Drag and Drop: Inter-VLAN Routing
In this activity, the student will learn the path that packets take in a network with
inter-VLAN routing and will predict the path a packet takes given the source host and the
destination host.
9.3.3 Inter-VLAN issues and solutions
When VLANs are connected together, several technical issues will arise. Two of the
most common issues that arise in a multiple-VLAN environment are:
- The need for end user devices to reach non-local hosts
- The need for hosts on different VLANs to communicate
When a device needs to make a connection to a remote host, it checks its routing table to
determine if a known path exists. If the remote host falls into a subnet that it knows how to
reach, then the system checks to see if it can connect along that interface. If all known paths
fail, the system has one last option, the default route. This route is a special type of gateway
route, and it is usually the only one present in the system. On a router, an asterisk (*) indicates
a default route in the output of the show ip route command. For hosts on a local area network,
this gateway is set to whatever machine has a direct connection to the outside world, and it is
the Default Gateway listed in the workstation TCP/IP settings. If the default route is being
configured for a router which itself is functioning as the gateway to the public Internet, then
the default route will point to the gateway machine at an Internet service provider (ISP) site.
Default routes are implemented using the ip route command.
Router(config)#ip route 0.0.0.0 0.0.0.0 192.168.1.1
In this example, 192.168.1.1 is the gateway. Inter-VLAN connectivity can be achieved
through either logical or physical connectivity.
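The lookup logic described above (prefer the most specific matching route, and fall back to 0.0.0.0/0 when nothing else matches) can be sketched in Python. The routing table entries and function name are illustrative, not actual router state:

```python
import ipaddress

# Illustrative routing table: prefix -> next hop. The 0.0.0.0/0 entry is the
# default route, matching every destination, so it wins only when no more
# specific route is known.
routes = {
    "192.168.10.0/24": "10.0.0.2",
    "0.0.0.0/0": "192.168.1.1",   # ip route 0.0.0.0 0.0.0.0 192.168.1.1
}

def next_hop(dest):
    """Longest-prefix match over the table above."""
    addr = ipaddress.ip_address(dest)
    matching = [ipaddress.ip_network(p) for p in routes
                if addr in ipaddress.ip_network(p)]
    best = max(matching, key=lambda n: n.prefixlen)
    return routes[str(best)]

print(next_hop("192.168.10.7"))  # 10.0.0.2 (the specific route wins)
print(next_hop("8.8.8.8"))       # 192.168.1.1 (falls back to the default route)
```

A host's default gateway setting plays the same role as the 0.0.0.0/0 entry here: it is the path of last resort.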
Logical connectivity involves a single connection, or trunk, from the switch to the router.
That trunk can support multiple VLANs. This topology is called a router on a stick because
there is a single connection to the router. However, there are multiple logical connections
between the router and the switch.
Physical connectivity involves a separate physical connection for each VLAN. This
means a separate physical interface for each VLAN.
Early VLAN designs relied on external routers connected to VLAN-capable switches. In
this approach, traditional routers are connected via one or more links to a switched network.
The router-on-a-stick designs employ a single trunk link that connects the router to the rest of
the campus network. Inter-VLAN traffic must cross the Layer 2 backbone to reach the
router where it can move between VLANs. Traffic then travels back to the desired end station
using normal Layer 2 forwarding. This out-to-the-router-and-back flow is characteristic of
router-on-a-stick designs.
Interactive Media Activity
Drag and Drop: Inter-VLAN Routing Issues and Solutions
In this activity, the student will learn about some of the problems that arise when using
VLANs, along with some of their solutions.
9.3.4 Physical and logical interfaces
In a traditional situation, a network with four VLANs would require four physical
connections between the switch and the external router.
As technologies such as Inter-Switch Link (ISL) became more common, network
designers began to use trunk links to connect routers to switches. Although any trunking
technology such as ISL, 802.1Q, 802.10, or LAN emulation (LANE) can be used,
Ethernet-based approaches such as ISL and 802.1Q are most common.
The Cisco-proprietary ISL protocol and the IEEE multivendor standard 802.1Q are both
used to trunk VLANs over Fast Ethernet links.
The solid line in the example refers to the single physical link between the Catalyst
Switch and the router. This is the physical interface that connects the router to the switch.
As the number of VLANs increases on a network, the physical approach of having one
router interface per VLAN quickly becomes unscalable. Networks with many VLANs must
use VLAN trunking to assign multiple VLANs to a single router interface.
The dashed lines in the example refer to the multiple logical links running over this
physical link using subinterfaces. The router can support many logical interfaces on individual
physical links. For example, the Fast Ethernet interface FastEthernet 0/0 might support three
virtual interfaces numbered FastEthernet 0/0.1, 0/0.2, and 0/0.3.
The primary advantage of using a trunk link is a reduction in the number of router and
switch ports used. Not only can this save money, it can also reduce configuration complexity.
Consequently, the trunk-connected router approach can scale to a much larger number of
VLANs than a one-link-per-VLAN design.
9.3.5 Dividing physical interfaces into subinterfaces
A subinterface is a logical interface within a physical interface, such as the Fast Ethernet
interface on a router.
Multiple subinterfaces can exist on a single physical interface.
Each subinterface supports one VLAN and is assigned one IP address. In order for
multiple devices on the same VLAN to communicate, the IP addresses of the subinterface
and of the attached devices must be on the same network or subnetwork. For example, if
subinterface 2 has an IP address of 192.168.1.1, then 192.168.1.2, 192.168.1.3, and
192.168.1.4 could be the IP addresses of devices attached to subinterface 2.
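The same-subnet requirement can be checked with Python's ipaddress module. This is a minimal sketch using the example addresses above (with one deliberately wrong address), not router behavior:

```python
import ipaddress

# The subinterface's address and mask define the network its hosts must share.
subif = ipaddress.ip_interface("192.168.1.1/24")   # e.g. subinterface 2

for host in ("192.168.1.2", "192.168.1.3", "192.168.2.4"):
    same = ipaddress.ip_address(host) in subif.network
    # The first two hosts are on 192.168.1.0/24; 192.168.2.4 is not.
    print(host, "same subnet" if same else "DIFFERENT subnet")
```

A host on a different subnet than its subinterface would be unreachable at Layer 3 without another route.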
In order to route between VLANs with subinterfaces, a subinterface must be created for
each VLAN.
The next section discusses the commands necessary to create subinterfaces and apply a
trunking protocol and an IP address to each subinterface.
9.3.6 Configuring inter-VLAN routing
This section demonstrates the commands necessary to configure inter-VLAN routing
between a router and a switch. Before any of these commands are implemented, each router
and switch should be checked to see which VLAN encapsulations they support. Catalyst 2950
switches have supported 802.1Q trunking since Cisco IOS Release 12.0(5.2)WC(1), but they
do not support Inter-Switch Link (ISL) trunking. In order for
inter-VLAN routing to work properly, all of the routers and switches involved must support
the same encapsulation.
On a router, an interface can be logically divided into multiple, virtual subinterfaces.
Subinterfaces provide a flexible solution for routing multiple data streams through a single
physical interface. To define subinterfaces on a physical interface, perform the following
tasks:
- Identify the interface.
- Define the VLAN encapsulation.
- Assign an IP address to the interface.
To identify the interface, use the interface command in global configuration mode.
Router(config)#interface fastethernet port-number.subinterface-number
The port-number identifies the physical interface, and the subinterface-number identifies
the virtual interface.
The router must be able to talk to the switch using a standardized trunking protocol. This
means that both devices that are connected together must understand each other. In the
example, 802.1q is used. To define the VLAN encapsulation, enter the encapsulation
command in interface configuration mode.
Router(config-if)#encapsulation dot1q vlan-number
The vlan-number identifies the VLAN for which the subinterface will carry traffic. A
VLAN ID is added to the frame only when the frame is destined for a nonlocal network. Each
VLAN packet carries the VLAN ID within the packet header.
To assign the IP address to the interface, enter the following command in interface
configuration mode.
Router(config-if)#ip address ip-address subnet-mask
The ip-address and subnet-mask are the 32-bit network address and mask of the specific
interface.
In the example, the router has three subinterfaces configured on Fast Ethernet interface
0/0. These three interfaces are identified as 0/0.1, 0/0.2, and 0/0.3. All three subinterfaces
use 802.1Q encapsulation. Interface 0/0.1 routes packets for VLAN 1, interface 0/0.2 routes
packets for VLAN 20, and interface 0/0.3 routes packets for VLAN 30.
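As a consolidated sketch, the three subinterfaces could be configured as follows. The IP addresses are illustrative, and 802.1Q encapsulation is shown, as in the earlier example:

```
Router(config)#interface fastethernet 0/0.1
Router(config-subif)#encapsulation dot1q 1
Router(config-subif)#ip address 192.168.1.1 255.255.255.0
Router(config-subif)#interface fastethernet 0/0.2
Router(config-subif)#encapsulation dot1q 20
Router(config-subif)#ip address 192.168.20.1 255.255.255.0
Router(config-subif)#interface fastethernet 0/0.3
Router(config-subif)#encapsulation dot1q 30
Router(config-subif)#ip address 192.168.30.1 255.255.255.0
```

Each subinterface's address then serves as the default gateway for the hosts in its VLAN.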
Lab Activity
Lab Exercise: Configuring Inter-VLAN Routing
This lab is to create a basic configuration on a router and test the routing functionality.
Lab Activity
e-Lab Activity: Configuring Inter-VLAN Routing
In this lab, the student will create a basic configuration on a router and test the routing
functionality.
Summary
An understanding of the following key points should have been achieved:
- The origins and functions of VLAN trunking
- How trunking enables the implementation of VLANs in a large network
- IEEE 802.1Q
- Cisco ISL
- Configuring and verifying a VLAN trunk
- Definition of VLAN Trunking Protocol (VTP)
- Why VTP was developed
- The contents of VTP messages
- The three VTP modes
- Configuring and verifying VTP on an IOS-based switch
- Why routing is necessary for inter-VLAN communication
- The difference between physical and logical interfaces
- Subinterfaces
- Configuring inter-VLAN routing using subinterfaces on a router port