
The Big Note

NETWORK APPLICATIONS:
THE BIG NOTE – A HARTMAN PRODUCTION
UNIT 1 – ELEMENTS OF MODERN NETWORKING:
THE KEY ELEMENTS OF A MODERN NETWORKING ECOSYSTEM:
The entire ecosystem exists to provide services to end users. The
term end user refers to the person who is working within an
enterprise or in a public setting or at home.
The user platform can be fixed (e.g. PC or workstation), portable
(e.g. laptop), or mobile (e.g. tablet or smartphone).
Users connect to network-based services and content through a
wide variety of network access facilities. These include digital
subscriber line (DSL) and cable modems, Wi-Fi and Worldwide
Interoperability for Microwave Access (WiMAX) wireless modems,
and cellular modems.
Such network access facilities enable the user to connect directly to
the Internet or to a variety of network providers, including Wi-Fi
networks and cellular networks.
Users also use network facilities to access applications and content. Application providers provide
applications, or apps, that run on the user’s platform. The application provider downloads software to
the user’s platform; however, the application service provider acts as a server or host of application
software that is executed on the provider’s platforms. A content provider is an organization or individual
that creates information, including educational or entertainment content, distributed via the Internet or
enterprise networks.
The networking ecosystem can be deployed via two different architectures:
1. Data center networking, which consists of a very large number of interconnected servers.
Typically, as much as 80% of the data traffic is within the data center network, and only 20%
relies on external networks to reach users.
2. IoT or fog networking, which consists of millions of devices; the vast bulk of the data traffic to
and from these devices is machine to machine rather than user to machine.
Consider two examples of networking ecosystems. Begin
with an architecture that could represent an enterprise
network of national or global extent, or a portion of the
Internet with some of its associated networks. Then present
another example to illustrate how enterprises design their
network facilities in a three-tier hierarchy: access,
distribution, and core. This example is illustrated to the
right.
Notice in the figure that the IP backbone (or core network)
typically consists of core routers, edge routers, and
aggregation routers.
Core routers are high performance routers that are
interconnected with high-volume optical links. The optical
links often use wavelength division multiplexing (WDM),
such that each link has multiple logical channels occupying
different portions of the optical bandwidth.
Edge routers are the routers that provide connectivity to external networks and users. Aggregation
routers are used within an enterprise network to connect several routers and switches to external
resources, such as an IP backbone or a high-speed WAN.
Enterprises often design their network facilities in a three-tier hierarchy of access, distribution, and core.
An access network is a local-area network (LAN) or campus-wide network that consists of LAN switches (typically
Ethernet switches) and, in larger LANs, IP routers that
provide connectivity among the switches.
The distribution network connects access networks with
each other and with the core network. An edge router in
the distribution network connects to an edge router in an
access network to provide connectivity. This connection
between edge routers is referred to as peering.
The core network (also referred to as the backbone
network) connects geographically dispersed distribution
networks as well as provides access to other networks that
are not part of the enterprise network. The core network
uses very high-performance routers, high-capacity
transmission lines, and multiple interconnected routers for
increased redundancy and capacity.
AN OVERVIEW OF ETHERNET:
Ethernet, Wi-Fi, and 4G/5G cellular networks are the key network transmission technologies that have
evolved to support very high data rates.
ETHERNET APPLICATIONS:
Ethernet is the commercial name for a wired local-area network technology. Ethernet involves the use
of a shared physical medium, a medium access control protocol, and the transmission of data in packets.
It supports data rates up to 100Gbps and distances from a few meters to tens of kilometers. Ethernet
has become essential for supporting personal computers, workstations, servers, and massive data
storage devices in organizations large and small.
Ethernet in the Home:
Ethernet has long been used to create a local network of computers with access to the Internet via a
broadband modem/router. With the increasing availability of high-speed but low-cost Wi-Fi technology,
home Ethernet use has declined. Nevertheless, almost all home networking setups include some use of Ethernet.
Two recent extensions of Ethernet technology have enhanced and extended the use of Ethernet in the
home: Powerline carrier (PLC) and Power over Ethernet (PoE). The PLC extension uses the power wire as
a communication channel to transmit Ethernet packets on top of the power signal. The PoE extension
uses existing Ethernet cables to distribute power to devices on the network.
Ethernet in the Office:
Ethernet has been the dominant network technology for wired local-area networks (LANs) in the office
environment. There were some competitors, such as IBM’s Token Ring LAN and the Fiber Distributed Data
Interface (FDDI), but the simplicity and wide availability of Ethernet hardware made Ethernet the
preferred choice.
Today, the wired Ethernet technology exists side by side with the wireless Wi-Fi technology. Ethernet
retains its popularity because it can support many devices at high speeds, it isn’t subject to interference,
and it provides a security advantage as it’s resistant to eavesdropping. Therefore, a combination of
Ethernet and Wi-Fi is the most common architecture.
Ethernet in the Enterprise:
A great advantage of Ethernet is that it is possible
to scale the network both in terms of distance and
data rate with the same Ethernet protocol and
associated quality. An enterprise can easily extend
an Ethernet using a mixture of cable types and
Ethernet hardware. It can extend among several
buildings with links ranging from 10 Mbps to 100
Gbps. This is because all the hardware and
communications software conform to the same
standard within different vendor equipment.
Ethernet in the Data Center:
Ethernet is widely used in the data center where very high data rates are needed to handle massive
volumes of data among networked servers and storage units.
Two great features of the new Ethernet approach are co-located servers and storage units as well as the
backplane Ethernet. Co-located servers and storage units have high-speed Ethernet fiber links and
switches providing the needed networking infrastructure. The backplane Ethernet runs over copper
jumper cables that can provide up to 100Gbps over very short distances. The backplane Ethernet is ideal
for Blade servers where multiple server modules are housed in a single chassis.
ETHERNET STANDARDS:
An Ethernet standard’s name encodes its data rate, signaling method, and medium:
- 10 = 10 Mbps, 100 = 100 Mbps, Gigabit = 1000 Mbps, 10GBase-? = 10 Gbps, 100GBase-? = 100 Gbps
- Base = baseband signaling
- 5 = 500 m segments (10Base5), 2 = 185 m segments (10Base2), T = twisted pair with 100 m segments (10BaseT, 100BaseT)
All variants share:
- Medium access: CSMA/CD (carrier sense multiple access with collision detection facility)
- Addressing: MAC address
- Framing: IEEE 802.3
Thick & Thin Ethernets:
10Base5 (thick Ethernet) uses a bus topology with a thick
coaxial cable as the transmission medium.
10Base2 (thin Ethernet or Cheapernet) uses a bus topology
with a thin coaxial cable as the transmission medium.
Twisted-pair and Fiber Optic Ethernets:
10BaseT (twisted-pair Ethernet) uses a physical star topology
(the logical topology is still a bus) with stations connected by
two pairs of twisted-pair cables to the hub.
10BaseF (fiber link Ethernet) uses a star topology (the logical
topology is still a bus) with stations connected by a pair of
fiber-optic cables to the hub.
Fast Ethernet Implementation:
The two-wire implementation is called 100Base-X. It
uses either twisted-pair cables (100Base-TX) or fiber-optic cables (100Base-FX).
The four-wire implementation is designed only for
twisted-pair cables (100Base-T4).
Ethernet CSMA/CD:
Carrier sense multiple access with collision detection (CSMA/CD) devices use the following process to
send data:
1. Every station has an equal right to the medium (multiple access)
2. Every station with a frame to send will first listen (sense) the medium. If there is no data on the
medium, the station can start sending (carrier sense).
3. It may happen that two stations both sense the
medium, find it idle, and start sending. In this case a
collision occurs. The protocol forces each station to
continue listening to the line after sending has begun.
When a collision occurs, each sending station sends a
jam signal to destroy the data on the line, then waits a
different random amount of time (called the backoff)
before trying again. The random times prevent the
simultaneous resending of data.
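The steps above can be sketched as a simplified Python model (not driver code: the channel_idle and transmit callbacks are hypothetical stand-ins for the hardware, and the backoff follows classic Ethernet’s truncated binary exponential rule):

```python
import random

def backoff_slots(attempt, max_exponent=10):
    """Pick a random wait (in slot times) after the given collision.

    Truncated binary exponential backoff: after the n-th collision,
    wait k slot times with k drawn uniformly from 0..2^n - 1, where
    n is capped at max_exponent.
    """
    n = min(attempt, max_exponent)
    return random.randint(0, 2 ** n - 1)

def send_frame(channel_idle, transmit, max_attempts=16):
    """Sketch of the CSMA/CD loop.

    channel_idle(): returns True when no carrier is sensed.
    transmit(): attempts transmission, returns True if no collision occurred.
    """
    for attempt in range(1, max_attempts + 1):
        while not channel_idle():       # carrier sense
            pass                        # defer until the medium is free
        if transmit():                  # keep listening while sending
            return True                 # no collision: success
        # collision detected: a jam signal would be sent, then back off
        wait = backoff_slots(attempt)   # a real NIC waits `wait` slot times
    return False                        # give up after max_attempts
```

Because each colliding station draws its own random backoff, the chance that both pick the same wait shrinks as the window doubles after every collision.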
Ethernet Frame Structure:
Packets sent in an Ethernet LAN are called frames. The
Ethernet frame contains seven fields, as illustrated to the right.
The preamble contains seven bytes (56 bits) of alternating 0s and 1s that alert the receiving system to
the coming frame and enable it to synchronize its input timing. The preamble is actually added at the
physical layer and is not formally part of the frame.
The start frame delimiter (SFD) is one byte (10101011) which signals the beginning of the frame. The
SFD gives the station a last chance for synchronization. The last two bits are 11 to signal that the next
field is the destination address.
The destination address (DA) is six bytes and contains the physical address of the next station. The
source address (SA) is also six bytes and contains the physical address of the sender.
The length/type field has one of two meanings. If the value of the field is less than or equal to 1500, it’s a
length field and defines the length of the data field that follows. If the value is greater than or equal to
1536, it’s a type field and identifies the upper-layer protocol that is using the Ethernet service.
The data field carries data encapsulated from the upper layer protocols. It is a minimum of 46 bytes and
has a maximum of 1500 bytes.
The CRC (CRC-32) is the last field in the 802.3 frame and contains the error detection information which
is checked at the receiver. If an error is detected the frame is dropped.
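The field layout above can be illustrated by assembling a frame in Python. This is only a sketch: real adapters generate the preamble/SFD and the FCS in hardware (with specific bit ordering), zlib.crc32 is used here merely because it implements the same CRC-32 polynomial, and the addresses and payload are made up.

```python
import struct
import zlib

def build_frame(dst, src, payload, ethertype=None):
    """Assemble the seven 802.3 fields described in the notes (a sketch)."""
    if not (len(dst) == len(src) == 6):
        raise ValueError("MAC addresses are 6 bytes")
    data = payload.ljust(46, b"\x00")        # pad up to the 46-byte minimum
    if len(data) > 1500:
        raise ValueError("payload exceeds the 1500-byte maximum")
    # length/type: <= 1500 means length, >= 1536 (0x0600) means protocol type
    lt = ethertype if ethertype is not None else len(payload)
    preamble = b"\xaa" * 7                   # 7 bytes of alternating 1s and 0s
    sfd = b"\xab"                            # 10101011: start frame delimiter
    header = dst + src + struct.pack("!H", lt)
    fcs = struct.pack("!I", zlib.crc32(header + data))   # CRC over header+data
    return preamble + sfd + header + data + fcs

frame = build_frame(b"\xff" * 6, b"\x02\x00\x00\x00\x00\x01", b"hello")
# 8 (preamble+SFD) + 6 + 6 + 2 + 46 + 4 = 72 bytes on the wire
```

The 46-byte minimum on the data field keeps every frame at least 64 bytes from destination address through CRC, which is what makes collision detection workable on shared media.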
Cyclic Redundancy Check (CRC):
The cyclic redundancy check views the data bits, D, as a binary
number. The sender and receiver agree on a generator, G, of
length r+1 bits; the sender then computes an r-bit value, R
(called the CRC), such that <D,R> is exactly divisible by G (modulo 2). The receiver, which knows G,
divides the received <D,R> by G. If there’s a non-zero remainder, an error has been detected.
CRC Example:
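A standard textbook-style worked case (values chosen for illustration): take D = 101110 and G = 1001, so r = 3. Appending three 0s to D and dividing by G modulo 2 leaves the remainder R = 011, so the sender transmits <D,R> = 101110011. A minimal Python version of the modulo-2 long division:

```python
def crc_remainder(data_bits, generator):
    """Modulo-2 long division: returns the r-bit CRC for the data bits."""
    r = len(generator) - 1
    bits = [int(b) for b in data_bits + "0" * r]   # append r zero bits
    gen = [int(b) for b in generator]
    for i in range(len(data_bits)):
        if bits[i] == 1:                           # XOR the generator in
            for j in range(len(gen)):
                bits[i + j] ^= gen[j]
    return "".join(str(b) for b in bits[-r:])

print(crc_remainder("101110", "1001"))   # D = 101110, G = 1001 -> R = 011
```

With a 1-bit generator remainder, the same routine degenerates to a parity check, which is a useful sanity test of the division logic.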
ETHERNET PERFORMANCE:
AN OVERVIEW OF WI-FI:
WI-FI APPLICATIONS:
Wi-Fi is standardized by IEEE 802.11 and has become the dominant technology for wireless LANs. The first
important use was in the home, to replace Ethernet cabling for connecting desktop and laptop computers
with each other and with the Internet.
Wi-Fi provides a cost-effective way to connect to the Internet and is essential to implementing the
Internet of Things (IoT).
Enterprise Wi-Fi:
The economic benefit of Wi-Fi is most clearly seen in the enterprise. Approximately half of all enterprise
network traffic is via Wi-Fi rather than the traditional Ethernet.
Two trends have driven the transition to a Wi-Fi centered enterprise:
1. Demand has increased with more and more employees preferring to use laptops, tablets, and
smartphones to connect to the enterprise network.
2. The arrival of Gigabit Wi-Fi allows the enterprise network to support high-speed connections
to many mobile devices simultaneously.
WI-FI STANDARDS:
Interoperability is essential to the success of Wi-Fi. Wi-Fi enabled devices must be able to communicate
with Wi-Fi access points regardless of the manufacturer of the device or access point.
Interoperability is guaranteed by the following three things:
1. The IEEE 802.11 wireless LAN committee develops the protocol and signaling standards
2. The Wi-Fi Alliance creates test suites to certify interoperability for commercial products that
conform to various IEEE 802.11 standards
3. The term Wi-Fi (wireless fidelity) is used for products certified by the Alliance
WI-FI PERFORMANCE:
THE DIFFERENCES BETWEEN THE FIVE GENERATIONS OF CELLULAR NETWORKS:
FIRST GENERATION (1G):
First generation cellular networks were the original cellular networks. They provided
analog traffic channels and were designed to be an extension of the public switched
telephone networks.
The most widely deployed system was the advanced mobile phone service (AMPS)
developed by AT&T. Voice transmission was purely analog and control signals were sent
over a 10kbps analog channel.
SECOND GENERATION (2G):
Second generation cellular networks were developed to provide higher-quality signals,
higher data rates for support of digital services, and greater capacity. The key differences
between 1G and 2G networks were 2G had digital traffic channels, encryption, error
detection and correction, as well as channel access.
THIRD GENERATION (3G):
The objective of third generation cellular networks was to provide fairly high-speed
wireless communication to support multimedia, data, and video in addition to voice.
Third generation cellular networks had the following design features:
- Bandwidth
- Data rate
- Multirate
FOURTH GENERATION (4G):
Fourth generation cellular networks provide ultra-broadband Internet access for a variety of
mobile devices, including laptops, smartphones, and tablets.
4G networks support mobile web access and high-bandwidth applications such as high
definition mobile TV, mobile video conferencing, and gaming services. 4G networks are
designed to maximize bandwidth and throughput while also maximizing spectral
efficiency.
4G networks have the following characteristics:
- Based on an all-IP packet-switched network
- Support peak data rates
- Dynamically share and use network resources to support more simultaneous users per cell
- Support smooth handovers across heterogeneous networks
- Support high QoS for next-generation multimedia applications
FIFTH GENERATION (5G):
Fifth generation cellular networks are still some years away. By 2020 the huge
amounts of data traffic generated by tablets and smartphones will be
augmented by an equally huge amount of traffic from the Internet of Things
(which includes shoes, watches, appliances, cars, thermostats, door locks, and
much more).
The focus of 5G will be on:
- Building more intelligence into the network
- Meeting service quality demands by dynamic use of priorities
- Adaptive network reconfiguration
- Other network management techniques
AN OVERVIEW OF CLOUD COMPUTING CONCEPTS:
Cloud computing first became available in the early 2000s. It was particularly targeted at large
enterprises but has spread to small and medium-size businesses (and recently to consumers).
Apple’s iCloud was launched in 2011 and had 20 million users within a week of the launch. Evernote
launched in 2008 and approached 100 million users in less than six years. In 2014 Google announced
that Google Drive had almost a quarter of a billion active users.
CLOUD COMPUTING CONCEPTS:
The National Institute of Standards and Technology (NIST) defines the essential characteristics of cloud
computing as:
- Broad network access: the capability to access the network through heterogeneous
platforms (e.g. mobile phones, laptops, PDAs, etc…)
- Rapid elasticity: the ability of users to expand and reduce resources according to
their specific service requirements
- Measured service: resource usage can be monitored, controlled, and reported
- On-demand self-service
- Resource pooling
BENEFITS OF CLOUD COMPUTING:
Cloud computing provides economies of scale, professional network management, and professional
security management.
Another big advantage of using cloud computing to store your data and share it with others is that the
cloud provider takes care of security. Unfortunately, the customer isn’t always protected as there have
been a number of security failures among cloud providers.
CLOUD NETWORKING:
Cloud networking refers to the networks and network management functionality that must be in place
to enable cloud computing. Many cloud computing solutions rely on the Internet, but that is only a piece
of the networking infrastructure.
One example is the provisioning of high-performance/high-reliability networking between the provider
and the subscriber. In this case, some or all of the traffic between an enterprise and the cloud bypasses
the Internet and uses dedicated private network facilities owned or leased by the cloud service provider.
More generally, cloud networking refers to the collection of network capabilities required to access a
cloud. This includes making use of specialized services over the Internet, linking enterprise data centers
to a cloud, and using firewalls and other network security devices at critical points to enforce access
security policies.
CLOUD STORAGE:
Cloud storage can be thought of as a subset of cloud computing. It consists of database storage and
database applications hosted remotely on cloud servers. Cloud storage enables small businesses and
individual users to take advantage of data storage that scales with their needs and to take advantage of
a variety of database applications without having to buy, maintain, and manage the storage assets.
THE INTERNET OF THINGS:
The internet of things is a term that refers to the expanding interconnection of smart devices, ranging
from appliances to tiny sensors. A dominant theme is the embedding of short-range mobile transceivers
into a wide array of gadgets and everyday items, enabling new forms of communication between people
and things and between things themselves.
The internet of things is primarily driven by deeply embedded devices. These devices are low-bandwidth,
low-repetition data capture, and low-bandwidth data-usage appliances that communicate
with each other and provide data via user interfaces.
EVOLUTION OF THE INTERNET OF THINGS:
With reference to the end systems supported, the Internet has gone through roughly four generations
of deployment culminating in the internet of things:
1. Information Technology (IT)
 PCs, servers, routers, firewalls, and so on
 Bought as IT devices by enterprise IT people, primarily using wired connectivity
2. Operation Technology (OT)
 Machines/appliances with embedded IT built by non-IT companies such as medical
machinery, SCADA, process control, and kiosks
 Bought as appliances by enterprise OT people and primarily using wired connectivity
3. Personal Technology
 Smartphones, tablets, and eBook readers
 Bought as IT devices by consumers exclusively using wireless connectivity and often in
multiple forms of wireless connectivity
4. Sensor/Actuator Technology
 Single-purpose devices
 Bought by consumers, IT, and OT people exclusively using wireless connectivity,
generally of a single form, as part of larger systems
LAYERS OF THE INTERNET OF THINGS:
Sensors and Actuators:
Sensors and actuators are the “things” in the internet of things. Sensors observe their environment and
report back quantitative measurements. Actuators operate on their environment.
Connectivity:
A device may connect via either a wireless or wired link into a network to send collected data to the
appropriate data center (sensor) or receive operational commands from a controller site (actuator).
Capacity:
The network supporting the devices must be able to handle a potentially huge flow of data.
Storage:
There needs to be a large storage facility to store and maintain backups of all the collected data.
Data Analytics:
For large collections of devices, “big data” is generated which requires data analytics capabilities to
process the data flow.
NETWORK CONVERGENCE:
Network convergence refers to the merger of previously distinct telephony and information
technologies and markets. This convergence can be thought of in terms of a three-layer model of
enterprise communications:
1. Application Convergence
 These are seen by the end users of a business.
Convergence integrates communications
applications with business applications.
2. Enterprise Services
 At this level, the manager deals with the
information network in terms of the services that
must be available to ensure that users can take
full advantage of the applications that they use.
3. Infrastructure
 The network and communications infrastructure consists of the communication links,
LANs, WANs, and Internet connections available to the enterprise. A key aspect of
convergence at this level is the ability to carry voice, image, and video over networks
that were originally designed to carry data traffic.
UNIFIED COMMUNICATIONS (UC):
Unified communications focus on the integration of real-time communication services to optimize
business processes. IP is the cornerstone on which UC systems are built.
Key elements of UC include:
- UC systems typically provide a unified user interface and consistent user experience across
multiple devices and media
- UC merges real-time communications services with non-real-time services and business process
applications
Typical Components of a UC Architecture:
UNIT 2 – PEER-TO-PEER NETWORKS:
INTRODUCTION:
DEFINITION OF P2P SYSTEMS:
There is no universally accepted definition of P2P systems; however, most definitions share some
common characteristics:
- A ‘peer’ is a computer that can act as both server and client.
- A P2P system should consist of at least two peers.
- Peers should be able to exchange resources directly among themselves. Such resources include
files, storage, information, central processing unit power, and knowledge.
- Dedicated servers may or may not be present in a P2P system, depending on the nature of the
applications. P2P systems without dedicated servers are sometimes described as ‘pure’ P2P systems.
- Peers can join and/or leave the system freely.
ADVANTAGES OF P2P SYSTEMS:
Some benefits of P2P systems are that the workload is spread across all peers. It’s possible to have millions
of computers in a P2P network, which can deliver huge resources and power. Another benefit is that
P2P systems maximize system utilization: many office computers sit unused from 5 pm to 9 am, and
P2P computing can put these idle resources to work.
P2P systems also have the benefit of not having a single point of failure. For example, the Internet and
the Web don’t have a central point of failure. A P2P network will still function when some of its peers are
not working properly, so it’s more fault tolerant than other systems. This also gives P2P systems great
scalability: because every peer is alike, it’s possible to add more peers and scale to larger networks.
DISADVANTAGES OF P2P SYSTEMS:
Some disadvantages of P2P computing are that peers are more susceptible to attacks by hackers, and it
is difficult to enforce standards in P2P systems. P2P networks can’t guarantee that a particular
resource will be available all the time. For example, the owner may shut down their computer or delete
a file. It’s also difficult to predict the overall performance of a system. Another disadvantage is that it
can be difficult to prevent illegal uploading and downloading of copyrighted materials in a P2P system.
And lastly, popular P2P systems can generate enormous amounts of network traffic and as a result,
some universities didn’t allow their students to access some P2P applications inside the campus.
P2P TOPOLOGIES:
CENTRALIZED:
Centralized systems are the most familiar form of topology. They are typically seen
as the client/server pattern used by databases, web servers, and other simple
distributed systems. All functions and information are centralized into one server, with
many clients connecting directly to the server to send and receive information.
Many peer-to-peer applications also have a centralized component. For example, the original Napster’s
search architecture was centralized, although the file sharing was not.
RING:
A single centralized server cannot handle a high client load. A common solution is to
use a cluster of machines arranged in a ring to act as distributed servers.
Communication among the nodes coordinates state-sharing to provide identical
function with fail-over and load-balancing capabilities.
Ring systems are generally built on the assumption that the machines are all nearby on the network and
owned by a single organization.
HIERARCHICAL:
Hierarchical systems have a long history on the Internet. The best-known hierarchical
system on the Internet is the Domain Name Service where authority flows from the
root name servers to the server for the registered name.
The Network Time Protocol (NTP) is a protocol for synchronizing the clocks of computer systems over
networks. There are root time servers that have authoritative clocks, and other computers synchronize
to root time servers in a self-organizing tree.
DECENTRALIZED:
Decentralized topology systems are the opposite of centralized topology systems. In
decentralized topology systems, all peers communicate symmetrically and have equal
roles. Decentralized systems are not new; in fact, the Internet’s routing architecture
is largely decentralized, with the Border Gateway Protocol (BGP) running between various
autonomous systems. Gnutella is probably the “purest” decentralized system used in
practice today.
HYBRID:
Real-world systems often combine several topologies into one system making a hybrid topology. Nodes
typically play multiple roles in such a system. For example, a node might have a centralized interaction
with one part of the system, while being part of a ring with other nodes.
Centralized and Ring:
A combination of centralized and ring topologies is a very common hybrid. Many web
server applications have a ring of servers for load balancing and fail-over. The
system as a whole is a hybrid, as a centralized system for clients where the server is
itself a ring.
Centralized and Decentralized:
Combining centralized and decentralized topologies typically results in an architecture of
centralized systems embedded in decentralized systems. Most peers have a centralized
relationship to a super node forwarding all file queries to this server. Instead of super
nodes being standalone servers, they band themselves together in a decentralized
network propagating queries.
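This centralized-plus-decentralized pattern can be sketched as follows (the class and method names are hypothetical; real systems such as KaZaA add caching, peer scoring, and timeouts):

```python
class SuperNode:
    """Each super node holds a centralized index for its own peers and
    floods unresolved queries to neighbouring super nodes, which form
    a decentralized overlay among themselves."""
    def __init__(self):
        self.local_index = {}      # file name -> peer that holds it
        self.neighbours = []       # other SuperNode objects

    def register(self, peer, files):
        """A peer's centralized relationship: report its files upward."""
        for name in files:
            self.local_index[name] = peer

    def search(self, name, ttl=2):
        if name in self.local_index:        # centralized lookup first
            return self.local_index[name]
        if ttl == 0:                        # stop propagating eventually
            return None
        for node in self.neighbours:        # then decentralized flooding
            found = node.search(name, ttl - 1)
            if found is not None:
                return found
        return None

a, b = SuperNode(), SuperNode()
a.neighbours = [b]
b.register("bob", ["song.mp3"])
a.search("song.mp3")    # resolved via the neighbouring super node
```

Ordinary peers only ever talk to their super node, so the expensive flooding traffic is confined to the much smaller super-node overlay.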
EVALUATING TOPOLOGIES:
Things to consider when deciding which topology to use are listed below:
- Manageability (How hard is it to keep working, in terms of updating, repairing, and logging?)
- Information coherence (If a bit of data is found in the system, is that data correct?)
- Extensibility (How easy is it to grow the system?)
- Fault tolerance (How well can it handle failures?)
- Resistance to legal or political intervention (How hard is it to shut down?)
- Security (How hard is the system to attack?)
- Scalability (How big can the system grow?)
Each topology — centralized, ring, hierarchical, decentralized, centralized + ring, and
centralized + decentralized — can then be scored against the criteria above.
P2P APPLICATIONS:
P2P COMPUTING APPLICATIONS:
Some common applications of P2P computing are listed below:
- File sharing
o Improves data availability
o Replication to compensate for failures
o E.g. Napster, Gnutella, Freenet, KaZaA, etc…
- Process sharing
o For large-scale computations
o Data analysis, data mining, scientific computing, etc…
- Collaborative environments
o For remote real-time human collaboration
o Instant messaging, shared whiteboards, teleconferencing, etc…
o E.g. Skype, Messenger, etc…
Some technical challenges of P2P applications are peer identification, routing protocols, network
topologies, peer discovery, communication/coordination protocols, quality of service, and security.
NAPSTER MODEL:
Created in 1999 by Shawn Fanning, Napster was a P2P application network that gave its members the
ability to connect directly to other members’ computers and search their hard drives for digital music
files to share and trade.
Members download a software package from Napster and install it on their computers. The Napster
central computer maintains directories of music files from members who are currently connected to the
network. These directories are automatically updated when a member logs on or off the network.
Whenever a member submits a request to search for a file, the central
computer provides information to the requesting member. The
requesting member can then establish a connection directly with another
member’s computer containing that particular file. The download of the
target file takes place directly between the members’ computers,
bypassing the central computer.
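The central-directory behaviour described above can be sketched in a few lines of Python (the names are hypothetical; the real Napster protocol was richer, and the actual file transfer happens outside the directory entirely):

```python
class CentralDirectory:
    """Napster-style centralized indexing: the server only records who
    has which file; downloads happen directly between the peers."""
    def __init__(self):
        self.index = {}                  # file name -> set of online peers

    def log_on(self, peer, files):
        """Directory is updated automatically when a member connects."""
        for name in files:
            self.index.setdefault(name, set()).add(peer)

    def log_off(self, peer):
        """...and again when the member disconnects."""
        for peers in self.index.values():
            peers.discard(peer)

    def search(self, name):
        """Return the peers currently sharing the file."""
        return sorted(self.index.get(name, set()))

directory = CentralDirectory()
directory.log_on("alice", ["song.mp3", "demo.mp3"])
directory.log_on("bob", ["song.mp3"])
directory.search("song.mp3")   # -> ['alice', 'bob']; the download is direct
directory.log_off("bob")
```

The index itself is tiny compared with the files, which is why one central computer could serve tens of millions of members — and also why that one index made the system easy to shut down.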
Over 36 million people joined the Napster community, and it rapidly accelerated the development and
implementation of other P2P models. Its main limitation was that it could only share music files, and in
July 2001 Napster was ordered to shut down after the Recording Industry Association of America (RIAA)
took action against the free copying of copyrighted material.
OTHER P2P SYSTEMS:
Napster was ordered to shut down because it maintained a central directory for its members. Newer
file-sharing P2P systems bypass the legal problems because they don’t hold a central directory. They don’t
even need a central server or any company to run the system. Thus, it’s practically impossible to kill the
network. These new P2P systems include Gnutella, KaZaA, LimeWire, Direct Connect, etc…
THE NETWORK STRUCTURE OF GNUTELLA:
The idea of Gnutella is similar to the “search strategies” employed by humans. If a
user wants a particular file, they ask one of their friends. If the friend doesn’t have
the file, they ask their friends, and the request is conveyed from one person to
another until it reaches someone who has the file. The response is then routed
back to the user along the original path.
Computers in the network have different connection speeds. A high-speed computer will connect to many
computers, while a low-speed computer will connect to only a few. Over the course of
time, the network ends up with the high-speed computers in the core.
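The friend-of-a-friend search can be modelled as query flooding with a time-to-live, which is roughly what Gnutella does (the data shapes here are hypothetical; real Gnutella also uses message IDs to suppress duplicate queries):

```python
from collections import deque

def flood_search(neighbours, files, start, name, ttl=4):
    """Gnutella-style query flooding (a sketch).

    neighbours: peer -> list of directly connected peers
    files:      peer -> set of file names the peer holds
    Returns the path the query took to the first peer holding the
    file (the response is routed back along it), or None if the TTL
    expires before the file is found.
    """
    queue = deque([(start, [start], ttl)])
    visited = {start}
    while queue:
        peer, path, hops = queue.popleft()
        if name in files.get(peer, set()):
            return path                # found: response retraces this path
        if hops == 0:
            continue                   # query dies when TTL is exhausted
        for nxt in neighbours.get(peer, []):
            if nxt not in visited:     # don't re-flood the same peer
                visited.add(nxt)
                queue.append((nxt, path + [nxt], hops - 1))
    return None
```

The TTL is what keeps a popular query from swamping the whole network, at the cost that a rare file more hops away than the TTL simply won’t be found.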
P2P AND THE INTERNET:
P2P OVERLAYS AND NETWORK SERVICES:
Peers in P2P applications communicate with other peers using messages transmitted over the Internet
or other types of networks. The protocols of various P2P applications share some common features:
- Protocols are constructed at the application layer.
- Peers have a unique identifier, the peer ID or peer address.
- P2P protocols support some type of message-routing capability, where a message intended for
one peer can be transmitted via intermediate peers to reach the destination peer.
To distinguish the operation of the P2P protocol at the application layer from the behavior of the
underlying physical network, the collection of peer connections in a P2P network is called a P2P overlay.
The image to the right shows the correspondence between peers
connecting in an overlay network with the corresponding nodes in
the underlying physical network. Peers forming an overlay network
(top) use network connections in the native network (bottom). The
overlay organization is a logical view that might not directly mirror
the physical network.
OVERLAY NETWORK TYPES:
Depending on how the nodes in a P2P overlay are linked, the overlay network can be classified as either
an unstructured or structured overlay network.
Unstructured networks have the nodes linked randomly. A search in unstructured P2P is not very
efficient, and a query may not be resolved. Gnutella and Freenet are examples of unstructured P2P
networks.
Structured networks use a predefined set of rules to link nodes so that a query can be effectively and
efficiently resolved. The most common technique used for this purpose is the Distributed Hash Table
(DHT). One popular P2P file sharing protocol that uses DHT is BitTorrent.
DISTRIBUTED HASH TABLE (DHT):
DHTs distribute data items (objects) among a set of nodes according to some predefined rules. Each
peer in a DHT-based network becomes responsible for a range of data items. Both the data item and the
responsible peer are mapped to a point in a large address space of size 2^𝑚 (most DHT
implementations use 𝑚 = 160).
The address space is designed using modular arithmetic, which means that the points in the address
space are distributed on a circle with 2^𝑚 points (0 to 2^𝑚 − 1), numbered in the clockwise direction as
shown in the image to the right.
Hashing Peer/Object Identifier:
The first step in creating the DHT system is to place all peers on the address space ring. This is normally
done using a hash function that hashes the peer identifier (normally its IP address) to an m-bit integer
called a node ID.
Node ID = hash(Peer IP address)
DHTs use collision-resistant cryptographic hash functions such as the Secure Hash Algorithm (SHA-1).
The name of the object (e.g. a file) to be shared is also hashed to an m-bit integer in
the same address space, called a key.
Key = hash(Object name)
In the DHT, an object is normally related to the pair (key, value) in which the key is the hash of the
object name, and the value is the object itself (or a reference to it).
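As a concrete sketch, the two hash mappings above can be written in a few lines of Python. The function name `hash_to_id` and the truncation to m bits are illustrative choices, not part of any particular DHT specification:

```python
import hashlib

def hash_to_id(value: str, m: int = 160) -> int:
    """Hash a string (a peer's IP address or an object name) to an
    m-bit integer on the identifier ring. SHA-1 already produces 160
    bits, so for the common m = 160 the modulo is a no-op; for the
    small demo values of m it truncates into the ring."""
    digest = hashlib.sha1(value.encode()).digest()
    return int.from_bytes(digest, "big") % (2 ** m)

# Node ID = hash(Peer IP address)
node_id = hash_to_id("110.34.56.23")
# Key = hash(Object name)
key = hash_to_id("SE3314b-Assignment")
```

Both results land in the same address space, which is what lets the ring compare node IDs and keys directly.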
Storing the Object:
There are two strategies for storing the object. The first option is the direct method where the object is
stored in the node whose ID is the closest to the key in the ring. The term closest is defined differently in
each protocol. The second option is the indirect method where the peer that owns the object keeps the
object, but a reference to the object is created and stored in the node whose ID is closest to the key
point. This means, the physical object and the reference to the object are stored in two different
locations (peers). Most DHT systems use the indirect method due to efficiency. In either case, a search
mechanism is needed to find the object if the name of the object is given.
EXAMPLE:
The normal value of 𝑚 is 160, but for the purpose of
demonstration, consider a scenario where 𝑚 = 5. The
node N5 with IP address 110.34.56.23 has a file named
“SE3314b-Assignment” that it wants to share with its
peers. The file itself is stored in N5, the key of the file is
k14, and the reference to the file is stored in node N17.
UNSTRUCTURED OVERLAY TOPOLOGY:
An unstructured P2P network is formed when the overlay links are established arbitrarily. Unstructured
overlays (e.g. Gnutella) organize nodes into a random graph and use floods or random walks to discover
data stored by overlay nodes.
Each node visited during a flood or random walk evaluates the query locally on the data items that it
stores. Unstructured overlays don’t impose any constraints on the node graph or on data placement
(e.g. each node can choose any other node to be its neighbor in the overlay). Unstructured overlays
cannot find rare data items efficiently and don’t guarantee that an object will be found even if it exists
in the overlay.
FLOODING AND EXPANDING RING:
When each peer keeps a list of its neighbors and these neighbor relations are
transitive, the result is a connectivity graph as shown to the right.
In this particular graph, peers have degrees from 2 to 5. Increasing the degree reduces
the diameter of the overlay but requires more storage at each peer. Peers can
exchange messages with the peers in their neighbor lists, and a message can be a query
that contains the search criteria (such as a filename or keywords).
Flooding:
Because it’s unknown which peers in the overlay have the information, a flooding
algorithm is used. The peer sends a query to all its neighbors. If the neighbor
peers don’t have the information, they can in turn, forward the request to their
neighbors and so on. To prevent messages from circulating endlessly, message
identifiers are used and a TTL value is attached to a message to limit its lifetime.
Each peer has a list of neighbors. It initializes its list of
neighbors when it joins the overlay (e.g. by getting a copy
of the neighbor list of the first peer it connects to). When
the query is satisfied at some peer, a response message is
sent to the requesting peer. If the object is not found
quickly, the flooding mechanism continues to propagate
the query message along other paths until the TTL value
expires or the query is satisfied.
FloodForward(Query q, Source p)
    if (q.id ∈ oldIdsQ) return          // seen this query before: drop it
    oldIdsQ = oldIdsQ ∪ q.id            // remember this query
    q.TTL = q.TTL − 1
    if (q.TTL ≤ 0) return               // lifetime expired: drop it
    foreach (s ∈ Neighbors)             // forward to remaining neighbors
        if (s ≠ p) FloodForward(q, s)
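The FloodForward pseudocode can be sketched as runnable Python over a small overlay. The adjacency dict, item placement, and function names are hypothetical; a shared `seen` set stands in for the per-peer list of old query IDs:

```python
def flood(graph, start, target_item, items, ttl):
    """Flooding sketch over an adjacency dict. A peer that has already
    seen the query drops it; otherwise it checks its local items,
    decrements the TTL, and forwards the query to its neighbors."""
    seen = set()
    found_at = []

    def forward(peer, ttl):
        if peer in seen:                        # seen this query: drop it
            return
        seen.add(peer)                          # remember this query
        if target_item in items.get(peer, set()):
            found_at.append(peer)               # query satisfied here
        if ttl <= 0:                            # lifetime expired: drop it
            return
        for neighbor in graph[peer]:
            forward(neighbor, ttl - 1)

    forward(start, ttl)
    return found_at

graph = {"A": ["B", "C"], "B": ["A", "D"], "C": ["A"], "D": ["B"]}
items = {"D": {"file.txt"}}                     # only peer D holds the file
print(flood(graph, "A", "file.txt", items, ttl=3))  # prints ['D']
```

With ttl=1 the query dies before reaching D, which is exactly the behavior the expanding-ring variant below exploits.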
Expanding Ring:
The flooding mechanism creates substantial redundant messaging, which is inefficient for the network.
The search may start with a small TTL value. If this succeeds the search stops. Otherwise, the TTL value is
increased by a small amount and the query is reissued. This variation of flooding is called iterative
deepening or expanding ring.
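The expanding-ring retry loop can be sketched separately from the underlying flood. The parameter defaults and names below are illustrative:

```python
def expanding_ring(search_with_ttl, start_ttl=1, step=2, max_ttl=9):
    """Iterative-deepening sketch: issue the search with a small TTL
    and reissue it with a larger TTL until it succeeds or a cap is
    reached. `search_with_ttl` is any TTL-parameterized search, such
    as a flood restricted to that TTL."""
    ttl = start_ttl
    while ttl <= max_ttl:
        result = search_with_ttl(ttl)
        if result:                  # success: stop and report the TTL used
            return result, ttl
        ttl += step                 # otherwise grow the ring and retry
    return None, None

# A stand-in search that only succeeds once the TTL reaches 5:
hit, ttl_used = expanding_ring(lambda t: "peer-D" if t >= 5 else None)
```

The trade-off is extra latency on failures (each round restarts the flood) in exchange for far fewer messages when the object is nearby.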
Random Walk:
To avoid the message overhead of flooding, unstructured overlays can
use some type of random walk. In random walk, a single query
message is sent to a randomly selected neighbor. The message has a
TTL value that is decremented at each hop.
If the desired object is found, the search terminates.
Otherwise, the query fails by a timeout or an explicit
failure message. The same process may be repeated
along another randomly chosen path.
To improve the response time, several random walk
queries can be issued in parallel.
RandomWalk(source, query, TTL)
    if (TTL > 0) {
        TTL = TTL − 1
        // select next hop at random; don’t send back to the source
        while ((next_hop = neighbors[random()]) == source) { }
        RandomWalk(next_hop, query, TTL)
    }
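A minimal executable version of the random walk, assuming a toy overlay and illustrative names:

```python
import random

def random_walk(graph, start, target_item, items, ttl, rng=random):
    """Random-walk sketch matching the pseudocode: forward a single
    query to one randomly chosen neighbor per hop (never straight
    back to the sender) until the item is found or the TTL runs out."""
    prev, node = None, start
    for _ in range(ttl):
        # select the next hop at random; don't send back to the source
        choices = [n for n in graph[node] if n != prev] or graph[node]
        prev, node = node, rng.choice(choices)
        if target_item in items.get(node, set()):
            return node            # query satisfied at this peer
    return None                    # query failed: TTL expired

graph = {"A": ["B"], "B": ["A", "C"], "C": ["B"]}
items = {"C": {"file.txt"}}
print(random_walk(graph, "A", "file.txt", items, ttl=2))  # prints C
```

Issuing several such walks in parallel, as the notes suggest, trades a little extra traffic for a much better chance of finding the item quickly.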
STRUCTURED OVERLAY TOPOLOGY:
MOTIVATIONS AND CATEGORIES:
The earliest peer-to-peer systems used unstructured overlays that were easy to implement but had
inefficient routing and an inability to locate rare objects. These problems turned the attention to
designing overlays with routing mechanisms that are deterministic and can provide guarantees on the
ability to locate any object stored in the overlay.
The large majority of these designs used overlays with a specific routing geometry and are called
structured overlays.
STRUCTURED OVERLAYS & DIRECTED SEARCHES:
The idea of structured overlays is to assign particular nodes to hold particular content (or pointers to it
like an information booth). When a node wants that content, it goes to the node that is supposed to
have or know about it.
The challenges with this idea are making it distributed and adaptive. The responsibilities should ideally
be distributed among existing nodes in the overlay and the nodes should be able to easily join and leave
the P2P overlay. The knowledge responsibility should be distributed to joining nodes and redistributed
from leaving nodes.
Structured overlays support key-based routing such that object identifiers are mapped to the peer
identifiers address space and an object request (lookup message) is routed to the nearest peer in the
peer address space. P2P systems using key-based routing are called distributed object location and
routing (DOLR) systems. A specific type of DOLR is a distributed hash table (DHT).
Pastry:
Pastry was designed by Antony Rowstron and Peter Druschel in 2001, and it uses a DHT. Nodes and data
items are identified by 𝑚-bit IDs that create an identifier space of 2^𝑚 points distributed on a circle in
the clockwise direction. The protocol uses the SHA-1 hashing algorithm, and the common value for 𝑚
is 128.
In Pastry, an identifier is seen as an 𝑛-digit string in base 2^𝑏, in which 𝑏 is normally 4 and 𝑛 = 𝑚/𝑏. For
instance, with 𝑚 = 128 and 𝑏 = 4, an identifier is a 32-digit number in base 16 (hexadecimal). In Pastry, a
key is stored in the node whose identifier is numerically closest to the key.
Each node in Pastry can resolve a query using
two entities: a routing table and a leaf set. A
routing table might look like the table to the
right.
For node 𝑁, 𝑇𝑎𝑏𝑙𝑒[𝑖, 𝑗] gives the ID of a node (if one exists) that shares the 𝑖 leftmost digits with the ID of
𝑁 and whose (𝑖 + 1)th digit has the value 𝑗. The first row (row 0) lists the live nodes whose
identifiers have no common prefix with 𝑁. The last row (row 31) lists the live nodes that
share the leftmost 31 digits with node 𝑁 (only the last digit differs).
For example, assume the node 𝑁 ID is
(574𝐴234𝐵12𝐸374𝐴2001𝐵23451𝐸𝐸𝐸4𝐵𝐶𝐷)16. The
value of the 𝑇𝑎𝑏𝑙𝑒[2, 𝐷] can be the identifier of a node
such as (57𝐷 … ). Note that the leftmost 2 digits are 57
which are common with the first two digits of 𝑁, but the next digit is D, which is the value corresponding
to the Dth column. If there are more nodes with the prefix 57D, the closest one, according to the
proximity metric, is chosen and its identifier is inserted in this cell.
The proximity metric is a measurement of closeness determined by the application that uses the
network. It can be based on the number of hops between the two nodes, the round-trip time between
the two nodes, or other metrics.
A leaf set is another entity used in routing and is a set of
2^𝑏 identifiers (the size of a row in the routing table). The
left half of the set is a list of IDs that are numerically
smaller than the current node ID; the right half is a list of
IDs that are numerically larger than the current node ID.
The leaf set thus gives the identifiers of the 2^(𝑏−1) live
nodes located before the current node in the ring and of
the 2^(𝑏−1) nodes located after the current node in the ring.
Pastry enables the lookup
operation, where given a
key, it will find the node that
stores the information about
the key or the key itself.
Lookup(key) {
    if (key is in the range of N's leaf set)
        forward the message to the closest node in the leaf set
    else
        route(key, Table)
}

route(key, Table) {
    p = length of shared prefix between key and N
    v = value of the digit at position p of the key   // positions start at 0
    if (Table[p, v] exists)
        forward the message to the node in Table[p, v]
    else
        forward the message to a node sharing a prefix as long as
        the current node's, but numerically closer to the key
}
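The prefix-based routing step can be sketched in Python on a shortened identifier (4 hex digits instead of 32). All function names and the example routing-table cell are assumptions for demonstration:

```python
def digits(node_id, n, b=4):
    """Split an identifier into n base-2**b digits, most significant
    first (Pastry uses n = 32, b = 4; a small n keeps the demo short)."""
    return [(node_id >> (b * (n - 1 - i))) & ((1 << b) - 1) for i in range(n)]

def shared_prefix_len(xs, ys):
    """Length of the common prefix of two digit strings."""
    p = 0
    while p < len(xs) and xs[p] == ys[p]:
        p += 1
    return p

def route_step(key, node_id, table, n, b=4):
    """One step of the `route` pseudocode above: compute p, read the
    key's digit v at position p, and return Table[p, v]. `table` is a
    dict {(row, column): node_id}; the fallback to a numerically
    closer node when the cell is empty is omitted."""
    kd, nd = digits(key, n, b), digits(node_id, n, b)
    p = shared_prefix_len(kd, nd)
    if p == len(kd):
        return node_id            # the key matches this node exactly
    return table.get((p, kd[p]))

# Node 0x57A2 routing a message for key 0x57D1 (n = 4 hex digits):
# the shared prefix is "57" (p = 2) and the key's digit at position 2
# is D, so the message goes to the node stored in Table[2, D].
table = {(2, 0xD): 0x57D0}        # hypothetical routing-table cell
print(hex(route_step(0x57D1, 0x57A2, table, n=4)))  # prints 0x57d0
```

Each step fixes at least one more digit of the prefix, which is why Pastry resolves a lookup in O(log N) hops.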
Two examples of how the Pastry lookup works can be seen
below.
The process of joining the ring in Pastry is as follows:
1. The new node, X, should know at least one node 𝑁0 , which should be close to X, and send a join
message to it (assume that 𝑁0 has no common prefix with X)
2. Node 𝑁0 sends the contents of its row 0 to node X. Since the two nodes have no common prefix,
node X uses the appropriate parts of this information to build its row 0.
3. Node 𝑁0 calls a lookup operation with X’s ID as a key, which will forward the join message to
node 𝑁1 whose identifier is closest to X.
4. Node 𝑁1 sends the contents of its row 1 to node X since the two nodes have one common
prefix.
5. The process continues until the routing table of node X is complete.
6. The last node in the process, which has the
longest common prefix with X, also sends its
leaf set to node X, which becomes the leaf
set of X.
Consider the example shown to the right. A new
node X with node ID n2212 uses the information in
four nodes as shown to create its initial routing
table and leaf set for joining the ring. Assume that
node 0302 is close to node 2212 according to the
proximity metric.
Each Pastry node periodically tests the liveness of the nodes in its leaf set and routing table by
exchanging probe messages. If a local node finds that a node in its leaf set is not responding to the
probe message, it assumes that the node has failed or departed. The local node then contacts the live
node in its leaf set with the largest identifier and repairs its leaf set with the information in the leaf set
of that node.
If a local node finds that a node in its routing table, 𝑇𝑎𝑏𝑙𝑒[𝑖, 𝑗], is not responsive to the probe message,
it sends a message to a live node in the same row and requests the identifier in 𝑇𝑎𝑏𝑙𝑒[𝑖, 𝑗] of that node.
This identifier replaces the failed or departed node.
Kademlia:
Kademlia is a DHT-based peer-to-peer network that was designed by Petar Maymounkov and David
Mazières in 2002. Kademlia routes messages based on the distance between nodes. The distance between two
identifiers (nodes or keys) is measured as the bitwise exclusive-or (XOR) between them. For instance, if x
and y are two identifiers, the distance between them is defined as:
𝑑𝑖𝑠𝑡𝑎𝑛𝑐𝑒(𝑥, 𝑦) = 𝑥 ⊕ 𝑦
This distance function has the properties shown in
the table to the right.
Nodes and data items are 𝑚-bit identifiers that
create an identifier space of 2^𝑚 points distributed
on the leaves of a binary tree. The protocol uses the
SHA-1 hashing algorithm with 𝑚 = 160. For
example, if 𝑚 = 4, there are 16 IDs distributed on
the leaves of a binary tree as shown to the right.
In the binary tree shown to the side, 𝑘3 is stored in 𝑁3 because 3 ⊕ 3 = 0. 𝑘7 is stored in 𝑁6 not in 𝑁8
because 6 ⊕ 7 = 1 but 7 ⊕ 8 = 15. 𝑘12 is stored in 𝑁15 not in 𝑁11 because 11 ⊕ 12 = 7, but 12 ⊕
15 = 3.
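The storage rule in this example can be checked mechanically: compare each key against every live node and keep the minimum XOR distance. The set of live nodes below is an assumption chosen to match the figure's examples:

```python
def responsible_node(key, nodes):
    """Kademlia's placement rule: a key is stored on the node at
    minimum XOR distance from it."""
    return min(nodes, key=lambda node: node ^ key)

nodes = [0, 3, 5, 6, 8, 11, 15]     # assumed live nodes in the example tree
print(responsible_node(3, nodes))   # prints 3   (3 XOR 3 = 0)
print(responsible_node(7, nodes))   # prints 6   (6 XOR 7 = 1)
print(responsible_node(12, nodes))  # prints 15  (12 XOR 15 = 3)
```

Note that XOR distance is not numeric distance: N8 is numerically closer to k7 than N6 is, but flipping the leading bit (7 XOR 8 = 15) is the most expensive move in this metric.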
Kademlia keeps only one routing table for each node
(there’s no leaf set). Each node 𝑁 divides the binary tree
into 𝑚 subtrees. Subtree 𝑖 includes the nodes that share the 𝑖
leftmost bits (common prefix 𝑃) with node 𝑁 but doesn’t
include node 𝑁 itself. For example, the
node 𝑁5 (0101) divides the previous tree as shown to
the right.
The routing table is made of 𝑚 rows but only one column
as can be seen to the right. The idea is the same as that
used by Pastry, but the length of the common prefix is
based on the number of bits instead of the number of
digits in base 2^𝑏.
For example, the routing table for the previous example can be found as shown below. To make the
example simple, it was assumed that each row only uses one identifier.
In the above example, it’s assumed that node 𝑁0 (0000)₂ receives a lookup message to find the node
responsible for 𝑘12 (1100)₂.
The length of the common prefix between node 𝑁0 and 𝑘12 is 0. 𝑁0 sends the message to the node in
row 0 of its routing table, node 𝑁8. In 𝑁8, the length of the common prefix is 1. It checks row 1 and
sends the query to 𝑁15 which is responsible for 𝑘12. The routing process is then terminated and the
route is determined as 𝑁0 → 𝑁8 → 𝑁15.
For more efficiency, Kademlia requires that each row in the routing table keep up to 𝐾 = 20
nodes from the corresponding subtree. For this reason, each row in the routing table is referred to as a
k-bucket. Having more than one node in each row lets a node fall back on an alternative when a
node leaves the network or fails. Kademlia prefers to keep in each bucket the nodes that have been
connected to the network for a long time.
Just as in Pastry, a node that needs to join the network needs to know at least one other node. The
joining node sends its identifier to the node as though it’s a key to be found. The response it receives
allows the new node to create its k-buckets. When a node leaves the network or fails, other nodes
update their k-buckets using the lookup process.
Chord:
Chord was published by Stoica et al. in 2001. Chord uses 𝑚-bit numbers to identify the data items, denoted
𝑘 (for key), and the peers, denoted 𝑁 (for node). The identifier space consists of 2^𝑚 points
distributed on a circle in the clockwise direction. All arithmetic in the identifier space is done modulo 2^𝑚.
Chord recommends the cryptographic hash function SHA-1 for the identifier space generation. SHA-1
produces output of fixed length equal to 160 bits.
The closest peer 𝑁 ≥ 𝑘 is called the successor of 𝑘 and hosts the value (𝑘, 𝑣) where 𝑘 is the key (hash of
the data name) and 𝑣 is the value (information about the peer that has the actual object).
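The successor rule can be sketched in a few lines, reusing the small m = 5 ring from the earlier DHT example (the node set is assumed):

```python
def successor(key, nodes, m=5):
    """Chord's placement rule sketched in code: the successor of a
    key is the first node ID at or after the key, moving clockwise
    and wrapping around modulo 2**m."""
    for node in sorted(nodes):
        if node >= key % 2 ** m:
            return node
    return min(nodes)              # wrap around the ring

nodes = [5, 17, 25]                # assumed live nodes, m = 5
print(successor(14, nodes))        # prints 17: N17 hosts (k14, v)
print(successor(30, nodes))        # prints 5: wraps past 2**5 - 1
```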
Any node should be able to resolve a query/lookup that asks
for the node identifier responsible for a given key. If a node
has no information about this key it forwards the query to
another node that may know. To do forwarding, each node
needs to know about 𝑚 successor nodes and one
predecessor node. This information is saved in a routing
table called a Finger table.
For example, consider a ring with a few nodes and 𝑚 = 5 to make
the example simpler. Only the successor column from the finger
table is shown.
In Chord, the lookup operation is used to find where an object is
located among the available peers in the ring. To find the object, a
peer needs to know the node that is responsible for that object
(the peer that stores reference to that object). A peer that is the
successor of a set of keys in the ring is the responsible peer for
those keys, so finding the responsible node is actually finding the
successor of a key.
To find the successor of a key, the lookup operation is used as
follows:
1. Find the predecessor of the key (using the find_predecessor
function)
2. From the predecessor node, find the next node in the ring
which is the value of finger[1]
3. If the key is located far from the current node, the node
needs the help of other nodes to find the predecessor
(using find_closest_predecessor function)
Lookup(key) {
    if (the current node N is responsible for the key)
        return N's ID
    else
        return find_successor(key)
}

find_successor(id) {
    x = find_predecessor(id)
    return x.finger[1]
}

find_predecessor(id) {
    x = N                                   // N is the current node
    while (id ∉ (x, x.finger[1]]) {
        x = x.find_closest_predecessor(id)  // let x find it
    }
    return x
}

find_closest_predecessor(id) {
    for (i = m downto 1) {
        if (finger[i] ∈ (N, id))            // N is the current node
            return finger[i]
    }
    return N            // the node itself is the closest predecessor
}
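The lookup pseudocode can be exercised with a small simulation. The ring parameters, node IDs, and helper names below are assumptions for illustration, and fingers are computed on the fly rather than stored in tables:

```python
M = 5                               # identifier bits; ring size 2**M = 32
NODES = sorted([1, 5, 12, 17, 25])  # hypothetical live node IDs

def between_open(x, a, b):
    """x in the open interval (a, b) on the circular identifier space."""
    return a < x < b if a < b else (x > a or x < b)

def between_right_closed(x, a, b):
    """x in the half-open interval (a, b] on the circular space."""
    return a < x <= b if a < b else (x > a or x <= b)

def finger(n, i):
    """finger[i] of node n: the successor of n + 2**(i-1) (1-indexed)."""
    target = (n + 2 ** (i - 1)) % 2 ** M
    for node in NODES:
        if node >= target:
            return node
    return NODES[0]                 # wrap around the ring

def find_successor(start, key):
    """Walk toward the key via the closest preceding finger, as in
    find_predecessor/find_closest_predecessor, then return finger[1]."""
    n = start
    while not between_right_closed(key, n, finger(n, 1)):
        nxt = n
        for i in range(M, 0, -1):   # find_closest_predecessor
            if between_open(finger(n, i), n, key):
                nxt = finger(n, i)
                break
        if nxt == n:
            break                   # no closer node is known
        n = nxt
    return finger(n, 1)

print(find_successor(1, 14))        # prints 17: N17 is responsible for k14
```

Because each hop roughly halves the remaining distance on the ring, a lookup takes O(log N) hops.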
Leaving and joining of a node or a group of nodes may destabilize
the ring. Chord defines an operation called stabilize to address this issue: each node in the
ring periodically uses stabilize to validate its information about its successor and to let the successor
validate its information about its predecessor.
In other words:
1. Node 𝑁 uses the value of finger[1], S, to ask node 𝑆 to
return its predecessor 𝑃
2. If the return value 𝑃 from this query is between 𝑁 and 𝑆,
this means that there’s a node with the ID equal to 𝑃 that
lies between 𝑁 and 𝑆
3. Node 𝑁 makes 𝑃 its successor and notifies 𝑃 to make node
𝑁 its predecessor
Stabilize() {
    P = finger[1].Pre       // ask the successor to return its predecessor
    if (P ∈ (N, finger[1]))
        finger[1] = P       // P is the possible successor of N
    finger[1].notify(N)     // notify P to change its predecessor
}

Notify(x) {
    if (Pre == null or x ∈ (Pre, N))
        Pre = x
}
Destabilization may change the finger tables of up to 𝑚
nodes. Chord defines a fix_finger function to update the
finger table. Each node in the ring must periodically call
this function. To avoid extra traffic on the system, each
node updates only one finger in each call, and this finger
is chosen randomly.
When a new node 𝑁 joins the ring, it uses the join
operation. The join function needs to know the ID of
another node (say 𝑥) to find the successor of the new
node, and it sets the new node’s predecessor to null. The
new node immediately calls the stabilize function to
validate its successor. It then asks the successor to call
the Move_Keys function, which transfers the keys that the
new node is now responsible for.
Fix_Finger() {
    Generate(i ∈ (1, m])                      // randomly generate i such that 1 < i ≤ m
    finger[i] = find_successor(N + 2^(i−1))   // find the value of finger[i]
}

Join(x) {
    Initialize(x)
    finger[1].Move_Keys(N)
}

Initialize(x) {
    Pre = null
    if (x == null) finger[1] = N
    else finger[1] = x.Find_Successor(N)
}

Move_Keys(x) {
    for (each key k)
        if (x ∈ [k, N)) move k to node x      // N is the current node
}
Note that after this operation the finger table of the newly joined node is empty and the finger tables of
up to 𝑚 predecessors are out of date. The stabilize and fix_finger operations that run periodically after
this event will gradually stabilize the system.
When a peer leaves the ring or the peer fails, the status of the ring will be disrupted unless the ring
stabilizes itself. Each node exchanges ping and pong messages with neighbors to find out if they are
alive. When a node doesn’t receive a pong message in response to its ping message, the node knows
that the neighbor is dead. The node that detects the problem can immediately launch the stabilize
and fix_finger operations. Note that the data managed by the node that left or failed is no longer
available. Therefore, Chord requires that data and references be duplicated on other nodes.
UNIT 3 – SOFTWARE DEFINED NETWORKING:
THE LIMITATIONS OF THE TRADITIONAL NETWORK ARCHITECTURES:
EVOLVING NETWORK REQUIREMENTS:
A number of trends are driving network providers and users to reevaluate traditional approaches to
network architecture. These trends can be grouped under the following categories:
- Demand (due to the increase in cloud computing, big data, mobile traffic, IoT, etc.)
- Supply (due to the increase in capacity of network transmission technologies such as 5G)
- Traffic patterns
Traditional network architectures are inadequate. The traditional internetworking approach is based on
the TCP/IP protocol architecture. There are three significant characteristics of this approach:
1. Two-level end system addressing
2. Routing based on destination
3. Distributed autonomous control
The Open Networking Foundation (ONF) cites four general limitations of traditional network
architectures:
1. Static and complex architecture
2. Inconsistent policies
3. Inability to scale
4. Vendor dependence
THE KEY REQUIREMENTS FOR AN SDN ARCHITECTURE:
PRINCIPAL REQUIREMENTS FOR A MODERN NETWORK:
- Adaptability: Networks must adjust and respond dynamically, based on application needs,
business policy, and network conditions.
- Automation: Policy changes must be automatically propagated so that manual work and errors
can be reduced.
- Maintainability: Introduction of new features and capabilities (software upgrades and patches)
must be seamless with minimal disruption of operations.
- Model management: Network management software must allow management of the network at a
model level, rather than implementing conceptual changes by reconfiguring individual network
elements.
- Mobility: Control functionality must accommodate mobility, including mobile users and virtual
servers.
- Integrated security: Network applications must integrate seamless security as a core service
instead of as an add-on solution.
- On-demand scaling: Implementations must have the ability to scale the network and its services
up or down to support on-demand requests.
SOFTWARE DEFINED NETWORKING (SDN):
To provide adaptability and scalability, two key technologies that are rapidly being deployed by a variety
of network services and application providers are SDN and NFV. Network functions virtualization (NFV) is
outside of the scope of this course and will be covered in SE4455.
SDN is replacing the traditional networking model as it provides an enhanced level of flexibility to meet
the needs of newer networking and IT trends such as cloud, mobility, social networking, and video.
In SDN, there are two elements involved in forwarding packets
through routers.
1. A control function which decides the route for the flow to
take and the relative priority of traffic.
2. A data function which forwards data based on control-function policy.
To the right is a comparison of traditional networking and
the SDN approach. Note that in traditional networking
each switch has both a data and control plane within it.
This is because the routing is handled in each router in
contrast to it being centralized in one SDN controller.
The Data Plane:
The data plane consists of physical switches and virtual switches, which are responsible for forwarding
packets. The internal implementation of buffers, priority parameters, and other data structures should
be uniform and open to the SDN controllers. This can be defined in terms of an open application
programming interface (API) between the control plane and the data plane (southbound API). The most
prominent example of such an open API is OpenFlow.
The Control Plane:
SDN controllers can be implemented directly on a server or on a
virtual server. OpenFlow or some other open API is used to
control the switches in the data plane. In addition, controllers
use information about capacity and demand obtained from the
networking equipment through which the traffic flows. SDN
controllers also expose northbound APIs which allow developers
and network managers to deploy a wide range of off-the-shelf
and custom-built network applications. A number of vendors
offer a Representational State Transfer (REST)-based API to
provide a programmable interface to their SDN controller.
The Application Plane:
At the application plane there are a variety of applications that interact with SDN controllers. SDN
applications are programs that may use an abstract view of the network for their decision-making goals.
These applications convey their network requirements and desired network behavior to the SDN
controller via northbound API. Examples of applications are energy-efficient networking, security
monitoring, access control, and network management.
Characteristics of SDN:
The control plane is separated from the data plane so the data plane devices become simple packet
forwarding devices.
The control plane is implemented in a centralized controller or set of coordinated centralized
controllers. The SDN controller has a centralized view of the network or networks under its control. The
controller is portable software that can run on servers and is capable of programming the forwarding
devices based on a centralized view of the network.
The network is programmable by applications running on top of the SDN controllers. The SDN
controllers present an abstract view of network resources to the applications.
STANDARDS-DEVELOPING ORGANIZATIONS:
Unlike some technology areas such as Wi-Fi,
there’s no single standards body responsible
for developing open standards for SDN and
NFV. Rather, there’s a large and evolving
collection of standards-developing
organizations (SDOs), industrial consortia, and
open development initiatives involved in
creating standards and guidelines for SDN and
NFV.
The table to the right lists the main SDOs and
other organizations involved in the effort and
the main outcomes so far produced.
OPENDAYLIGHT AND OPENSTACK:
OpenDaylight:
OpenDaylight is an open-source software activity under the auspices of the Linux Foundation. Its
member companies provide resources to develop an SDN controller for a wide range of applications. It’s
more in the nature of an open development initiative and a consortium. It also supports network
programmability via southbound protocols, programmable network services, collections of northbound
APIs, and applications.
OpenStack:
OpenStack is an open-source software project that aims to produce an open-source cloud operating
system. It provides multitenant Infrastructure as a Service (IaaS) and aims to meet the needs of public
and private clouds regardless of size, by being simple to implement and massively scalable. SDN
technology is expected to contribute to its networking part, and to make the cloud operating system
more efficient, flexible, and reliable.
THE FUNCTIONS OF THE SDN DATA PLANE:
THE SDN DATA PLANE:
The SDN data plane is referred to as the resource layer or as
the infrastructure layer, where network forwarding devices
perform the transport and processing of data according to
decisions made by the SDN control plane.
An important characteristic of the network devices in an SDN
network is that they perform a simple forwarding
function without embedded software making autonomous
decisions. The data plane network devices are also called data
plane network elements or switches.
A SIMPLE FORWARDING FUNCTION:
The principal functions of the network device are the following:
- Control support function
o Interacts with the SDN control layer to support
programmability via resource-control interfaces.
o The switch communicates with the controller, and the
controller manages the switch, via the
OpenFlow switch protocol.
- Data forwarding function
o Accepts incoming data flows from other network devices and forwards them along the
data forwarding paths that have been computed and established by the SDN controller
according to the rules defined by the SDN applications.
The network device can alter the packet header before forwarding or discard the packet. As shown,
arriving packets may be placed in an input queue awaiting processing by the network device, and
forwarded packets are generally placed in an output queue awaiting transmission.
THE OPENFLOW LOGICAL ARCHITECTURE AND NETWORK PROTOCOL:
OPENFLOW:
There must be a common logical architecture in all network
devices to be managed by an SDN controller. The SDN controller
should see a uniform logical switch functionality. A standard
secure protocol is needed between the SDN controller and the
network device.
OpenFlow is both a protocol between SDN controllers
and network devices and a specification of the logical
structure of the network switch functionality.
OpenFlow is defined in the OpenFlow switch specification published by the Open Networking
Foundation (ONF). An SDN controller communicates with OpenFlow-compatible switches using the
OpenFlow protocol running over Transport Layer Security (TLS).
Each switch connects to other OpenFlow switches and possibly to end-user devices that are the sources
and destinations of packet flows. On the switch side, the interface is known as an OpenFlow channel.
These connections are via OpenFlow ports. An OpenFlow port also connects the switch to the SDN
controller.
Switch Ports:
OpenFlow defines three types of ports:
1. Physical port: corresponds to a hardware interface of the switch (e.g. an Ethernet interface)
2. Logical port: doesn’t correspond directly to a hardware interface of the switch and may be
defined in the switch using non-OpenFlow methods (e.g. link aggregation groups, tunnels,
loopback interfaces) and may map to various physical ports
3. Reserved port: specifies generic forwarding actions like sending to/receiving from the controller,
flooding, or forwarding using non-OpenFlow methods such as “normal” switch processing
Tables:
OpenFlow defines three types of tables:
1. Flow table: matches incoming packets to a particular flow and specifies what functions are to be
performed on the packets (often multiple flow tables are combined to operate in a pipeline
fashion)
2. Group table: when a flow is directed to a group table, it may trigger a variety of actions that
affect one or more flows
3. Meter table: consists of meter entries that can trigger a variety of performance-related actions
on a flow
Using the OpenFlow switch protocol, the controller can add, update, and delete flow entries in tables. It
can do this both reactively (in response to packets) and proactively.
Flow Tables:
Each packet that enters an OpenFlow switch passes
through one or more flow tables. Each flow table consists
of a number of rows (called entries) consisting of seven
components. The seven components are as follows:
1. Match fields: used to select packets that match the values in the fields
2. Priority: relative priority of table entries (a 16-bit field with 0 corresponding to the lowest
priority)
3. Counters: updated for matching packets (OpenFlow specification defines a variety of counters)
4. Instructions: instructions to be performed if a match occurs
5. Timeouts: maximum amount of idle time before a flow is expired by the switch
6. Cookie: 64-bit data value chosen by the controller (may be used by the controller to filter flow
statistics, flow modification, and flow deletion)
7. Flags: alter the way flow entries are managed
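The seven components can be sketched as a small data structure with a priority-ordered lookup. This is a minimal sketch with illustrative field names, not the OpenFlow wire format:

```python
from dataclasses import dataclass, field

@dataclass
class FlowEntry:
    match_fields: dict          # e.g. {"ingress_port": 1, "eth_type": 0x0800}
    priority: int = 0           # 16-bit; 0 is the lowest priority
    counters: dict = field(default_factory=lambda: {"packets": 0, "bytes": 0})
    instructions: list = field(default_factory=list)
    idle_timeout: int = 0       # seconds of idle time before expiry (0 = never)
    cookie: int = 0             # 64-bit opaque value chosen by the controller
    flags: int = 0

def matches(entry: FlowEntry, packet: dict) -> bool:
    """A packet matches if every field the entry specifies agrees; omitted fields act as wildcards."""
    return all(packet.get(k) == v for k, v in entry.match_fields.items())

def lookup(table: list, packet: dict):
    """Return the highest-priority matching entry, or None on a table miss."""
    hits = [e for e in table if matches(e, packet)]
    return max(hits, key=lambda e: e.priority) if hits else None
```

A table-miss entry is simply an entry with an empty match (matches everything) at priority 0, which this lookup handles naturally.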
Match Field Categories:
1. Ingress port: the identifier of the port on this switch on which the packet arrived (may be a
physical port or a switch-defined virtual port and is required in ingress tables)
2. Egress port: the identifier of the egress port from action set (required in egress tables)
3. Ethernet source and destination addresses: each entry can be an exact address, a bit masked
value, or a wildcard value
4. Ethernet type field: indicates type of the Ethernet packet payload
5. IP: version 4 or 6
6. IPv4 or IPv6 source address and destination address: each entry can be an exact address, a bit
masked value, a subnet mask value, or a wildcard value
7. TCP source and destination ports: exact match or wildcard value
8. UDP source and destination ports: exact match or wildcard value
Counters:
The OpenFlow counters, their usage, and their bit lengths are as follows:
- Reference count (active entries): per flow table (32 bits)
- Duration (seconds): per flow entry (32 bits)
- Received packets: per port (64 bits)
- Transmitted packets: per port (64 bits)
- Duration (seconds): per port (32 bits)
- Transmit packets: per queue (64 bits)
- Duration (seconds): per queue (32 bits)
- Duration (seconds): per group (32 bits)
- Duration (seconds): per meter (32 bits)
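Each counter has a fixed bit length (32 or 64 bits) and simply wraps around on overflow. A minimal sketch of updating one:

```python
# Update a fixed-width counter (e.g. a 64-bit per-port received-packets
# counter), wrapping around at 2**bits as OpenFlow counters do on overflow.
def bump(counter: int, bits: int, delta: int = 1) -> int:
    return (counter + delta) % (1 << bits)
```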
Packet Flow Through the Processing Pipeline:
A switch includes one or more flow tables. If there’s more than one flow table, they are organized as a
pipeline where the tables are labeled with increasing numbers starting at zero. The use of multiple
tables in a pipeline (rather than a single flow table) provides the SDN controller with considerable
flexibility.
The OpenFlow specification defines two stages of
processing: Ingress processing and Egress processing.
Ingress processing always happens, beginning with Table 0, and
uses the identity of the input port. Table 0 may be the only table,
in which case the ingress processing is simplified to the
processing performed on that single table, and there’s no egress
processing.
Egress processing is the processing that happens after the
determination of the output port. It happens in the context of
the output port. This stage is optional, and if it occurs it may
involve one or more tables.
Ingress Processing:
At the final table in the pipeline, forwarding to another flow table isn’t an option. If and when a packet is
finally directed to an output port, the accumulated action set is executed and then the packet is queued
for output.
Egress Processing:
If egress processing is associated with a particular output port, then after a packet is directed to an
output port in the ingress process, the packet is directed to the first flow table of the egress pipeline.
There’s no group table processing at the end of the egress pipeline.
Using Multiple Tables:
The use of multiple tables enables the breaking down of a single
flow into a number of parallel subflows. The use of multiple
tables simplifies the processing in both the SDN controller and
the OpenFlow switch.
Actions such as next hop that apply to the aggregate flow can be defined once by the controller then
examined and performed once by the switch.
The addition of new subflows at any level involves less setup. Therefore, the use of pipelined, multiple
tables increases the efficiency of network operations, provides granular control, and enables the
network to respond to real-time changes at the application, user, and session levels.
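The pipeline idea can be sketched as a short loop: each table either hands the packet to a higher-numbered table or terminates, at which point the accumulated action set is executed. This is an illustration of the control flow only, not the spec's full semantics:

```python
def run_pipeline(tables, packet):
    """Walk the ingress pipeline: accumulate actions until no goto remains."""
    action_set, table_id = [], 0          # processing always begins at table 0
    while table_id is not None:
        actions, table_id = tables[table_id](packet)   # each table is a function
        action_set.extend(actions)                     # actions accumulate
    return action_set                                  # executed at pipeline exit

# Two toy tables: table 0 tags the packet and forwards to table 1;
# table 1 picks the output port and ends the pipeline.
tables = {
    0: lambda pkt: ([("set_vlan", 100)], 1),
    1: lambda pkt: ([("output", 3)], None),
}
```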
The Group Tables:
During the pipeline processing, a flow table may direct a flow of packets to the group table rather than
another flow table. The group table and group actions enable OpenFlow to represent a set of ports as a
single entity for forwarding packets.
Different types of groups are provided to represent different forwarding abstractions, such as
multicasting and broadcasting.
Each group table consists of a number of rows called group entries, consisting of four components:
1. Group identifier: a 32-bit unsigned integer uniquely identifying the group (a group is defined as
an entry in the group table)
2. Group type: determines group semantics, explained in the next slide
3. Counters: updated when packets are processed by a group
4. Action buckets: an ordered list of action buckets, where each action bucket contains a set of
actions to execute
The action list is executed in sequence and generally ends with the Output action, which forwards the
packet to a specified port. The action list may also end with the Group action, which sends the packet to
another group.
A group is designated as “all”, “select”, “fast failover”, or “indirect”.
- “all” executes all buckets in the group
o Each arriving packet is effectively cloned
o Each bucket designates a different output port, so that the incoming packet is then transmitted on multiple output ports
o This group is used for multicast
- “select” executes one bucket in the group based on a switch-computed selection algorithm (e.g. hash on some user-configured tuple or simple round robin)
o The selection algorithm should implement equal load sharing or load sharing based on bucket weights assigned by the SDN controller
- “fast failover” executes the first live bucket
o Port liveness is managed by code outside the scope of OpenFlow and may have to do with routing algorithms
o The buckets are evaluated in order, and the first live bucket is selected
o This group type enables the switch to change forwarding without requiring a round trip to the controller
- “indirect” allows multiple packet flows (multiple flow table entries) to point to a common group identifier
o This type provides for more efficient management by the controller in certain situations
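The four group types reduce to different rules for choosing action buckets. A minimal sketch (names and signatures are ours, not the spec's data model):

```python
import hashlib

def choose_buckets(group_type, buckets, flow_tuple=None, live=None):
    """Return the bucket(s) that process a packet for a given group type."""
    if group_type == "all":
        return list(buckets)                  # packet is cloned to every bucket
    if group_type == "select":                # hash-based load sharing
        h = int(hashlib.sha256(repr(flow_tuple).encode()).hexdigest(), 16)
        return [buckets[h % len(buckets)]]
    if group_type == "fast failover":         # first live bucket, in order
        return [b for b in buckets if live(b)][:1]
    if group_type == "indirect":              # single shared bucket
        return [buckets[0]]
    raise ValueError(group_type)
```

Note how "fast failover" needs no controller round trip: re-running the selection after a liveness change picks the next live bucket locally.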
OpenFlow Protocol:
The OpenFlow protocol describes message exchanges that take place between an OpenFlow controller
and an OpenFlow switch. Typically, the protocol is implemented on top of TLS, providing a secure
OpenFlow channel.
The OpenFlow protocol enables the controller to perform add, update, and delete actions to the flow
entries in the flow tables. It supports three types of messages:
1. Controller-to-Switch
2. Asynchronous
3. Symmetric
OpenFlow Messages:
Controller-to-Switch:
- Features: Request the capabilities of a switch. Switch responds with a features reply that specifies its capabilities.
- Configuration: Set and query configuration parameters. Switch responds with parameter settings.
- Modify-State: Add, delete, and modify flow/group entries and set switch port properties.
- Read-State: Collect information from the switch, such as current configuration, statistics, and capabilities.
- Packet-Out: Direct a packet to a specified port on the switch.
- Barrier: Barrier request/reply messages are used by the controller to ensure message dependencies have been met or to receive notifications for completed operations.
- Role-Request: Set or query the role of the OpenFlow channel. Useful when the switch connects to multiple controllers.
- Asynchronous-Configuration: Set a filter on asynchronous messages or query that filter. Useful when the switch connects to multiple controllers.
Asynchronous:
- Packet-In: Transfer a packet to the controller.
- Flow-Removed: Inform the controller about the removal of a flow entry from a flow table.
- Port-Status: Inform the controller of a change on a port.
- Role-Status: Inform the controller of a change of its role for this switch from master controller to slave controller.
- Controller-Status: Inform the controller when the status of an OpenFlow channel changes. This can assist failover processing if controllers lose the ability to communicate among themselves.
- Flow-Monitor: Inform the controller of a change in a flow table. Allows a controller to monitor in real time the changes to any subsets of the flow table done by other controllers.
Symmetric:
- Hello: Exchanged between the switch and controller upon connection startup.
- Echo: Echo request/reply messages can be sent from either the switch or the controller, and must return an echo reply.
- Error: Used by the switch or the controller to notify the other side of the connection of problems.
- Experimenter: For additional functionality.
THE FUNCTIONS OF THE SDN CONTROL PLANE:
SDN CONTROL PLANE ARCHITECTURE:
The SDN control layer maps application layer service
requests into specific commands and directives to data
plane switches and supplies applications with information
about data plane topology and activity. The control layer is
implemented as a server or cooperating set of servers
known as SDN controllers.
SDN CONTROLLERS FUNCTIONS:
Shortest path forwarding uses routing information collected
from switches to establish preferred routes.
Notification manager receives, processes, and forwards an
application event such as alarm notifications, security
alerts, and state changes.
Security mechanisms provide isolation and security
enforcement between applications and services.
Topology managers build and maintain switch
interconnection topology information.
Statistics managers collect data on traffic through the switches.
Device managers configure switch parameters and attributes and manage flow table entries.
NETWORK OPERATING SYSTEM (NOS):
The functionality provided by the SDN controller can be viewed as a network operating system (NOS). As
with a conventional OS, a NOS provides essential services, common application programming interfaces
(APIs), and an abstraction of lower-layer elements to developers.
The functions of an SDN NOS enable developers to define network policies and manage networks
without concern for the details of the network device characteristics.
Northbound interfaces enable developers to create software that is independent not only of data plane
details, but that can also be used with a variety of SDN controller servers.
SDN CONTROLLER:
Implementations:
A number of different initiatives, both commercial and open source, have resulted in SDN controller
implementations.
- OpenDaylight: An open source platform for network programmability to enable SDN, written in Java.
- Floodlight: An open source package developed by Big Switch Networks. Both a web-based and a Java-based GUI are available, and most of its functionality is exposed through a REST API.
- Open Network Operating System (ONOS): An open source SDN NOS, a non-profit effort funded and developed by a number of carriers, such as AT&T and NTT, and other service providers, and supported by the Open Networking Foundation.
- Ryu: An open source component-based SDN framework developed by NTT Labs, written in Python.
- POX: An open source OpenFlow controller that has been implemented by a number of SDN developers and engineers. POX has a well-written API and documentation. It also provides a web-based graphical user interface (GUI) and is written in Python.
- Beacon: An open source package developed at Stanford, written in Java. Beacon was the first controller that made it possible for beginner programmers to work with and create a working SDN environment.
- Onix: A commercially available SDN controller, developed by VMware, Google, and NTT.
Interfaces:
The southbound interface provides the logical connection between the SDN controller and the data
plane switches. The most commonly implemented southbound API is OpenFlow.
Other southbound interfaces include the following:
- Open vSwitch Database Management Protocol (OVSDB): An open source software project which implements virtual switching. OVS uses OpenFlow for message forwarding in the control plane, for both virtual and physical ports.
- Forwarding and Control Element Separation (ForCES): An IETF effort that standardizes the interface between the control plane and the data plane for IP routers.
The northbound interface enables applications to access control plane functions and services without
needing to know the details of the underlying network switches. The northbound interface is viewed
more as a software API than a protocol.
- Base Controller Function APIs: These APIs expose the basic functions of the controller and are used by developers to create network services.
- Network Service APIs: These APIs expose network services to the north (e.g. firewalls, routing, optimization).
- Northbound Interface Application APIs: These APIs expose application-related services that are built on top of network services (e.g. security-related services).
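Since the northbound interface is typically a REST API over HTTP, an application's request to the controller is just a structured HTTP call. The sketch below only builds such a request (it doesn't send it), and the URL layout and field names are hypothetical, not any specific controller's documented API:

```python
import json
import urllib.request

def build_flow_request(controller: str, switch_id: str, flow: dict) -> urllib.request.Request:
    """Build (but do not send) a northbound REST call installing a flow."""
    url = f"http://{controller}/restconf/flows/{switch_id}"   # hypothetical path
    body = json.dumps(flow).encode()
    return urllib.request.Request(url, data=body, method="PUT",
                                  headers={"Content-Type": "application/json"})

req = build_flow_request("127.0.0.1:8181", "openflow:1",
                         {"priority": 10, "match": {"in-port": "1"}})
```

Because the payload is plain JSON over HTTP, the same call works from any language or tool, which is exactly the interoperability the northbound API aims for.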
Routing:
The routing function comprises a protocol for collecting information about the topology and traffic
conditions of the network, and an algorithm for designing routes through the network. There are two
categories of routing protocols.
1. Interior Router Protocols (IRPs) that operate within an autonomous system (AS).
 Concerned with discovering the topology of routers within an AS and then determining
the best route to each destination based on different metrics.
2. Exterior Router Protocols (ERPs) that operate between autonomous systems (AS).
 Do not need to collect as much detailed traffic information.
 Primarily concerned with determining reachability of networks and end systems outside
of the AS.
Traditionally, the routing function is distributed among the routers in a network. Each router is
responsible for building up an image of the topology of the network. For interior routing, each router
must also collect information about connectivity and delays and then calculate the preferred route for
each IP destination address.
However, in an SDN-controlled network, the controller provides centralized routing and can develop a
consistent view of the network state to calculate shortest paths. The data plane switches are relieved of
the processing and storage burden associated with routing, leading to improved performance.
The centralized routing application performs two distinct functions:
1. Link discovery: The routing function needs to be aware of links between data plane switches. It
must be performed between a router and a host system and between a router in the domain of
this controller and a router in a neighboring domain. Discovery is triggered by unknown traffic
entering the controller’s network domain either from an attached host or from a neighboring
router.
2. Topology manager: Maintains the topology information for the network and calculates routes in
the network. Route calculation involves determining the shortest path between two data plane
nodes or between a data plane node and a host.
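The topology manager's route calculation reduces to a shortest-path search over the collected topology. A standard Dijkstra sketch (node names and link costs are illustrative):

```python
import heapq

def shortest_path(graph, src, dst):
    """graph: {node: {neighbor: link_cost}}; returns (cost, path) or None."""
    queue, seen = [(0, src, [src])], set()
    while queue:
        cost, node, path = heapq.heappop(queue)   # cheapest frontier node
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, w in graph.get(node, {}).items():
            if nxt not in seen:
                heapq.heappush(queue, (cost + w, nxt, path + [nxt]))
    return None                                   # dst unreachable

# A toy three-switch topology as the topology manager might hold it.
topo = {"s1": {"s2": 1, "s3": 4}, "s2": {"s3": 1}, "s3": {}}
```

With the whole graph in one place, the controller computes this once per topology change instead of every router running a distributed computation.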
AN OVERVIEW OF OPENDAYLIGHT AND REST APIS:
THE OPENDAYLIGHT ARCHITECTURE:
SERVICE ABSTRACTION LAYER MODEL:
OpenDaylight is not tied to OpenFlow or any other specific
southbound interface. This provides greater flexibility in
constructing SDN network configurations.
The key element in this design is the SAL, which enables the
controller to support multiple protocols on the southbound
interface and provides consistent services for controller
functions and for SDN applications.
The service manager maintains a registry that maps service requests to feature requests. Based on the
service request, the SAL maps to the appropriate plug-in and thus uses the most appropriate
southbound protocol to interact with a given network device.
All code in the OpenDaylight project is implemented in Java and is contained within its own Java Virtual
Machine (JVM). As such, it can be deployed on any hardware and operating system platform that
supports Java.
OPENDAYLIGHT – THE HELIUM RELEASE:
The controller platform (exclusive of applications, which may
also run on the controller) consists of a growing collection of
dynamically pluggable modules, each of which performs one
or more SDN-related functions and services.
Five modules are considered base network service functions:
1. Topology manager: A service for learning the network
layout by subscribing to events of node addition and
removal and their interconnection. Applications
requiring network view can use this service.
2. Statistics manager: Collects switch-related statistics, including flow statistics, node connector,
and queue occupancy.
3. Switch manager: Holds the details of the data plane devices. As a switch is discovered, its
attributes (e.g. what switch/router it is, software version, capabilities, etc…) are stored in a
database by the switch manager.
4. Forwarding rules manager: Installs routes and tracks next-hop information. Works in
conjunction with switch manager and topology manager to register and maintain network flow
state. Applications using this need not have visibility of network device specifics.
5. Host tracker: Tracks and maintains information about connected hosts.
REPRESENTATIONAL STATE TRANSFER (REST):
REST is an architectural style used to define APIs. This has become a standard way of constructing
northbound APIs for SDN controllers. A REST API, or an API that is RESTful, is not a protocol, language, or
established standard. It’s essentially six constraints that an API must follow to be RESTful. The objective
of these constraints is to maximize the scalability and independence/interoperability of software
interactions, and to provide for a simple means of constructing APIs. The six constraints are as follows:
1. Client-Server: This simple constraint dictates that interaction between application and server is
in the client-server request/response style. The principle defined for this constraint is the
separation of user interface concerns from data storage concerns. This separation allows client
and server components to evolve independently and supports the portability of server-side
functions to multiple platforms.
2. Stateless: Dictates that each request from a client to a server must contain all the information
necessary to understand the request and cannot take advantage of any stored context on the
server. Similarly, each response from the server must contain all the desired information for that
request. One consequence is that any memory of a transaction is maintained in a session state
kept entirely on the client. Another consequence is that the client and server may reside on
different machines and therefore communicate via a protocol, and that protocol doesn’t need to be
connection oriented. REST typically runs over HTTP, which is a stateless protocol.
3. Cache: Requires that the data within a response to a request be implicitly or explicitly labeled as
cacheable or non-cacheable. If a response is cacheable, then a client is given the right to reuse
that response data for later, equivalent requests. Therefore, subsequent requests for the same
data can be handled locally at the client, reducing communication overhead between client and
server.
4. Uniform Interface: REST emphasizes a uniform interface between components, regardless of the
specific client-server application API implemented using REST. To obtain a uniform interface,
REST defines four interface constraints: identification of resources, manipulation of resources
through representations, self-descriptive messages, and hypermedia as the engine of the
application state. The benefit of this constraint is that for an SDN environment, different
applications can invoke the same controller service via a REST API.
5. Layered System: A given function is organized in layers, with each layer only having direct
interaction with the layers immediately above and below. This is a fairly standard architecture
approach for protocol architectures, OS design, and system services design.
6. Code on Demand: REST allows client functionality to be extended by downloading and executing
code in the form of applets or scripts. This simplifies clients by reducing the number of features
required to be pre-implemented. Allowing features to be downloaded after deployment
improves system extensibility.
AN OVERVIEW OF THE SDN APPLICATION PLANE ARCHITECTURE:
THE SDN APPLICATION PLANE ARCHITECTURE:
NORTHBOUND INTERFACE:
Enables applications to access control plane functions and services without needing to know the details
of the underlying network switches. Typically, the northbound interface provides an abstract view of
network resources controlled by the software in the SDN control plane.
The northbound interface can be local or remote. For a local interface, the SDN applications run on the
same server as the control plane software. For a remote interface, the northbound interface is a protocol
or application programming interface (API) that connects the applications to the controller’s network
operating system (NOS) running on a central server.
NETWORK SERVICES ABSTRACTION LAYER:
An abstraction layer is a mechanism that translates a
high-level request into the low-level commands
required to perform the request. It shields the
implementation details of a lower level of abstraction
from software at a higher level.
A network abstraction represents the basic properties
or characteristics of network entities in such a way
that network programs can focus on the desired
functionality without having to program the detailed
actions.
TRAFFIC ENGINEERING:
Traffic engineering is a method for dynamically analyzing, regulating, and predicting the behavior of data
flowing in networks with the aim of performance optimization to meet service level agreements (SLAs).
It involves establishing routing and forwarding policies based on QoS requirements.
With SDN, the tasks of traffic engineering should be considerably simplified compared with a non-SDN
network. The following traffic engineering functions have been implemented as SDN applications:
- On-demand virtual private networks
- Load balancing
- Energy-aware routing
- QoS for broadband access networks
- Scheduling/optimization
- Traffic engineering with minimal overhead
- Dynamic QoS routing for multimedia apps
- Fast recovery through fast-failover groups
- QoS policy management framework
- QoS enforcement
- QoS over heterogeneous networks
- Multiple packet schedulers
- Queue management for QoS enforcement
- Divide and spread forwarding tables
POLICYCOP:
PolicyCop is an instructive example of a traffic engineering SDN
application: an automated QoS policy enforcement framework.
It leverages the programmability offered by SDN and
OpenFlow for dynamic traffic steering, flexible flow level
control, dynamic traffic classes, and custom flow
aggregation levels. Key features of PolicyCop are that it
monitors the network to detect policy violations and
reconfigures the network to reinforce the violated
policy.
MEASUREMENTS AND MONITORING:
The area of measurement and monitoring applications can roughly be divided into two categories:
1. Applications that provide new functionality for other networking services
 An example is in the area of broadband home connections. If the connection is to an
SDN-based network, new functions can be added to the measurement of home network
traffic and demand, allowing the system to react to changing conditions.
2. Applications that add value to OpenFlow-based SDNs
This category typically involves using different kinds of sampling and estimation techniques to reduce
the burden of the control plane in the collection of data plane statistics.
UNIT 4 – NETWORK SECURITY FOUNDATIONS:
MODELS FOR NETWORK SECURITY:
MODEL 1:
In model 1, information being transferred from one
party to another is done over an insecure
communications channel in the presence of possible
opponents.
Using this model requires the following:
1. Design a suitable algorithm for the security transformation
2. Generate the secret information (session keys) used by the algorithm
3. Develop methods to distribute and share the secret information
4. Specify a protocol enabling the principals to use the transformation and secret information for security services (e.g. authentication, confidentiality, integrity, etc…)
MODEL 2:
Model 2 is concerned with controlled access to
information or resources on a computer system, in the
presence of possible opponents.
Using this model requires the following:
1. Select appropriate gatekeeper functions to identify users
2. Implement security controls to ensure only authorized users access designated information or
resources
PROTOCOLS DESIGN:
Protocols can be very subtle. Innocuous changes can make a significant difference in a protocol. Several
well-known security protocols have serious flaws including IPSec, GSM, and WEP. Even if the protocol
itself isn’t flawed, a particular implementation can be.
It’s difficult to get protocols right, and therefore a stronger understanding of protocols is needed in
terms of what the protocol really achieves, how many assumptions the protocol needs, and whether the
protocol does anything unnecessary that could be left out without weakening it.
INFORMATION SECURITY PRINCIPLES:
INFORMATION PROTECTION:
Information is an important asset. The more information at someone’s command, the better they can
adapt to the world around them. In business, information is often one of the most important assets a
company can possess. Information differentiates companies and provides leverage that helps one
company become more successful than another.
Cryptography is all about controlling access to information. This includes access to learning information
as well as access to manipulate information.
Consider the image to the right. Alice and Bob are communicating. Alice wants
Bob to learn a message without Trudy learning it. Alice can send out a bit string
(the message) on the channel, but Bob and Trudy both get it.
There are three algorithms involved in controlling access to information:
1. Key Generation: What Alice and Bob do for creating the shared secret key (a bit string)
2. Encryption: What Alice does with the message and the key to obtain a ciphertext
3. Decryption: What Bob does with the ciphertext and the key to get the message (the plaintext)
out of it
All of these are computations which have attributes that will be discussed later.
CRYPTO TERMINOLOGY:
Cryptology is the art and science of making and breaking “secret codes”. Cryptography is the making of
“secret codes” and cryptanalysis is the breaking of “secret codes”. Crypto itself is all of the above and
more.
A cipher or cryptosystem is used to encrypt plaintext. The result of encryption is ciphertext. Ciphertext is
decrypted to recover plaintext. A key is used to configure a cryptosystem and a symmetric key
cryptosystem uses the same key to encrypt as to decrypt. A public key cryptosystem (a.k.a. asymmetric
key cryptosystem) uses a public key to encrypt and a private key to decrypt (sign).
Symmetric-key Cipher:
Public-key Cryptosystem:
Digital Signature Process:
FEASIBLE AND INFEASIBLE COMPUTATION:
Feasible Computation:
In analyzing complexity of algorithms, the computational complexity is
decided by how it grows with input size. Only the rough rate is
considered because the exact time depends on the technology used to
implement an algorithm. Polynomial time (e.g. O(n), O(n²), O(n³), …) is
considered feasible.
Infeasible Computation:
Super-polynomial time (e.g. O(2ⁿ), O(2^√n), …) is considered infeasible. In
other words, in super-polynomial time, as n grows the computation
time becomes infeasibly large. The goal of security is to make breaking
security infeasible for Trudy.
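The gap between polynomial and super-polynomial growth can be made concrete with a little arithmetic:

```python
# Compare a polynomial (n^3) and an exponential (2^n) step count as the
# input size n grows; the exponential quickly becomes infeasible.
def steps(n: int) -> dict:
    return {"n^3": n ** 3, "2^n": 2 ** n}
```

At n = 10 the two counts are comparable (1,000 vs 1,024); by n = 100 the exponential count (about 1.3 × 10³⁰) is beyond any realistic compute budget, which is exactly the property the defender wants Trudy to face.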
CRYPTO:
In crypto, the basic assumption is that the system is completely known to the attacker. The only secret is
the key. This is also known as Kerckhoffs’ principle, which states that crypto algorithms are not secret.
This assumption is made because experience has shown that secret algorithms tend to be weak when
exposed, and secret algorithms never remain secret anyway, so it’s better to find weaknesses beforehand.
SYMMETRIC KEY CRYPTO NOTATIONS:
𝑃 = plaintext block
𝐶 = ciphertext block
Encrypt 𝑃 with key 𝐾 to get ciphertext 𝐶
𝐶 = 𝐸(𝑃, 𝐾)
Decrypt 𝐶 with key 𝐾 to get plaintext 𝑃
𝑃 = 𝐷(𝐶, 𝐾)
Note that 𝑃 = 𝐷(𝐸(𝑃, 𝐾), 𝐾) and 𝐶 = 𝐸(𝐷(𝐶, 𝐾), 𝐾)
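The notation can be illustrated with a toy XOR cipher (not a real cryptosystem; the key must be as long as the block, one-time-pad style). For XOR, encryption and decryption are the same operation, so both identities above hold:

```python
def E(P: bytes, K: bytes) -> bytes:
    """Toy encryption: XOR each plaintext byte with the matching key byte."""
    assert len(P) == len(K)            # toy cipher needs a key as long as the block
    return bytes(p ^ k for p, k in zip(P, K))

D = E  # XOR is its own inverse, so decryption is the same operation
```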
PUBLIC KEY CRYPTO NOTATIONS:
Sign message 𝑀 with Alice’s private key: [𝑀]𝐴𝑙𝑖𝑐𝑒
Encrypt message 𝑀 with Alice’s public key: {𝑀}𝐴𝑙𝑖𝑐𝑒
Note that {[𝑀]𝐴𝑙𝑖𝑐𝑒 }𝐴𝑙𝑖𝑐𝑒 = 𝑀 and [{𝑀}𝐴𝑙𝑖𝑐𝑒 ]𝐴𝑙𝑖𝑐𝑒 = 𝑀
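Both identities can be checked with textbook RSA using toy parameters (insecure, illustration of the notation only; real systems use large keys and padding):

```python
# Toy RSA key: n = 61*53 = 3233, public exponent e, private exponent d
# chosen so that e*d = 1 mod (60*52), making the operations inverses.
n, e, d = 3233, 17, 2753

def encrypt(M: int) -> int:     # {M}Alice: anyone can use the public key
    return pow(M, e, n)

def sign(M: int) -> int:        # [M]Alice: only Alice holds the private key d
    return pow(M, d, n)
```

Applying one operation after the other recovers M in either order, matching {[𝑀]𝐴𝑙𝑖𝑐𝑒 }𝐴𝑙𝑖𝑐𝑒 = 𝑀 and [{𝑀}𝐴𝑙𝑖𝑐𝑒 ]𝐴𝑙𝑖𝑐𝑒 = 𝑀.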
SIMPLE SECURITY PROTOCOLS:
EXAMPLES:
Security Entry to NSA:
At the NSA, employees are given a badge that they must wear at all times when they are in the secure
facility. To enter the building, they must do the following:
1. Insert badge into reader
2. Enter PIN
3. Correct PIN?
 Yes? Enter
 No? Get shot by security guard
ATM Machine Protocol:
When withdrawing money from an ATM machine, the protocol is virtually identical to the secure entry
protocol of the NSA. To withdraw money, do the following:
1. Insert ATM card
2. Enter PIN
3. Correct PIN?
 Yes? Conduct transactions
 No? Machine eats card
Identify Friend or Foe (IFF):
The military has a need for many specialized security protocols. One such class of protocols is used to
identify friend or foe called IFF.
Consider the protocol that was used by the South African Air Force, or SAAF, when fighting in Angola.
SAAF were based in Namibia, and they were fighting soldiers stationed in Angola who were flying Soviet
MiG aircraft.
1. When the SAAF radar detected an aircraft approaching, a
random number (or challenge) 𝑁 was sent to the aircraft.
2. All SAAF aircraft knew a key 𝐾 that they used to encrypt the challenge (𝐸(𝑁, 𝐾)), which was then sent back to the radar station.
3. If the response matched the radar station’s own computation of 𝐸(𝑁, 𝐾), the aircraft was taken to be a friend; otherwise it was treated as a foe.
AUTHENTICATION PROTOCOLS:
In authentication, Alice must prove her identity to Bob. Alice and Bob can be humans or computers. For
mutual authentication, Bob is also required to prove he’s Bob. There may also be a need to establish a
session key. Other requirements might be established such as only using public keys, only using
symmetric keys, and only using hash functions.
Authentication on a stand-alone computer is relatively simple. The main concern is an attack on the
authentication software. Authentication over a network is much more complex as the attacker can
passively observe messages, replay messages, and active attacks are still possible (insert, delete, change
messages).
Simple Authentication:
Simple authentication may be adequate for stand-alone
systems, but it is inefficient and insecure for networked
systems. It’s subject to replay attacks, and Bob must know
Alice’s password.
The image to the right shows an
example of a replay attack. Basically,
Trudy will listen for Alice to send a
password, and then replay the
messages Alice sent to authenticate
herself so that Trudy can then
authenticate himself.
To the right is another example of simple
authentication that is more efficient, but it still
suffers from the same problem as the previous
version.
The example on the right is better than simple
authentication as it hides Alice’s password from both
Bob and Attackers. But it is unfortunately still subject
to replay attacks.
Challenge-Response:
To prevent replay attacks, challenge-response can be used. Suppose Bob wants to authenticate Alice. A
challenge would be sent from Bob to Alice and only Alice can provide the correct response. The
challenge would be chosen so that replay is not possible. This is accomplished by using a password that
only Alice would know and a “number used once” (or nonce).
To the right is an example of this type of
authentication system. Nonce is the challenge, and the
hash is the response. Nonce prevents replay attacks
and ensures freshness. Note that Bob must know
Alice’s password for this system to work.
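The scheme above can be sketched directly: Bob sends a fresh nonce, Alice replies with a hash of her password and the nonce, and Bob (who knows the password) recomputes and compares. Function names are ours; this is a sketch, not a production protocol:

```python
import hashlib
import hmac

def respond(password: str, nonce: bytes) -> bytes:
    """Alice's response to the challenge: h(password, nonce)."""
    return hashlib.sha256(password.encode() + nonce).digest()

def verify(stored_password: str, nonce: bytes, response: bytes) -> bool:
    """Bob recomputes the hash; constant-time compare avoids leaking info."""
    return hmac.compare_digest(respond(stored_password, nonce), response)
```

A replayed response fails because the next session uses a new nonce, which is exactly the freshness property the nonce provides.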
To the right is a more general case of challenge-response. To achieve this, hashed passwords can
work, but crypto might be better.
Recall symmetric key notation where the ciphertext 𝐶 is the result of encrypting the plaintext 𝑃 with the
key 𝐾 (𝐶 = 𝐸(𝑃, 𝐾)) and the reverse can also be done for decryption (𝑃 = 𝐷(𝐶, 𝐾)). Here the concern is
with attacks on protocols, not directly on the crypto. It’s assumed that the crypto algorithm is secure.
Symmetric key authentication works using a symmetric key 𝐾𝐴𝐵 which Alice and Bob share. Key 𝐾𝐴𝐵 is
known only to Alice and Bob and authentication is done by proving knowledge of the shared symmetric
key. To accomplish this, the key must not be revealed and replay attacks must not be allowed.
To the right is a secure method for Bob to
authenticate Alice. Alice doesn’t authenticate Bob,
however.
Is it possible to implement mutual authentication with
this method? The implementation on the right would
not work because Alice could be Trudy (or anybody
else).
Since there’s a secure one-way authentication protocol,
it would seem the obvious thing to do is use the protocol
twice as shown to the right. This still doesn’t provide
mutual authentication because of attacks similar to the
MiG-in-the-Middle attack.
Consider the situation to the right. Notice that it’s
assumed Trudy knows how to encrypt in the same
way that Alice and Bob encrypt.
Mutual Authentication:
Using the one-way authentication protocol twice does not provide secure mutual authentication. Protocols are very subtle, and the obvious approach may not end up being secure. Also, if assumptions or environments change, then protocols may no longer work. This is a common source of security failure.
Is the system more secure with the changes made as
shown to the right? They may seem insignificant but
they actually do help.
Public Key Authentication:
Recall public key notation, where a message 𝑀 encrypted with Alice's public key is written {𝑀}𝐴𝑙𝑖𝑐𝑒 and 𝑀 signed by Alice is written [𝑀]𝐴𝑙𝑖𝑐𝑒. Anybody can do public key operations, but only Alice can use her private key (sign).
Consider the scenario on the right. Is this secure? Trudy can get
Alice to decrypt anything so there must be two pairs of keys
instead.
Similarly, the situation on the right where Alice is signing R is not
secure because Trudy can get Alice to sign anything. Again, there
must be two key pairs.
With public keys, never use the same key pair for encryption and signing. One key pair should be used
for encryption/decryption and a different key pair should be used for signing/verifying signatures.
Session Keys:
In addition to authentication, a session key is often required, where one symmetric key is used per session. Can a symmetric key be established and shared? In some cases perfect forward secrecy (PFS) may be required, which is discussed more later.
The example to the right demonstrates encryption only for authentication and session keys. This is secure for the key, but not secure for mutual authentication.
The second example on the right shows a similar situation where the
authentication and session keys are signed. This is secure for mutual
authentication, but the key is not secret.
The third example on the right seems to be okay for both mutual
authentication and session keys. It uses both signing and encrypting
in that specific order.
The fourth and final example on the right seems to also be okay. It
uses both encrypting and signing in that specific order. With this
example, anyone can see {𝑅, 𝐾}𝐴𝑙𝑖𝑐𝑒 and {𝑅 + 1, 𝐾}𝐵𝑜𝑏 .
Perfect Forward Secrecy:
The concern with the above methods is that if Alice encrypts a message with a shared key 𝐾𝐴𝐵 and sends
the ciphertext to Bob, then Trudy could record the ciphertext and later attack Alice’s (or Bob’s)
computer to find 𝐾𝐴𝐵 . Then, Trudy could decrypt and record messages.
Perfect forward secrecy (PFS) is where Trudy cannot later decrypt recorded ciphertext even if they get
key 𝐾𝐴𝐵 or other secrets.
Suppose Alice and Bob share key 𝐾𝐴𝐵 . For perfect forward secrecy, Alice and Bob cannot use 𝐾𝐴𝐵 to
encrypt and instead they must use a session key 𝐾𝑠 and forget that key after it’s used. The problem is
finding a session key 𝐾𝑠 that Alice and Bob can agree on to ensure PFS.
Diffie-Hellman:
Diffie-Hellman is a key exchange algorithm that was invented by Whitfield Diffie and Martin Hellman. It
is used to establish shared symmetric keys but not for encrypting or signing. The security rests on the
difficulty of the discrete logarithm problem, which asks to find 𝑘 given 𝑔, 𝑝, and (g^k mod p). The discrete log problem is very difficult to solve.
Let 𝑝 be prime and let 𝑔 be a generator: for any 𝑥 ∈ {1, 2, …, 𝑝 − 1} there is an 𝑛 such that x = g^n mod p.
The process of using the Diffie-Hellman algorithm is as follows:
1. Alice selects secret value 𝑎
2. Bob selects secret value 𝑏
3. Alice sends g^a mod p to Bob
4. Bob sends g^b mod p to Alice
5. Both compute the shared secret g^ab mod p
6. The shared secret can be used as a symmetric key
Both 𝑔 and 𝑝 are public, but Alice's exponent 𝑎 and Bob's exponent 𝑏 are secret in this algorithm. Bob computes (g^a)^b mod p = g^ab mod p and Alice computes (g^b)^a mod p = g^ba mod p. Both could use K = g^ab mod p as the symmetric key.
Suppose that Bob and Alice use g^ab mod p as a symmetric key. Trudy can see g^a mod p and g^b mod p, but (g^a)(g^b) mod p = g^(a+b) mod p ≠ g^ab mod p. If Trudy can find 𝑎 or 𝑏 then the system is broken, and if Trudy can solve the discrete log problem then they could find 𝑎 or 𝑏.
If Alice and Bob agree on the values 𝑝 = 113 and 𝑔 = 23, then Alice selects the secret value 4 and sends Bob the value 23^4 mod 113 = 53, while Bob selects the secret value 11 and sends Alice the value 23^11 mod 113 = 27. Bob calculates the common key as 53^11 mod 113 = 2 and Alice calculates the common key as 27^4 mod 113 = 2.
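The arithmetic in this numeric example can be checked with Python's three-argument pow(), which performs modular exponentiation directly:

```python
# Reproducing the numeric Diffie-Hellman example with modular exponentiation.
p, g = 113, 23   # public values agreed on by Alice and Bob
a, b = 4, 11     # Alice's and Bob's secret exponents

A = pow(g, a, p)  # Alice sends 23^4 mod 113
B = pow(g, b, p)  # Bob sends 23^11 mod 113
assert (A, B) == (53, 27)

# Each side raises the other's public value to its own secret exponent:
key_bob = pow(A, b, p)    # 53^11 mod 113
key_alice = pow(B, a, p)  # 27^4 mod 113
assert key_bob == key_alice == 2  # the shared secret g^(ab) mod p
```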
The Diffie-Hellman system is still subject to man-in-the-middle attacks as shown on the right. Trudy would share the secret g^at mod p with Alice and the secret g^bt mod p with Bob. Note that Alice and Bob don't know Trudy exists.
To prevent a man-in-the-middle attack on the Diffie-Hellman system:
- Encrypt the exchange with a symmetric key
- Encrypt the exchange with a public key
- Sign the values with a private key
Diffie-Hellman can be used for PFS. To get PFS and prevent man-in-the-middle attacks, create session key Ks = g^ab mod p, after which Alice forgets 𝑎 and Bob forgets 𝑏. This is called the ephemeral Diffie-Hellman system. Not even Alice and Bob can later recover Ks using this system.
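A minimal sketch of ephemeral Diffie-Hellman, reusing the toy parameters 𝑝 = 113 and 𝑔 = 23 from the earlier numeric example (real deployments use much larger standardized groups; the helper name is illustrative):

```python
import secrets

p, g = 113, 23  # toy public parameters; far too small for real use

def ephemeral_session_key() -> int:
    """Derive one session key from fresh one-time exponents, then discard them."""
    a = secrets.randbelow(p - 2) + 1   # Alice's one-time secret exponent
    b = secrets.randbelow(p - 2) + 1   # Bob's one-time secret exponent
    A, B = pow(g, a, p), pow(g, b, p)  # the only values sent over the wire
    ks = pow(B, a, p)                  # Alice's computation of the key
    assert ks == pow(A, b, p)          # Bob arrives at the same key
    del a, b                           # "forget" the exponents -> PFS
    return ks

session_key = ephemeral_session_key()
assert 1 <= session_key < p
```

Because the exponents are deleted after use, even a later compromise of both endpoints yields nothing that recovers this session's key.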
Another way to get both mutual authentication and PFS is the approach to the right. The session key is Ks = g^ab mod p, and Alice forgets 𝑎 while Bob forgets 𝑏. If Trudy later gets Bob's and Alice's secrets, they still cannot recover session key Ks.
Timestamps:
A timestamp 𝑇 is the current time. Timestamps are used in many security protocols (e.g. Kerberos). Timestamps reduce the number of messages needed and act like a nonce that both sides know in advance. Clocks are never exactly the same, however, so some clock skew must be allowed, which adds a vulnerability to replay attacks.
The approach to the right is an example of how timestamps
can be used to make a system secure. Note the ordering of
encryption and signing.
The second example seems to be secure, but Trudy can use
Alice’s public key to find {𝑇, 𝐾}𝐵𝑜𝑏 . This allows Trudy to obtain
the Alice-Bob session key 𝐾𝑠 but Trudy must act within a clock
skew.
Summary:
For public key authentication:
- Sign and encrypt with nonce: Secure
- Encrypt and sign with nonce: Secure
- Sign and encrypt with timestamp: Secure
- Encrypt and sign with timestamp: Insecure
For mutual authentication with public key:
- Sign and encrypt with nonce: Secure
- Encrypt and sign with nonce: Secure
- Sign and encrypt with timestamp: Secure
- Encrypt and sign with timestamp: Secure
AUTHENTICATION AND TCP:
TCP-BASED AUTHENTICATION:
TCP is not intended for use as an authentication protocol; however, IP addresses in a TCP connection are often used for authentication. One mode of IPSec uses IP addresses for authentication, and this can cause problems.
Recall the TCP three-way handshake. The initial
sequence numbers are SEQ a and SEQ b which are
supposed to be random.
If they aren’t random, the system is subject to a TCP
authentication attack as shown in the second figure.
Trudy cannot see what Bob sends, but they can send
packets to Bob while posing as Alice. Trudy must
prevent Alice from receiving Bob’s packets or else the
connection will terminate. If a password (or other
authentication method) is required, this attack fails
but if the TCP connection is relied on for
authentication, then the attack can succeed. It is
generally a bad idea to rely on a TCP connection for
authentication.
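As an illustration of why the initial sequence numbers must be random, consider a hypothetical weak stack that derives each new ISN from the previous one by a fixed increment (the generator below is an illustrative assumption, not any particular TCP implementation):

```python
def next_isn_predictable(prev_isn: int, increment: int = 64_000) -> int:
    """A weak ISN generator: each new connection's ISN is the last plus a constant."""
    return (prev_isn + increment) % 2**32

# Trudy opens a legitimate connection to Bob and observes his ISN...
observed_isn = 1_234_567
# ...then predicts the ISN Bob will pick for the spoofed "Alice" connection,
# which lets her send the final ACK of the three-way handshake without
# ever seeing Bob's SYN-ACK.
predicted = next_isn_predictable(observed_isn)
actual = next_isn_predictable(observed_isn)  # what the weak stack really picks
assert predicted == actual  # the blind handshake completes
```

With a cryptographically random 32-bit ISN, Trudy's guess succeeds with probability about 2^-32, which is why modern stacks randomize it.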
UNIT 6 – NETWORK DESIGN AND CONFIGURATION:
CONNECTING DEVICES & COMMAND LINE INTERFACES:
CONNECTING DEVICES:
Cable Uses:
Console Connection:
A console connection with a router uses the router’s console port. One of the
following can be used to establish a console connection:
1. Connect a terminal directly to the router
2. Connect a PC running terminal emulation software to the console port
through the PC’s COM port
Terminal Settings:
When connecting a Cisco device through the console port, the HyperTerminal program included with
Windows can be used to make a console connection with the router. The default console port settings
are listed below:
- 9600 baud (or a rate supported by the router)
- Data bits = 8 (default)
- Parity = none (default)
- Stop bits = 1 (default)
- Flow control = none
VTY:
With a virtual teletype (VTY) connection, the router can be connected to and administered through a network connection. This connection does not require close physical proximity to the router. The following configuration tasks must be completed before a VTY connection can be made:
1. Configure a router interface with an IP address
2. Configure the router VTY line
3. Set the enable secret password
For a Windows PC, HyperTerminal or Telnet can be used to make a VTY connection using the router's IP.
BACK-TO-BACK CONNECTIONS:
When a router is being configured to connect to a network through a serial interface, the router must be
connected to a device (such as a CSU/DSU or another router) that provides clocking signals.
When two routers are configured in a back-to-back configuration through their serial ports, one router
interface must be configured to provide the clocking signals for the connection. The router providing the
clocking is known as the DCE (data circuit-terminating equipment) and the router not providing clocking
is known as the DTE (data terminal equipment).
The DCE Interface is identified in two ways:
1. The cable connecting the two routers has both a DCE and DTE end. Connect the DCE end of the
cable to the interface which will be the DCE device.
2. The DCE interface is configured to provide a clocking signal with the clock rate command. If the
clock rate command is not issued, clocking is not provided and the line between the two routers
will not change to up.
COMMAND LINE INTERFACE (CISCO IOS):
The Cisco internetwork operating system (IOS) is the operating system for the Cisco router. It includes all
the programs, commands, and configuration options that the router uses to run and complete its tasks.
The terminal or terminal emulation software program interacts with the router through two executable
(EXEC) modes: user mode or privileged mode. Each mode has its own distinctive prompt. The user mode
prompt is denoted by > and the privileged mode prompt is denoted by #.
For example, to change the router name, enter the commands to the right.
Command Mode Prompts and Commands:
Mode | Prompt | To Enter | To Exit
User EXEC | Router> | Press <enter>, login | exit, logout, or disconnect
Privileged EXEC | Router# | enable | disable (exit disconnects)
Global Configuration | Router(config)# | config terminal | exit, ^Z
Line | Router(config-line)# | line <type> <number> | exit, ^Z
Interface | Router(config-if)# | interface <type> <number> | exit, ^Z
Subinterface | Router(config-subif)# | interface <type> <number>.<subnumber> | exit, ^Z
Router | Router(config-router)# | router <type> | exit, ^Z
Setup | None, interactive dialog | setup, or erase the startup config and reload | ^C
ROUTING PROTOCOLS:
ROUTING CONCEPTS:
The term routing is used for taking a packet from one device and sending it through the network to
another device on a different network. Routers route traffic to all the networks in the internetwork. To
be able to route packets, a router must (at least) know the following:
- Destination address
- Neighbor routers from which it can learn about remote networks
- Possible routes to all remote networks
- The best route to each remote network
- How to maintain and verify routing information
The router learns about remote networks from
neighboring routers or from an administrator. The
router then builds a routing table as shown to the
right. A routing table is essentially a map of the
internetwork that describes how to find the remote
networks.
If a network is directly connected, the router already
knows how to get to it. If a network isn’t directly
connected to the router, the router must learn how to
get to the remote network in one of two ways:
1. By using static routing, meaning that someone must hand type all network locations into the
routing table
2. Through dynamic routing
If static routing is used, the administrator is responsible for updating all changes by hand into all routers.
In dynamic routing, a protocol on one router communicates with the same protocol running on
neighboring routers, and the routers update each other about all the networks they know about before
placing that information into the routing table. With dynamic routing, if a change occurs in the network,
then the dynamic routing protocols automatically inform all routers about the event.
There are three classes of dynamic routing protocols:
1. Distance Vector
2. Link State
3. Hybrid
Each organization that has been assigned a network
address from an ISP is considered an Autonomous System
(AS). That organization is free to create one large network
or divide the network into subnets.
Routers are used within an autonomous system to
segment (subnet) the network. They are also used to
connect multiple autonomous systems together.
Routing protocols can be classified based on whether they are routing traffic within or between
autonomous systems. Interior gateway protocol (IGP) is a protocol that routes traffic within an
autonomous system (e.g. RIP, OSPF, or IGRP). Exterior gateway protocol (EGP) is a protocol that routes
traffic outside or between autonomous systems (e.g. Border Gateway Protocol (BGP)). This unit will
focus on routing information protocol (RIP) and open shortest path first (OSPF) protocols.
Administrative distances are used to rate the
credibility of the routing information received on a
router from a neighboring router. An administrative
distance is an integer from 0 to 255, where 0 is the
most trusted and 255 means traffic will not be passed
via that route. The default administrative distances
that Cisco routers use are in the table to the right.
STATIC ROUTING:
Manually Configuring Routes:
Static routes lock a router into using the route specified
for all packets. Configuring static routes is useful for
increasing security, and for small networks that have
only one possible path. When the router cannot find a
packet’s address in its routing table, it sends the packet
to the default router.
The image to the right shows how to manually configure
routing through a CLI. A default router can be configured
as shown below the image to the right.
Task | Command
Identify a next hop router to receive packets sent to the specified destination network. | Router(config)#ip route <destination> <next_hop>
Identify the interface used to forward packets to the specified destination network. | Router(config)#ip route <destination> <interface>
Example 1:
Configure a static route on router A to network 12.0.0.0. Router B has already been configured with a
static route to network 10.0.0.0. Use the syntax:
ip route [destination network] [mask] [out interface] [next hop router]
Solution:
Router A is configured to route to 12.0.0.0/8, via Serial1 as the interface and 11.0.0.2 as the next hop
address.
A(config)#ip route 12.0.0.0 255.0.0.0 s1 11.0.0.2
Example 2:
The London and Toronto routers are connected as shown in the diagram.
Configure static routes on both routers so that each router can communicate
with all connected networks.
Solution:
London(config)#ip route 211.118.64.0 255.255.255.0 133.238.0.2
Toronto(config)#ip route 197.12.155.0 255.255.255.0 133.238.0.1
Example 3:
The London, Windsor, and Toronto routers are connected as shown in the
diagram. Configure static routes on all routers so that each router can
communicate with all other routers and networks in the diagram.
Solution:
Windsor(config)#ip route 12.0.0.0 255.0.0.0 s1 11.0.0.1
Windsor(config)#ip route 13.0.0.0 255.0.0.0 s1 11.0.0.1
London(config)#ip route 10.0.0.0 255.0.0.0 s0 11.0.0.2
London(config)#ip route 13.0.0.0 255.0.0.0 s1 12.0.0.1
Toronto(config)#ip route 11.0.0.0 255.0.0.0 s0 12.0.0.2
Toronto(config)#ip route 10.0.0.0 255.0.0.0 s0 12.0.0.2
DISTANCE VECTOR PROTOCOLS:
Distance vector protocols find the best path to a remote
network by judging the distance. A hop is counted each time a
packet goes through a router. The least number of hops to the
destination is determined to be the best route. RIP and IGRP are
distance vector protocols.
The following occurs in distance vector routing:
- Routers send updates only to their neighbors
- Routers send their entire routing table
- Tables are sent at regular intervals (each router is configured to specify its own update interval)
- Routers modify their tables based on information received from their neighbors
One problem distance vector protocols face is the “count to infinity” problem where a path being
updated causes a cyclical increase of the estimated distance between a set of routers.
Because routers use the distance vector method to send their entire routing table at specified intervals,
they are susceptible to a condition known as a routing loop (also called a count-to-infinity condition).
Routing loops can occur when one network (network 4.0 in the example to the right) goes down and the router attached to it (router C) drops the route from its table. Before this event propagates to the other routers (routers B and A), each router, starting with router C, assumes its own information about network 4.0 is stale and updates it based on the hop count advertised by its neighbor. This process repeats, increasing the hop count forever.
Routing loops can occur because every router is not updated simultaneously, or even close to it. The
following methods can be used to minimize the effects of a routing loop:
- Maximum Hop Count
- Split Horizon
- Route Poisoning
- Hold-downs
Maximum Hop Count:
Distance vector routing protocols set a specified
metric value to indicate infinity. Once a router
“counts to infinity” it marks the route as
unreachable. RIP permits a hop count of up to
15, so anything that requires 16 hops is deemed
unreachable.
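The count-to-infinity behavior and the hop-count cap can be illustrated with a toy two-router simulation (the topology and strict update ordering are simplifying assumptions):

```python
INFINITY = 16  # RIP's "unreachable" metric

# Router C was directly connected to network 4.0; router B reached it through
# C at 2 hops. Network 4.0 goes down: C marks it unreachable, but B still
# advertises its stale 2-hop route, so the metrics leapfrog upward each round.
dist_b, dist_c = 2, INFINITY
rounds = 0
while dist_b < INFINITY or dist_c < INFINITY:
    dist_c = min(INFINITY, dist_b + 1)  # C trusts B's advertisement, adds a hop
    dist_b = min(INFINITY, dist_c + 1)  # B then trusts C's, adds a hop
    rounds += 1

# Without the cap this exchange would never terminate; with it, both routers
# eventually agree the route is unreachable.
assert dist_b == dist_c == INFINITY
```

Split horizon would stop this particular loop earlier by preventing C from re-learning the route over the same interface B advertised it on.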
Split Horizon Rule:
The split horizon rule is used to prevent routing
loops and specifies that a router shouldn’t
advertise a network through the interface from
which the update came.
Route Poisoning:
Split horizon with poison reverse is a rule that states that once a router learns of an unreachable route through an interface, it advertises the route as unreachable back through that same interface.
Hold-Downs:
Hold-down timers allow a router to not accept
any changes to a route for a specified period of
time. The point of using hold-down timers is to
allow routing updates to propagate through the
network with the most current information.
Advantages and Disadvantages of Distance Vector Protocols:
The distance vector method has the following advantages:
- Stable and proven method (distance vector was the original routing algorithm)
- Easy to implement and administer
- Bandwidth requirements negligible for a typical LAN environment
- Requires less hardware and processing power than other routing methods
Distance vector has the following disadvantages:
- Relatively long time to reach convergence (updates sent at specified intervals)
- Routers must recalculate their routing tables before forwarding changes
- Susceptible to routing loops (count-to-infinity)
- Bandwidth requirements can be too great for WAN or complex LAN environments
Configuring & Troubleshooting RIP:
Manually configuring routes is fine if there are only a few possible routes and they don’t change over
time. However, if the route goes down, the network will be inaccessible through the default route until
the administrator modifies the routing entry.
A more flexible method involves configuring a routing protocol which dynamically discovers routes and
automatically adjusts to changes in the network topology. In this unit, it will be shown how to configure
the RIP as a distance vector routing protocol for IP.
To configure an RIP routing protocol, complete the following basic steps:
1. Enable IP routing if not already enabled (using the “ip routing” command)
2. At the router configuration mode use the “router RIP” command
3. Identify the networks that will participate in dynamic routing using the “network” command
When the network command is used to identify the networks that will participate in dynamic routing,
the below rules should be followed:
- Identify only networks to which the router is directly connected
- Use the classful network address, not a subnetted network address
For example, to configure RIP on router A, identify only network 1, network 2, and network 3, even though it's known that router A can reach networks 4 and 5. Router A will learn about these networks automatically through RIP.
EXAMPLE 1:
COMMANDS LIST:
Task | Command
Enable IP routing for the entire router. | Router(config)#ip routing
Enter the router configuration mode for the specified routing protocol. | Router(config)#router <protocol>
Identify networks that will participate in the router protocol. | Router(config-router)#network <address>
Disable IP routing on the router. | Router(config)#no ip routing
Disable RIP and remove all RIP networks. | Router(config)#no router rip
Remove a specific RIP network. | Router(config-router)#no network <network>
Prevent routing update messages from being sent out through the specified router interface. | Router(config)#passive-interface <interface>
The commands to the right enable IP routing and identify two
networks that will participate in the RIP routing protocol.
EXAMPLE 2:
Consider the scenario where the London and Toronto routers have been partially configured. The
following tasks have been completed:
1. IP addresses have been assigned to all interfaces as
indicated in the diagram
2. The clock rate has been added to the DCE device
3. All interfaces have been brought up
Configure both routers to share routing information for all connected networks.
Solution: To complete this scenario, enter the following commands:
EXAMPLE 3:
Consider the scenario where the London and the Toronto routers
have three interfaces (one Ethernet and two Serial). London
Serial1 connects to Toronto Serial0. The other Serial interfaces
connect to existing networks. All interfaces have been configured
with IP addresses and are enabled. Configure both routers to
share routing information about all connected networks.
Solution: To complete this scenario, enter the following commands:
EXAMPLE 4:
Consider the scenario where the Mickey and Minnie routers are
connected back-to-back as shown in the network diagram. The
Ethernet interfaces on both routers are already configured.
Configure the serial link between the routers (assign an IP address
and bring the link up) and configure each router to share routing
information about all connected networks.
Solution: To complete this scenario, enter the following commands:
IMPORTANT FACTS:
The Routing Information Protocol (RIP) is a simple and effective routing protocol for small-to-medium-sized networks. It has the following characteristics when running on a Cisco router.
- RIP uses hop and tick counts to calculate optimal routes
- RIP routing is limited to 15 hops to any location (16 hops indicates the network is unreachable)
- RIP uses the split horizon with poison reverse method to prevent the count-to-infinity problem
- RIP uses only classful routing, so it uses full address classes, not subnets
- RIP broadcasts updates to the entire network
- RIP can maintain up to six multiple paths to each network, but only if the cost is the same
- RIP uses four different kinds of timers to regulate its performance
The route update timer sets the interval (typically 30 seconds) between periodic routing updates, in
which the router sends a complete copy of its routing table out to all neighbors. The route invalid timer
determines the length of time that must pass (180 seconds) before a router determines that a route has
become invalid. It will come to this conclusion if it hasn’t heard any updates about a particular route for
that period. When that happens, the router will send out updates to all its neighbors letting them know
that the route is invalid.
The hold-down timer sets the amount of hold time when an update packet is received that indicates the
route is unreachable. This continues until either an update packet is received with a better metric or
until the hold-down timer expires. The default is 180 seconds.
The route flush timer sets the time between a route becoming invalid and its removal from the routing
table (240 seconds). This gives the router enough time to tell its neighbors about the invalid route
before the local routing table is updated.
Because RIP uses the hop count in determining the best route to a
remote network, it might end up selecting a less than optimal route. For
example, suppose that two routes exist between two networks. One
route uses a 56 kbps link with a single hop, while the other route uses a
Gigabit link that has two hops. Because the first route has fewer hops,
RIP will select this route as the optimal route.
If there are problems with routers not sharing or learning routes, use the following commands to help identify the problem:
- show ip route
- show ip protocols
- show run
- debug ip rip
LINK STATE PROTOCOLS:
Link state routing protocols are also known as shortest path first algorithms, as they are built around Dijkstra's shortest path first (SPF) algorithm.
In link state protocols, each router creates three separate tables:
- One to keep track of the directly connected neighbors
- One to determine the topology of the entire network
- One to use as the routing table
OSPF is an IP routing protocol that is completely link state. Link state protocols send the updated
information (instead of the entire table) to the directly connected neighbors.
Routing Process:
Routers using link state routing protocols follow the below steps to reach convergence:
1. Each router learns about its own directly connected networks
2. Link state routers exchange a hello packet to “meet” other directly connected link state routers
3. Each router builds its own Link State Packet (LSP) which includes information about neighbors
such as neighbor ID, link type, & bandwidth
4. After the LSP is created the router floods it to all neighbors who then store the information and
forward it until all routers have the same information
5. Once all the routers have received all the LSPs, the routers then construct a topological map of
the network which is used to determine the best routes to a destination
Sending Hello Packets:
Connected interfaces that are using the same link state
routing protocols will exchange hello packets, and once
routers learn they have neighbors, they form an
adjacency.
Link & Link State:
Links are interfaces on a router. Link states are the information about
the state of the links. The diagram to the right shows an example of
the link states that may be stored within a router using link state
routing protocols.
Building Link State Packets (LSPs):
Each router builds its own link state packets (LSPs). The contents of
an LSP are as follows:
- The state of each directly connected link
- Information about neighbors such as neighbor ID, link type, and bandwidth
Flooding LSPs:
Once LSPs are created, they are forwarded out to neighbors. After receiving an LSP, the neighbor continues to forward it throughout the routing area. LSPs are sent out under the following conditions:
- During initial router startup or startup of the routing process
- When there is a change in topology
Databases:
Routers use a database to construct the topology map of the
network. This can be seen to the right.
Routing Tables:
Once the SPF algorithm has determined the shortest path
routes, these routes are placed in the routing table as can be
seen below and to the right.
Advantages and Disadvantages of Link State Protocols:
The link state method has the following advantages over
the distance vector method.
- Less convergence time (because updates are forwarded immediately)
- Not susceptible to routing loops
- Less susceptible to erroneous information (because only firsthand information is broadcast)
- Bandwidth requirements negligible for a typical LAN environment
Link state has the following disadvantages:
- The link state algorithm requires greater CPU and memory capability to calculate the network topology and select the route
- Increased network traffic when the topology changes
OSPF:
DESIGN:
Typically, OSPF is designed in a hierarchical fashion, which means that the larger internetwork can be separated into smaller internetworks called areas. This design helps decrease routing overhead and speed up the convergence process.
ROUTING PROTOCOL:
As part of the OSPF process, each router is assigned a router ID
(RID). The router ID is the IP address assigned to a loopback (logical) interface and if a loopback interface
is not defined, the highest IP address of the router's physical interfaces is used.
The following example shows how a loopback (logical) interface can be defined on a router:
COMMANDS LIST:
Configuration is as simple as defining the OSPF process using the “router ospf” command and then identifying the networks that will participate in OSPF routing. The following table lists the commands for configuring OSPF, along with commands useful for monitoring and troubleshooting it:
Task | Command
Enter configuration mode for OSPF. The ID identifies a separate routing process on the router. Note: process IDs do not need to match between routers (in other words, two routers configured with different process IDs might still share OSPF information). | router ospf <process-id>
Identify networks that participate in OSPF routing. n.n.n.n is the network address (this can be a subnetted, classless network). m.m.m.m is a wildcard mask (not the normal subnet mask) that identifies the subnet address. <number> is the OSPF area number, which must match between routers. | network n.n.n.n m.m.m.m area <number>
View the routing table and OSPF entries. | show ip route
View neighbor OSPF routers (shows the neighbor router ID numbers). | show ip ospf neighbor
View interfaces that are running OSPF: area number, process ID, router ID, timer settings, and adjacent routers. | show ip ospf interface
WORKING WITH WILDCARD MASKS:
The wildcard mask is used with access list statements and OSPF configuration. To calculate the wildcard mask, identify the decimal value of the subnet mask and subtract each octet of the subnet mask from 255.
For example, suppose you wanted to configure the subnet 10.12.16.0/21. To find the wildcard mask:
- The decimal value that covers 21 bits in the subnet mask is 255.255.248.0
- The wildcard mask would be:
o First octet: 255 - 255 = 0
o Second octet: 255 - 255 = 0
o Third octet: 255 - 248 = 7
o Fourth octet: 255 - 0 = 255
This gives you the mask of 0.0.7.255.
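The octet-by-octet subtraction can be sketched in Python (the helper names are illustrative):

```python
def subnet_mask(prefix: int) -> list[int]:
    """Dotted-decimal octets of a /prefix subnet mask."""
    bits = (0xFFFFFFFF << (32 - prefix)) & 0xFFFFFFFF
    return [(bits >> shift) & 0xFF for shift in (24, 16, 8, 0)]

def wildcard_mask(prefix: int) -> str:
    """Subtract each subnet-mask octet from 255, as described above."""
    return ".".join(str(255 - octet) for octet in subnet_mask(prefix))

print(".".join(map(str, subnet_mask(21))))  # 255.255.248.0
print(wildcard_mask(21))                    # 0.0.7.255
```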
EXAMPLE 1:
The graphic below shows a sample network with two OSPF areas. Use the listed commands to configure
OSPF on each router.
EXAMPLE 2:
In this scenario, the Toronto router is configured for OSPF routing for area 0. Configure the London
router to share routing information with the Toronto router through OSPF for area 0 about all
connected networks. All interfaces on the London router are already enabled and configured with IP
addresses.
Solution: Complete the following steps on London (you can choose whichever process ID number you
like with the router ospf command):
EXAMPLE 3:
The London and Toronto routers are connected as shown in the diagram. All interfaces on both routers
are configured with IP addresses and are enabled (the interfaces are up). Configure the London and
Toronto routers to share routing information about all connected networks through OSPF area 0 for all
networks.
Solution: To complete this scenario, use the router ospf command (with any process ID number you
want), then the corresponding network command for all directly connected networks:
ACCESS LIST CONCEPTS AND CONFIGURING:
ACCESS LIST CONCEPTS:
Routers use access lists to control incoming or outgoing traffic. Access lists have the following characteristics:
- Access list entries identify either permitted or denied traffic.
- Each access list applies only to a specific protocol.
- Each router interface can have up to two access lists for each protocol, one for incoming traffic and one for outgoing traffic.
- When an access list is applied to an interface, it identifies whether the list restricts incoming or outgoing traffic.
- Each access list can be applied to more than one interface. However, each interface can only have one incoming and one outgoing list.
There are also a few important rules that a packet follows when it's being compared with an access list:
- It is always compared with each line of the access list in sequential order (i.e. it always starts with the first line in the access list, then goes to line 2, then line 3, and so on).
- It will be continually compared with the lines in the access list until a match is made, or it reaches the end of the access list.
- Once the packet matches the condition on a line of the access list, the packet is acted upon and no further comparisons take place.
- There is an implicit "deny" at the end of each access list, which means that if a packet doesn't match any lines in the list, the packet will be dropped.
There are two general types of access lists: standard and extended. All decisions made in standard
access lists are based on the packet source address only (e.g. IP address in an IP packet). Extended
access lists can evaluate many of the other fields from layer 3 and 4 as well. The decision can be made
based on source and destination addresses, the protocol type, and the port numbers.
CONFIGURING STANDARD ACCESS LISTS:
Standard IP access lists filter network traffic by
examining the source IP address in a packet. Standard
IP access lists can be created by using the access-list
numbers 1-99 or 1300-1999. Access list types are
generally differentiated by number; based on the
number used, the router knows which syntax to
expect. To the right is a list of many access-list
number ranges.
Most access lists are created using the following process. Create
the list and add list entries with the “access-list” command. Then,
apply the list to an interface (for IP, use the “ip access-group”
command).
Example 1:
Create a standard IP access list that rejects all traffic except traffic
from host 10.12.12.16 and applies the list to the Serial0 interface.
Solution:
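The solution image is not reproduced here. A minimal sketch follows; the list number 10 is an arbitrary choice from the standard range, and the inbound direction is an assumption since the example does not state it:

```
Router(config)#access-list 10 permit host 10.12.12.16
Router(config)#interface serial0
Router(config-if)#ip access-group 10 in
```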
Remember that each access list contains an implicit deny any entry. When created, the access list denies
all traffic except traffic explicitly permitted by permit statements in the list.
Example 2:
ECE-SE3314b router is already configured with a standard IP access list (number 45) that prevents traffic
from hosts 77.18.20.115 and 82.65.211.12. Add additional statements to prevent traffic from the
following hosts: 88.111.5.5 and 200.15.66.11.
Solution:
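The solution image is missing. A sketch of the additional statements follows; new statements are appended to the end of a numbered list:

```
Router(config)#access-list 45 deny host 88.111.5.5
Router(config)#access-list 45 deny host 200.15.66.11
```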
Example 3:
ECE-SE3314b router is already configured with a standard IP access list (number 90) that prevents all
traffic except traffic from hosts 77.18.20.115 and 82.65.211.12. Add additional statements to allow
traffic from the following hosts: 192.168.12.15 and 222.48.93.77.
Solution:
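The solution image is missing. A sketch of the additional permit statements follows:

```
Router(config)#access-list 90 permit host 192.168.12.15
Router(config)#access-list 90 permit host 222.48.93.77
```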
Example 4:
Standard IP access 4 has already been configured on the ECE-SE3314b router. Apply this list to the
Serial1 interface to control the traffic sent out that interface.
Solution:
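The solution image is missing. A sketch follows; the list is applied outbound since the task is to control traffic sent out the interface:

```
Router(config)#interface serial1
Router(config-if)#ip access-group 4 out
```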
Example 5:
For the ECE-SE3314b router, create a standard IP access list (numbered 66) with statements that do the
following:
- Deny traffic from host 1.1.2.12
- Deny traffic from host 2.16.11.155
Apply the list to Ethernet0 to prevent the traffic defined by the list from being sent out the interface.
Solution:
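The solution image is missing. A sketch follows; the trailing permit any is an assumption so that traffic other than the two denied hosts can still be sent out the interface (without it, the implicit deny would drop everything):

```
Router(config)#access-list 66 deny host 1.1.2.12
Router(config)#access-list 66 deny host 2.16.11.155
Router(config)#access-list 66 permit any
Router(config)#interface ethernet0
Router(config-if)#ip access-group 66 out
```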
Example 6:
Standard IP access lists 92 and 15 have been configured on the ECE-SE3314b router and applied to the
Serial0 interface (one for incoming traffic and the other for outgoing traffic). Remove access list 15 from
the interface.
Solution:
Before access list 15 can be removed, it must be determined whether the list restricts inbound or
outbound traffic. Use one of the following commands to identify how list 15 is applied to Serial0:
Assume that access list 15 restricts outbound traffic:
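The solution image is missing. The direction can be identified with show ip interface serial0 or show running-config; assuming list 15 is applied outbound, a sketch of the removal follows:

```
Router(config)#interface serial0
Router(config-if)#no ip access-group 15 out
```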
WORKING WITH WILDCARD MASKS:
The wildcard mask is used with access list statements to identify a range of IP addresses (such as all
addresses on a specific network). When used to identify network addresses in access list statements,
wildcard masks are the exact opposite of a subnet mask.
To calculate the wildcard mask, first identify the decimal value of the subnet mask, then subtract each
octet in the subnet mask from 255.
For example, to find the wildcard mask to allow all traffic on network 10.12.16.0/21, the following
calculations would be made. The decimal value that covers 21 bits in the subnet mask is 255.255.248.0.
The wildcard mask for the first octet is 255 − 255 = 0, for the second octet 255 − 255 = 0, for the third
octet 255 − 248 = 7, and for the fourth octet 255 − 0 = 255. This results in the mask 0.0.7.255.
Like subnet masks, wildcard masks operate at the bit level.
Any bit in the wildcard mask with a 0 value means that the
bit must match the access list statement. A bit with a 1
value means that the bit doesn’t have to match.
Suppose an access list was created with a statement as follows:
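The statement itself is not reproduced in this copy. A statement consistent with the comparisons that follow (using the 0.0.7.255 wildcard computed above; the list number 10 is hypothetical) would be:

```
Router(config)#access-list 10 deny 10.12.16.0 0.0.7.255
```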
Now, suppose that a packet addressed to
10.12.16.15 was received. The router uses the
wildcard mask to compare the bits in the address to
the bits in the subnet address as shown to the right.
In this example, 10.12.16.15 matches the access list
statement and the traffic is denied.
Now, instead suppose that a packet addressed to
10.13.17.15 was received. The router uses the
wildcard mask to compare the bits in the address to
the bits in the subnet address as shown to the right.
This address doesn’t match the access list statement
as identified with the wildcard mask, so in this case
traffic would be permitted.
Example 1:
Standard IP access list 15 is configured on the ECE-SE3314b router. Modify the access list to also restrict
traffic from network 199.15.16.0/20.
Solution:
Begin by identifying the wildcard mask that will restrict traffic from network 199.15.16.0/20. 20 masked
bits need 4 bits from the third octet, so the subnet mask is 255.255.240.0. The wildcard mask is thus
0.0.15.255. Now, use the access-list command to add the statement.
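The command itself is missing from this copy; a sketch consistent with the calculation above follows:

```
Router(config)#access-list 15 deny 199.15.16.0 0.0.15.255
```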
Example 2:
Standard IP access list 14 is configured on the ECE-SE3314b router. Modify the access list to also permit
traffic from network 222.11.199.64/27.
Solution:
Begin by identifying the wildcard mask that will restrict traffic from network 222.11.199.64/27. 27
masked bits need 3 bits from the fourth octet, so the subnet mask is 255.255.255.224. The wildcard
mask is thus 0.0.0.31. Now, use the access-list command to add the statement.
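The command itself is missing from this copy; a sketch consistent with the calculation above follows:

```
Router(config)#access-list 14 permit 222.11.199.64 0.0.0.31
```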
Example 3:
Standard IP access list 92 has been configured on the ECE-SE3314b router. Modify the access list to also
deny traffic from the following networks: 10.64.0.0/13 and 129.15.128.0/17.
Solution:
For 10.64.0.0/13, 13 masked bits need 5 bits from the second octet, so the subnet mask is 255.248.0.0.
This gives a wildcard mask of 0.7.255.255. For 129.15.128.0/17, 17 masked bits need 1 bit from the third
octet, so the subnet mask is 255.255.128.0. This gives a wildcard mask of 0.0.127.255. Now, use the
access-list command to add the statement.
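The commands themselves are missing from this copy; a sketch consistent with the calculations above follows:

```
Router(config)#access-list 92 deny 10.64.0.0 0.7.255.255
Router(config)#access-list 92 deny 129.15.128.0 0.0.127.255
```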
Example 4:
Access list 74 has been configured on the ECE-SE314b router. Modify the access list to also permit traffic
from the following networks: 74.240.96.0/19 and 215.122.95.128/26.
Solution:
For 74.240.96.0/19, 19 masked bits need 3 bits from the third octet, so the subnet mask is
255.255.224.0. This gives a wildcard mask of 0.0.31.255. For 215.122.95.128/26, 26 masked bits need 2
bits from the fourth octet, so the subnet mask is 255.255.255.192. This gives a wildcard mask of
0.0.0.63. Now, use the access-list command to add the statement.
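The commands themselves are missing from this copy; a sketch consistent with the calculations above follows:

```
Router(config)#access-list 74 permit 74.240.96.0 0.0.31.255
Router(config)#access-list 74 permit 215.122.95.128 0.0.0.63
```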
Example 5:
Create a standard IP access list that permits all outgoing traffic except the traffic from network 10.0.0.0
and applies the list to the Ethernet0.
Solution:
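The solution image is missing. A sketch follows; the list number 11 is an arbitrary choice, and permit any is required so that all other outgoing traffic is allowed past the implicit deny:

```
Router(config)#access-list 11 deny 10.0.0.0 0.255.255.255
Router(config)#access-list 11 permit any
Router(config)#interface ethernet0
Router(config-if)#ip access-group 11 out
```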
Example 6:
ECE-SE3314b router is already configured with a standard IP access list (number 93) that prevents traffic
from networks 131.12.0.0/16 and 199.15.77.0/24. Add additional statements to prevent traffic from the
following hosts: 144.199.0.0/16 and 220.6.118.0/24.
Solution:
To restrict traffic from a network, the wildcard mask must be used.
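The commands themselves are missing from this copy. Sketching the added statements with the appropriate wildcard masks (0.0.255.255 for the /16 and 0.0.0.255 for the /24):

```
Router(config)#access-list 93 deny 144.199.0.0 0.0.255.255
Router(config)#access-list 93 deny 220.6.118.0 0.0.0.255
```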
Example 7:
ECE-SE3314b router is already configured with a standard IP access list (number 14) that denies all traffic
except for traffic from networks 131.12.0.0/16 and 199.15.77.0/24. Add additional statements to the
access list to allow traffic from the following networks: 140.32.0.0/16 and 206.178.23.0/24.
Solution:
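The solution image is missing. Sketching the added permit statements with the appropriate wildcard masks:

```
Router(config)#access-list 14 permit 140.32.0.0 0.0.255.255
Router(config)#access-list 14 permit 206.178.23.0 0.0.0.255
```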
CONFIGURING EXTENDED ACCESS LISTS:
Extended access lists enable the specification of source and
destination addresses, as well as the protocol port number
that identifies the upper-layer protocol or application. By
using extended access lists, users can be stopped from
accessing specific hosts while still being allowed to access a
physical LAN. Extended IP access lists can be created by using the
access-list numbers 100-199 or 2000-2699. Above and to the right
is a list of many access-list number ranges.
Creating extended access lists is similar to creating standard access
lists, as it uses the following process. Create the list and add list
entries with the “access-list” command. Then, apply the list to an
interface (for IP, use the “ip access-group” command).
Example 1:
Create an extended IP access list that rejects packets from host
10.1.1.1 sent to host 15.1.1.1 and applies the list to the second
serial interface.
Solution:
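The solution image is missing. A sketch follows; the list number 100 is an arbitrary choice from the extended range, the second serial interface is taken to be Serial1, the outbound direction is an assumption, and permit ip any any is needed so other traffic is not dropped by the implicit deny:

```
Router(config)#access-list 100 deny ip host 10.1.1.1 host 15.1.1.1
Router(config)#access-list 100 permit ip any any
Router(config)#interface serial1
Router(config-if)#ip access-group 100 out
```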
Example 2:
Create an extended IP access list that does not forward TCP packets from any host on network 10.0.0.0
to network 11.12.0.0, and applies the list to the first serial interface.
Solution:
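The solution image is missing. A sketch follows; the list number 101, the outbound direction, and the trailing permit ip any any are assumptions, and the first serial interface is taken to be Serial0:

```
Router(config)#access-list 101 deny tcp 10.0.0.0 0.255.255.255 11.12.0.0 0.0.255.255
Router(config)#access-list 101 permit ip any any
Router(config)#interface serial0
Router(config-if)#ip access-group 101 out
```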
Example 3:
The ECE-SE3314b router is already configured with a standard IP access list (number 93) that prevents
traffic from networks 131.12.0.0/16 and 199.15.77.0/24. Add an additional statement to prevent traffic
from network 11.12.0.0 (wildcard mask 0.0.255.255).
Solution:
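The solution image is missing. Reading "11.12.0.0 and 0.0.255.255" as a network/wildcard pair (i.e. network 11.12.0.0/16), a sketch of the added statement follows:

```
Router(config)#access-list 93 deny 11.12.0.0 0.0.255.255
```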
NAMED ACCESS CONTROL LISTS:
The example below explains how to create a
named ACL.
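The example image is missing from this copy. A minimal sketch of the named-ACL pattern follows; the name BLOCK_HOST and the addresses are hypothetical:

```
Router(config)#ip access-list standard BLOCK_HOST
Router(config-std-nacl)#deny host 10.1.1.1
Router(config-std-nacl)#permit any
Router(config-std-nacl)#exit
Router(config)#interface serial0
Router(config-if)#ip access-group BLOCK_HOST in
```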
MONITORING ACCESS CONTROL LISTS:
The example below shows how to monitor and
verify ACLs.
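The example image is missing from this copy. A sketch of typical monitoring commands follows (the list name is hypothetical):

```
Router#show access-lists
Router#show ip access-lists BLOCK_HOST
```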
EDITING NAMED ACCESS CONTROL LISTS:
The example below shows the process for editing named ACLs.
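The example image is missing from this copy. A sketch of the usual editing process follows, which relies on sequence numbers; the name, sequence numbers, and address shown are hypothetical:

```
Router#show ip access-lists BLOCK_HOST
Router(config)#ip access-list standard BLOCK_HOST
Router(config-std-nacl)#no 10
Router(config-std-nacl)#15 deny host 10.1.1.2
```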
DESIGNING AND MONITORING ACCESS LISTS:
After an access list has been created, it must be applied to an interface. In many cases, this means
needing to decide which router, which port, and which direction to apply the access list to. The following
should be considered:
- Each interface can only have one inbound and one outbound access list for each protocol. This means that an interface can have either a standard inbound or an extended inbound IP access list, but not both.
- Two access lists can be used for the same direction applied to an interface if the lists restrict different networking protocols. For example, one outbound IP access list can be used with one outbound IPX access list.
- When constructing access lists, place the most restrictive statements at the top. If traffic matches a statement high in the list, subsequent statements won't be applied to the traffic.
When constructing access lists, place the most restrictive statements at the top. If traffic
matches a statement high in the list, subsequent statements won’t be applied to the traffic.
Each access list has an implicit "deny any" statement at the end of the access list, so access lists must
contain at least one permit statement or no traffic will be allowed. Access lists applied to inbound traffic
filter packets before the routing decision is made. Access lists applied to outbound traffic filter packets
after the routing decision is made.
As a general rule, apply extended access lists as close to the source router as possible. This keeps the
packets from being sent through the rest of the network. Another general rule is to apply standard access
lists as close to the destination router as possible. This is because standard access lists can only filter on
source addresses; placing the list too close to the source would prevent any traffic from the source from
getting to any other parts of the network.
The list to the right summarizes the commands to use for
viewing specific access list information on the router:
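The command list image is missing from this copy. A sketch of the commands commonly used for this purpose follows:

```
Router#show access-lists
Router#show ip access-lists
Router#show ip interface
Router#show running-config
```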
SWITCH BASICS:
COMPONENTS:
Configuring a Cisco switch is much like configuring a router. Many
of the same commands are used with the switch as with the
router as can be seen to the right.
Switches connect multiple segments or devices and forward packets to only one specific port. Modern
switches can also be used to create virtual LANs (VLANs) and perform some tasks previously performed
only by routers. An important characteristic of a switch is having multiple ports, all of which are part of
the same network. In this unit, the Catalyst 2950 series switch will be used to discuss configuration of
switches.
Each switch port has a single LED. The color of
the LEDs change to give information about how
the switch is working. Port LEDs mean different
things based on the mode selected with the
“Mode Button”.
CONFIGURATION MODES:
Like a router, the switch has similar configuration modes, with
some differences to account for switch functionality not included
in routers. The graph to the right illustrates some of the
configuration modes of the switch.
Like a router, the switch has multiple interface modes depending on the physical (or logical) interface
type. The following switch interface modes should become familiar:
- FastEthernet (100 Mbps Ethernet)
- GigabitEthernet (1 Gbps Ethernet)
- VLAN (logical management interface)
To enter the interface configuration mode, follow the interface type and number (FastEthernet0) with
the port number (/14). Ports are numbered beginning with 1 (not 0). In addition to the special interface
modes, Catalyst switches include a VLAN database configuration mode.
COMMAND LIST:
Move to privileged mode from user mode:
    switch>enable
Move to user mode from privileged mode:
    switch#disable
Move to global configuration mode:
    switch#configure terminal
Move to interface configuration mode:
    switch(config)#interface fastethernet0/14
    switch(config)#interface gigabitethernet 0/17
    switch(config)#interface con 0
    switch(config)#interface vty 0 4
    switch(config)#interface vlan 1
Leave the current configuration mode, or exit the system:
    switch(config-if)#exit
Exit all configuration modes:
    switch(config)#^Z
Show the current switch configuration:
    switch#show running-config
Show switch information such as software version and hardware components:
    switch#show version
Show interface status and configuration information:
    switch#show interfaces
    switch#show interfaces fastethernet 0/14
Save the current switch configuration:
    switch#copy running-config startup-config
Load a configuration file from another location:
    switch#copy tftp://1.0.0.0/my_config.cfg
Set the enable password (to cisco):
    switch(config)#enable password cisco
Set the secret password (to class):
    switch(config)#enable secret class
Set the default gateway:
    switch(config)#ip default-gateway 1.1.1.1
Set the switch hostname:
    switch(config)#hostname SE3314b
Set a description for a port:
    switch(config-if)#description IS_VLAN
Enable CDP on the switch:
    switch(config)#cdp run
Enable CDP on a port:
    switch(config-if)#cdp enable
Set CDP parameters:
    switch(config)#cdp holdtime 181
    switch(config)#cdp timer 66
Set the port speed:
    switch(config-if)#speed 10
    switch(config-if)#speed 100
    switch(config-if)#speed auto
Set the duplex mode:
    switch(config-if)#duplex half
    switch(config-if)#duplex full
    switch(config-if)#duplex auto
IP ADDRESSES:
One task that is different for switches than for routers, is configuring the IP address. The following facts
should be considered:
- Basic switches operate at layer 2, and therefore don't need an IP address to function. In fact, a switch performs switching functions fine without an IP address set.
- A switch IP address only needs to be configured if it's desired to manage the switch from a Telnet or Web session.
- The switch itself has only a single (active) IP address. Each switch port doesn't have an IP address (unless the switch is performing layer 3 switching, which is not supported on 2950 switches).
- The IP address identifies the switch as a host on the network, but it's not required for switching functions.
To configure the switch IP address, the address must be set on the management VLAN logical interface.
This is a logical interface defined on the switch to allow
management functions. By default, this VLAN is VLAN 1 on the
switch. The commands to the right can be used to configure
the switch IP address.
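The command image is missing from this copy. A sketch of configuring the address on the management VLAN interface follows; the address 192.168.1.10/24 is a hypothetical example:

```
switch(config)#interface vlan 1
switch(config-if)#ip address 192.168.1.10 255.255.255.0
switch(config-if)#no shutdown
```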
To enable management from a remote network, it’s also required to configure the default gateway on
the switch using the command to the right. Note that the
default gateway is set in global configuration mode.
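The command image is missing from this copy. A sketch follows; the gateway address is a hypothetical example:

```
switch(config)#ip default-gateway 192.168.1.1
```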
VIRTUAL LANS (VLANS):
INTRO TO VLANS:
A virtual LAN (VLAN) can be defined as a broadcast domain based on
switch ports rather than network addresses. It can also be defined as a
grouping of devices based on service need, protocol, or other criteria
rather than physical proximity.
Using VLANs enables the assigning of devices on different switch ports
to different logical (or virtual) LANs. Although each switch can be
connected to multiple VLANs, each switch port can only be assigned to
one VLAN at a time.
In the graphic to the right, FastEthernet ports 0/1 and 0/2 are members of VLAN 1.
FastEthernet ports 0/3 and 0/4 are members of VLAN 2. Workstations in VLAN 1 will
not be able to communicate with workstations in VLAN 2, even though they are
connected to the same physical switch. Defining VLANs creates additional broadcast
domains. The example to the right has two broadcast domains, each of which
corresponds to one of the VLANs.
By default, switches come configured with several default VLANs:
- VLAN 1
- VLAN 1002
- VLAN 1003
- VLAN 1004
- VLAN 1005
By default, all ports are members of VLAN 1. Ports can be assigned to VLANs either statically or
dynamically. Assigning statically (manually) requires assigning a particular port to one VLAN. Assigning
dynamically assigns ports based on matching data (MAC addresses and VLANs) stored on the
VLAN Membership Policy Server (VMPS).
Although VLANs can be created with only one switch, most networks involve connecting multiple
switches. The area between switches is called the switch fabric. As a frame moves from switch to switch
within the switch fabric, each switch must be able to identify the destination virtual LAN.
One way to identify the VLAN is for the switch to append a VLAN ID to each frame. This process is called
frame tagging or frame coloring and it identifies the VLAN of the destination device. The following
should be considered regarding frame tagging:
- VLAN IDs identify the VLAN of the destination device.
- Tags are appended by the first switch in the path and removed by the last.
- Only VLAN-capable devices understand the frame tag.
- Tags must be removed before a frame is forwarded to a non-VLAN-capable device.
- Tag formats and specifications can vary from vendor to vendor. When designing VLANs, it might be necessary to stick with one switch vendor.
- Cisco's proprietary tagging protocol is called the Inter-Switch Link (ISL) protocol. Using 802.1Q-capable switches ensures a consistent tagging protocol across vendors.
Creating VLANs with switches offers many benefits over using routers to create distinct networks.
Switches are easier to administer than routers, they are less expensive than routers, and they offer
higher performance (they introduce less latency). A disadvantage of using switches to create VLANs is
that it might be tied to a specific vendor. Details of how VLANs are created and identified can vary from
vendor to vendor, so when using multiple vendors in a switched network, be sure each switch supports
the 802.1q standards if implementing VLANs. Despite advances in switch technology, routers still are
needed to filter WAN traffic, route traffic between separate networks, and route packets between
VLANs.
COMMAND LIST:
Define a VLAN (you can create VLANs in either VLAN database mode or by using the vlan command in global configuration mode):
    switch# vlan database*
    switch(vlan)# vlan 2 name <name>**
    switch(vlan)# exit OR apply
  or
    switch(config)# vlan 2
    switch(config-vlan)# name <name>**
Assign ports to the VLAN:
    switch(config-if)# switchport access vlan <number>***
Show a list of VLANs on the system:
    switch# show vlan
Show information for a specific VLAN:
    switch# show vlan id <number>
Notice that the vlan database command is issued in privileged EXEC mode (which doesn't exist on all
models). Also notice that giving the VLAN a name is optional. If the VLAN is not already defined, it will be
created when the port is assigned to it.
The following commands create VLAN 12 named IS_VLAN, identifies port 0/12 as having only
workstations attached to it, and assigns the port to VLAN 12.
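The command image is missing from this copy. A sketch consistent with the description follows; switchport mode access marks the port as an access (workstation-only) port:

```
switch(config)#vlan 12
switch(config-vlan)#name IS_VLAN
switch(config-vlan)#exit
switch(config)#interface fastethernet 0/12
switch(config-if)#switchport mode access
switch(config-if)#switchport access vlan 12
```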
TRUNKING:
Trunking is a term used to describe connecting two switches together. Trunking is important when
configuring VLANs that span multiple switches as shown in the diagram below.
The following should be considered regarding trunking:
- In the graphic to the right, each switch has two VLANs. One port on each switch has been assigned to each VLAN.
- Workstations in VLAN 1 can only communicate with workstations in VLAN 1. This means that the two workstations connected to the same switch cannot communicate with each other. Communications within the VLAN must pass through the trunk link to the other switch.
- Trunk ports identify which ports are connected to other switches.
- Trunk ports are automatically members of all VLANs defined on the switch. Typically, Gigabit Ethernet ports are used for trunk ports.
When trunking is used, frames that are sent over a trunk are tagged with the VLAN ID number so that
the receiving switch knows to which VLAN the frame belongs. Cisco supports two trunking protocols that
are used for tagging frames.
Inter-Switch Link (ISL): A Cisco-proprietary trunking protocol. ISL can only be used between Cisco devices. ISL tags each frame with the VLAN ID. Catalyst 2950 switches do not support ISL. Tagging is performed with application-specific integrated circuits (ASICs).
802.1Q: An IEEE standard for trunking and therefore supported by a wide range of devices. With 802.1Q trunking, frames from the default VLAN 1 are not tagged; frames from all other VLANs are tagged.
Cisco switches have the ability to automatically detect ports that are trunk ports, and to negotiate the
trunking protocol used between devices. Switches use the Dynamic Trunking Protocol (DTP) to detect
and configure trunk ports. For example, when connecting two switches together, they will automatically
recognize each other and select the trunking port to use.
COMMAND LIST:
Enable trunking on the interface (the port will not use DTP on the interface):
    Switch(config-if)#switchport mode trunk
Set the trunking protocol to use (2950 switches only support 802.1Q, so you will not use this command on 2950 switches):
    Switch(config-if)#switchport trunk encapsulation dot1q
    Switch(config-if)#switchport trunk encapsulation isl
Enable automatic trunking discovery and configuration (the switch uses DTP to configure trunking):
    Switch(config-if)#switchport mode dynamic auto
Enable dynamic trunking configuration; if a switch is connected, it will attempt to use the desired trunking protocol (802.1Q for 2950 switches), and if a switch is not connected, the port will communicate as a normal port:
    Switch(config-if)#switchport mode dynamic desirable
Disable trunking configuration on the port (you must disable trunking before you can assign a port to a VLAN):
    Switch(config-if)#switchport mode access
Show interface trunking information:
    Switch#show interface trunk
    Switch#show interface fa0/1 trunk
VLAN TRUNKING PROTOCOL (VTP):
VLAN Trunking Protocol (VTP) is a messaging system that maintains VLAN configuration consistency
throughout the network. VTP does this by synchronizing the latest VLAN
configurations among all switches in the VTP domain. With VTP,
switches are placed in one of the following three configuration modes:
1. Server: Switches in server mode are used to modify the VLAN
configuration. Configuration information is then broadcasted to
other VTP devices.
2. Client: Switches in client mode receive changes from a VTP server
and pass VTP information to other switches. The VLAN
configuration cannot be modified from a switch in client mode.
3. Transparent: Switches in transparent mode don't receive VTP
configuration from other switches. They pass VTP information to
other switches as they receive it. VLAN configuration can be modified from a switch in
transparent mode, but the changes apply only to the local switch (changes are not sent to other
devices).
Extra caution should be taken when moving a switch from one
environment to another, because it may have a higher configuration
revision number than the existing switches. This is one of the
reasons why transparent mode exists.
As VLAN switches are configured, it is important to consider the following:
- To make changes on a switch, the switch must be in either server or transparent mode. Switches cannot modify the VLAN configuration in client mode.
- By default, switches are configured in server mode.
- Use the "vtp mode" command to configure the VTP mode of a switch.
- Use the "show vtp status" command to view the current VTP mode of the switch.
SPANNING TREE AND ADVANCED SWITCHING:
To provide fault tolerance, many networks implement redundant paths between devices using multiple
switches. However, providing redundant paths between segments causes the following problems (a.k.a.
bridging loops):
- Broadcast storms
- Multiple frame transmissions
- MAC address database instability
To prevent bridging loops, the IEEE 802.1d committee defined a standard called the spanning tree
algorithm (STA) or spanning tree protocol (STP). With this protocol, one bridge (or switch) for each route
is assigned as the root bridge. Only the root bridge can forward packets. Redundant bridges (and
switches) are assigned as backups (non-root bridges).
The spanning tree algorithm has numerous benefits. It eliminates bridging loops, provides redundant
paths between devices, enables dynamic role configuration, recovers automatically from a topology
change or device failure, and identifies the optimal path between any two network devices. The
spanning tree algorithm automatically discovers the network topology and creates a single, optimal path
through a network by assigning one of the following roles to each bridge or switch.
Root Bridge: The root bridge is the master or controlling bridge. The root bridge periodically broadcasts configuration messages. These messages are used to select routes and reconfigure the roles of other bridges if necessary. There is only one root bridge per network.
Designated Bridge: A designated bridge is any other device that participates in forwarding packets through the network. Designated bridges are selected automatically by exchanging bridge configuration packets. To prevent bridging loops, there is only one designated bridge (port) per segment.
Backup Bridge: All redundant devices are classified as backup bridges. Backup bridges listen to network traffic and build the bridge database. However, they will not forward packets. A backup bridge can take over if the root bridge fails.
Devices send special packets called Bridge Protocol Data Units (BPDUs) out each port. BPDUs sent and
received from other bridges are used to determine the bridge roles, verify that neighboring devices are
still functioning, and recover from network topology changes.
Devices participating in the spanning tree algorithm use the following process to configure themselves:
- At startup, switches send BPDUs out each port.
- Switches use information in the BPDUs to elect a root bridge.
- Switches on redundant paths are configured as either designated (active) or backup (inactive) switches.
- After configuration, switches periodically send BPDUs to ensure connectivity and discover topology changes.
As the switch participates in the configuration process, each of its ports is placed into one of five states.
Disabled: A device in the disabled state is powered on but does not participate in listening to or forwarding network messages. It must be manually placed in the disabled state.
Blocking: When a device is first powered on, it is in the blocking state. In addition, backup bridges are always in a blocking state. The bridge receives packets and BPDUs sent to all bridges but will not process any other packets.
Listening: The listening state is a transition state between blocking and learning. The port remains in the listening state for a specific period of time. This time period allows network traffic to settle down after a change has occurred. For example, if a bridge goes down, all other bridges go to the listening state for a period of time. During this time the bridges redefine their roles.
Learning: A port in the learning state is receiving packets and building the bridge database (associating MAC addresses with ports). A timer is also associated with this state. The port goes to the forwarding state after the timer expires.
Forwarding: The root and designated bridges are in the forwarding state when they can receive and forward packets. A port in the forwarding state can both learn and forward.
COMMAND LIST:
By default, the spanning tree protocol is enabled on all Cisco switches. The following commands can be
used to customize the spanning tree protocol.
Disable spanning tree on the selected VLAN:
    Switch(config)#no spanning-tree vlan number
Force the switch to be the root of the spanning tree:
    Switch(config)#spanning-tree vlan number root primary
Show spanning tree configuration information (to determine if the VLAN is functioning properly, verify that the first line of the output is: "VLAN1 is executing the IEEE compatible spanning tree protocol"):
    Switch#show spanning-tree
For example, the following commands disable spanning tree for VLAN 12 and force the switch to be the
root of the spanning tree for VLAN 1.
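The command image is missing from this copy. A sketch consistent with the description follows:

```
Switch(config)#no spanning-tree vlan 12
Switch(config)#spanning-tree vlan 1 root primary
```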
ETHERCHANNEL:
EtherChannel combines multiple switch ports into a single logical link between two switches. With
EtherChannel, between two and eight ports can be combined into a single link. All links in the channel
group are used for communication between the switches. EtherChannel can be used to increase the
bandwidth between switches and to establish automatically redundant paths between switches. If one
link fails, communication will still occur over the other links in the group. EtherChannel can also be used
to reduce spanning tree convergence times.
Use the "channel-group" command on a port to enable EtherChannel as follows.
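The command image is missing from this copy. A sketch follows; the interface numbers and the channel group number 1 are hypothetical examples, and mode on is one of several possible channel-group modes:

```
Switch(config)#interface gigabitethernet 0/1
Switch(config-if)#channel-group 1 mode on
Switch(config-if)#interface gigabitethernet 0/2
Switch(config-if)#channel-group 1 mode on
```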
Each channel group has its own number. All ports assigned to the same
channel group will be viewed as a single logical link. Note that if the
channel-group command isn’t used, the spanning tree algorithm will
identify each link as a redundant path to the other bridge and will put
one of the ports in a blocking state.
ADVANCED SWITCHING (INTER-VLAN ROUTING):
In a typical configuration with multiple VLANs and a single or multiple switches, workstations in one
VLAN will not be able to communicate with workstations in other VLANs. To enable inter-VLAN
communication, a router (or layer 3 switch) will be needed and can be used in 2 different ways.
1. Using two physical interfaces on the router
2. Using a single physical interface on the router
In the second configuration, the physical interface is divided into logical interfaces called subinterfaces, one per VLAN. This configuration is also called a “router on a stick”. In either case, the router interfaces are connected to switch trunk ports, and the router interfaces or subinterfaces must run a trunking protocol (either ISL or 802.1Q). Each interface or subinterface requires an IP address.
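A router-on-a-stick sketch for two VLANs using 802.1Q (the VLAN numbers and addresses are illustrative):

```
Router(config)#interface fastethernet 0/0
Router(config-if)#no shutdown
Router(config-if)#interface fastethernet 0/0.10
Router(config-subif)#encapsulation dot1q 10
Router(config-subif)#ip address 192.168.10.1 255.255.255.0
Router(config-subif)#interface fastethernet 0/0.20
Router(config-subif)#encapsulation dot1q 20
Router(config-subif)#ip address 192.168.20.1 255.255.255.0
```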
EXAM REVIEW/PRACTICE PROBLEMS:
NOTES FROM FINAL REVIEW LECTURE:
The exam consists of 70 multiple choice questions. It is 2 hours long and focuses on units 2, 3, and 4.
When studying, pay attention to statements within the slides.
For unit 2, focus on structured overlay topology, Kademlia, how binary trees are used, and how the protocols really work. Calculations will be required.
For unit 3, review the 3 planes (including explanations of each plane), OpenFlow (as an architecture),
physical and virtual switches, SDN control functions (topology and device manager), NOS, Routing, SAL
(know the function of this layer specifically), REST constraint names (don’t need to know details of REST)
and really focus on Northbound interfaces. The material of every slide after slide 81 is not on the exam.
For unit 4, review IFF, RSA, authentication protocols, public key, private key, hash functions, nonce, and
symmetric key vs public key notation/authentication.
PRACTICE QUESTIONS:
Unit 2:
1. (Slide 51) P2P systems using key-based routing are called _______.
a. DOLR
b. Centralized
c. Decentralized
d. None of the above
2. (Slide 61) In Kademlia networks, the size of the address space is 1024. What is the number of
rows in each routing table?
a. 10
b. 256
c. 512
d. 1024
3. (Slide 34) In a structured-decentralized P2P network _______.
a. The directory system is kept in a center
b. A query to find a file must be flooded through the network
c. A pre-defined set of rules is used to link nodes so that a query can be effectively
resolved
d. None of the above
4. (Slide 57) In a Pastry network with 𝑚 = 2 and 𝑏 = 4, what is the size of the leaf set?
a. 1 row and 16 columns
b. 1 row and 8 columns
c. 8 rows and 1 column
d. 16 rows and 1 column
5. (Slide 64) In Kademlia, the distance between the two identifiers (nodes or keys) is measured as
the bitwise _______ between them.
a. AND
b. NOR
c. OR
d. None of the above
6. (Slide 34) The structured overlay networks use a predefined set of rules to link nodes so that a
query can be effectively and efficiently resolved.
a. True
b. False
7. (Slide 40) The unstructured overlay networks cannot find rare data items efficiently, and they do not
guarantee that an object can be found even if it exists in the network.
a. True
b. False
8. (Slide 26) In a centralized P2P network, the directory system uses the _______ paradigm; the
storing and downloading of the files are done using _______ paradigm.
a. Client-server, client-server
b. Peer-to-peer, client-server
c. Client-server, peer-to-peer
d. Peer-to-peer, peer-to-peer
9. (Slide 53) In _______ a key is stored in a node whose identifier is numerically closest to the key.
a. Gnutella
b. Pastry
c. Kademlia
d. None of the above
10. (Slide 54) To resolve a query, _______ uses two entities: a routing table and a leaf set.
a. Gnutella
b. Pastry
c. Kademlia
d. None of the above
11. (Slide 37) In a DHT-based network, assume node 4 has a file with key 18. The closest next node
to key 18 is node 20. Where is the file stored (using the direct method)?
a. Node 4
b. Node 20
c. Both nodes 4 and 20
d. None of the above
12. (Slide 66) In Kademlia, each node in the network divides the binary tree into m subtrees that
_______.
a. Include the node itself
b. Do not include the node itself
c. Include the node itself and the successor node
d. None of the choices are correct
13. (Slide 65) In Kademlia, assume m=4 and active nodes are N4, N7, and N12. Where is the key k3
stored in this system?
a. N4
b. N7
c. N12
d. N4 and N7
14. (Slide 54) In a Pastry network with m=32 and b=4, what is the size of the routing table?
a. 8 rows and 16 columns
b. 16 rows and 8 columns
c. 8 rows and 8 columns
d. 16 rows and 16 columns
15. (Slide 53) In Pastry, assume the address space is 16 and that b=2. How many digits are in an
address?
a. 2
b. 4
c. 16
d. 32
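Several of the Unit 2 calculation questions reduce to the same size formulas from the slides: Kademlia has one routing-table row per identifier bit and measures distance as bitwise XOR, while Pastry writes m-bit identifiers as base-2^b digits, giving m/b routing-table rows and 2^b columns. A small Python sketch of those formulas (the function names are mine, not from the slides):

```python
import math

def kademlia_rows(address_space_size):
    # Kademlia keeps one routing-table row (k-bucket) per identifier bit,
    # so the row count is log2 of the address-space size.
    return int(math.log2(address_space_size))

def kademlia_distance(x, y):
    # Distance between two Kademlia identifiers is their bitwise XOR.
    return x ^ y

def pastry_routing_table(m, b):
    # Pastry identifiers are m bits written as base-2**b digits,
    # giving m/b rows and 2**b columns in the routing table.
    return (m // b, 2 ** b)

def pastry_digits(address_space_size, b):
    # Digits per identifier: total identifier bits divided by bits per digit.
    return int(math.log2(address_space_size)) // b

print(kademlia_rows(1024))          # 10 rows (question 2)
print(pastry_routing_table(32, 4))  # (8, 16): 8 rows, 16 columns (question 14)
print(pastry_digits(16, 2))         # 2 digits (question 15)
```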
Unit 3:
1. (Slide 23) In SDN data plane the network elements perform the transport and processing of data
based on the decisions from the SDN control plane.
a. True
b. False
2. (Slide 25) Data forwarding function accepts incoming data flows from other network devices and
end systems and forwards them along the data forwarding paths that have been computed and
established by the SDN application according to the rules defined by the SDN controller.
a. True
b. False
3. (Slide 25) The control support function interacts with the SDN control layer to support
programmability via resource-control interfaces.
a. True
b. False
4. (Slide 29) The OpenFlow channel is the interface between an OpenFlow controller and an
OpenFlow switch and is used by the switch to manage the controller
a. True
b. False
5. (Slide 14) The southbound interface provides a uniform means for application developers and
network managers to access SDN services and perform network management tasks.
a. True
b. False
6. (Slide 51) The _______ maintains the topology information for the network and calculates
routes in the network.
a. Application layer
b. Topology manager
c. Link discovery
d. Resource layer
7. (Slide 50) The SDN control layer maps application layer service requests into specific commands
and directives to data plane switches and supplies applications with information about data
plane topology and activity.
a. True
b. False
8. (Slide 66) Representational State Transfer (REST) is an architectural style used to define APIs.
a. True
b. False
9. (Slide 78) The northbound interface enables applications to access control plane functions and
services without needing to know the details of the underlying network switches.
a. True
b. False
10. (Slide 80) An abstraction layer is a mechanism that translates a high-level request into the low-level commands required to perform the request.
a. True
b. False
11. (Slide 16) The application plane contains applications and services that define, monitor, and
control network resources and behavior.
a. True
b. False
Unit 4:
1. (Slide 49) Consider the following mutual authentication protocol, where 𝐾𝐴𝐵 is a shared
symmetric key between Alice and Bob. This protocol is _______.
a. Secure mutual authentication protocol
b. Insecure because it is susceptible to the man-in-the-middle attack
c. Insecure because Bob can authenticate Alice, but Alice cannot authenticate Bob
d. Insecure because Trudy can record messages 1 and 3, then replay them later
2. (Slide 79) This 2-message protocol is designed for mutual authentication and to establish a
shared symmetric key 𝐾. 𝑇 is a timestamp. This protocol is _______.
a. Secure mutual authentication protocol
b. Insecure because it is susceptible to man-in-the-middle attack
c. Insecure because if Trudy acts within the clock skew, she can get the value of 𝐾
d. Insecure because both of (b) and (c) above