CCNP Wireless (642-742 IUWVN) Quick Reference
Jerome Henry
ciscopress.com

Table of Contents
Chapter 1: QoS for Wireless Applications
Chapter 2: VoWLAN Architecture
Chapter 3: VoWLAN Implementation
Chapter 4: Multicast over Wireless
Chapter 5: Video and High-Bandwidth Applications over Wireless
About the Author
Jerome Henry is a technical leader at Fast Lane. Jerome has more than 10 years of experience teaching technical
Cisco courses in more than 15 countries and in 4 different languages, to audiences ranging from Bachelor degree
students, to networking professionals, to Cisco internal system engineers. Jerome joined Fast Lane in 2006. Before
this, he consulted and taught Heterogeneous Networks and Wireless Integration with the European Airespace team,
which was later acquired by Cisco and became its main wireless solution. He is a Certified Wireless Networking
Expert (CWNE #45), CCIE Wireless (#24750), and CCNP Wireless, and has developed several Cisco courses
focusing on wireless topics, including CUWSS, IAUWS, IUWNE, IUWMS, IUWVN, CWLBS, and the CWMN
lab guide. With more than 20 IT industry certifications and more than 10,000 hours in the classroom, Jerome was
awarded the IT Training Award Best Instructor silver medal in 2009. He is based in Cary, North Carolina.
About the Technical Reviewer
Denise Papier is a senior technical instructor at Fast Lane. Denise has more than 11 years of experience teaching
technical Cisco courses in more than 15 different countries, to audiences ranging from Bachelor degree students to
networking professionals and Cisco internal system engineers. To focus on wireless, Denise joined Fast Lane in 2004. Before that, she taught the Cisco Academy Program and lectured on BSc (Hons) Information
Security at various universities. She is CCNP Wireless and developed several Cisco courses focusing on wireless
topics (IUWNE, IAUWS, ACS, ISE, lab guides, and so on). With more than 15 IT industry certifications (from
Cisco CCNP R & S, CCIP to Microsoft Certified System Engineer and Security Specialist, CICSP - Cisco IronPort
Certified Security Professional) and more than 5000 hours in the classroom, Denise is a fellow member of the
Learning and Performance Institute (LPI). She is based in the United Kingdom.
© 2012 Cisco Systems, Inc. All rights reserved. This publication is protected by copyright. Please see page 139 for more details.
Chapter 1
QoS for Wireless Applications
Quality of service (QoS) for wireless applications is an important topic: if misunderstood, it can lead to many structural mistakes in wireless network deployments and to poor quality when QoS-dependent devices (for example, VoIP phones) are
added. Expect to be tested extensively on this topic on the IUWVN exam. Make sure to understand the concepts and their related
configuration.
QoS Concepts
QoS is described as a tool for network convergence (that is, the efficient coexistence of VoIP, video, and data in the same network).
QoS does not replace bandwidth, but provides a means to ensure that all traffic gets the best treatment (adapted to each traffic type) in
times of network congestion. QoS can be implemented in three different ways:
■ Best effort (basically "no implementation"): All traffic is treated the same way. When buffers are full, additional frames are dropped, regardless of what type of traffic they carry.
■ Integrated services (IntServ), also called "hard" QoS: Out-of-band control messages are used to check and reserve end-to-end bandwidth before sending packets into the network. Resource Reservation Protocol (RSVP) and H.323 are both examples of IntServ QoS methods. Each node on the path must support IntServ, which is difficult to achieve in large IP networks.
■ Differentiated services (DiffServ), the most common method: Each type of traffic receives an importance value represented by a number. Each node on the path independently implements one or several prioritization techniques based on each traffic number.
DiffServ is the method used in most IP networks and can be implemented with different techniques, as detailed in the following
sections. You can use each alone or in combination with others.
Classification and Marking
The first step in a QoS approach is to identify the different types of traffic that traverse the network and classify them into categories,
such as:
■ Internetwork control traffic: Control and Provisioning of Wireless Access Points (CAPWAP) Protocol and Enhanced Interior Gateway Routing Protocol (EIGRP) updates that need to be transmitted for the network to function
■ Critical traffic: VoIP that cannot be delayed without impacting the call quality
■ Standard data traffic: Email, web browsing, and so on
■ Scavenger traffic: Traffic that is accepted but receives the lowest priority, such as peer-to-peer file download and so forth
Classification can be done through deep packet inspection to look at the packet content at Layer 7 (for example, Cisco Network-Based Application Recognition [NBAR] is an IOS function that can recognize applications); through access control lists (ACL)
based on incoming interface, source, or destination ports or addresses; or through many other techniques. Once each traffic type is
established, mark each identified packet with a number showing the traffic priority value. This marking should be done as close to
the packet source as possible. Some devices (IP phones, for example) can mark their own traffic. You should decide where traffic
is identified and marked, and from where marking is trusted (called the trust boundary). This trust boundary should be as close as
possible to the point where the packet enters the network (for example, at the sending device network interface, or at the access switch
where the device connects). You cannot always trust devices or user marking, or that the access switch will perform the classification,
so you might have to move this trust boundary to the distribution switch.
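As a simple illustration, here is a minimal sketch of how a trust boundary might be configured on a Cisco Catalyst access switch; the interface number and the presence of a Cisco IP phone are assumptions for this example:

Switch(config)# mls qos
Switch(config)# interface GigabitEthernet 0/10
Switch(config-if)# mls qos trust device cisco-phone
Switch(config-if)# mls qos trust cos
Switch(config-if)# end

With this conditional trust, the incoming CoS marking is honored only when a Cisco IP phone is detected on the port; otherwise, the port is treated as untrusted.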
To apply marking on Cisco IOS using the standard QoS configuration commands (called the Modular QoS CLI [MQC], a component of the IOS command set), you can create a class map and specify one or several conditions that identify the traffic. (Each
condition can be enough to identify the traffic if you use the keyword match-any, or they must all match if you use the keyword
match-all.) Example 1-1 shows a class map.
Example 1-1
Class Map
Router(config)# access-list 101 permit tcp any host 192.168.1.1 eq 80
Router(config)# class-map match-any MyExample
Router(config-cmap)# match ip dscp 46
Router(config-cmap)# match ip precedence 5
Router(config-cmap)# match access-group 101
Router(config-cmap)# match protocol http
Router(config-cmap)# exit
Marking can be done at Layer 2 or Layer 3. At Layer 2, the priority tag can be inserted into the Class of Service (CoS) field available
in the 802.1p section of the 802.1Q 4-byte element, which is added to frames transiting on a trunk. The CoS field offers 3 bits and
8 values (from 0 [000] to 7 [111]). Its limitations are that it is only present on frames that have an 802.1Q VLAN tag (that is, not
on frames sent on switch ports set to access mode, and not on frames using the native VLAN on trunks) and that it does not survive
routing (a router removes the Layer 2 header before routing a packet, thus losing the CoS value). Its advantage is that it is on a low
layer and can be used efficiently by Layer 2 switches.
At Layer 3, the Type of Service (ToS) field in the IP header can be used for marking. One way to use this field is called IP
Precedence, and uses 3 bits to duplicate the Layer 2 CoS value and position this value at Layer 3, allowing the QoS tag to survive
routing. Eight values may still be a limited range for advanced classification, and the ToS field contains 8 bits. Therefore, another
way to use this field exists: Differentiated Service Code Point (DSCP). DSCP uses 6 of the 8 bits (allowing for 64 QoS values). The
last 2 bits are used to inform the destination point about congestion on the link. The first 3 bits are used to create a priority category
(or class). There are four types of classes: Best Effort (BE) for class 000, Assured Forwarding (AF) for classes 001 to 100, Expedited Forwarding (EF) for class 101, and a special class, Class Selector (CS), used when only these first 3 bits are used (and the other bits are set to 0), thus mapping perfectly to IP Precedence and CoS. The next 2 bits are used to determine a drop probability
(DP); a higher DP means a higher probability for the packet to be dropped if congestion occurs. (Therefore, within a class, a packet
with a DP value of 2 is dropped before another packet of the same class with a DP of 1 or 0.) The last bit is usually set to 0. Figure 1-1
shows the various tagging methods.
[Figure 1-1 illustrates the marking fields. At Layer 2, the 4-byte 802.1Q/p tag inserted in the frame carries a 16-bit protocol ID, 3 CoS (user priority) bits, 1 format bit, and a 12-bit VLAN ID. At Layer 3, the 1-byte IPv4 ToS field carries either the 3-bit IP Precedence (5 bits unused) or the 6-bit DSCP (3 class bits, 2 drop probability bits, and 1 final bit), with the last 2 bits of the byte unused for QoS. Example values shown: PHB AF11 = 001 01 0 = DSCP 10; the AF classes 001 to 100 span DSCP 10 to DSCP 38; EF (46) = 101 11 0; CS3 (24) = 011 00 0; the drop probability (dp) is low (01), medium (10), or high (11), giving AF11, AF12, and AF13 within class AF1. In short, the Layer 2 QoS tag is CoS and the Layer 3 QoS tag is ToS.]
Figure 1-1 Marking Techniques
Note
The DP is valid only within a class: AF21 has a lower priority than AF32 (because AF32 belongs to a higher class than AF21).
DSCP QoS tags are sometimes difficult to read. The DSCP value itself is usually represented in decimal. For example, DSCP
011010 is converted to decimal and usually written as DSCP 26. The initial 011 indicates that it belongs to an AF class. To better
read the difference between the class and the drop probability, DSCP tags are also often written using the per-hop behavior (PHB)
convention, writing the class (AF in this example), then the class translated into decimal (011 = 3), and then the DP value (01 = 1),
ignoring the last bit (which is always set to 0 anyway). Therefore, DSCP 26 can be written as PHB AF31; both are equivalent. PHB
is often used because it helps clearly see the DP value. DSCP 26 (011010) and DSCP 30 (011110) might be difficult to compare, but
expressed in PHB, they become AF31 and AF32, and you can see that AF31 has a higher priority than AF32. (Yes, higher. They both
belong to the same class, but AF32 has a higher DP than AF31; if one packet must be dropped within the AF3 class, packets with the
higher DP are dropped first.)
When all the DP bits are set to 0, the PHB naming convention uses CS. For example, 011000 could be written AF30, but is in fact written CS3, as a reminder that it does not set any DP bits and maps perfectly to CoS or IP Precedence. Tag 101110 is written DSCP 46 (101110 converted to decimal). Because the class is 101, this tag can also be called EF. EF usually has only one DP combination (101110) and is used for voice. Because this class commonly uses only one DP, it is simply called EF, without any class/DP digits.
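As a quick illustration of the equivalence between the two notations, IOS class maps accept either form; the class-map name below is arbitrary, and both match statements refer to the same value:

R1(config)# class-map match-any SignalingDemo
R1(config-cmap)# match ip dscp af31
! "match ip dscp 26" would be accepted as well; IOS displays it as af31
R1(config-cmap)# exit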
All these numbers may be complex to master. Luckily, as a wireless engineer, you need to know only a limited number of tags: EF for
voice, AF31 or CS3 for voice signaling, and a few other values for controller-related traffic (see Chapter 2, “VoWLAN Architecture,”
and Chapter 4, “Multicast over Wireless”).
Congestion Management
Once traffic is marked, you can decide to prioritize some traffic in case of congestion (and drop traffic of lower priority). With the
MQC, this decision is usually done with the keyword policy-map (followed by a name). You then call each class map created during
the classification phase and decide which prioritization system to use. Prioritization can be accomplished in many ways. Each way
solves a specific network traffic problem and has a particular effect on network performance. Any technique that drops or re-marks packets when they reach predefined burst limits is called policing (applied inbound or outbound). Another technique, shaping, is applied outbound only and buffers packets when they reach predefined burst limits. Later, if bandwidth use decreases, these buffered
packets may be sent. Policing is well suited to VoIP, whereas shaping is better suited to non-latency-sensitive applications (TCP traffic, for example). Beyond their policing or shaping aspect, each congestion management method prioritizes packets in a specific way:
■ Priority queuing (PQ) guarantees strict service of a queue (servicing this queue in priority, regardless of the number of packets in the other queues), which is good for the prioritized queue but may starve the other queues.
■ Custom queuing (CQ) allocates a percentage of the bandwidth to each queue, ensuring good load balancing to each queue, but no real prioritization.
■ Weighted fair queuing (WFQ) sends in priority smaller packets (low-bandwidth consumption) and packets with a higher ToS value, using an automatic classification algorithm. WFQ is implemented simply by entering the command fair-queue at the interface level. A common variant, class-based weighted fair queuing (CBWFQ), allows you to define classes and the amount of bandwidth each should get. Example 1-2 shows a policy map implementing CBWFQ.
Example 1-2
CBWFQ Policy Map
R1(config)# policy-map MyPolicy
R1(config-pmap)# class MyClass1
R1(config-pmap-c)# bandwidth percent 20
R1(config-pmap)# class MyClass2
R1(config-pmap-c)# bandwidth percent 30
R1(config-pmap)# class class-default
R1(config-pmap-c)# bandwidth percent 35
R1(config-pmap)# end
R1#
■ Low-latency queuing (LLQ), commonly used for VoIP, is a variation of CBWFQ where one queue is serviced in priority (in a PQ logic), but only up to a certain amount of bandwidth (so as not to starve the other queues). For the other queues, you define classes and amounts of bandwidth; these queues are served in a CBWFQ manner. Notice that with LLQ, any packet in the priority queue that exceeds the allocated bandwidth is dropped (otherwise, you would be back to simple PQ). Example 1-3 shows a policy map implementing LLQ. The keyword priority is the marker of an LLQ policy.
Example 1-3
LLQ Policy Map
R1(config)# policy-map MyPolicy
R1(config-pmap)# class MyVoiceClass
R1(config-pmap-c)# priority percent 20
R1(config-pmap)# class MyClass2
R1(config-pmap-c)# bandwidth percent 30
R1(config-pmap)# class class-default
R1(config-pmap-c)# bandwidth percent 35
R1(config-pmap)# end
R1#
There are also mechanisms for congestion avoidance. For example, weighted random early detection (WRED) drops additional TCP
packets as the link becomes more congested (before link saturation), recognizing that TCP windowing will resend the lost packets and
reduce the TCP window (thus slowing down the flow). This mechanism avoids link oversubscription, but is suited only to TCP. To
implement WRED, just use the keyword random-detect at the interface level.
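A minimal sketch follows, assuming a router serial WAN interface (the interface number is illustrative):

R1(config)# interface Serial 0/0
R1(config-if)# random-detect
! On platforms that support it, "random-detect dscp-based" selects DSCP-based
! WRED instead of the default IP Precedence-based profile
R1(config-if)# end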
For traffic using UDP (like VoIP), call admission control (CAC) allows only a certain number of concurrent calls, avoiding
oversubscription of the queue allocated for this traffic. CAC is usually implemented on the platform managing the calls.
Another mechanism, header compression, enables you to reduce the packet header overhead from 40 bytes (for example, the IP, UDP, and RTP headers of a voice packet) to 2 or 4 bytes, by simply indexing each flow. This technique works only on point-to-point links.
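For example, RTP header compression can be enabled on a point-to-point serial link as in the following sketch (interface numbering and PPP encapsulation are assumptions); the same command must be applied at both ends of the link:

R1(config)# interface Serial 0/0
R1(config-if)# encapsulation ppp
R1(config-if)# ip rtp header-compression
! "ip tcp header-compression" enables the equivalent for TCP flows
R1(config-if)# end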
On slow-speed links, you can use link fragmentation and interleaving (LFI) to break large packets into smaller packets, and
interleave small and urgent packets between the fragments. This technique is effective at avoiding the delay of small packets behind large packets, but adds overhead. (Each fragmented packet becomes several smaller packets, each with its own header.)
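The following sketch shows one common way to enable LFI with Multilink PPP on a slow serial link; the interface numbers, IP addressing, and 10-ms fragment delay are assumptions, and the exact fragment-delay syntax varies by IOS release:

R1(config)# interface Multilink 1
R1(config-if)# ip address 10.1.1.1 255.255.255.252
R1(config-if)# ppp multilink
R1(config-if)# ppp multilink fragment delay 10
R1(config-if)# ppp multilink interleave
R1(config-if)# exit
R1(config)# interface Serial 0/0
R1(config-if)# encapsulation ppp
R1(config-if)# no ip address
R1(config-if)# ppp multilink
R1(config-if)# ppp multilink group 1
R1(config-if)# end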
After creating your policy, you can apply it at the interface level. With the Cisco MQC, you generally do so with the service-policy {input | output} command. Example 1-4 shows a complete but simple MQC policy, making sure that voice traffic (RTP
audio) and voice signaling (SIP or Skinny, see Chapter 2) are marked properly, using LLQ for voice and CBWFQ for voice signaling
and any other traffic.
Example 1-4
Complete MQC Policy Example
R1(config)# class-map match-any Voice
R1(config-cmap)# match protocol rtp audio
R1(config-cmap)# exit
R1(config)# class-map match-any Voice_signaling
R1(config-cmap)# match protocol sip
R1(config-cmap)# match protocol sccp
R1(config-cmap)# exit
R1(config)# policy-map MyVoicePolicy
R1(config-pmap)# class Voice
R1(config-pmap-c)# set ip dscp ef
R1(config-pmap-c)# priority percent 20
R1(config-pmap-c)# exit
R1(config-pmap)# class Voice_signaling
R1(config-pmap-c)# set ip dscp cs3
R1(config-pmap-c)# bandwidth percent 15
R1(config-pmap)# class class-default
R1(config-pmap-c)# bandwidth percent 50
R1(config-pmap-c)# exit
R1(config-pmap)# exit
R1(config)# interface FastEthernet 0/1
R1(config-if)# service-policy output MyVoicePolicy
R1(config-if)# end
R1#
QoS techniques are used throughout the entire network. At the network access layer, you use classification and marking, congestion
management (policing or shaping), admission control, and congestion avoidance. Congestion management and congestion avoidance
are also used in the core of the network. At the WAN edge of the network, you use the same techniques as at the access layer, with
header compression and LFI if needed.
As an IUWVN candidate, you are not supposed to be a QoS expert, but must have a good understanding of the basic QoS
mechanisms. Keep in mind that QoS does not solve congestion issues, but simply helps safeguard the more important traffic when
congestion occurs. When implementing QoS in a corporate network, start by defining the objectives to be achieved via QoS. Then,
analyze the different traffic types and create matching classification, marking, and policies. Always test before expanding the
configuration to the entire network. Once QoS is implemented, monitor the traffic to decide whether you need to change the policies.
Table 1-1 shows recommendations for common traffic type marking. You might need to create more or fewer classes depending on
your implementation requirements, and you may also need to use another marking.
Note
AutoQoS is an automated tool. As such, it does not have the same knowledge of the network as a QoS expert. For complex deployments, seek the help of a QoS expert.
Table 1-1 Cisco QoS Standard Baseline

Application                           | L2 CoS | IP Precedence | PHB      | DSCP
IP routing                            | 6      | 6             | CS6      | 48
Voice                                 | 5      | 5             | EF       | 46
Interactive video                     | 4      | 4             | AF41     | 34
Streaming video                       | 4      | 4             | CS4      | 32
Locally defined mission-critical data | 3      | 3             | —        | 25
Call signaling                        | 3      | 3             | AF31/CS3 | 26/24
Transactional data                    | 2      | 2             | AF21     | 18
Network management                    | 2      | 2             | CS2      | 16
Bulk data                             | 1      | 1             | AF11     | 10
Scavenger                             | 1      | 1             | CS1      | 8
Best effort                           | 0      | 0             | 0        | 0
In some cases, you can use Cisco AutoQoS to simplify your QoS implementation. AutoQoS is an intelligent macro via which you can
enter one or two simple AutoQoS commands to enable the appropriate features for the recommended QoS settings for an application
on a specific interface.
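For example, on many Catalyst switches and IOS routers (exact keywords and generated policies depend on the platform and software release), AutoQoS can be enabled with commands similar to the following sketch; the interface numbers are assumptions:

Switch(config)# interface GigabitEthernet 0/10
Switch(config-if)# auto qos voip cisco-phone
! On a router WAN interface, "auto qos voip" generates a comparable policy
R1(config)# interface Serial 0/0
R1(config-if)# auto qos voip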
Wireless QoS
QoS is also used in the wireless world. At the cell level, you can use wireless QoS to prioritize voice traffic over other traffic.
Wireless CAC is added to limit the number of concurrent calls in the cell. Between the access point (AP) and the controller, all traffic
is encapsulated into CAPWAP, and you need to take care of the CoS and ToS values on the transmitted packets. You may also need
to manipulate QoS values for packets entering or leaving the controller.
DCF Mechanism
QoS in the wireless cell was implemented through the 802.11e amendment ratified in 2005. Without 802.11e, media access works
through the Distributed Coordination Function (DCF) you learned about in IUWNE. Each station that wants to send a frame has to wait before transmitting. The waiting time is determined by two basic timers: the short interframe space (SIFS), which is the smallest interframe space in use (before 802.11n),
and the slot time (SlotTime), which is the speed at which stations count time (like a metronome). Table 1-2 shows the main values, as
they depend on the protocol you use.
Note
The DIFS, which is the standard silence between frames, is calculated as DIFS = SIFS + 2 x SlotTime.
Table 1-2 SIFS and SlotTime Values

Protocol  | SIFS (Microseconds) | Slot Time (Microseconds) | DIFS (Microseconds)
802.11b   | 10                  | 20                       | 50
802.11g/n | 10                  | 9                        | 28
802.11a/n | 16                  | 9                        | 34
You need to know two other interframe spaces: the reduced interframe space (RIFS), which is a 2-microsecond silence used between
frames during an 802.11n frame burst; and the extended interframe space (EIFS), used when frames collide. The emitting station detecting the collision has to wait an EIFS (calculated as SIFS + the time needed to send an ACK at the lowest basic rate [preamble + PLCP header + 8 x ACK size in bytes] + DIFS) before retrying to send.
When a station needs to send with DCF, it picks up a random number within a contention window (CW). This window is a range of
values between a minimum (CWmin) and a maximum (CWmax). For 802.11b, CWmin is 31. For 802.11a/g/n, CWmin is 15. For all
protocols, CWmax is 1023. For a first attempt to send a given frame, the station usually uses the CWmin value itself. The number
picked is called the backoff timer. The total time the station has to wait before sending is called the network allocation vector (NAV).
The station then waits for a DIFS and starts counting down from the chosen number (so, initially, NAV = DIFS + Backoff timer). At
every number time mark, the station listens to the cell. If no traffic is detected, the station waits a slot time and counts one number
less. If traffic is detected, the station tries to read the 802.11 header duration field (that expresses the time needed in microseconds
to send the frame, wait a SIFS, and receive an ACK frame for unicast transmissions), and increments its NAV to reflect the
additional time necessary for the other station's transmission. (For example, if an 802.11a station's countdown reached 11 when another transmission was detected with a duration value of 90 microseconds [10 slot times], the station sets its NAV to 21 and restarts counting
from there, while the other station transmits.) The process repeats during the countdown for every new detected transmission. When
the countdown reaches 0, if no other transmission is detected, the station sends its frame. If the frame is multicast or broadcast, no
acknowledgment is expected. If the frame is unicast, the receiver waits a SIFS, and then sends back an acknowledgment (ACK)
frame. If no ACK is received, the emitter picks a new backoff timer (from a contention window typically twice the previous size), waits an EIFS, and then restarts counting down. If, after several unsuccessful transmissions, the contention window reaches CWmax, the station keeps using CWmax until the frame is dropped or transmitted successfully.
Stations counting down attempt to read the duration field in other sending stations’ frames to update their NAV, but this might
not always be possible. The entire frame is sent at the same data rate, and the listening station might not be able to demodulate the
sending station frame. (For example, the sending station is close to the AP and uses 54-Mbps 64-QAM, and the listening station is at
the edge of the cell and cannot demodulate anything faster than 6 Mbps.) The listening station may be able to use the PHY header that
is sent slower than the 802.11 frame itself, and which also contains the duration of the frame (expressed in microseconds, or in bytes
and transmission speed). The PHY header does not specify whether the frame is broadcast or unicast. (The listening station does not
know if a SIFS/ACK sequence is expected after the frame is sent.) This limited information may result in collisions. VoWLAN cells
are built small to mitigate this issue and to ensure that most stations detect other stations’ frame headers.
802.11e and WMM
The default DCF mechanism does not differentiate traffic. Small voice packets are sent the same way as large FTP packets. To
improve this mechanism, the 802.11e amendment introduces QoS mechanisms and creates eight priority values, called user priorities
(UP), with the same logic as the CoS values explained earlier. UPs are grouped in pairs into access categories (AC), as shown in
Table 1-3. Most systems use QoS only down to the AC level, not down to the UP sublevel.
Note
Notice that CoS 3 and 0 are in the same AC. (3 is best effort, and 0 is "no tag," which results in best effort treatment.)
Table 1-3 UPs and ACs

Priority | CoS (802.1p) | 802.11e Code | AC Code | AC Name (and Purpose)
Highest  | 7            | NC           | AC_VO   | Platinum (voice)
         | 6            | VO           | AC_VO   | Platinum (voice)
         | 5            | VI           | AC_VI   | Gold (video)
         | 4            | CL           | AC_VI   | Gold (video)
         | 3            | EE           | AC_BE   | Silver (best effort)
         | 0            | BE           | AC_BE   | Silver (best effort)
         | 2            | —            | AC_BK   | Bronze (background)
Lowest   | 1            | BK           | AC_BK   | Bronze (background)
When a QoS station needs to send, it first allocates the packet to a QoS category. The station then picks up a backoff timer, but the
CW is smaller for packets with higher priority, as shown in Table 1-4.
Note
Notice that most values are calculated from CWmin.
Table 1-4 EDCA Contention Window Values per Access Category

AC    | 802.11b CWmin          | 802.11b CWmax          | 802.11a/g/n CWmin      | 802.11a/g/n CWmax
AC_VO | 7 ((CWmin + 1)/4 – 1)  | 15 ((CWmin + 1)/2 – 1) | 3 ((CWmin + 1)/4 – 1)  | 7 ((CWmin + 1)/2 – 1)
AC_VI | 15 ((CWmin + 1)/2 – 1) | 31 (CWmin)             | 7 ((CWmin + 1)/2 – 1)  | 15 (CWmin)
AC_BE | 31 (CWmin)             | 1023 (CWmax)           | 15 (CWmin)             | 1023 (CWmax)
AC_BK | 31 (CWmin)             | 1023 (CWmax)           | 15 (CWmin)             | 1023 (CWmax)
The station then waits an interframe space. It is not the DIFS as in DCF, but the arbitration interframe space (AIFS). The AIFS is SIFS + (AIFSN x SlotTime), where AIFSN (the AIFS number) is a number of slot times that depends on the queue in use. AIFSN is 2 for AC_VO and AC_VI, 3 for AC_BE, and 7 for AC_BK. Notice that AIFS is always equal to or longer than DIFS (never shorter).
The station then counts down from the backoff timer chosen for that queue. When the countdown reaches 0, the frame is sent. All queues count down in parallel inside the station. Because AIFS and CW are smaller for higher-priority queues, chances are higher that voice packets are sent faster than packets of the other queues. This is true both for countdowns inside the station and between stations of the same cell.
802.11e also defines transmit opportunities (TXOP), which are advertised by the QoS AP in its beacons (in a field called the EDCA Information Element) and express, for each AC, the duration (in units of 32 microseconds) for which a station can send during each service period (SP). For example, the QoS AP may tell the cell that for the AC_VI queue, the service period is 3.008 ms and the TXOP is 4, which means that a station can send up to 4 video frames without stopping, per period of 3.008 ms. This allows a station to send more than one frame once it gains access to the medium. A station ready to send (NAV = 0) that needs to send several frames, and knows that the TXOP information allows these frames to be sent (the TXOP is expressed as a time duration, so the station has to base its determination of the number of frames that can be sent on the data rate at which it plans to send), sends them in a burst (called a contention-free burst [CFB]). TXOPs are configurable on the AP and are usually higher (more frames can be sent in a burst) for higher-priority ACs.
This queuing mechanism is different from the one used by non-QoS stations. Therefore, APs and stations implementing 802.11e do
not use DCF, but another coordination function called Hybrid Coordination Function (HCF). 802.11e defines two submodes:
■ Enhanced Distributed Channel Access (EDCA): Similar in concept to DCF (with the addition of the features described earlier: ACs, AIFS, and TXOPs). EDCA performs the Enhanced Distributed Coordination Function (EDCF).
■ HCF Controlled Channel Access (HCCA): The AP takes control of the cell and decides which station has the right to send. This mode resembles an older and unused mode, called Point Coordination Function (PCF). Just like PCF, HCCA has not been implemented by any vendor yet.
802.11e also improves power management mechanisms. With standard 802.11, a station saves battery power by sending a null
(empty) frame to the AP with the Power Management bit in the header set to 1. The AP then buffers subsequent traffic for that station.
The station wakes up at regular intervals to listen to the AP beacon, which contains a field called the Traffic Indication Map (TIM),
indicating the list of stations for which some traffic is buffered. Some beacons also contain a Delivery Traffic Indication Message
(DTIM), indicating that the AP has broadcast or multicast traffic that is going to be sent just after the beacon. The station should stay
awake to receive the broadcast/multicast frame. If the station sees its number in the TIM, it sends a special frame (PS-Poll) to the AP
to ask for the first buffered packet. The AP contends for medium access and sends the frame, indicating with the More Data bit in the
frame header if more packets are buffered. The process is repeated until the More Data bit is set to 0, showing that no more traffic is
buffered. This process consumes a lot of frames. 802.11e introduces Automatic Power Save Delivery (APSD), with two submodes:
■ With Scheduled APSD (S-APSD), the station and the AP negotiate a wakeup time. No further frame exchange is needed. The AP knows when the station is going to wake up and simply sends the buffered traffic in due time. S-APSD is not implemented often.
■ With Unscheduled APSD (U-APSD), the station still informs the AP (with the Power Management bit) that it is going to doze mode, and still listens to the TIMs in the beacons. The station can then inform the AP about its awake state at any time, with any frame where the Power Management bit is set to 0 (no need for PS-Poll frames, and no need to inform the AP just after a beacon). The AP then empties its buffer in a burst (no need for one trigger frame per buffered frame). U-APSD is implemented often and dramatically increases the efficiency of the delivery mechanism. U-APSD not only improves power save management (compared to PS-Poll), but also increases the cell call capacity (because there is less time wasted in buffered frame management). With its flexible trigger mechanism, U-APSD also allows the voice client to synchronize the
transmission and reception of voice frames with the AP. The client can send a burst of frames, then go to power-save mode,
and then trigger the AP to send buffered frames as soon as the client’s receive buffer level gets low. This way, a voice client
wireless card can spend most of its time in power save mode, even during an active call.
Bursts are built with block acknowledgments, also introduced by 802.11e, where stations and APs can negotiate the exchange of bursts
of frames. The entire burst is acknowledged once, with one block ACK frame.
The Wi-Fi Alliance (WFA) certifies as Wi-Fi Multimedia (WMM) systems that implement EDCA (with at least four AC queues,
TXOPs, AIFS, and the specific queue timers explained earlier). The Wi-Fi Alliance also certifies as WMM Power Save systems that
implement WMM and U-APSD. S-APSD and HCCA are not tested in the certification (that is, they can be implemented, but are not
needed for the WMM certification).
QoS-Enabled Cells
A WMM station or AP adds a 2-byte QoS Control field to the end of its 802.11 MAC header that mentions (among other parameters) the UP for the current frame. A non-WMM station assumes that this field is part of the frame body and simply ignores it, allowing non-WMM stations to coexist with WMM stations.
The 802.11e/WMM cell also implements several other efficiency mechanisms:
■ In each beacon, a QoS Basic Service Set Information Element (QBSS IE) informs the cell about the number of stations currently associated to the AP, the percentage of the AP radio resources currently used by these clients, and a count of the space left for new stations. This QBSS IE allows WMM roaming stations to choose the best AP, based on signal and space.
■ WMM stations can also (optionally) use a Traffic Specification (TSPEC) to specify the type of traffic they need to send and receive (UP and volume, upstream and downstream). The AP can notify the station whether that traffic can be admitted, and can subtract this traffic when informing the cell about the space left in the cell in the next QBSS IE (and the TXOP information in the beacon EDCA IE). The TSPEC is sent in a frame exchange called Add Traffic Specification (ADDTS). With EDCA, the station can ignore the AP answer. TSPEC is important because it is used for CAC in the controller-based solution.
Yet, 802.11e does not (and cannot) address issues related to frame sizes versus throughput. A lot of 802.11 cell time is spent in
overhead (interframe spaces, acknowledgments, 802.11 PHY and MAC headers). A small packet requires the same overhead as a
large packet. In proportion, time spent in overhead increases as the payload (packet size) decreases. This phenomenon means that the
real throughput decreases as the packet size decreases. Table 1-5 shows example maximum throughput for some typical packet sizes.
Table 1-5 Typical Maximum Throughput for Common Packet Sizes

Data Rate        | Throughput for 300 B Packets | Throughput for 900 B Packets | Throughput for 1500 B Packets
802.11g, 54 Mbps | 11.4 Mbps                    | 24.6 Mbps                    | 31.4 Mbps
802.11b, 11 Mbps | 2.2 Mbps                     | 4.7 Mbps                     | 6 Mbps
There is not much you can do to solve this issue. You have to be aware that throughput degrades as the packet size gets smaller. When
deploying a new application over a wireless network, always test to determine the real effective throughput.
QoS Marking Between APs and Controllers
QoS needs to be controlled when frames are sent between the controller and the APs. For each WLAN, you can define a QoS level
that determines the maximum AC allowed on the cell. Make sure that packets transmitted between the APs and the controller take the
entering packet tag into account, without exceeding the policy configured for the WLAN. Figure 1-2 summarizes the QoS tag control
mechanism used between APs and controllers. You need to understand and remember its principles.
[Figure 1-2 shows, for WMM and non-WMM clients, how QoS tags are handled between the AP (on an access port) and the controller (on a trunk): the original DSCP value is copied unchanged into the CAPWAP-encapsulated packet; the DSCP is copied to the 802.11e UP (and the 802.11e UP to the outer DSCP) capped to the WLAN QoS maximum; and the 802.1p value on the controller trunk is capped to the WLAN QoS maximum. With no 802.1p QoS mapping on the WLAN, or with an incoming packet carrying no QoS tag, no outer QoS marking is applied.]
Figure 1-2 QoS Between APs and WLCs
Spend some time memorizing the principles displayed in this figure. A key element needed to understand the process is that the controller
usually connects to a switch through a trunk interface. An AP (in local mode) connects to a switch through an interface in access mode.
Therefore, 802.1Q (and 802.1p/CoS) is only available on the trunk link to the controller, not on the access link to the AP.
When a packet reaches the controller, the controller is always going to keep the original DSCP marking unchanged. This is true for
wired packets sent to the AP (DSCP inside the CAPWAP encapsulated frame and DSCP in the outer CAPWAP header are the DSCP
value in the incoming packet) and for packets received from the AP (the original DSCP value is maintained inside the CAPWAP
packet). But the controller is always going to cap the 802.1p value to whatever QoS value is set for the destination WLAN. For
example, if the incoming packet requests 802.1p 5 / DSCP 46 (EF) and if the WLAN maximum is AC_BE [802.1p 3], the packet sent
to the AP will show an inner and outer DSCP of 46, but an outer 802.1p of 3. Of course, if the requested QoS level is lower than the
WLAN maximum, the requested QoS level is granted.
This logic implies that you should set a QoS policy on your switch to handle this discrepancy. (Trust the CoS tag and re-mark the
DSCP tag accordingly, or proceed the opposite way if you trust your customers' requested QoS levels.)
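One possible way to handle this on a Catalyst switch is to trust CoS on the controller trunk and adjust the CoS-to-DSCP map so that CoS 5 rewrites the DSCP to EF (46) instead of the default CS5 (40); the interface number is an assumption:

Switch(config)# mls qos
Switch(config)# mls qos map cos-dscp 0 8 16 24 32 46 48 56
Switch(config)# interface GigabitEthernet 0/24
Switch(config-if)# mls qos trust cos
Switch(config-if)# end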
The AP is also going to keep the original DSCP marking unchanged (inside the CAPWAP encapsulated packet or the 802.11 frame).
But the AP always checks the WLAN policy, and caps to the WLAN maximum the 802.11 UP, or the CAPWAP outer DSCP.
(Because the AP is on an access port, the AP cannot use an 802.1p tag.) For example, if the AP receives a frame from the cell with UP 6 and DSCP 46 (EF) and if the WLAN maximum is AC_BE [802.1p 3], the packet is forwarded from the AP toward the controller with the same inner DSCP 46 (EF), but the outer DSCP is capped to the WLAN maximum, which is AF21, or DSCP 18 (802.1p 3 translated into DSCP).
The H-REAP is a special case because it may reside on a trunk and switch traffic locally. The H-REAP basically follows both logics,
capping 802.1p when using an 802.1Q tag (and keeping DSCP unchanged), or capping DSCP when traffic is sent untagged (native
VLAN or access port).
This AP-to-wireless LAN controller (WLC) tagging behavior assumes that you configure a QoS level on your WLAN QoS tab (Silver is the default), and that you also enable wireless-to-wired mapping on the controller, from Wireless > QoS Profiles, for the QoS level chosen for the WLAN. Without wireless-to-wired mapping, or if the incoming packet does not have any QoS tag (non-WMM wireless client or
non-QoS wired source), no QoS is applied between the controller and the AP (and the AP uses the DCF queue to forward the frame to
the cell).
Cisco, IETF, and IEEE QoS Policies
The highest 802.1p value is 7 and is kept for emergencies. The IEEE therefore states that the first value you should use for your most
urgent traffic, voice, should be 6. This is why voice is UP 6 in 802.11e. The IETF and Cisco state that before voice is forwarded, the
network should exist, and that CAPWAP traffic and routing updates should have a higher priority than user traffic, even voice. In a
Cisco network, CAPWAP control is tagged CS6 (802.1p 6), and voice EF (802.1p 5). Translation is done automatically for voice at
the controller and AP levels when you choose the default 802.1p mapping for the Platinum queue. (You see UP 6, but will get 802.1p
5 and EF.) The same logic applies for the other queues, as specified in Table 1-6.
Table 1-6 Cisco QoS Mapping

Traffic Type                               | Cisco Unified Communications IP DSCP | Cisco Unified Communications 802.1p UP | IEEE 802.11e UP (Seen in Wireless Cell and/or on WLC QoS Profile) | Notes
Network control                            | 56        | 7 | N/A          | Reserved for network control only
Internetwork control                       | 48        | 6 | 7 (AC_VO)    | CAPWAP control
Voice                                      | 46 (EF)   | 5 | 6 (AC_VO)    | Controller: Platinum QoS profile
Video                                      | 34 (AF41) | 4 | 5 (AC_VI)    | Controller: Gold QoS profile
Voice control                              | 24 (CS3)  | 3 | 4 (AC_VI)    | —
Best effort                                | 0 (BE)    | 0 | 3, 0 (AC_BE) | Controller: Silver QoS profile
Background (Cisco AVVID Gold background)   | 18 (AF21) | 2 | 2 (AC_BK)    | —
Background (Cisco AVVID Silver background) | 10 (AF11) | 1 | 1 (AC_BK)    | Controller: Bronze QoS profile
You should memorize Table 1-6. These values mean that you should leave the default mapping as it appears on the QoS profile pages.
If you change Platinum mapping to 802.1p 5, you will in fact get 802.1p 4! Notice that automatic mapping does not occur on the
autonomous AP. Figure 1-3 shows the mapping configuration pages on the controller and the autonomous AP.
Figure 1-3 QoS Mapping Pages in the WLC and Autonomous AP
When building your QoS policy, remember that standard CAPWAP control uses CS6 and that CAPWAP data QoS level depends
on the WLAN policy. When an AP first joins a controller and gets its configuration, this initial CAPWAP control exchange is sent
as best effort (no QoS tag). 802.1X/EAP traffic (between the AP, the WLC, and the RADIUS server) uses CS4. Exchanges between
controllers (for mobility messages or Radio Resource Management [RRM]), and multicast/broadcast traffic forwarded from the
controller to any WLAN through the APs, are sent as best effort (no QoS tag).
CAPWAP does not represent a large volume of traffic. The discovery process consumes 203 bytes (97-byte request and 106-byte
response), the join phase 3000 bytes, and the configuration phase approximately 6000 bytes. Subsequent CAPWAP control traffic
represents 0.35 Kbps per AP. RRM represents 396 bytes exchanged every 60 seconds between controller pairs, and 2660 bytes every
180 seconds for updates. Each initial RRM contact consumes 1400 bytes, and each RRM parameter change 375 bytes per changed
AP. CAPWAP itself adds 60 bytes to each data frame (14 bytes for CAPWAP information, and 46 bytes for the outer Layer 4, 3, and
2 header).
CAPWAP control traffic should be kept with its CS6 QoS tag. Nevertheless, you may face customers worried about the impact of
CAPWAP traffic on the network, and who want to re-mark CAPWAP control with a lower QoS value. Although this is unnecessary
in most networks and not recommended, Example 1-5 shows a way to re-mark and rate-limit CAPWAP control traffic based on its
destination port. (The controller management interface IP address is 10.10.10.10 in this example.)
Example 1-5
Policy to Re-Mark and Rate-Limit CAPWAP Control Traffic
R1(config)# access-list 110 permit udp any host 10.10.10.10 eq 5246
R1(config)# class-map match-all CAPWAPCS6
R1(config-cmap)# match access-group 110
R1(config-cmap)# match dscp cs6
R1(config-cmap)# exit
R1(config)# policy-map CAPWAPCS6
R1(config-pmap)# class CAPWAPCS6
R1(config-pmap-c)# set dscp cs3
R1(config-pmap-c)# police 8000 conform-action transmit exceed-action set-dscp-transmit 24
R1(config-pmap-c)# exit
R1(config-pmap)# exit
R1(config)# interface FastEthernet 0
R1(config-if)# service-policy output CAPWAPCS6
R1(config-if)# end
R1#
Configuring QoS on Controllers
You can define the maximum QoS level expected for a WLAN from the WLAN QoS configuration tab. You can choose Platinum,
Gold, Silver, or Bronze for the WLAN. In the same tab, you can enable or disable WMM. WMM is allowed by default, which means
that the AP uses WMM for that WLAN (QoS information in the frame headers, QBSS IE in the beacons, TSPEC used in CAC).
WMM and non-WMM stations are allowed to join the cell. You can disable WMM, or change it to mandatory (so that non-WMM
stations can no longer associate).
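If you prefer the controller CLI, roughly equivalent WLAN settings can be applied with commands like the following sketch, assuming an AireOS controller; WLAN ID 1 is illustrative, and the WLAN usually must be disabled before these changes:

(Cisco Controller) >config wlan disable 1
(Cisco Controller) >config wlan qos 1 platinum
(Cisco Controller) >config wlan wmm require 1
(Cisco Controller) >config wlan enable 1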
On the same page, you can configure QoS parameters for the 7920 phone. The 7920 is an old 802.11b phone that does not support
WMM. Cisco developed a proprietary QBSS IE to help this phone roam (called the Cisco pre-standard draft 6 QBSS IE, sometimes
called CCX QBSS IE v1, and available in the WLAN QoS configuration tab as 7920 Client CAC), before the 802.11e amendment
was released. This information element was located in AP beacons exactly where the 802.11e QBSS IE is located today. Therefore,
you cannot enable 7920 Client CAC if you enable WMM (which activates the QBSS IE in the AP beacons). When 802.11e was
published, Cisco moved the proprietary QBSS IE deeper in the frame and updated the 7920 phone firmware (2.01 and later) to make
the phone aware of the new location, allowing both the Cisco proprietary QBSS and the 802.11e QBSS to coexist on the same AP.
This new mode is called 7920 AP CAC in the controller interface (CCX QBSS IE v2 in frame captures), and is compatible with
WMM. Enable 7920 AP CAC if you have 7920 phones, but only on the 2.4-GHz band. (The 7920 is 802.11b only.)
After defining the QoS level for the WLAN, you can navigate to Wireless > QoS > Profiles, click each profile (Platinum, Gold,
Silver, and Bronze), and activate the 802.1p mapping for each profile. Leave the default mapping (6, 5, 3, and 1, respectively),
knowing that this IEEE value will automatically be translated into the Cisco-recommended mapping (5, 4, 0, and 1, respectively).
From the same page, you can choose to allocate a maximum bandwidth value for each user on each WLAN that uses the configured QoS profile. You can set a maximum bandwidth value for TCP packets (average data rate and burst data rate) or UDP packets
(average real time and burst real time). Figure 1-4 shows the main QoS items you need to configure on a controller.
Figure 1-4 QoS-Related Items on a Controller
Your next stop can be on the Wireless > 802.11a/n | 802.11b/g/n > Media page, where you can configure wireless CAC for voice or
video, as shown in Figure 1-4. From the Wireless > 802.11a/n | 802.11b/g/n > EDCA parameters page, you can also refine the way
WMM allocates resources to QoS stations. The default mode, WMM, is adapted to general deployments (data, or large proportion of
data along with some voice). You can change the CWMin, CWMax, AIFS, and TXOPs allocated to each AC by changing the EDCA
Profile to Voice Optimized (most resources are allocated to voice, and few resources to other ACs), Voice and Video Optimized,
or SpectraLink Voice Priority. The AP prioritizes frames containing the SpectraLink Radio Protocol, protocol 119, because some
SpectraLink phones are not WMM. See Chapter 3, “VoWLAN Implementation,” for more details about these pages.
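On the controller CLI, roughly equivalent commands exist; the sketch below assumes an AireOS controller and the 2.4-GHz band, and the 802.11b/g network generally must be disabled before changing EDCA parameters:

(Cisco Controller) >config 802.11b disable network
(Cisco Controller) >config advanced 802.11b edca-parameter optimized-voice
(Cisco Controller) >config 802.11b cac voice acm enable
(Cisco Controller) >config 802.11b enable network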
As part of your QoS configuration, you can also control the bandwidth allocated to users on Webauth WLANs. This is accomplished
by first creating a QoS role from Wireless > QoS > Roles. Create a new role, click Apply, and then click the role name to edit its
properties and allocate a bandwidth limitation for users with that profile. You can set a maximum bandwidth value for TCP packets
(average data rate and burst data rate) or UDP packets (average real time and burst real time). Then, when creating local users on the
controller, from Security > AAA > Local Net Users, you can check the Guest User Role check box and select the QoS profile you
created. The user bandwidth will be limited as defined in the QoS role. Notice that this function is available only for users of Webauth
WLANs. Figure 1-5 shows the main QoS profile configuration on a controller.
Figure 1-5 WebAuth User QoS Profile Configuration on a Controller
Notice that you can configure all QoS items from Cisco Wireless Control System (WCS) through controller templates.
Configuring QoS on Autonomous APs
QoS on autonomous APs is slightly different from QoS on controllers. Some items are similar in concept to the controller parameters.
For example, from the Services > QoS > Access Categories tab, you can fine-tune the AP EDCA parameters. You can manually set
the CWmin, CWmax, TXOP, and AIFSN values for each AC. You can also choose the WFA default values (the equivalent of the WMM EDCA configuration on the WLC) or an optimized voice mode (equivalent to the voice-optimized EDCA configuration on the WLC). When choosing the voice mode, the AP changes the EDCA parameters, but also starts sending probe responses every
10 ms (called gratuitous probe responses). This feature is useful when phones are discovering WLANs by passively listening to
AP beacons. Standard beacon interval is 102.4 ms. By sending unsolicited probe responses every 10 ms, the AP expedites WLAN
discovery. The AP also starts prioritizing voice traffic, using an LLQ logic. Figure 1-6 shows the Access Categories tab.
Figure 1-6 Autonomous AP Access Categories Tab
At the bottom of the same page, you can configure CAC to allow voice traffic to use up to a configurable maximum percentage of
the AP radio bandwidth. New calls are not allowed if their addition makes the overall voice traffic in the cell exceed the configurable
value. A percentage of that value is kept for roaming users (so that users roaming to the cell do not get disconnected even if the
maximum allowed bandwidth is already in use by local calls). Notice that roaming bandwidth is taken from the CAC maximum
bandwidth (for example, if CAC Max Channel Capacity is set to 75% [default value] and Roam Channel Capacity to 6% [default value], the actual bandwidth available for local calls is 69% [75 – 6]).
You can further configure this LLQ from Services > QoS > Streams, as shown in Figure 1-7. Bandwidth allocation for LLQ on
an IOS AP (which is a Layer 2 device) differs slightly from LLQ on a router. On the autonomous AP, LLQ means that as soon as a
packet arrives in the AP buffer for the queue set for LLQ, the packet is sent regardless of what other packets may be present in the
other queues. In that sense, it is a form of PQ. Bandwidth limitation for that queue is achieved by combining two other parameters:
■ Retries: Packets sent from the AP radio may fail (collisions). You can define, for the LLQ queue, how many times the AP should try resending a packet (the default is three attempts) before dropping it.
■ Rate: Lower down on the page, you can configure which rates should be used for LLQ packets. You can define Nominal rates (rates the AP should try first), Non-Nominal rates (rates the AP should try if none of the nominal rates succeeds), and Disable rates (rates that should not be used for LLQ).
Figure 1-7 Autonomous AP LLQ Configuration
From the Services > QoS > Advanced page, shown in Figure 1-8, you can configure parameters that are close to concepts present on controllers, but configured with a different logic. You can enable or disable WMM for each radio. (On WLCs, WMM is configured at the WLAN level.) You can also choose to apply the AVVID Priority mapping, which translates 802.11e UP 6 into 802.1p CoS 5. This parameter is disabled by default (which means that 802.11e UP 6 translates into 802.1p CoS 6 by default). You need to enable this parameter for the autonomous AP to behave like the controller and CAPWAP AP, translating the voice UP 6 into the IETF/Cisco-recommended wired QoS level of CoS 5. Notice that this translation is valid only for voice traffic (not for the other ACs).
From the same page, you can configure the AP to prioritize voice traffic, regardless of any other QoS configuration on the AP, by checking the QoS Element for Wireless Phones check box. Enabling this feature means that the AP automatically starts prioritizing voice traffic over its radio interfaces in a PQ logic (again, regardless of any other QoS configuration on the AP). The AP also starts adding the QBSS IE to its beacons (the Cisco QBSS version 2). If you check the Dot11e check box, the AP also includes the 802.11e QBSS IE. Figure 1-8 shows the QoS > Advanced page.
Figure 1-8 Autonomous AP QoS > Advanced Page
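On the autonomous AP CLI, this check box generally corresponds to the dot11 phone global configuration command; the dot11e keyword adds the 802.11e QBSS IE:

AP(config)# dot11 phone
! or, to also include the 802.11e QBSS IE:
AP(config)# dot11 phone dot11e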
You can configure some parameters on autonomous APs that you cannot configure on controllers. For example, you can configure a
QoS policy to mark/prioritize traffic (just as you would on a router with the MQC, which is not possible on a controller). You do so
from Services > QoS > QoS Policies. From that page, you can define a new policy, providing a name and selecting (from drop-down
lists) the type of traffic that should be targeted (based on IP Precedence, DSCP value, or by targeting protocol 119). Lower down on
the page, you can decide which 802.11e user priority to apply to this traffic (useful for traffic sent out of the AP wireless interfaces).
You can also configure a traffic bandwidth limitation. (Make sure to take into account the interface bandwidth before configuring
such limitation, because the policy allows you to set a limit of up to 2 Gbps, which is too high for any radio or wired interface.) At the
bottom of the page, you can apply the policy to the wired or wireless interface (or to VLANs if VLANs are configured on the AP), in
the incoming or outgoing direction. You can apply the same policy at several locations. When checking the resulting policy from the CLI, you would see the classic class-map, policy-map, and service-policy sequence explained earlier in this chapter.
Configuring QoS on Routers and Switches
On routers, you can configure QoS using the MQC as explained earlier in this chapter. QoS configuration differs slightly on switches because switch buffers are built differently from router buffers (and switches aim at moving frames between interfaces at hardware speed). On multilayer switches (Layer 2 and Layer 3 switches that can switch frames and route packets), start by enabling QoS support with the mls qos global configuration command. At each interface level, decide whether incoming QoS tags should be trusted. For ports to CAPWAP APs (all modes), trust DSCP. This is true even for H-REAPs on trunks, because H-REAPs always get their management interface on the native (untagged) VLAN and use this untagged VLAN to communicate with the controller (therefore, CAPWAP control has DSCP tag CS6, but no 802.1p tag). For ports to controllers, trust CoS (unless you do not want to cap your customers' requested QoS levels, which is not the majority of cases).
Switch buffers commonly have four hardware outgoing queues. You can configure these queues with a policing logic (each queue is limited to a portion of the interface bandwidth, and the excess is dropped) with the command srr-queue bandwidth shape. (Each nonzero shape weight limits its queue to 1/weight of the interface bandwidth; for example, srr-queue bandwidth shape 4 4 4 4 limits each queue to 25% of the interface bandwidth, and packets in excess in each queue are dropped.) You can also use a share logic, just like on routers (each queue receives a percentage of the interface bandwidth, and the excess is buffered and sent later if possible) with the command srr-queue bandwidth share. You can use shape and share together. The shape command takes precedence; share is ignored, except for queues for which the shape weight is 0. In that case, share is used for that queue.
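As a rough model of that precedence rule (assuming, as on Catalyst 3560/3750 switches, that a nonzero shaped weight w caps its queue at 1/w of the line rate and that shared weights split the remaining bandwidth proportionally; verify the exact behavior on your platform), consider this sketch:
# Sketch of SRR queue bandwidth allocation under the stated assumptions.
def srr_allocation(line_rate_mbps, shape, share):
    alloc = [0.0] * 4
    shaped = [i for i in range(4) if shape[i] != 0]
    for i in shaped:                                  # shape takes precedence
        alloc[i] = line_rate_mbps / shape[i]
    rest = line_rate_mbps - sum(alloc[i] for i in shaped)
    shared = [i for i in range(4) if shape[i] == 0]
    total = sum(share[i] for i in shared) or 1
    for i in shared:                                  # share applies only where shape is 0
        alloc[i] = rest * share[i] / total
    return alloc

# srr-queue bandwidth shape 10 0 0 0 / share 10 10 60 20 on a 100-Mbps port:
print(srr_allocation(100, [10, 0, 0, 0], [10, 10, 60, 20]))
# [10.0, 10.0, 60.0, 20.0] -> queue 1 capped at 10 Mbps, remainder split 10/60/20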
You can also ensure that queue 1, which by default matches voice traffic, receives priority treatment with the command
priority-queue out. When you combine these three commands (shape, share, and priority-queue) together, the result is effectively
LLQ for voice. Example 1-6 is a typical configuration for a port to a controller. Notice that the standard spanning-tree elements
were added. (The port is a trunk, only the VLANs needed by the controller are allowed, Port Fast is enabled so that the port comes up
without waiting for spanning tree to detect whether a switch is connected, and BPDU guard is added to disable the port if a switch is
connected in place of the controller.)
Example 1-6
Switch Configuration for Controller Ports
switch(config)# mls qos
switch(config)# interface gigabitethernet 0/1
switch(config-if)# switchport trunk encapsulation dot1q
switch(config-if)# switchport mode trunk
switch(config-if)# switchport trunk allowed vlan 10,20,30,100
switch(config-if)# spanning-tree portfast trunk
switch(config-if)# spanning-tree bpduguard enable
switch(config-if)# mls qos trust cos
switch(config-if)# srr-queue bandwidth share 10 10 60 20
switch(config-if)# srr-queue bandwidth shape 10 0 0 0
switch(config-if)# priority-queue out
switch(config-if)# end
Example 1-7 is a typical configuration for a port to a CAPWAP AP.
Example 1-7
Switch Configuration for AP Ports
switch(config)# mls qos
switch(config)# interface gigabitethernet 0/2
switch(config-if)# switchport access vlan 50
switch(config-if)# switchport mode access
switch(config-if)# spanning-tree portfast
switch(config-if)# spanning-tree bpduguard enable
switch(config-if)# mls qos trust dscp
switch(config-if)# srr-queue bandwidth share 10 10 60 20
switch(config-if)# srr-queue bandwidth shape 10 0 0 0
switch(config-if)# priority-queue out
switch(config-if)# end
The IOS AP may be on an access port or a trunk. For an IOS AP on an access port, you need to trust DSCP because there is no CoS.
For an IOS AP on a trunk port, you can trust DSCP, just like for other APs. If all WLANs are mapped to VLANs (that is, all WLAN
traffic is tagged with 802.1Q /802.1p when sent to the wired interface), you can also trust CoS, which may be useful if the switch is
only Layer 2 capable. You can check the QoS configuration on the switch with the show mls qos interface command.
Other important QoS elements configured on switches are the CoS-to-DSCP and the DSCP-to-CoS maps. When you trust CoS, the
Layer 3 switch checks the incoming frame DSCP value and rewrites this value if it does not match the expected DSCP value for the
trusted CoS. The same logic applies in reverse when you trust DSCP (CoS is rewritten). Therefore, whenever you trust CoS or DSCP,
you should verify the CoS-to-DSCP map and the DSCP-to-CoS map with the show mls qos maps [cos-dscp | dscp-cos] command.
If the default values do not match your requirements (for example, CoS 5 is often mapped by default to DSCP 40 instead of the
recommended DSCP 46 for voice traffic), you can change the mapping. To change the DSCP value you want for a trusted CoS value, use the command mls qos map cos-dscp. For this command, you provide eight DSCP values, to match the eight trusted CoS values (from 0 to 7). To change the CoS value you want for a trusted DSCP value, use the command mls qos map dscp-cos. For this command, you provide the DSCP value you trust, you add the keyword to, and you type the CoS value you want for that trusted DSCP value.
Example 1-8 shows a CoS-to-DSCP and DSCP-to-CoS map change.
Example 1-8
CoS-to-DSCP and DSCP-to-CoS Changes
switch(config)# mls qos
switch(config)# mls qos map cos-dscp 0 8 16 26 32 46 48 56
switch(config)# mls qos map dscp-cos 26 to 3
switch(config)# mls qos map dscp-cos 46 to 5
switch(config)# end
Be aware that fewer possible CoS values exist (8) than possible DSCP values (64). Therefore, when you trust CoS, several DSCP
values will translate to the same CoS value. When you translate CoS values, there will be a default DSCP value for each CoS value,
which again limits the scope of values you will see in your network. Table 1-7 shows the default translation values when frames
transit through controllers and APs. When an incoming packet exceeds the maximum QoS level defined for the target WLAN (as
defined in the first column), the AP and WLC cap the outer QoS header as listed. The AP uses an outer DSCP marking, and the WLC
an outer CoS marking.
Table 1-7 Capped Values for Each AC
AC          Max DSCP Value on APs    Max CoS Value on WLCs
Platinum    EF (DSCP 46)             5
Gold        AF 41 (DSCP 34)          4
Silver      AF 21 (DSCP 18)          3
Bronze      CS 1 (DSCP 8)            1
When these capped packets reach the switch, the untrusted value is rewritten. Table 1-8 shows standard equivalents for DSCP-to-CoS
and CoS-to-DSCP re-marking. The first value in each column is the trusted value, and the second value is the re-marked value. For
example, line two of the first column (EF [DSCP 46] -> 5) shows that when DSCP is trusted and the incoming packet shows a DSCP
value of 46 (EF), the packet Layer 2 CoS value is rewritten (regardless of the original CoS value) as CoS 5.
Table 1-8 Common DSCP-to-CoS and CoS-to-DSCP Re-Marking on Switches
Trusted DSCP Value -> CoS When DSCP Is Trusted    Trusted CoS Value -> DSCP When CoS Is Trusted
EF (DSCP 46) -> 5                                 5 -> EF (DSCP 46)
AF 41 (DSCP 34) -> 4                              4 -> AF 41 (DSCP 34)
AF 21 (DSCP 18) -> 2                              3 -> AF 31 (DSCP 26)
CS 1 (DSCP 8) -> 1                                1 -> AF 11 (DSCP 10)
From these tables, you can see that a frame coming from the wireless space with UP 6 in a WLAN for which QoS level is set to Silver
will have an AF21 DSCP value (on the outer CAPWAP packet) when leaving the AP, which will be translated as 802.1p 2 on a trunk.
The same packet coming from the wired side through the controller will get 802.1p 3, which the switch will translate as AF 31. This
means that the same AC may not get the same outer tag, depending on the direction of the packet.
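The capping and re-marking behavior summarized in Tables 1-7 and 1-8 can be modeled with two small lookup tables; this is an illustrative model only (the min() clamping is a simplification of the actual capping logic), and the values simply restate the tables:
# Illustrative model of WLAN QoS capping (Table 1-7) and switch re-marking (Table 1-8).
WLAN_CAP_DSCP = {"platinum": 46, "gold": 34, "silver": 18, "bronze": 8}   # AP outer DSCP cap
WLAN_CAP_COS  = {"platinum": 5,  "gold": 4,  "silver": 3,  "bronze": 1}   # WLC outer CoS cap
DSCP_TO_COS = {46: 5, 34: 4, 18: 2, 8: 1}      # when the switch trusts DSCP
COS_TO_DSCP = {5: 46, 4: 34, 3: 26, 1: 10}     # when the switch trusts CoS

def upstream_outer_cos(wlan_profile, client_dscp):
    """Client -> AP -> switch (trust DSCP): outer DSCP is capped, then mapped to CoS."""
    dscp = min(client_dscp, WLAN_CAP_DSCP[wlan_profile])
    return DSCP_TO_COS.get(dscp, 0)

def downstream_outer_dscp(wlan_profile, wired_cos):
    """Wired -> WLC -> switch (trust CoS): outer CoS is capped, then mapped to DSCP."""
    cos = min(wired_cos, WLAN_CAP_COS[wlan_profile])
    return COS_TO_DSCP.get(cos, 0)

print(upstream_outer_cos("silver", 46))    # 2: UP 6 voice capped to AF21, re-marked CoS 2
print(downstream_outer_dscp("silver", 5))  # 26: CoS capped to 3, re-marked AF31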
Chapter 2
VoWLAN Architecture
Quality of service (QoS) is a key element in a Voice over WLAN (VoWLAN) architecture. You also need to understand how
VoWLAN networks are built, the type of frames and volumes that VoWLAN devices exchange, and the proper way of designing
VoWLAN cells. Design is key knowledge expected from a CCNP Wireless professional. Too many VoWLAN networks were built by
simply adding VoWLAN phones to a wireless layout deployed with data traffic in mind. You need to clearly understand what makes
an effective WLAN for voice support. After mastering the design and deployment phase, you will be ready to configure your wireless
devices and make your VoWLAN a success.
Voice Architecture
Early analog telephone systems were quite simple. A cable pair was connected to the phone speaker, and another pair to the
microphone. Both pairs ran from the end-user location to a neighboring board where an operator managed calls. The user pressed
a lever to make a light blink on the operator board. The operator connected a speaker to the user microphone line connector on the
board, and a microphone to the user speaker line, which allowed communication between the operator and the user. When the user
explained the phone destination to be reached, the operator connected the user microphone line connector to the destination speaker
connector on the same board, and the user speaker line connector to the destination microphone connector on the same board. If the
destination was remote, the operator connected the user line to another operator closer to the destination, and the chain would repeat
until the user was connected to the final destination.
During the 20th century, automated central office (CO) switches appeared to replace operators, performing the same connection
functions via mechanical switches instead of human action. The development of communications brought more complex features to determine the best route to a destination, to manage the availability and load of inter-CO switch links (called trunks), and so on. Trunks became digital to allow sending more traffic over the same physical circuit. (The digital-to-analog conversion was done in the CO switch; the end-user system stayed analog.)
Corporations started installing smaller versions of CO switches (called private branch exchange [PBX]) inside their facilities to
manage internal calls (and to avoid paying a fee for these calls). PBX often received additional functions specific to enterprise needs
(call parking, voicemail, music on hold, and so forth).
Several drivers pushed to a transition to VoIP. A major concern was cost. Having to dedicate an entire circuit for each call made it
very expensive. VoIP allows operators to use a single link to send several calls.
VoIP also offers many advanced features that are more complex to implement with analog systems, such as the following:
■
Advanced routing: The possibility to choose the best path to any destination based on link speed or cost or any other
criterion.
■
Unified messaging: Messages can be managed from IP-based applications that can be reached from anywhere in the world.
■
Applications integration: For example, the possibility with XML to receive information on the phone screen.
Because most corporations already have an IP network, VoIP is often seen as just adding a new function to an existing infrastructure,
not as a dramatic change. The phone itself is digital (and is called IP phone), and specialized servers (called application servers) can
be added to provide additional features, such as video telephony or conferencing. The IP phone connects, using IP-based protocols,
to a communications manager platform that provides the phone with an extension number and performs call management functions
(call admission or rejection based on destination or user profile, call setup and routing to destination, ongoing call monitoring, call
termination). You may find in the VoIP world specialized devices that do not exist in the analog telephony world. For example, the
gatekeeper (or call agent) provides call admission control (CAC), bandwidth control, and management. The gateway translates
between protocols (for example between VoIP and the standard public switched telephone network [PSTN]). The multipoint control
unit (MTU) provides real-time connectivity for participants to conference calls. The communications manager platform can perform
one or several of these functions. The communications manager can be centralized, which allows for easier management, but also
creates a single point of failure. In case of failure, a local simplified backup system using Survivable Remote Site Telephony (SRST)
can provide minimum functions to maintain part of the VoIP system. The manager can also be distributed to several locations, which
enhances resistance to failures, but also increases the system complexity.
Protocols are needed to communicate between VoIP systems. Of the many protocols, you need to know four of them:
■
H.323 is a suite of protocols developed by the ITU for audio and video communications. It is often described as an “umbrella of protocols” because it contains several subprotocols to handle particular functions (for example, H.225 for call signaling and Registration Admission and Status [RAS], to establish, maintain, and manage calls; or H.245 for negotiation of each endpoint's capabilities and channels). H.323 is the most widely used protocol in the VoIP world.
■
Media Gateway Control Protocol (MGCP) is an example of a master-slave protocol, where the endpoint (or media gateway
[MG]) is entirely controlled by the MG controller (MGC) down to the tone to play when the user presses a key on the phone.
■
Session Initiation Protocol (SIP), developed by the IETF, is (in contrast) very distributed. Each endpoint (user agent [UA])
uses technologies from the Internet (DNS, MIME, HTTP, and so on) to establish and maintain sessions. SIP is so flexible that
it is easy to create new features, and interoperability often becomes an issue between vendor implementations.
■
Skinny Client Control Protocol (SCCP) is a Cisco proprietary protocol that is used for communications between Cisco
Unified Communications Manager (CUCM), the Cisco implementation of the communication manager function described
earlier, and terminal endpoints. It is therefore different from the three other protocols listed here in its scope, because it is
limited to endpoint-to-CUCM connections. The CUCM then uses another protocol to communicate with external systems.
VoWLAN Flow
A VoWLAN IP phone is first a wireless client that needs to authenticate and associate to the wireless infrastructure (using the same
security mechanisms as other wireless devices, such as WPA/WPA2, 802.1X/EAP, and so on). The phone then needs to receive (typically from the DHCP server) the TFTP server address, from which the phone downloads its configuration file, which contains the IP address of the CUCM. It then sends SCCP messages to register to the CUCM and receive its profile (extension number,
allowed functions, and so on). The phone also receives information regarding which codec to use (the method to convert audio into
binary values and back).
When the user makes a call, the phone goes through the CUCM to reach the other endpoint. As soon as the other end picks up the call,
the voice flow is direct between phones. CUCM is then only involved to monitor the call, not for voice traffic between phones.
Codecs
A key element of VoIP communications is the codec (the method to convert audio into binary values and back). Codec engines
typically capture 20 ms segments of voice audio (50 segments per second) and convert each segment into a binary value. Codecs
are often a tradeoff between resulting packet size and sound quality. Quality can be measured with several tools, one of them being
the mean opinion score (MOS), which grades the rendered sound quality on a scale from 1 (bad) to 5 (excellent). G.711, also called
pulse code modulation (PCM), is the reference codec for the VoIP industry, and generates 160-byte-long packets for a MOS of 4.1.
Another common codec, G.729, generates 20-byte-long packets, for a MOS of 3.7 (packets are eight times smaller than G.711, but
MOS only 10% lower). To the packet size, you have to add the Layer 3 and Layer 4 overhead (40 bytes), and the 802.11 overhead (28
bytes without encryption). You can then calculate that a G.711 call consumes 91.2 Kbps per stream and that G.729 consumes 35.2
Kbps per stream. Because there must be one stream to the phone and one stream from the phone (these streams are sometimes called
call legs in the wireless world, although call legs are not voice streams in the VoIP world) to provide an impression of duplex (50
packets must be received and 50 packets must be sent each second), a G.711 call consumes close to 200 Kbps. The flow calculated
here is for the packets carrying the sound (using Real Time Protocol [RTP] built over UDP). Phones also exchange statistical and
signaling information using Real Time Control Protocol (RTCP). RTCP flow volume is low, less urgent than RTP, and is built on
TCP or UDP depending on the implementations.
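The per-stream figures above can be reproduced with a short calculation (50 packets per second, 40 bytes of Layer 3/Layer 4 overhead, and 28 bytes of 802.11 overhead, as in the text):
# Per-stream VoWLAN bandwidth, following the calculation in the text.
def stream_kbps(payload_bytes, pps=50, l3_l4_overhead=40, dot11_overhead=28):
    bits_per_packet = (payload_bytes + l3_l4_overhead + dot11_overhead) * 8
    return bits_per_packet * pps / 1000

print(stream_kbps(160))      # G.711: 91.2 kbps per stream
print(stream_kbps(20))       # G.729: 35.2 kbps per stream
print(2 * stream_kbps(160))  # both directions of a G.711 call: ~182.4 kbps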
Voice Quality Parameters
Voice packets need to reach the receiving end at regular pace to be played smoothly. The wireless function of the phone makes
sure that frames to and from the access point (AP) are sent and received as fast as possible. (At 54 Mbps, sending or receiving five
328-byte-long frames carrying 100 ms of G.711 audio [5 x 20 ms] takes 3.5 ms.) RTP makes sure to reorder packets, and the phone
uses the time stamp in the RTP header to play the audio smoothly.
Delay, which describes the time taken by each packet to travel from the emitting phone to the receiving phone, is still a concern. The
ITU labels as commercial quality those calls for which the end-to-end delay between phones is less than 150 ms (regardless of the
distance between both phones). Calls can still be conducted when delay exceeds 150 ms (without commercial quality), but as delay
increases, the voice quality degrades (silences, lost audio). For this reason, a constant concern for VoIP deployments is to keep delay
close to, or below, 150 ms. Another concern is jitter. Jitter measures on the receiver end the difference in delay between consecutive
packets. (If packet 1 takes 120 ms to reach the receiver, and packet 2 takes 140 ms, jitter is 20 ms.) As jitter increases, the voice
quality degrades (metallic sound, clips, or dropped audio). Therefore, another issue for VoIP deployments is to control the end-to-end link congestion, to ensure a low jitter (and not a high variation in the delay as the congestion conditions change). Lost packets (or
packets that are too delayed to be played) are also a concern. The receiving phone usually compensates by playing an intermediate
sound between the previous and the next packet, but audio quality degrades (metallic sound, silences).
For these reasons, VoWLAN networks are built to ensure that frames are not delayed in the cell (most VoWLAN devices and APs
drop frames that are delayed for more than 30 ms) and that losses are minimized. When losses increase beyond 1%, G.711 MOS falls
below 4.0. Calls are still possible, of course, but voice quality is degraded. Therefore, your aim is to design cells that do not generate
more than 1% loss. (This is the real loss due to dropped packets, or packets delayed for more than 30 ms, not the retry rate.)
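A minimal sketch of the jitter definition used above (per-packet delay difference measured at the receiver; the RTP specification, RFC 3550, actually uses a smoothed estimator) and of the 150-ms delay budget check, with hypothetical delay values:
# Simplified jitter: difference in one-way delay between consecutive packets.
def jitter_ms(delays_ms):
    return [abs(b - a) for a, b in zip(delays_ms, delays_ms[1:])]

delays = [120, 140, 135, 150, 128]   # hypothetical one-way delays in ms
print(jitter_ms(delays))             # [20, 5, 15, 22]
print(max(delays) <= 150)            # commercial-quality delay budget check: True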
VoWLAN Cell Design
RF Design
When designing VoWLANs, your main concern is to build fast and “efficient” cells, where frame collisions and multipath are reduced
(to limit losses and retries). A first element to this design is to limit the size of the cell. Frames from stations located far away from the
AP have a longer path to travel to the AP, and because they use lower data rates (data rate decreases as you move away from the AP),
frames stay longer “in the air.” Both elements (longer travel path, and longer time in the air due to lower data rate) increase the risk of
collisions and multipath. A common recommendation is to design cells so that the AP received signal strength indication (RSSI) at
the edge of the cell is still around –67 dBm. Data rate is a combination of RSSI and signal-to-noise ratio (SNR). (A higher RSSI and
SNR allow for higher data rates.) Therefore, the –67 dBm recommendation is valid in a normal office environment where the noise
floor is at about –92 dBm to –94 dBm, ensuring a SNR at the edge of the cell of at least 25 dB.
RSSI is a relative value (measuring, on a vendor-proprietary scale, the ability of the receiving device to transform the signal back into
meaningful bits). Devices from different vendors may report different RSSI values for the same signal. Table 2-1 lists the minimum
RSSI (and SNR) recommendations for a Cisco 7921 or 7925 phone. Verify the vendor recommendations if you use other phones.
Table 2-1 Recommended Minimum RSSI and SNR for Cisco Phones
Target Data Rate (Mbps)    Recommended Min RSSI (dBm)    Recommended Min SNR (dB)
54                         –56                           40
36                         –58                           33
24                         –62                           27
18                         –65                           26
11 or 12                   –67                           25
9                          –71                           24
5.5 or 6                   –74                           23
2                          –76                           21
1                          –79                           19
The minimum RSSI for the 12-Mbps data rate is –67 dBm, and the minimum SNR 25 dB (which corresponds to a –92 dBm noise floor). These values have to be combined. For example, if the noise floor is –88 dBm, a 25-dB SNR implies that the minimum RSSI needed to achieve 12 Mbps is –63 dBm (–88 dBm + 25 dB). Also notice that these values represent a 15-dB margin compared to recommendations for data devices. For example, recommendations for a Cisco CB21AG card for data traffic are –77 dBm RSSI and 12-dB SNR for a 24-Mbps data rate.
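The combination rule can be expressed in one line: the usable minimum RSSI is the higher (less negative) of the tabulated RSSI and the noise floor plus the required SNR. A small sketch:
# Minimum usable RSSI when both a floor RSSI and a minimum SNR must be met.
def min_rssi_dbm(noise_floor_dbm, min_snr_db, table_min_rssi_dbm):
    return max(table_min_rssi_dbm, noise_floor_dbm + min_snr_db)

print(min_rssi_dbm(-92, 25, -67))   # -67 dBm: the tabulated value already satisfies the SNR
print(min_rssi_dbm(-88, 25, -67))   # -63 dBm: a noisier floor pushes the requirement up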
Because –67 dBm is the recommended RSSI at the edge of the cell, Cisco also recommends disabling low data rates (those rates that
would be used when the RSSI is lower than –67 dBm). Otherwise, phones moving away from a cell may still stay connected when
the RSSI degrades below –67 dBm, instead of roaming to the next cell. Therefore, you should disable all rates below 12 Mbps for an
802.11g or 802.11a network. (Disable all rates below 11 Mbps for an 802.11b/g network.) Set 12 Mbps (11 Mbps for an 802.11b/g
network) to Mandatory, because the lowest mandatory rate is used by the AP to transmit broadcasts (beacons and so on), and set all
higher data rates to Supported.
Multicasts are sent at the highest mandatory rates in a Cisco network. Therefore, make sure that only the first allowed rate (12 Mbps
or 11 Mbps) is set to Mandatory in a voice network. If higher rates are also set to Mandatory, you may face one-way audio issues
where a client at the edge of the cell requests multicast traffic (such as music on hold) at a low rate (12 Mbps, for example) and
receives the multicast reply at a higher rate that the client cannot demodulate (24 Mbps, for example, if this is the highest mandatory
rate in that cell). In a VoWLAN, there is only one mandatory rate: the first allowed rate (12 Mbps or 11 Mbps). Also be careful when
disabling low data rates not to leave holes in your coverage. Otherwise, you might face situations where devices present in these
coverage holes do not detect AP beacons sent at 11 Mbps or 12 Mbps and so try to discover service set identifiers (SSID) by sending
probe requests at a low data rate (1 or 6 Mbps) and maximum power. These discovery frames may collide with your VoWLAN
traffic, and the AP would have no way of communicating with this faraway device (because low rates are disabled on the AP).
Also keep in mind that VoIP frames are small. Therefore, 802.11 overhead (SIFS, ACKs, AIFS, and so on) represents a large
proportion of the time taken by stations to transmit. A consequence is that the time gain obtained by sending at high data rate (for
example, 54 Mbps) compared to slower (but still high) rates, such as 18 Mbps or 24 Mbps, is only marginal. A 230-byte frame (and
its physical header) is sent in 0.034 ms at 54 Mbps, and in 0.062 ms at 24 Mbps. But with the overhead, the overall throughput is
about 11 Mbps for 54 Mbps (for 230-byte-long to 300-byte-long frames), and 10.6 Mbps for 24 Mbps. Therefore, it is common to see
VoWLAN deployments where higher data rates (36 Mbps and higher) are simply disabled. Disabling these rates is not necessary, but
common.
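The raw serialization times quoted above can be approximated as follows (PHY preamble, IFS, and ACK timing are ignored, which is exactly why the effective throughput is far lower than the data rate and why the figures differ slightly from the text):
# Serialization time for a small voice frame at two data rates.
def airtime_us(frame_bytes, rate_mbps):
    return frame_bytes * 8 / rate_mbps      # microseconds, since Mbps = bits per microsecond

print(round(airtime_us(230, 54), 1))   # ~34.1 us at 54 Mbps
print(round(airtime_us(230, 24), 1))   # ~76.7 us at 24 Mbps: still a tiny fraction of the exchange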
The physical size of your cell depends on the AP power level. To avoid audio issues related to client and AP power mismatches, first
determine the VoWLAN client maximum power, and set the AP power (during the survey) to half the client max power. For example,
if the client max power is 40 mW, set the AP to 20 mW during the survey. For the final deployment, in a controller-based solution, let
Radio Resource Management (RRM) adjust the AP power if needed. Keep in mind that the maximum power depends on the band.
(For example, in the FCC regulatory domain, max power is 17 dBm for UNII-1, 24 dBm for UNII-2 and UNII-2e, and 30 dBm for
UNII-3.)
To allow a client at the edge of the cell to easily roam to another cell (instead of sticking to the current AP), plan for a 20% overlap
between cells for a 2.4-GHz VoWLAN, and about 15% or more for a 5-GHz network. This overlap recommendation is based on
each cell area. To calculate the distance (D) between APs, compute the circle-intersection (lens) area 2R² cos⁻¹(D/2R) – (D/2)√(4R² – D²) and set it to the desired fraction of the cell area πR², where R is the radius of the cell, up to the –67 dBm RSSI point. Solving for an overlap of 20% gives a D value of 1.374 for a standard radius of 1 (1.486 for 15%
overlap, and 1.611 for 10% overlap, which is a common recommendation for data cells). This means that if the edge of the 2.4-GHz
cell is 70 feet (21 m) from the AP, the next AP should be 96 feet (29 m [70 x 1.374, or 21 x 1.374]) away from the first AP. Because
5-GHz cells are usually 20% to 25% smaller than 2.4-GHz cells, separating cells based on the 20% overlap recommendation for 2.4-GHz
networks usually provides the same inter-AP distance as when separating cells based on the 15% overlap recommendation for 5-GHz
networks.
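If you want to derive the spacing factor for other overlap percentages, the circle-intersection area can be solved numerically; the following sketch reproduces, to within rounding, the 1.374, 1.486, and 1.611 factors quoted above:
# Find the AP spacing D (in cell radii) that yields a given overlap fraction of one cell's area.
import math

def overlap_fraction(d, r=1.0):
    if d >= 2 * r:
        return 0.0
    lens = 2 * r * r * math.acos(d / (2 * r)) - (d / 2) * math.sqrt(4 * r * r - d * d)
    return lens / (math.pi * r * r)

def spacing_for_overlap(target, lo=0.0, hi=2.0):
    for _ in range(60):                       # bisection: overlap decreases as D grows
        mid = (lo + hi) / 2
        if overlap_fraction(mid) > target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

for pct in (20, 15, 10):
    print(pct, round(spacing_for_overlap(pct / 100), 3))   # factors close to 1.374, 1.486, 1.611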
Neighboring cells should, of course, be on different channels to avoid co-channel interference. Cell isolation should be, as much as
possible, 19 dB. This means that a user at the edge of a cell on any given channel should receive only weak signals from other cells
on the same channel. These signals should be 19 dB weaker than the signal of the current active cell (therefore, –86 dBm or less if the
client is at the –67-dBm cell edge). This recommendation refers to the radio frequency (RF) signal (or energy). Keep in mind that the
client RF footprint (Layer 1 energy) is larger than the useful cell area (802.11 Layer 2 usable area). Traffic is not allowed beyond the
–67-dBm edge because lower data rates are disabled, but the AP RF energy expands far beyond the –67-dBm edge. The same logic
applies for the client signal. This design recommendation may be difficult to achieve, especially for 2.4-GHz networks (with only
three nonoverlapping channels) in open space environments, but is a design target. Figure 2-1 illustrates this concept.
[Figure annotations: the radius of the cell should reach –67 dBm; the distance between APs should be 1.374 times the radius; the separation of same-channel cells should be 19 dB (at the edge of a cell, the signal from another AP on the same channel should be 19 dB weaker than the signal from the cell AP, that is, –86 dBm when standing at the –67 dBm edge); the client RF footprint and the AP RF footprint are larger than the 802.11 usable area and have to be thought of in 3D.]
Figure 2-1 VoWLAN Cell Design Concepts
Take into account that RF signals expand in all directions with omnidirectional antennas, which may bring even more constraints in
a multifloor deployment. Isolation should be easier to achieve in the 5-GHz band, where up to 23 nonoverlapping channels may be
available. Nevertheless, parts of the 5-GHz band (namely the UNII-2 and UNII-2e segments) are affected by 802.11h requirements to
change channels if airport radar blasts (or other “valid emitters”) are detected. Sudden channel changes are disruptive for VoWLAN calls (even if the AP informs the clients about the change), and you may have to disable UNII-2 and UNII-2e channels (channels 52 to
60 and channels 100 to 140, respectively) in areas affected by airport radar blasts, which might limit the number of possible channels
in the 5-GHz spectrum to eight. Nevertheless, eight is still more than three, and most of the 5-GHz band is free from industrial,
scientific, medical (ISM) application interferers. Therefore, 5 GHz is the band of choice for VoWLAN deployments. Your choice
may be limited by your customer hardware.
These recommendations will help you design efficient cells, allowing a target per cell of 20 simultaneous voice conversations for
802.11a and 14 simultaneous voice conversations for 802.11g (or 7 to 8 simultaneous voice conversations for 802.11b/g). You can
use CAC on the controller to limit the number of calls per cell. (CAC limits the percentage of the AP bandwidth that can be used by
voice traffic.) In a dense user area, you might have to deploy several APs to limit the number of users per AP.
Environmental Considerations
You need to consider several other elements when designing VoWLANs. If your network supports several services (data and voice,
for example), try to separate traffic types, with VoWLAN on 5 GHz and data on 2.4 GHz. Data is usually bursty with large frames
that can be re-sent, whereas voice needs consistent throughput with small frames that cannot be re-sent. For these reasons, voice and
data traffic should be separated. This can also be done logically by creating two different SSIDs having two different QoS mappings.
In this case, the isolation is artificial because voice and data traffic still share (and compete for) the same RF space.
If your network uses location (context-aware services [CAS]), you might have many RFID tags on the 2.4-GHz band reporting to
the network at regular intervals. Most tags send small frames and do not associate to any WLAN, which limits their impact on the
cell. However, tags do not use CCA before sending (they send without checking whether the medium is free) and use low data rates
(usually 1 Mbps or 2 Mbps) and high power (up to 100 mW). If your network contains many of these tags, the volume of frames may
end up impacting VoWLANs in the 2.4-GHz band.
Obstacles may also limit the size of your cell. Table 2-2 provides the average signal attenuation values for common indoor obstacles.
Table 2-2 Common Obstacles and Standard 802.11 Signal Attenuation
Obstacle                     Attenuation (dB)
Plasterboard wall            3
Metal door                   6
Glass (with metal frame)     6
Metal door in brick wall     12
Cinderblock wall             4
Head position                3 to 6
Office window                3
Notice how the user position plays an important part in the signal level. Wireless signals are usually vertically polarized, and phones
are designed to be used in a given range of positions. A phone tilted 90 degrees from its expected position may lose up to 6 dB of RSSI. If the user’s head is between the phone and the AP antenna, that position might also impact the call quality. An internal antenna may
be more affected by obstacles than an external antenna.
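To estimate the impact of such obstacles on a link budget, you can combine free-space path loss with the attenuation values of Table 2-2; the transmit power, distance, and obstacle list below are hypothetical, and antenna gains are ignored:
# Rough received-signal estimate through obstacles (free-space path loss plus Table 2-2 losses;
# real indoor propagation is messier than this sketch suggests).
import math

def fspl_db(distance_m, freq_mhz):
    return 20 * math.log10(distance_m) + 20 * math.log10(freq_mhz) - 27.55

def rssi_dbm(tx_power_dbm, distance_m, freq_mhz, obstacle_losses_db=()):
    return tx_power_dbm - fspl_db(distance_m, freq_mhz) - sum(obstacle_losses_db)

# Hypothetical: 13-dBm AP, 15 m away at 2437 MHz, through one plasterboard wall (3 dB)
# and one glass partition with metal frame (6 dB).
print(round(rssi_dbm(13, 15, 2437, (3, 6)), 1))   # about -59.7 dBm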
Roaming is another point to evaluate. A wireless client typically decides to roam when its connection to the current AP degrades.
The roaming criteria are vendor dependent, and can be based on max retry thresholds, RSSI and/or SNR thresholds, or proprietary
load balancing schemes. RSSI and SNR thresholds are the most common triggers. To discover another possible AP, the client can
passively scan (listen to other channels to detect other AP beacons) or actively scan, sending probe requests that can be unicast
(mentioning the SSID name) or broadcast (the SSID field in the request is empty, forcing all APs in range to answer regardless of the
SSID they serve). Because the interval between beacons is typically 102.4 ms, passive scanning may force the client to stay away from the main channel for long periods of time. As a comparison, and although the 802.11 standard does not impose a probe response
time, probe requests usually get answered within 10 ms.
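A rough comparison of the worst-case off-channel time, assuming one full beacon interval per passively scanned channel versus roughly one probe-response time per actively scanned channel:
# Worst-case off-channel time for scanning (simplified model).
def scan_time_ms(channels, passive=True, beacon_interval_ms=102.4, probe_response_ms=10):
    per_channel = beacon_interval_ms if passive else probe_response_ms
    return channels * per_channel

print(scan_time_ms(3, passive=True))    # ~307 ms off-channel to passively scan channels 1, 6, 11
print(scan_time_ms(3, passive=False))   # ~30 ms with active probing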
Scanning can be done in the background, while the client station is still associated to the previous AP (and before the client roaming
thresholds are exceeded) or done on-roam (the client starts scanning when the need to roam occurs).
You might find many variations in the way a VoWLAN client roams. For example, some clients only scan a subset of channels (for
example, 1, 6, and 11) to expedite the roaming process. Other clients roam to the first acceptable AP, instead of waiting to discover all APs on all channels, when roaming is needed while a call is in progress. Roaming efficiency varies greatly depending on the client behavior (based on the client firmware and how all the criteria above are implemented in the driver logic). You should test roaming and design roaming paths in your network (paths that users are likely to travel while communicating) and optimize the inter-AP
distance to optimize roaming along those paths. In a controller-based solution, try to maximize intracontroller roaming. When a user
roams between two APs connected to the same controller, the controller simply has to register the new AP for the client. Roaming
takes less than 10 ms. Roaming between two APs connected to two different controllers, when both controllers are in the same subnet,
takes about 20 ms. If controllers are across a router, roaming can take 30 ms or more depending on the number of hops between
controllers. For roaming efficiency, the Cisco recommendation is to use centralized controllers to maximize intra-controller roaming and
Layer 2 intercontroller roaming.
Roaming may be challenging in some scenarios (for example, when the roam occurs in an elevator traveling between floors). You can
rely on the APs on each floor, but this design generates frequent roaming events. Added to the fact that elevator cabins are good RF
insulators, this model often means VoWLAN traffic loss and disconnections. In some regions, you can install an AP at the top of the
elevator shaft (with a directional antenna pointing toward the cabin). If the shaft is too high to be covered with a single antenna, you
might be able to use a leaky cable acting as an antenna and running in a corner of the shaft (with this solution, the user is always close
to the “antenna”). You might also be able to add an Ethernet cable to the emergency phone cable already connecting the cabin to the
infrastructure and position the AP at low power inside the cabin. Check local regulations before adopting any model. (For example,
U.S. regulations forbid any element not related to the elevator functions in the shaft.) Figure 2-2 illustrates some of the possible
solutions.
[Figure annotations, possible elevator coverage options: (1) a directional antenna in the shaft; (2) relying on the coverage of each floor; (3) a leaky cable running in the shaft; (4) an AP inside the cabin.]
Figure 2-2 Elevator Shaft Coverage Possible Solutions
Keep in mind that the wireless client needs to reauthenticate when jumping to the new AP. With a pre-shared key (PSK) WLAN,
reauthentication is fast. With 802.1X/EAP, reauthentication may take time if the client has to renegotiate its credentials with a distant
AAA server. If the new AP SSID to VLAN mapping is different from the mapping on the old AP, the client IP address may have to
be renewed, which will drop the VoIP call.
Roaming Enhancement Mechanisms
To increase the efficiency of the roaming process, the 802.11i amendment introduced two roaming optimization mechanisms:
■
Proactive key caching (PKC): Allows a client and AP to cache a valid pairwise master key (PMK) for 1 hour. If the client
disconnects from the AP for less than 1 hour, the client can continue its communication with the AP upon reconnecting
without having to reauthenticate.
■
Pre-authentication: Allows a client to authenticate to several APs. The client first authenticates and associates to one AP,
and then authenticates (but does not associate) through the first AP to other APs. Just like PKC, the keys generated through
pre-authentication with the other APs are valid for 1 hour. A limitation of this model is that each authentication implies a new
dialog with the authentication, authorization, and accounting (AAA) server.
The 802.11-2007 standard also introduced the idea of opportunistic key caching (OKC), by which a client PMK can be transmitted to
several other APs. With OKC, the client does not need to reauthenticate, but simply provides a key ID and index to continue to use its PMK with the new AP. A limitation of OKC is that the standard does not define the mechanism by which APs would exchange this PMK (how the other APs are chosen by the first AP, and how the PMK is transmitted).
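Conceptually, PMK caching behaves like a small table keyed by client and AP with a one-hour lifetime; the following sketch is a model of the idea, not of any controller API:
# Conceptual sketch of PMK caching: an entry stays valid for one hour, so a returning
# client can skip the full 802.1X exchange.
import time

class PMKCache:
    def __init__(self, lifetime_s=3600):
        self.lifetime_s = lifetime_s
        self.entries = {}                    # (client_mac, ap_bssid) -> (pmk, stored_at)

    def store(self, client_mac, ap_bssid, pmk):
        self.entries[(client_mac, ap_bssid)] = (pmk, time.time())

    def lookup(self, client_mac, ap_bssid):
        entry = self.entries.get((client_mac, ap_bssid))
        if entry and time.time() - entry[1] < self.lifetime_s:
            return entry[0]                  # reuse cached PMK, no full reauthentication
        return None                          # expired or unknown: full 802.1X/EAP needed

cache = PMKCache()
cache.store("aa:bb:cc:dd:ee:ff", "00:11:22:33:44:55", b"pmk-bytes")
print(cache.lookup("aa:bb:cc:dd:ee:ff", "00:11:22:33:44:55") is not None)   # True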
The Fast Basic Service Set Transition (Fast BSS Transition) 802.11r amendment, published in 2008, also describes a mechanism for secure and fast roaming with QoS awareness. 802.11r is often seen as an 802.11i equivalent (for security), with aspects taken
from 802.11e (for QoS). With 802.11r, a Mobility Domain Information Element (MDIE) is created to group APs belonging to the
same roaming domain. Any AP (or controller) can be used as a central point. When authentication occurs, the PMK is sent to the
central point and distributed to any AP in the domain to which the client roams. Roaming is fast, and mechanisms are in place to
preserve the client QoS level while roaming (with TSPEC exchanges between APs about the client traffic needs). Cisco controller
code 7.0 and later incorporates support for 802.11r. (You cannot configure it; 802.11r is automatically activated for 802.11r-enabled
clients.)
The Cisco Compatible Extensions (CCX) program also incorporates many features to enhance roaming. One of these features is
Cisco Centralized Key Management (CCKM). CCKM provides fast, secure roaming, by establishing a key hierarchy upon initial
WLAN client authentication (with a main key and derived secondary keys) and uses that hierarchy to quickly establish a new key
when the client roams, passing a derived key to the new AP as needed to support the roaming client. CCKM requires support from the
client. (The client has to be CCX compliant to accept this key structure and expect this automated roaming behavior from the wireless
infrastructure.) CCXv2 allows for CCKM support with LEAP. CCXv3 added support for CCKM with EAP-FAST, and CCXv4 added
support for CCKM with PEAP and EAP-TLS. You should know which CCX version allows for which CCKM roaming mechanism.
CCKM is sometimes seen as a form of OKC. It is similar to that concept when used with H-REAP APs. A client PMK is distributed
to all H-REAPs in the same H-REAP group connected to the same controller (as a client joins one H-REAP). The APs need to be
connected to the controller to receive the key, so this OKC mechanism works for H-REAPs in connected mode and WLANs with
central authentication, and H-REAPs in disconnected mode with central authentication, for already connected users. It does not
work for WLANs with local authentication, or with new users on H-REAPs disconnected from the controller. For Control and
Provisioning of Wireless Access Points (CAPWAP) Protocol APs in local mode, CCKM is more efficient than OKC (or than the
OKC variant that CCKM for H-REAPs is) because the key is transmitted only when needed, instead of being initially sent to all APs
that might require it.
CCXv2 also introduces AP assisted roaming. (APs ask CCX clients to report their previous AP BSSID, channel, and time since
disassociation, and distribute this information to the other clients associating to the cell to later help them discover new APs when
roaming.) CCXv3 creates the Enhanced Neighbor list, adding the previous AP SNR and RSSI value to the previous list of elements.
CCXv4 introduces (only for Intel clients because this is a Cisco and Intel joint program) the Enhanced Neighbor List Request End-to-End (code name E2E), by which the client can request the previous elements at will and therefore always get the latest information
before roaming.
CCXv3 also introduces Direct Roam requests, where an AP can force a client to roam to another AP (avoiding situations where sticky
clients stay connected to an AP instead of roaming to a better AP in range).
CCX also helps clients conserve energy, which is key for battery-based VoWLAN devices. With CCX, the AP defines an inner cell
area. When a client is in this zone (based on the client RSSI and SNR), the AP instructs the client to stop scanning. When the client
reaches the edge of the cell (called the outer area), the AP instructs the client to start scanning, thus reserving scanning to times when
it is needed. Figure 2-3 illustrates this mechanism. The AP can also instruct the client to increase (if the client signal at the AP level is
weak) or decrease (if the client is close to the AP, thus saving energy) its power level.
[Figure annotations, CCX messages exchanged between the APs and the client to help the roaming process:
Enhanced Neighbor List (entering the cell): please report your previous AP and other detected AP parameters (RSSI, SNR, channel, time of last beacon).
CCKM (entering the cell): a derived PMK was transmitted by the wireless infrastructure; key index XYZ is used.
Scanning Optimization (inner region): no need to scan (save battery) and no need to roam.
AP Specified Max Power: increase your power level if your signal gets weaker, or decrease it (and conserve battery) if your signal is strong.
Scanning Optimization (outer region): start scanning in case you need to roam soon.
Scanning Optimization (transition region): scan faster if needed to discover other APs; you will soon need to roam.
Enhanced Neighbor List: this is the list of neighboring APs reported by other clients entering the cell (RSSI, SNR, channel, time of last beacon); use it to optimize your roaming.
Directed Roam Request: you are detected by AP3; try to join this AP on channel X if no better candidate APs are detected.]
Figure 2-3 CCX Mechanisms for Enhanced Roaming
VoWLAN Components
At the access layer, you need a wireless IP phone. This phone can be a Cisco 7921G phone, which is an 802.11a/b/g phone. This
phone supports G.711 and G.729 codecs, SCCP, SRST, and RTP/RTCP. The phone supports CCXv4, CDP and XML. On the
wireless side, the 7921G supports shared key (WEP), PSK (WPA/WPA2), LEAP, EAP-FAST, PEAP and TLS, with TKIP, AES/
CCMP, or WEP. The phone also supports WMM, TSPEC, PS-Poll, and U-APSD.
The Cisco 7925G provides similar features, but in a more rugged body (weather and shock resistant). The 7925G phone also has
an internal antenna; the 7921 has a partially external antenna. The 7925G offers a standard USB connector (instead of a proprietary
connector on the phone side) and Bluetooth.
Your customers may also use SpectraLink phones. Some of these phones are 802.11a/b/g and WMM compliant, but many models
are still 802.11b or 802.11bg only and do not support WMM. Some of these phones can support SCCP, but most of them use the
SpectraLink Voice Priority (SVP) and the SpectraLink Radio Protocol (SRP) to communicate with SpectraLink servers.
You may also face Vocera badges, especially in healthcare environments. These badges use a voice-recognition configuration to
convert names into phone extensions, allowing users to call a phone (or a group of phones) by pressing a button and saying a name.
These badges use the wireless infrastructure to connect to a Vocera server where the name to number mapping is configured. (One
of the deployment tasks is usually to record audio sounds (pronounced names) on the server and map these sounds to a numbered
extension.) The Vocera server can then communicate with other voice platforms (such as the CUCM) to connect to the requested
extension. Notice that the badges do not communicate directly with the CUCM, but just with the Vocera server. These badges do not
usually support WMM and often rely on multicast communication with the infrastructure or one another.
Some companies use Nokia dual-band phones. These phones are usually GSM and Wi-Fi, supporting SIP on the phone side, and
802.11b/g and WMM (but not TSPEC) on the wireless side. An SCCP client is available for these phones. With Cisco Mobility
Advantage, roaming between GSM and Wi-Fi is also possible.
All these phones connect to the wired infrastructure through CAPWAP APs and controllers, or autonomous APs, just like any other
wireless client (with or without the support of an AAA server such as the Cisco Secure Access Control Server [ACS]).
Notice that the AP mode may have an impact on VoWLAN support. As stated earlier, CCKM is not supported on H-REAPs in
disconnected mode or WLAN with local authentication. Voice is supported for APs in mesh mode only for enterprise mesh (indoor
mesh, with indoor APs). Voice is performed on best effort basis with outdoor mesh APs. When deploying voice over indoor mesh,
limit the network to three to four mesh access points (MAP) per root access point (RAP) and two hops max. The backhaul (inter-AP
mesh radio link) should offer –62-dBm RSSI or better and a 24-Mbps or better data rate.
Once associated to the wireless network, SCCP-capable phones can then register to Cisco Unified Communications Manager
(CUCM), which is the core call-processing component of the Cisco Unified Communications solution. Depending on the version,
CUCM is a program for Windows or Linux, or a virtual appliance. CUCM provides messaging, multimedia conferencing (voice and
video), and integrates with other systems to support other functions like email or web. CUCM supports up to 60,000 users per cluster.
A smaller version, Cisco Unified Communications Manager Express (CUCME), runs on a router with IOS, supports up to 240 users,
and is built for enterprise branch offices or small businesses.
VoWLAN Verification
After designing and deploying your VoWLAN based on your customer requirements and the recommendations listed in this chapter,
verify that the deployment offers the performance that your customer needs.
A first step in this verification process is to use Cisco Wireless Control System (WCS) VoWLAN Readiness tool (which you can also
use for a predeployment simulation). Start by adding a map of the floor, and position the deployed APs on the floor map. Then, from
the upper-right drop-down list, launch the Inspect VoWLAN Readiness tool. The tool displays (for 802.11b/g or 802.11a) in green the areas on the floor where an AP RSSI is higher than –67 dBm, in yellow the areas between –67 dBm and –75 dBm, and in red the areas where the AP signal is below –75 dBm. These values are preconfigured to match Cisco phone requirements and reflect the current AP power levels. You can tune these values to evaluate the coverage when the APs are at full power, and change the green-to-yellow threshold (possible range –40 dBm to –90 dBm) and the yellow-to-red threshold (possible range –45 dBm to –95 dBm). Although an area depicted
in green does not guarantee a good VoWLAN (because the tool only measures the expected AP RSSI, and does not take into account
all possible variables), zones in yellow and red are a sign that you may face voice quality issues in these areas. Figure 2-4 shows the
VoWLAN Readiness tool.
Figure 2-4 WCS VoWLAN Readiness
Perform a post-deployment site survey to verify the AP position, roaming path coverage, and RSSI and SNR values at the edge of
your cells. Perform the survey in “normal conditions” (not in an empty building, but when the expected users and furniture are
present). You can use a laptop with a site survey tool, but a simple phone may be enough to verify the coverage. (The phone is lighter,
and its battery lasts a lot longer for a simple signal-level verification.) The 7921G or 7925G phone can be put in survey mode (from
Settings > Status > Site Survey) and displays the detected APs for a given SSID, along with the channel, RSSI, BSSID, and channel
utilization. Figure 2-5 shows a Cisco 7921G phone screen when the phone is in survey mode.
Figure 2-5 7921G in Survey Mode
Evaluating the VoWLAN may require more complex tools, because this evaluation is a mix between voice and wireless
considerations. AirMagnet VoFi is a program built for this type of task that can identify and track VoWLAN voice calls, collect
statistics about the call, and report performance, including the probable MOS value and other voice metrics (codec, call duration, and so on), roaming performance, and wireless issues (losses, retries, SNR or RSSI, and so forth). Figure 2-6 shows the AirMagnet VoFi
interface.
Figure 2-6 AirMagnet VoFi
Chapter 3
VoWLAN Implementation
Voice over WLAN (VoWLAN) implementation is a combination of items that must be configured with the same logic in mind. If one
element is misconfigured, your VoWLAN might not offer the expected performance. The items to cover go beyond voice parameters
alone, and also include security and radio frequency (RF) optimization.
Wireless Infrastructure Configuration
Controller Configuration
Voice Parameters
Cisco recommends separating voice from data. Create a WLAN specifically for voice, and if possible map that WLAN to a different dynamic interface and band from the ones used for data (5 GHz being preferred for voice). On the band chosen for voice, disable low data rates from Wireless > 802.11a | 802.11b/g > Network. For 802.11a VoWLANs, disable all data rates below 12 Mbps. Set 12 Mbps (and only 12 Mbps) to Mandatory. Set all higher data rates to Supported. Optionally, you can disable high data rates (36 Mbps and
higher). Proceed the same way for 802.11g VoWLANs. For 802.11b/g VoWLANs, disable all data rates below 11 Mbps. Set 11 Mbps
(and only 11 Mbps) to Mandatory. Set the data rates above to Supported. Optionally, you can disable high data rates (36 Mbps and
higher). Figure 3-1 summarizes these principles. Of course, your customer requirements may limit your possibilities, and you might
have to allow all rates for hybrid data/voice networks.
Figure 3-1 Voice and Data WLAN Separation Principles
Security Parameters
The voice WLAN security should be as high as the data WLANs, using 802.1X/EAP and AES/CCMP. Most voice over wireless
devices support strong authentication mechanisms. Encryption overhead should not impact voice quality in a well-designed network.
Be aware, though, that a phone CPU is slower than a PC CPU and might take longer to process the authentication phase. If you use
EAP-FAST with Protected Access Credential (PAC) provisioning with a Cisco 7921G or a Cisco 7925G phone, the phone takes 12
seconds to 15 seconds to process the new PAC (a PC takes less than a second). For this reason, the default EAP-Request Timeout
timer was extended from 1 second to 30 seconds on controller code 5.1.151 (and later). You should not need to change this value
when running a recent controller code, but this timer has been a source of many TAC cases in the past, so be mindful about timer
issues if EAP-FAST authentication works with a laptop but fails with a phone. You can change the timer value, if needed, from
Security > Local EAP > General, Request Timeout (Seconds), as shown in Figure 3-2, or with the controller CLI command config advanced eap request-timeout (followed by the value in seconds).
Figure 3-2 EAP Request Timeout Configuration
Even if this timer is configured from the Local EAP page, be aware that it applies in fact to EAP authentication using an external RADIUS server (not the controller local RADIUS server). You can verify the EAP request timeout value with the show advanced eap command. Example 3-1 shows the output of this command after the EAP-Request timeout has been changed to 31 seconds.
Example 3-1
Verification of the EAP Request Timeout Value with the show advanced eap Command
(Cisco Controller) >config advanced eap request-timeout 31
(Cisco Controller) >show advanced eap
EAP-Identity-Request Timeout (seconds)........... 30
EAP-Identity-Request Max Retries................. 2
EAP Key-Index for Dynamic WEP.................... 0
EAP Max-Login Ignore Identity Response........... enable
EAP-Request Timeout (seconds).................... 31
EAP-Request Max Retries.......................... 2
EAPOL-Key Timeout (milliseconds)................. 1000
EAPOL-Key Max Retries............................ 2
EAP-Broadcast Key Interval....................... 3600
Note
7920 phones operate only on the 2.4-GHz band. (They are 802.11b phones.) You do not need to enable 7920 Client CAC or 7920 AP CAC if your voice WLAN operates only in the 5-GHz band. 7920 phones are not WMM capable.
QoS Parameters
From the WLAN QoS configuration tab, set the QoS level to Platinum to prioritize voice frames over the cell and between the AP and the controller. For this last function to be enabled, you also need to navigate to Wireless > QoS > Profiles, edit the Platinum profile, and enable the Wired QoS 802.1p mapping. Keep the mapping at the default value (6); the AP and the controller automatically convert this IEEE value into the Cisco Unified Communications recommended value for voice (5). As you enable 802.1p mapping for Platinum, also enable the mapping for the other profiles (Gold, Silver, and Bronze), still keeping the default mappings.
Back on the WLAN QoS configuration tab, set WMM support to Enable if your VoWLAN phones support Wireless Multimedia
(WMM) (keep in mind that some phones, such as Vocera and some SpectraLink phones, do not support WMM) or Required if you
want to make sure that non-WMM devices will not be allowed to associate to the VoWLAN cell, as shown in Figure 3-3. WMM
support set to Enable allows both WMM and non-WMM phones to associate. WMM support set to Disable ignores WMM. If your
VoWLAN includes 7920 phones and is mapped to the 2.4-GHz band, enable 7920 AP CAC. If your WLAN does not need WMM
(WMM support set to Disable) and has 7920 phones with firmware older than 1.0.5, enable 7920 Client CAC instead.
Figure 3-3 Voice WLAN Configuration
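These WLAN-level settings can also be applied from the controller CLI. The following is a minimal sketch, assuming the voice WLAN has ID 2 (a placeholder); the WLAN must be disabled before its parameters are changed, and require is shown here (use allow instead if non-WMM phones must be able to associate):

(Cisco Controller) >config wlan disable 2
(Cisco Controller) >config wlan qos 2 platinum
(Cisco Controller) >config wlan wmm require 2
(Cisco Controller) >config wlan enable 2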
On the WLAN Advanced tab, you can tweak some parameters to optimize your VoWLAN configuration. For example, the default session timeout is 1800 seconds (30 minutes). If you expect the average phone call duration to exceed this value, extend the timer accordingly to prevent calls from being interrupted by a reauthentication phase. (The 7921 and 7925 phones reassociate for each new call, thus restarting the timer from 0 each time.) You might also want to enable AAA override if you want to push a specific profile from the authentication, authorization, and accounting (AAA) server for each VoWLAN client.
Peer-to-peer blocking mode allows you to block traffic coming from a wireless client if the destination is another wireless client. In a
VoWLAN, leave this parameter at Disabled, to allow calls from one wireless phone to another.
The Delivery Traffic Indication Map (DTIM) period is the number of beacon intervals between beacons containing buffered broadcast (or multicast) information. Dozing phones are expected to wake up for each beacon containing a DTIM. Valid DTIM values range from 1 to 255, and the default value is 1. Increasing the DTIM period increases battery life (because the wireless phone needs to wake up less often), but also increases the risk of network congestion as more packets may be buffered. Cisco recommends using a DTIM period of 2 with 7921G and 7925G IP phones. Check other phone vendors' recommendations. (For example, Vocera badges rely on multicast and prefer a DTIM period of 1 to limit the amount of buffered traffic.) Change the DTIM value as needed on the band where the VoWLAN devices are located.
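As a sketch, again assuming a voice WLAN with ID 2, the session timeout and the 5-GHz DTIM period could be set from the CLI as follows (the WLAN must be disabled first, and the 36000-second timeout is an arbitrary example value):

(Cisco Controller) >config wlan disable 2
(Cisco Controller) >config wlan session-timeout 2 36000
(Cisco Controller) >config wlan dtim 802.11a 2 2
(Cisco Controller) >config wlan enable 2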
Client Band Select allows the wireless infrastructure to push dual-band (802.11a/b/g) clients to the 5-GHz band. When a wireless client sends probe requests on the 2.4-GHz and the 5-GHz bands, the AP answers on the 5-GHz band first, thus pushing clients to associate to the 5-GHz band rather than the 2.4-GHz band. This is a good feature for mixed deployments, but it delays voice device association. Cisco recommends disabling Band Select for voice WLANs, and instead configuring dual-band VoWLAN phones to prefer the 5-GHz band (if the WLAN is available on both bands).
Similarly, client load balancing allows the controller to load balance wireless clients across access points (APs) during the association phase. If the AP is overloaded (based on configurable client number or utilization thresholds), the controller returns an association response with Deny – reason code 17 (too many stations), which should push the client to try to associate to another AP. This feature is not suited to VoWLANs, first because it delays the VoWLAN client association, but also because wireless call admission control (CAC) should take care of limiting the number of calls on each AP. Therefore, Cisco recommends disabling client load balancing for voice WLANs, as shown in Figure 3-4.
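Both features can also be turned off per WLAN from the CLI. This sketch again assumes WLAN ID 2; verify the exact syntax on your controller code:

(Cisco Controller) >config wlan disable 2
(Cisco Controller) >config wlan band-select allow disable 2
(Cisco Controller) >config wlan load-balance allow disable 2
(Cisco Controller) >config wlan enable 2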
An AP spends most of its time on its serving channel, but also scans other channels at regular intervals to detect rogues and collect RF information. By default, an AP does not leave the current channel if it has traffic in its buffer for higher-priority queues (user priority [UP] 4, 5, or 6). Leave this parameter (Off Channel Scanning Defer) enabled.
Figure 3-4 VoWLAN Advanced Tab Settings
Once the WLAN is configured, navigate to Wireless > 802.11a | 802.11b/g > Media > Voice, where you can configure wireless
CAC. Two options are available, as shown in Figure 3-5:
■ ACM (Admission Control Mandatory), also called static CAC, is a CAC mechanism based solely on the AP clients' bandwidth utilization.
■ Load-based CAC is a CAC mechanism that takes into account the AP clients' activity and any external 802.11 activity that affects the AP bandwidth (for example, neighboring cells' traffic on the same channel). Load-based CAC is the recommended choice.
On the same page, determine the percentage of the AP radio bandwidth that can be used for voice traffic (Max RF Bandwidth). The
default is 75%. When voice traffic reaches this threshold, new calls are denied: You see a Network Busy signal on the phone screen.
You do not hear a busy tone. To hear a busy tone, you must be connected to the Cisco Unified Communications Manager (CUCM).
(If there is not enough bandwidth on the AP to allow the phone traffic, connection to the CUCM is not possible.) You may hear a busy
tone if wireless CAC allows the call but CUCM CAC refuses the call. Notice that wireless CAC affects outgoing calls only. Incoming
calls are forwarded even if the CAC threshold is exceeded (because the controller has no way to inform the call source that the cell is
busy). Also remember that wireless CAC uses Traffic Specification (TSPEC). Phones that do not use TSPEC (such as some Nokia
dual-band phones or the Cisco 7920 phone) ignore the wireless CAC parameters and place a call even if the AP informs them through
wireless CAC that there is no space for the call.
You can also reserve some bandwidth for roaming users (Reserved Roaming Bandwidth) who enter the cell while on a call started
in another cell. This prevents these users from dropping their call just because their phone roamed to a busy cell. Notice that the
Reserved Roaming Bandwidth is taken from the Max RF Bandwidth. For example, if Max RF Bandwidth and Reserved Roaming
Bandwidth are set to their defaults (respectively 75%, which is quite a high value, and 6%), 69% of the AP bandwidth is available for
new calls generated from within the cell, and 6% for roaming users entering the cell while on call.
Figure 3-5 Media > Voice Configuration Parameters
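As a sketch, voice CAC for the 5-GHz band could also be enabled from the CLI (the 802.11a network must be disabled while CAC settings are changed); the bandwidth values shown are the defaults discussed above:

(Cisco Controller) >config 802.11a disable network
(Cisco Controller) >config 802.11a cac voice acm enable
(Cisco Controller) >config 802.11a cac voice load-based enable
(Cisco Controller) >config 802.11a cac voice max-bandwidth 75
(Cisco Controller) >config 802.11a cac voice roam-bandwidth 6
(Cisco Controller) >config 802.11a enable network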
From the same page, you can enable Expedited Bandwidth, which allows CCXv5 VoWLAN clients to partially bypass the AP bandwidth threshold. The CCXv5 client receives a list of emergency numbers from the CUCM. When such a number is dialed, a CCXv5 Emergency flag is sent to the AP within the 802.11 frame, and the call is forwarded even if the AP Max RF Bandwidth threshold is exceeded. Emergency calls are placed if the AP radio utilization is less than 90% when using ACM, and less than 85% when using load-based CAC. If the AP radio utilization is higher than these thresholds, emergency calls are rejected as well.
Some Session Initiation Protocol (SIP) clients do not support TSPEC and CAC. For these clients, and when using static CAC, you can enable SIP admission control. Do not enable this with load-based CAC. (Because load-based CAC takes the 802.11e information into account when calculating bandwidth consumption, non-WMM SIP calls would be ignored by load-based CAC.) Start by enabling Media Session Snooping from the Voice section of the Advanced configuration tab of the WLAN. The AP then starts tracking SIP calls sent to the standard UDP port 5060. Notice that the AP tracks traffic sent to that port regardless of whether the client supports WMM. The AP cannot track traffic sent to another port (even if it is a SIP call). Once the WLAN is configured, you can configure CAC for SIP calls from the Wireless > 802.11a | 802.11b/g > Media > Voice page, by selecting the codec used by your SIP calls (to determine each call's bandwidth consumption), the sample size of each voice packet (20 ms by default), and the number of calls you want to allow per AP radio (the default is 0, meaning that no limit is set). Notice that this feature limits only non-TSPEC SIP calls. (You limit TSPEC calls with the standard CAC parameters at the top of the page.) Also, the limitation you configure applies to all calls (locally initiated calls and calls roaming into the cell): When the limit is reached, new calls fail.
From Wireless > 802.11a | 802.11b/g > EDCA Parameters, choose the EDCA profile best adapted to your traffic. The EDCA profile is set globally for all APs' 5-GHz or 2.4-GHz radios on a given controller: Voice Optimized, Voice & Video Optimized, SpectraLink Voice Priority, or standard WMM, as shown in Figure 3-6.
Figure 3-6 EDCA Parameters
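As a hedged sketch, the EDCA profile can also be selected per band from the CLI (the radio network must be disabled first); for example, for the 5-GHz band:

(Cisco Controller) >config 802.11a disable network
(Cisco Controller) >config advanced 802.11a edca-parameter optimized-voice
(Cisco Controller) >config 802.11a enable network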
You may also want to enable the Low Latency MAC option in some cases. With this parameter, the AP uses the Real-time Transport Protocol (RTP) time stamp to decide whether a VoIP frame (802.11 UP 6) that was not received by a wireless client the first time should be re-sent. If the time stamp shows that the frame has not been delayed by more than 30 ms, the AP retries (a second, then maybe a third time if needed) to resend the frame before deleting it and sending the next frame instead. Keep in mind that, for voice, a missing packet is better than an empty jitter buffer at the receiving end. This feature is great for VoWLANs, but has some limitations:
■ It works only if WMM is enabled (with any EDCA profile applied).
■ On code 7.0.116 (on which the IUWVN exam is based), it does not work for 802.11n APs and clients. (You can still enable it, but it will apply only to non-802.11n APs and clients, and only if 12 Mbps, 18 Mbps, and 24 Mbps are the only allowed rates, all other rates being disabled.)
■ You should not enable this parameter for Cisco 7921G and Cisco 7925G phones. These phones have a dynamic jitter buffer that increases in size when cell conditions degrade. Therefore, frames will not arrive too late, even if the AP tries to resend them several times.
Roaming Parameters
To allow for seamless roaming, make sure that all controllers on a given roaming path recognize each other. Add each controller's management interface IP address, built-in MAC address, and mobility group name to the mobility list of all other controllers on that roaming path, from Controller > Mobility Management > Mobility Groups. Keep in mind that intracontroller roaming is more efficient than intercontroller roaming. Also keep in mind that Cisco Centralized Key Management (CCKM) works only within a mobility group. If roaming occurs between controllers in different mobility groups, CCKM is not functional. If you set the key management on the WLAN Security tab to CCKM and use intercontroller, intermobility group roaming, authentication must reoccur because the CCKM key index cannot be forwarded between mobility groups. (Authentication does not need to reoccur while roaming between controllers that are members of the same mobility group.) If the key management is set to 802.1X, the credentials are passed between controllers without the need for the client to reauthenticate, but the client driver may force a reauthentication. This behavior is common, and it is exactly what CCKM support on the client side aims at avoiding. If the key is a pre-shared key (PSK), reauthentication occurs, but this process is usually fast enough to be seamless for the client. Cisco recommends using intramobility group roaming with CCKM, which is the fastest roaming method.
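As a minimal sketch, a neighbor controller can be added to the mobility list and CCKM enabled on the voice WLAN from the CLI; the MAC address, IP address, mobility group name, and WLAN ID below are placeholders:

(Cisco Controller) >config mobility group member add 00:1a:2b:3c:4d:5e 10.10.40.2 VOICE-MG
(Cisco Controller) >show mobility summary
(Cisco Controller) >config wlan disable 2
(Cisco Controller) >config wlan security wpa akm cckm enable 2
(Cisco Controller) >config wlan enable 2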
If roaming occurs between H-REAP APs, make sure that they are in the same H-REAP group and were connected to the same controller when the client first associated (so that they could receive the client's pairwise master key [PMK]). The WLAN should also use central authentication (that is, not local authentication on the H-REAP AP or a RADIUS server contacted directly by the H-REAP AP). Otherwise, roaming will mean reauthentication for the client, which might disrupt the call.
You can partially control the roaming client behavior from the Wireless > 802.11a | 802.11b/g > Client Roaming page, shown in Figure 3-7. The AP communicates these values to Cisco Compatible Extensions (CCX) clients. By default, the AP tells the client to stop scanning if the current AP signal is better than –72 dBm received signal strength indication (RSSI) (thus saving client battery cycles that would be consumed by the scanning process). If the AP signal falls below this threshold, the client should start scanning for a better AP and join any AP whose signal is by default 3 dB better (Hysteresis) than the current AP signal. The client must roam to another AP if the current AP signal level falls below –85 dBm by default (Minimum RSSI). The client is by default given 5 seconds to complete the roam (Transition Time). After this interval, the AP sends a deauthentication message to force the client off the cell. Non-CCX clients do not get the initial roaming parameters, but they do receive the deauthentication message. The default values are a bit low for a VoWLAN because VoWLAN cells are usually built to be small. These values are applied to any client (voice or data) associated with an AP on the relevant band, so you should test the values that are best adapted to each deployment scenario.
Figure 3-7 Client Roaming Parameters
RRM should be left enabled (it is enabled by default) on the controllers. RRM manages three main functions: AP transmit power
control (TPC), dynamic channel assignment (DCA), and coverage hole detection and correction.
Note
Controllers in the same RF group do not need to be in the same mobility group (that is, have the same mobility group name), although they must be in each other's mobility list.
Controllers need to communicate to coordinate their efforts for DCA and TPC. Without intercontroller communication, each controller runs the algorithms independently, which may result in suboptimal or conflicting parameter changes. To ensure intercontroller communication for RRM, controllers should know each other. This is done by configuring the controllers' mobility lists, as explained earlier, and making sure that the controllers belong to the same RF group (configured from Controller > General > RF Group Name). APs periodically (every 60 seconds by default) send neighbor messages over the air, containing a hash of the RF group name (along with the AP basic service set identifier [BSSID] and a time stamp). APs using the same RF group name are able to validate messages from each other. When APs on different controllers hear validated neighbor messages at a signal strength of –80 dBm or stronger, the controllers dynamically form a sub-RF group. This results in virtual groups of APs hearing each other. When RF Grouping is enabled (which is the default state, and configured from Wireless > 802.11a | 802.11b/g > RRM > RF Grouping), a controller is elected to act as the group leader. You can let this election process be automatic or manually designate the leader and the group members.
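As an assumption-laden sketch, the RF group name can also be set from the CLI (it must match on all controllers that should form the RF group); the name below is a placeholder:

(Cisco Controller) >config network rf-network-name CAMPUS-RF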
TPC
TPC aims at optimizing the AP power levels to limit neighboring cell overlap. TPC needs at least four APs to be triggered (one AP hearing at least three others), and it reduces or increases the AP power so that each AP is heard by its neighbors at about –70 dBm. Notice that TPC primarily aims at reducing the AP power, but it can also increase the power level (for example, if one AP gets disconnected and power needs to be increased to maintain the –70 dBm value between APs hearing each other). From the Wireless > 802.11a | 802.11b/g > RRM > TPC page shown in Figure 3-8, you can configure the target RSSI (–70 dBm is a good value for most VoWLAN deployments) and the minimum and maximum power levels that the APs should not exceed when their power is changed.
Figure 3-8 TPC Parameters
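As a sketch, the TPC target RSSI can also be adjusted per band from the CLI; –70 dBm is used here as discussed above (verify the exact command availability and range on your controller code):

(Cisco Controller) >config advanced 802.11a tx-power-control-thresh -70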
Coverage Hole Detection
Coverage hole detection aims at increasing the AP power (not decreasing it). It is not triggered by neighboring APs, but by clients on a single AP. Therefore, it is strictly local to a controller and is not an algorithm controlled by the RF group. This feature is enabled by default and can be configured from Wireless > 802.11a | 802.11b/g > RRM > Coverage, as shown in Figure 3-9. When enough client RSSI values fall below the data RSSI threshold for data clients, or below the voice RSSI threshold for voice clients (clients in the Platinum queue), the controller increases the AP power level for that radio. You can configure the number of clients below
the thresholds, either as a number of clients (3 by default) or as a percentage of the total number of clients on the AP radio (25% by default), needed to trigger the algorithm. The first threshold to be reached triggers the algorithm. For example, if your AP radio has four clients and two of them (fewer than three, but more than 25% of the clients) fall below the RSSI threshold, the algorithm is triggered and the AP power level is increased. For VoWLANs, start with the default values and test to see whether the algorithm is triggered often, and then adjust if necessary. With a threshold set too high, VoWLAN clients might lose calls because of coverage holes. With a threshold set too low, your cell sizes might inflate beyond what your design anticipated. The algorithm may also be triggered by “sticky clients” that stay on the same AP instead of roaming when the user moves away.
Figure 3-9 Coverage Hole Detection Parameters
DCA
Controllers always try to work with power levels first, and they change AP channels when power adjustments alone cannot solve the issue. An AP cannot inform its clients when a channel change is triggered, so a channel change is more disruptive to the cell clients than a power change. Channel changes are triggered by the DCA algorithm, which is enabled by default and can be configured from Wireless > 802.11a | 802.11b/g > RRM > DCA, as shown in Figure 3-10.
You can configure the list of channels that can be used and the sensitivity of the algorithm. This sensitivity determines the anticipated gain in client signal-to-noise ratio (SNR) required for the controller to trigger one or several AP channel changes. You can change several parameters:
■ Avoid Foreign AP Interference (enabled by default): The controller takes into account neighboring 802.11 devices (and avoids the affected channel in the relevant areas, when possible).
■ Avoid Cisco AP Load (disabled by default): The controller attempts to change in priority the channel of APs having the lowest number of clients (so that the channel change disturbs as few clients as possible).
■ Avoid Non-802.11b Noise (enabled by default): The controller takes into account the non-802.11 noise when deciding on an AP channel change.
■ Avoid Persistent Non-WiFi Interference: When non-802.11 interferers are detected and identified by 3500 or 3600 APs, the controller takes into account the detected interferer behavior to avoid channels (in relevant areas) affected by interferers that are known to repeatedly affect a given range of frequencies (even if the interferer is not detected permanently).
Figure 3-10 DCA Parameters
You can configure DCA channel sensitivity for the controller to change an AP channel if a minor SNR gain (High Sensitivity), a
medium SNR gain (Medium Sensitivity), or a high SNR gain (Low Sensitivity) is expected.
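As a sketch, the DCA contribution toggles can also be set per band from the CLI (shown here for the 5-GHz band, mirroring the defaults described above; verify availability on your controller code):

(Cisco Controller) >config advanced 802.11a channel foreign enable
(Cisco Controller) >config advanced 802.11a channel load disable
(Cisco Controller) >config advanced 802.11a channel noise enable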
The APs scan other channels at regular intervals, to allow the controller to estimate what SNR gain would occur when changing an
AP channel. You can configure the controller to avoid sending an AP to scan another channel if the AP has high-priority packets in its
queue. This is done from the WLAN Advanced tab, Off Channel Scanning Defer section. UPs 4, 5, and 6 are checked by default (see
Figure 3-4). This feature should be left enabled for VoWLANs. Notice that the AP only defers its channel scanning if frames of the
relevant UP are present in the AP queue; incoming frames from wireless clients are not taken into account in this determination.
Controller Configuration for Specific Clients
The recommendations just noted apply when configuring the network for Cisco 7921G or Cisco 7925G phones and most other
phones. Other brands may have specific requirements:
■ Vocera badges are 802.11b/g only, and use the wireless infrastructure to connect to a Vocera server. Badges are configured through the Vocera Badge Configuration Utility, installed on a PC. This PC must have specific TCP/IP properties defined (10.0.0.1/24). The PC must also be connected to an isolated AP with a specific SSID (Vocera). When configuring a WLAN for these badges, set the radio policy to 802.11b/g only. Set the QoS policy to Platinum and the WMM policy to Allowed. You also need to enable multicast support on the controller (see Chapter 4, “Multicast over Wireless”), as Vocera relies on multicast traffic for several functions.
■ SpectraLink phones also use the wireless infrastructure to connect to a specific server, the SpectraLink Voice Priority (SVP) server. Some SpectraLink phones use 802.11b/g only; others use 802.11a/b/g. Set your WLAN radio parameter accordingly. Set the QoS level to Platinum, but the WMM policy is typically set to Disabled. (Most of these phones do not support WMM.) Set the EDCA parameters to SpectraLink Voice Priority. Some 802.11b SpectraLink phones support only long preambles (navigate to Wireless > 802.11b/g/n > Network and uncheck Short Preamble). In the design phase, keep in mind that SpectraLink recommends that at least one AP provide signal coverage of –70 dBm to the SpectraLink phone in all areas where the phone will be used. The phone requires coverage of –60 dBm or better to ensure a data rate of 11 Mbps for 802.11b phones and 54 Mbps for other phones. This recommendation implies setting the cell edge at –60 dBm RSSI (rather than at –67 dBm).
■ Dual-band Nokia phones need the SSID to be broadcast. (The phone uses passive scanning and cannot discover an SSID if it is not broadcast in beacons.) Their other WLAN parameters are similar to those of Cisco phones. Keep in mind that most Nokia phones are 802.11b/g, not 802.11a. You can use SIP on a Nokia phone, or install an optional Skinny Call Control Protocol (SCCP) client to connect the phone to Cisco Unified Communications Manager/Cisco Unified Communications Manager Express (CUCM/CUCME). The Nokia phone appears as a Cisco 7970 phone (unless you upload a specific firmware for the Nokia phone into the CUCM).
■ Wi-Fi enabled phone and tablet requirements may vary depending on the model. For example, the Apple iPad is a one-spatial-stream (single antenna) 802.11n-enabled device that operates in the 2.4-GHz and 5-GHz spectrums using 20-MHz channels. The Apple iPhone 4 is a one-spatial-stream 802.11n-enabled device that operates in only the 2.4-GHz spectrum using 20-MHz channels. The Cisco Cius tablet is a one-spatial-stream (single antenna) 802.11n-enabled device that operates in the 2.4-GHz spectrum using 20-MHz channels and in the 5-GHz spectrum using 20-MHz or 40-MHz channels. To optimize your network for these devices, disable the low rates. Enable Band Select (from the Advanced tab of the VoWLAN configuration page) to push the clients to the 5-GHz band, where more and larger channels (40 MHz) are available. Use ClientLink (by enabling Beamforming from Wireless > 802.11a | 802.11b/g > Network) to optimize the signal to these clients at the edge of the cell. In all cases, you need to take these device requirements into account (if such devices are expected) when designing and configuring your VoWLANs, even if they are not the primary clients of the VoWLAN.
All other parameters are the same for these special clients as for Cisco 7921G or 7925G phones.
Autonomous AP Configuration
Most recommendations are the same for VoWLANs on autonomous APs as on controllers (SSID, VLAN, and whenever possible band separation between VoWLANs and data WLANs; strong security on all WLANs; 12-Mbps minimum [and mandatory] rate for 802.11a/g VoWLANs; 11-Mbps minimum [and mandatory] rate for 802.11b/g networks; higher rates set to Supported).
On an autonomous AP, you can also configure VoWLAN traffic prioritization. You can do so by creating a quality of service (QoS)
policy from Services > QoS > QoS Policies, as shown in Figure 3-11.
Figure 3-11 IOS AP QoS Policy Page
This policy consists of a class map determining the targeted traffic (Match Classification section of the page), a policy map determining the policy to apply to the targeted traffic (Apply Class of Service and Rate Limiting sections of the page), and a service policy that applies the policy map to an interface or a VLAN, in the incoming or outgoing direction (Apply Policies to Interface/VLANs section of the page, as shown in Figure 3-12).
Figure 3-12 Apply Service Policy
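The GUI generates standard IOS modular QoS CLI in the background. The following is a minimal, hypothetical sketch of an equivalent configuration; the class-map, policy-map, and subinterface names and the DSCP EF match are assumptions chosen for illustration:

ap(config)# class-map match-all Voice-RTP
ap(config-cmap)# match ip dscp ef
ap(config-cmap)# exit
ap(config)# policy-map VoWLAN-Policy
ap(config-pmap)# class Voice-RTP
ap(config-pmap-c)# set cos 6
ap(config-pmap-c)# exit
ap(config-pmap)# exit
ap(config)# interface Dot11Radio1.20
ap(config-subif)# service-policy output VoWLAN-Policy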
From the Services > QoS > Advanced page shown in Figure 3-13, you can disable WMM support for the 2.4-GHz or the 5-GHz radio (the default is enabled) and enable IEEE 802.11e UP 6 to 802.1p 5 mapping (the default is disabled, UP 6 being translated to 802.1p 6, and vice versa). You can also enable QoS Element for Wireless Phones, which triggers two behaviors:
■ The AP prioritizes voice packets over the wireless link, regardless of any QoS configuration.
■ The AP sends in its beacons the QoS Basic Service Set Information Element (QBSS IE) in Cisco format (Cisco QBSS v2), or in both Cisco and 802.11e formats (if you also check the dot11e check box).
Figure 3-13 IOS AP QoS > Advanced Tab
From the Services > QoS > Radio 0 2.4 GHz Access Categories | Radio 1 5 GHz Access Categories page shown in Figure 3-14, you can optimize the EDCA parameters by manually setting the QoS values (TXOP, slot time, CWmin, and CWmax). Alternatively, click the Optimized Voice button to set all the EDCA parameters to values optimized for VoWLAN support, or click WFA Default to restore the default WMM values.
Figure 3-14 IOS AP QoS 5 GHz Access Category Page
From the same page, enable Admission Control for Voice, as shown in Figure 3-15. The values and principles are the same as on the controller, except that autonomous APs support only static CAC (not load-based CAC). Also notice that seamless roaming between autonomous APs is supported only at Layer 2 (APs in the same subnet) and when Wireless Domain Services (WDS) is configured (from Wireless Services). WDS is used to cache the wireless client credentials and pass these credentials from one AP to the next as the client roams. Without WDS, each AP is independent, and reauthentication must occur when roaming between APs. Therefore, the Roam Channel Capacity feature applies only to roaming between APs in the same subnet when WDS is configured.
Figure 3-15 IOS AP CAC
From the Services > Stream page, shown in Figure 3-16, prioritize voice traffic by setting the Voice queue to Low Latency. (This happens automatically when you click the Optimized Voice button on the Services > QoS > Radio 0 2.4 GHz Access Categories | Radio 1 5 GHz Access Categories page.) The second column displays the number of times that frames in the low-latency queue (LLQ) should be re-sent in case of acknowledgment failure (frame not received). The default is 3, which is the right value for most VoWLAN deployments.
Lower down on the same page, configure the data rates at which LLQ frames should be sent. The AP tries the nominal rates first, and then the non-nominal rates if the frame could not be sent at any of the nominal rates. The disabled rates are never used for LLQ frames (but can be used for the other queues, because the Nominal/Non-Nominal/Disabled settings on this page apply only to LLQ frames).
Figure 3-16 IOS AP Stream Configuration
Wired Infrastructure Configuration
After configuring your wireless infrastructure, you also need to make sure that the voice infrastructure, on the wired side, is ready to support your VoWLAN. A CCNP Wireless engineer is not expected to be a voice expert, but you must have enough understanding of the underlying wired infrastructure to interact with the voice team (ask the right questions and understand what happens on the voice side).
The voice team might also configure CAC. CAC, in the voice world, is built to allow or deny calls based on bandwidth availability or profile restrictions. (For example, some employees may not be allowed to dial international numbers.) CAC related to bandwidth availability might take into account only the local bandwidth availability (just like our wireless CAC); that is called topology-unaware CAC. Some VoIP systems can also check the end-to-end bandwidth availability (and reserve bandwidth when possible). These systems are said to perform topology-aware CAC. A key element is, of course, the number dialed, which determines the destination network. Therefore, another task of the voice team is to build dial plans, to determine how packets are routed depending on the number dialed. For example, international calls may first be routed to a regional office network before being sent to a main international route, and emergency numbers may be dispatched to the local public switched telephone network (PSTN). Dial plans may be very complex in large environments. Modern systems, such as the CUCM, make use of the ITU E.164 recommendation to simplify call routing based on globalized numbers (instead of complex individual numbering structures based on branch, region, and so on, which differ for each deployment).
DHCP/DNS Configuration
Once the VoWLAN phone is associated to the wireless infrastructure, the 802.11 side of the configuration is complete, and the voice side starts. Your WLAN mapped the phone to a dynamic interface, for which a Dynamic Host Configuration Protocol (DHCP) server was configured (when configuring the interface, from Controller > Interfaces). You need to make sure that the DHCP server is configured to send the phone Option 150 (the TFTP server IP address, also called the next-server IP address). The phone needs this TFTP server to get its configuration file. It is common to use the CUCM or CUCME address as the TFTP server address. It is also common to use the CUCM as the DHCP server. The CUCM then takes care of all provisioning phases.
A Cisco IP phone that does not obtain a TFTP server IP address automatically tries to resolve the DNS name CiscoCM1. If you
cannot provision to the phone a TFTP server IP address, you can also make sure that the phone learns about a DNS server (DHCP
option 6), and then create an entry in the DNS server for CiscoCM1.
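As a minimal sketch, an IOS DHCP scope for the voice VLAN could provide both the TFTP server (Option 150) and a DNS server (Option 6); all names and addresses below are placeholders:

Router(config)# ip dhcp pool VOICE-POOL
Router(dhcp-config)# network 10.10.20.0 255.255.255.0
Router(dhcp-config)# default-router 10.10.20.1
Router(dhcp-config)# option 150 ip 10.10.30.5
Router(dhcp-config)# dns-server 10.10.30.10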
The configuration file contains, among other parameters, the IP address of the CUCM or CUCME that the phone is supposed to use. The phone uses this information to contact the CUCM/CUCME, obtain the right firmware (CUCM/CUCME can provision firmware to IP phones), and register. At that time, the phone obtains its extension number and an additional configuration file that lists the features enabled on the phone (quick dial numbers, screen display, and the list of features that are available to the user or blocked).
CUCM Configuration
The CUCM configuration itself depends on the version used. On CUCM 6.0, a first task is to connect through a web authentication
page to the CUCM Cisco Unified Serviceability component and make sure that TFTP and CUCM services are enabled. You can then
open a web browser session to the CUCM Service component and configure the phone part.
In the Cisco Unified CM Administration section of the tool, you can see that each phone consumes a certain number of device license
units (DLU). The number of DLUs consumed by each phone depends on the phone feature set. A Cisco 7921G or a Cisco 7925G uses
four DLUs. From System > Licensing, you can check the number of available DLUs left on the system, add DLUs (after buying them
from Cisco), and calculate how many DLUs are needed when adding new phones (based on the phone quantity and type).
Because each new phone consumes DLUs (and for security reasons), autoregistration is disabled on CUCM; phones cannot register automatically. You can enable autoregistration when deploying new phones by navigating to System > Cisco Unified CM, selecting the CUCM server you want to configure, and unchecking Auto-Registration Disabled on This Cisco Unified Communications Manager. You should check this option again (disabling autoregistration) after the deployment is complete. After a phone has registered to a CUCM, it is allowed to register again, but new phones cannot register if autoregistration is disabled.
If you do not want to use autoregistration, you can manually add a phone by clicking Add New from the main Phone List page. You
must enter the phone MAC address and the list of features you want to enable on that phone. The phone is then allowed to register to
that CUCM.
After a phone is registered, you can also see the phone name in the main Phone List page, and further edit its settings (allowed
features). You can also change the phone extension (number) if the number that was allocated automatically during registration is not
the extension you wanted for that phone.
CUCME Configuration
Smaller networks may use CUCME rather than CUCM. Cisco Unified Communications Manager Express is a specific application that runs on Cisco IOS. To install CUCME, you just need to ensure that your router and the Cisco IOS Software it is running support CUCME. (See the Cisco website to determine whether your IOS image contains CUCME.) You can access the CUCME interface
through a CLI or through a GUI. You can upgrade the CUCME files running on your router, or install the GUI portion as individual
files or TAR archives. If the content to install is an individual file, use the copy command to copy the files to the router flash:
Router# copy tftp://x.x.x.x/P00405000700.sbn flash:/file-url
If the content is a TAR file, use the archive tar command to extract the files to flash memory:
Router# archive tar /xtract source-url flash:/file-url
Note
The CUCME GUI portion is optional. (It adds the graphical interface support.)
If you want the CUCME to provide firmware files to the registering phones, you must also set up the IOS to act as a TFTP server,
by pointing to the folder in the flash containing each phone firmware file. Use the command tftp-server flash:/path to each phone
firmware file.
From the CLI, several commands prove useful to configure voice support on the CUCME:
■ telephony-service: Enters telephony service configuration mode.
■ web admin system name username {password string | secret {0 | 5} string}: Defines the username and password for a system administrator. Also, use the global command ip http server to enable the web interface, ip http path to point to the folder where the CUCME web interface files reside, and ip http authentication to determine how credentials are checked (local list, AAA, and so on). Once the web interface is created, you can access it at http://CUCME_IP_Address/ccme.html.
■ max-ephones: CUCME does not use DLUs. Each CUCME supports a maximum number of phones based on its hardware platform. When you enter the command max-ephones ?, the system displays the maximum value for the current router model.
■ max-dn: Limits the number of extensions (called ephone-dns) available on a CUCME.
■ ip source-address IP address port 2000: This mandatory command enables the router CUCME component to receive messages from IP phones through the specified IP address and port. The IP address must be one of the router interface addresses. Because CUCME uses SCCP, the default port is the standard SCCP port, TCP 2000.
■ load phone type firmware name (for example, load 7921 CP7921G-1.4.1): This command specifies the firmware file to push to the specified type of IP phone.
■ ephone-dn number: Use this command to create directory (phone extension) numbers.
■ ephone number: Use this command to configure options for each phone (the MAC address identifying the phone, the phone screen display, and so forth). For example, the subcommand button 1:1 assigns the first ephone-dn to the first button (in the upper-right part of the phone screen on a Wireless IP Phone 7921G or 7925G). You would typically use the ephone number itself (for example, button 1:5 on ephone 5), which allows the user to see the phone's local extension on the screen.
■ create cnf-files: Use this command to generate the XML configuration files used for provisioning the phones. Run this command each time you change the phone configuration parameters on the CUCME. (A consolidated configuration sketch follows this list.)
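The following sketch puts these commands together for a small CUCME deployment. The IP address, extension number, MAC address, firmware file name, and phone counts are placeholders chosen for illustration only:

Router(config)# tftp-server flash:/CP7921G-1.4.1.LOADS
Router(config)# telephony-service
Router(config-telephony)# max-ephones 24
Router(config-telephony)# max-dn 48
Router(config-telephony)# ip source-address 10.10.30.1 port 2000
Router(config-telephony)# load 7921 CP7921G-1.4.1
Router(config-telephony)# create cnf-files
Router(config-telephony)# exit
Router(config)# ephone-dn 5
Router(config-ephone-dn)# number 2005
Router(config-ephone-dn)# exit
Router(config)# ephone 5
Router(config-ephone)# mac-address 001E.7AC4.AB12
Router(config-ephone)# button 1:5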
Infrastructure Security
Security is also an important aspect of VoWLAN deployments. Beyond physical security (controlling access to areas where the VoWLAN phones' signals can be received), security measures can be put in place at the voice infrastructure level to secure the communication between the phone and the CUCM, by authenticating or encrypting the files exchanged between the phone and the CUCM. Signaling traffic can also be encrypted. On CUCM, you can configure the phone to enhance security (for example, by restricting user access to the phone's local web interface setting options).
Security is also configured for the VoWLAN. 802.1X/EAP authentication can be performed on the controller local RADIUS server, or on an external RADIUS server, configured from Security > RADIUS > Authentication.
If you use WPA2 with a Cisco phone (7921 or 7925), check the phone firmware version. With firmware release 1.3.3 and earlier, the
7921G and 7925G phones support WPA2 or WPA with AES, but not with CCKM. This means that you can use WPA/AES or WPA2/
AES, but without CCKM (Fast Roaming will not work), or use WPA/TKIP with CCKM (which is the recommended choice, when
WPA is used with 802.1X/EAP). Firmware 1.3.4 or later allows you to use WPA2/AES with 802.1X and CCKM.
Also, if you use EAP-FAST, make sure that the controller allows enough time for the phone to process the PAC. (The phone needs a little less than 20 seconds.) You can configure the EAP timeout from the CLI with the command config advanced eap request-timeout 20, or from the web interface (Security > Local EAP > General > EAP Request Timeout), as explained earlier.
You also need to configure the AAA server to support the VoWLAN. On a Cisco Secure ACS 4.2, navigate to Network Configuration
and add the controller as a RADIUS client (using the same shared secret you entered when adding the RADIUS server to the
controller). Then, navigate to User Setup and add the user credentials that will be sent by the VoWLAN phone users upon
authentication.
You must also configure the ACS to support the authentication schemes you need (EAP-FAST, PEAP, TLS, LEAP), from System
Configuration > Global Authentication Setup.
VoWLAN Client Configuration
When the network is ready, you can configure the VoWLAN phones. Different phones may have different configuration options, but
you are expected to have basic knowledge of the 7921G (and 7925G) phone’s configuration.
During the boot process, the phone goes through several phases, which you can identify by looking at the phone screen, as shown in
Figure 3-17:
■ Powering up (with the phone screen displaying the Cisco logo): The phone loads its last known firmware.
■ Locating Network Services: The phone scans the 2.4-GHz and the 5-GHz bands, looking for the SSIDs configured in the phone profiles (and the Cisco SSID, with open authentication and no encryption).
■ Configuring IP: The phone has successfully associated to a WLAN and configures its IP address (through DHCP or a statically configured IP address).
■ Configuring CM List: The phone has an IP address and tries to determine the address of the CUCM/CUCME to contact (from the configuration file received from a TFTP server contacted after configuring the IP address, or using the CM IP addresses remembered from previous boots).
■ Registering: The phone communicates with the CUCM/CUCME and tries to register and obtain an extension number.
■ The phone then displays the current time, date, and your current options.
Figure 3-17 Cisco 7921G Phone Boot Screens
You can configure a Cisco 7921 or 7925 phone from the local keypad or through a web interface. You need to make sure that web access is
enabled for the phone in the CUCM or CUCME. From the CUCM interface, set the Web Access option to Full in the phone configuration
page. In CUCME, under the Telephony-Service Configuration section, enter the service phone webAccess [0|1|2] command. (Option 0
enables full access to the web interface, 1 disables the web interface, 2 enables the web interface in read-only mode.)
This web interface can be accessed by opening a web browser session to the phone IP address. You can also connect a USB cable between a PC and the phone. Before you can configure the phones using a USB connection, you must install the USB-Install-7921-7925.1-0-2a.exe application (downloaded from the Cisco website). This creates a virtual network connection for the USB connection to the phone. You can edit this connection like any other network connection, and configure the PC with an IP address in the 192.168.1.0/24 range (for example, 192.168.1.101). The phone listens on its USB connection at 192.168.1.100/24. You can then connect to the phone interface by opening a secure web browser session to https://192.168.1.100.
You can then configure the phone from the web interface or the phone keypad, as shown in Figure 3-18. Both possibilities allow for
the same configuration items, but they are grouped differently.
Figure 3-18 Configuring a Cisco 7921G Phone from the Keypad or Web Interface (with a USB Cable)
To configure the phone from the local keypad, press the down arrow and then the selection key to enter the Settings menu. You can then access the following submenus:
■ Phone Settings: Configures sound and display.
■ Network Profiles: Configures WLANs. From this menu, you can configure up to four different profiles, each with its security mode (Open, WEP, Shared+WEP, LEAP, EAP-FAST, EAP-TLS, PEAP, or Auto) and IP addressing model (Static or DHCP). You can also decide whether the phone should scan for and associate to the configured SSID in either the 2.4-GHz or the 5-GHz band, whether the phone should prefer one band (but still use the other if the SSID is not found in the preferred band), or whether the phone should be restricted to one band.
■ System Configuration: Configures the phone interface security settings and the USB interface address.
■ Device Information: Lists the phone's parameters, such as CUCM details, network, WLAN, and QoS. These parameters can be viewed but not configured.
■ Model Information: Lists the phone MAC address, serial number, firmware version, and so forth.
■ Status: Displays statistics on phone utilization; also provides a site survey menu that lists detected BSSIDs with their RSSI and channel load information.
Most configuration items are locked, to avoid accidental misconfiguration. You can unlock features by selecting the function you
need to access and pressing the sequence **#. You can also reset the phone to its factory defaults from the Phone Settings menu by
pressing **2.
VoWLAN Troubleshooting
In an ever-changing RF environment, issues arise, such as one-way audio or lost packets. From the controller Monitor > Clients >
Details page, you can monitor your clients and check important parameters, such as status (associated or not), RSSI/SNR at current
AP, WMM state, bytes sent and received, error rate, QoS levels, and so on. You can also monitor clients from Wireless Control
System (WCS) and use the Troubleshoot Client option in the Client Home tab to trace a client association process and troubleshoot
any association-related issue. From WCS, you can also use the VoWLAN Readiness tool (from Monitor > Maps > Select floor map
> Inspect VoWLAN Readiness) described in Chapter 2 to evaluate any VoWLAN coverage issue. Areas not displayed in green
(displayed in yellow or red) are unlikely to support VoWLANs properly.
From WCS, you can also use the Voice Audit tool (from Tools > Voice Audit), shown in Figure 3-19, to evaluate each controller configuration and verify whether all configuration elements match the VoWLAN recommendations. The elements to verify and their recommended values are preselected (for example, CAC enabled, Platinum on the QoS tab of the WLAN), but you can add or remove elements and tune the readiness criteria based on your specific clients' requirements. The voice audit report provides detailed information on each violation. For each parameter, you can click a link to bring up a new window displaying the violation details.
Figure 3-19 WCS Voice Audit Report
WCS 7.0 and later also offers a Voice Diagnostic tool (from Tools > Voice Diagnostic), shown in Figure 3-20, through which you select two voice clients (based on MAC address) and launch the test while the clients are on a VoWLAN call with each other. The test can run from 5 minutes to 90 minutes and displays a report about the call (uplink and downlink TSM, QoS, AC queue, and noticeable events over the tested period).
Figure 3-20 WCS Voice Diagnostic
You can also enable Traffic Stream Metrics (TSM) collection on a controller by checking the Metrics Collection option on the Wireless > 802.11a | 802.11b/g > Media page (refer back to Figure 3-5). TSM monitors the four variables that can affect audio quality: packet latency, packet jitter, packet loss, and roaming time. You can access the TSM statistics from the Monitor > Clients page and the Wireless > Access Points > Radios > 802.11b/g | 802.11a page, by selecting the TSM option when hovering the mouse over the blue chevron at the end of a client or AP name. TSM information appears for each chosen client or AP in the form of a table similar to the one displayed in Figure 3-21.
Figure 3-21 TSM Information
Notice that the client needs to be CCXv4 or later for TSM to display both uplink and downlink information. If the client is not CCXv4
or later, only downlink statistics are captured. This is because, without CCXv4, the client cannot inform the AP about when each
frame was sent.
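TSM statistics can also be pulled from the controller CLI. The following is a sketch; the client MAC address is a placeholder, and the all keyword requests the metrics recorded on every AP the client visited:

(Cisco Controller) >show client tsm 802.11a 00:1e:7a:c4:ab:12 all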
TSM reports can also be generated from WCS (from Reports > Report Launch Pad > Performance > Traffic Stream Metrics for TSM on all APs, and from Reports > Report Launch Pad > Clients > Client Traffic Stream Metrics for TSM on a specific AP, SSID, or client). The status of each reported item is displayed with a color code (green, amber, or red) to help you immediately locate any TSM-related issues.
Controller code 7.0.116 and later also includes CLI commands that are specific to VoWLAN monitoring and troubleshooting. VoWLAN monitoring is enabled with the command debug client voice-diag enable (optionally followed by a specific client MAC address, if you do not want statistics about all voice clients). The monitoring commands are in the form show client voice-diag option, where the option can be tspec (TSPEC information for each voice client), rssi (RSSI information each second over the past 5 seconds [5 values] for each voice client), or roam-history (information about each client's past 3 roams).
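As a usage sketch based on the commands just described (output not shown):

(Cisco Controller) >debug client voice-diag enable
(Cisco Controller) >show client voice-diag tspec
(Cisco Controller) >show client voice-diag rssi
(Cisco Controller) >show client voice-diag roam-history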
Issues may also come from interference. Use Cisco Spectrum Expert or CleanAir reports to identify non-802.11 interferers that may affect your calls. You may also want to use third-party tools to help your troubleshooting efforts. A site survey mapping tool (such as AirMagnet Survey or Ekahau Site Survey) can help you verify each AP coverage area (with SNR/RSSI values). AirMagnet VoFi Analyzer may also be a very valuable tool to provide detailed information about VoWLAN calls (802.11-related information, but also VoIP information, such as end-to-end mean opinion score [MOS], delay, or jitter). Packet capture tools (such as OmniPeek or Wireshark) may also prove useful.
When helping users troubleshoot VoWLAN issues, make sure you know exactly what happened, where, when, and whether the issue affects all phones or only some phones, in a specific area or throughout the entire coverage area. Verify whether the phone is sending UP 6 frames, whether the issue affects both the 2.4-GHz and 5-GHz bands or just one of them, and whether the issue occurs when calls are set up between clients on the same AP or between APs on different controllers. If your users experience one-way audio issues, check the phones' transmit power, codecs, firmware versions, and general configuration; verify the presence of firewalls between endpoints; and check that controller configurations are similar (for intercontroller calls). You might want to use a test scenario like the one displayed in Figure 3-22 to test the initial wireless call, and then roam from one AP on a controller to another AP on another controller and verify that the call quality is acceptable. This helps you analyze your setup and compare it to the client deployment to isolate the source of the problem.
Figure 3-22 VoWLAN Test Scenario (Phone 1, associated to AP1, moves toward AP2 while on a call through the CUCM; Phone 2, on AP2, does not move)
Chapter 4
Multicast over Wireless
IUWVN is about more than just Voice over WLAN (VoWLAN); it also covers applications that are bandwidth intensive or time sensitive. Multicast traffic is a specific case, at the boundary of both worlds. (A time-sensitive application, such as VoIP, uses multicast for features like paging or music on hold, and bandwidth-intensive applications, such as video, often use multicast to save bandwidth.) Multicast on the wireless side of the network operates differently from the way it operates on the wired side of the network. You need to understand what multicast is, where it is used in wireless, and how to configure your wireless and wired infrastructure for optimal wireless multicast support.
Multicast Concepts
Multicast uses a specific range of addresses to deliver one packet to multiple destinations, reducing traffic volume and saving
bandwidth. Any application that needs to exchange information between several endpoints may use multicast. There are many types
of multicast use cases:
■ One-to-many: One source, many destinations
■ Many-to-many: Many sources sending to many destinations
■ Many-to-one: Many sources sending to one destination, where a station queries many stations using multicast, and receives feedback from all subscribing stations
Multicast uses destination addresses that are not routable by default. Enabling multicast implies configuring the network to forward
multicast traffic where needed, and ensuring that the multicast flow only reaches the destination endpoints that need it. Multicast
IP addresses use Class D addresses and range from 224.0.0.0 through 239.255.255.255. 224.0.0.0/24 is reserved by the IANA for
network protocols and is used by routers and switches to exchange information (routing updates and so on). 224.0.1.0 through
238.255.255.255 are global addresses and are allocated dynamically throughout the Internet. 239.0.0.0 through 239.255.255.255 are administratively scoped addresses and are reserved for use inside private domains. You should use these administratively scoped addresses for your LAN multicast traffic (unless you are provided a scope by the IANA).
The destination MAC address of a packet using a multicast destination IP address is also a multicast (Layer 2) address. The translation between the IP multicast address and the MAC address is done by mapping the 23 low-order bits of the IP address into the 23 low-order bits of the IEEE (Layer 2) MAC address. In the MAC address, the 0x01005e prefix shows that this MAC address is used to map an IP multicast address into a Layer 2 MAC address, as shown in Figure 4-1.
[Figure: the mapping for 239.130.0.1. The 32-bit IP address is 1110 1111.1000 0010.0000 0000.0000 0001; the leading 1110 and the next 5 high-order bits are lost, and only the 23 low-order bits (000 0010.0000 0000.0000 0001) are kept. The 25-bit prefix 01-00-5e (0000 0001.0000 0000.0101 1110.0) is prepended to express the Layer 3 to Layer 2 multicast translation, giving the 48-bit MAC address 01-00-5e-02-00-01.]
Figure 4-1 Layer 2 Multicast Address Generation from a Layer 3 Multicast Address
Because there are 28 bits of unique address space for an IP multicast address (32 minus the first 4 bits containing the 1110 Class D
prefix), and there are only 23 of these 28 bits mapped into the IEEE MAC address, 5 bits of the IP address are ignored. Therefore, be
aware that several Layer 3 addresses may map to the same Layer 2 multicast address. For example, the address 239.130.0.1 (11101111.10000010.00000000.00000001) translates to the same Layer 2 multicast address (01-00-5e-02-00-01) as any other multicast address ending with 130.0.1 (224.130.0.1, 225.130.0.1, 226.130.0.1, and so on), but also any multicast address where the low-order 7 bits of the second octet are 0000010 (that is, 10000010, which is 130, but also 00000010, which is 2). Therefore, 228.130.0.1, 228.2.0.1, 234.130.0.1,
and 234.2.0.1 all translate into the same multicast MAC address. Be mindful of this issue when choosing multicast addresses. Avoid
choosing, on the same network, Layer 3 multicast addresses that translate into the same Layer 2 MAC address. Otherwise, subscribers
to any of these multicast addresses will need to process the received frame before deciding, at Layer 3, if the packet should finally be
ignored or forwarded to the upper layers.
Protocols Used with Multicast
For multicast to be effective, a source must send traffic to a multicast destination address. On the receiver end, a multicast-enabled
application must subscribe to the multicast flow (this action is called joining a multicast group) and must know what multicast
address to subscribe to. Between the source and the destination, routers must link source and destination.
Internet Group Management Protocol (IGMP) is the protocol used when hosts want to join a multicast group. In its first version (IGMPv1), the router sends a multicast membership query (every 60 seconds by default) to all hosts in the subnet (using the "all-hosts" multicast address 224.0.0.1). Hosts reply by sending a membership report for the multicast address they want to join. (The router uses another protocol to get to the source of the multicast traffic and forward its traffic to the LAN.) No process is in place to leave a group; timeouts are used. (No host replies to the router query for a specific group, and the query times out after 180 seconds.) In its second version (IGMPv2), hosts still answer membership reports for the multicast address they want to join, but they can also join a group without waiting for the router query (by sending unsolicited membership reports for a specific multicast group address). Hosts can also send leave messages (to the all-routers address 224.0.0.2). The router can reply with a query specific to the group (instead of all hosts) to check whether there are still members for the group in the subnet. Members reply with a membership report. If no member replies (the timeout for a specific group can be just a few seconds), the router stops forwarding traffic for that group (but still forwards traffic for the other groups). IGMPv3 works almost like IGMPv2, but also allows hosts to indicate that they want to receive traffic only from particular sources for a multicast group. IGMPv3 membership reports are sent to 224.0.0.22.
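As an illustration, you can check or set the IGMP version on a Cisco IOS router interface and list the groups that currently have members; the interface name below is only an example:
Router(config-if)# ip igmp version 2
Router# show ip igmp interface GigabitEthernet0/1
Router# show ip igmp groups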
Multicast must also be managed at the switch level. As the multicast address is always used as a destination (never a source), a switch
cannot associate the multicast address to a port, and keeps flooding multicast to all ports. Two protocols enable you to resolve this issue:
■ Cisco Group Management Protocol (CGMP): A Cisco proprietary protocol that runs between the router and the switch. The router sends to the switch (to the CGMP multicast address 0x0100.0cdd.dddd) messages that contain each multicast client MAC address, multicast group, and a status (leave or join). The switch uses this information to update the list of ports where multicast is needed.
■ IGMP snooping: A switch-enabled feature used to intercept IGMP without the need for interaction with the router. The switch becomes IGMP aware and listens to the messages exchanged between the hosts and the router. It is heavier than CGMP (because the switch must examine the IGMP messages carried inside the IP packets to understand which port needs to be added to or removed from which group), but is more widely used than CGMP.
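As an illustration, on a typical Catalyst IOS switch, IGMP snooping is enabled globally (it is usually on by default) and can be verified as follows; exact output varies by platform:
Switch(config)# ip igmp snooping
Switch# show ip igmp snooping
Switch# show ip igmp snooping groups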
Note: A switch uses the source MAC address in frames to associate a MAC address to a port. Frames sent to that MAC address are then only forwarded to the associated port. If the switch cannot associate a source MAC address to a specific port, frames for that MAC address are flooded to all ports.
Beyond the local LAN, routers must also make sure that multicast traffic travels from the source to the destinations. Two distribution models (or trees) exist:
■ Dense mode: Uses source-rooted or shortest-path trees (SPT). With this model, a source sends multicast traffic, and all routers on the path flood the traffic to all ports, reaching every corner of the network. This "push" model is a method for delivering data to the receivers without the receivers needing to request the data, and is efficient in deployments where there are active receivers on every subnet in the network. Routers that do not have clients for the multicast group send "prune" messages toward the source, thus stopping the flood to segments where it is not needed. A new flood and prune cycle occurs every 3 minutes by default.
■ Sparse mode: Uses shared trees and routers with a specific role (called rendezvous points [RP]). Multicast sources send their multicast flow to the RPs. When a host needs a multicast flow, it sends an IGMP message to its local router. The local router looks for the RP managing this multicast group, and forwards the query to the RP. The RP then relays the multicast flow down toward the router and its client. The path between the source and the host goes through the RP, and is therefore longer than with SPT (which is why SPT is called "shortest path"). This model avoids the flood/prune overhead of SPT, and is well adapted to networks where only some clients require multicast traffic.
Routers relaying multicast traffic commonly use the Protocol Independent Multicast (PIM) protocol to identify the source, the
receivers, and the mode (dense or sparse) to use for multicast routing. Notice that multicast routing and unicast routing work in
opposite directions. When a router receives a unicast packet, the routing decision is based on the destination address (where is the
packet going). With multicast, the routing decision is based on the source of the packet. The source is known and must be identified
(to avoid routing loops), whereas the destination is a group of unknown (and variable in size and location) destinations. The router
uses Reverse Path Forwarding (RPF) to trace the source so as to prevent forwarding loops and to ensure the shortest path from the
source to the receivers (if the source identified by RPF is not on the same interface as the one where the packet was received, the
packet is dropped). RPF relies on the underlying unicast routing protocol running on the router to identify from which interface the
source can be reached. Any unicast protocol can be used (hence the “protocol independent” name).
PIM can operate in dense and in sparse mode. Both modes can even be enabled on a given router, some interfaces being in sparse
mode, others in dense mode. A third mode specific to Cisco, sparse-dense, can use sparse mode and revert to dense if no RP is found.
PIM can also be used to identify the RPs, and even set specific RPs for specific groups. The commands needed for simple PIM-SM
and PIM sparse-dense mode deployments are the following:
■ The global command ip multicast-routing enables support for IP multicast on a router.
■ The interface command ip pim {dense-mode | sparse-mode | sparse-dense-mode} enables PIM operation on the selected interface, respectively in dense mode, sparse mode, or hybrid mode (sparse mode, reverting to dense mode for a given multicast group if an RP is not found for that group).
■ You should issue the global command ip pim send-rp-announce {interface type} scope {ttl} group-list {acl} on the router that you want to be an RP. (For example, ip pim send-rp-announce loopback 0 scope 16 group-list 1 enables the router to announce its loopback 0 interface IP address as an RP address for the multicast addresses allowed and listed by access list 1; this announcement is sent up to 16 hops away from the router.) The router sends an Auto-RP message to 224.0.1.39, announcing the router as a candidate RP for the groups in the range described by the access list.
■ The global command ip pim send-rp-discovery {interface type} scope {ttl} configures the router as an RP mapping agent; it listens to the 224.0.1.39 address and sends an RP-to-group mapping message to 224.0.1.40. The RP mapping agent collects the announcements from all candidate RPs for all multicast groups, and lists the best RP for each group. Other PIM routers listen to 224.0.1.40 to automatically discover the best RP for each group.
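Putting these commands together, a minimal PIM sparse-dense configuration on a router that also acts as candidate RP and RP mapping agent might look like the following sketch (the interface names and the content of access list 1 are examples only):
Router(config)# ip multicast-routing
Router(config)# interface GigabitEthernet0/1
Router(config-if)# ip pim sparse-dense-mode
Router(config-if)# exit
Router(config)# access-list 1 permit 239.0.0.0 0.255.255.255
Router(config)# ip pim send-rp-announce Loopback0 scope 16 group-list 1
Router(config)# ip pim send-rp-discovery Loopback0 scope 16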
Multicast in the Wireless World
If your network applications use multicast, you may need to forward multicast traffic to the wireless cell. Typical examples of
applications using multicast are video streams, VoIP, and music on hold. Vocera badges use multicast for paging functions and group
communication.
By default, multicast is not enabled on the controller. Multicast packets coming from the wired side are dropped and not forwarded
to the access points (AP) and their clients. Notice that multicast packets coming from wireless clients are forwarded to the wired side
(but by default not forwarded back to the other APs and their wireless clients). To enable multicast (and relay multicast traffic to the
APs and their clients), navigate to Controller > Multicast and check the Enable Global Multicast Mode check box. Then, navigate
to the Controller > General page and choose the multicast forwarding mode you want to enable. You can choose between Unicast
and Multicast mode, as shown in Figure 4-2.
Figure 4-2 Multicast Configuration on a Controller
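The same settings can typically be applied from the controller CLI; the group address shown below is only an example (it is the one used later in this chapter):
(Cisco Controller) >config network multicast global enable
(Cisco Controller) >config network multicast mode multicast 239.129.1.5
or, for the legacy mode:
(Cisco Controller) >config network multicast mode unicast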
Unicast is the legacy mode, and is adapted if the network between your controller and the APs does not support multicast. With
Multicast/Unicast, each incoming multicast frame is encapsulated into Control and Provisioning of Wireless Access Points
(CAPWAP) Protocol and sent as a unicast packet to each individual AP IP address. The AP decapsulates the packet and forwards the
multicast content to its radios if clients need it. This mode is the least efficient because the multicast packet needs to be encapsulated
and sent as many times as the controller has access points (generating controller CPU overhead and additional load on the network).
This mode is not supported on the 2100 and 2500 series branch controllers.
Multicast/Multicast is the preferred mode for WLC to AP multicast communication, since controller code release 3.2. When
you choose this mode, a new field appears where you enter a multicast address. (Cisco recommends choosing an address in the
administrative range 239.129.0.0 to 239.255.255.255.) This address is a multicast address shared between the controller and its
APs. Every AP that joins the controller learns about the controller multicast group address and registers (with IGMPv1, v2, or v3,
depending on what the wired infrastructure supports) with the group. When a multicast packet reaches the controller, the controller
encapsulates the multicast packet into CAPWAP and sends it to the controller multicast group address. The packet is sent only once
(thus saving bandwidth and CPU overhead), but as each AP has registered to the controller group multicast address, all APs receive
the encapsulated packets. Each AP decapsulates the packet and forwards the multicast content to its radios if clients need it, as shown
in Figure 4-3.
[Figure: a source at 10.10.10.1 sends a multicast packet to 239.130.0.1. The controller encapsulates it into a single CAPWAP packet sent from its management IP address to the AP multicast group 239.129.1.5; the network replicates the packet as needed, and only the APs that have clients for the group forward it to their radios.]
Figure 4-3 WLC Multicast Group Encapsulation
Wireless clients, just like any other host, use IGMP messages to join multicast groups. These messages are relayed by the AP to
the controller. The controller then creates a multicast group identifier (MGID) entry that associates the multicast group, the client
VLAN, and client MAC address. When other wireless clients in the same VLAN join the same group, they are added to the MGID. A
new MGID entry is created for any new multicast group or for any new client VLAN for a multicast group.
The controller can then operate in two modes, as illustrated in Figure 4-4:
■ When IGMP snooping is not enabled (the default mode), the controller relays all client multicast messages (join, leave, and membership reports) to the wired infrastructure. The controller is transparent, and the uplink multicast router directly sees the wireless clients as multicast receivers. This mode is somewhat bandwidth inefficient because identical messages (membership reports, for example) coming from many clients in the same MGID are all relayed to the uplink router.
■ When IGMP snooping is enabled (from Controller > Multicast, by checking the Enable IGMP Snooping check box), the controller absorbs the client messages. This is the recommended mode. When a membership report is received from a wireless client for a new MGID, the controller absorbs the query, but sends its own join message (using the controller dynamic interface IP address associated to the client WLAN) to the wired infrastructure (the controller appears to the multicast router as the client, and the wireless clients are not seen by the wired infrastructure). The controller absorbs, in the same way, the messages from all the other multicast clients. It does not subscribe multiple times to the same multicast group (even if multiple clients in the same VLAN are subscribing to that group), and answers on its own the membership queries sent by the uplink multicast router. When the last wireless client in the MGID leaves the group, the controller stops answering the uplink multicast router membership queries (and lets the multicast flow time out). IGMP snooping makes multicast processing more efficient by limiting the number of multicast messages exchanged between the controller and the multicast source. On the same Controller > Multicast page, you can configure the way the controller verifies whether there are still clients in each MGID (refer to Figure 4-2). The controller sends membership queries at intervals defined by the IGMP query interval value (20 seconds by default). If the controller does not receive a response through an IGMP report from a given client after the IGMP timeout interval, the controller times out this particular client entry from the MGID table. When no clients are left for a particular multicast group, the controller waits for the IGMP timeout value to expire and then deletes the MGID entry from the controller.
[Figure: both modes compared. A wireless client with IP address 10.1.1.130 joins group 239.130.0.1 through a controller whose dynamic interface is 10.1.1.10. With IGMP snooping disabled, the upstream router lists the client (10.1.1.130) as the last reporter for the group; with IGMP snooping enabled, the router lists the controller dynamic interface (10.1.1.10) as the last reporter.]
Figure 4-4 WLC IGMP Snooping Modes
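From the controller CLI, IGMP snooping and its timers can typically be adjusted as follows (the query interval shown is the default mentioned above; the timeout value is only an example):
(Cisco Controller) >config network multicast igmp snooping enable
(Cisco Controller) >config network multicast igmp query interval 20
(Cisco Controller) >config network multicast igmp timeout 60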
You need to remember several elements:
■ The controller MGID associates a multicast destination address to a VLAN (not a WLAN) and lists the wireless clients in each group. The controller's decision to forward a multicast packet is based on whether an MGID is associated with the incoming multicast packet. (A verification sketch using the MGID show commands follows this list.)
■ When a controller forwards a multicast packet, it forwards it to all the access points. (All APs subscribe to the controller multicast group and receive all forwarded multicast packets.) APs in monitor mode, sniffer mode, SE-Connect mode, or Rogue Detector mode do not participate in data service, and therefore never join the controller multicast group (and therefore do not receive multicast packets from the controller).
■ In code 7.0.116, multicast packets forwarded from the controller are always sent with best effort (BE, no QoS tag), regardless of the QoS level of the initial multicast packet entering the controller. This behavior changes in later codes. For example, a multicast call placed from a Vocera badge in a WLAN with QoS level set to Platinum may be forwarded to the controller (by the AP to which the badge connects) using DSCP 46 (EF), but will then be relayed by the controller through the best effort queue.
■ APs only accept multicast packets received from the controller and sent to the controller multicast group address. All other multicast packets received on the AP wired interface are dropped.
■ Upon receiving a multicast packet from the controller, the AP checks the MGID entry and the associated VLAN. If the AP does not have any wireless client that joined the multicast group associated to the MGID, the multicast packet is dropped. If the AP has at least one wireless client subscribed to the group, the AP forwards the multicast packet to all radios and WLANs associated to the matching VLAN (regardless of the WLAN or radio on which the multicast client is located).
■ Some WLANs can be associated with an interface group (a group of several dynamic interfaces, and therefore several VLANs). This feature is called VLAN pooling, and may be an issue because the same WLAN would be associated to different MGIDs. If different clients on the same WLAN but different VLANs join the same multicast group, the same multicast packet will be sent by the controller several times to the APs (once for each MGID), and therefore sent several times over the same WLAN. To avoid this issue, in the WLAN > General tab, enable the Multicast VLAN feature and choose the dynamic interface and VLAN to be used for multicast traffic. All multicast traffic for this WLAN will be mapped to the VLAN you choose, thus ensuring that each multicast packet is sent once, regardless of the VLANs associated to the WLAN.
■ Over the wireless medium, multicast packets are always sent using the QoS queue associated with the WLAN quality of service (QoS) level (for example, user priority [UP] 6 for WLANs with a QoS level of Platinum), at the highest mandatory rate. This point may be problematic, for two reasons:
■ Clients may be in an area of the cell where the highest mandatory rate cannot be read. (For example, the highest mandatory rate is set to 24 Mbps, and a client requests multicast from an area of the cell where only 6-Mbps communication is possible to the AP: The client requests a multicast flow, but cannot demodulate the flow sent back at 24 Mbps.) This issue is solved in voice networks by setting only one mandatory rate, the lowest allowed rate (which becomes the lowest and highest mandatory rate because it is the only mandatory rate). See Chapter 3, "VoWLAN Implementation."
■ The client's experience may be degraded in reverse. For example, a client communicating at a 144.4-Mbps 802.11n MCS may request a video flow and receive it as a multicast stream sent at the highest mandatory rate (for example, 24 Mbps), making the multicast video quality a lot lower than a unicast video stream sent at 144.4 Mbps would allow. This issue is resolved for video clients with VideoStream, as explained in Chapter 5, "Video and High-Bandwidth Applications over Wireless."
■ When a multicast packet comes from a wireless client, it is encapsulated by the client AP and sent (as a unicast frame exchanged between an AP and its controller) to the controller. The controller then forwards the frame to the wired network, to the client associated VLAN and subnet. If multicast support is enabled, the controller also forwards the frame back to all APs as explained earlier.
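As announced in the first bullet, you can typically verify the MGID table from the controller CLI; the MGID value in the second command is only an example:
(Cisco Controller) >show network multicast mgid summary
(Cisco Controller) >show network multicast mgid detail 550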
Roaming (between APs connected to different controllers) may be a concern with multicast. You must consider two scenarios,
as illustrated in Figure 4-5:
■ Both controllers are in the same subnet: In this case, roaming is seamless. As the client credentials are exchanged between controllers, the first controller also sends to the new controller the client MGID information. The new controller immediately sends an IGMP membership report to the upstream router. Because both controllers are in the same subnet, the upstream router is the same router for both controllers. The membership report simply allows the intermediate switch (with IGMP snooping or CGMP) to become aware that the new controller also needs the multicast traffic. The switch starts flooding the multicast flow to the new controller, and the flow is forwarded to the client from the new controller seamlessly.
■ Controllers are in different subnets: In this case, it might take time for the new controller to reach the source of the multicast flow. To avoid any delay, the first controller maintains the MGID and forwards the multicast packets to the new controller. The client membership reports that reach the second controller are also forwarded to the first controller. In summary, the multicast session is maintained through the first controller, not directly by the second controller. The second controller directly handles new MGIDs, but uses the first controller to maintain existing multicast flows during roaming.
[Figure: the two cases compared. With Layer 2 roaming, the new controller simply sends its own IGMP report toward the upstream switch and router; with Layer 3 roaming, the multicast traffic keeps flowing through the first controller and is symmetrically tunneled to the second controller.]
Figure 4-5 Multicast Forwarding While Roaming
When configuring multicast, the general recommendation is to configure one group per controller. Do not use the same multicast
group address on two different controllers. Otherwise, APs may receive and accept multicast packets coming from the wrong
controller.
Nevertheless, using distinct group addresses for each controller may result in multicast loops. In a loop scenario, a multicast packet
is sent by a controller to its APs, but also received by another controller that, seeing a multicast destination address, encapsulates it
and sends it to its own APs. The resulting packet, now encapsulated twice, reaches the first controller, where it is treated as a new
multicast packet, encapsulated again, and sent. This results in an endless loop. To avoid this risk, controllers ignore any received
multicast packet sent to a CAPWAP port (UDP 5246 and 5247) or Lightweight Access Point Protocol (LWAPP) port (UDP
12222 and 12223). For this reason, you should not configure applications to send multicast traffic to those ports. (Configure these
applications to use other ports.)
Multicast in Mesh Networks
Multicast can be an issue for mesh networks. For example, video feeds sent by a camera attached to a mesh AP (MAP) to a multicast
address travel across the backhaul to the controller, and then are sent back to all APs on the backhaul (as per the standard multicast
forwarding mechanism when traffic comes from a client on the wireless side). When multiple cameras are deployed, the backhaul can
easily be overloaded. Therefore, mesh networks offer three possible modes for multicast traffic forwarding, as shown in Figure 4-6:
■ Regular: This was the default mode on controller code 5.1 and older. Multicast traffic forwarding follows the standard rules, and a multicast packet received by a mesh AP is forwarded to all its ports: toward the root access point (RAP), down the backhaul to the other MAPs, and to the wired ports of the AP (if a wired port is connected). This mode can easily overload the backhaul if many multicast sources are present.
■ In-only: A mesh AP forwards multicast traffic only on the way up, toward the RAP (never down the backhaul to other MAPs). RAPs do not forward multicast packets coming from the wired interface toward the backhaul. MAPs always drop multicast packets coming from the RAP direction, and accept multicast packets only if they come from a child AP and are sent toward the RAP.
■ In-out: This is the default mode on controller code 5.2 and later, and is a hybrid mode. MAPs receiving a multicast packet from a wired or wireless client forward this packet only toward the RAP (never down the backhaul to other MAPs), but the RAP does forward to the backhaul multicast packets coming from the wired interface. When such a multicast packet comes from the RAP, MAPs relay the packet down the backhaul.
[Figure: the three modes side by side. In regular mode (default in WLC code 5.1 and older), multicast is forwarded both up and down the backhaul; in in-only mode, MAPs forward multicast only toward the RAP and the RAP does not send it back down; in in-out mode (default in WLC code 5.2 and later), MAPs forward only toward the RAP, but the RAP forwards wired-side multicast down the backhaul.]
Figure 4-6 Mesh Multicast Modes
You can configure the multicast mode for mesh networks from the controller CLI with the following command:
config mesh multicast {regular | in | in-out}
You can verify the multicast configuration for mesh by issuing this command:
show mesh config
(Cisco Controller) >show mesh config
.../...
Mesh Multicast Mode.............................. In-Out
Intercontroller Mobility Messaging
When a new client associates with a controller, the controller informs the members of its mobility list about this new client. This helps
expedite the roaming process. (When the client joins an AP on another controller, the new controller knows which controller the client
came from, and where to ask credentials for this client.) These messages can be sent as unicast to each controller in the mobility list.
You can also use multicast for these messages. (It can be the same multicast address for all mobility groups known to the controller,
or a different multicast address for each known mobility group.) Using multicast reduces the number of packets exchanged between
controllers. You can configure the multicast addresses for mobility messaging from Controller > Mobility Management > Mobility
Multicast Messaging, as shown in Figure 4-7. Check Enable Multicast Messaging. You can then enter a multicast address for all
the controllers in your mobility group, and click the name of each mobility group known to the local controller to enter the same, or
another, multicast address that will be used for mobility messaging. Make sure to configure all controllers the same way (so that the
other controllers are aware of the address they should subscribe to in order to receive mobility information).
[Figure: two mobility groups; when a client associates, the controller announces the new client to the other mobility group members with a Mobile Announce message.]
Figure 4-7 Mobility Messaging
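From the controller CLI, the local mobility multicast address can typically be set as follows (the group address is only an example; verify the result with show mobility summary):
(Cisco Controller) >config mobility multicast-mode enable 239.1.1.100
(Cisco Controller) >show mobility summary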
Broadcast Forwarding
By default, broadcast forwarding is also disabled on a controller, regardless of your multicast forwarding configuration. Broadcast
packets coming from the wired side are dropped and not forwarded to the APs and their clients. You can enable broadcast
forwarding on the controller from the Controller > General page (refer to Figure 4-2). Notice that broadcast forwarding is configured independently of the multicast policy, but it follows the multicast forwarding mode you configured. If multicast forwarding
is disabled or set to Multicast/Unicast and you enable broadcast forwarding, each received broadcast packet is encapsulated into
CAPWAP and sent to each AP through a unicast packet. This mode is less efficient because each broadcast packet is duplicated as
many times as there are APs associated to the controller. If Multicast/Multicast is enabled and you enable broadcast forwarding,
the controller uses the AP multicast group address to forward broadcast packets. Each received broadcast packet is encapsulated
into CAPWAP and sent, once, to the controller AP multicast group address. In both cases, each AP receiving the broadcast packet
forwards it to all its WLANs on all its radios.
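From the controller CLI, broadcast forwarding can typically be toggled with the following command (show network summary displays the resulting broadcast and multicast modes):
(Cisco Controller) >config network broadcast enable
(Cisco Controller) >show network summary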
Multicast on Autonomous APs
Multicast over autonomous APs is simpler than over a controller-based solution because the autonomous AP is a Layer 2 device that directly translates frames between the 802.11 and the 802.3 domains. When a wireless client associates to an autonomous AP and registers for a multicast stream, the uplink router registers that a join message was received from the AP port and forwards the multicast traffic to the AP. When several WLANs are configured on a single AP radio, each WLAN is mapped to a VLAN. The autonomous AP receives the multicast traffic from the switch on a given VLAN, and forwards it to all SSIDs associated with the VLAN at the highest mandatory rate.
This mechanism is simple. However, you may still need to manage two items relevant to multicast: IGMP snooping and the
WorkGroup Bridge case.
IGMP Snooping
When a wireless client roams from one AP to another AP (even connected to the same switch), IGMP snooping or CGMP prevents
the switch from forwarding the multicast traffic to the new AP until the client sends a membership report, as shown in Figure 4-8
(phase 1); then, the switch learns that the new AP also needs the multicast flow and starts forwarding the multicast traffic to the new
AP. This process may be disruptive for the client application using the multicast flow. To resolve this issue, check the Snooping
Helper check box (it is checked by default) in the IGMP Snooping section of the AP Services > QoS > Advanced page, as illustrated
in Figure 4-8. With this feature, the AP sends a general IGMP query to the network infrastructure on behalf of the client every time
the client associates or reassociates to the AP (phase 2). By doing so, the multicast stream is maintained for the client as it roams
(phase 3). Notice that the client still has to send a membership report for the new AP to be aware of the client multicast group
membership and start forwarding the received multicast flow over the WLAN.
Figure 4-8 IGMP Snooping on IOS APs
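On most autonomous IOS releases, the check box described above maps to a global configuration command similar to the following (verify availability and exact syntax on your release):
ap(config)# dot11 igmp snooping-helper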
WorkGroup Bridges
WorkGroup Bridge (WGB) is a special mode of an autonomous AP, in which the AP acts as a sort of "shared wireless NIC" and connects several non-802.11 wired clients to the wireless network. The WGB is seen by the wireless infrastructure as a client of a special type (WGB).
Note: This configuration is done on the autonomous AP, not on the WGB.
When multicast packets are sent in a WLAN, they are not acknowledged. (Only unicast frames are acknowledged, not multicast or broadcast frames.) Therefore, there is no way to control or verify proper reception by all receivers. This might be an issue when several clients that require the multicast flow are connected to the same WGB and the WGB does not receive the multicast flow properly. To guarantee delivery to the WGB, you can enable the Reliable Multicast to WGB feature on the relevant AP radio web configuration page. You can configure the same feature from the AP CLI radio configuration submode with the command infrastructure-client. infrastructure-client and Reliable Multicast to WGB are the same feature; the name is just different on the AP CLI and web interface.
When this feature is enabled, the AP first sends the multicast packet to the multicast Layer 2 address, but also sends it a second time
to the WGB, encapsulated into a unicast frame. The AP uses four MAC addresses in the header rather than three. (Address one is the
WGB, address two the BSSID/AP MAC address, address three the multicast MAC address, and address four the MAC address of the
original source of the multicast packet.) Because the frame is unicast, it is acknowledged by the WGB, thus ensuring proper delivery.
This feature results in the same multicast packet being sent several times (once in multicast mode, then as many unicast frames as
there are WGBs in the cell). For this reason, an autonomous AP does not accept more than 20 WGB clients per WLAN when this
feature is enabled.
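As a reminder of where the command sits, the CLI equivalent described above is entered in the radio configuration submode (radio 0 is used here as an example):
ap(config)# interface dot11Radio 0
ap(config-if)# infrastructure-client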
Troubleshooting Multicast
Troubleshooting multicast in a wireless deployment (analyzing why a multicast flow is not properly received by wireless clients)
consists of two parts:
■ Start by checking that your wireless infrastructure is configured properly to support multicast, as explained throughout this chapter.
■ Then, verify whether your wired infrastructure is forwarding multicast properly.
You can use the show ip mroute command to display on a router the state of multicast sources and groups (thus checking what
multicast groups the router knows and forwards). show ip mroute displays the multicast routing table, allowing you to check
incoming and outgoing interfaces for each group, along with RP information. You can also use the show ip pim command to list
discovered PIM neighbors (show ip pim neighbor) or list the interfaces (show ip pim interface) where PIM is enabled. When using
sparse mode, use the show ip pim rp family of commands to verify the RP information for each supported multicast group.
On a switch, use the show ip igmp snooping family of commands to verify the switch IGMP configuration and behavior.
You can use these commands on the wired side of the network to verify that the multicast flow does reach the controller, and that a
multicast packet sent from a controller to its APs is forwarded properly by all hops of the wired infrastructure, to all APs that need
that flow. Examples 4-1 and 4-2 show such a verification. Example 4-1 is a capture on the first router after the controller (toward the
APs). The controller management/AP manager IP address is 10.1.1.10, and the controller AP multicast group is 239.129.1.5. You want
to see an entry listing the controller and multicast group address pair (showing the controller as a multicast source), along with the
interfaces associated with this pair.
Example 4-1
WLC to AP Multicast Forwarding Verification
Router# show ip mroute
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
L - Local, P - Pruned, R - RP-bit set, F - Register flag,
T - SPT-bit set, J - Join SPT, M - MSDP created entry,
X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
U - URD, I - Received Source Specific Host Report, Z - Multicast Tunnel
Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode
(*, 239.129.1.5), 1w0d/stopped, RP 0.0.0.0, flags: DP
Incoming interface: Null, RPF nbr 0.0.0.0
Outgoing interface list: Null
(10.1.1.10, 239.129.1.5), 00:00:22/00:02:37, flags: PT
Incoming interface: Vlan10, RPF nbr 0.0.0.0, RPF-MFD
Outgoing interface list: Null
(*, 224.0.1.40), 1w0d/00:02:09, RP 0.0.0.0, flags: DCL
Incoming interface: Null, RPF nbr 0.0.0.0
Outgoing interface list:
Vlan50, Forward/Sparse-Dense, 1w0d/00:00:00
Example 4-2 was captured on the last router before an AP. In this second example, you want to verify that the interfaces toward the
APs are listed as outgoing interfaces for the WLC AP multicast group address.
Example 4-2
WLC to AP Multicast Forwarding Verification (Cont.)
Router# show ip mroute
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
L - Local, P - Pruned, R - RP-bit set, F - Register flag,
T - SPT-bit set, J - Join SPT, M - MSDP created entry,
X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
U - URD, I - Received Source Specific Host Report, Z - Multicast Tunnel
Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode
(*, 224.0.1.40), 00:57:24/00:02:09, RP 0.0.0.0, flags: DCL
Incoming interface: Null, RPF nbr 0.0.0.0
Outgoing interface list:
FastEthernet0/0.50, Forward/Sparse-Dense, 00:56:48/00:00:00
FastEthernet0/0.18, Forward/Sparse-Dense, 00:57:24/00:00:00
(*, 239.129.1.5), 00:57:12/00:02:12, RP 0.0.0.0, flags: DC
Incoming interface: Null, RPF nbr 0.0.0.0
Outgoing interface list:
FastEthernet0/0.50, Forward/Sparse-Dense, 00:56:48/00:00:00
FastEthernet0/0.100, Forward/Sparse-Dense, 00:57:13/00:00:00
Chapter 5
Video and High-Bandwidth
Applications over Wireless
Before designing a wireless network for high-bandwidth applications, you should run a site survey to evaluate the bandwidth
requirements of each application to be supported. Proper capacity planning is a key element for success. You need to know what data
rate and what bandwidth will be needed in each area of your cell. This implies understanding the bandwidth consumption of each
type of frames. One important bandwidth-intensive application is video. This section helps you determine the bandwidth consumption
needed by video applications, optimize multicast video traffic with VideoStream, and plan for a migration to 802.11n for increased
data rates and throughput.
Bandwidth Requirements
Frames Bandwidth Consumption
Keep in mind that bandwidth consumption has to take into account the application needs and the additional header overhead, with the
following standard values:
■ For voice traffic, a total of 40 bytes is allocated to the IP header (20 bytes), User Datagram Protocol (UDP) header (8 bytes), and Real-time Transport Protocol (RTP) header (12 bytes).
■ For video traffic, a total of 28 bytes is allocated to the IP header (20 bytes) and the UDP header (8 bytes).
■ With voice traffic, Compressed Real-time Transport Protocol (cRTP) reduces the IP, UDP, and RTP headers to 2 or 4 bytes. (cRTP is not available over Ethernet, and not available for video traffic.)
■ 6 bytes are allocated to the Multilink PPP header or the FRF.12 (Frame Relay Forum) Layer 2 header (when applicable).
■ 1 byte is used for the end-of-frame flag on MP and Frame Relay frames (when applicable).
■ 18 bytes are used for Ethernet Layer 2 headers, including 4 bytes for the frame check sequence (FCS) or cyclic redundancy check (CRC) (when applicable).
■ A standard (nonencrypted) 802.11 header in a standard indoor cell environment uses 18 bytes for the MAC address part (3 addresses), 6 bytes for the 802.11-specific sections of the header (2 bytes for Frame Control, 2 bytes for Duration/ID, and 2 bytes for Sequence Control), and 4 bytes for the FCS, resulting in a 28-byte overhead.
■ If the frame is Wireless Multimedia (WMM), a 2-byte 802.11e section is added to the header, resulting in a 30-byte overhead.
■ Wired Equivalent Privacy (WEP) encryption adds 8 bytes, Wi-Fi Protected Access/Temporal Key Integrity Protocol (WPA/TKIP) adds 20 bytes, and WPA2/Advanced Encryption Standard (WPA2/AES) adds 16 bytes to the frame.
■ 802.11n adds 4 bytes with an HT Control field.
■ If the frame is transmitted over a wireless backbone (AP-to-AP link), a fourth address (6 bytes) may be used on top of the first three.
Therefore, you need to determine the 802.11 mode to be used to determine the 802.11 overhead, which can range from 28 bytes (no
encryption, no WMM, no 802.11n, in a standard cell) to 60 bytes (WPA/TKIP with WMM and 802.11n sent over an AP-to-AP link).
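As a quick sanity check of the 60-byte maximum, simply add the components listed above: 28 bytes (three-address header plus FCS) + 2 bytes (WMM QoS Control) + 4 bytes (802.11n HT Control) + 6 bytes (fourth address on an AP-to-AP link) + 20 bytes (WPA/TKIP) = 60 bytes.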
The entire 802.11 frame, with its physical header (preamble and various parts of the physical header), its MAC header, and its
payload and 802.11 FCS, is called a physical protocol data unit (PPDU). If you remove the physical header, you have the physical
layer service data unit (PSDU). The PSDU is the Layer 2 frame (Layer 2 header, payload, and FCS). The PSDU is also called MAC
protocol data unit (MPDU). PSDU is the name used when looking at the frame from the physical layer (showing what the physical
layer is going to carry). MPDU is the name used when looking at the frame from the MAC layer (showing the entire frame). The
names are different but designate the same element. If you remove the MAC header and only keep the payload, you have the MAC
layer service data unit (MSDU). An easy way to remember these acronyms is to tell yourself that PDU is the entire content (PPDU
for the content down to the physical layer, and MPDU for the entire content down to the MAC layer) and that SDU is the carried
content (PSDU to designate the content carried by the physical layer; that is, the MAC header, payload and MAC FCS, and MSDU
for the payload carried by the MAC layer). This information is summarized in Figure 5-1.
[Figure: the 802.11 frame components. The PPDU is made of the PHY preamble, the PHY header, and the PSDU/MPDU. The PSDU/MPDU contains the Frame Control (2 octets), Duration/ID (2 octets), Address 1 (6 octets), Address 2 (6 octets), Address 3 (6 octets), Sequence Control (2 octets), Address 4 (6 octets, only in AP-to-AP links), QoS Control (2 octets, only in WMM and 802.11n frames), HT Control (4 octets, only in 802.11n frames), the MSDU (0 to 2304 octets in a standard 802.11 frame, 0 to 4065 octets in an A-MPDU block, 0 to 7935 octets in an A-MSDU block; encryption bits are added on top of, and not counted in, the maximum size), and the FCS (4 octets).]
Figure 5-1 802.11 Frame Components
This information may be useful for the wireless engineer because the maximum payload size for 802.11 (MSDU max size) is 2304
bytes for a standard frame. Maximum MPDU is 2346 bytes. The additional bytes related to encryption are not counted within the
maximum size. This means that you can send a 2304-byte payload, encrypt it with TKIP (thus adding 20 bytes), and then add a 30-byte header (WMM in a standard cell) without exceeding the maximum transmission unit (MTU) (although the total frame is then
2354 bytes long). With 802.11n, frames can also be sent in blocks. You can aggregate several MSDUs, with one common Layer 2
and Layer 1 header. (This is called aggregated MSDU, or A-MSDU.) You can also aggregate several MPDUs, each MPDU with its
own Layer 2 header, and the entire block sharing a common physical header. (This is called Aggregated MPDU, or A-MPDU.) The
maximum size for a MSDU inside an 802.11n A-MPDU block is 4065 bytes. The maximum size for an 802.11n A-MSDU block
is 7935 bytes. An MSDU inside an A-MPDU is smaller, but you can have several MSDUs inside the A-MPDU (for up to a 64-KB
A-MPDU), whereas the A-MSDU size is limited to 7935 bytes.
These frame sizes may play a role in your bandwidth determination because some applications (such as video) may send large frames
that might need to be fragmented. Nevertheless, the maximum frame size on the Ethernet side of the network (except when jumbo
frames are supported) is usually 1500 bytes. Therefore, it is uncommon to see wireless frames reach these thresholds.
Unlike unicast frames, multicast and broadcast frames are not acknowledged. An acknowledgment frame is 14 bytes and is sent
at the highest mandatory rate equal to or below the rate at which the data frame was sent. Suppose an 802.11a network
where 6-Mbps, 12-Mbps, and 24-Mbps data rates are set to Mandatory. A unicast frame sent at 54 Mbps (or even 144.4 Mbps, with
802.11n) would be acknowledged at 24 Mbps. A unicast frame sent at 18 Mbps would be acknowledged at 12 Mbps. A unicast
frame sent at 12 Mbps would also be acknowledged at 12 Mbps. Acknowledgments impact the throughput of the network because
of the time they take to be transmitted, and the silences (interframe spaces) before and after. At 24 Mbps, with 802.11g, the ACK
adds 31 microseconds to the transmission (10 microseconds for the 802.11b/g short interframe space [SIFS] after the data frame,
16 microseconds for the ACK physical header, and 4.67 microseconds for the ACK 802.11 frame). With 802.11a, where the SIFS
is 16 microseconds, the ACK would add 37 microseconds to the communication. At 1 Mbps, the ACK adds 314 microseconds to
the transmission (10 microseconds for the 802.11b/g SIFS after the data frame, and 304 microseconds for the ACK 802.11 physical
header and frame). You can see that an ACK sent at 1 Mbps takes 10 times more airtime than the same ACK sent at 24 Mbps.
The impact on throughput also depends on the data frame size. Sending a 2346-byte frame (2304 bytes of data plus headers and
FCS) at 54 Mbps takes 348 microseconds. The ACK represents a 0.1% overhead. Sending a QoS null frame (a data frame with an
empty body) takes 64 microseconds. The ACK represents a 58% overhead. This is why the previous chapters explained that smaller
frames are more affected by the overhead than larger frames. The same 2346-byte frame sent at 1 Mbps takes 18,768 microseconds
(or 18.768 ms) to be sent. Although the ACK itself represents a low percentage (1.2%), sending a frame at 1 Mbps takes 53 times
longer than at 54 Mbps. (The difference is even greater when using 802.11n rates.) This is why the previous chapters explained that
you needed to build small cells, where low data rates are disabled, for Voice over WLANs (VoWLAN). This is also true for any
application that is time sensitive or bandwidth intensive.
The duration is actually longer for the data frame transmission because the emitting station has to wait a certain number of
slot times before sending, typically 31 slot times (a slot time is 9 or 20 microseconds, as detailed in Chapter 1, “QoS for Wireless
Applications”) for the first attempt. The station counts down to 0, and the countdown is interrupted every time another station sends.
The total time to wait is therefore often longer. If the transmission fails (no acknowledgment), the emitter has to wait an extended
interframe space (EIFS) (refer to Chapter 1), typically 364 microseconds with orthogonal frequency-division multiplexing (OFDM), and then doubles the number of slot times and restarts the countdown from that new number. This mechanism means that
retries considerably reduce the overall throughput (which is exactly the purpose of this mechanism, as collisions occur because there
are too many devices trying to send in the same timeframe). A way to resolve this issue is to reduce the number of devices trying to
send in a given timeframe, by slowing down the transmissions. When the number of slots reaches CWMax (1023 slots), the station
keeps using CWMax for its later attempts.
Chapter 2, “VoWLAN Architecture,” also explained that G.711, using 50 frames per second, each carrying a 160-byte payload, with
an additional 40 bytes of overhead for IP, UDP, and RTP, and 28 bytes of overhead for the 802.11 header and FCS, represents 228 bytes to send (1824 bits), 50 times per second, thus consuming (50 x 1824) 91,200 bits per second, or 91.2 Kbps per stream (one stream up, one stream down, resulting in a bandwidth consumption of 182.4 Kbps). This value is an approximation, because it does not take
into account the fact that several 160-byte payloads may be grouped into one single frame (which would decrease the effect of Layer
2 overhead), but also does not take into account the distributed interframe space (DIFS) or arbitration interframe space (AIFS), the
slot times to wait, or the acknowledgments and retries in case of failure, each factor actually increasing the real bandwidth or airtime
consumption. When taking all these elements into account (no retries, but DIFS, plus a standard 31 slot time and standard ACK),
sending 50 G.711 packets at 1 Mbps takes 102.4 ms. Sending the same packets at 54 Mbps takes 6.24 ms. Sending 50 G.729 packets
takes 46.4 ms at 1 Mbps (each packet contains 20 bytes of audio data), and 5.20 ms at 54 Mbps. By using this time consumption logic
(airtime) rather than a pure bandwidth consumption logic, you can see that there is not enough airtime to have 10 concurrent G.711
streams (5 calls) coexist at 1 Mbps. The situation is worsened by the fact that this airtime determination does not take into account the
fact that some frames are re-sent (because of collisions or because of rate shifting), and ignores the other frames present in the cell that consume airtime (beacons, probe requests and responses, and so on). You also have to keep in mind that there is no time distribution between stations; in reality, the cell is sometimes idle while several stations are counting down. All these factors reduce the amount of real available airtime, and this is what led to the Cisco recommendation (in Chapter 2) for up to 20 simultaneous voice conversations for 802.11a and 14 simultaneous voice conversations for 802.11g (or 7 to 8 simultaneous voice conversations for 802.11b/g), when low data rates (lower than 11 or 12 Mbps) are disabled and cells are small enough to allow for a maximum loss rate of 1%.
Video Codec Types and Bandwidth Needs
Airtime is a concern for time-sensitive applications, like VoIP, but also a concern for bandwidth-intensive applications, like video.
One difficulty is that there are many types of video codecs. (Video codecs are techniques used to convert images into bits, compress
and decompress the bits, and restore the image on the receiving end.) The type of codec depends on the type of video flow:
■ Some video applications are used for two-way, real-time exchange of images (interactive video). A typical example is video conferencing. Users of these applications typically accept a lower (or at least sometimes imperfect) image quality, but expect real time. Requirements are similar to VoWLANs: Loss rate should not exceed 1%, and end-to-end delay should not exceed 150 ms (30 ms max in the cell), with a 30-ms jitter max. Quality of service (QoS) is usually AF41 for these applications. A
codec for this type of application needs a fast compression and decompression algorithm because images are received in real
time and users cannot wait for the incoming images to be decompressed (otherwise there would be a disconnect between the
sound and the image). The codec also needs to be error resistant. This means that one image cannot build on the information
sent in the previous image. Otherwise, a lost image may prevent several subsequent images from being displayed properly.
The codec also needs to be bandwidth efficient, meaning that it can render an image of acceptable quality even over slow WAN links. Ideally, the codec should also be able to suppress camera artifacts (shadows created by the camera).
■
Some video applications are used for streaming (a video feed is sent to a receiver, then played). These applications are usually
not real time. Some images can be buffered, decompressed, and then displayed while the machine decompresses later images
and simultaneously receives others. Requirements are closer to data applications: Loss rate should not exceed 5%,
acceptable delay is up to 4 to 5 seconds, and there are no specific jitter requirements. QoS is typically CS4 for corporate
videos, and may be lower for personal videos. A good codec for these applications should offer a high image quality.
Compression and decompression must be fast enough to allow for a smooth video. Its exact speed depends on the receiver
CPU and its ability to buffer the incoming video. Bandwidth efficiency and error resistance are less of a concern than quality rendering.
Common video conferencing codecs include ITU H.320 (for PSTN or video conferencing over ISDN), but also the ITU H.323 and its
subprotocols, heavily used over IP networks and mentioned in Chapter 2. H.323 also contains subprotocols defining communication
modes and codecs for video, such as H.261, H.263, and H.264.
Common streaming video codecs include the codecs developed by the Moving Picture Experts Group (MPEG):
■
MPEG-1: The first-generation format, defining a basic compression standard for audio and video, and including the Audio
Layer 3 (MP3) audio codec.
■
MPEG-2: The second-generation format for DVD and broadcast-quality television.
■
MPEG-4: MPEG-3 was never released, and MPEG-4 achieves higher compression factors than MPEG-2 for the same
quality.
Most streaming video codecs use the MPEG format for the compression part, but can use different techniques and algorithms for
the decompression part. Theora, Xvid, DivX, wmv, and RealVideo are encoding techniques that allow MPEG decompression. Some
codecs allow the addition of Audio Video Interleave (AVI) to add an audio stream to the video stream.
The bandwidth consumption of a video flow depends on the codec, but also on the image type. On one end of the spectrum, a low-quality
video camera can send a video feed with an image of small format (320 x 240 pixels, or image points, or even less), in black and white
(each pixel is black or white, thus consuming 1 bit), with the image being refreshed once per second. On the other end
of the spectrum, a high-quality application can send large images (for example, 2048 x 1536 pixels), in full color (32 bits for each
pixel), with a refresh rate of 60 images per second. The image size, color depth, and refresh rate affect the bandwidth consumption.
For example, a 320 x 240 black-and-white image encoded with MPEG-2 represents 2 KB (16 Kb). With 10 images per second, the
payload bandwidth consumes 0.16 Mbps (to which you have to add the 802.11 header and overhead). An uncompressed (raw, no
codec) 640 x 480 full-color (32-bit) image sent 24 times per second consumes close to 236 Mbps.
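As a quick check of these figures, raw video bandwidth is simply resolution x color depth x refresh rate; a compression ratio can stand in for the codec. The following minimal Python sketch uses a compression ratio that is only an assumption, chosen to match the 2-KB MPEG-2 example above.

# Video bandwidth from resolution, color depth (bits per pixel), and refresh
# rate; an optional compression ratio approximates the effect of a codec.

def video_bandwidth_mbps(width, height, bits_per_pixel, fps, compression=1.0):
    bits_per_image = width * height * bits_per_pixel / compression
    return bits_per_image * fps / 1_000_000

# 320 x 240 black and white (1 bit/pixel), about 16 Kb per image after
# compression, 10 images per second: roughly 0.16 Mbps of payload.
print(round(video_bandwidth_mbps(320, 240, 1, 10, compression=4.8), 2))
# 640 x 480, 32-bit color, 24 images per second, no codec: about 236 Mbps.
print(round(video_bandwidth_mbps(640, 480, 32, 24)))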
For video, the Layer 4, 3, and 2 overhead does not add a significant percentage to the frame size. Adding 70 bytes overhead to
account for Layer 4, Layer 3, and 802.11 overhead increases the 2 KB frame size by only 3%. The effect is even lower with better
resolutions. An important issue comes from the frame size. The frame is large and may sometimes exceed the 802.11 MTU of 2346
bytes. In that case, the frame is fragmented and sent in bursts, as shown in Figure 5-2.
[Figure content: the 802.11 MAC header (Frame Control, Duration/ID, Address 1 through Address 4, Sequence Control, Frame Body, FCS); the Frame Control subfields (Protocol Version, Type, Subtype, To DS, From DS, More Frag, Retry, Power Mgmt, More Data, Prot. Frame, Order); the Sequence Control subfields (Fragment Number, Sequence Number); and a fragment burst timeline in which a DIFS precedes Fragment #1 and each fragment is followed by a SIFS, an ACK, and another SIFS before the next fragment, up to the last fragment and its ACK.]
Figure 5-2 802.11 Fragment Burst
In a standard 802.11 fragment burst, the More Fragments bit of the Frame Control field is set to 1 for all fragments except the last one. The Sequence
Control field mentions the burst sequence number (same number for all fragments in a burst), then the fragment number within
the burst (incremented for each new fragment in a given burst). The effect of a fragment burst on the cell relates to the medium
reservation mechanism. A normal frame (nonfragmented unicast data frame) reserves the medium (through the Duration field) for the
data frame transmission, one SIFS, and one ACK. When the frame is a fragment (except the last fragment), the Duration field reserves
the medium for the fragment transmission, one SIFS, one ACK, but also for the next fragment and subsequent SIFS and ACK, as
shown in Figure 5-2. This process ensures that the burst is not interrupted by other stations. Its consequence is that no other station
can send during the burst. When frames are large and sent at a slow data rate, other stations may be prevented from sending for a very
long time (at the scale of the cell). The burst can get interrupted only if one fragment is not acknowledged (in which case, the burst
stops). The effect of the burst depends on the data rate and the size of the payload. For example, sending a 4-KB payload in a burst of
two fragments takes less than 1 ms at 54 Mbps. Sending a 9.6 KB payload in a burst of five fragments at 1 Mbps takes close to 80 ms.
During that time lapse, no other station can send or receive.
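The fragment burst math can be sketched the same way. The following Python example cuts a payload at the 2346-byte fragmentation threshold and adds a SIFS + ACK + SIFS after each fragment (one DIFS before the burst); the constants are simplified assumptions, but the results land close to the figures quoted above.

# Approximate airtime of an 802.11 fragment burst. Each fragment is followed
# by SIFS + ACK + SIFS; one DIFS precedes the burst. Illustrative constants.
import math

def fragment_burst_ms(payload_bytes, rate_mbps, preamble_us, sifs_us, difs_us,
                      frag_threshold=2346, mac_overhead_bytes=36, ack_bytes=14):
    fragments = math.ceil(payload_bytes / (frag_threshold - mac_overhead_bytes))
    frag_payload = payload_bytes / fragments
    frag_us = preamble_us + (frag_payload + mac_overhead_bytes) * 8 / rate_mbps
    ack_us = preamble_us + ack_bytes * 8 / rate_mbps
    return (difs_us + fragments * (frag_us + sifs_us + ack_us + sifs_us)) / 1000

# 9.6-KB payload in five fragments at 1 Mbps (802.11b timing): close to 80 ms
print(round(fragment_burst_ms(9.6 * 1024, 1, preamble_us=192, sifs_us=10,
                              difs_us=50), 1), "ms")
# 4-KB payload in two fragments at 54 Mbps (OFDM timing): well under 1 ms
print(round(fragment_burst_ms(4 * 1024, 54, preamble_us=20, sifs_us=16,
                              difs_us=34), 2), "ms")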
Notice that 802.11e and WMM altered the burst process in several ways:
■
802.11e allows for contention-free bursts. When a station wants to send several frames in a row, it can send them in a burst.
The frames are separated by a SIFS, ACK, and then SIFS before the next frame. This process works for entire frames, not
fragments. (One station can send several frame payloads in a burst, but a fragment burst is different because it is sending one
payload, too large to be sent in a single frame.) Do not confuse 802.11e burst with fragment bursts.
■
Transmit opportunity (TXOP) limits the airtime (and therefore the number of frames) that a station can consume per channel access, for each AC.
This ensures that a station cannot starve the others by consuming more than what the TXOP allows. The TXOP value is decided on the
controller by the Enhanced Distributed Channel Access (EDCA) profile (Voice Optimized, Voice & Video Optimized, and so
on) and can also be configured on the autonomous access point (AP). Regardless of what the station needs to send (fragment
bursts or frame blocks), a single station's airtime is limited by the AP and the EDCA profile. Distributed Coordination
Function (DCF) stations (non-WMM stations or applications) can still send standard 802.11 bursts, and many video applications are
not WMM yet.
■
802.11e (and 802.11n) allow for block exchanges, where several MPDUs or MSDUs are grouped and sent as a larger frame.
This was described at the beginning of this chapter and is further examined at the end of this chapter. Do not confuse a block
(which is a single, larger frame) with a burst (which is several individually acknowledged frames, each containing an entire
payload or a fragment).
VideoStream
Beyond the issues related to frame sizes and 802.11 overhead that can affect any wireless traffic and application (and that you have
to take into account when designing a network with VoIP or any time-sensitive or bandwidth-intensive application), video also
suffers from an additional issue: It is often sent through multicast. This mode saves bandwidth in wired networks, but because of
the differences between Ethernet and 802.11 networks, video over multicast creates three negative consequences when sent over a
wireless cell:
■
Multicast frames are not acknowledged: Corrupted frames are not detected by the sender and so are not retransmitted. This is an issue
for codec algorithms that depend on previous frames. Many codecs send a burst to build an image, and then only send
information about the part of the image that changes (for example, an object moving in a still landscape) until the image
changes and is refreshed completely. One or several missing packets may prevent the image from being refreshed properly
(resulting in large pixels in some areas of the image), or may prevent the new image from being built and displayed properly
(resulting in the entire image being pixelized). Frame corruption can be limited to one or a few recipients that suffer from
locally degraded cell conditions or can affect all recipients if the frame gets corrupted (due to collisions or interference) at
the AP level. In both cases, the frame corruption cannot be corrected and the frame cannot be re-sent (because there is no
acknowledgment or feedback mechanism in 802.11). This is a major issue in wireless cells, where a 5% to 8% packet loss
(and retry) is common. With nonacknowledged frames, packet error rate (PER) is not the retry percentage anymore, but
becomes the effective loss rate.
■
Multicast frames are sent at the highest mandatory rate in Cisco wireless networks: The situation is even worse with
some other solutions that treat multicast frames as broadcasts and send them at the lowest mandatory speed. By default, 1
Mbps, 2 Mbps, 5.5 Mbps, and 11 Mbps are the mandatory rates in an 802.11b/g/n cell. These rates were defined as mandatory
by default by the 802.11 Working Group, to ensure that all stations can demodulate quadrature phase shift keying (QPSK),
binary phase shift keying (BPSK), and both forms of complementary code keying (CCK). 6 Mbps, 12 Mbps, and 24
Mbps are mandatory in OFDM cells (802.11a/n, or 802.11g/n cells [with 802.11b rates disabled]). These rates were defined as
mandatory by default by the 802.11 Working Group, to ensure that all stations can demodulate QPSK, BPSK, and quadrature
amplitude modulation (QAM). The consequence is that, in a worst-case scenario, an 802.11b/g/n station communicates with
an AP at 144.4 Mbps and requests a video stream, sent via multicast; the stream is sent at 11 Mbps. An 802.11a/n station
communicates with an AP at 300 Mbps over a 40-MHz channel and requests a video stream, sent via multicast; the stream is
sent at 24 Mbps over a 20-MHz channel. (None of the 802.11n rates are mandatory.)
■
Multicast frames are forwarded as best effort between the controller and the AP: This limitation is due to the fact that
clients of several QoS levels may be requesting the same video stream, making it impossible to choose the “right” QoS level.
A consequence of this best effort transmission is that some video frames may be dropped or delayed between the controller
and the AP in case of congestion. Once on the AP, the multicast frame is sent at the QoS level set for the WLAN (for example,
Platinum, if the WLAN QoS level in the WLAN > QoS tab is set to Platinum). This mode is optimal if your network
separates applications per WLAN; for example, there is a voice-only WLAN, and another video-only WLAN. In mixed-cell
WLANs (for example, with voice and video devices), using the highest queue results in multicast packets competing with
highest-priority packets, which in this example may degrade voice calls.
All these limitations result in a degraded user experience when multicast video is sent over the wireless cell. To resolve this issue,
Cisco wireless LAN controller (WLC) code 7.0.98 and later supports a new feature called VideoStream. VideoStream is configured
on the controller and combines several elements that work together to improve the video user experience. VideoStream enables
you to define multicast streams on the controller and reserve bandwidth on the AP radios for these streams. (This is called resource
reservation control [RRC]). You can give a priority value to each stream, to privilege the streams of higher importance (and reject
the other streams of lower importance, or decide to send them as standard multicast flows). Specific admission control is also put in
place to ensure that the AP radio is not used beyond a configurable percentage level. Admitted streams are forwarded between the
controller and the AP using the video QoS marking (AF41, CoS 4). The AP then converts the multicast frame into unicast frames
(done in hardware using Direct Memory Access [DMA] to make it efficient and light on the AP CPU) and sends it to each client as
a fast unicast flow (each client receiving the frame at the optimal speed based on the client position in the cell). Figure 5-3 shows the
VideoStream process.
1. Client sends IGMP join.
2. WLC intercepts IGMP join.
3. WLC sends AP RRC request.
4. AP sends RRC response.
5. WLC forwards join request.
6. Multicast source sends IGMP join response.
7. Multicast stream sent.
8. WLC forwards multicast stream to AP.
9. AP converts stream to unicast and delivers to client.
Figure 5-3 VideoStream Process
When using multicast video over wireless, you should enable VideoStream. To enable VideoStream on a controller, follow these
steps:
STEP 1. Enable multicast support on the controller, as explained in Chapter 4, “Multicast over Wireless,” and shown in Figure
4-2. Enable multicast, set the mode to Multicast/Multicast, and also enable IGMP snooping.
STEP 2. Enable VideoStream, by enabling Multicast Direct. You can do so on a per-band level from Wireless > 802.11a |
802.11b/g > Media > Media. Just check the Multicast Direct Enable check box in the Media Stream - Multicast Direct
Parameters section of the page. You can also enable Multicast Direct globally, for all bands, by checking Multicast
Direct under Wireless > Media Stream > General, as shown in Figure 5-4. Enabling the feature here populates some of
the other configuration parameters on the controller for VideoStream. After enabling Multicast Direct globally, you must
activate the feature on a per-WLAN basis. As soon as you check the Multicast Direct Enable check box on the Wireless
> Media Stream > General page, a new check box appears in each WLAN QoS tab, called Multicast Direct. You need
to check this Multicast Direct option to activate VideoStream on the matching WLAN. Because this option does not appear when you enable Multicast Direct at a band level, the common practice is to activate Multicast Direct globally (so
that the Multicast Direct option appears for all WLANs) and then optionally disable Multicast Direct for the band where
it is not needed.
[Figure callouts: Multicast Direct can be enabled globally or for each band; once enabled globally, it can then be enabled or disabled for each WLAN. Pages shown: WLANs > Edit WLAN > QoS and Wireless > 802.11a/n (802.11b/g/n) > Media > Media.]
Figure 5-4 Enabling Multicast Direct
STEP 3. On each band where Multicast Direct is enabled (from the Wireless > 802.11a | 802.11b/g > Media > Media page),
you can enable admission control for media streams, as shown in Figure 5-5. This parameter is configured with the
same logic as voice call admission control (CAC) but applies to all media streams allowed on the controller (see Step
5). In the General section of the page, Unicast Video Redirect is enabled automatically as soon as you enable Multicast
Direct globally. This feature allows the AP to receive a multicast flow transmitted from the controller and convert this
flow into unicast frames that are sent at optimal speed to each client in the cell needing the video flow. This means that
if four clients in the cell require the video, the AP is going to send the frame four times, each time in a unicast frame
to each client. This might seem less efficient than sending the frame once as multicast, but keep in mind that sending
an 8-KB burst takes 246 microseconds at 130 Mbps and 1108 microseconds (four times more) at 11 Mbps. The AP can
send four times the unicast frame (each of them being acknowledged and re-sent if needed) in the time it takes to send
the frame once over multicast. This efficiency increases with the burst size. (Sending a 16-KB burst takes 301 microseconds at 130 Mbps, and 2036 microseconds [six times more] at 11 Mbps.) In the Multicast Direct Admission Control
section of the page, you can enable Video CAC for VideoStream. Do not confuse this feature with CAC in Wireless >
802.11a | 802.11b/g > Media > Video, which applies to all video flows. Media bandwidth on the Media page is the sum
of voice and video traffic on a radio interface. You can refine the individual max values for voice and video from the
relevant tab (Voice or Video). The total of each page cannot exceed the total that is configured on the Media page. With
Multicast Direct Admission Control, you can also decide the percentage of the AP bandwidth that can be allocated to
media streams (video and voice), set to 85% by default. You can also decide the minimum data rate (6 Mbps by default)
at which a client should be operating to be allowed to be allocated a media stream (voice or video). This feature enables
you to prevent the AP from sending unicast video streams to clients that are too slow to benefit from the unicast feed. As
packets are unicast and acknowledged, unacknowledged packets can be re-sent. You can decide how many of the lost
packets should be re-sent (default is 80%).
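The airtime argument behind Unicast Video Redirect can be illustrated with a simple comparison: replicating a frame as several acknowledged unicasts at each client's own (high) data rate versus sending it once as multicast at the highest mandatory rate. The sketch below uses simplified timing (no preambles, backoff, or retries) and illustrative client rates, so it demonstrates the principle rather than reproducing the exact microsecond figures quoted above.

# Compare replicating one video frame as N unicasts (each at the client's own
# rate, each acknowledged) against a single multicast at the mandatory rate.
# Simplified model, illustrative numbers.

def tx_time_us(frame_bytes, rate_mbps):
    return frame_bytes * 8 / rate_mbps

def unicast_replication_us(frame_bytes, client_rates_mbps, sifs_us=16, ack_us=30):
    return sum(tx_time_us(frame_bytes, rate) + sifs_us + ack_us
               for rate in client_rates_mbps)

frame = 8 * 1024                       # an 8-KB video burst
clients = [130, 130, 117, 104]         # four 802.11n clients at their own rates

print(round(unicast_replication_us(frame, clients)), "us as four unicast copies")
print(round(tx_time_us(frame, 11)), "us as one multicast at 11 Mbps")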
Note
Bandwidth-intensive applications, such as video or large file transfers, would benefit from the additional speed provided by 802.11n. Although 802.11n features benefit all clients, VoWLANs are usually not the main reason for 802.11n migration, because VoWLAN applications use small frames that do not require a high throughput, but just consistent delay and jitter.
Figure 5-5 Media Configuration
STEP 4. Lower down on the same page, you can further refine how many video streams are allowed per AP radio and per client.
Video will be sent using WMM marking, permitting video to be tagged with the appropriate (Gold) AC value. Therefore,
only WMM clients are expected (and allowed) to request and use the unicast video stream. If you have non-WMM
clients, you can also check the Best Effort QoS Admission check box to allow these clients to request and receive
VideoStream.
STEP 5. You can then define the multicast video streams that are allowed to be part of the VideoStream feature, from the Wireless
> Media Stream > Streams page shown in Figure 5-6, specifying for each flow the stream name, destination multicast
address (or address range), and expected bandwidth. Only the multicast flows listed on this page are converted into unicast frames on each AP having clients subscribed to the multicast group. The other multicast packets are sent as standard
multicast frames (highest mandatory rate) into the cell.
Lower down the same page, you can configure bandwidth reservation for the stream (RRC). RRC helps the controller
analyze the available bandwidth on the AP radio to only admit streams that have the space (and airtime) to be transmitted
properly. In this section of the page, you can allocate a priority level for each stream (from lowest priority 1 to highest
priority 8). If there is not enough space in a given cell for all the streams, the controller privileges the streams with the
higher priorities. You can also configure the average packet size and keep the RRC Periodic Update option checked to
force the controller to check the bandwidth availability at regular intervals and alter the multicast stream admission accordingly. You can also decide what should happen if there is not enough space in the cell to send the current stream as
unicast flows (Violation option); the stream can then still be admitted and sent as regular multicast (default choice) or
dropped.
If you are not sure about the exact bandwidth consumption of a stream, you can choose an existing template from the
predefined templates drop-down list. The controller then assigns the corresponding bandwidth to the
stream and sets the other parameters to default values (1200-byte average packet size, priority 4, RRC updates enabled,
violation Drop).
Figure 5-6 Video Streams Definition
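The admission logic described in this step can be pictured with a small sketch. This is not the controller's actual RRC algorithm, just an illustration of admitting streams by priority against the radio's media bandwidth budget and applying the Violation choice (fallback to regular multicast, or drop) to the rest; the stream names and values are hypothetical.

# Simplified, priority-based admission of media streams against a bandwidth
# budget (not the actual WLC RRC algorithm). Priority 8 is highest, 1 lowest.

def admit_streams(streams, radio_mbps, max_media_percent=85):
    budget = radio_mbps * max_media_percent / 100.0
    admitted, not_admitted = [], []
    for s in sorted(streams, key=lambda s: s["priority"], reverse=True):
        if s["mbps"] <= budget:
            budget -= s["mbps"]
            admitted.append(s["name"])
        else:                               # apply the stream's Violation choice
            not_admitted.append((s["name"], s["violation"]))
    return admitted, not_admitted

streams = [                                 # hypothetical streams
    {"name": "corp-tv",  "mbps": 4.0, "priority": 8, "violation": "fallback"},
    {"name": "training", "mbps": 3.0, "priority": 4, "violation": "drop"},
    {"name": "lobby",    "mbps": 5.0, "priority": 1, "violation": "fallback"},
]
print(admit_streams(streams, radio_mbps=10))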
STEP 6. From the Wireless > Media Stream > General page, in the Session Message Config section, you can configure what
message should be sent to users of a stream that has been dropped, as shown in Figure 5-4. You can enable the announcement state to send a message to the user, and you can decide whether you want to send a URL, email address, phone
number, or text message to the user. (Fields left blank are not sent.)
High-Bandwidth Applications Optimization with 802.11n
To enhance the support of high-bandwidth applications (such as video) over the wireless network, a common recommendation is to
use 802.11n. 802.11n describes three sets of techniques that can be implemented to improve the speed of stations at a given position
in the cell (802.11n can also be used to increase the cell size, but most deployments aim at increasing speed instead): multiple-input,
multiple-output (MIMO), channel bonding, and frame aggregation. A fourth, optional feature, short guard interval, is sometimes implemented.
802.11n Features
MIMO
MIMO relies on the fact that 802.11n stations and APs may have more than one radio circuit, which allows them to receive and send
more than one flow at a time. The 802.11n amendment allows for up to four parallel flows (called streams). Current deployments
typically allow for two streams. Some new APs (such as the Cisco Aironet 3600 AP, not covered by the IUWVN exam) allow for
three streams. MIMO allows for three different features, as illustrated in Figure 5-7:
■
The emitter can send different simultaneous signals from various radios and associated antennas (called radio chains). The
802.11n receiver acquires these signals on all of its radios. Each of the receive radios independently decodes the arriving
signals, and each received signal is then combined with the signals from the other radios. This results in additional throughput
(with a two-stream system, you use two radios and therefore send twice as many bits as a standard AP in the same time).
This process is called spatial multiplexing.
■
Because of multipath, a signal travels along different paths before reaching the receiver. This means that each receiving radio
receives the same signal several times (as many times as there are echoes of the signal bouncing against obstacles before
reaching the receiver), at slightly different points in time (because the time needed to reach the receiver depends on the
distance at which the signal bounces on an obstacle). With a technique called maximal-ratio combining (MRC), the receiver
can combine these signals received on each antenna and radio chain and align them. This technique uses multipath as an
advantage to increase the strength of the received signal. A stronger signal means a higher data rate (because a data rate is
determined based on the received signal strength indication (RSSI) and signal-to-noise ratio [SNR]) at any point of the cell.
■
The emitter can also participate in this logic and send the same signal from several antennas. By carefully coordinating
these signals based on the feedback transmitted by the 802.11n receiving station, the emitter aims at making these signals
be received in phase (that is, aligned and received exactly at the same time), thus increasing the signal power level at the
receiving station, allowing for longer range or higher throughput. This process is called transmit beamforming (TxBF).
TxBF implies feedback from an 802.11n receiver. To also help improve signals for non-802.11n clients, Cisco 802.11n APs use
a mechanism called ClientLink, by which the AP uses the signal received from the non-802.11n client on its various radio chains,
performs the MRC calculation to optimize the signal reception, and uses the same calculation to synchronize its signals when
responding to the client. This technique improves the non-802.11n client's performance by up to 40% in distance or throughput. Notice that the client
must use OFDM to benefit from this technology. Cisco APs use this technique automatically for 802.11a clients, as soon as their RSSI
falls below –60 dBm, and for 802.11g clients when their RSSI falls below –50 dBm. An AP can support up to 15 ClientLink clients
at a time. A new feature, called ClientLink 2, is implemented in controller code 7.2 to increase the number of supported non-802.11n
stations to 128, and also benefit 802.11n stations, but the IUWVN exam focuses on controller code 7.0.116.
[Figure panels: Maximal-Ratio Combining (MRC), where the 802.11n receiver aligns the copies of the signal received on its different radios; Transmit Beamforming (TxBF), where the 802.11n emitter ensures that the same signal sent from different radios reaches the receiver at the same time (802.11n receiver with feedback, non-802.11n receiver with Cisco ClientLink); and Spatial Multiplexing, where each emitter radio sends different information that the 802.11n receiver recombines.]
Figure 5-7 MIMO Features
Channel Bonding
802.11n allows for 40-MHz-wide channels, bonding two 20-MHz channels (36 and 40, for example, or 1 and 5). Within
this larger channel, subcarriers that were previously unused can be employed for data transmission, creating a 119-Mbps data rate
channel. Channel bonding is not recommended for the 2.4-GHz band because too few channels are available in that band for bonding
to improve the spectrum. In Cisco networks, channel bonding is supported only in the 5-GHz band.
Frame Aggregation
As explained at the beginning of this chapter, 802.11n stations can send aggregated frames. The frames can be aggregated at the
MSDU or the MPDU level (and are respectively called A-MSDU or A-MPDU). A-MPDU was initially thought more efficient
because each frame in the block had its own 802.11 header and its own encryption mode (theoretically allowing several destinations
in the same frame block burst, which makes sense because 70% of an average cell traffic flows from the AP to the clients).
Nevertheless, it later appeared that the entire block must be sent to the same destination (otherwise, the ACK frames from the various
receivers would collide). Also, the entire block is sent with one single QoS mechanism (countdown timer and AIFS), which also
limits the possibility of multiple recipients.
A-MSDU collects several payloads in a single frame (with one single 802.11 header, and therefore a single receiver, single QoS
level, and a single encryption mechanism). These characteristics make A-MPDU and A-MSDU similar in effect (one single
recipient for the entire block), but A-MSDU saves the overhead of multiple 802.11 headers. Therefore, A-MSDU was found to be
more efficient than A-MPDU and is the default mode on the Cisco solution (and most other vendors). With both modes, the block is
acknowledged in one global ACK frame.
Blocks were already described by 802.11e. The main difference is that the 802.11e block ACK frame is basically a block containing
each ACK frame for each frame in the data block, whereas the 802.11n block ACK frame is a smaller frame, with a simple index
listing the received frames (making the 802.11n block ACK more efficient than the 802.11e block ACK).
802.11e also describes frame bursts within one TXOP, allowing several frames to be sent in a burst, each frame being followed by a
SIFS and an ACK, then another SIFS (instead of a DIFS or an AIFS) before the next frame in the burst. 802.11n also describes frame
bursts, but the interval before the next frame in the burst is reduced to 2 microseconds, called the reduced interframe space (RIFS), as
shown in Figure 5-8.
[Figure content: A-MSDU format (one radio preamble and radio header, one MAC header, then MSDU 1 through MSDU N and a single FCS) compared with A-MPDU format (one radio preamble and radio header, then MAC Header 1 + MSDU 1 through MAC Header N + MSDU N, acknowledged with block ACKs); plus timelines comparing a standard 802.11 exchange (DIFS, frame, SIFS, ACK), an 802.11e burst (frames separated by a SIFS instead of a DIFS), and an 802.11n burst (frames separated by a RIFS).]
Figure 5-8 802.11n Aggregation, Blocks and Bursts
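The difference between the three exchange styles in Figure 5-8 boils down to the interframe spacing. The Python sketch below compares the airtime of sending several frames individually (DIFS plus backoff before each), as an 802.11e burst (one contention, SIFS between frames), and as an 802.11n burst (RIFS between frames); OFDM timing constants are used and preambles and backoff are simplified, so the numbers are only indicative.

# Airtime of N frames sent individually, as an 802.11e burst (SIFS between
# frames), and as an 802.11n burst (RIFS between frames). OFDM timing,
# simplified backoff and preambles; figures are indicative only.

SIFS, DIFS, RIFS, SLOT = 16, 34, 2, 9          # microseconds

def frame_us(frame_bytes, rate_mbps, preamble_us=20):
    return preamble_us + frame_bytes * 8 / rate_mbps

def ack_us(rate_mbps=24):
    return frame_us(14, rate_mbps)

def standard(n, frame_bytes, rate):            # contention before every frame
    backoff = 7.5 * SLOT                       # average of CWmin = 15
    return n * (DIFS + backoff + frame_us(frame_bytes, rate) + SIFS + ack_us())

def burst_11e(n, frame_bytes, rate):           # one contention, SIFS between frames
    exchange = frame_us(frame_bytes, rate) + SIFS + ack_us()
    return DIFS + 7.5 * SLOT + n * exchange + (n - 1) * SIFS

def burst_11n(n, frame_bytes, rate):           # RIFS instead of the extra SIFS
    exchange = frame_us(frame_bytes, rate) + SIFS + ack_us()
    return DIFS + 7.5 * SLOT + n * exchange + (n - 1) * RIFS

for mode in (standard, burst_11e, burst_11n):
    print(mode.__name__, round(mode(5, 1500, 54)),
          "us for five 1500-byte frames at 54 Mbps")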
Short Guard Interval
Another optional mechanism described in 802.11n is to use a shorter guard interval (the silence between two symbols, or data-bearing
sections, of an OFDM wave), 400 nanoseconds (ns) rather than 800 ns. This process increases the speed by 11%, but also increases
the risk of collisions in noisy environments.
Configuring 802.11n
All of these techniques allow 802.11n stations to achieve a data rate of up to 144 Mbps when using two streams (two radio chains),
a short guard interval, and 20-MHz channels; and 300 Mbps when using two streams (two radio chains), a short guard interval, and
40-MHz channels. Notice that, just like for the previous protocols, these rates represent the best transmission speed that a station
can achieve, not the overall throughput (or “download speed”), because 802.11 is half duplex and because some airtime is also used
by nondata frames and silences. To avoid the confusion between data rate as in “transmission speed” with data rate as in “download
speed,” data rates were renamed modulation and coding scheme (MCS) for 802.11n.
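The 144.4-Mbps and 300-Mbps figures follow directly from the MCS parameters (data subcarriers, modulation, coding rate, spatial streams, and guard interval). The following is a minimal Python sketch of that calculation for MCS 15 (two streams, 64-QAM, 5/6 coding).

# 802.11n data rate from MCS parameters: streams x data subcarriers x bits per
# subcarrier x coding rate, divided by the OFDM symbol time plus guard interval.

def dot11n_rate_mbps(streams, channel_mhz=20, bits_per_subcarrier=6,
                     coding_rate=5 / 6, short_gi=True):
    data_subcarriers = 52 if channel_mhz == 20 else 108   # 108 with 40-MHz bonding
    symbol_us = 3.2 + (0.4 if short_gi else 0.8)          # symbol + guard interval
    return streams * data_subcarriers * bits_per_subcarrier * coding_rate / symbol_us

print(round(dot11n_rate_mbps(2, 20), 1))   # 144.4 Mbps: 2 streams, 20 MHz, short GI
print(round(dot11n_rate_mbps(2, 40), 1))   # 300.0 Mbps: 2 streams, 40 MHz, short GI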
Cisco recommends that you keep 2.4-GHz 802.11n channels to 20 MHz, but allow 40-MHz channels in the 5-GHz band. You can
configure channel width from the Wireless > 802.11a/n > RRM > DCA page, as shown in Figure 5-9. You can also configure a 40-MHz channel on a per-AP basis (to disable 40-MHz channels on an AP when 40 MHz is enabled globally, or enable 40 MHz on an
AP when 40 MHz is disabled globally).
Figure 5-9 40-MHz Channel Configuration
When planning for an 802.11n deployment, keep in mind that a standard dual-band 802.11a/b/g AP can require 50 Mbps to 60 Mbps
throughput from the switch, but an 802.11n dual-band 802.11a/b/g/n AP can require up to 250 Mbps throughput from the switch.
Upgrading the wireless network to 802.11n may also mean upgrading the access switches to replace 100-Mbps Ethernet ports with
1-Gbps Ethernet ports. Upgrading the access switches may also mean upgrading the distribution and core switches. Most 802.11n
APs support 802.3af Power over Ethernet (PoE), but the 1250 AP requires enhanced power of up to 18 W to maintain the same
performance as the other 802.11n APs.
Also keep in mind that 802.11n allows higher throughput at the same distance, the same throughput at a longer distance, or a little of
both, but not all of both at the same time.
802.11n is enabled by default on the controller, as shown in Figure 5-10. Notice that for 802.11n to operate on a WLAN, you must
enable WMM and Open Authentication or WPA2 with AES on the WLAN. (802.11n does not work with WEP or WPA or without
WMM.) You can disable 802.11n on a per-band level or disable some MCS whenever needed.
Figure 5-10 802.11n Configuration
Because not all clients are 802.11n yet, protection mechanisms are in place to avoid collisions from non-802.11n stations ignoring
802.11n traffic and generating collision issues. 802.11n allows a Request to Send/Clear to Send (RTS/CTS) exchange or a CTS to self sent at a
non-802.11n rate to reserve the medium; the real frame is then sent at an 802.11n rate. Non-802.11n stations detect the CTS to self or RTS/
CTS exchange, and stay silent for the duration mentioned in the Duration/ID field, which reserves the medium for the duration of the
CTS to self or RTS/CTS and the following 802.11n frame and its acknowledgment. 802.11n also allows a number of other possible
protection mechanisms for different protection scenarios. The most common is called the hybrid mode, where an 802.11n station
encapsulates an 802.11n frame into a non-802.11n frame. In other words, the frame first contains a standard non-802.11n header
that allows all stations to read the destination address and the duration of the frames. The payload contains the entire 802.11n frame,
with the 802.11n header (needed by the 802.11n stations to know the MCS and other parameters related, among others, to the MIMO
features in use in the frame). The protection mode chosen depends on each device's capabilities. The hybrid mode is in place by default
in most networks, including the Cisco solution.
You configure ClientLink on a per-AP and per-radio basis. Navigate to Wireless > Access Points > Radios > 802.11a/n
(802.11b/g/n), select an AP, and click Configure. You can then enable ClientLink for this AP radio, as shown in Figure 5-11. The
default is disabled.
Figure 5-11 ClientLink
CCNP Wireless (642-742 IUWVN) Quick Reference
Jerome Henry
Copyright © 2012 Cisco Systems, Inc.
Trademark Acknowledgments
All terms mentioned in this book that are known to be trademarks or service marks have been appropriately capitalized. Cisco Press or Cisco Systems, Inc. cannot attest to the accuracy of this information. Use
of a term in this book should not be regarded as affecting the validity of any trademark or service mark.
Published by:
Cisco Press
800 East 96th Street
Indianapolis, IN 46240 USA
All rights reserved. No part of this ebook shall be reproduced, stored in a retrieval system, or transmitted by any means, electronic, mechanical, photocopying, recording, or otherwise, without written permission from the publisher. No patent liability is assumed with respect to the use of the information contained herein. Although every precaution has been taken in the preparation of this book, the publisher and author assume no responsibility for errors or omissions. Nor is any liability assumed for damages resulting from the use of the information contained herein.
First Release May 2012
ISBN-10: 1-58714-311-9
ISBN-13: 978-1-58714-311-3
Feedback Information
At Cisco Press, our goal is to create in-depth technical books of the highest quality and value. Each book is crafted with care and precision, undergoing rigorous development that involves the unique expertise of members from the professional technical community.
Readers’ feedback is a natural continuation of this process. If you have any comments regarding how we could improve the quality of this book, or otherwise alter it to better suit your needs, you can contact us through email at feedback@ciscopress.com. Please make sure to include the book title and ISBN in your message.
We greatly appreciate your assistance.
Corporate and Government Sales
Cisco Press offers excellent discounts on this book when ordered in quantity for bulk purchases or special sales. For more information, please contact:
U.S. Corporate and Government Sales 1-800-382-3419 corpsales@pearsontechgroup.com
For sales outside the United States, please contact: International Sales international@pearsoned.com
Warning and Disclaimer
This book is designed to provide information about CCNP Wireless. Every effort has been made to make
this book as complete and as accurate as possible, but no warranty or fitness is implied.
The information is provided on an “as is” basis. The author, Cisco Press, and Cisco Systems, Inc. shall have
neither liability nor responsibility to any person or entity with respect to any loss or damages arising from
the information contained in this book or from the use of the discs or programs that may accompany it.
The opinions expressed in this book belong to the authors and are not necessarily those of Cisco Systems, Inc.