What Is Virtualization All About?
Dave Gibson
Senior Systems Engineer
Cisco Systems
BRKVIR-1001
© 2009 Cisco Systems, Inc. All rights reserved.
Agenda
 Overview
 Compute (OS and Server) Virtualization
 Network Virtualization
 Storage Virtualization
 Desktop and Application Virtualization
 Conclusion
What are we going to talk about?
Virtualization is the pooling and abstraction of resources and services in a way that masks the physical nature and boundaries of those resources and services from their users.
http://www.gartner.com/DisplayDocument?id=399577
Virtualization is … well, not exactly new
 Nothing new! The concept was already known to mainframes back in the '70s
Virtualization is not a new concept: mainframes of the '70s were underutilized and over-engineered
http://www-07.ibm.com/systems/my/z/about/timeline/1970/
Virtualization back then
 Mainframe Virtualization:
Concept: split the computer into multiple virtual machines so that different "tasks" can run separately and independently on the same mainframe.
If one virtual machine or "task" fails, the other virtual machines are unaffected.
[Diagram: seven logical VMs (VM #1 through VM #7) on one mainframe, each running its own task (A through G)]
Fast Forward to the 1990s
 Computers in the 1990s
Intel/AMD servers now very popular (known as "x86" servers)
Each server runs one operating system, such as Windows, Linux, etc.
Typical: one OS and one application per server
Server sprawl inevitable
Power, cooling and rack space become problematic
[Diagram: rows of standalone file, web, domain, DNS and application servers, each server running a single application]
Fast Forward to 2000+
 Focus on reducing footprint
"Rack" form factor (6-20 servers per cabinet)
"Blade" form factor (30-60 servers per cabinet)
Helped alleviate some of the footprint issues
Power and heat still a problem
 The more powerful the CPU, the lower the server utilization!
Average server utilization ranges between 10-20%
Still one application per server
Today’s IT Challenges
 Server Sprawl
Power, space and cooling: one of the largest IT budget line items
One application per server: high equipment and administration costs
 Low Server and Infrastructure Utilization Rates
Result in excessive acquisition and maintenance costs
 High business continuity costs
HA and DR solutions built around hardware are very expensive
 Ability to respond to business needs is hampered
Provisioning new applications is often a tedious process
 Securing environments
Security is often accomplished through physical isolation, which is costly
Virtualization is the Key
Apply mainframe virtualization concepts to x86 servers:
 Use virtualization software to partition an Intel/AMD server so that it runs several operating system and application "instances"
[Diagram: one physical server hosting several virtual machines (database, web, application, email, file, print, DNS and LDAP servers) deployed with virtualization software]
Four Drivers Behind Virtualization
 Hardware resources are underutilized
CPU utilization ~10-25%
One server, one application
Multi-core servers even more underutilized
 Data centers are running out of space
Last 10+ years of major server sprawl
Exponential data growth
Server consolidation projects are just a start
 Rising energy costs
As much as 50% of the IT budget
Now in the realm of the CFO and the facilities manager!
 Administration costs are increasing
Number of operators going up
Number of management applications going up
Operational flexibility
Other Significant Virtualization Benefits
 Some key benefits:
Ability to quickly spawn test and development environments
Provides failover capabilities to applications that can't do it natively
Maximizes utilization of resources (compute and I/O capacity)
Server portability (migrate a server from one host to another)
 Virtualization is not limited to servers and OSs:
Network virtualization
Storage virtualization
Application virtualization
Desktop virtualization
Compute Resources Virtualization
Server/OS Virtualization
Virtual Machines
So what exactly is a virtual machine?
 A virtual machine is a representation of a physical machine in software, with its own set of virtual hardware upon which an operating system and applications can be loaded. With virtualization, each virtual machine is provided with consistent virtual hardware regardless of the underlying physical hardware of the host server it runs on. When you create a VM, it is given a default set of virtual hardware; you can further customize a VM by adding or removing virtual hardware as needed by editing its configuration.
Virtual Machines can provide you …
Hardware independence: the VM sees the same hardware instantiation regardless of the host hardware underneath.
Isolation: the VM's operating system is isolated and independent from the host operating system and from the adjacent virtual machines.
Hypervisor
What is a hypervisor?
 A hypervisor, also called a virtual machine monitor (VMM), is a program that allows multiple operating systems to share a single hardware host. Each operating system appears to have the host's processor, memory, and other resources all to itself. In reality, the hypervisor controls the host processor and resources, allocating what is needed to each operating system in turn and making sure that the guest operating systems (the virtual machines) cannot disrupt each other.
It's all about Rings
 x86 CPUs provide a range of protection levels, also known as rings, in which code can execute. Ring 0 has the highest privilege and is where the operating system kernel normally runs; code executing in Ring 0 is said to be running in system space, kernel mode or supervisor mode. All other code, such as applications running on the operating system, operates in less privileged rings, typically Ring 3 (the sketch below reads the current ring from user space).
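To make the ring idea concrete, here is a minimal, hedged sketch (assuming GCC or Clang on an x86/x86-64 Linux host, which the slides do not specify): it reads the low two bits of the CS selector, which encode the current privilege level, and prints 3 when run as an ordinary user-space program.

#include <stdio.h>

/* Minimal sketch: the low two bits of the CS selector are the current
 * privilege level (CPL).  An ordinary user-space program prints ring 3;
 * only kernel code would see ring 0. */
int main(void)
{
    unsigned short cs;
    __asm__ volatile ("mov %%cs, %0" : "=r"(cs));
    printf("running in ring %d\n", cs & 3);
    return 0;
}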
Rings in virtualization
Traditional systems
The operating system runs in privileged mode in Ring 0 and owns the hardware
Applications run in Ring 3 with fewer privileges
Virtualized systems
The VMM runs in privileged mode in Ring 0
Guest OSs inside VMs are fooled into thinking they are running in Ring 0; privileged instructions are trapped and emulated by the VMM
Newer CPUs (AMD-V / Intel VT) add a new privilege level, often called Ring -1, for the VMM to reside in, allowing better performance because the VMM no longer needs to fool the guest OS into thinking it is running in Ring 0
Typical Virtualization Architectures
Hardware Partitioning
The server is subdivided into fractions (adjustable partitions), each of which can run its own OS
Physical partitioning: IBM S/370 SI->PP & PP->SI, Sun Domains, HP nPartitions
Logical partitioning: System p LPAR, HP vPartitions, Sun Logical Domains, IBM System z LPAR
Dedicated Hypervisor
Hypervisor software/firmware runs directly on the server and provides fine-grained timesharing of all resources
Examples: VMware ESX Server, Xen Hypervisor, KVM, Microsoft Hyper-V, Oracle VM
Hosted Hypervisor
Hypervisor software runs on a host operating system and uses OS services to do timesharing of all resources
Examples: VMware Server, Microsoft Virtual Server, HP Integrity VM, QEMU
Some Server Virtualization Architecture Examples
VMware ESX Architecture
[Architecture diagram, annotated:]
The CPU is controlled by the scheduler and virtualized by the monitor
The monitor supports BT (Binary Translation), HW (Hardware Assist) and PV (Paravirtualization)
Memory is allocated by the VMkernel and virtualized by the monitor
Network and I/O devices (virtual NIC, virtual SCSI, virtual switch, file system) are emulated and proxied through native device drivers in the VMkernel, which sits directly on the physical hardware
http://www.vmware.com/products/vsphere/
Microsoft Hyper-V Architecture
[Architecture diagram, summarized:]
The Windows hypervisor runs directly on "Designed for Windows" server hardware (kernel mode)
A parent partition runs Windows Server 2008 and hosts the virtualization stack: the WMI provider, VM service and VM worker processes in user mode, plus Virtualization Service Providers (VSPs) and the VMBus in kernel mode (with drivers provided by ISVs/IHVs/OEMs)
Child partitions host the guests:
Hypervisor-aware Windows guests (Windows Server 2003, 2008) use Virtualization Service Clients (VSCs) over the VMBus
Xen-enabled Linux kernels use Linux VSCs and a hypercall adapter (components from Microsoft / XenSource / Novell)
Non-hypervisor-aware OSs fall back to device emulation
http://www.microsoft.com/hyperv
Xen 3.0 Architecture
http://www.citrix.com/English/ps2/products/feature.asp?contentID=1686939
Evolution of Virtualization
Going from Here…
[Diagram: four separate x86 servers, one running Windows XP, one Windows 2003, one SUSE and one Red Hat, each dedicated to its own applications at only 10%, 12%, 15% and 18% hardware utilization]
… to There
[Diagram: the same four workloads (App A on Windows XP, App B on Windows 2003, App C on SUSE Linux, App D on Red Hat Linux) consolidated as guest OSs on a virtual machine monitor over a host OS, on one multi-core, multiprocessor x86 server at roughly 70% hardware utilization]
Virtualization Requirements
 A 1974 paper by Popek and Goldberg states: "For any computer a virtual machine monitor may be constructed if the set of sensitive instructions for that computer is a subset of the set of privileged instructions"
 This is a complicated way of saying that the virtual machine monitor needs a way of determining when a guest executes sensitive instructions.
http://portal.acm.org/citation.cfm?doid=361011.361073
x86 Virtualization Challenges
 The IA-32 (x86) instruction set contains 17 sensitive, unprivileged instructions that do not trap
Sensitive register instructions read or change sensitive registers and/or memory locations, such as a clock register or interrupt registers: SGDT, SIDT, SLDT, SMSW, PUSHF, POPF, etc.
 The x86 therefore fails the Popek-Goldberg test! (A user-mode demonstration follows below.)
 Keep in mind that x86 OSs are designed to have full control over the entire system
 However, there is massive economic interest in making virtualization work
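The following is a small, hedged demonstration of the problem (assuming GCC or Clang on an x86/x86-64 Linux host): it tries to clear the interrupt flag with POPF from user mode. On a bare x86 CPU the write is silently ignored rather than trapped, which is exactly the behaviour that breaks the Popek-Goldberg requirement.

#include <stdio.h>

/* Try to clear IF (bit 9 of EFLAGS) from ring 3 using POPF.
 * POPF does not trap here; it silently ignores the IF change,
 * so the second read still shows IF = 1. */
int main(void)
{
    unsigned long flags;

    __asm__ volatile ("pushf; pop %0" : "=r"(flags));
    printf("IF before: %lu\n", (flags >> 9) & 1);

    flags &= ~(1UL << 9);                               /* attempt to disable interrupts */
    __asm__ volatile ("push %0; popf" : : "r"(flags) : "cc");

    __asm__ volatile ("pushf; pop %0" : "=r"(flags));
    printf("IF after:  %lu\n", (flags >> 9) & 1);       /* still 1: no trap, no effect */
    return 0;
}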
Virtualizing the x86 Processor: possible!
Recipe for x86 virtualization:
 Non-sensitive, unprivileged instructions can be executed directly
 Sensitive privileged instructions must trap
 Sensitive non-privileged instructions must be detected
Several Ways to Virtualize an OS
 Container-based
OpenVZ, Linux-VServer
 Host-based (Type-2 hypervisors)
Microsoft Virtual Server, VMware Server and Workstation
 Paravirtualization
Xen, [Microsoft Hyper-V], some VMware ESX device drivers
 Full virtualization (Type-1 hypervisors)
VMware ESX, Linux KVM, Microsoft Hyper-V, Xen 3.0
Container-Based Virtualization
 Virtual Machine Monitor (VMM) inside a patched host OS (kernel)
 The host OS is modified to isolate the different VMs from each other inside the kernel
Example: kernel data structures gain a context ID to differentiate identical uids belonging to different VMs (see the sketch below)
 VMs are isolated from each other, but there is no full guest OS inside a container: all containers share the host OS image
 Fault isolation is not possible (an OS/kernel crash affects every container)
 Applications and users see a container VM as a virtual host/server
 VMs can be booted and shut down like a regular OS
 Systems: Linux-VServer, OpenVZ
[Diagram: a privileged admin VM and containers VM 1 through VM n running applications on the VMM and shared host OS image, over the CPU, memory, I/O and disk hardware]
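A hedged illustration of the kernel-side change described above (the structure and field names are invented for this sketch, not taken from Linux-VServer or OpenVZ source): user identity is extended with a container context ID, so uid 1000 in one container is a different principal from uid 1000 in another.

#include <stdio.h>

/* Illustrative only: tag each user identity with the container it lives in. */
struct ctx_uid {
    unsigned int ctx_id;   /* container / context identifier */
    unsigned int uid;      /* traditional numeric user ID     */
};

/* Identical uids only match when they belong to the same container. */
static int same_principal(struct ctx_uid a, struct ctx_uid b)
{
    return a.ctx_id == b.ctx_id && a.uid == b.uid;
}

int main(void)
{
    struct ctx_uid web = { 7, 1000 }, mail = { 9, 1000 };
    printf("same user? %s\n", same_principal(web, mail) ? "yes" : "no");  /* no */
    return 0;
}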
Host-Based Virtualization (Type-2)
 A host OS (Windows, Linux) runs on the hardware
 The VMM sits inside the host OS as a kernel-mode driver
 Multiple guest OSs are supported; the VMM emulates hardware for the guest OSs
[Diagram: applications on OS 1 and OS 2, on top of the VMM, the host OS and the hardware]
 Systems: Microsoft Virtual Server, VMware Workstation & Server
Host OS: XP, 2003, Linux
Guest OS: NT, 2000, 2003, Linux
Para-Virtualization
 The VMM runs on "bare metal"
 The guest OS is modified to make calls ("hypercalls") to, and receive events from, the VMM
 Support of an arbitrary guest OS is not possible: either the OS or its device drivers must be modified
 Only a few thousand lines of code change; for open-source OSs the modification is easy
[Diagram: VM 1 through VM n running applications on modified guest OSs, on top of the VMM and the CPU, memory, I/O and disk hardware]
 Systems: Xen
Guest OSs: XenoLinux, NetBSD, FreeBSD, Solaris 10, Windows (in progress)
Native/Full Virtualization (Type-1)
 The VMM runs on "bare metal"
 The VMM virtualizes (emulates) the hardware: the x86 ISA, for example
 The guest OS is unmodified
 VMs (guest OS + applications) run under the control of the VMM
 Examples: VMware ESX, Microsoft Hyper-V, IBM z/VM, Linux KVM (Kernel-based Virtual Machine)
What needs to be virtualized?
 Ideally all components
 CPU
Privileged Instructions
Sensitive Instructions
 Memory
 I/O
Network
Block/Disk
 Interrupts
A Closer Look at VMware’s ESX™
 Full virtualization: runs on bare metal (referred to as a "Type-1 hypervisor")
 ESX is the OS (and, of course, the VMM)
 ESX handles privileged executions from guest kernels and emulates hardware when appropriate
 Uses "trap and emulate" and "binary translation"
 Guest OSs run as if it were business as usual, except that they really run in user mode (including their kernels)
ESX Architecture
[Diagram: per-VM monitors (VMMs) presenting virtual hardware on top of the ESX kernel. Source: http://www.vmware.com]
Privileged Instruction Execution
 ESX employs trap-and-emulate to execute privileged instructions on behalf of the guest OS
 ESX keeps shadow copies of the guest OS's data structures (state), such as a shadow GDT
How it works (see the sketch below):
The guest OS, running in Ring 1 or Ring 3, executes a privileged instruction such as LGDT 0x00007002
The instruction traps to the VMM, which runs in Ring 0
The VMM emulates the instruction and updates or copies the required guest OS state (e.g., the shadow GDT)
Emulation works like an exception handler; applications keep running in Ring 3
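A hedged, self-contained sketch of the shadow-structure idea (types and names are illustrative, not VMware's): the trapped LGDT is emulated by recording the guest's descriptor-table base and limit in a shadow copy that the VMM controls, instead of letting the guest program the real CPU.

#include <stdint.h>
#include <stdio.h>

struct gdt_desc { uint32_t base; uint16_t limit; };

struct vm_state {
    struct gdt_desc guest_gdt;    /* what the guest believes is loaded */
    struct gdt_desc shadow_gdt;   /* what the VMM actually maintains   */
};

/* Called from the trap handler when the guest executes LGDT. */
static void emulate_lgdt(struct vm_state *vm, uint32_t base, uint16_t limit)
{
    vm->guest_gdt.base  = base;            /* remember the guest's view */
    vm->guest_gdt.limit = limit;
    vm->shadow_gdt      = vm->guest_gdt;   /* derive the shadow copy the VMM uses */
    printf("LGDT trapped: shadow GDT at 0x%08x, limit 0x%04x\n",
           vm->shadow_gdt.base, vm->shadow_gdt.limit);
}

int main(void)
{
    struct vm_state vm = { {0, 0}, {0, 0} };
    emulate_lgdt(&vm, 0x00007002, 0x03ff);   /* the LGDT 0x00007002 from the slide */
    return 0;
}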
Sensitive Instruction Execution
 Sensitive instructions (SI) do not trap
 ESX intercepts the execution of SI with Binary Translation (BT): it rewrites the guest OS's instructions
The binary code (the hex stream of x86 instructions) of the guest OS is rewritten on the fly to insert the proper code; no modification to the guest OS itself is needed
Example: POPF (which can modify the interrupt flag) is rewritten so that it traps into the VMM, which then emulates the flag write against the VM's shadow state:

popf                         ; original guest instruction

int $99                      ; BT-rewritten form: invoke the VMM's handler instead

/* conceptual VMM-side handler (regs_t fields are illustrative) */
void popf_handler(int vm_num, regs_t *regs)
{
    regs->eflags = *regs->esp;   /* pop the saved flags into the shadow EFLAGS */
    regs->esp++;                 /* advance the virtual stack pointer */
}
Today: Virtualization-Friendly x86!
 Recent processors include virtualization extensions to circumvent the original x86 virtualization unfriendliness:
Intel's VT technology (VT-x, VT-i, VT-d)
AMD-V, also known as "Pacifica"
 The extensions give the VMM many conditions under which actions attempted by a guest VM get trapped (a conceptual run loop is sketched below)
 Note that because traps are CPU-expensive, some Type-1 hypervisors do not leverage all VT extensions and prefer a software mechanism called Binary Translation instead (see the ESX slides)
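A conceptual, self-contained sketch of what a hardware-assisted run loop looks like. Nothing here touches real virtualization hardware: vm_enter() stands in for VMLAUNCH/VMRESUME and the exit reasons are invented for illustration. The guest runs until the hardware traps one of the configured conditions, the VMM handles the exit, and the guest is resumed.

#include <stdio.h>

enum exit_reason { EXIT_CPUID, EXIT_IO, EXIT_HLT };

struct vcpu { int exits; };

/* Stand-in for entering guest (non-root) mode; returns why the CPU came back. */
static enum exit_reason vm_enter(struct vcpu *v)
{
    static const enum exit_reason script[] = { EXIT_CPUID, EXIT_IO, EXIT_HLT };
    return script[v->exits++];
}

int main(void)
{
    struct vcpu v = { 0 };
    for (;;) {
        switch (vm_enter(&v)) {                  /* guest runs until a trapped event */
        case EXIT_CPUID: puts("emulate CPUID for the guest");    break;
        case EXIT_IO:    puts("emulate the guest's I/O access"); break;
        case EXIT_HLT:   puts("guest halted, stop");             return 0;
        }
    }
}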
A quick word on VT extensions usage
 ESX does not leverage all VT extensions by default: VMware spent years fine-tuning binary translation
 ESX does require Intel VT to support 64-bit guests. This is not performance related: in 64-bit mode Intel removed some of the memory-protection logic (used by the BT monitor) from the standard x86 instructions, so to achieve the same result for 64-bit guests ESX needs some Intel VT instructions.
 vSphere can leverage VT extensions on a per-VM basis
http://communities.vmware.com/docs/DOC-9150
http://www.vmware.com/files/pdf/vsphere_performance_wp.pdf
What About Networking?
 Users naturally expect VMs to have access to the network
 VMs don't directly control the networking hardware: x86 hardware is designed to be handled by only one device driver!
 When a VM communicates with the outside world, it passes the packet to its local device driver, which in turn hands it to the virtual I/O stack, which in turn passes it to the physical NIC
 ESX gives VMs several device driver options:
Strict emulation of Intel's e1000
Strict emulation of AMD's PCnet32 "Lance"
VMware vmxnet: paravirtualized!
 VMs have MAC addresses that appear on the wire
LAN Switching Challenge!
 Suppose VM_A and VM_B need to communicate
 They are on the same VLAN and subnet
[Diagram: VM A (MAC address A) and VM B (MAC address B) running on the same hypervisor, attached to a physical switch over a single physical link]
 Will the LAN switch handle VM_A-to-VM_B traffic?
The reason for vSwitches
 VM-to-VM and VM-to-native-host traffic is handled by a software switch that lives inside ESX (see the sketch below)
VM-to-VM: a memory transfer
VM-to-native: out the physical adapter
 Note: speed and duplex are irrelevant with virtual adapters
http://www.vmware.com/files/pdf/virtual_networking_concepts.pdf
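A hedged, purely illustrative sketch of the core of such a software switch (not ESX code): learn which virtual port each source MAC lives on, then forward by destination MAC, keeping VM-to-VM traffic in host memory and sending unknown destinations out the physical uplink.

#include <stdio.h>
#include <string.h>

#define PORTS  8
#define UPLINK 0                       /* port 0 = physical NIC */

static char mac_table[PORTS][18];      /* MAC string learned per virtual port */

static void learn(int port, const char *src_mac)
{
    strncpy(mac_table[port], src_mac, sizeof mac_table[port] - 1);
}

static int forward(const char *dst_mac)
{
    for (int p = 1; p < PORTS; p++)            /* look for a local VM first */
        if (strcmp(mac_table[p], dst_mac) == 0)
            return p;                          /* VM-to-VM: stays in memory */
    return UPLINK;                             /* otherwise: physical adapter */
}

int main(void)
{
    learn(1, "00:50:56:aa:aa:aa");             /* VM A on vSwitch port 1 */
    learn(2, "00:50:56:bb:bb:bb");             /* VM B on vSwitch port 2 */
    printf("to VM B   -> port %d\n", forward("00:50:56:bb:bb:bb"));  /* 2 */
    printf("to router -> port %d\n", forward("00:1b:54:c0:ff:ee"));  /* 0 */
    return 0;
}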
All good, except if you’re the LAN admin!
 To the LAN administrator, the picture is blurry
 The LAN role is typically limited to provisioning a trunk to ESX
 No visibility into VM-to-VM traffic
 Troubleshooting performance or connectivity issues is challenging
The need for VM-aware networking
Problems:
VMotion may move VMs across physical ports, and policy must follow
Impossible to view or apply policy to locally switched traffic
Cannot correlate traffic on physical links coming from multiple VMs
Cisco's VN-Link:
Extends the network to the VM
Consistent services
Coordinated, coherent management
VN-Link solves our LAN switch challenge
 Virtual Network Link (VN-Link) is about:
VM-level network granularity
Mobility of network and security properties: they follow the VM
Policy-based configuration of VM interfaces via Port Profiles
A non-disruptive operational model
 VN-Link assigns a frame-level tag to each VM: a 6-byte VN-Tag identifier
 The Nexus 1000V uses VN-Tags and replaces the ESX hypervisor switch
VN-Link use case: Cisco Nexus 1000V
Boundary of network visibility
 The Nexus 1000V provides visibility down to the individual VMs
 Policy can be configured per VM
 Policy can move around within the ESX cluster
 Managed through the Cisco NX-OS command-line interface
 Deployed as a distributed virtual switch
Demo: Nexus 1000V
What will you see?
 Deployment and management
 Switching
 VMotion and visibility
 Policy-based VM connectivity
 Mobile VM security: ACLs, MAC-based port security, NetFlow, SPAN/ERSPAN, private VLANs
[Demo topology: two VMware ESX hosts (Server 1 and Server 2), each running four VMs and a Nexus 1000V Virtual Ethernet Module (VEM), connected through a Nexus 5000; the Nexus 1000V VSM and VMware Virtual Center manage the distributed virtual switch]
Network Virtualization
What Is Network Virtualization?
 Overlay of logical topologies (1:N)
 One physical network supports N virtual networks
[Diagram: one physical topology carrying three virtual topologies: an outsourced IT department, a quality assurance network, and a sandboxed department for regulatory compliance]
What Is Network Virtualization?
 Overlay of physical topologies (N:1)
 N physical networks map onto 1 physical network
[Diagram: separate security, guest/partner, backup and out-of-band management networks consolidated onto one network]
Network Virtualization Classification
 Generally speaking, there are four areas of network virtualization:
Control-plane virtualization
Data-plane virtualization
Management-plane virtualization
Device pooling and clustering
 Control plane: routing processes, generation of packets (CDP, etc.)
 Data plane: multiplexing N streams of data traffic
 Management plane: CLI, file management, etc.
 Pooling and clustering: VSS, vPC
Data Plane Virtualization
 Simple example: IEEE 802.1Q virtual LANs
 802.1Q adds a 12-bit VLAN ID field, allowing up to 4096 VLAN IDs on the same physical cable (a VLAN trunk); the sketch below pulls the VLAN ID out of a tagged frame
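As a concrete illustration of that 12-bit field, here is a minimal sketch (plain C, with a hand-built frame; nothing vendor-specific): it reads the TPID and TCI that follow the two MAC addresses and masks out the VLAN ID.

#include <stdint.h>
#include <stdio.h>

/* The 802.1Q tag sits after the two MAC addresses: TPID 0x8100, then a
 * 16-bit TCI = 3-bit priority, 1-bit DEI, 12-bit VLAN ID. */
static int vlan_id(const uint8_t *frame)
{
    uint16_t tpid = (frame[12] << 8) | frame[13];
    if (tpid != 0x8100)
        return -1;                          /* untagged frame */
    uint16_t tci = (frame[14] << 8) | frame[15];
    return tci & 0x0FFF;                    /* low 12 bits = VLAN ID */
}

int main(void)
{
    /* dst MAC, src MAC, then an 802.1Q tag carrying VLAN 101 */
    uint8_t frame[16] = { 0,0,0,0,0,1,  0,0,0,0,0,2,  0x81,0x00, 0x00,0x65 };
    printf("VLAN ID: %d\n", vlan_id(frame));   /* prints 101 */
    return 0;
}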
Data & Control Plane virtualization: VRF
 The VRF: Virtual Routing and Forwarding instance
 Each VRF = a separate forwarding table
[Diagram: VLAN trunks, physical interfaces, tunnels, etc. feeding VRF 1, VRF 2 and VRF 3 on one router, each reachable through its own logical or physical Layer 3 interfaces]
Control-Plane Virtualization ‘for VRFs’
 Example: a per-VRF routing protocol; one VRF could run OSPF while another runs EIGRP (see the sketch below)
 Goal
Isolation of routing and forwarding tables
Allows overlapping IP addresses between VRFs
[Diagram: VRF 1 runs OSPF and VRF 2 runs EIGRP on the same router; the prefix 10.10.20.0/30 appears in both VRFs without conflict]
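To make "each VRF = a separate forwarding table" concrete, here is a hedged, self-contained sketch (the structures are invented for illustration, not Cisco's internal data structures): the same 10.10.20.0/30 prefix is installed in two VRFs with different next hops, and the lookup result depends on which VRF the traffic belongs to.

#include <stdint.h>
#include <stdio.h>

struct route { uint32_t prefix, masklen, next_hop; };

struct vrf { const char *name; struct route table[4]; int n; };

static uint32_t mask(uint32_t len) { return len ? ~0u << (32 - len) : 0; }

static uint32_t lookup(const struct vrf *v, uint32_t dst)
{
    for (int i = 0; i < v->n; i++)
        if ((dst & mask(v->table[i].masklen)) == v->table[i].prefix)
            return v->table[i].next_hop;
    return 0;                                    /* no route in this VRF */
}

int main(void)
{
    /* 10.10.20.0/30 appears in both VRFs but points at different next hops */
    struct vrf red   = { "VRF1-OSPF",  { { 0x0A0A1400, 30, 0x0A0A1401 } }, 1 };
    struct vrf green = { "VRF2-EIGRP", { { 0x0A0A1400, 30, 0x0A0A1402 } }, 1 };
    uint32_t dst = 0x0A0A1402;                   /* 10.10.20.2 */

    printf("%s  -> next hop .%u\n", red.name,   lookup(&red,   dst) & 0xFF);
    printf("%s -> next hop .%u\n", green.name,  lookup(&green, dst) & 0xFF);
    return 0;
}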
Control Plane Virtualization ‘for VLANs’
 Example: the Spanning Tree Protocol (IEEE 802.1D/w/s), the loop breaker in Ethernet topologies
 How is it virtualized? One logical topology per VLAN (Per-VLAN Spanning Tree)
 What's in it for me? It maximizes the physical topology by overlaying N logical topologies
[Diagram: three switches where the green VLAN and the red VLAN each build their own loop-free logical topology over the same physical links]
Intersection of VLANs and VRFs
 It is easy to map VLANs to VRFs at the distribution layer
 Provides a safe and easy way to isolate logical networks, with no uncontrolled leaking from one to the other
 Maximizes the physical infrastructure
[Diagram: access-layer VLANs (20/30 data, 120/130 voice, 21/31 red, 22/32 green, 23/33 blue) carried over VLAN trunks to Layer 3 distribution switches, where they are mapped into VRF Red, VRF Green and VRF Blue toward the intranet]
Nexus 7000 virtualization
 The Nexus 7000 runs Cisco's NX-OS, whose internal architecture differs from classic IOS
 NX-OS is a true multiprogramming OS
It features a Linux kernel and user-space processes
Most features (BGP, HSRP, EIGRP, etc.) are individual processes
Direct benefit: fault isolation and process restartability
Nexus 7000’s Virtual Device Contexts
 True, independent, isolated partitions
 Currently up to 4 virtual device contexts (VDCs) per Nexus 7000
Concept of switchto/switchback and per-VDC access and isolation
 Somewhat similar to host-based virtualization
Inside Virtual Device Contexts
A VDC builds a fault domain around its processes: a local process crash does not impact other VDCs.
 If process "DEF" in VDC B crashes, the processes in VDC A are not affected and continue to run unimpeded
 This is a function of the process modularity of the OS and a VDC-specific IPC context
[Diagram: VDC A and VDC B, each with its own protocol stack and processes (ABC, DEF, XYZ), running over a shared infrastructure layer and Linux 2.6 kernel on the physical switch]
Virtualization Inside a VDC
[Diagram: one Nexus 7000 hosting up to 4 VDCs; each VDC supports up to 4K VLANs and 256 VRFs, with VLANs mapping onto VRFs inside the VDC]
Device Pooling and/or Clustering
 Catalyst 6500: Virtual Switch System (VSS)
 Nexus 7000: Virtual Port Channel (vPC)
 It's really clustering, plus clever packet classification
 Two switches appear as a single switch to the outside world; downstream switches simply run a standard port channel
Storage Virtualization
Storage Virtualization: Terminology
 Storage virtualization encompasses various concepts, and definitions vary depending on whom you ask
For some, storage virtualization starts at virtual volumes
For others, it starts with virtual SANs
 Example: unified I/O. Is it storage virtualization, network virtualization, or both?
 We are going to cover VSANs, NPIV and NPV, FlexAttach, virtual targets, and unified I/O
Virtual SANs (VSANs)
 SAN islands: duplication of hardware resources, just-in-time provisioning
 A VSAN is a consolidation of SANs onto one physical infrastructure
 Just like VLANs, VSAN traffic carries a tag in the frame
[Diagram: separate SAN islands for departments A, B and C consolidated as virtual SANs (VSANs) on a single physical fabric]
VSAN Tagging
 Hardware-based isolation of traffic between different VSANs; no special drivers or configuration required for end nodes (hosts, disks, etc.), whose N_Ports have no clue about VSANs
 Traffic is tagged at the ingress fabric port (F_Port): a VSAN header indicating membership is added at the ingress point and removed at the egress point
 Tagged traffic from multiple VSANs is carried between switches over Enhanced ISL (EISL) trunks on trunking E_Ports (TE_Ports)
 Control-plane virtualization: dedicated Fibre Channel services per VSAN (zone server, name server, etc.); each service runs independently and is managed/configured independently
Blade servers domain ID explosion
 What explosion?
Each FC switch inside a blade enclosure consumes a domain ID (0x0A, 0x0B, 0x0C, …)
The theoretical maximum is 239 domain IDs per VSAN
The supported number of domains is quite a bit smaller: EMC supports 40, HP 40, and Cisco has tested 75
 Manageability
Lots of switches to manage
Possible domain-ID overlap
Possible FSPF reconfiguration
Solution: N-Port Virtualizer (NPV)
 What is NPV?
NPV enables the switch to act as a proxy for its connected hosts
An NPV switch does not consume a domain ID; it inherits the domain ID of the upstream fabric switch
No longer limited by domain-ID boundaries
 Manageability
Far fewer switches to manage; NPV is very much plug and play
An NPV-enabled switch is managed much like a host, which reduces SAN management complexity
The NPV switch runs no zoning, name server, etc.
N-Port Virtualization (NPV): An Overview
 The FC switches in the blade enclosure run NPV and connect to the core FC switch over NP ports
 Single domain ID: the NPV-aware switches inherit the domain ID of the core switch (domain 0A in the example), so all of the blades' FCIDs start with 0A (e.g., 0A.1.1)
 No name server, no zones, no FSPF, etc. on the NPV switch
 The core switch ports facing the NPV switches must be NPIV-aware F ports
 Administration is simpler
I heard about NPIV. Is that NPV?
 NPIV is N_Port ID Virtualization
A feature typically relevant to the HBA of a server running OS virtualization
Provides a means to assign multiple server logins to a single physical interface: one HBA = many virtual WWNs
Result: VM awareness inside the SAN fabric
 NPV is the N-Port Virtualizer
A feature typically relevant to FC switches: the FC switch's uplink to the core SAN switch looks like "a lot of HBAs"
NPV works fine with NPIV ("nested NPIV"); imagine ESX connecting to an NPV-enabled FC switch
Without NPIV: storage is VM-unaware
 Traditional scenario: 3 VMs on an ESX host with one physical (non-NPIV) HBA connected through a Fibre Channel switch to the storage LUNs
 There is no per-VM WWN; only the ESX host has a WWN
 Therefore there is no VM awareness inside the SAN fabric, and no VM-based LUN masking, for instance
NPIV enables VM-aware storage
 An NPIV-capable HBA assigns virtual WWNs to the VMs
[Diagram: VM1, VM2 and VM3 on the ESX host receive pWWN1, pWWN2 and pWWN3 through the NPIV-aware HBA, which logs in to the Fibre Channel switch over an NP/F port pair in front of the storage LUNs]
 The SAN fabric is now aware of those WWNs: VM-aware zones or LUN masking, QoS, etc.
WWN Virtualization: FlexAttach
 Port World Wide Names: HBAs have unique World Wide Names (similar to MAC addresses)
 FlexAttach assigns a WWN to a switch port instead
Each F-port on the (NPV-mode) blade switch is assigned a virtual WWN
The HBA's burnt-in WWN is translated to the virtual WWN
 Benefits
One physical port = one fixed WWN, giving control over WWN assignment
Replacing a failed HBA or host, or inserting a new blade, is simple: no blade-switch configuration change, no zoning change on the SAN, no array configuration change
Unified I/O?
 Consolidation of FC and Ethernet traffic on the same infrastructure
 New protocols (FCoE, "Data Center Ethernet") guarantee QoS levels per traffic class
[Diagram: servers today use separate FC HBAs for the SAN (FC) and NICs for the LAN (Ethernet); with unified I/O a single Converged Network Adapter (CNA) carries both SAN (FCoE) and LAN (Ethernet) traffic]
Unified I/O Use Case
Today: parallel LAN/SAN infrastructure (LAN, SAN A, SAN B and management networks), with Ethernet traffic on NICs and FC traffic on FC HBAs in every server
 Inefficient use of the network infrastructure
 5+ connections per server: higher adapter and cabling costs, plus downstream port costs (cap-ex and op-ex)
 Each connection adds additional points of failure in the fabric
 Longer lead time for server provisioning
 Multiple fault domains: complex diagnostics
 Management complexity: firmware, driver patching, versioning
Unified I/O Phase 1
 Reduction of server adapters
 Simplification of the access layer and cabling
 Gateway-free implementation: fits into the installed base of existing LANs and SANs
 L2 multipathing from access to distribution through the FCoE switch
 Lower TCO, fewer cables, investment protection (LANs and SANs), and a consistent operational model
Desktop & Application Virtualization
Desktop Virtualization
 General concept
Move traditional desktop computing from the local desktop into the data center
 Why?
Average desktop utilization is usually quite low
High acquisition, maintenance and operation costs; IT staff must be present in the field
Difficult to keep a unified OS image across the board
Little or no control over data stored locally
Provisioning a new laptop is often a long process
Benefits of moving desktops to the DC
 Benefits of desktop centralization
Upgrade, patch and back up desktops in a single location
Keep confidential information in a secure data center
Rapidly provision new desktops
Regain control over standard desktop images
Result: a significant cost reduction
 Drawbacks
Somewhat disruptive to users ("where did my PC go?")
The initial rollout is more complex than installing a new PC locally
High-level concept
 The desktop runs as a VM in the data center
 Thin clients replace PCs; "zero client" hardware options exist!
 A connection broker authenticates clients, selects the desktop, secures the connection and delivers the desktop using a display protocol (Microsoft RDP, HP RGS, PCoIP, etc.)
 Several commercial offerings: Citrix XenDesktop, VMware View
Application Virtualization
 Application virtualization here refers to decoupling the application and its data from the OS: application encapsulation and isolation
 The application is provided as a ready-to-execute container: no need to install agents or the full application, because the container comes with all the required DLLs, registry settings and dependencies
 Virtualized applications can be placed on a shared file repository and used by many users concurrently
Application Virtualization: demo
 A "VMware ThinApp'd" version of Firefox 2.0
 Can run concurrently with another Firefox version
 Does not need to be installed: just click and execute!
 The application is totally sandboxed
Patching and upgrading are extremely simplified: done in one central location for hundreds or thousands of users
Example: VMware ThinApp
 The application is packaged as a .exe/.msi file
 The container includes a virtual registry, file system and OS
 The virtual OS launches the required DLLs and handles file-system calls
Conclusion
Virtualization: What's in It for Me?
 Virtualization is an overloaded term
Heck, this presentation will be made virtual soon. I suppose "recorded and available on demand" wasn't catchy enough :)
 It is a collection of technologies that allow a more flexible and more efficient use of hardware resources
Computing and networking capacity constantly increase; virtualization is a great way to maximize their utilization
 Assembled in an end-to-end architecture, these technologies provide the agility to respond to business requirements
 Your students will see this when they hit the working world.
Recommended Reading
Questions ?