Introduction to Microsoft Virtualization & Host Configuration

Virtualizing your Datacenter with Windows Server 2012 R2 & System Center 2012 R2

Module Titles
1. Introduction to Microsoft Virtualization & Host Configuration
2. VM Clustering
3. VM Configuration, Mobility & Replication, Network Virt and Service Templates
4. Private Clouds & System Center 2012 R2 Datacenter
5. Virtualization with the Hybrid Cloud, VMware Management, Integration & Migration
Lab environment VMs: SCVMM01, DC01, HYPER-V01, HYPER-V02, FS01
System Center 2012 R2 vs. vCloud Suite component mapping:

Layer         | System Center 2012 R2   | vCloud Suite
Automation    | Orchestrator            | vCenter Orchestrator
Service Mgmt. | Service Manager         | vCloud Automation Center
Protection    | Data Protection Manager | vSphere Data Protection
Monitoring    | Operations Manager      | vCenter Ops. Mgmt. Suite
Self-Service  | App Controller          | vCloud Director
VM Management | Virtual Machine Manager | vCenter Server
Hypervisor    | Hyper-V                 | vSphere Hypervisor
System Center 2012 R2 Licensing

Covers the full SC management stack: Orchestrator (Automation), Service Manager (Service Mgmt.), Data Protection Manager (Protection), Operations Manager (Monitoring), App Controller (Self-Service) and Virtual Machine Manager (VM Management). Hypervisor: Windows Server 2012 R2 inc. Hyper-V, or Hyper-V Server 2012 R2 as a free download.

                                         | Standard | Datacenter
# of Physical CPUs per License           | 2        | 2
# of Managed OSE's per License           | 2 + Host | Unlimited
Includes all SC Mgmt. Components         | Yes      | Yes
Includes SQL Server for Mgmt. Server Use | Yes      | Yes
Open No Level (NL) & Software Assurance (L&SA) 2-year Pricing | $1,323 | $3,607

vCloud Suite Licensing

Component stack: vCenter Orchestrator (Automation), vCloud Automation Center (Service Mgmt.), vSphere Data Protection (Protection), vCenter Ops. Mgmt. Suite (Monitoring), vCloud Director (Self-Service) and vSphere Hypervisor; vCenter Server (VM Management) is licensed separately.

                                         | Std.   | Adv.   | Ent.
# of Physical CPUs per License           | 1      | 1      | 1
# of Managed OSE's per License           | Unlimited VMs on Hosts (all editions)
Includes vSphere 5.5 Enterprise Plus     | Yes    | Yes    | Yes
Includes vCenter 5.5                     | No     | No     | No
Includes all required database licenses  | No     | No     | No
Retail Pricing per CPU (No S&S)          | $4,995 | $7,495 | $11,495

vSphere 5.5 Standalone Per CPU Pricing (Excl. S&S): Standard = $995, Enterprise = $2,875, Enterprise Plus = $3,495
Massive scalability for the most demanding workloads

Hosts
• Support for up to 320 logical processors & 4TB physical memory per host
• Support for up to 1,024 virtual machines per host

Clusters
• Support for up to 64 physical nodes & 8,000 virtual machines per cluster

Virtual Machines
• Support for up to 64 virtual processors and 1TB memory per VM
• Supports in-guest NUMA
Resource                     | Windows Server 2012 R2 Hyper-V | vSphere Hypervisor | vSphere 5.5 Enterprise Plus
Host: Logical Processors     | 320   | 320     | 320
Host: Physical Memory        | 4TB   | 4TB     | 4TB
Host: Virtual CPUs per Host  | 2,048 | 4,096   | 4,096
VM: Virtual CPUs per VM      | 64    | 8       | 64 (1)
VM: Memory per VM            | 1TB   | 1TB     | 1TB
VM: Active VMs per Host      | 1,024 | 512     | 512
VM: Guest NUMA               | Yes   | Yes     | Yes
Cluster: Maximum Nodes       | 64    | N/A (2) | 32
Cluster: Maximum VMs         | 8,000 | N/A (2) | 4,000

1. vSphere 5.5 Enterprise Plus is the only vSphere edition that supports 64 vCPUs. Enterprise edition supports 32 vCPUs per VM, with all other editions supporting 8 vCPUs per VM.
2. For clustering/high availability, customers must purchase vSphere.

vSphere Hypervisor / vSphere 5.x Ent+ Information: http://www.vmware.com/pdf/vsphere5/r55/vsphere-55-configuration-maximums.pdf, http://www.vmware.com/products/vsphere-hypervisor/faq.html and http://www.vmware.com/files/pdf/vsphere/VMware-vSphere-Platform-Whats-New.pdf
Centralized, Scalable Management of Hyper-V
• Supports up to 1,000 Hyper-V hosts & 25,000 virtual machines per VMM Server
• Supports Hyper-V hosts in trusted & untrusted domains, disjointed namespace & perimeter networks
• Supports Hyper-V from 2008 R2 SP1 through to 2012 R2
• Comprehensive fabric management capabilities across Compute, Network & Storage
• End-to-end VM management across heterogeneous hosts & clouds
Deep Discovery Prior to Hyper-V Deployment

Through integration with the BMC, VMM can wake a physical server & collect information to determine appropriate deployment:
1. OOB Reboot
2. Boot from PXE
3. Authorize PXE boot
4. Download VMM customized WinPE
5. Execute a set of calls in WinPE to collect hardware inventory data (network adapters and disks)
6. Send hardware data back to VMM
Virtualization Deployment with VMM

Centralized, Automated Bare Metal Hyper-V Deployment

Post-deep discovery, VMM will deploy a Hyper-V image to the physical server:
1. OOB Reboot
2. Boot from PXE
3. Authorize PXE boot
4. Download VMM customized WinPE
5. Run generic command execution scripts and configure partitions
6. Download VHD & Inject Drivers

The host is then domain joined, added to VMM management & post-install scripts executed.
Capability                 | Microsoft                   | VMware
Deployment from DVD        | Yes                         | Yes
Deployment from USB        | Yes                         | Yes
PXE Deployment - Stateful  | Yes – WDS, MDT, SCCM, SCVMM | Yes – PXE/Auto Deploy (1)
PXE Deployment - Stateless | No                          | Yes – Auto Deploy
Virtualization Host Configuration

Granular, Centralized Configuration of Hosts

Virtual Machine Manager 2012 R2 provides complete, centralized hardware configuration for Hyper-V hosts:

Hardware – Allows the admin to configure local storage, networking, BMC settings, etc.
Storage – Allows the admin to control granular storage settings, such as adding an iSCSI or FC array LUN to the host, or an SMB share.
Virtual Switches – A detailed view of the virtual switches associated with physical network adapters.
Migration Settings – Configuration of Live Migration settings, such as the LM network and the number of simultaneous migrations (see the host-side sketch below).
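The Hyper-V cmdlets below are a minimal host-side sketch of the same migration settings VMM surfaces centrally; the subnet and migration counts are illustrative values, not recommendations.

# Sketch: configure Live Migration directly on a host (values illustrative)
Enable-VMMigration
# Kerberos authentication requires constrained delegation to be configured in AD
Set-VMHost -MaximumVirtualMachineMigrations 4 -MaximumStorageMigrations 2 `
    -VirtualMachineMigrationAuthenticationType Kerberos
# Restrict live migration traffic to a dedicated subnet
Add-VMMigrationNetwork '10.0.1.0/24'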
iSCSI & Fibre Channel – Integrate with existing storage investments quickly and easily
Multi-Path I/O Support – Inbox for resiliency, increased performance & partner extensibility
Offloaded Data Transfer – Offloads storage-intensive tasks to the SAN
Native 4K Disk Support – Take advantage of enhanced density and reliability
VMM Storage Management

Centralized Management & Provisioning of Storage

System Center Virtual Machine Manager 2012 R2 Storage Management:

VMM can discover & manage local and remote storage, including SANs, Pools, LUNs, disks, volumes, and virtual disks.

VMM supports iSCSI & Fibre Channel block storage & file-based storage.

VMM integrates with the Windows Server Storage Management API (SMAPI) for discovery of:
• SMI-S, SMP, and Spaces devices
• Disk & Volume management
• iSCSI/FC/SAS HBA initiator management

R2: 10x faster enumeration of storage
Integrated iSCSI Target

Transform Windows Server 2012 R2 into an iSCSI SAN

Integrated role within Windows Server & manageable via GUI and PowerShell.

Ideal for network & diskless boot, server application storage, heterogeneous storage & development, test & lab deployments.

Supports up to 64TB VHDX, Thin Provisioning, Dynamic & Differencing. Also supports secure zeroing of disk for fixed-size disk deployments.

Scalable up to 544 sessions & 256 LUNs per iSCSI Target Server & can be clustered for resilience.

Complete VMM management via SMI-S. A PowerShell sketch follows.
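As a rough sketch of the PowerShell path, the following stands up an iSCSI target on a 2012 R2 server; the paths, sizes, target name and initiator IQN are illustrative placeholders.

# Install the iSCSI Target Server role service
Install-WindowsFeature -Name FS-iSCSITarget-Server
# Create a dynamically expanding (thin) VHDX-backed iSCSI virtual disk
New-IscsiVirtualDisk -Path 'D:\iSCSIVirtualDisks\LUN01.vhdx' -SizeBytes 500GB
# Create a target restricted to a specific initiator IQN (placeholder IQN)
New-IscsiServerTarget -TargetName 'HyperVHosts' `
    -InitiatorIds 'IQN:iqn.1991-05.com.microsoft:hyper-v01.contoso.com'
# Map the virtual disk to the target as a LUN
Add-IscsiVirtualDiskTargetMapping -TargetName 'HyperVHosts' -Path 'D:\iSCSIVirtualDisks\LUN01.vhdx'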
VMM iSCSI & Fibre Channel Integration

Improved Support for Fibre Channel Fabrics

Once discovered, VMM can centrally manage key iSCSI & Fibre Channel capabilities.

iSCSI – Connects Hyper-V hosts to the iSCSI portal and logs on to iSCSI target ports, including multiple sessions for MPIO.

Fibre Channel – Adds target ports to Zones, covering Zone Management, Member Management & Zoneset Management.

Once connected, VMM can create and assign LUNs, initialize disks, create partitions, volumes, etc.

VMM can also remove capacity, unmount volumes, mask LUNs, etc.
Storage Spaces – Transform high-volume, low-cost disks into flexible, resilient virtualized storage
Storage Tiering* – Pool HDD & SSD and automatically move hot data to SSD for increased performance
Data Deduplication – Reduce file storage consumption, now supported for live VDI virtual hard disks*
Hyper-V over SMB 3.0 – Ease of provisioning, increased flexibility & seamless integration with high performance
*New in Windows Server 2012 R2
Inbox solution for Windows to manage storage

Virtualize storage by grouping industry-standard disks into storage pools.

Pools are sliced into virtual disks, or Spaces. Spaces can be thin provisioned, and can be striped across all physical disks in a pool. Mirroring or Parity are also supported.

Windows then creates a volume on the Space, and allows data to be placed on the volume. A minimal sketch of this workflow follows.

Spaces can use DAS only (local to the chassis, or via SAS).
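A minimal sketch of the pool-to-volume workflow, assuming a single (primordial) storage subsystem and disks eligible for pooling; the names and sizes are illustrative.

# Find disks eligible for pooling
$disks = Get-PhysicalDisk -CanPool $true
# Create a pool on the default 'Storage Spaces' subsystem (assumes only one subsystem)
New-StoragePool -FriendlyName 'Pool01' `
    -StorageSubSystemFriendlyName (Get-StorageSubSystem).FriendlyName `
    -PhysicalDisks $disks
# Carve out a thinly provisioned, mirrored Space
New-VirtualDisk -StoragePoolFriendlyName 'Pool01' -FriendlyName 'Space01' `
    -ResiliencySettingName Mirror -ProvisioningType Thin -Size 2TB
# Initialize, partition and format the Space as a volume
Get-VirtualDisk -FriendlyName 'Space01' | Get-Disk |
    Initialize-Disk -PassThru | New-Partition -AssignDriveLetter -UseMaximumSize |
    Format-Volume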
Optimizing storage performance on Spaces

The disk pool consists of both high-performance SSDs and higher-capacity HDDs.

Hot data is moved automatically to SSD and cold data to HDD using sub-file-level data movement.

With write-back caching, SSDs absorb the random writes that are typical in virtualized deployments.

Admins can pin hot files to SSDs manually to drive high performance.

New PowerShell cmdlets are available for the management of storage tiers; a sketch follows.

[Diagram: SSD tier (400GB eMLC SAS SSD) holding the hot data]
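A sketch of those tiering cmdlets, reusing the illustrative 'Pool01' from the previous example; the tier sizes, cache size and pinned file path are placeholders, and the exact tier-object handling can vary by build, so treat this as an outline rather than a recipe.

# Define SSD and HDD tiers in the pool
New-StorageTier -StoragePoolFriendlyName 'Pool01' -FriendlyName 'SSDTier' -MediaType SSD
New-StorageTier -StoragePoolFriendlyName 'Pool01' -FriendlyName 'HDDTier' -MediaType HDD
$ssd = Get-StorageTier -FriendlyName 'SSDTier'
$hdd = Get-StorageTier -FriendlyName 'HDDTier'
# Tiered Spaces are fixed-provisioned; include a 1GB write-back cache
New-VirtualDisk -StoragePoolFriendlyName 'Pool01' -FriendlyName 'TieredSpace' `
    -StorageTiers $ssd,$hdd -StorageTierSizes 100GB,900GB `
    -ResiliencySettingName Mirror -WriteCacheSize 1GB
# Pin a hot file to the SSD tier (applied at the next tier optimization run)
Set-FileStorageTier -FilePath 'E:\VMs\VM01.vhdx' -DesiredStorageTier $ssd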
Store Hyper-V VMs on SMB 3.0 File Shares

Simplified provisioning & management, with low OPEX and CAPEX.

Adding multiple NICs in file servers unlocks SMB Multichannel, enabling higher throughput and reliability. Requires NICs of the same type and speed.

Using RDMA-capable NICs unlocks SMB Direct, offloading network I/O processing to the NIC.

SMB Direct provides high throughput and low latency and can reach 40Gbps (RoCE) and 56Gbps (InfiniBand) speeds.

Example share path: \\SOFSFileServerName\VMs (a usage sketch follows)
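A minimal sketch of using such a share from a Hyper-V host; the share, domain and computer-account names are illustrative, and the hosts' computer accounts need Full Control on both the share and the NTFS folder.

# On the file server: share a folder for the Hyper-V hosts (set NTFS rights too)
New-SmbShare -Name 'VMs' -Path 'C:\Shares\VMs' `
    -FullAccess 'CONTOSO\HYPER-V01$','CONTOSO\HYPER-V02$','CONTOSO\Hyper-V Admins'
# On a Hyper-V host: create a VM whose files live on the SMB 3.0 share
New-VM -Name 'VM01' -MemoryStartupBytes 2GB -Generation 2 `
    -Path '\\SOFSFileServerName\VMs' `
    -NewVHDPath '\\SOFSFileServerName\VMs\VM01\VM01.vhdx' -NewVHDSizeBytes 60GB
# Verify SMB Multichannel is spreading traffic across eligible NICs
Get-SmbMultichannelConnection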
File Storage Integration

Comprehensive, Integrated File Storage Management

VMM supports network shares via SMB 3.0 on NAS devices from storage vendors such as EMC and NetApp.

VMM supports integration and management with standalone and clustered file servers.

VMM will quickly discover and inventory selected file storage.

VMM allows the selection, and now, the classification of existing file shares to streamline VM placement.

VMM allows the IT admin to assign shares to Hyper-V hosts for VM placement, handling ACL'ing automatically.
Scale-Out File Server

Low Cost, High Performance, Resilient Shared Storage

Clustered file server for storing Hyper-V virtual machine files on file shares.

High reliability, availability, manageability, and performance that you would expect from a SAN.

Active-Active file shares – file shares are online on all nodes simultaneously.

Increased bandwidth as more SOFS nodes are added.

CHKDSK with zero downtime & CSV Cache.

Created & managed by VMM, both from existing Windows Servers & bare metal. A build sketch follows.

[Diagram: a 4-node Scale-Out File Server (FS1-FS4) with clustered Pools and clustered Spaces over JBOD storage attached via shared SAS]
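A minimal sketch of building the same thing by hand (VMM automates these steps); the cluster, node, domain and share names are illustrative, and New-Cluster may also need a static address depending on the network.

# Form the cluster from the four file server nodes
New-Cluster -Name 'FSCLUSTER' -Node 'FS1','FS2','FS3','FS4'
# Add the Scale-Out File Server role (distributed network name client access point)
Add-ClusterScaleOutFileServerRole -Name 'SOFS' -Cluster 'FSCLUSTER'
# Create a continuously available share on a CSV path for the Hyper-V hosts
New-Item -ItemType Directory -Path 'C:\ClusterStorage\Volume1\Shares\VMs'
New-SmbShare -Name 'VMs' -Path 'C:\ClusterStorage\Volume1\Shares\VMs' `
    -FullAccess 'CONTOSO\HYPER-V01$','CONTOSO\HYPER-V02$' -ContinuouslyAvailable $true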
Scale-Out File Server Deployment

Centralized, Managed Deployment of File Storage

VMM can not only manage standalone file servers, but can deploy Scale-Out File Servers, even to bare metal.

For bare metal deployment, a physical profile determines the characteristics of the file server.

Existing Windows Servers can be transformed into a SOFS, right within VMM.

Once imported, VMM can transform individual disks into highly available, dynamic pools, complete with classification.

VMM can then create the resilient Spaces & file shares within the storage pool.
Storage & Fabric Classification

Granular Classification of Storage & FC Fabrics

VMM can classify storage at a granular level to abstract storage detail:
• Volumes (including local host disks & Direct Attached Storage)
• File Shares (Standalone & SOFS-based)
• Storage Pools & SAN LUNs
• Fibre Channel Fabrics – helps to identify fabrics using friendly names

Support for efficient & simplified deployment of VMs to classifications. Now integrated with Clouds.
In-box Disk Encryption to Protect Sensitive Data

Data protection, built in:
• Supports Used Disk Space Only encryption
• Integrates with the TPM chip
• Network Unlock & AD integration

Multiple disk type support (a sketch follows this list):
• Direct Attached Storage (DAS)
• Traditional SAN LUN
• Cluster Shared Volumes
• Windows Server 2012 File Server Share
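A minimal BitLocker sketch under stated assumptions: a TPM is present, and the 'E:' volume and 'CONTOSO\FSCLUSTER$' cluster account are illustrative placeholders.

# Install the BitLocker feature (restart required)
Install-WindowsFeature -Name BitLocker -Restart
# OS volume: TPM-protected, encrypting used disk space only
Enable-BitLocker -MountPoint 'C:' -EncryptionMethod Aes256 -UsedSpaceOnly -TpmProtector
# Data volume: recovery password, plus the cluster account so cluster nodes can unlock it
Enable-BitLocker -MountPoint 'E:' -EncryptionMethod Aes256 -UsedSpaceOnly -RecoveryPasswordProtector
Add-BitLockerKeyProtector -MountPoint 'E:' -ADAccountOrGroupProtector -ADAccountOrGroup 'CONTOSO\FSCLUSTER$'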
Capability                    | Hyper-V (2012 R2) | vSphere Hypervisor | vSphere 5.5 Enterprise Plus
iSCSI/FC Support              | Yes           | Yes       | Yes
3rd Party Multipathing (MPIO) | Yes           | No        | Yes (VAMP) (1)
SAN Offload Capability        | Yes (ODX)     | No        | Yes (VAAI) (2)
Storage Virtualization        | Yes (Spaces)  | No        | Yes (vSAN) (3)
Storage Tiering               | Yes           | No        | Yes (4)
Network File System Support   | Yes (SMB 3.0) | Yes (NFS) | Yes (NFS)
Data Deduplication            | Yes           | No        | No
Storage Encryption            | Yes           | No        | No

1. vSphere API for Multipathing (VAMP) is only available in Enterprise & Enterprise Plus editions of vSphere 5.5.
2. vSphere API for Array Integration (VAAI) is only available in Enterprise & Enterprise Plus editions of vSphere 5.5.
3. vSphere vSAN is still in beta.
4. vSphere Flash Read Cache has a write-through caching mechanism only, so only reads are accelerated. vSAN also has SSD caching capabilities built in, acting as a read cache & write buffer.

vSphere Hypervisor / vSphere 5.x Ent+ Information: http://www.vmware.com/pdf/vsphere5/r55/vsphere-55-configuration-maximums.pdf, http://www.vmware.com/files/pdf/vsphere/VMware-vSphere-Platform-Whats-New.pdf and http://www.vmware.com/products/vsphere/compare.html
Integrated Solution for Network Card Resiliency
• Vendor agnostic and shipped inbox
• Provides local or remote management through Windows PowerShell or UI
• Enables teams of up to 32 network adapters
• Aggregates bandwidth from multiple network adapters whilst providing traffic failover in the event of NIC outage
• Includes multiple modes: switch dependent and switch independent
• Multiple traffic distribution algorithms: Hyper-V Switch Port, Hashing and Dynamic Load Balancing

[Diagram: teamed physical network adapters presenting virtual adapters to the host]

A minimal teaming sketch appears below.
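The NIC names below are illustrative placeholders for the physical adapters in the host.

# Switch-independent team using the Dynamic algorithm (new in 2012 R2)
New-NetLbfoTeam -Name 'Team1' -TeamMembers 'NIC1','NIC2' `
    -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic
# Verify team health and member status
Get-NetLbfoTeam -Name 'Team1'
Get-NetLbfoTeamMember -Team 'Team1'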
Connecting VMs to each other, and the outside world

3 Types of Hyper-V Network
• Private = VM to VM communication
• Internal = VM to VM to Host (loopback)
• External = VM to Outside & Host

Each vNIC can have multiple VLANs attached to it; however, if using the GUI, only a single VLAN ID can be specified. From PowerShell (trunk mode also requires a native VLAN ID):

Set-VMNetworkAdapterVlan -VMName VM01 -Trunk -AllowedVlanIdList 14,22,40 -NativeVlanId 0

Creating an external network transforms the chosen physical NIC into a switch and removes the TCP/IP stack and other protocols from it. An optional host vNIC is created to allow the host to communicate out of the physical NIC. A sketch of creating each switch type follows.
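A minimal sketch of the three switch types; the switch names and the 'Team1' adapter are illustrative.

# External switch bound to the team, keeping a host vNIC for management
New-VMSwitch -Name 'External' -NetAdapterName 'Team1' -AllowManagementOS $true
# Internal switch: VM to VM to host only
New-VMSwitch -Name 'Internal' -SwitchType Internal
# Private switch: VM to VM only
New-VMSwitch -Name 'PrivateNet' -SwitchType Private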
Layer-2 Network Switch for Virtual Machine Connectivity

Extensible Switch
• Virtual Ethernet switch that runs in the management OS of the host
• Exists on Windows Server Hyper-V, and Windows Client Hyper-V
• Managed programmatically
• Extensible by partners and customers
• Virtual machines connect to the extensible switch with their virtual network adapters
• Can bind to a physical NIC or team
• Bypassed by SR-IOV

[Diagram: virtual machines' virtual network adapters connecting through the Hyper-V Extensible Switch to a physical network adapter and physical switch]
Layer-2 Network Switch for Virtual Machine Connectivity

Granular In-box Capabilities
• Isolated (Private) VLANs (PVLANs)
• ARP/ND Poisoning (spoofing) protection
• DHCP Guard protection
• Virtual Port ACLs
• Trunk Mode to VMs
• Network Traffic Monitoring
• PowerShell & WMI interfaces for extensibility

[Diagram: virtual machines' virtual network adapters connecting through the Hyper-V Extensible Switch to a physical network adapter and physical switch]

A sketch of applying some of these settings follows.
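The VM names and subnet below are illustrative placeholders.

# Harden a VM's vNIC: drop rogue DHCP offers and router advertisements
Set-VMNetworkAdapter -VMName 'VM01' -DhcpGuard On -RouterGuard On
# Port ACL: deny this VM traffic to/from a remote subnet, both directions
Add-VMNetworkAdapterAcl -VMName 'VM01' -RemoteIPAddress '192.168.50.0/24' `
    -Direction Both -Action Deny
# Mirror the VM's traffic to a monitoring VM on the same switch
Set-VMNetworkAdapter -VMName 'VM01' -PortMirroring Source
Set-VMNetworkAdapter -VMName 'MonitorVM' -PortMirroring Destination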
Build Extensions for Capturing, Filtering & Forwarding

2 Platforms for Extensions
• Network Device Interface Specification (NDIS) filter drivers
• Windows Filtering Platform (WFP) callout drivers

Extensions
• NDIS filter drivers
• WFP callout drivers
• Ingress filtering
• Destination lookup and forwarding
• Egress filtering

[Diagram: Hyper-V Extensible Switch architecture – capture, filtering and forwarding extensions layered between the extension protocol and the extension miniport, linking the VM NICs, host NIC and physical NIC]
VM NIC
Build Extensions for Capturing,
Filtering & Forwarding
Many Key Features
Virtual Machine
Virtual Machine
Parent Partition
VM NIC
Host NIC
•
Extension monitoring & uniqueness
•
Extensions that learn VM life cycle
•
Extensions that can veto state changes
Extension Protocol
•
Multiple extensions on same switch
Capture
Extensions
Extension
A
Several Partner Solutions Available
•
Cisco – Nexus 1000V & UCS-VMFEX
•
NEC – ProgrammableFlow PF1000
•
5nine – Security Manager
•
InMon - SFlow
Virtual Switch
Filtering
Extensions
Extension
C
Forwarding
Extension
Extension
D
Extension Miniport
Physical NIC
Hyper-V Extensible Switch architecture
VM NIC
Advanced Networking Capability    | Hyper-V (2012 R2) | vSphere Hypervisor | vSphere 5.5 Enterprise Plus
Integrated NIC Teaming            | Yes | Yes            | Yes
Extensible Network Switch         | Yes | No             | Replaceable
Confirmed Partner Solutions       | 5   | N/A            | 2
Private Virtual LAN (PVLAN)       | Yes | No             | Yes (1)
ARP Spoofing Protection           | Yes | No             | vCloud/Partner (2)
DHCP Snooping Protection          | Yes | No             | vCloud/Partner (2)
Virtual Port ACLs                 | Yes | No             | vCloud/Partner (2)
Trunk Mode to Virtual Machines    | Yes | No             | Yes (3)
Port Monitoring                   | Yes | Per Port Group | Yes (3)
Port Mirroring                    | Yes | Per Port Group | Yes (3)

1. The vSphere Distributed Switch (required for PVLAN capability) is available only in the Enterprise Plus edition of vSphere 5.5 and is replaceable (by partners such as Cisco/IBM) rather than extensible.
2. ARP Spoofing, DHCP Snooping Protection & Virtual Port ACLs require the vCloud Networking & Security package, which is part of the vCloud Suite or a Partner solution, all of which are additional purchases.
3. Trunking VLANs to individual vNICs, and Port Monitoring and Mirroring at a granular level, require the vSphere Distributed Switch, which is available in the Enterprise Plus edition of vSphere 5.5.

vSphere Hypervisor / vSphere 5.x Ent+ Information: http://www.vmware.com/products/cisco-nexus-1000V/overview.html, http://www-03.ibm.com/systems/networking/switches/virtual/dvs5000v/, http://www.vmware.com/technical-resources/virtualization-topics/virtual-networking/distributed-virtual-switches.html, http://www.vmware.com/files/pdf/techpaper/Whats-New-VMware-vSphere51-Network-Technical-Whitepaper.pdf, http://www.cisco.com/en/US/prod/collateral/switches/ps9441/ps9902/data_sheet_c78-492971.html and http://www.vmware.com/products/vcloud-network-security
Comprehensive Network Management

Integrated management of the software-defined network
• Top-of-rack switch management and integration for configuration and compliance
• Logical network management: named networks that serve particular functions in your environment, e.g. backend
• IP address pool management and integration with IP address management
• Host and VM network switch management
• Load balancer integration and automated deployment
• Network virtualization deployment and management
Top of Rack Switch Integration

Synchronize & Integrate ToR Settings with VMM

Physical switch management and integration built into VMM, using an in-box or partner-supplied provider.

Works with switches running Open Management Infrastructure (OMI), communicating using WS-MAN.

Switch Management PowerShell Cmdlets provide a common management interface across multiple network vendors.

Automate common network management tasks.

Manage compliance between VMM, Hyper-V hosts & physical switches.
Logical Networks

Abstraction of Infrastructure Networks with VMM

Logical networks are named networks that serve particular functions, e.g. "Backend," "Frontend," or "Backup". They are used to organize and simplify network assignments.

A logical network is a container for network sites, IP subnet & VLAN information, and supports VLAN & PVLAN isolation.

Hosts & host groups can be associated with logical networks.

IP addresses can be assigned to host & VM NICs from static IP pools.
Static IP Pool Management in VMM

IP Address Management for Hosts & Virtual Machines

VMM can maintain centralized control of host & VM IP address assignment.

IP pools are defined and associated with a logical network & site.

VMM supports specifying an IP range, along with VIPs & IP address reservations.

Each IP pool can have Gateway, DNS & WINS configured.

IP address pools support both IPv4 and IPv6 addresses, but not in the same pool.

IP addresses are assigned on VM creation, and returned to the pool on VM deletion. A scripted sketch follows.
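A rough sketch assuming the VMM 2012 R2 PowerShell module (loaded by the VMM console); it mirrors the kind of script the console's View Script button emits, and every name, subnet and range here is an illustrative placeholder.

# Logical network with one site (subnet/VLAN) scoped to a host group
$ln = New-SCLogicalNetwork -Name 'Backend'
$site = New-SCLogicalNetworkDefinition -Name 'Backend_Site' -LogicalNetwork $ln `
    -VMHostGroup (Get-SCVMHostGroup -Name 'All Hosts') `
    -SubnetVLan (New-SCSubnetVLan -Subnet '10.0.0.0/24' -VLanID 100)
# Static IP pool with gateway and DNS for hosts and VMs on that site
New-SCStaticIPAddressPool -Name 'Backend_Pool' -LogicalNetworkDefinition $site `
    -Subnet '10.0.0.0/24' -IPAddressRangeStart '10.0.0.10' -IPAddressRangeEnd '10.0.0.250' `
    -DefaultGateway (New-SCDefaultGateway -IPAddress '10.0.0.1' -Automatic) `
    -DNSServer '10.0.0.2'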
The Logical Switch

Centralized Configuration of Network Adapters across Hosts

Combines key VMM networking constructs to standardize deployment across multiple hosts within the infrastructure:
• Uplink Port Profiles
• Virtual Port Profiles for vNICs
• Port Classifications for vNICs
• Switch Extensions

Logical Switches support compliance & remediation.

Logical Switches support host NIC teaming & converged networking.
[Diagram: a Logical Switch composed of a native port profile for uplinks, plus port classifications mapped to native port profiles for vNICs]

Uplink Port Profiles

Host Physical Network Adapter Configuration with VMM

Uplink Port Profile – centralized configuration of physical NIC settings that VMM will apply upon assigning a Logical Switch to a Hyper-V host.

Teaming – automatically created when assigned to multiple physical NICs, but the admin can select the LB algorithm & teaming mode.

Sites – assign the relevant network sites & logical networks that will be supported by this uplink port profile.
Virtual Port Profiles

Virtual Network Adapter Configuration with VMM

Virtual Port Profile – used to pre-configure VM or host vNICs with specific settings.

Offloading – admins can enable offload capabilities for a specific vNIC port profile. Dynamic VMQ, IPsec Task Offload & SR-IOV are available choices.

Security – admins can enable key Hyper-V security settings for the vNIC profile, such as DHCP Guard, or enable guest teaming.

QoS – admins can configure QoS bandwidth settings for the vNIC profile so that, when assigned to VMs, their traffic may be limited/guaranteed.
Increased efficiency of network processing on Hyper-V hosts

Without VMQ
• Hyper-V Virtual Switch is responsible for routing & sorting packets for VMs
• This leads to increased CPU processing, all focused on CPU0

With VMQ
• Physical NIC creates virtual network queues for each VM to reduce host CPU

With Dynamic VMQ
• Processor cores are dynamically allocated for a better spread of network traffic processing (see the sketch below)
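The adapter name and processor numbers below are illustrative, not tuning guidance.

# Inspect VMQ capability and current queue-to-processor placement
Get-NetAdapterVmq
Get-NetAdapterVmqQueue
# Steer queue processing away from core 0 (values illustrative)
Set-NetAdapterVmq -Name 'NIC1' -BaseProcessorNumber 2 -MaxProcessors 8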
Integrated with NIC hardware for increased performance
• Standard that allows PCI Express devices to be shared by multiple VMs
• More direct hardware path for I/O
• Reduces network latency and CPU utilization for processing traffic, and increases throughput
• SR-IOV capable physical NICs contain virtual functions that are securely mapped to VMs
• This bypasses the Hyper-V Extensible Switch
• Full support for Live Migration

[Diagram: a VM's network stack using a virtual function (VF) on an SR-IOV NIC directly, bypassing the synthetic NIC path through the Hyper-V Extensible Switch]
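A minimal SR-IOV sketch; the switch, adapter and VM names are illustrative, and the hardware/BIOS must support SR-IOV.

# SR-IOV must be chosen when the switch is created
New-VMSwitch -Name 'SRIOV-Switch' -NetAdapterName 'NIC1' -EnableIov $true
# Request a virtual function for the VM's adapter (IovWeight 0 = off, 1-100 = on)
Set-VMNetworkAdapter -VMName 'VM01' -IovWeight 50
# Confirm virtual functions are in use on the switch
Get-VMSwitch -Name 'SRIOV-Switch' | Select-Object IovEnabled, IovVirtualFunctionsInUse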
Achieve desired levels of networking performance

Bandwidth Management
• Establishes a bandwidth floor
• Assigns specified bandwidth for each type of traffic
• Helps to ensure fair sharing during congestion
• Can exceed quota with no congestion

2 Mechanisms (a sketch of both follows)
• Enhanced packet scheduler (software)
• Network adapter with DCB support (hardware)

[Diagram: relative minimum bandwidth assigns weights across tenants on one Hyper-V Extensible Switch (normal priority W=1, high priority W=2, critical W=5), while strict minimum bandwidth guarantees absolute floors (e.g. Bronze tenant 100 MB, Silver tenant 200 MB, Gold tenant 500 MB) on 1 Gbps uplinks, with bandwidth oversubscription possible across a NIC team]
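A sketch of the software packet scheduler path; the switch, team and VM names are illustrative placeholders.

# Relative weights: create the switch in Weight mode
New-VMSwitch -Name 'TenantSwitch' -NetAdapterName 'Team1' -MinimumBandwidthMode Weight
# Gold tenants get five times a bronze tenant's share under congestion
Set-VMNetworkAdapter -VMName 'GoldVM' -MinimumBandwidthWeight 5
Set-VMNetworkAdapter -VMName 'BronzeVM' -MinimumBandwidthWeight 1
# On a switch created with -MinimumBandwidthMode Absolute, strict floors (bps) apply instead
Set-VMNetworkAdapter -VMName 'GoldVM' -MinimumBandwidthAbsolute 500000000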
Port Classifications

Abstract Technical Depth from Virtual Network Adapters

Port Classifications – provide a global name for identifying different types of virtual network adapter port profiles.

Cross-Switch – a classification can be used across multiple logical switches, while the settings for the classification remain specific to each logical switch.

Simplification – similar to storage classification, port classification is used to abstract technical detail when deploying VMs with certain vNICs. Useful in self-service scenarios.
Constructing the Logical Switch

Combining Building Blocks to Standardize NIC Configuration

Simple Setup – define the name and whether SR-IOV will be used by VMs. SR-IOV can only be enabled at switch creation time.

Switch Extensions – pre-installed/configured extensions available for use with this Logical Switch are chosen at this stage.

Teaming – decide whether this logical switch will bind to individual NICs, or to NICs that VMM should team automatically.

Virtual Ports – define which port classifications and virtual port profiles can be used with this Logical Switch.
Deploying the Logical Switch

Applying Standardized Configuration Across Hosts

Assignment – VMM can assign logical switches directly to the Hyper-V hosts.

Teaming or No Teaming – your logical switch properties will determine whether multiple NICs are required.

Converged Networking – VMM can create host virtual network adapters for isolating host traffic types, e.g. Live Migration, CSV, SMB 3.0 storage, management, etc. It will also issue IP addresses from its IP pool. This is useful with hosts that have just 2 x 10GbE adapters but require multiple separate, resilient networks. A host-side sketch follows.
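A minimal sketch of the converged host vNICs VMM automates here, done directly on a host; the switch name, vNIC names, VLAN ID and weight are illustrative.

# Two converged host vNICs on the teamed switch, isolated by VLAN,
# with Live Migration guaranteed a share of bandwidth
Add-VMNetworkAdapter -ManagementOS -Name 'LiveMigration' -SwitchName 'TenantSwitch'
Add-VMNetworkAdapter -ManagementOS -Name 'CSV' -SwitchName 'TenantSwitch'
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName 'LiveMigration' -Access -VlanId 20
Set-VMNetworkAdapter -ManagementOS -Name 'LiveMigration' -MinimumBandwidthWeight 40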