FlexPod Datacenter with Citrix XenDesktop 7.1 and Citrix XenServer 6.2

Single Server and 2000-Seat Virtual Desktop Infrastructure with Citrix XenDesktop 7.1 Built on Cisco UCS B200-M3 Blade Servers with NetApp® FAS3200-Series and Citrix XenServer 6.2
Last Updated: June 5, 2014
Building Architectures to Solve Business Problems
About the Authors
Jeff Nichols, Technical Marketing Engineer, VDI Performance and Solutions Team, Cisco Systems
Jeff Nichols is a Cisco Unified Computing System architect, focusing on Virtual Desktop and
Application solutions with extensive experience with VMware ESX/ESXi, XenDesktop, XenApp and
Microsoft Remote Desktop Services. He has expert product knowledge in application, desktop and
server virtualization across all three major hypervisor platforms and supporting infrastructures including
but not limited to Windows Active Directory and Group Policies, User Profiles, DNS, DHCP and major
storage platforms.
Chris Rodriguez, Technical Marketing Engineer, NetApp
Chris Rodriguez (C-Rod) has been involved with Citrix since the late 1990s. He is a reference architect for
running Citrix XenDesktop and XenServer on NetApp storage. Chris has over 10 years of Enterprise
Storage and Citrix experience at various companies including NetApp, Network Computing Devices,
Dell and others. Previously, Chris had been in the field for three years with NetApp implementing
XenDesktop and XenServer running on NetApp storage. Chris's extensive field experience brings a great
deal of hands-on knowledge that can be shared with others.
David La Motta, Technical Marketing Engineer, NetApp
David La Motta is focused on developing, validating and supporting cloud-related solutions that include
NetApp products. He has also authored and served as architect on many NetApp software products,
most notably in virtualization and cloud. Before his current role, David was a software engineer at Cisco
Systems developing management interfaces for different transport mechanisms. He holds a Bachelor's
degree in Computer Science from the University of New Orleans.
Erick Arteaga, Sr. Software Test Engineer 2, Citrix
At Citrix Systems, Erick Arteaga is a Senior Software Test Engineer 2 with the Citrix Solutions Lab,
focusing on the testing and validation of end-to-end real-world Desktop and Application Virtualization
Solutions resulting in the creation of customer focused Reference Designs. He has years of experience
in the IT industry including server and desktop virtualization deployment and maintenance.
Hector Jhong, Manager 2, Product Development, Virtualization Solution, Citrix
At Citrix Systems, Hector Jhong is a Virtualization Solutions Manager for the Citrix Solutions Lab,
focusing on the design, testing and validation of end-to-end real-world customer Solutions resulting in
the creation of customer focused Reference Designs. He has years of software testing, software test
automation and virtualization experience.
Vadim Lebedev, Sr. Software Test Engineer 2, Citrix
At Citrix Systems, Vadim Lebedev is a Senior Software Test Engineer 2 with the Citrix Solutions Lab,
focusing on the testing and validation of end-to-end real-world Desktop and Application Virtualization
Solutions resulting in the creation of customer focused Reference Designs. He has years of experience
in server and desktop virtualization as well as being a former XenServer Escalation team member.
Acknowledgments
We would like to thank the following for their contribution to this Cisco Validated Design:
Mike Brennan, Sr. Technical Marketing Engineer, VDI Performance and Solutions Team Lead, Cisco
Systems
Mike Brennan is a Cisco Unified Computing System architect, focusing on Virtual Desktop
Infrastructure solutions with extensive experience with EMC VNX, VMware ESX/ESXi, XenDesktop
and Provisioning Services. He has expert product knowledge in application and desktop virtualization
across all three major hypervisor platforms, both major desktop brokers, Microsoft Windows Active
Directory, User Profile Management, DNS, DHCP and Cisco networking technologies.
Hardik Patel, Support Engineer, Cisco Systems
Hardik Patel is a Virtualization System Engineer at Cisco with SSVPG. Hardik has over 9 years of
experience with server virtualization and core applications in virtual environments, with a focus on the
design and implementation of systems and virtualization, management and administration, Cisco UCS,
and storage and network configurations. Hardik holds a Master's degree in Computer Science along with
various career-oriented certifications in virtualization, networking, and Microsoft technologies.
About Cisco Validated Design (CVD) Program
The CVD program consists of systems and solutions designed, tested, and documented to facilitate
faster, more reliable, and more predictable customer deployments. For more information visit
http://www.cisco.com/go/designzone.
ALL DESIGNS, SPECIFICATIONS, STATEMENTS, INFORMATION, AND RECOMMENDATIONS (COLLECTIVELY, "DESIGNS") IN THIS MANUAL ARE PRESENTED "AS IS," WITH ALL FAULTS. CISCO AND ITS SUPPLIERS DISCLAIM ALL WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE WARRANTY OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT OR ARISING
FROM A COURSE OF DEALING, USAGE, OR TRADE PRACTICE. IN NO EVENT SHALL CISCO OR ITS
SUPPLIERS BE LIABLE FOR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, OR INCIDENTAL DAMAGES,
INCLUDING, WITHOUT LIMITATION, LOST PROFITS OR LOSS OR DAMAGE TO DATA ARISING OUT OF
THE USE OR INABILITY TO USE THE DESIGNS, EVEN IF CISCO OR ITS SUPPLIERS HAVE BEEN ADVISED
OF THE POSSIBILITY OF SUCH DAMAGES.
THE DESIGNS ARE SUBJECT TO CHANGE WITHOUT NOTICE. USERS ARE SOLELY RESPONSIBLE FOR
THEIR APPLICATION OF THE DESIGNS. THE DESIGNS DO NOT CONSTITUTE THE TECHNICAL OR
OTHER PROFESSIONAL ADVICE OF CISCO, ITS SUPPLIERS OR PARTNERS. USERS SHOULD CONSULT
THEIR OWN TECHNICAL ADVISORS BEFORE IMPLEMENTING THE DESIGNS. RESULTS MAY VARY
DEPENDING ON FACTORS NOT TESTED BY CISCO.
CCDE, CCENT, Cisco Eos, Cisco Lumin, Cisco Nexus, Cisco StadiumVision, Cisco TelePresence, Cisco
WebEx, the Cisco logo, DCE, and Welcome to the Human Network are trademarks; Changing the Way We
Work, Live, Play, and Learn and Cisco Store are service marks; and Access Registrar, Aironet, AsyncOS,
Bringing the Meeting To You, Catalyst, CCDA, CCDP, CCIE, CCIP, CCNA, CCNP, CCSP, CCVP, Cisco, the
Cisco Certified Internetwork Expert logo, Cisco IOS, Cisco Press, Cisco Systems, Cisco Systems Capital,
the Cisco Systems logo, Cisco Unity, Collaboration Without Limitation, EtherFast, EtherSwitch, Event Center, Fast Step, Follow Me Browsing, FormShare, GigaDrive, HomeLink, Internet Quotient, IOS, iPhone,
iQuick Study, IronPort, the IronPort logo, LightStream, Linksys, MediaTone, MeetingPlace, MeetingPlace
Chime Sound, MGX, Networkers, Networking Academy, Network Registrar, PCNow, PIX, PowerPanels,
ProConnect, ScriptShare, SenderBase, SMARTnet, Spectrum Expert, StackWise, The Fastest Way to
Increase Your Internet Quotient, TransPath, WebEx, and the WebEx logo are registered trademarks of
Cisco Systems, Inc. and/or its affiliates in the United States and certain other countries.
All other trademarks mentioned in this document or website are the property of their respective owners.
The use of the word partner does not imply a partnership relationship between Cisco and any other company. (0809R)
© 2014 Cisco Systems, Inc. All rights reserved
FlexPod Datacenter with Citrix XenDesktop 7.1 and Citrix XenServer 6.2
Overview
About this Document
This document provides a reference architecture for a 2000-seat virtual desktop infrastructure using
Citrix XenDesktop 7.1, built on Cisco UCS B200 M3 blade servers with the NetApp FAS3250 and the Citrix
XenServer 6.2 SP1 hypervisor platform.
The landscape of desktop virtualization is changing constantly. New, high performance Cisco UCS
Blade Servers and Cisco UCS unified fabric combined with the latest generation NetApp storage
controllers running NetApp Clustered Data ONTAP results in a more compact, powerful, reliable and
efficient platform.
In addition, the advances in Citrix XenDesktop 7.1, which now incorporates traditional hosted virtual
Windows 7 or Windows 8 desktops, hosted applications, and hosted shared Server 2008 R2 or Server 2012 R2
server desktops (formerly delivered by Citrix XenApp), provide unparalleled scale and management
simplicity while extending the Citrix HDX FlexCast models to additional mobile devices.
This document provides the architecture and design of a virtual desktop infrastructure for 2000 mixed
use-case users. The infrastructure is 100% virtualized on XenServer 6.2 SP1, with third-generation Cisco
UCS B-Series B200 M3 blade servers booting through the Fibre Channel Protocol (FCP) from a clustered
NetApp FAS3250 storage array. The virtual desktops are powered using Citrix Provisioning Server 7.1
and Citrix XenDesktop 7.1, with a mix of hosted shared desktops (72.5%) and pooled hosted virtual
Windows 7 desktops (27.5%) to support the user population. Where applicable, the document provides
best practice recommendations and sizing guidelines for customer deployments of XenDesktop 7.1 on
the Cisco Unified Computing System.
Solution Component Benefits
Each of the components of the overall solution materially contributes to the value of functional design
contained in this document.
Benefits of Cisco Unified Computing System
Cisco Unified Computing System™ (UCS) is the first converged data center platform that combines
industry-standard, x86-architecture servers with networking and storage access into a single converged
system. The system is entirely programmable using unified, model-based management to simplify and
speed deployment of enterprise-class applications and services running in bare-metal, virtualized, and
cloud computing environments.
Benefits of the Unified Computing System include:
Architectural Flexibility
• Cisco UCS B-Series blade servers for infrastructure and virtual workload hosting
• Cisco UCS C-Series rack-mount servers for infrastructure and virtual workload hosting
• Cisco UCS 6200 Series second-generation fabric interconnects that provide unified blade, network, and storage connectivity
• Cisco UCS 5108 Blade Chassis that provide the perfect environment for multi-server-type, multi-purpose workloads in a single containment

Infrastructure Simplicity
• Converged, simplified architecture drives increased IT productivity
• Cisco UCS management results in flexible, agile, high-performance, self-integrating information technology with faster ROI
• Fabric Extender technology reduces the number of system components to purchase, configure, and maintain
• Standards-based, high-bandwidth, low-latency, virtualization-aware unified fabric delivers high density and an excellent virtual desktop user experience

Business Agility
• Model-based management means faster deployment of new capacity for rapid and accurate scalability
• Scale up to 20 chassis and up to 160 blades in a single Cisco UCS management domain
• Scale to multiple Cisco UCS domains with Cisco UCS Central, within and across data centers globally
• Deploy and manage storage, network, and Cisco UCS server infrastructure with Cisco UCS Director
Benefits of Cisco Nexus Physical Switching
The Cisco Nexus product family includes lines of physical unified-port Layer 2 10-GE switches, fabric
extenders, and virtual distributed switching technologies. In this study, we utilized Cisco Nexus 5548UP
Unified Port Layer 2 switches.
The Cisco Nexus 5548UP Switch delivers innovative architectural flexibility, infrastructure simplicity,
and business agility, with support for networking standards. For traditional, virtualized, unified, and
high-performance computing (HPC) environments, it offers a long list of IT and business advantages,
including:
Architectural Flexibility
• Unified ports that support traditional Ethernet, Fibre Channel (FC), and Fibre Channel over Ethernet (FCoE)
• Synchronizes system clocks with accuracy of less than one microsecond, based on IEEE 1588
• Offers converged fabric extensibility, based on emerging standard IEEE 802.1BR, with the Fabric Extender (FEX) Technology portfolio

Infrastructure Simplicity
• Common high-density, high-performance, data-center-class, fixed-form-factor platform
• Consolidates LAN and storage
• Supports any transport over an Ethernet-based fabric, including Layer 2 and Layer 3 traffic
• Supports storage traffic, including iSCSI, NAS, FC, RoE, and IBoE
Business Agility
• Meets diverse data center deployments on one platform
• Provides rapid migration and transition for traditional and evolving technologies
• Offers performance and scalability to meet growing business needs

Specifications at-a-Glance
• A 1-rack-unit, 1/10 Gigabit Ethernet switch
• 32 fixed unified ports on the base chassis and one expansion slot, totaling 48 ports
• The expansion slot can support any of three modules: unified ports, 1/2/4/8 native Fibre Channel, and Ethernet or FCoE
• Throughput of up to 960 Gbps
Benefits of NetApp Clustered Data ONTAP Storage Controllers
With the release of NetApp clustered Data ONTAP, NetApp was the first to market with enterprise-ready,
unified scale-out storage. Developed from a solid foundation of proven Data ONTAP technology and
innovation, clustered Data ONTAP is the basis for virtualized shared storage infrastructures that are
architected for nondisruptive operations over the lifetime of the system. For details on how to configure
clustered Data ONTAP with Citrix XenServer, refer to TR-3732: Citrix XenServer and NetApp Storage
Best Practices.
All clustering technologies follow a common set of guiding principles. These principles include the
following:
• Nondisruptive operation. The key to efficiency and the basis of clustering is the ability to make sure that the cluster does not fail, ever.
• Virtualized access is the managed entity. Direct interaction with the nodes that make up the cluster is in and of itself a violation of the term cluster. During the initial configuration of the cluster, direct node access is a necessity; however, steady-state operations are abstracted from the nodes as the user interacts with the cluster as a single entity.
• Data mobility and container transparency. The end result of clustering (that is, the nondisruptive collection of independent nodes working together and presented as one holistic solution) is the ability of data to move freely within the boundaries of the cluster.
• Balance load across clustered storage controller nodes with no interruption to the end user.
• Mix models of hardware in a cluster for scaling up or scaling out. You can start with a lower-cost model and, when demand requires, move to higher-cost models of storage controllers without losing your investment.
• Delegated management and ubiquitous access. In large, complex clusters, the ability to delegate or segment features and functions into containers that can be acted upon independently of the cluster means that the workload can be isolated; it is important to note that the cluster architecture itself must not impose these isolations. This should not be confused with security concerns around the content being accessed.
Scale-Out
Data centers require agility. In a data center, each storage controller has CPU, memory, and disk shelf
limits. Scale-out means that as the storage environment grows, additional controllers can be added
seamlessly to the resource pool residing on a shared storage infrastructure. Host and client connections
as well as storage repositories can be moved seamlessly and non-disruptively anywhere within the
resource pool.
The benefits of scale-out are:
• Nondisruptive operations
• Ability to keep adding thousands of users to the virtual desktop environment without downtime
• Operational simplicity and flexibility
NetApp clustered Data ONTAP is the first product offering a complete scale-out solution: an intelligent,
adaptable, always-available storage infrastructure that utilizes proven storage efficiency for today's
highly virtualized environments.
Figure 1 Scale-Out
Multiprotocol Unified Storage
Multiprotocol unified architecture is the ability to support multiple data access protocols concurrently
in the same storage system, over a whole range of different controller and disk storage types. Data
ONTAP 7G and 7-Mode have long been capable of this, and now clustered Data ONTAP supports an
even wider range of data access protocols.
The supported protocols are:
• NFS v3, v4, and v4.1, including pNFS
• SMB 1, 2, 2.1, and 3, including support for nondisruptive failover in Microsoft Hyper-V
• iSCSI
• Fibre Channel
• FCoE
Multi-Tenancy
Isolated servers and data storage can result in low utilization, gross inefficiency, and inability to respond
to changing business needs. Cloud architecture, delivering IT as a service (ITaaS), can overcome these
limitations while reducing future IT expenditure.
The storage virtual machine (SVM), formerly called Vserver, is the primary logical cluster component.
Each SVM can have its own volumes, logical interfaces, and protocol access. With clustered Data ONTAP,
each tenant's virtual desktops and data can be separated onto different SVMs. The administrator of each
SVM has the rights to provision volumes and perform other SVM-specific operations. This is particularly
advantageous for service providers or any multi-tenant environment in which workload separation is
desired.
Figure 2 shows the multi-tenancy concept in clustered Data ONTAP.
Figure 2 Multi-tenancy Concept
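As an illustration of how a tenant SVM is instantiated, a clustered Data ONTAP 8.2-style command sequence might look like the following sketch; the SVM name, root volume, and aggregate are hypothetical examples, not values from this design:

    vserver create -vserver tenantA -rootvolume tenantA_root -aggregate aggr1_node01 -ns-switch file -rootvolume-security-style unix
    vserver show -vserver tenantA

Each tenant created this way owns its volumes, LIFs, and protocol configuration, which is what enables the workload separation described above.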
NetApp Storage Cluster Components
It is important to address some key terms early in the text to establish a common knowledge baseline for
the remainder of this publication.
• Cluster. The information boundary and domain within which information moves. The cluster is where high availability is defined between physical nodes and where SVMs operate.
• Node. A physical entity running Data ONTAP. This physical entity can be a traditional NetApp FAS controller, a supported third-party array front-ended by a V-Series controller, or NetApp's virtual storage appliance (VSA), Data ONTAP-V™.
• SVM, formerly called Vserver. A secure virtualized storage controller that behaves and appears to the end user as a physical entity (similar to a VM). It is connected to one or more nodes through internal networking relationships (covered later in this document). It is the highest visible element to an external consumer, abstracting the layer of interaction from the physical nodes. As such, it is the entity used to provision cluster resources, and it can be compartmentalized in a secured manner to prevent access to other parts of the cluster.
Clustered Data ONTAP Networking Concepts
The physical interfaces on a node are referred to as ports. IP addresses are assigned to logical interfaces
(LIFs). LIFs are logically connected to a port in much the same way that VM virtual network adapters
and VMkernel ports connect to physical adapters, except without the constructs of virtual switches and
port groups. Physical ports can be grouped into interface groups. VLANs can be created on top of
physical ports or interface groups. LIFs can be associated with a port, interface group, or VLAN.
Figure 3 shows the clustered Data ONTAP network concept.
Figure 3 Ports and LIFs Example
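To make these relationships concrete, the following clustered Data ONTAP commands sketch an interface group built from two physical ports, a VLAN on top of it, and a data LIF homed on that VLAN; node names, port names, the VLAN ID, and addresses are illustrative assumptions only:

    network port ifgrp create -node fas3250-01 -ifgrp a0a -distr-func ip -mode multimode_lacp
    network port ifgrp add-port -node fas3250-01 -ifgrp a0a -port e1a
    network port ifgrp add-port -node fas3250-01 -ifgrp a0a -port e1b
    network port vlan create -node fas3250-01 -vlan-name a0a-804
    network interface create -vserver Infra_Vserver -lif nfs_lif01 -role data -data-protocol nfs -home-node fas3250-01 -home-port a0a-804 -address 192.168.4.11 -netmask 255.255.255.0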
Cluster Management
For complete and consistent management of storage and SAN infrastructure, NetApp recommends using
the tools listed in Table 1, unless specified otherwise.
Table 1 Recommended Tools

Task                                         Management Tools
SVM management                               OnCommand® System Manager
Switch management and zoning                 Switch vendor GUI or CLI interfaces
Volume and LUN provisioning and management   NetApp Virtual Storage Console for Citrix XenServer
Benefits of Citrix XenServer 6.2 SP1
Cloud-proven virtualization that is used by the world's largest clouds, integrates directly with Citrix
CloudPlatform and Apache CloudStack, and is built on an open and resilient cloud architecture.
Open source, community-driven virtualization that accelerates innovation, feature richness, and third-party
integration from a strong community of users, ecosystem partners, and industry-leading contributors.
Value leader without compromise: a cost-effective, enterprise-ready, cloud-proven platform that is
trusted to power the largest clouds, run mission-critical applications, and support large-scale desktop
virtualization deployments.
Virtualize any infrastructure, from clouds to servers to desktops, with a proven, high-performance
virtualization platform.
Benefits of Citrix XenDesktop 7.1
There are many reasons to consider a virtual desktop solution. An ever-growing and diverse base of
users, an expanding number of traditional desktops, an increase in security mandates and government
regulations, and the introduction of Bring Your Own Device (BYOD) initiatives are factors that add to
the cost and complexity of delivering and managing desktop and application services.
Citrix XenDesktop™ 7 transforms the delivery of Microsoft Windows apps and desktops into a secure,
centrally managed service that users can access on any device, anywhere. The release focuses on
delivering these benefits:
• Mobilizing Microsoft Windows application delivery, bringing thousands of corporate applications to mobile devices with a native-touch experience and high performance
• Reducing costs with simplified and centralized management and automated operations
• Securing data by centralizing information and effectively controlling access
Citrix XenDesktop 7 promotes mobility, allowing users to search for and subscribe to published
resources, enabling a service delivery model that is cloud-ready.
This release follows a new unified FlexCast 2.0 architecture for provisioning all Windows apps and
desktops either on hosted-shared RDS servers or VDI-based virtual machines. The new architecture
combines simplified and integrated provisioning with personalization tools. Whether a customer is
creating a system to deliver just apps or complete desktops, Citrix XenDesktop 7 leverages common
policies and cohesive tools to govern infrastructure resources and access.
Audience
This document describes the architecture and deployment procedures of an infrastructure comprised of
Cisco, NetApp, and Citrix hypervisor and desktop virtualization products. The intended audience of this
document includes, but is not limited to, sales engineers, field consultants, professional services, IT
managers, partner engineering, and customers who want to deploy the solution described in this
document.
Summary of Main Findings
The combination of technologies from Cisco Systems, Inc., Citrix Systems, Inc. and NetApp, Inc.
produced a highly efficient, robust and affordable desktop virtualization solution for a hosted virtual
desktop and hosted shared desktop mixed deployment supporting different use cases. Key components
of the solution include:
• Cisco's Desktop Virtualization Converged Design with FlexPod, providing our customers with a turnkey physical and virtual infrastructure specifically designed to support 2000 desktop users in a highly available, proven design. This architecture is well suited for deployments of all sizes, including large departmental and enterprise deployments of virtual desktop infrastructure.
• More power, same size. The Cisco UCS B200 M3 half-width blade with dual 10-core 2.7 GHz Intel Ivy Bridge (E5-2680 v2) processors and 384 GB of memory supports approximately 25% more virtual desktop workloads than the previously released Sandy Bridge processors on the same hardware. The Intel Xeon E5-2680 v2 10-core processors used in this study provided a balance between increased per-blade capacity and cost.
• Fault tolerance with high availability built into the design. The 2000-user design is based on two Cisco Unified Computing System chassis with twelve Cisco UCS B200 M3 blades for virtualized desktop workloads and two B200 M3 blades for virtualized infrastructure workloads. The design provides N+1 server fault tolerance for hosted virtual desktops, hosted shared desktops, and infrastructure services.
• An aggressive boot scenario stress tested to the limits. The 2000-user mixed hosted virtual desktop and hosted shared desktop environment booted and registered with the XenDesktop 7.1 Delivery Controllers in under 15 minutes, providing our customers with an extremely fast, reliable cold-start desktop virtualization system.
• Simulated login storms stress tested to maximum capacity. All 2000 simulated users logged in and started running workloads up to steady state in 30 minutes without overwhelming the processors, exhausting memory, or exhausting the storage subsystems, providing customers with a desktop virtualization system that can easily handle the most demanding login and startup storms.
• Tier 0 storage on the Cisco UCS blade servers, in the form of two 400 GB SSDs in a RAID 0 array, is capable of offloading the non-persistent, high-IO Citrix Provisioning Services write cache drives for pooled Windows 7 hosted virtual desktops and Windows Server 2012 hosted shared desktop sessions, thereby extending the capabilities of the NetApp FAS3250 storage system.
• Ultra-condensed computing for the datacenter. The rack space required to support the 2000-user system is a single rack of approximately 32 rack units, conserving valuable data center floor space.
• Pure virtualization: this CVD presents a validated design that is 100% virtualized on XenServer 6.2 SP1. All of the virtual desktops, user data, profiles, and supporting infrastructure components, including Active Directory, Citrix Provisioning Servers, Microsoft SQL Servers, Citrix XenDesktop Delivery Controllers, and Citrix XenDesktop RDS (XenApp) servers, were hosted as virtual machines. This provides customers with complete flexibility for maintenance and capacity additions, because the entire system runs on the FlexPod converged infrastructure with stateless Cisco UCS blade servers and NetApp unified storage with clustered Data ONTAP.
• Industry leadership with the new Cisco UCS Manager 2.1(3a) software that simplifies scaling, guarantees consistency, and eases maintenance. Cisco's ongoing development efforts with Cisco UCS Manager, Cisco UCS Central, and Cisco UCS Director ensure that customer environments are consistent locally, across Cisco UCS domains, and across the globe. Our software suite offers increasingly simplified operational and deployment management, and it continues to widen the span of control for customer organizations' subject matter experts in compute, storage, and network.
• 10G unified fabric validation on second-generation 6200 Series Fabric Interconnects as Cisco runs more challenging workload testing, while maintaining unsurpassed user response times.
• NetApp FAS with clustered Data ONTAP provides industry-leading storage solutions that efficiently handle the most demanding IO bursts (such as login storms), profile management, and user data management; provide VM backup and restores; deliver simple and flexible business continuance; and help reduce storage cost per desktop.
• NetApp FAS provides a very simple storage architecture for hosting all user data components (VMs, profiles, user data) on the same storage array.
• The NetApp clustered Data ONTAP system enables users to seamlessly add, upgrade, or remove storage infrastructure to meet the needs of the virtual desktops.
• NetApp Virtual Storage Console (VSC) for XenServer has deep integration with Citrix XenCenter. This provides easy, one-click automation for key storage tasks such as storage repository provisioning, storage resizing, data deduplication, and backup and recovery, directly from within XenCenter.
• Latest and greatest virtual desktop and application product. Citrix XenDesktop™ 7.1 follows a new unified product architecture that supports both hosted-shared desktops and applications (RDS) and complete virtual desktops (VDI). This new XenDesktop release simplifies tasks associated with large-scale VDI management. This modular solution supports seamless delivery of Windows apps and desktops as the number of users increases. In addition, HDX enhancements help to optimize performance and improve the user experience across a variety of endpoint device types, from workstations to mobile devices including laptops, tablets, and smartphones.
• Optimized to achieve the best possible performance and scale. For hosted shared desktop sessions, the best performance was achieved when the number of vCPUs assigned to the XenDesktop 7 RDS virtual machines did not exceed the number of hyper-threaded cores available on the server. In other words, maximum performance is obtained when not over-committing the CPU resources for the virtual machines running RDS.
• Provisioning desktop machines made easy. Citrix Provisioning Services created hosted virtual desktops as well as hosted shared desktops for this solution, using a single method for both: the PVS XenDesktop Setup Wizard.
Architecture
Hardware Deployed
The architecture deployed is highly modular. While each customer's environment might vary in its exact
configuration, once the reference architecture contained in this document is built, it can easily be scaled
as requirements and demands change. This includes scaling both up (adding additional resources within
a Cisco UCS Domain) and out (adding additional Cisco UCS Domains and NetApp FAS Storage arrays).
The 2000-user XenDesktop 7 solution includes Cisco networking, Cisco UCS and NetApp FAS storage,
which fits into a single data center rack, including the access layer network switches.
This validated design document details the deployment of the 2000-user configurations for a mixed
XenDesktop workload featuring the following software:
• Citrix XenDesktop 7.1 Pooled Hosted Virtual Desktops with PVS write cache on CIFS
• Citrix XenDesktop 7.1 Hosted Shared Desktops with PVS write cache on CIFS
• Citrix Provisioning Server 7.1
• Citrix User Profile Manager
• Citrix StoreFront 2.1
• Citrix XenServer 6.2 SP1 hypervisor
• Microsoft Windows Server 2012 and Windows 7 32-bit virtual machine operating systems
• Microsoft SQL Server 2012 SP1
Figure 4 Workload Architecture
The workload contains the following hardware, as shown in Figure 4:
• Two Cisco Nexus 5548UP Layer 2 access switches
• Two Cisco UCS 6248UP Fabric Interconnects
• Two Cisco UCS 5108 Blade Server Chassis with two 2204XP IO Modules per chassis
• Four Cisco UCS B200 M3 blade servers with Intel E5-2680 v2 processors, 384 GB RAM, and VIC1240 mezzanine cards for the 550 hosted Windows 7 virtual desktop workloads, with N+1 server fault tolerance
• Eight Cisco UCS B200 M3 blade servers with Intel E5-2680 v2 processors, 256 GB RAM, and VIC1240 mezzanine cards for the 1450 hosted shared Windows Server 2012 desktop workloads, with N+1 server fault tolerance
• Two Cisco UCS B200 M3 blade servers with Intel E5-2650 processors, 128 GB RAM, and VIC1240 mezzanine cards for the virtualized infrastructure workloads
• Two-node NetApp FAS3250 dual-controller storage system running clustered Data ONTAP, with 4 disk shelves (2 shelves per node) and with converged and 10GE ports for FC and NFS/CIFS connectivity, respectively
• (Not shown) One Cisco UCS 5108 Blade Server Chassis with 3 Cisco UCS B250 M3 blade servers with Intel E5-2650 processors, 192 GB RAM, and VIC1240 mezzanine cards for the Login VSI launcher infrastructure

The NetApp FAS3250 disk shelf configurations are detailed in the section NetApp Storage Architecture Design.
Logical Architecture
The logical architecture of the validated design supports 2000 users within two chassis and fourteen
blades, providing physical redundancy for the chassis and blade servers for each workload. Table 2
outlines all the servers in the configuration.
Table 2 Infrastructure Architecture

Server Name          Location              Purpose
HSD-01, 03, 05, 07   Physical – Chassis 1  XenDesktop 7.1 HSD on XenServer 6.2 SP1
HVD-01, 03           Physical – Chassis 1  XenDesktop 7.1 HVD on XenServer 6.2 SP1
HSD-02, 04, 06, 08   Physical – Chassis 2  XenDesktop 7.1 HSD on XenServer 6.2 SP1
HVD-02, 04           Physical – Chassis 2  XenDesktop 7.1 HVD on XenServer 6.2 SP1
XenAD                Virtual – INFRA-1     Active Directory Domain Controller
XenDesktop1          Virtual – INFRA-1     XenDesktop 7.1 controller
XenPVS1              Virtual – INFRA-1     Provisioning Services 7.1 streaming server
XenStoreFront1       Virtual – INFRA-1     StoreFront Services server
XDSQL1               Virtual – INFRA-1     SQL Server (clustered)
XenLic               Virtual – INFRA-1     XenDesktop 7.1 license server
XenAD1               Virtual – INFRA-2     Active Directory Domain Controller
XenDesktop2          Virtual – INFRA-2     XenDesktop 7.1 controller
XenPVS2              Virtual – INFRA-2     Provisioning Services 7.1 streaming server
XenPVS3              Virtual – INFRA-2     Provisioning Services 7.1 streaming server
XenStoreFront2       Virtual – INFRA-2     StoreFront Services server
XDSQL2               Virtual – INFRA-2     SQL Server (clustered)
XenVSC               Virtual – INFRA-2     NetApp VSC server
Software Revisions
This section includes the software versions of the primary products installed in the environment.
Table 3 Software Revisions

Vendor   Product                                 Version
Cisco    UCS Component Firmware                  2.1(3a)
Cisco    UCS Manager                             2.1(3a)
Citrix   XenDesktop                              7.1.0.4033
Citrix   Provisioning Services                   7.1.0.4022
Citrix   StoreFront Services                     2.1.0.17
NetApp   Virtual Storage Console for XenServer   2.0.1
Configuration Guidelines
The 2000-user Citrix XenDesktop 7.1 solution described in this document provides details for
configuring a fully redundant, highly available configuration. The configuration guidelines indicate
which redundant component (A or B) is being configured with each step. For example, Nexus A and
Nexus B identify the pair of Cisco Nexus switches that are configured. The Cisco UCS Fabric
Interconnects are configured similarly.
This document is intended to allow the reader to configure the Citrix XenDesktop 7.1 customer
environment as a stand-alone solution.
NetApp Configuration Guidelines
The following is a summary of the NetApp best practices discussed in this document.
Storage configuration:
1. Make sure the root volume is on its own three-drive aggregate.
2. Create one data aggregate per controller.
3. Create multiple volumes on each storage controller (node) for HSD and HVD. As a recommendation, limit each volume to 400 VDI sessions.
4. For a switchless storage cluster, make sure that the switchless option is set.
5. Create load-sharing mirrors for all storage virtual machines' root volumes.
6. Create a minimum of one logical interface (LIF) per volume (storage repository).
7. Create LIF failover groups, assign them to LIFs, and enable the failover groups assigned to the LIFs.
8. Assign the same port on each clustered storage node to the same LIF.
9. Use the latest release of clustered Data ONTAP.
10. Use the latest release of shelf firmware and disk firmware.
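As a minimal sketch of several of these steps in clustered Data ONTAP 8.2-style CLI (object names, disk counts, sizes, and the mirror schedule are illustrative assumptions, not validated values from this design):

    aggr create -aggregate aggr1_node01 -node fas3250-01 -diskcount 23
    volume create -vserver Infra_Vserver -volume hvd_vol01 -aggregate aggr1_node01 -size 1TB -junction-path /hvd_vol01
    volume create -vserver Infra_Vserver -volume root_vol_m01 -aggregate aggr1_node01 -size 1GB -type DP
    snapmirror create -source-path Infra_Vserver:root_vol -destination-path Infra_Vserver:root_vol_m01 -type LS -schedule 15min
    snapmirror initialize-ls-set -source-path Infra_Vserver:root_vol
    network interface failover-groups create -failover-group fg-nfs-804 -node fas3250-01 -port a0a-804
    network interface modify -vserver Infra_Vserver -lif nfs_lif01 -failover-group fg-nfs-804
    set -privilege advanced
    network options switchless-cluster modify true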
Networking configuration for storage:
1. Switch ports connected to the NetApp storage controllers need to be set as spanning-tree edge ports so that spanning tree is effectively turned off on those ports; also make sure that PortFast is enabled.
2. Set flow control to none on the switch, storage controller, and XenServer ports.
3. Make sure that "suspend-individual" is set to "no" on the switch.
4. Use jumbo frames on the NFS data network.
5. The NFS data network should be nonroutable.
6. Segregate the CIFS network and NFS data network onto different ports/ifgrps to eliminate the possibility of MTU mismatch errors.
7. Run the data network (Ethernet) on dedicated 10GbE cards (not UTA/CNA cards) in the storage controllers.
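On the Cisco Nexus 5548UP side, a minimal sketch of these settings might look like the following; the port-channel and interface numbers and the VLAN list are illustrative, and note that on the Nexus 5500 platform jumbo frames are enabled through a network-qos policy rather than per-interface MTU:

    policy-map type network-qos jumbo
      class type network-qos class-default
        mtu 9216
    system qos
      service-policy type network-qos jumbo

    interface port-channel11
      description ifgrp a0a on storage node 01
      switchport mode trunk
      switchport trunk allowed vlan 801,804
      spanning-tree port type edge trunk
      no lacp suspend-individual

    interface Ethernet1/3
      description storage node 01 port e1a
      switchport mode trunk
      switchport trunk allowed vlan 801,804
      flowcontrol receive off
      flowcontrol send off
      channel-group 11 mode active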
Citrix considerations for storage:
1. Use NetApp Virtual Storage Console (VSC) for Citrix XenServer to provision the storage repositories on the storage.
2. Use the VSC for resizing or applying deduplication to the storage repositories.
3. Use NFS volumes for the storage repositories.
4. Do not dedupe the write cache volumes on the storage.
5. Use dedupe on the infrastructure volumes.
6. Thin provision the write cache and infrastructure volumes at the storage layer.
7. Consider the use of SMB3 for hosting the PVS vDisk. SMB3 uses persistent file handles, which makes it more resilient to failures.
8. Use a profile manager for profiles on CIFS shares; NetApp recommends Citrix UPM.
9. Use redirected folders for the home directories on the CIFS shares.
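In clustered Data ONTAP terms, items 4 through 6 translate into per-volume settings along these lines (volume and aggregate names are hypothetical; in practice the VSC performs these operations for you):

    volume create -vserver Infra_Vserver -volume pvs_wc_vol01 -aggregate aggr1_node01 -size 2TB -space-guarantee none -junction-path /pvs_wc_vol01
    volume efficiency on -vserver Infra_Vserver -volume infra_vol01
    volume efficiency off -vserver Infra_Vserver -volume pvs_wc_vol01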
Monitoring, management, and sizing:
1. NetApp recommends Cisco UCS Director for managing the server, storage, and switch infrastructure.
2. NetApp recommends OnCommand Balance to monitor VDI I/O from guests to storage.
3. Have a NetApp system engineer or a NetApp partner use the NetApp SPM sizing tool to size the virtual desktop solution. When sizing CIFS, NetApp recommends sizing with a heavy user workload. For sizing storage for this CVD, the following assumptions were made:
– 80% CIFS user concurrency
– 10 GB per user for home directory space, with 35% deduplication space savings
– Each VM used 2 GB of RAM. PVS write cache is sized at 5 GB per desktop for non-persistent/pooled desktops and 2 GB for persistent desktops with Personal vDisk.
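To illustrate how these assumptions combine for the 2000-user design (a rough sketch only; authoritative sizing should come from the SPM tool): 2000 users × 10 GB of home directory space is 20 TB of logical CIFS data, or roughly 20 TB × (1 − 0.35) ≈ 13 TB of physical capacity after deduplication, with the 80% concurrency figure (1600 active CIFS users) driving the performance side of the sizing; the PVS write cache for 2000 pooled desktops at 5 GB each adds about 10 TB of logical, thin-provisioned space.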
VLAN
The VLAN configuration recommended for the environment includes a total of six VLANs as outlined
in Table 4.
Table 4 VLAN Configuration

VLAN Name   VLAN ID   Use
Default     6         Native VLAN
VM-Infra    803       Infrastructure and virtual machine network
MGMT-IB     801       In-band management network
STORAGE     804       IP storage VLAN for NFS and CIFS
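On the Cisco Nexus 5548UP switches, defining these VLANs is a short exercise; a minimal sketch based on Table 4 (the native VLAN is handled in the trunk configuration):

    vlan 801
      name MGMT-IB
    vlan 803
      name VM-Infra
    vlan 804
      name STORAGE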
XenServer Resource Pools
Citrix XenServer is an industry- and value-leading open source virtualization platform for managing
cloud, server, and desktop virtual infrastructures. Organizations of any size can install XenServer in less
than ten minutes to virtualize even the most demanding workloads and automate management processes,
increasing IT flexibility and agility and lowering costs. With a rich set of management and automation
capabilities, a simple and affordable pricing model, and optimizations for virtual desktop and cloud
computing, XenServer is designed to optimize private datacenters and clouds today and in the future.
Infrastructure Components
This section describes the infrastructure components used in the solution outlined in this study.
Cisco Unified Computing System (UCS)
Cisco UCS is a set of pre-integrated data center components that comprises blade servers, adapters,
fabric interconnects, and extenders that are integrated under a common embedded management system.
This approach results in far fewer system components and much better manageability, operational
efficiencies, and flexibility than comparable data center platforms.
Cisco Unified Computing System Components
Cisco UCS components are shown in Figure 5.
Figure 5 Cisco Unified Computing System Components
The Cisco UCS is designed from the ground up to be programmable and self-integrating. A server's
entire hardware stack, ranging from server firmware and settings to network profiles, is configured
through model-based management. With Cisco virtual interface cards, even the number and type of I/O
interfaces are programmed dynamically, making every server ready to power any workload at any time.
With model-based management, administrators manipulate a model of a desired system configuration,
associate a model's service profile with hardware resources, and the system then configures itself to match
the model. This automation speeds provisioning and workload migration with accurate and rapid scalability.
The result is increased IT staff productivity, improved compliance, and reduced risk of failures due to
inconsistent configurations.
Cisco Fabric Extender technology reduces the number of system components to purchase, configure,
manage, and maintain by condensing three network layers into one. It eliminates both blade server and
hypervisor-based switches by connecting fabric interconnect ports directly to individual blade servers
and virtual machines. Virtual networks are now managed exactly as physical networks are, but with
massive scalability. This represents a radical simplification over traditional systems, reducing capital
and operating costs while increasing business agility, simplifying and speeding deployment, and
improving performance.
Fabric Interconnect
Cisco UCS Fabric Interconnects create a unified network fabric throughout the Cisco UCS. They provide
uniform access to both networks and storage, eliminating the barriers to deploying a fully virtualized
environment based on a flexible, programmable pool of resources.
Cisco Fabric Interconnects comprise a family of line-rate, low-latency, lossless 10-GE, Cisco Data
Center Ethernet, and FCoE interconnect switches. Based on the same switching technology as the Cisco
Nexus 5000 Series, Cisco UCS 6000 Series Fabric Interconnects provide the additional features and
management capabilities that make them the central nervous system of Cisco UCS.
The Cisco UCS Manager software runs inside the Cisco UCS Fabric Interconnects. The Cisco UCS 6000
Series Fabric Interconnects expand the UCS networking portfolio and offer higher capacity, higher port
density, and lower power consumption. These interconnects provide the management and
communication backbone for the Cisco UCS B-Series Blades and Cisco UCS Blade Server Chassis.
All chassis and all blades that are attached to the Fabric Interconnects are part of a single, highly
available management domain. By supporting unified fabric, the Cisco UCS 6200 Series provides the
flexibility to support LAN and SAN connectivity for all blades within its domain right at configuration
time. Typically deployed in redundant pairs, the Cisco UCS Fabric Interconnect provides uniform access
to both networks and storage, facilitating a fully virtualized environment.
The Cisco UCS Fabric Interconnect family is currently comprised of the Cisco 6100 Series and Cisco
6200 Series of Fabric Interconnects.
Cisco UCS 6248UP 48-Port Fabric Interconnect
The Cisco UCS 6248UP 48-Port Fabric Interconnect is a 1 RU, 10-GE, Cisco Data Center Ethernet, and
FCoE interconnect providing more than 1 Tbps throughput with low latency. It has 32 fixed unified SFP+
ports supporting Fibre Channel, 10-GE, Cisco Data Center Ethernet, and FCoE.
One expansion module slot can provide up to sixteen additional Fibre Channel, 10-GE, Cisco Data
Center Ethernet, and FCoE SFP+ ports.
Cisco UCS 6248UP 48-Port Fabric Interconnects were used in this study.
Cisco UCS 2200 Series IO Module
The Cisco UCS 2100/2200 Series FEX multiplexes and forwards all traffic from blade servers in a
chassis to a parent Cisco UCS Fabric Interconnect over 10-Gbps unified fabric links. All traffic,
even traffic between blades on the same chassis or VMs on the same blade, is forwarded to the parent
interconnect, where network profiles are managed efficiently and effectively by the Fabric Interconnect.
At the core of the Cisco UCS Fabric Extender are ASIC processors developed by Cisco that multiplex
all traffic.
• Up to two fabric extenders can be placed in a blade chassis.
• The Cisco UCS 2104 has eight 10GBASE-KR connections to the blade chassis midplane, with one connection per fabric extender for each of the chassis' eight half slots. This gives each half-slot blade server access to each of two 10-Gbps unified fabric-based networks via SFP+ sockets for both throughput and redundancy. It has 4 ports connecting up to the fabric interconnect.
• The Cisco UCS 2208 has thirty-two 10GBASE-KR connections to the blade chassis midplane, with one connection per fabric extender for each of the chassis' eight half slots. This gives each half-slot blade server access to each of two 4x10-Gbps unified fabric-based networks via SFP+ sockets for both throughput and redundancy. It has 8 ports connecting up to the fabric interconnect.
Note: Cisco UCS 2208 fabric extenders were utilized in this study.
Cisco UCS Chassis
The Cisco UCS 5108 Series Blade Server Chassis is a 6 RU blade chassis that will accept up to eight
half-width Cisco UCS B-Series Blade Servers or up to four full-width Cisco UCS B-Series Blade
Servers, or a combination of the two. The UCS 5108 Series Blade Server Chassis can accept four
redundant power supplies with automatic load-sharing and failover and two Cisco UCS (either 2100 or
2200 series) Fabric Extenders. The chassis is managed by Cisco UCS Chassis Management Controllers,
which are mounted in the Cisco UCS Fabric Extenders and work in conjunction with the Cisco UCS
Manager to control the chassis and its components.
A single Cisco UCS managed domain can theoretically scale to up to 40 individual chassis and 320 blade
servers. At this time Cisco supports up to 20 individual chassis and 160 blade servers.
Basing the I/O infrastructure on a 10-Gbps unified network fabric allows the Cisco UCS to have a
streamlined chassis with a simple yet comprehensive set of I/O options. The result is a chassis that has
only five basic components:
• The physical chassis with passive midplane and active environmental monitoring circuitry
• Four power supply bays with power entry in the rear and hot-swappable power supply units accessible from the front panel
• Eight hot-swappable fan trays, each with two fans
• Two fabric extender slots accessible from the back panel
• Eight blade server slots accessible from the front panel
Cisco UCS B200 M3 Blade Server
Cisco UCS B200 M3 is a third generation half-slot, two-socket Blade Server. The Cisco UCS B200 M3
harnesses the power of the latest Intel® Xeon® processor E5-2600 v2 product family, with up to 768
GB of RAM (using 32GB DIMMs), two optional SAS/SATA/SSD disk drives, and up to dual 4x 10
Gigabit Ethernet throughput, utilizing our VIC 1240 LAN on motherboard (LOM) design. The Cisco
UCS B200 M3 further extends the capabilities of Cisco UCS by delivering new levels of manageability,
performance, energy efficiency, reliability, security, and I/O bandwidth for enterprise-class
virtualization and other mainstream data center workloads.
In addition, customers who initially purchased Cisco UCS B200 M3 blade servers with Intel E5-2600
series processors can field-upgrade their blades to the second-generation E5-2600 v2 processors,
providing increased processor capacity and investment protection.
Figure 6 Cisco UCS B200 M3 Server
Cisco UCS VIC1240 Converged Network Adapter
A Cisco® innovation, the Cisco UCS Virtual Interface Card (VIC) 1240 (Figure 7) is a 4-port 10 Gigabit
Ethernet, Fibre Channel over Ethernet (FCoE)-capable modular LAN on motherboard (mLOM)
designed exclusively for the M3 generation of Cisco UCS B-Series Blade Servers. When used in
combination with an optional Port Expander, the Cisco UCS VIC 1240 capabilities can be expanded to
eight ports of 10 Gigabit Ethernet.
The Cisco UCS VIC 1240 enables a policy-based, stateless, agile server infrastructure that can present
up to 256 PCIe standards-compliant interfaces to the host that can be dynamically configured as either
network interface cards (NICs) or host bus adapters (HBAs). In addition, the Cisco UCS VIC 1240
supports Cisco Data Center Virtual Machine Fabric Extender (VM-FEX) technology, which extends the
Cisco UCS fabric interconnect ports to virtual machines, simplifying server virtualization deployment.
Figure 7 Cisco UCS VIC 1240 Converged Network Adapter
The Cisco UCS VIC1240 virtual interface cards are deployed in the Cisco UCS B-Series B200 M3 blade
servers.
Figure 8 The Evolving Workplace Landscape
Some of the key drivers for desktop virtualization are increased data security and reduced TCO through
increased control and reduced management costs.
Cisco Data Center Infrastructure for Desktop Virtualization
Cisco focuses on three key elements to deliver the best desktop virtualization data center infrastructure:
simplification, security, and scalability. The software, combined with platform modularity, provides a
simplified, secure, and scalable desktop virtualization platform (Figure 9).
Figure 9 Citrix XenDesktop on Cisco Unified Computing System
Simplified
Cisco UCS provides a radical new approach to industry-standard computing and provides the heart of
the data center infrastructure for desktop virtualization and the Cisco Virtualization Experience Infrastructure (VXI).
Among the many features and benefits of Cisco UCS are the drastic reductions in the number of servers
needed and number of cables per server and the ability to very quickly deploy or re-provision servers
through Cisco UCS Service Profiles. With fewer servers and cables to manage and with streamlined
server and virtual desktop provisioning, operations are significantly simplified. Thousands of desktops
can be provisioned in minutes with Cisco Service Profiles and Cisco storage partners' storage-based
cloning. This speeds time to productivity for end users, improves business agility, and allows IT
resources to be allocated to other tasks.
IT tasks are further simplified through reduced management complexity, provided by the highly
integrated Cisco UCS Manager, along with fewer servers, interfaces, and cables to manage and maintain.
This is possible due to the industry-leading, highest virtual desktop density per blade of Cisco UCS
along with the reduced cabling and port count due to the unified fabric and unified ports of Cisco UCS
and desktop virtualization data center infrastructure.
Simplification also leads to improved and more rapid success of a desktop virtualization
implementation. Cisco and its partners, Citrix (XenDesktop and Provisioning Server) and NetApp, have
developed integrated, validated architectures, including available pre-defined, validated infrastructure
packages, known as FlexPod.
Secure
While virtual desktops are inherently more secure than their physical world predecessors, they introduce
new security considerations. Desktop virtualization significantly increases the need for virtual
machine-level awareness of policy and security, especially given the dynamic and fluid nature of virtual
machine mobility across an extended computing infrastructure. The ease with which new virtual
desktops can proliferate magnifies the importance of a virtualization-aware network and security
infrastructure. Cisco UCS and Nexus data center infrastructure for desktop virtualization provides
stronger data center, network, and desktop security with comprehensive security from the desktop to the
hypervisor. Security is enhanced with segmentation of virtual desktops, virtual machine-aware policies
and administration, and network security across the LAN and WAN infrastructure.
Scalable
Growth of a desktop virtualization solution is all but inevitable and it is critical to have a solution that
can scale predictably with that growth. The Cisco solution supports more virtual desktops per server and
additional servers scale with near linear performance. Cisco data center infrastructure provides a flexible
platform for growth and improves business agility. Cisco UCS Service Profiles allow for on-demand
desktop provisioning, making it easy to deploy dozens or thousands of additional desktops.
Each additional Cisco UCS server provides near linear performance and utilizes Cisco's dense memory
servers and unified fabric to avoid desktop virtualization bottlenecks. The high performance, low latency
network supports high volumes of virtual desktop traffic, including high resolution video and
communications.
Cisco UCS and Nexus data center infrastructure is an ideal platform for growth, with transparent scaling
of server, network, and storage resources to support desktop virtualization.
Savings and Success
As demonstrated above, the simplified, secure, scalable Cisco data center infrastructure solution for
desktop virtualization will save time and cost. There will be faster payback, better ROI, and lower TCO
with the industry's highest virtual desktop density per server, meaning there will be fewer servers
needed, reducing both capital expenditures (CapEx) and operating expenditures (OpEx). There will also
be much lower network infrastructure costs, with fewer cables per server and fewer ports required, via
the Cisco UCS architecture and unified fabric.
The simplified deployment of Cisco UCS for desktop virtualization speeds up time to productivity and
enhances business agility. IT staff and end users are more productive more quickly and the business can
react to new opportunities by simply deploying virtual desktops whenever and wherever they are needed.
The high performance Cisco systems and network deliver a near-native end-user experience, allowing
users to be productive anytime, anywhere.
Cisco Services
Cisco offers assistance for customers in the analysis, planning, implementation, and support phases of
the VDI lifecycle. These services are provided by the Cisco Advanced Services group. Some examples
of Cisco services include:
• Cisco VXI Unified Solution Support
• Cisco VXI Desktop Virtualization Strategy Service
• Cisco VXI Desktop Virtualization Planning and Design Service
The Solution: A Unified, Pre-Tested and Validated Infrastructure
To meet the challenges of designing and implementing a modular desktop infrastructure, Cisco, Citrix,
NetApp and Microsoft have collaborated to create the data center solution for virtual desktops outlined
in this document.
Key elements of the solution include:
• A shared infrastructure that can scale easily
• A shared infrastructure that can accommodate a variety of virtual desktop workloads
Cisco Networking Infrastructure
This section describes the Cisco networking infrastructure components used in the configuration.
Cisco Nexus 5548 Switch
The Cisco Nexus 5548UP Switch is a 1RU, 10 Gigabit Ethernet, FCoE access-layer switch built to provide
up to 960 Gbps throughput with very low latency. It has 32 fixed unified ports for 10 Gigabit Ethernet
and FCoE that accept modules and cables meeting the Small Form-Factor Pluggable Plus (SFP+) form factor.
One expansion module slot can be configured to support up to sixteen additional ports (unified ports,
1/2/4/8 native Fibre Channel, or Ethernet/FCoE). The switch has a single serial console port
and a single out-of-band 10/100/1000-Mbps Ethernet management port. Two N+1 redundant,
hot-pluggable power supplies and five N+1 redundant, hot-pluggable fan modules provide highly
reliable front-to-back cooling.
Figure 10 Cisco Nexus 5548UP Unified Port Switch
Cisco Nexus 5500 Series Feature Highlights
The switch family's rich feature set makes the series ideal for rack-level, access-layer applications. It
protects investments in data center racks with standards-based Ethernet and FCoE features that allow IT
departments to consolidate networks based on their own requirements and timing.
The combination of high port density, wire-speed performance, and extremely low latency makes the
switch an ideal product to meet the growing demand for 10 Gigabit Ethernet at the rack level. The switch
family has sufficient port density to support single or multiple racks fully populated with blade and
rack-mount servers.
Built for today's data centers, the switches are designed just like the servers they support. Ports and
power connections are at the rear, closer to server ports, helping keep cable lengths as short and efficient
as possible. Hot-swappable power and cooling modules can be accessed from the front panel, where
status lights offer an at-a-glance view of switch operation. Front-to-back cooling is consistent with
server designs, supporting efficient data center hot-aisle and cold-aisle designs. Serviceability is
enhanced with all customer-replaceable units accessible from the front panel. The use of SFP+ ports
offers increased flexibility to use a range of interconnect solutions, including copper for short runs and
fiber for long runs.
FCoE and IEEE data center bridging features support I/O consolidation, ease management of multiple
traffic flows, and optimize performance. Although implementing SAN consolidation requires only the
lossless fabric provided by the Ethernet pause mechanism, the Cisco Nexus 5500 Series switches
provide additional features that create an even more easily managed, high-performance, unified network
fabric.
Features and Benefits
This section details the specific features and benefits provided by the Cisco Nexus 5500 Series.
10 Gigabit Ethernet, FCoE, and Unified Fabric Features
The Cisco Nexus 5500 Series is first and foremost a family of outstanding access switches for 10 Gigabit
Ethernet connectivity. Most of the features on the switches are designed for high performance with 10
Gigabit Ethernet. The Cisco Nexus 5500 Series also supports FCoE on each 10 Gigabit Ethernet port
that can be used to implement a unified data center fabric, consolidating LAN, SAN, and server
clustering traffic.
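As an illustrative sketch of this consolidation (not the configuration validated in this document; the VLAN, VSAN, and interface numbers are hypothetical), FCoE is typically enabled on a Nexus 5500 Series switch by mapping an FCoE VLAN to a VSAN and binding a virtual Fibre Channel (vFC) interface to a 10 Gigabit Ethernet port:

    feature fcoe
    vlan 101
      fcoe vsan 101
    vsan database
      vsan 101
    interface vfc11
      bind interface Ethernet1/11
      no shutdown
    vsan database
      vsan 101 interface vfc11
    interface Ethernet1/11
      switchport mode trunk
      switchport trunk allowed vlan 1,101

With this mapping in place, the physical Ethernet port carries both regular LAN traffic and lossless FCoE traffic for the bound VSAN.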
Low Latency
The cut-through switching technology used in the Cisco Nexus 5500 Series ASICs enables the product
to offer a low latency of 3.2 microseconds, which remains constant regardless of the size of the packet
being switched. This latency was measured on fully configured interfaces, with access control lists
(ACLs), QoS, and all other data path features turned on. The low latency on the Cisco Nexus 5500 Series
enables application-to-application latency on the order of 10 microseconds (depending on the NIC).
These numbers, together with the congestion management features described in the next section, make
the Cisco Nexus 5500 Series a great choice for latency-sensitive environments.
Other features include:
• Nonblocking Line-Rate Performance
• Single-Stage Fabric
• Congestion Management
• Virtual Output Queues
• Lossless Ethernet (Priority Flow Control)
• Delayed Drop FC over Ethernet
• Hardware-Level I/O Consolidation
• End-Port Virtualization
NetApp FAS3200-Series
The FAS3200 series delivers leading performance and scale for SAN and NAS workloads in the
mid-range storage market. The new FAS3200 systems offer up to 80 percent more performance and 100
percent more capacity than previous systems, raising the bar for value in the midrange. For more
information, see http://www.netapp.com/us/products/storage-systems/fas3200/index.aspx.
Benefits
• Designed for agility, providing intelligent management, immortal operations, and infinite scaling
• Flash ready, with up to 4TB of flash to boost performance
• Flash optimized, with more choices and flexibility for application acceleration
• Cluster enabled to offer nondisruptive operations, eliminating planned and unplanned downtime
• Industry-leading storage efficiency that lowers storage costs on day one and over time
Target Customers and Environment
• Medium to large enterprises
• Regional data centers, replicated sites, and departmental systems
• Midsize businesses that need full-featured and efficient storage with advanced availability and performance
• The FAS3200 series is an ideal solution for high-capacity environments; server and desktop virtualization; Windows storage consolidation; data protection; and disaster recovery for midsized businesses and distributed enterprises.
The FAS3200 series continues the tradition of NetApp price/performance leadership in the mid-range
family while introducing new features and capabilities needed by enterprises making long-term storage
investments with today's budget. Key FAS/V3200 innovations include an I/O expansion module (IOXM)
that provides configuration flexibility for enabling HA configurations in either 3U or 6U footprints, with
the 6U configuration offering 50% more slot density than that of previous-generation FAS3100 systems.
In addition to better performance and slot density, FAS/V3200 also offers reliability, availability,
serviceability, and manageability (RASM) with the integrated service processor (SP), the next
generation of remote management in the NetApp storage family. Key FAS3200-series features include:
• Higher performance than that of the FAS/V3100 series
• Two PCIe v2.0 (Gen 2) slots in the controller
• I/O expansion module (IOXM) that provides 50% more expansion slots than the FAS3100
• Onboard SAS ports for DS2246, DS4243, DS4246, and DS4486 shelves or tape connectivity
• Integrated SP, the next generation of the RLM and BMC, which increases FAS/V3200 RASM
NetApp FAS3250 Clustered Data ONTAP
The following site requirements apply (Requirement | Reference | Comments):
• Physical site where the storage system needs to be installed must be ready | Site Requirements Guide | Refer to the "Site Preparation" section
• Storage system connectivity requirements | Site Requirements Guide | Refer to the "System Connectivity Requirements" section
• Storage system general power requirements | Site Requirements Guide | Refer to the "Circuit Breaker, Power Outlet Balancing, System Cabinet Power Cord Plugs, and Console Pinout Requirements" section
• Storage system model-specific requirements | Site Requirements Guide | Refer to the "FAS32xx/V32xx Series Systems" section
System Configuration Guides
System configuration guides provide supported hardware and software components for the specific Data
ONTAP version. These online guides provide configuration information for all NetApp storage
appliances currently supported by the Data ONTAP software. They also provide a table of component
compatibilities.
1. Make sure that the hardware and software components are supported with the version of Data ONTAP that you plan to install by checking the System Configuration Guides at the NetApp Support site.
2. Click the appropriate NetApp storage appliance and then click the component you want to view. Alternatively, to compare components by storage appliance, click a component and then click the NetApp storage appliance you want to view.
Controllers
Follow the physical installation procedures for the controllers in the FAS3200 series documentation at
the NetApp Support site.
DS4243 Disk Shelves
Follow the procedures in the Disk Shelf Installation and Setup section of the DS4243 Disk Shelf
Overview to install a disk shelf for a new storage system.
Follow the procedures for proper cabling with the controller model as described in the SAS Disk Shelves Universal SAS and ACP Cabling Guide.
The following information applies to DS4243 disk shelves:
• SAS disk drives use software-based disk ownership. Ownership of a disk drive is assigned to a specific storage system by writing software ownership information on the disk drive rather than by using the topography of the storage system's physical connections (see the command sketch after this list).
• Connectivity terms used: shelf-to-shelf (daisy-chain), controller-to-shelf (top connections), and shelf-to-controller (bottom connections).
• Unique disk shelf IDs must be set per storage system (a number from 0 through 98).
• Disk shelf power must be turned on to change the digital display shelf ID. The digital display is on the front of the disk shelf.
• Disk shelves must be power-cycled after the shelf ID is changed for it to take effect.
• Changing the shelf ID on a disk shelf that is part of an existing storage system running Data ONTAP requires that you wait at least 30 seconds before turning the power back on so that Data ONTAP can properly delete the old disk shelf address and update the copy of the new disk shelf address.
• Changing the shelf ID on a disk shelf that is part of a new storage system installation (the disk shelf is not yet running Data ONTAP) requires no wait; you can immediately power-cycle the disk shelf.
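Because ownership is software-based, it can be inspected and assigned from the clustered Data ONTAP command line. The following is a minimal sketch; the disk ID and node name are placeholders, not values from this validation:

    storage disk show -fields owner
    storage disk assign -disk 1.0.0 -owner <node-name>

The first command lists current ownership for all disks; the second assigns an unowned disk to the named node.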
Enhancements in Citrix XenDesktop 7
Built on the Avalon™ architecture, Citrix XenDesktop™ 7 includes significant enhancements to help
customers deliver Windows apps and desktops as mobile services while addressing management
complexity and associated costs. Enhancements in this release include:
• A new unified product architecture, based on the latest generation of the FlexCast architecture, with administrative interfaces designed to deliver both hosted-shared applications (RDS) and complete virtual desktops (VDI). Unlike previous software releases that required separate Citrix XenApp farms and XenDesktop infrastructures, this new release allows administrators to deploy a single infrastructure and employ a consistent set of management tools for mixed desktop and app workloads.
• New and improved management interfaces. XenDesktop 7 includes two new purpose-built management consoles: one for automating workload provisioning and app publishing, and a second for real-time monitoring of the infrastructure.
• Enhanced HDX technologies. Since mobile technologies and devices are increasingly pervasive, Citrix has engineered new and improved HDX technologies to improve the user experience for hosted Windows apps and desktops delivered on laptops, tablets, and smartphones.
• Unified App Store. The release includes a self-service Windows app store, implemented through Citrix StoreFront services, that provides a single, simple, and consistent aggregation point for all user services. IT can publish apps, desktops, and data services to the StoreFront, from which users can search for and subscribe to services.
FlexCast Technology
In Citrix XenDesktop 7, FlexCast Management Architecture (FMA) is responsible for delivering and
managing hosted-shared RDS apps and complete VDI desktops. By using Citrix Receiver with
XenDesktop 7, users have a device-native experience on endpoints including Windows, Mac, Linux,
iOS, Android, ChromeOS, HTML5, and Blackberry.
Figure 11 provides an overview of the unified FlexCast architecture and its underlying components.
Figure 11 Overview of the Unified FlexCast Architecture
The FlexCast components are as follows:
• Citrix Receiver. Running on user endpoints, Receiver provides users with self-service access to resources published on XenDesktop servers. Receiver combines ease of deployment and use, supplying fast, secure access to hosted applications, desktops, and data. Receiver also provides on-demand access to Windows, Web, and Software-as-a-Service (SaaS) applications.
• Citrix StoreFront. StoreFront authenticates users and manages catalogs of desktops and applications. Users can search StoreFront catalogs and subscribe to published services through Citrix Receiver.
• Citrix Studio. Using the new and improved Studio interface, administrators can easily configure and manage XenDesktop deployments. Studio provides wizards to guide the process of setting up an environment, creating desktops, and assigning desktops to users, automating provisioning and application publishing. It also allows administration tasks to be customized and delegated to match site operational requirements.
• Delivery Controller. The Delivery Controller is responsible for distributing applications and desktops, managing user access, and optimizing connections to applications. Each site has one or more delivery controllers.
• Server OS Machines. These are virtual or physical machines (based on a Windows Server operating system) that deliver RDS applications or hosted shared desktops to users.
• Desktop OS Machines. These are virtual or physical machines (based on a Windows Desktop operating system) that deliver personalized VDI desktops or applications that run on a desktop operating system.
• Remote PC. XenDesktop with Remote PC allows IT to centrally deploy secure remote access to all Windows PCs on the corporate network. It is a comprehensive solution that delivers fast, secure remote access to all the corporate apps and data on an office PC from any device.
• Virtual Delivery Agent. A Virtual Delivery Agent is installed on each virtual or physical machine (within the server or desktop OS) and manages each user connection for application and desktop services. The agent allows OS machines to register with the Delivery Controllers and governs the HDX connection between these machines and Citrix Receiver.
• Citrix Director. Citrix Director is a powerful administrative tool that helps administrators quickly troubleshoot and resolve issues. It supports real-time assessment, site health and performance metrics, and end-user experience monitoring. Citrix EdgeSight® reports are available from within the Director console and provide historical trending and correlation for capacity planning and service-level assurance.
• Citrix Provisioning Services 7.1. This new release of Citrix Provisioning Services (PVS) technology is responsible for streaming a shared virtual disk (vDisk) image to the configured Server OS or Desktop OS machines. This streaming capability allows VMs to be provisioned and re-provisioned in real time from a single image, eliminating the need to patch individual systems and conserving storage. All patching is done in one place and then streamed at boot-up. PVS supports image management for both RDS and VDI-based machines, including support for image snapshots and rollbacks.
High-Definition User Experience (HDX) Technology
High-Definition User Experience (HDX) technology in this release is optimized to improve the user experience for hosted Windows apps on mobile devices. Specific enhancements include:
• HDX Mobile™ technology, designed to cope with the variability and packet loss inherent in today's mobile networks. HDX technology supports deep compression and redirection, taking advantage of advanced codec acceleration and an industry-leading H.264-based compression algorithm. The technology enables dramatic improvements in frame rates while requiring significantly less bandwidth. HDX technology offers users a rich multimedia experience and optimized performance for voice and video collaboration.
• HDX Touch technology, which enables mobile navigation capabilities similar to native apps, without rewrites or porting of existing Windows applications. Optimizations support native menu controls, multi-touch gestures, and intelligent sensing of text-entry fields, providing a native application look and feel.
• HDX 3D Pro, which uses advanced server-side GPU resources for compression and rendering of the latest OpenGL and DirectX professional graphics apps. GPU support includes both dedicated user and shared user workloads.
Citrix XenDesktop 7 Desktop and Application Services
IT departments strive to deliver application services to a broad range of enterprise users that have
varying performance, personalization, and mobility requirements. Citrix XenDesktop 7 allows IT to
configure and deliver any type of virtual desktop or application, hosted or local, and to optimize delivery
to meet individual user requirements, while simplifying operations, securing data, and reducing costs.
Figure 12 Desktop and Application Services
With previous product releases, administrators had to deploy separate XenApp farms and XenDesktop
sites to support both hosted shared RDS and VDI desktops. As shown above, the new XenDesktop 7
release allows administrators to create a single infrastructure that supports multiple modes of service
delivery, including:
• Application Virtualization and Hosting (RDS). Applications are installed on or streamed to Windows servers in the data center and remotely displayed to users' desktops and devices.
• Hosted Shared Desktops (RDS). Multiple user sessions share a single, locked-down Windows Server environment running in the datacenter and accessing a core set of apps. This model of service delivery is ideal for task workers using low-intensity applications, and enables more desktops per host compared to VDI.
• Pooled VDI Desktops. This approach leverages a single desktop OS image to create multiple thinly provisioned or streamed desktops. Optionally, desktops can be configured with a Personal vDisk to maintain user application, profile, and data differences that are not part of the base image. This approach replaces the need for dedicated desktops, and is generally deployed to address the desktop needs of knowledge workers that run more intensive application workloads.
• VM Hosted Apps (16-bit, 32-bit, or 64-bit Windows apps). Applications are hosted on virtual desktops running Windows 7, XP, or Vista and then remotely displayed to users' physical or virtual desktops and devices.
This CVD focuses on delivering a mixed workload consisting of hosted shared desktops (HSD or RDS)
and hosted virtual desktops (HVD or VDI).
Citrix Provisioning Services
One significant advantage to service delivery through RDS and VDI is how these technologies simplify
desktop administration and management. Citrix Provisioning Services (PVS) takes the approach of
streaming a single shared virtual disk (vDisk) image rather than provisioning and distributing multiple
OS image copies across multiple virtual machines. One advantage of this approach is that it constrains
the number of disk images that must be managed, even as the number of desktops grows, ensuring image
consistency. At the same time, using a single shared image (rather than hundreds or thousands of desktop
images) significantly reduces the required storage footprint and dramatically simplifies image
management.
Since there is a single master image, patch management is simple and reliable. All patching is done on
the master image, which is then streamed as needed. When an updated image is ready for production,
the administrator simply reboots to deploy the new image. Rolling back to a previous image is done in
the same manner. Local hard disk drives in user systems can be used for runtime data caching or, in some
scenarios, removed entirely, lowering power usage, system failure rates, and security risks.
After installing and configuring PVS components, a vDisk is created from a device's hard drive by taking
a snapshot of the OS and application image, and then storing that image as a vDisk file on the network.
vDisks can exist on a Provisioning Server, file share, or in larger deployments (as in this CVD), on a
storage system with which the Provisioning Server can communicate (through iSCSI, SAN, NAS, and
CIFS). vDisks can be assigned to a single target device in Private Image Mode, or to multiple target
devices in Standard Image Mode.
When a user device boots, the appropriate vDisk is located based on the boot configuration and mounted
on the Provisioning Server. The software on that vDisk is then streamed to the target device and appears
like a regular hard drive to the system. Instead of pulling all the vDisk contents down to the target device
(as is done with some imaging deployment solutions), the data is brought across the network in real time,
as needed. This greatly improves the overall user experience since it minimizes desktop startup time.
This release of PVS extends built-in administrator roles to support delegated administration based on
groups that already exist within the network (Windows or Active Directory Groups). All group members
share the same administrative privileges within a farm. An administrator may have multiple roles if they
belong to more than one group.
Citrix XenServer 6.2 SP1
Citrix® XenServer® is an industry-leading, open source platform for cost-effective cloud, server, and
desktop virtualization infrastructures. Organizations of any size can install XenServer in less than ten
minutes to virtualize even the most demanding workloads and automate management processes,
increasing IT flexibility and agility and lowering costs. With a rich set of management and automation
capabilities, a simple and affordable pricing model, and optimizations for virtual desktop and cloud
computing, XenServer is designed to optimize private datacenters and clouds today and in the future.
Key features include:
• A datacenter automation suite that lets businesses automate key IT processes to improve service delivery and business continuity for virtual environments, resulting in both time and money savings while providing more responsive IT services. Key capabilities include site recovery, high availability, host power management, and memory optimization.
• Optimizations for high-density cloud and desktop environments to ensure the highest performance and data security, including integration with other industry-leading products such as Citrix CloudPlatform and Citrix XenDesktop.
• A high-performance virtualization platform that includes the Xen Project™ hypervisor, XenMotion® live migration, Storage XenMotion®, the XenCenter® management console, and XenServer Conversion Manager for VMware-to-XenServer conversions.
• A suite of advanced integration and management tools that includes provisioning services, role-based administration, performance reporting and alerting, automated snapshots and recovery, and integration with third-party storage.
New features in XenServer 6.2 SP1 include:
• Support for hardware-accelerated vGPUs based on NVIDIA GRID technology. Customers who have NVIDIA GRID K1 or GRID K2 cards installed in their systems can use this technology to share GPUs between multiple virtual machines. When combined with XenDesktop HDX 3D Pro, this enables rich 3D applications, such as CAD, to be used by up to 64 concurrent VMs per server.
• The latest versions of Windows 8.1 and Windows Server 2012 R2 can be installed using the Windows 8 and Windows Server 2012 templates, enabling more use cases and furthering your organization's breadth of virtualized workloads.
• The Site Recovery wizard allows multiple Fibre Channel LUNs to be connected in a single step, dramatically reducing the time to recover complex environments in the event of a disaster.
• Simple, per-socket licensing, available as a perpetual or annual license, that is competitively priced for all organizations (enterprises, SMBs, and service providers) and all types of deployments (server, desktop, and cloud).
• Platform improvements that deliver increased performance, scalability, and workload density. XenServer 6.2 SP1 can now run 500 virtual machines per host and support 4,000 vCPUs per host, and it improves boot storm performance by 40%.
• Enhanced XenDesktop integration, including XenServer 6.2 SP1 as the virtualization platform for Project Avalon, faster desktop logins (clone on boot without IntelliCache), Desktop Director alerts for low resources (memory, CPU, disk, network), and preemptive actions to prevent hosts from becoming unusable.
Architecture and Design of XenDesktop 7.1 on Cisco Unified Computing System and NetApp FAS Storage
Design Fundamentals
There are many reasons to consider a virtual desktop solution such as an ever growing and diverse base
of user devices, complexity in management of traditional desktops, security, and even Bring Your Own
Computer (BYOC) to work programs. The first step in designing a virtual desktop solution is to
understand the user community and the type of tasks that are required to successfully execute their role.
The following user classifications are provided:
• Knowledge Workers today do not just work in their offices all day; they attend meetings, visit branch offices, work from home, and even work from coffee shops. These anywhere workers expect access to all of the same applications and data wherever they are.
• External Contractors are increasingly part of everyday business. They need access to certain portions of your applications and data, yet administrators still have little control over the devices they use and the locations they work from. Consequently, IT is stuck making trade-offs between the cost of providing these workers a device and the security risk of allowing them access from their own devices.
• Task Workers perform a set of well-defined tasks. These workers access a small set of applications and have limited requirements from their PCs. However, since these workers are interacting with your customers, partners, and employees, they have access to your most critical data.
• Mobile Workers need access to their virtual desktop from everywhere, regardless of their ability to connect to a network. In addition, these workers expect the ability to personalize their PCs by installing their own applications and storing their own data, such as photos and music, on these devices.
• Shared Workstation users are often found in state-of-the-art university and business computer labs, conference rooms, or training centers. Shared workstation environments have the constant requirement to re-provision desktops with the latest operating systems and applications as the needs of the organization change.
After the user classifications have been identified and the business requirements for each user
classification have been defined, it becomes essential to evaluate the types of virtual desktops that are
needed based on user requirements. There are essentially six potential desktop environments for each
user:
• Traditional PC: A traditional PC is what typically constitutes a desktop environment: a physical device with a locally installed operating system.
• Hosted Shared Desktop: A hosted, server-based desktop is a desktop where the user interacts through a delivery protocol. With hosted, server-based desktops, a single installed instance of a server operating system, such as Microsoft Windows Server 2012, is shared by multiple users simultaneously. Each user receives a desktop "session" and works in an isolated memory space. Changes made by one user could impact the other users.
• Hosted Virtual Desktop: A hosted virtual desktop is a virtual desktop running either on a virtualization layer (ESX, XenServer, Hyper-V, or any supported hypervisor) or on bare-metal hardware. The user does not sit in front of the desktop itself, but instead interacts with it through a delivery protocol.
• Published Applications: Published applications run entirely on the XenApp RDS server, and the user interacts through a delivery protocol. With published applications, a single installed instance of an application, such as Microsoft Office, is shared by multiple users simultaneously. Each user receives an application "session" and works in an isolated memory space.
• Streamed Applications: Streamed desktops and applications run entirely on the user's local client device and are sent from a server on demand. The user interacts with the application or desktop directly, but the resources may only be available while the user is connected to the network.
• Local Virtual Desktop: A local virtual desktop is a desktop running entirely on the user's local device that continues to operate when disconnected from the network. In this case, the user's local device runs a type 1 hypervisor and is synced with the data center when the device is connected to the network.
For the purposes of the validation represented in this document, both Citrix XenDesktop 7.1 hosted
virtual desktops and hosted shared server desktops were validated. Each of the following sections provides
some fundamental design decisions for this environment.
Understanding Applications and Data
When the desktop user groups and sub-groups have been identified, the next task is to catalog group
application and data requirements. This can be one of the most time-consuming processes in the VDI
planning exercise, but is essential for the VDI project's success. If the applications and data are not
identified and co-located, performance will be negatively affected.
The process of analyzing the variety of application and data pairs for an organization will likely be
complicated by the inclusion of cloud applications, such as SalesForce.com. This application and data analysis
is beyond the scope of this Cisco Validated Design, but should not be omitted from the planning process.
There are a variety of third party tools available to assist organizations with this crucial exercise.
Project Planning and Solution Sizing Sample Questions
Now that user groups, their applications and their data requirements are understood, some key project
and solution sizing questions may be considered.
General project questions should be addressed at the outset, including:
• Has a VDI pilot plan been created based on the business analysis of the desktop groups, applications, and data?
• Is there infrastructure and budget in place to run the pilot program?
• Are the required skill sets to execute the VDI project available? Can we hire or contract for them?
• Do we have end-user experience performance metrics identified for each desktop sub-group?
• How will we measure success or failure?
• What is the future implication of success or failure?
Provided below is a short, non-exhaustive list of sizing questions that should be addressed for each user
sub-group:
• What is the desktop OS planned? Windows 7 or Windows 8?
• 32-bit or 64-bit desktop OS?
• How many virtual desktops will be deployed in the pilot? In production? All Windows 7/8?
• How much memory per target desktop group desktop?
• Are there any rich media, Flash, or graphics-intensive workloads?
• What is the end-point graphics processing capability?
• Will XenDesktop RDS be used for Hosted Shared Server Desktops or exclusively XenDesktop HVD?
• Are there XenDesktop hosted applications planned? Are they packaged or installed?
• Will Provisioning Server or Machine Creation Services be used for virtual desktop deployment?
• What is the hypervisor for the solution?
• What is the storage configuration in the existing environment?
• Are there sufficient IOPS available for the write-intensive VDI workload?
• Will there be storage dedicated and tuned for VDI service?
• Is there a voice component to the desktop?
• Is anti-virus a part of the image?
• Is user profile management (e.g., non-roaming profile based) part of the solution?
• What is the fault tolerance, failover, and disaster recovery plan?
• Are there additional desktop sub-group specific questions?
Desktop Virtualization Design Fundamentals
An ever growing and diverse base of user devices, complexity in management of traditional desktops,
security, and even Bring Your Own (BYO) device to work programs are prime reasons for moving to a
virtual desktop solution. When evaluating a Desktop Virtualization deployment, consider the following:
Citrix Design Fundamentals
With Citrix XenDesktop 7, the method you choose to provide applications or desktops to users depends
on the types of applications and desktops you are hosting and available system resources, as well as the
types of users and user experience you want to provide.
Server OS machines
You want: Inexpensive server-based delivery to minimize the cost of delivering applications to a large number of users, while providing a secure, high-definition user experience.
Your users: Perform well-defined tasks and do not require personalization or offline access to applications. Users may include task workers such as call center operators and retail workers, or users that share workstations.
Application types: Any application.
Desktop OS machines
You want: A client-based application delivery solution that is secure, provides centralized management, and supports a large number of users per host server (or hypervisor), while providing users with applications that display seamlessly in high definition.
Your users: Are internal, external contractors, third-party collaborators, and other provisional team members. Users do not require off-line access to hosted applications.
Application types: Applications that might not work well with other applications or might interact with the operating system, such as the .NET Framework. These types of applications are ideal for hosting on virtual machines. Also, applications running on older operating systems, such as Windows XP or Windows Vista, and older architectures, such as 32-bit or 16-bit. By isolating each application on its own virtual machine, if one machine fails, it does not impact other users.
Remote PC Access
You want: Employees to have secure remote access to a physical computer without using a VPN. For example, the user may be accessing their physical desktop PC from home or through a public Wi-Fi hotspot. Depending upon the location, you may want to restrict the ability to print or to copy and paste outside of the desktop. This method enables BYO device support without migrating desktop images into the datacenter.
Your users: Employees or contractors that have the option to work from home, but need access to specific software or data on their corporate desktops to perform their jobs remotely.
Host: The same as Desktop OS machines.
Application types: Applications that are delivered from an office computer and display seamlessly in high definition on the remote user's device.
For the Cisco Validated Design described in this document, Hosted Shared (using Server OS machines)
and Hosted Virtual Desktops (using Desktop OS machines) were configured and tested. The following
sections discuss fundamental design decisions relative to this environment.
Citrix Hosted Shared Desktop Design Fundamentals
Citrix XenDesktop 7 integrates Hosted Shared and VDI desktop virtualization technologies into a
unified architecture that enables a scalable, simple, efficient, and manageable solution for delivering
Windows applications and desktops as a service.
Users can select applications from an easy-to-use "store" that is accessible from tablets, smartphones,
PCs, Macs, and thin clients. XenDesktop delivers a native touch-optimized experience with HDX
high-definition performance, even over mobile networks.
Machine Catalogs
Collections of identical Virtual Machines (VMs) or physical computers are managed as a single entity
called a Machine Catalog. In this CVD, VM provisioning relies on Citrix Provisioning Services to make
sure that the machines in the catalog are consistent. In this CVD, machines in the Machine Catalog are
configured to run either a Windows Server OS (for RDS hosted shared desktops) or a Windows Desktop
OS (for hosted pooled VDI desktops).
Delivery Groups
To deliver desktops and applications to users, you create a Machine Catalog and then allocate machines
from the catalog to users by creating Delivery Groups. Delivery Groups provide desktops, applications,
or a combination of desktops and applications to users. Creating a Delivery Group is a flexible way of
allocating machines and applications to users. In a Delivery Group, you can:
• Use machines from multiple catalogs
• Allocate a user to multiple machines
• Allocate multiple users to one machine
As part of the creation process, you specify the following Delivery Group properties:
• Users, groups, and applications allocated to Delivery Groups
• Desktop settings to match users' needs
• Desktop power management options
Figure 13 illustrates how users access desktops and applications through machine catalogs and delivery
groups. (Note that only Server OS and Desktop OS Machines are configured in this CVD configuration
to support hosted shared and pooled virtual desktops.)
Figure 13 Accessing Desktops and Applications through Machine Catalogs and Delivery Groups
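If scripted administration is preferred, the Machine Catalog and Delivery Group constructs described above are also exposed through the XenDesktop PowerShell SDK installed with Studio. The following is a minimal, illustrative sketch; the group and machine names are hypothetical examples, not objects from this validated configuration:

    Add-PSSnapin Citrix.*
    # List existing Machine Catalogs and their session support
    Get-BrokerCatalog | Select-Object Name, AllocationType, SessionSupport
    # Create a Delivery Group for hosted shared desktops (names are examples)
    New-BrokerDesktopGroup -Name "HSD-Group" -DesktopKind Shared -DeliveryType DesktopsOnly -PublishedName "Shared Desktop"
    # Add a machine that already exists in a catalog to the new Delivery Group
    Add-BrokerMachine -MachineName "DOMAIN\RDS-01" -DesktopGroup "HSD-Group"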
Hypervisor Selection
Citrix XenDesktop is hypervisor-agnostic, so any of the following three hypervisors can be used to host RDS- and VDI-based desktops:
• XenServer: Citrix® XenServer® is a complete, managed server virtualization platform built on the powerful Xen® hypervisor. Xen technology is widely acknowledged as the fastest and most secure virtualization software in the industry. XenServer is designed for efficient management of Windows and Linux virtual servers and delivers cost-effective server consolidation and business continuity. More information on XenServer can be obtained at the web site: http://www.citrix.com/products/xenserver/overview.html.
• Hyper-V: Microsoft Windows Server with Hyper-V is available in Standard, Server Core, and free Hyper-V Server versions. More information on Hyper-V can be obtained at the Microsoft web site: http://www.microsoft.com/en-us/server-cloud/windows-server/default.aspx.
• VMware vSphere: VMware vSphere comprises the management infrastructure or virtual center server software and the hypervisor software that virtualizes the hardware resources on the servers. It offers features like Distributed Resource Scheduler, vMotion, high availability, Storage vMotion, VMFS, and a multipathing storage layer. More information on vSphere can be obtained at the VMware web site: http://www.vmware.com/products/datacenter-virtualization/vsphere/overview.html.
Note: For this CVD, the hypervisor is Citrix XenServer 6.2 SP1.
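Before building catalogs on the hosts, it can be useful to confirm the XenServer version from the xe CLI on any host in the pool; for example:

    xe host-list params=name-label,software-version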
Citrix Provisioning Services
Citrix XenDesktop 7.1 can be deployed with or without Citrix Provisioning Services (PVS). The
advantage of using Citrix PVS is that it allows computers to be provisioned and re-provisioned in
real-time from a single shared-disk image. In this way Citrix PVS greatly reduces the amount of storage
required in comparison to other methods of provisioning virtual desktops.
Citrix PVS can create desktops as Pooled or Private:
• Private Desktop: A private desktop is a single desktop assigned to one distinct user.
• Pooled Desktop: A pooled virtual desktop uses Citrix PVS to stream a standard desktop image to multiple desktop instances upon boot.
When considering a PVS deployment, there are some design decisions that need to be made regarding the write cache for the virtual desktop devices that leverage provisioning services. The write cache is a cache of all data that the target device has written. If data is written to the PVS vDisk in a caching mode, the data is not written back to the base vDisk. Instead, it is written to a write cache file in one of the following locations:
• Cache on device hard drive. The write cache exists as a file in NTFS format, located on the target device's hard drive. This write cache option frees up the Provisioning Server, since it does not have to process write requests and does not have the finite limitation of RAM.
• Cache on device hard drive persisted (experimental). This is the same as "Cache on device hard drive," except that the cache persists. At this time, this method is an experimental feature only, and is only supported for NT 6.1 or later (Windows 7 and Windows Server 2008 R2 and later). This method also requires a different bootstrap.
• Cache in device RAM. The write cache can exist as a temporary file in the target device's RAM. This provides the fastest method of disk access, since memory access is always faster than disk access.
• Cache in device RAM with overflow on hard disk. This method uses the VHDX differencing format and is only available for Windows 7 and Server 2008 R2 and later. When the RAM allocation is zero, the target device write cache is written only to the local disk. When the RAM allocation is not zero, the target device write cache is written to RAM first. When RAM is full, the least recently used block of data is written to the local differencing disk to accommodate newer data in RAM. The amount of RAM specified is the non-paged kernel memory that the target device will consume.
• Cache on a server. The write cache can exist as a temporary file on a Provisioning Server. In this configuration, all writes are handled by the Provisioning Server, which can increase disk I/O and network traffic. For additional security, the Provisioning Server can be configured to encrypt write cache files. Since the write-cache file persists on the hard drive between reboots, encrypted data provides data protection in the event a hard drive is stolen.
• Cache on server persisted. This cache option allows for the saving of changes between reboots. Using this option, after rebooting, a target device is able to retrieve changes made from previous sessions that differ from the read-only vDisk image. If a vDisk is set to this method of caching, each target device that accesses the vDisk automatically has a device-specific, writable disk file created. Any changes made to the vDisk image are written to that file, which is not automatically deleted upon shutdown.
The alternative to Citrix Provisioning Services for pooled desktop deployments is Citrix Machine
Creation Services (MCS), which is integrated directly with the XenDesktop Studio console.
For this study, we used PVS 7.1 for managing Pooled Desktops with cache on device storage for each
virtual machine so that the design would scale to many thousands of desktops. Provisioning Server 7.1
was used for Active Directory machine account creation and management as well as for streaming the
shared disk to the hypervisor hosts.
Example Citrix XenDesktop 7.1 Deployments
Two examples of typical XenDesktop deployments are:
• A distributed components configuration
• A multiple site configuration
Distributed Components Configuration
You can distribute the components of your deployment among a greater number of servers, or provide
greater scalability and failover by increasing the number of controllers in your site. You can install
management consoles on separate computers to manage the deployment remotely. A distributed
deployment is necessary for an infrastructure based on remote access through NetScaler Gateway
(formerly called Access Gateway).
Figure 14 illustrates an example of a distributed components configuration. A simplified version of this
configuration is often deployed for an initial proof-of-concept (POC) deployment. The CVD described
in this document deploys Citrix XenDesktop in a configuration that resembles the distributed
components configuration shown.
Figure 14 Distributed Components
Multiple Site Configuration
If you have multiple regional sites, you can use Citrix NetScaler to direct user connections to the most
appropriate site and StoreFront to deliver desktops and applications to users.
In Figure 15, which depicts multiple sites, each site is split into two data centers, with the database
mirrored or clustered between the data centers to provide a high-availability configuration. Having two
sites globally, rather than just one, minimizes the amount of unnecessary WAN traffic. A separate Studio
console is required to manage each site; sites cannot be managed as a single entity. You can use Director
to support users across sites.
Citrix NetScaler accelerates application performance, load balances servers, increases security, and
optimizes the user experience. In this example, two NetScalers are used to provide a high availability
configuration. The NetScalers are configured for Global Server Load Balancing and positioned in the
DMZ to provide a multi-site, fault-tolerant solution. Two Cisco blade servers host infrastructure services
(AD, DNS, DHCP, Profile, SQL, Citrix XenDesktop management, and web servers).
Figure 15 Multiple Site Configuration
NetApp Storage Architecture Design
A virtual desktop solution includes delivering the OS, managing user and corporate applications, and
managing user profiles and data.
NetApp highly recommends implementing virtual layering technologies to separate the various
components of a desktop (such as the base OS image, user profiles and settings, corporate apps,
user-installed apps, and user data) into manageable entities called layers. Layers help to achieve the
lowest storage cost per desktop, since the storage no longer has to be sized for peak IOPS, and they allow
intelligent data management policies (for example, storage efficiency and Snapshot™-based backup and
recovery) to be applied to the different layers of the desktop.
Some of the key benefits of virtual desktop layering are:
• Ease of VDI image management. Individual desktops no longer have to be patched or updated individually. This results in cost savings, as the storage array no longer has to be sized for write I/O storms.
• Efficient data management. Separating the different desktop components into layers allows for the application of intelligent data management policies (such as deduplication, NetApp Snapshot backups, and so on) on different layers as required. For example, you can enable deduplication on storage volumes that host Citrix Personal vDisks and user data.
• Ease of application rollout and updates. Layering eases the management of rolling out new applications and updates to existing applications.
• Improved end-user experience. Layering provides users the freedom to install applications and allows persistence of these applications upon updates to the desktop OS or applications.
High-Level Architecture Design
This section outlines the recommended storage architecture for deploying a mix of various XenDesktop
FlexCast delivery models such as hosted VDI, hosted-shared desktops, along with intelligent VDI
layering (such as profile management and user data management) on the same NetApp clustered Data
ONTAP storage array.
For hosted-shared desktops and hosted VDI, the following are the storage best practices for the OS vDisk, write cache disk, profile management, user data management, and application virtualization:
• PVS vDisk. CIFS/SMB 3 is used to host the PVS vDisk. CIFS/SMB 3 allows the same vDisk to be shared among multiple PVS servers while retaining resiliency during a storage node failover. This results in significant operational savings and architectural simplicity.
• PVS write cache file. The PVS write cache file is hosted on NFS storage repositories for simplicity and scalability (a configuration sketch follows this list). Deduplication should not be enabled on this volume, because the rate of change is too great. The PVS write cache file should be set for thin provisioning at the storage layer.
• Profile management. To make sure that user profiles and settings are preserved, we leverage the profile management software Citrix UPM to redirect the user profiles to CIFS home directories.
• User data management. NetApp recommends hosting the user data on CIFS home directories to preserve data upon VM reboot or redeploy.
• Monitoring and management. NetApp recommends using OnCommand Balance and Citrix Desktop Director to provide end-to-end monitoring and management of the solution.
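To make these practices concrete, the following minimal sketch shows how the vDisk CIFS share might be published from the NetApp cluster and how an NFS storage repository for the PVS write cache might be attached on a XenServer host. The SVM, share, path, and IP values are placeholders patterned on this design, not the exact validated commands:

    On the NetApp cluster:
    vserver cifs share create -vserver CIFS -share-name vDisk -path /vDisk

    On a XenServer host:
    xe sr-create name-label="HSD-WriteCache" type=nfs shared=true content-type=user device-config:server=<data-lif-ip> device-config:serverpath=/Hosted_Shared_WC_00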
Storage Deployment
Two FAS3250 nodes with four DS4243 disk shelves were used in this solution to support 1464
hosted-shared desktops (HSD) sessions and 550 users of hosted VDI (HVD). The version of clustered
Data ONTAP is 8.2P5.
To support the differing security, backup, performance, and data-sharing needs of users, physical data
storage resources on the storage system are grouped into one or more aggregates. You can design and
configure your aggregates to provide the appropriate level of performance and redundancy for your
storage requirements. For information about best practices for working with aggregates, see TR-3437:
Storage Subsystem Resiliency Guide.
You can create an aggregate to provide storage to one or more volumes. Aggregates are physical storage
objects, pooling the underlying disks to provide storage; aggregates are associated with a specific node
in the cluster.
Table 5 lists the aggregate configuration information.
Table 5 Aggregate Configuration Information
Aggregate Name | Owner Node Name | Disk Count (By Type) | Block Type | RAID Type | RAID Group Size | HA Policy | Has Mroot | Size (Nominal)
R4E08NA3250aggr0_R4E08NA3250_01 | CL-01 | 3@450GB_SAS_15k | 64_bit | raid_dp | 16 | cfo | True | 367.36 GB
R4E08NA3250aggr0_R4E08NA3250_02 | CL-02 | 3@450GB_SAS_15k | 64_bit | raid_dp | 16 | cfo | True | 367.36 GB
R4E08NA3250DATA_R4E08NA3250_01 | CL-01 | 42@450GB_SAS_15k | 64_bit | raid_dp | 21 | sfo | False | 13.63 TB
R4E08NA3250DATA_R4E08NA3250_02 | CL-02 | 42@450GB_SAS_15k | 64_bit | raid_dp | 21 | sfo | False | 13.63 TB
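As an illustrative sketch (not the exact commands from this validation, and option names can vary slightly between Data ONTAP releases; the node name is a placeholder), a data aggregate like those in Table 5 could be created and verified from the cluster shell roughly as follows:

    storage aggregate create -aggregate DATA_R4E08NA3250_01 -nodes <node01> -diskcount 42 -raidtype raid_dp -maxraidsize 21
    storage aggregate show -aggregate DATA_R4E08NA3250_01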
Volumes are data containers that enable you to partition and manage your data. Volumes are the
highest-level logical storage objects. Unlike aggregates, which are composed of physical storage
resources, volumes are completely logical objects. Understanding the types of volumes and their
associated capabilities enables you to design your storage architecture for maximum storage efficiency
and ease of administration.
A FlexVol® volume is a data container associated with a storage virtual machine. It gets its storage from
a single associated aggregate, which it might share with other FlexVol volumes or Infinite Volumes. It
can be used to contain files in a NAS environment, or LUNs in a SAN environment.
Table 6 lists the FlexVol configuration.
Table 6 FlexVol Configuration
Cluster Name | SVM Name | Volume Name | Containing Aggregate | Type | Snapshot Policy | Export Policy | Security Style | Size (Nominal)
R4E08NA3250-CL | CIFS | AOSQL_CIFS | DATA_R4E08NA3250_02 | RW | default | default | ntfs | 200.00 GB
R4E08NA3250-CL | CIFS | User_Profiles | DATA_R4E08NA3250_02 | RW | default | default | ntfs | 75.00 GB
R4E08NA3250-CL | CIFS | vDisk | DATA_R4E08NA3250_02 | RW | default | default | ntfs | 500.00 GB
R4E08NA3250-CL | Hosted_Shared | Hosted_Shared_WC_00 | DATA_R4E08NA3250_01 | RW | none | Hosted_Shared3 | unix | 180.00 GB
R4E08NA3250-CL | Hosted_Shared | Hosted_Shared_WC_01 | DATA_R4E08NA3250_01 | RW | none | Hosted_Shared4 | unix | 180.00 GB
R4E08NA3250-CL | Hosted_Shared | Hosted_Shared_WC_02 | DATA_R4E08NA3250_02 | RW | none | Hosted_Shared5 | unix | 180.00 GB
R4E08NA3250-CL | Hosted_Shared | Hosted_Shared_WC_03 | DATA_R4E08NA3250_02 | RW | none | Hosted_Shared6 | unix | 180.00 GB
R4E08NA3250-CL | Hosted_VDI | HVD_WC | DATA_R4E08NA3250_02 | RW | none | Hosted_VDI2 | unix | 1.95 TB
R4E08NA3250-CL | Hosted_VDI | VDI_WC | DATA_R4E08NA3250_02 | RW | none | Hosted_VDI1 | unix | 1.95 TB
R4E08NA3250-CL | Infrastructure | Infrastructure_VSC | DATA_R4E08NA3250_02 | RW | none | Infrastructure0 | unix | 1.95 TB
R4E08NA3250-CL | SanBoot | BootData | DATA_R4E08NA3250_01 | RW | none | default | unix | 412.50 GB
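As an illustrative example patterned on the first Hosted_Shared row of Table 6 (the junction path is an assumption, not part of the validated command set), a write cache FlexVol volume could be created from the cluster shell roughly as follows:

    volume create -vserver Hosted_Shared -volume Hosted_Shared_WC_00 -aggregate DATA_R4E08NA3250_01 -size 180GB -state online -security-style unix -snapshot-policy none -policy Hosted_Shared3 -junction-path /Hosted_Shared_WC_00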
Figure 16 illustrates the storage layout. The write cache for 725 HSD users is on node 1; the write cache
for another 725 HSD users and for the 550 HVD users is on node 2. One CIFS storage virtual machine (SVM)
is created for HVD users. The XenServer 6.2 SP1 SAN boot volume is on node 1, and the infrastructure
SVM is on node 2.
Figure 16 Storage Layout
Solution Validation
This section details the configuration and tuning that was performed on the individual components to
produce a complete, validated solution.
Configuration Topology for a Scalable XenDesktop 7.1 Mixed Workload Desktop Virtualization Solution
Figure 17 FlexPod C-Mode XenDesktop 7.1 Architecture Block Diagram
Figure 17 illustrates the architecture used for this study. The architecture is divided into four distinct layers:
• Cisco UCS Compute Platform
• The Virtual Desktop Infrastructure that runs on Cisco UCS blade hypervisor hosts
• Network Access layer and LAN
• Storage Access Network (SAN) and NetApp FAS3250 Cluster Mode deployment
Figure 18 illustrates the physical configuration of the 2000 seat Citrix XenDesktop 7.1 environment.
Figure 18 Detailed Architecture of the Filer XenDesktop 7.1
Cisco Unified Computing System Configuration
This section describes the Cisco UCS configuration performed as part of the infrastructure build-out.
The racking, power, and installation of the chassis are described in the installation guide (see
http://www.cisco.com/en/US/docs/unified_computing/ucs/hw/chassis/install/ucs5108_install.html) and
are beyond the scope of this document. More details on each step can be found in the following
documents:
• Cisco UCS Manager Configuration Guides: http://www.cisco.com/en/US/partner/products/ps10281/products_installation_and_configuration_guides_list.html
• Cisco UCS CLI Configuration Guide: http://www.cisco.com/c/en/us/td/docs/unified_computing/ucs/sw/cli/config/guide/2-1/b_UCSM_CLI_Configuration_Guide_2_1.pdf
• Cisco UCS-M GUI Configuration Guide: http://www.cisco.com/en/US/partner/docs/unified_computing/ucs/sw/gui/config/guide/2.1/b_UCSM_GUI_Configuration_Guide_2_1.html
Base Cisco UCS System Configuration
To configure the Cisco Unified Computing System, perform the following steps:
1. Bring up the Fabric Interconnect (FI) and, from a serial console connection, set the IP address, gateway, and hostname of the primary fabric interconnect. Then bring up the second fabric interconnect after connecting the dual cables between them. The second fabric interconnect automatically recognizes the primary and asks if you want it to be part of the cluster; answer yes and set its IP address, gateway, and hostname. Once this is done, all access to the FIs can be done remotely. You will also configure the virtual IP address used to connect to the FIs, so you need a total of three IP addresses to bring the system online. You can also wire up the chassis to the FIs, using 1, 2, or 4 links per IO Module, depending on your application bandwidth requirements. We connected all four links to each module.
2. Connect to the virtual IP with your browser and launch UCS Manager. The Java-based UCSM will let you do everything that you could do from the CLI; we will highlight the GUI methodology here. Check the firmware on the system and see if it is current. Visit http://software.cisco.com/download/release.html?mdfid=283612660&softwareid=283655658&release=2.0(4d)&relind=AVAILABLE&rellifecycle=&reltype=latest to download the most current Cisco UCS Infrastructure and Cisco UCS Manager software. Use the UCS Manager Equipment tab in the left pane, then the Firmware Management tab in the right pane and the Packages sub-tab, to view the packages on the system. Use the Download Tasks tab to download needed software to the FI. The firmware release used in this paper is 2.1(1a).
3. If the firmware is not current, follow the installation and upgrade guide to upgrade the UCS Manager firmware. We will use UCS Policy in Service Profiles later in this document to update all Cisco UCS components in the solution. Note: The BIOS and Board Controller version numbers do not track the IO Module, Adapter, or CIMC controller version numbers in the packages.
4. Configure and enable the server ports on the FI. These are the ports that will connect the chassis to the FIs.
5. Configure and enable the uplink Ethernet ports and the FC uplink ports. Use Configure Unified Ports and Configure Expansion Module Ports to configure the FC uplinks.
Note: In this example, we configured six FC ports, two of which are in use.
5a. On the LAN tab in the Navigator pane, configure the required Port Channels and Uplink Interfaces on both Fabric Interconnects.
6. Expand the Chassis node in the left pane, then click each chassis in the left pane and click Acknowledge Chassis in the right pane to bring the chassis online and enable blade discovery.
7. Use the Admin tab in the left pane to configure logging, users and authentication, key management, communications, statistics, time zone and NTP services, and licensing. Configuring your Management IP Pool (which provides IP-based access to the KVM of each UCS Blade Server), time zone management (including NTP time sources), and uploading your license files are critical steps in the process.
8. Create all the pools: MAC pool, WWPN pool, WWNN pool, UUID pool, and Server pool.
8.1 From the LAN tab in the navigator, under the Pools node, we created a MAC address pool of sufficient size for the environment. In this project, we created a single pool with two address ranges for expandability.
8.2 For Fibre Channel connectivity, WWNN and WWPN pools must be created from the SAN tab in the navigator pane, in the Pools node.
8.3 For this project, we used a single VSAN, the default VSAN with ID 1.
8.4 The next pool we created is the Server UUID pool. On the Servers tab in the Navigator page, under the Pools node, we created a single UUID pool for the test environment. Each UCS Blade Server requires a unique UUID to be assigned by its service profile.
8.5 We created two Server Pools for use in our Service Profile Templates as selection criteria for automated profile association. Server Pools were created on the Servers tab in the navigation page under the Pools node. Only the pool name was created; no servers were added.
8.6 We created two Server Pool Policy Qualifications to identify the blade server model for placement into the correct pool using the Service Profile Template. In this case we used chassis IDs to select the servers. (We could have used slots or server models to make the selection.)
8.7 The next step in automating the server selection process is to create corresponding Server Pool Policies for each UCS Blade Server model, utilizing the Server Pool and Server Pool Policy Qualifications created earlier.
9. Virtual Host Bus Adapter (vHBA) templates were created for FC SAN connectivity from the SAN tab under the Policies node, one template for each fabric. Create at least one HBA template for each Fabric Interconnect if block storage will be used. We used the WWPN pool created earlier and the QoS Policy created in the section below.
10. On the LAN tab in the navigator pane, configure the VLANs for the environment. In this project we utilized seven VLANs to accommodate our four Ethernet system classes, a separate VLAN for infrastructure services, and the XenServer Management shared VLAN 801.
11. On the LAN tab in the navigator pane, under the Policies node, configure the vNIC templates that will be used in the Service Profiles. In this project, we utilized four virtual NICs per host.
11a. Create vNIC templates for both fabrics, check Enable Failover, select the VLANs supported on the adapter (optional), set the MTU size, select the MAC Pool and QoS Policy, then click OK. We created four vNIC templates for Infrastructure, Hypervisor Management, Storage, and VM Traffic, and selected alternating vNIC templates to load-balance the traffic across both sides of the fabric.
12. Create the Boot from SAN policy that was used for both B230 M2 and B200 M3 blades, using the WWNs from the FAS3250 storage system as SAN targets.
13. Create performance BIOS Policies for each blade type to ensure optimal performance. The following screen captures show the settings for the B200 M3 blades used in this study:
The remaining Advanced tab settings are at platform default or not configured. Similarly, the Boot Options and Server Management tabs' settings are at their defaults.
Note: Be sure to Save Changes at the bottom of the page to preserve these settings, and be sure to add this policy to your blade service profile template.
14. B200 M3 Host Firmware Package policies were set for UCS version 2.1.3a.
15. Create a service profile template using the pools, templates, and policies configured above. In this project, we created one template for the UCS B200 M3 Blade Server model used. Follow through each section, utilizing the policies and objects you created earlier, then click Finish.
Note: On the Operational Policies screen, select the appropriate performance BIOS policy you created earlier to ensure maximum LV DIMM performance.
Note: For automatic deployment of service profiles from your template(s), you must associate a server pool that contains blades with the template.
16a. On the Create Service Profile Template wizard, we entered a unique name, selected the type as updating, and selected the VDA-UUID-Suffix_Pool created earlier, then clicked Next.
We selected the Expert configuration option on the Networking page and clicked Add in the adapters window:
In the Create vNIC window, we entered a unique Name, checked the Use LAN Connectivity Template checkbox, selected the vNIC Template from the drop-down, and selected the Adapter Policy the same way.
We repeated the process for the remaining vNICs, resulting in the following:
16b. On the Storage page, we selected the Expert mode, selected the WWNN Pool we created earlier from the drop-down list, and then clicked Add.
Note that we used the default Local Storage configuration in this project; local drives on the blades were not used.
16c. On the Create HBA page, we entered a name (FC0) and checked Use SAN Connectivity Template, which changed the display to the following:
We repeated the process for the remaining vHBA, resulting in the following:
Click Next to continue.
Click Next on the Zoning window since our FCoE zoning will be handled by the Nexus 5548UP switches.
We accepted the system's automatic placement of vNICs and vHBAs.
We selected the Boot from SAN policy Multipath-BFS-XD, created in Section 6.4.5, from the drop-down, then proceeded:
We did not create a Maintenance Policy for the project, so we clicked Next to continue:
On the Server Assignment page, make the following selections from the drop-downs and click the expand
arrow on the Firmware Management box as shown:
On the Operational Policies page, we expanded the BIOS Configuration section and selected the BIOS
Policy for the B200 M3 created earlier, then clicked Finish to complete the Service Profile Template:
17. Now that we had created the Service Profile Templates for each UCS Blade Server model used in the project, we used them to create the appropriate number of Service Profiles. To do so, in the Servers tab in the navigation page, in the Service Profile Templates node, we expanded the root and selected Service Template B200 M3, then clicked Create Service Profiles from Template in the right pane, Actions area:
18. We provided the naming prefix and the number of Service Profiles to create and clicked OK.
19. Cisco UCS Manager created the requisite number of profiles and, because of the Associated Server Pool and Server Pool Qualification policy, the B200 M3 blades in the test environment began automatically associating with the proper Service Profile.
20. We verified that each server had a profile and that it received the correct profile.
QoS and CoS in Cisco Unified Computing System
Cisco Unified Computing System provides different system classes of service to implement quality of service, including:
• System classes that specify the global configuration for certain types of traffic across the entire system
• QoS policies that assign system classes to individual vNICs
• Flow control policies that determine how uplink Ethernet ports handle pause frames
• Time-sensitive applications, such as those running in the Cisco Unified Computing System, which must adhere to a strict QoS policy for optimal performance
System Class Configuration
System Class is the global configuration under which all of the system's interfaces operate with defined QoS rules.
• By default, the system has a Best Effort class and an FCoE class.
• Best Effort is equivalent to "match any" in MQC terminology.
• FCoE is a special class defined for FCoE traffic; in MQC terminology it is "match cos 3".
• Up to four more user-defined classes are allowed, each with the following configurable rules:
– CoS to Class Map
– Weight: Bandwidth
– Per-class MTU
– Property of Class (drop vs. no drop)
• The maximum MTU allowed per class is 9216.
• Through Cisco Unified Computing System, one CoS value can be mapped to a particular class.
• Apart from the FCoE class, only one more class can be configured with the no-drop property.
• Weight can be configured as a number from 0 to 10. Internally, the system calculates the bandwidth from the following equation (the result is rounded):

% b/w share of a given class = (weight of the given priority * 100) / (sum of the weights of all priorities)
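For example, with only the default Best Effort and FC classes enabled, each at the default weight of 5 (see Table 9), each class receives (5 * 100) / (5 + 5) = 50 percent of the available bandwidth.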
Cisco UCS System Class Configuration
Cisco Unified Computing System defines user class names as follows:
• Platinum
• Gold
• Silver
• Bronze
Table 7    Name Table Map Between Cisco Unified Computing System and NX-OS

Cisco UCS Names    NX-OS Names
Best effort        class-default
FC                 class-fc
Platinum           class-Platinum
Gold               class-Gold
Silver             class-Silver
Bronze             class-Bronze

Table 8    Class to CoS Map by Default in Cisco Unified Computing System

Cisco UCS Class Names    Cisco UCS Default Class Value
Best effort              Match any
FC                       3
Platinum                 5
Gold                     4
Silver                   2
Bronze                   1

Table 9    Default Weight in Cisco Unified Computing System

Cisco UCS Class Names    Weight
Best effort              5
FC                       5
Steps to Enable QoS on the Cisco Unified Computing System
For this study, we utilized four Cisco UCS QoS System Classes to prioritize four types of traffic in the infrastructure:

Table 10    QoS Priority to vNIC and VLAN Mapping

Cisco UCS QoS Priority    vNIC Assignment    VLAN Supported
Platinum                  eth2, eth3         804 (Storage)
Gold                      eth4, eth5         800 (VDA)
Silver                    eth0, eth1         801 (Management)
Bronze                    eth6, eth7         802 (vMotion)

Configure the Platinum, Gold, Silver, and Bronze policies by checking the enabled box. The Platinum policy, used for NFS storage, was configured for jumbo frames in the MTU column. Notice the option to set the no-packet-drop policy during this configuration.
Figure 19    UCS QoS System Class Configuration
Next, in the LAN tab under Policies > Root > QoS Policies, verify that the QoS Policies Platinum, Gold, Silver, and Bronze exist, with each QoS policy mapped to its corresponding priority.
Figure 20    Cisco UCS QoS Policy Configuration
Finally, include the corresponding QoS Policy in each vNIC template using the QoS policy drop-down, following the QoS Priority to vNIC and VLAN Mapping table above.
Figure 21    Utilize QoS Policy in vNIC Template
LAN Configuration
The access layer LAN configuration consists of a pair of Cisco Nexus 5548s (N5Ks), members of our low-latency, line-rate, 10 Gigabit Ethernet and FCoE switch family, for our VDI deployment.
Cisco UCS and NetApp Ethernet Connectivity
Two 10 Gigabit Ethernet uplink ports are configured on each of the Cisco UCS 6248 fabric interconnects, and they are connected to the Cisco Nexus 5548 pair in a bow-tie manner in a port channel, as shown below.
The 6248 Fabric Interconnect is in End Host mode, as we are doing both Fibre Channel and Ethernet (NAS) data access, per the recommended best practice for the Cisco Unified Computing System. We built this out for scale and have provisioned 20 Gbps per Fabric Interconnect for Ethernet (Figure 32) and 20 Gbps per Fabric Interconnect for FC.
The FAS3250s are also equipped with two dual-port 10G X1117A adapters, which are connected to the pair of N5Ks downstream. Both paths are active, providing failover capability. This allows end-to-end 10G access for file-based storage traffic. We have implemented jumbo frames on the ports and have priority flow control on, with Platinum CoS and QoS assigned to the vNICs carrying the storage data access on the Fabric Interconnects.
Note
The upstream configuration is beyond the scope of this document; there are good reference documents [4] that discuss best practices for using the Cisco Nexus 5000 and 7000 Series Switches. New with the Nexus 5500 series is an available Layer 3 module, which was not used in these tests and is not covered in this document.
Figure 22    Ethernet Network Configuration with Upstream Cisco Nexus 5500 Series from the Cisco Unified Computing System 6200 Series Fabric Interconnects and NetApp FAS3250
SAN Configuration
The same pair of Nexus 5548UP switches was used in the configuration to connect the FCP ports on the NetApp FAS3250 to the FCP ports of the Cisco UCS 6248 Fabric Interconnects.
Boot from SAN Benefits
Booting from SAN is another key feature that helps move toward stateless computing, in which there is no static binding between a physical server and the OS/applications it is tasked to run. The OS is installed on a SAN LUN, and the Boot from SAN policy is applied to the service profile template or the service profile. If the service profile were to be moved to another server, the pwwns of the HBAs and the Boot from SAN (BFS) policy would move along with it. The new server then takes on the exact character of the old server, providing the truly stateless nature of the UCS Blade Server.
The key benefits of booting from the network are as follows:
• Reduced Server Footprint: Boot from SAN alleviates the necessity for each server to have its own direct-attached disk, eliminating internal disks as a potential point of failure. Thin diskless servers also take up less facility space, require less power, and are generally less expensive because they have fewer hardware components.
• Disaster and Server Failure Recovery: All the boot information and production data stored on a local SAN can be replicated to a SAN at a remote disaster recovery site. If a disaster destroys functionality of the servers at the primary site, the remote site can take over with minimal downtime.
• Recovery from server failures is simplified in a SAN environment. With the help of snapshots, mirrors of a failed server can be recovered quickly by booting from the original copy of its image. As a result, boot from SAN can greatly reduce the time required for server recovery.
• High Availability: A typical data center is highly redundant in nature, with redundant paths, redundant disks, and redundant storage controllers. When operating system images are stored on disks in the SAN, this supports high availability and eliminates the potential for mechanical failure of a local disk.
• Rapid Redeployment: Businesses that experience temporary high production workloads can take advantage of SAN technologies to clone the boot image and distribute it to multiple servers for rapid deployment. Such servers may only need to be in production for hours or days and can be readily removed when the production need has been met. Highly efficient deployment of boot images makes temporary server usage a cost-effective endeavor.
• Centralized Image Management: When operating system images are stored on networked disks, all upgrades and fixes can be managed at a centralized location. Changes made to disks in a storage array are readily accessible by each server.
With Boot from SAN, the image resides on a SAN LUN, and the server communicates with the SAN through a host bus adapter (HBA). The HBA's BIOS contains the instructions that enable the server to find the boot disk. All FCoE-capable Converged Network Adapter (CNA) cards supported on Cisco UCS B-Series blade servers support Boot from SAN.
After power-on self-test (POST), the server hardware fetches the device designated as the boot device in the hardware BIOS settings and then follows the regular boot process.
Configuring Boot from SAN Overview
There are three distinct phases during the configuration of Boot from SAN. The high-level procedures are:
1. SAN configuration on the Nexus 5548UPs
2. Storage array host initiator configuration
3. Cisco UCS configuration of the Boot from SAN policy in the service profile
Each of these high-level phases is discussed in the following sections.
SAN Configuration on Cisco Nexus 5548UP
The FCoE and NPIV features have to be turned on in the Nexus 5500 series switch. Make sure you have 10 GB SFP+ modules connected to the Nexus 5548UP ports. The port mode and speed are set to AUTO, and the rate mode is "dedicated". When everything is configured correctly, you should see output like that shown below on a Nexus 5500 series switch for a given port (for example, fc1/17).
Note
A Cisco Nexus 5500 series switch supports multiple VSAN configurations. Two VSANs were deployed
in this study: VSAN 500 on Fabric A and VSAN 501 on Fabric B.
Cisco Fabric Manager can also be used to get an overall picture of the SAN configuration and zoning information. As discussed earlier, the SAN zoning is done up front for all the pwwns of the initiators with the NetApp FAS3250 target pwwns.
The steps to prepare the Nexus 5548UPs for boot from SAN follow. We show only the configuration on Fabric A. The same commands are used to configure the Nexus 5548UP for Fabric B, but are not shown here. The complete configuration for both Nexus 5548UP switches is contained in the appendix to this document.
Enter configuration mode on each switch:
config t
Start by adding the npiv and fcoe features to both Nexus 5548UP switches:
feature npiv
feature fcoe
Verify that the features are enabled on both switches:
show feature | grep npiv
npiv                  1         enabled
show feature | grep fcoe
fcoe                  1         enabled
fcoe-npv              1         disabled
# show interface brief
-------------------------------------------------------------------------------
Interface  Vsan  Admin  Admin  Status  SFP  Oper  Oper    Port
                 Mode   Trunk               Mode  Speed   Channel
                        Mode                      (Gbps)
-------------------------------------------------------------------------------
fc1/17     1     auto   on     up      swl  F     8       --
fc1/18     1     auto   on     up      swl  F     8       --

• The FCP connection was used for configuring boot from SAN for all of the server blades.
• Single VSAN zoning was set up on the Nexus 5548s to make those FAS3250 LUNs visible to the infrastructure and test servers.
An example SAN zone configuration is shown below on the Fabric A side:
zone name B200M3-CH3-BL1-FC0 vsan 1
member pwwn 20:01:00:a0:98:14:93:b6
!
[FAS3250-C0]
member pwwn 20:00:00:25:b5:c1:00:9f
!
[B200M3-CH3-BL1-fc0]
member pwwn 20:02:00:a0:98:14:93:b6
!
[FAS3250-LIF2-C0]
Where 20:00:00:25:b5:c1:00:9f is the pwwn of the blade server's Converged Network Adapter (CNA) on the Fabric A side.
The NetApp FCP target ports are 20:01:00:a0:98:14:93:b6 and 20:02:00:a0:98:14:93:b6, and they belong to one logical interface port on the FCP modules on the FAS3250s.
Similar zoning is done on the second Nexus 5548 in the pair to take care of the Fabric B side:
zone name B200M3-CH3-BL1-FC1A vsan 1
member pwwn 20:05:00:a0:98:14:93:b6
!
[FAS3250-D0]
member pwwn 20:06:00:a0:98:14:93:b6
!
[FAS3250-LF2-D0]
member pwwn 20:00:00:25:b5:c1:00:8a
!
[B200M3-CH3-BL1-fc1]
Where 20:00:00:25:b5:c1:00:8a is the pwwn of the blade server's Converged Network Adapter (CNA) on the Fabric B side.
The NetApp FCP target ports are 20:05:00:a0:98:14:93:b6 and 20:06:00:a0:98:14:93:b6, and they belong to one logical interface port on the FCP modules on the FAS3250s.
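For completeness, a zone does not take effect until it is added to a zoneset and the zoneset is activated in the VSAN. The following is a minimal sketch of that NX-OS sequence, using the example zone above; the zoneset name FlexPod-FabA is illustrative only:
zoneset name FlexPod-FabA vsan 1
member B200M3-CH3-BL1-FC0
zoneset activate name FlexPod-FabA vsan 1
show zoneset active vsan 1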
Figure 23    FAS3250 FCP Target Ports
For detailed Nexus 5500 series switch configuration, refer to Cisco Nexus 5500 Series NX-OS SAN
Switching Configuration Guide. (See the Reference Section of this document for a link.)
NetApp Storage Configuration for XenServer 6.2 Infrastructure
A storage system running Data ONTAP has a main unit, which is the hardware device that receives and
sends data. Depending on the platform, a storage system uses storage on disk shelves, third-party
storage, or both.
The storage system for this solution consists of the following components:
• The storage controller, which is the component of a storage system that runs the Data ONTAP operating system and controls its disk subsystem
• The disk shelves, which contain disk drives and are attached to a storage system
Cluster Details
You can group HA pairs of nodes together to form a scalable cluster. Creating a cluster enables the nodes
to pool their resources and distribute work across the cluster, while presenting administrators with a
single entity to manage. Clustering also enables continuous service to end users if individual nodes go
offline.
A cluster can contain up to 24 nodes (or up to 10 nodes if it contains a storage virtual machine with an
Infinite Volume) for NAS based clusters and up to 8 nodes for SAN based clusters (as of Data ONTAP
8.2). Each node in the cluster can view and manage the same volumes as any other node in the cluster.
The total filesystem namespace, which includes all of the volumes and their resultant paths, spans the
cluster.
If you have a two-node cluster, you must configure cluster high availability (HA). For more information,
see the Clustered Data ONTAP High-Availability Configuration Guide.
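As a sketch, assuming the two-node cluster has already been created, cluster HA can be enabled from the cluster shell with a single command:
cluster ha modify -configured true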
The nodes in a cluster communicate over a dedicated, physically isolated, dual-fabric and secure
Ethernet network. The cluster logical interfaces (LIFs) on each node in the cluster must be on the same
subnet. For information about network management for cluster and nodes, see the Clustered Data
ONTAP Network Management Guide.
For information about setting up a cluster or joining a node to the cluster, see the Clustered Data ONTAP
Software Setup Guide.
Cluster Create in Clustered Data ONTAP
Cluster Detail                       Cluster Detail Value
Cluster name                         <<var_clustername>>
Clustered Data ONTAP base license    <<var_cluster_base_license_key>>
Cluster management IP address        <<var_clustermgmt_ip>>
Cluster management netmask           <<var_clustermgmt_mask>>
Cluster management port              <<var_clustermgmt_port>>
Cluster management gateway           <<var_clustermgmt_gateway>>
Cluster Node01 IP address            <<var_node01_mgmt_ip>>
Cluster Node01 netmask               <<var_node01_mgmt_mask>>
Cluster Node01 gateway               <<var_node01_mgmt_gateway>>
The first node in the cluster performs the cluster create operation. All other nodes perform a
cluster join operation. The first node in the cluster is considered Node01.
1. During the first node boot, the Cluster Setup wizard starts running on the console.
Welcome to the cluster setup wizard.
You can enter the following commands at any time:
"help" or "?" - if you want to have a question clarified,
"back" - if you want to change previously answered questions, and
"exit" or "quit" - if you want to quit the cluster setup wizard.
Any changes you made before quitting will be saved.
You can return to cluster setup at any time by typing "cluster setup".
To accept a default or omit a question, do not enter a value.
Do you want to create a new cluster or join an existing cluster?
{create, join}:
Note
If a login prompt appears instead of the Cluster Setup wizard, start the wizard by logging in using the
factory default settings and then enter the cluster setup command.
2. Enter the following command to create a new cluster:
create
3. The system defaults are displayed.
System Defaults:
Private cluster network ports [e1a,e2a].
Cluster port MTU values will be set to 9000.
Cluster interface IP addresses will be automatically generated.
Do you want to use these defaults? {yes, no} [yes]:
4. NetApp recommends accepting the system defaults. To accept the system defaults, press
Enter.
Note
The cluster is created; this can take a minute or two.
5. The steps to create a cluster are displayed.
Enter the cluster name: <<var_clustername>>
Enter the cluster base license key: <<var_cluster_base_license_key>>
Creating cluster <<var_clustername>>
Enter additional license key[]:
Note
For this validated architecture we recommend you install license keys for SnapRestore®, NFS, and FCP.
After you finish entering the license keys, press Enter.
Enter the cluster administrators (username “admin”) password: <<var_password>>
Retype the password: <<var_password>>
Enter the cluster management interface port [e0a]: e0a
Enter the cluster management interface IP address: <<var_clustermgmt_ip>>
Enter the cluster management interface netmask: <<var_clustermgmt_mask>>
Enter the cluster management interface default gateway: <<var_clustermgmt_gateway>>
6. Enter the DNS domain name.
Enter the DNS domain names:<<var_dns_domain_name>>
Enter the name server IP addresses:<<var_nameserver_ip>>
Note
If you have more than one name server IP address, separate them with a comma.
7. Set up the node.
Where is the controller located []:<<var_node_location>>
Enter the node management interface port [e0M]: e0b
Enter the node management interface IP address: <<var_node01_mgmt_ip>>
Enter the node management interface netmask:<<var_node01_mgmt_mask>>
Enter the node management interface default gateway:<<var_node01_mgmt_gateway>>
Note
The node management interface should be in a different subnet than the cluster management interface.
The node management interfaces can reside on the out-of-band management network, and the cluster
management interface can be on the in-band management network.
8. Press Enter to accept the AutoSupport™ message.
9. Reboot node 01.
system node reboot <<var_node01>>
y
10. When you see Press Ctrl-C for Boot Menu, enter:
Ctrl – C
11. Select 5 to boot into maintenance mode.
5
12. When prompted Continue with boot?, enter y.
13. To verify the HA status of your environment, run the following command:
ha show
If either component is not in HA mode, use the ha modify command to put the components in
HA mode.
14. To see how many disks are unowned, enter:
disk show -a
Note
No disks should be owned in this list.
15. Assign disks.
This reference architecture allocates half the disks to each controller. However, workload design
could dictate different percentages.
disk assign –n <<var_#_of_disks>>
16. Reboot the controller.
halt
17. At the LOADER-A prompt, enter:
autoboot
Cluster Join in Clustered Data ONTAP
Cluster Detail                   Cluster Detail Value
Cluster name                     <<var_clustername>>
Cluster management IP address    <<var_clustermgmt_ip>>
Cluster Node02 IP address        <<var_node02_mgmt_ip>>
Cluster Node02 netmask           <<var_node02_mgmt_mask>>
Cluster Node02 gateway           <<var_node02_mgmt_gateway>>
The first node in the cluster performs the cluster create operation. All other nodes perform a
cluster join operation. The first node in the cluster is considered Node01, and the node joining the
cluster in this example is Node02.
1. During the node boot, the Cluster Setup wizard starts running on the console.
Welcome to the cluster setup wizard.
You can enter the following commands at any time:
"help" or "?" - if you want to have a question clarified,
"back" - if you want to change previously answered questions, and
"exit" or "quit" - if you want to quit the cluster setup wizard.
Any changes you made before quitting will be saved.
You can return to cluster setup at any time by typing "cluster setup".
To accept a default or omit a question, do not enter a value.
Do you want to create a new cluster or join an existing cluster?
{create, join}:
If a login prompt displays instead of the Cluster Setup wizard, start the wizard by logging in
using the factory default settings, and then enter the cluster setup command.
2. Enter the following command to join a cluster:
join
3. The system defaults are displayed.
System Defaults:
Private cluster network ports [e1a,e2a].
Cluster port MTU values will be set to 9000.
Cluster interface IP addresses will be automatically generated.
Do you want to use these defaults? {yes, no} [yes]:
4. NetApp recommends accepting the system defaults. To accept the system defaults, press
Enter.
The cluster creation can take a minute or two.
5. The steps to join the cluster are displayed.
Enter the name of the cluster you would like to join [<<var_clustername>>]:Enter
Note
The node should find the cluster name.
6. Set up the node.
Enter the node management interface port [e0M]: e0b
Enter the node management interface IP address: <<var_node02_mgmt_ip>>
Enter the node management interface netmask: Enter
Enter the node management interface default gateway: Enter
7. The node management interface should be in a subnet different from the cluster management
interface. The node management interfaces can reside on the out-of-band management
network, and the cluster management interface can be on the in-band management network.
8. Press Enter to accept the AutoSupport™ message.
9. Log in to the cluster interface with the admin user ID and <<var_password>>.
10. Reboot node 02.
system node reboot <<var_node02>>
y
11. When you see Press Ctrl-C for Boot Menu, enter:
Ctrl – C
12. Select 5 to boot into maintenance mode.
5
13. At the question, Continue with boot? enter:
y
14. To verify the HA status of your environment, enter:
ha show
If either component is not in HA mode, use the ha modify command to put the components in HA mode.
15. To see how many disks are unowned, enter:
disk show -a
16. Assign disks.
Note
This reference architecture allocates half the disks to each controller. Workload design could dictate
different percentages, however. Assign all remaining disks to node 02.
disk assign –n <<var_#_of_disks>>
17. Reboot the controller:
halt
18. At the LOADER-A prompt, enter:
autoboot
19. Press Ctrl-C for boot menu when prompted.
Ctrl-C
Log in to the Cluster
1. Open an SSH connection to the cluster IP or host name and log in as the admin user with the password you provided earlier.
Table 11    Cluster Details

Cluster Name      Node Name            System Model  HA Partner Node Name  Data ONTAP Version
R4E08NA3250-CL    R4E08NA3250-CL-01    FAS3250       R4E08NA3250-CL-02     8.2P5
R4E08NA3250-CL    R4E08NA3250-CL-02    FAS3250       R4E08NA3250-CL-01     8.2P5
Firmware Details
With Data ONTAP 8.2, you must upgrade to the latest service processor (SP) firmware to take advantage
of the latest updates available for the remote management device.
1. Using a web browser, connect to http://support.netapp.com/NOW/cgi-bin/fw.
2. Navigate to the Service Processor Image for installation from the Data ONTAP prompt page for your storage platform.
3. Proceed to the download page for the latest release of the SP firmware for your storage
platform.
4. Using the instructions on this page, update the SPs on both nodes in your cluster. You will
need to download the .zip file to a web server that is reachable from the cluster management
interface. In step 1a of the instructions substitute the following command: system image
get –node * -package http://web_server_name/path/SP_FW.zip
Also, instead of run local, use system node run <<var_nodename>>, then execute
steps 2–6 on each node.
Configure the Service Processor on Node 01
1. From the cluster shell, enter the following command:
system node run <<var_node01>> sp setup
2. Enter the following to set up the SP:
Would you like to configure the SP? Y
Would you like to enable DHCP on the SP LAN interface? no
Please enter the IP address of the SP[]: <<var_node01_sp_ip>>
Please enter the netmask of the SP[]: <<var_node01_sp_mask>>
Please enter the IP address for the SP gateway[]: <<var_node01_sp_gateway>>
Configure the Service Processor on Node 02
1. From the cluster shell, enter the following command:
system node run <<var_node02>> sp setup
2. Enter the following to set up the SP:
Would you like to configure the SP? Y
Would you like to enable DHCP on the SP LAN interface? no
Please enter the IP address of the SP[]: <<var_node02_sp_ip>>
Please enter the netmask of the SP[]: <<var_node02_sp_mask>>
Please enter the IP address for the SP gateway[]: <<var_node02_sp_gateway>>
Table 12    Relevant Firmware Details for Each Node

Node Name          Node Firmware  Shelf Firmware        Drive Firmware           Remote Mgmt Firmware
R4E08NA3250-CL-01  5.2.1          IOM3: A:0160, B:0160  X411_HVIPC420A15: NA02,  SP: 1.4.1
                                                        X411_S15K7420A15: NA03
R4E08NA3250-CL-02  5.2.1          IOM3: A:0160, B:0160  X411_HVIPC420A15: NA02,  SP: 1.4.1
                                                        X411_S15K7420A15: NA03

Table 13    Expansion Cards Present in Each Node

Node Name: R4E08NA3250-CL-01 (System Model: FAS3250)
PCI Slot Inventory:
slot 1: X1117A: Intel Dual 10G IX1-SFP+ NIC
slot 2: X1117A: Intel Dual 10G IX1-SFP+ NIC
slot 3: X1117A: Intel Dual 10G IX1-SFP+ NIC
slot 4: X1117A: Intel Dual 10G IX1-SFP+ NIC
slot 5: X1971A: Flash Cache 512 GB
slot 6: X2065A: PMC PM8001; PCI-E quad-port SAS (PM8003)

Node Name: R4E08NA3250-CL-02 (System Model: FAS3250)
PCI Slot Inventory:
slot 1: X1117A: Intel Dual 10G IX1-SFP+ NIC
slot 2: X1117A: Intel Dual 10G IX1-SFP+ NIC
slot 3: X1117A: Intel Dual 10G IX1-SFP+ NIC
slot 4: X1117A: Intel Dual 10G IX1-SFP+ NIC
slot 5: X1971A: Flash Cache 512 GB
slot 6: X2065A: PMC PM8001; PCI-E quad-port SAS (PM8003)
Disk Firmware Updates
It is recommended to upgrade disk firmware to the latest release level to avoid any potential outages and
data loss. Older disk firmware runs a risk of having a double disk fault, causing controller panic and data
corruption.
Follow the steps detailed in the Instructions for Downloading and Installing Disk Firmware on the
NetApp Support Site.
Licensing
Starting with Data ONTAP 8.2, all license keys are 28 characters in length. Licenses installed prior to
Data ONTAP 8.2 will continue to work after upgrading to Data ONTAP 8.2. However, if you need to
reinstall a license (for example, you deleted a previously installed license and want to reinstall it in Data
ONTAP 8.2, or you perform a controller replacement procedure for a node in a cluster running Data
ONTAP 8.2 or later), Data ONTAP requires that you enter the license key in the 28-character format.
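As an illustrative sketch, a 28-character key can be installed and the licensed packages verified from the cluster shell; the variable below is a placeholder in the style used elsewhere in this document:
system license add -license-code <<var_nfs_license_key>>
system license show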
Table 14    Licensed Software for Clusters Running Data ONTAP 8.2

Cluster Name      Owner             Package
R4E08NA3250-CL    R4E08NA3250-CL    base
R4E08NA3250-CL    R4E08NA3250-CL    cifs
R4E08NA3250-CL    R4E08NA3250-CL    fcp
R4E08NA3250-CL    R4E08NA3250-CL    insight_balance
R4E08NA3250-CL    R4E08NA3250-CL    nfs
Storage Virtual Machine
A storage virtual machine (SVM, also known as a Vserver) is a secure virtual storage server that contains data volumes and one or more LIFs through which it serves data to clients.
An SVM appears as a single dedicated server to the clients. Each SVM has a separate administrator authentication domain and can be managed independently by its SVM administrator.
In a cluster, an SVM facilitates data access. A cluster must have at least one SVM to serve data. SVMs use the storage and network resources of the cluster; however, the volumes and LIFs are exclusive to the SVM. Multiple SVMs can coexist in a single cluster without being bound to any node in the cluster. However, they are bound to the physical cluster on which they exist.
In Data ONTAP 8.2, an SVM can contain either one or more FlexVol volumes or a single Infinite Volume. A cluster can have either one or more SVMs with FlexVol volumes or one SVM with an Infinite Volume.
To create an infrastructure Vserver, complete the following steps:
1. Run the Vserver setup wizard.
vserver setup
Welcome to the Vserver Setup Wizard, which will lead you through
the steps to create a virtual storage server that serves data to clients.
You can enter the following commands at any time:
"help" or "?" if you want to have a question clarified,
"back" if you want to change your answers to previous questions, and
"exit" if you want to quit the Vserver Setup Wizard. Any changes
you made before typing "exit" will be applied.
You can restart the Vserver Setup Wizard by typing "vserver setup". To accept a default
or omit a question, do not enter a value.
Step 1. Create a Vserver.
You can type "back", "exit", or "help" at any question.
2. Enter the Vserver name.
Enter the Vserver name:Infrastructure
3. Select the Vserver data protocols to configure.
Choose the Vserver data protocols to be configured {nfs, cifs, fcp, iscsi}:nfs, fcp
4. Select the Vserver client services to configure.
Choose the Vserver client services to configure {ldap, nis, dns}:Enter
5. Enter the Vserver’s root volume aggregate:
Enter the Vserver's root volume aggregate {aggr01, aggr02} [aggr01]:aggr01
6. Enter the Vserver language setting. English is the default [C].
Enter the Vserver language setting, or "help" to see all languages [C]:Enter
7. Enter the Vserver's security style:
Enter the Vserver's root volume's security style {unix, ntfs, mixed} [unix]: Enter
8. Answer no to Do you want to create a data volume?
Do you want to create a data volume? {yes, no} [Yes]: no
9. Answer no to Do you want to create a logical interface?
Do you want to create a logical interface? {yes, no} [Yes]: no
10. Answer no to Do you want to Configure FCP? {yes, no} [yes]: no.
Do you want to Configure FCP? {yes, no} [yes]: no
11. Add the two data aggregates to the Infrastructure aggregate list for NetApp Virtual Console.
vserver modify -vserver Infrastructure -aggr-list aggr01,aggr02
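The resulting Vserver configuration, including the aggregate list and allowed protocols, can be confirmed with a show command such as the following (illustrative):
vserver show -vserver Infrastructure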
FC Service in Clustered Data ONTAP
1. Create the FC service on each Vserver. This command also starts the FC service and sets the
FC alias to the name of the Vserver.
fcp create -vserver Infrastructure
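The FC service status and the WWNN assigned to the Vserver can then be verified with (illustrative):
vserver fcp show -vserver Infrastructure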
HTTPS Access in Clustered Data ONTAP
Secure access to the storage controller must be configured.
1. Increase the privilege level to access the certificate commands.
set -privilege advanced
Do you want to continue? {y|n}: y
2. Generally, a self-signed certificate is already in place. Check it with the following command:
security certificate show
3. Run the following commands as one-time commands to generate and install self-signed certificates. (You can also use the security certificate delete command to delete expired certificates.)
security certificate create -vserver Infrastructure -common-name
<<var_security_cert_vserver_common_name>> -size 2048 -country <<var_country_code>> -state
<<var_state>> -locality <<var_city>> -organization <<var_org>> -unit <<var_unit>> -email
<<var_storage_admin_email>>
security certificate create -vserver <<var_clustername>> -common-name
<<var_security_cert_cluster_common_name>> -size 2048 -country <<var_country_code>> -state
<<var_state>> -locality <<var_city>> -organization <<var_org>> -unit <<var_unit>> -email
<<var_storage_admin_email>>
security certificate create -vserver <<var_node01>> -common-name
<<var_security_cert_node01_common_name>> -size 2048 -country <<var_country_code>> -state
<<var_state>> -locality <<var_city>> -organization <<var_org>> -unit <<var_unit>> -email
<<var_storage_admin_email>>
security certificate create -vserver <<var_node02>> -common-name
<<var_security_cert_node02_common_name>> -size 2048 -country <<var_country_code>> -state
<<var_state>> -locality <<var_city>> -organization <<var_org>> -unit <<var_unit>> -email
<<var_storage_admin_email>>
4. Configure and enable SSL and HTTPS access and disable Telnet access.
system services web modify -external true -sslv3-enabled true
Do you want to continue {y|n}: y
system services firewall policy delete -policy mgmt -service http -action allow
system services firewall policy create -policy mgmt -service http -action deny -ip-list
0.0.0.0/0
system services firewall policy delete -policy mgmt -service telnet -action allow
system services firewall policy create -policy mgmt -service telnet -action deny -ip-list
0.0.0.0/0
security ssl modify –vserver Infrastructure –certificate
<<var_security_cert_vserver_common_name>> -enabled true
y
security ssl modify -vserver <<var_clustername>> -certificate
<<var_security_cert_cluster_common_name>> -enabled true
y
security ssl modify -vserver <<var_node01>> -certificate
<<var_security_cert_node01_common_name>> -enabled true
y
security ssl modify -vserver <<var_node02>> -certificate
<<var_security_cert_node02_common_name>> -enabled true
y
set –privilege admin
It is normal for some of these commands to return an error message stating that the entry does not exist.
Storage Virtual Machine Configuration
The following table lists the storage virtual machine configuration.

Cluster Name      SVM Name        Type  Allowed Protocols  Name Mapping Switch  Name Server Switch
R4E08NA3250-CL    CIFS            data  cifs               file                 file, nis
R4E08NA3250-CL    Hosted_Shared   data  nfs                file                 file, nis
R4E08NA3250-CL    Hosted_VDI      data  nfs                file                 file, nis
R4E08NA3250-CL    Infrastructure  data  nfs, cifs          file                 file, nis
R4E08NA3250-CL    SanBoot         data  fcp                file                 file, nis

The following table lists the storage virtual machine storage configuration.

Cluster Name      SVM Name        Root Volume          Security Style  Language  Root Aggregate       Aggregate List
R4E08NA3250-CL    CIFS            CIFS_root            ntfs            en_us     DATA_R4E08NA3250_02  DATA_R4E08NA3250_02
R4E08NA3250-CL    Hosted_Shared   Hosted_Shared_root   unix            en_us     DATA_R4E08NA3250_02  DATA_R4E08NA3250_02
R4E08NA3250-CL    Hosted_VDI      Hosted_VDI_root      unix            en_us     DATA_R4E08NA3250_02  DATA_R4E08NA3250_02
R4E08NA3250-CL    Infrastructure  Infrastructure_root  ntfs            en_us     DATA_R4E08NA3250_01  DATA_R4E08NA3250_01, DATA_R4E08NA3250_02
R4E08NA3250-CL    SanBoot         SanBoot_root         unix            en_us     DATA_R4E08NA3250_01  DATA_R4E08NA3250_01
Network Configuration
The storage system supports physical network interfaces, such as Ethernet, Converged Network Adapter
(CNA) and virtual network interfaces, such as interface groups, and virtual local area networks
(VLANs). Physical and/or virtual network interfaces have user definable attributes such as MTU, speed,
and flow control.
Logical Network Interfaces (LIFs) are virtual network interfaces associated with SVMs and are assigned
to failover groups, which are made up of physical ports, interface groups and/or VLANs. A LIF is an IP
address with associated characteristics, such as a role, a home port, a home node, a routing group, a list
of ports to fail over to and a firewall policy.
IPv4 and IPv6 are supported on all storage platforms starting with clustered Data ONTAP 8.2.
The storage system supports, or may support, the following types of physical network interfaces, depending on the platform:
• 10/100/1000 Ethernet
• 10 Gigabit Ethernet
• CNA / FCoE
Most storage system models have a physical network interface named e0M. It is a low-bandwidth
interface of 100Mbps and is used only for Data ONTAP management activities, such as running a Telnet,
SSH or RSH session. This physical Ethernet port, e0M, is also shared by the storage controllers'
out-of-band remote management port (platform dependent) which is also known as one of following:
baseboard management controller (BMC), remote LAN management (RLM) or service processor (SP).
Physical Interfaces
Ports are either physical ports (NICs), or virtualized ports such as interface groups or VLANs. Interface
groups treat several physical ports as a single port, while VLANs subdivide a physical port into multiple
separate virtual ports.
Network Port Settings
Network ports can have roles that define their purpose and their default behavior. Port roles limit the
types of LIFs that can be bound to a port. Network ports can have four roles: node management, cluster,
data, and intercluster.
Table 15    Network Port Settings

Node Name          Port Name  Link Status  Port Type  Role       MTU Size  Flow Control (Admin/Oper)
R4E08NA3250-CL-01  a0a        up           if_group   data       9000      full/-
R4E08NA3250-CL-01  a0a-804    up           vlan       data       9000      full/-
R4E08NA3250-CL-01  a0b        up           if_group   data       1500      full/-
R4E08NA3250-CL-01  a0b-803    up           vlan       data       1500      full/-
R4E08NA3250-CL-01  e0a        up           physical   data       1500      full/none
R4E08NA3250-CL-01  e0b        up           physical   data       1500      full/none
R4E08NA3250-CL-01  e0M        up           physical   node_mgmt  1500      full/full
R4E08NA3250-CL-01  e1a        up           physical   cluster    9000      none/none
R4E08NA3250-CL-01  e1b        up           physical   data       9000      none/none
R4E08NA3250-CL-01  e2a        up           physical   cluster    9000      none/none
R4E08NA3250-CL-01  e2b        up           physical   data       9000      none/none
R4E08NA3250-CL-01  e3a        up           physical   data       1500      none/none
R4E08NA3250-CL-01  e3b        down         physical   data       9000      none/none
R4E08NA3250-CL-01  e4a        up           physical   data       1500      none/none
R4E08NA3250-CL-01  e4b        down         physical   data       9000      none/none
R4E08NA3250-CL-02  a0a        up           if_group   data       9000      full/-
R4E08NA3250-CL-02  a0a-804    up           vlan       data       9000      full/-
R4E08NA3250-CL-02  a0b        up           if_group   data       1500      full/-
R4E08NA3250-CL-02  a0b-803    up           vlan       data       1500      full/-
R4E08NA3250-CL-02  e0a        up           physical   data       1500      full/none
R4E08NA3250-CL-02  e0b        up           physical   data       1500      full/none
R4E08NA3250-CL-02  e0M        up           physical   node_mgmt  1500      full/full
R4E08NA3250-CL-02  e1a        up           physical   cluster    9000      none/none
R4E08NA3250-CL-02  e1b        up           physical   data       9000      none/none
R4E08NA3250-CL-02  e2a        up           physical   cluster    9000      none/none
R4E08NA3250-CL-02  e2b        up           physical   data       9000      none/none
R4E08NA3250-CL-02  e3a        up           physical   data       1500      none/none
R4E08NA3250-CL-02  e3b        down         physical   data       9000      none/none
R4E08NA3250-CL-02  e4a        up           physical   data       1500      none/none
R4E08NA3250-CL-02  e4b        down         physical   data       9000      none/none
Jumbo Frames in Clustered Data ONTAP
1. To configure a clustered Data ONTAP network port to use jumbo frames (which usually have an MTU of 9,000 bytes), run the following commands from the cluster shell:
network port modify -node <<var_node01>> -port i0a-<<var_nfs_vlan_id>> -mtu 9000
WARNING: Changing the network port settings will cause a several second interruption in carrier.
Do you want to continue? {y|n}: y
network port modify -node <<var_node02>> -port i0a-<<var_nfs_vlan_id>> -mtu 9000
WARNING: Changing the network port settings will cause a several second interruption in carrier.
Do you want to continue? {y|n}: y
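The new MTU setting can be confirmed on both nodes with a command such as the following (illustrative):
network port show -node <<var_node01>> -port i0a-<<var_nfs_vlan_id>> -fields mtu
network port show -node <<var_node02>> -port i0a-<<var_nfs_vlan_id>> -fields mtu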
Network Port Interface Group Settings
An interface group is a port aggregate containing two or more physical ports that acts as a single trunk
port. Expanded capabilities include increased resiliency, increased availability, and load distribution.
You can create three different types of interface groups on your storage system: single-mode, static
multimode, and dynamic multimode interface groups.
Each interface group provides different levels of fault tolerance. Multimode interface groups provide
methods for load balancing network traffic.
IFGRP LACP in Clustered Data ONTAP
This type of interface group requires two or more Ethernet interfaces and a switch that supports LACP.
Therefore, make sure that the switch is configured properly.
1. Run the following commands on the command line to create interface groups (ifgrps).
ifgrp create -node <<var_node01>> -ifgrp i0a -distr-func port -mode multimode_lacp
network port ifgrp add-port -node <<var_node01>> -ifgrp i0a -port e3a
network port ifgrp add-port -node <<var_node01>> -ifgrp i0a -port e4a
ifgrp create -node <<var_node02>> -ifgrp i0a -distr-func port -mode multimode_lacp
network port ifgrp add-port -node <<var_node02>> -ifgrp i0a -port e3a
network port ifgrp add-port -node <<var_node02>> -ifgrp i0a -port e4a
Note
All interfaces must be in the down status before being added to an interface group. The interface group name must follow the standard naming convention of x0x.
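Interface group membership, mode, and distribution function can then be verified with (illustrative):
network port ifgrp show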
Table 16    Network Port Interface Group Settings

Node Name          Ifgrp Name  Mode            Distribution Function  Ports
R4E08NA3250-CL-01  a0a         multimode_lacp  ip                     e1b, e2b
R4E08NA3250-CL-01  a0b         multimode_lacp  ip                     e3a, e4a
R4E08NA3250-CL-02  a0a         multimode_lacp  ip                     e1b, e2b
R4E08NA3250-CL-02  a0b         multimode_lacp  ip                     e3a, e4a
Network Port VLAN Settings
VLANs provide logical segmentation of networks by creating separate broadcast domains. A VLAN can
span multiple physical network segments. The end stations belonging to a VLAN are related by function
or application.
VLAN in Clustered Data ONTAP
1. Create NFS VLANs.
network port vlan create -node <<var_node01>> -vlan-name i0a-<<var_nfs_vlan_id>>
network port vlan create -node <<var_node02>> -vlan-name i0a-<<var_nfs_vlan_id>>
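The resulting VLAN ports can be listed with (illustrative):
network port vlan show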
Table 17    Network Port VLAN Settings

Node Name          Interface Name  VLAN ID  Parent Interface  GVRP Enabled
R4E08NA3250-CL-01  a0a-804         804      a0a               -
R4E08NA3250-CL-01  a0b-803         803      a0b               -
R4E08NA3250-CL-02  a0a-804         804      a0a               -
R4E08NA3250-CL-02  a0b-803         803      a0b               -
Logical Interfaces
A LIF (logical interface) is an IP address with associated characteristics, such as a role, a home port, a
home node, a routing group, a list of ports to fail over to, and a firewall policy. You can configure LIFs
on ports over which the cluster sends and receives communications over the network.
LIFs can be hosted on the following ports:
• Physical ports that are not part of interface groups
• Interface groups
• VLANs
• Physical ports or interface groups that host VLANs
When a SAN protocol such as FC is configured on a LIF, the LIF is associated with a WWPN.
A LIF role determines the kind of traffic that is supported over the LIF, along with the failover rules that apply and the firewall restrictions that are in place. A LIF can have any one of five roles: node management, cluster management, cluster, inter-cluster, and data.
• Node-management LIF
The LIF that provides a dedicated IP address for managing a particular node and gets created at the
time of creating or joining the cluster. These LIFs are used for system maintenance, for example,
when a node becomes inaccessible from the cluster. Node-management LIFs can be configured on
either node-management or data ports.
The node-management LIF can fail over to other data or node-management ports on the same node.
Sessions established to SNMP and NTP servers use the node-management LIF. AutoSupport
requests are sent from the node-management LIF.
• Cluster-management LIF
The LIF that provides a single management interface for the entire cluster. Cluster-management
LIFs can be configured on node-management or data ports.
The LIF can fail over to any node-management or data port in the cluster. It cannot fail over to
cluster or inter-cluster ports.
• Cluster LIF
The LIF that is used for intra-cluster traffic. Cluster LIFs can be configured only on cluster ports.
Note
Cluster LIFs need not be created on 10-GbE network ports in FAS2040 and FAS2220 platforms.
These interfaces can fail over between cluster ports on the same node, but they cannot be migrated
or failed over to a remote node. When a new node joins a cluster, IP addresses are generated
automatically. However, if you want to assign IP addresses manually to the cluster LIFs, you must
make sure that the new IP addresses are in the same subnet range as the existing cluster LIFs.
• Inter-cluster LIF
The LIF that is used for cross-cluster communication, backup, and replication. Inter-cluster LIFs can
be configured on data ports or inter-cluster ports. You must create an inter-cluster LIF on each node
in the cluster before a cluster peering relationship can be established.
These LIFs can fail over to data or inter-cluster ports on the same node, but they cannot be migrated
or failed over to another node in the cluster.
• Data LIF (NAS)
The LIF that is associated with an SVM and is used for communicating with clients. Data LIFs can
be configured only on data ports.
You can have multiple data LIFs on a port. These interfaces can migrate or fail over throughout the
cluster. You can modify a data LIF to serve as a Vserver management LIF by modifying its firewall
policy to mgmt.
Sessions established to NIS, LDAP, Active Directory, WINS, and DNS servers use data LIFs.
LIF failover refers to the automatic migration of a LIF in response to a link failure on the LIF's current
network port. When such a port failure is detected, the LIF is migrated to a working port.
A failover group contains a set of network ports (physical, VLANs, and interface groups) on one or more
nodes. A LIF can subscribe to a failover group. The network ports that are present in the failover group
define the failover targets for the LIF.
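As a sketch of this relationship in Data ONTAP 8.2 syntax, a user-defined failover group covering the NFS VLAN ports on both nodes, matching the nfs failover group referenced by the NFS LIF below, could be created as follows:
network interface failover-groups create -failover-group nfs -node <<var_node01>> -port a0a-804
network interface failover-groups create -failover-group nfs -node <<var_node02>> -port a0a-804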
NFS LIF in Clustered Data ONTAP
1. Create an NFS logical interface (LIF).
network interface create -vserver Infrastructure -lif Infra_NFS -role data -data-protocol nfs -home-node <<var_node01>> -home-port a0a-804 -address <<var_node01_nfs_lif_ip>> -netmask <<var_node01_nfs_lif_mask>> -status-admin up -failover-policy nextavail -firewall-policy data -auto-revert true -use-failover-group enabled -failover-group nfs
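Once the LIF is created, its address, home port, and administrative/operational status can be confirmed with a show command; a minimal sketch using the names above:
network interface show -vserver Infrastructure -lif Infra_NFS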
FCP LIF in Clustered Data ONTAP
1. Create four FCP LIFs, two on each node.
network interface create -vserver SanBoot -lif R4E08NA3250-CL-01_fc_lif_1 -role data -data-protocol fcp -home-node <<var_node01>> -home-port 0c
network interface create -vserver SanBoot -lif R4E08NA3250-CL-01_fc_lif_2 -role data -data-protocol fcp -home-node <<var_node01>> -home-port 0d
network interface create -vserver SanBoot -lif R4E08NA3250-CL-02_fc_lif_1 -role data -data-protocol fcp -home-node <<var_node02>> -home-port 0c
network interface create -vserver SanBoot -lif R4E08NA3250-CL-02_fc_lif_2 -role data -data-protocol fcp -home-node <<var_node02>> -home-port 0d
All Network Logical Interfaces
This section pertains to LIFs with all the possible roles: node-management, cluster-management, cluster,
inter-cluster and data.
Table 18    All Network LIF Settings

All LIFs belong to cluster R4E08NA3250-CL.

SVM Name           Interface Name              Data Protocols  IP Address          Routing Group      Role          Status (Admin/Oper)  Firewall Policy
CIFS               CIFS_AOSQL                  cifs            10.218.241.103/24   d10.218.241.0/24   data          up/up                data
CIFS               CIFS_User_Profiles          cifs            10.218.241.101/24   d10.218.241.0/24   data          up/up                data
CIFS               CIFS_vDisk                  cifs            10.218.241.100/24   d10.218.241.0/24   data          up/up                data
Hosted_Shared      Hosted_Shared_WS_00         nfs             192.168.11.18/25    d192.168.11.0/25   data          up/up                data
Hosted_Shared      Hosted_Shared_WS_01         nfs             192.168.11.19/25    d192.168.11.0/25   data          up/up                data
Hosted_Shared      Hosted_Shared_WS_02         nfs             192.168.11.20/25    d192.168.11.0/25   data          up/up                data
Hosted_Shared      Hosted_Shared_WS_03         nfs             192.168.11.12/25    d192.168.11.0/25   data          up/up                data
Hosted_VDI         Hosted_VDI_WS               nfs             192.168.11.10/25    d192.168.11.0/25   data          up/up                data
Infrastructure     Infra_CIFS                  cifs            10.218.241.104/24   d10.218.241.0/24   data          up/up                data
Infrastructure     Infra_NFS                   nfs             192.168.11.11/25    d192.168.11.0/25   data          up/up                data
R4E08NA3250-CL     cluster_mgmt                none            10.218.253.2/27     c10.218.253.0/27   cluster_mgmt  up/up                mgmt
R4E08NA3250-CL-01  clus1                       none            169.254.114.114/16  c169.254.0.0/16    cluster       up/up                cluster
R4E08NA3250-CL-01  clus2                       none            169.254.138.139/16  c169.254.0.0/16    cluster       up/up                cluster
R4E08NA3250-CL-01  mgmt1                       none            10.218.253.3/27     n10.218.253.0/27   node_mgmt     up/up                mgmt
R4E08NA3250-CL-02  clus1                       none            169.254.94.220/16   c169.254.0.0/16    cluster       up/up                cluster
R4E08NA3250-CL-02  clus2                       none            169.254.139.145/16  c169.254.0.0/16    cluster       up/up                cluster
R4E08NA3250-CL-02  mgmt1                       none            10.218.253.4/27     n10.218.253.0/27   node_mgmt     up/up                mgmt
SanBoot            R4E08NA3250-CL-01_fc_lif_1  fcp             -                   -                  data          up/up                -
SanBoot            R4E08NA3250-CL-01_fc_lif_2  fcp             -                   -                  data          up/up                -
SanBoot            R4E08NA3250-CL-02_fc_lif_1  fcp             -                   -                  data          up/up                -
SanBoot            R4E08NA3250-CL-02_fc_lif_2  fcp             -                   -                  data          up/up                -
Network Failover Groups
Failover groups for LIFs can be system defined or user defined. Additionally, a failover group called
clusterwide exists and is maintained automatically.
Failover groups are of the following types:
•
System-defined failover groups: Failover groups that automatically manage LIF failover targets on a per-LIF basis. These failover groups contain data ports from a maximum of two nodes: all the data ports on the home node and all the data ports on another node in the cluster, for redundancy.
•
User-defined failover groups: Customized failover groups that can be created when the system-defined failover groups do not meet your requirements. For example, you can create a failover group consisting of all 10-GbE ports, which enables LIFs to fail over only to the high-bandwidth ports.
•
Clusterwide failover group: Failover group that consists of all the data ports in the cluster and defines the default failover group for the cluster-management LIF.
Failover Groups Management in Clustered Data ONTAP
1.
Create a management port failover group.
network interface failover-groups create -failover-group mgmt -node <<var_node01>> -port e0a
network interface failover-groups create -failover-group mgmt -node <<var_node02>> -port e0a
Assign Management Failover Group to Cluster Management LIF
1. Assign the management port failover group to the cluster management LIF.
network interface modify -vserver <<var_clustername>> -lif cluster_mgmt -failover-group mgmt
Failover Groups Node Management in Clustered Data ONTAP
1. Create the node management port failover groups.
network interface failover-groups create -failover-group node-mgmt01 -node <<var_node01>> -port e0b
network interface failover-groups create -failover-group node-mgmt01 -node <<var_node01>> -port e0M
network interface failover-groups create -failover-group node-mgmt02 -node <<var_node02>> -port e0b
network interface failover-groups create -failover-group node-mgmt02 -node <<var_node02>> -port e0M
Assign Node Management Failover Groups to Node Management LIFs
1. Assign the node management port failover groups to the node management LIFs.
network interface modify -vserver <<var_node01>> -lif mgmt1 -auto-revert true -use-failover-group enabled -failover-group node-mgmt01
network interface modify -vserver <<var_node02>> -lif mgmt1 -auto-revert true -use-failover-group enabled -failover-group node-mgmt02
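As a sanity check, the failover group and auto-revert setting of the node management LIFs can be displayed in one pass; a sketch assuming clustered Data ONTAP 8.2 field names:
network interface show -lif mgmt1 -fields failover-group,auto-revert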
Table 19    Network Failover Groups

Cluster Name: R4E08NA3250-CL

Failover Group  Members
CIFS            R4E08NA3250-CL-01: a0b-803
                R4E08NA3250-CL-02: a0b-803
clusterwide     R4E08NA3250-CL-01: a0a, a0b, e0a, e0b, e3b, e4b
                R4E08NA3250-CL-02: a0a, a0b, e0a, e0b, e3b, e4b
MGMT            R4E08NA3250-CL-01: e0a
                R4E08NA3250-CL-02: e0a
NFS             R4E08NA3250-CL-01: a0a-804
                R4E08NA3250-CL-02: a0a-804
Network LIF Failover Settings
The following table shows all network LIF failover settings.
All LIFs belong to cluster R4E08NA3250-CL.

SVM Name           Interface Name              Home Node          Home Port  Failover Group  Auto Revert
CIFS               CIFS_AOSQL                  R4E08NA3250-CL-02  a0b-803    CIFS            False
CIFS               CIFS_User_Profiles          R4E08NA3250-CL-02  a0b-803    CIFS            False
CIFS               CIFS_vDisk                  R4E08NA3250-CL-02  a0b-803    CIFS            False
Hosted_Shared      Hosted_Shared_WS_00         R4E08NA3250-CL-01  a0a-804    NFS             False
Hosted_Shared      Hosted_Shared_WS_01         R4E08NA3250-CL-01  a0a-804    NFS             False
Hosted_Shared      Hosted_Shared_WS_02         R4E08NA3250-CL-02  a0a-804    NFS             False
Hosted_Shared      Hosted_Shared_WS_03         R4E08NA3250-CL-02  a0a-804    NFS             False
Hosted_VDI         Hosted_VDI_WS               R4E08NA3250-CL-02  a0a-804    NFS             False
Infrastructure     Infra_CIFS                  R4E08NA3250-CL-02  a0b-803    CIFS            False
Infrastructure     Infra_NFS                   R4E08NA3250-CL-02  a0a-804    NFS             False
R4E08NA3250-CL     cluster_mgmt                R4E08NA3250-CL-01  e0a        MGMT            False
R4E08NA3250-CL-01  clus1                       R4E08NA3250-CL-01  e1a        system-defined  True
R4E08NA3250-CL-01  clus2                       R4E08NA3250-CL-01  e2a        system-defined  True
R4E08NA3250-CL-01  mgmt1                       R4E08NA3250-CL-01  e0a        system-defined  True
R4E08NA3250-CL-02  clus1                       R4E08NA3250-CL-02  e1a        system-defined  True
R4E08NA3250-CL-02  clus2                       R4E08NA3250-CL-02  e2a        system-defined  True
R4E08NA3250-CL-02  mgmt1                       R4E08NA3250-CL-02  e0a        system-defined  True
SanBoot            R4E08NA3250-CL-01_fc_lif_1  R4E08NA3250-CL-01  0c         disabled        False
SanBoot            R4E08NA3250-CL-01_fc_lif_2  R4E08NA3250-CL-01  0d         disabled        False
SanBoot            R4E08NA3250-CL-02_fc_lif_1  R4E08NA3250-CL-02  0c         disabled        False
SanBoot            R4E08NA3250-CL-02_fc_lif_2  R4E08NA3250-CL-02  0d         disabled        False
Installing and Configuring Citrix XenServer
Overview of Citrix XenServer
Citrix XenServer is an industry-leading, value-focused open source virtualization platform for managing cloud, server, and desktop virtual infrastructures. Organizations of any size can install XenServer in less than ten minutes to virtualize even the most demanding workloads and automate management processes, increasing IT flexibility and agility and lowering costs. With a rich set of management and automation capabilities, a simple and affordable pricing model, and optimizations for virtual desktop and cloud computing, XenServer is designed to optimize private datacenters and clouds today and in the future.
Install Citrix XenServer 6.2 SP1
Note
Installing XenServer will overwrite data on any hard drives that you select to use for the installation.
Back up data that you wish to preserve before proceeding.
This installation covers the virtual media installation method using Cisco UCS virtual media.
1.
Boot the computer from the installation CD.
Tip
Throughout the installation, quickly advance to the next screen by pressing F12. Use Tab to move
between elements, and Space or Enter to select. For general help, press F1.
Note
If a System Hardware warning screen is displayed and you suspect that hardware virtualization assist
support is available on your system, check the support site of your hardware manufacturer for BIOS
upgrades.
2.
Open the Advanced installation screen by pressing F2.
3.
Type multipath and press Enter.
4.
The Welcome to XenServer Setup screen is displayed.
5.
The XenServer End User License Agreement (EULA) is displayed. Choose Accept EULA to
proceed.
6.
Choose Perform clean installation and OK to proceed.
7.
If you have multiple local hard disks, choose a Primary Disk for the installation. Select OK.
8.
Choose which disk(s) you would like to use for virtual machine storage.
9.
Select the installation source Local media and then choose OK to proceed.
10. Indicate if you want to verify the integrity of the installation media. If you select Verify installation
source, the MD5 checksum of the packages is calculated and checked against the known value.
Verification may take some time. Make your selection and choose OK to proceed.
11. Set and confirm a root password, which XenCenter will use to connect to the XenServer host. You
will also use this password (with username "root") to log into xsconsole, the system configuration
console.
12. Set up the primary management interface that will be used to connect to XenCenter. If your
computer has multiple NICs, select the NIC which you wish to use for management. Choose OK to
proceed.
13. Configure the Management NIC IP address by choosing Automatic configuration (DHCP) to configure the NIC using DHCP, or Static configuration to manually configure the NIC.
14. Enter the desired hostname for the server in the field provided. Select OK to proceed.
15. Select your time zone - the geographical area and then city. You can type the first letter of the desired
locale to jump to the first entry that begins with this letter. Choose OK to proceed.
16. Specify how you would like the server to determine local time: using NTP or manual time entry.
Make your selection, and choose OK to proceed.
17. If using NTP, either select NTP is configured by my DHCP server to have DHCP set the time server
or enter at least one NTP server name or IP address in the fields below. Choose OK.
18. Select Install XenServer.
After the installation is complete and the server has fully booted, you need to update the multipath.conf configuration file and update the initrd.
Log in to the XenServer 6.2 SP1 host and go to the Local Command Shell.
Back up the original multipath.conf file:
cp /etc/multipath.conf /etc/multipath.conf.xs6.2orig
Copy the initrd image from /boot to a temporary directory and decompress it; all of the following work is done in that directory:
cd /tmp
cp /boot/initrd-2.6.32.43-0.4.1.xs1.8.0.847.170785xen.img initrd-2.6.32.43-0.4.1.xs1.8.0.847.170785xen.img.gz
gunzip -v initrd-2.6.32.43-0.4.1.xs1.8.0.847.170785xen.img.gz
Extract the content of the cpio archive:
mkdir initrdupdate
cd initrdupdate/
cpio -i < ../initrd-2.6.32.43-0.4.1.xs1.8.0.847.170785xen.img
Update etc/multipath.conf:
echo 'defaults {
	flush_on_last_del	no
	dev_loss_tmo		30
	fast_io_fail_tmo	off
}
blacklist {
	devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
	devnode "^hd[a-z]"
	devnode "^cciss.*"
}
devices {
	device {
		vendor			"NETAPP"
		product			"LUN.*"
		prio			"alua"
		hardware_handler	"1 alua"
	}
}
' > etc/multipath.conf
Pack the modified files back into cpio 'newc' format:
find ./ | cpio -H newc -o > ../initrd-2.6.32.43-0.4.1.xs1.8.0.847.170785xenNetApp.img
Zip the archive file
cd ..
gzip initrd-2.6.32.43-0.4.1.xs1.8.0.847.170785xenNetApp.img
Configure boot to use the new initrd:
cp initrd-2.6.32.43-0.4.1.xs1.8.0.847.170785xenNetApp.img.gz /boot/initrd-2.6.32.43-0.4.1.xs1.8.0.847.170785xenNetApp.img
cd /boot
ln -sf initrd-2.6.32.43-0.4.1.xs1.8.0.847.170785xenNetApp.img initrd-2.6-xen.img
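After the host is rebooted with the new initrd, the NetApp LUN paths can be checked from the control domain console; a minimal sketch:
multipath -ll
Each NETAPP LUN should report its paths using the "1 alua" hardware handler configured above.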
Install XenCenter
XenCenter must be installed on a remote Windows machine that can connect to the XenServer host through your network. The .NET Framework version 3.5 must also be installed on this workstation.
The XenCenter installation media is bundled with the XenServer installation media. You can also
download the latest version of XenCenter from www.citrix.com/xenserver.
1. Before installing XenCenter, be sure to uninstall any previous version.
2. Launch the installer.
If installing from a XenServer installation CD:
a. Insert the CD into the DVD drive of the computer on which you want to run XenCenter.
b. Open the client_install folder on the CD. Double-click XenCenter.msi to begin the installation.
3. Follow the Setup wizard, which allows you to modify the default destination folder and then to
install XenCenter.
To connect XenCenter to the XenServer host:
1. Launch XenCenter. The program opens to the Home tab.
2. Click the Add New Server icon.
3. Enter the IP address of the XenServer host in the Server field. Type the root username and
password that you set during XenServer installation. Click Add.
4. The first time you add a new host, the Save and Restore Connection State dialog box appears.
This enables you to set your preferences for storing your host connection information and
automatically restoring host connections.
If you later need to change your preferences, you can do so using XenCenter or the Windows Registry Editor. To do so in XenCenter: from the main menu, select Tools and then Options. The Options dialog box opens. Select the Save and Restore tab and set your preferences. Click OK to save your changes. To do so using the Windows Registry Editor, navigate to the key HKEY_LOCAL_MACHINE\Software\Citrix\XenCenter (if you installed XenCenter for use by all users) and add a key named AllowCredentialSave with the string value true or false.
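For example, to disable credential saving for all users from an elevated command prompt, a reg add along the following lines can be used (a sketch; the value name and string data follow the description above):
reg add "HKLM\Software\Citrix\XenCenter" /v AllowCredentialSave /t REG_SZ /d false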
Upgrade XenServer 6.2 to XenServer 6.2 SP1
Note
•
Download the update file to a known location.
•
Extract the xsupdate file from the zip.
•
Upload the xsupdate file to the Pool Master by entering the following command, where hostname is the Pool Master's IP address or DNS name:
xe patch-upload -s <hostname> -u root -pw <password> file-name=<path_to_update_file>\XS62ESP1.xsupdate
•
XenServer assigns the update file a UUID, which this command prints. Note the UUID.
0850b186-4d47-11e3-a720-001b2151a503
•
Apply the Service Pack to all hosts in the pool, specifying the UUID of the Service Pack:
xe -s <hostname> -u root -pw <password> patch-pool-apply uuid=0850b186-4d47-11e3-a720-001b2151a503
•
Verify that the update was applied by using the patch-list command.
xe patch-list -s <hostname> -u root -pw <password> name-label=XS62ESP1
If the update is successful, the hosts field will contain the UUIDs of the hosts this patch was successfully
applied to. This should be a complete list of all hosts in the pool.
To verify in XenCenter that the update has been applied correctly, select the Pool, and then click the
General tab. This displays the Pool properties. In the Updates section, ensure that the update is listed as
Fully Applied.
The Service Pack is applied to all hosts in the pool, but it will not take effect until each host has been rebooted. Reboot the hosts sequentially, starting with the Pool Master. For each host, migrate the VMs that you wish to keep running, and shut down the remaining VMs before rebooting the host.
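The per-host sequence can also be driven from the xe CLI. A sketch, assuming the host UUID has already been looked up with xe host-list and that any VMs not being migrated are shut down first:
xe host-disable uuid=<host_uuid>
xe host-evacuate uuid=<host_uuid>
xe host-reboot uuid=<host_uuid>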
Pool Configuration
The environment is set up as follows:
DC-INF (2 B200M3s)
DC-HVD (4 B200M3s)
DC-HSD (8 B200M3s)
CL-Pool1 (14 B250M2)
The DC-INF cluster consists of 2 B200s and was used to host all the virtual servers for the XenDesktop
Infrastructure.
The following clusters were used to host desktop models deployed in this solution:
DC-HVD (Hosted Virtual Desktops)
DC-HSD (Hosted Shared Desktops)
CL-Pool1, the client launcher pool, was used to host the Login VSI launchers, the launcher Provisioning Services (PVS) server, and the Login VSI console. A pool of 14 XenServer 6.2 SP1 hosts was configured for this purpose. This pool is not required to implement the solution design.
All XenServer hosts were set up with the following segregated networks to handle their own network traffic: Management, VDA, Server Infrastructure, and Storage.
Installing and Configuring Citrix XenDesktop 7.1
Overview of Citrix XenDesktop
Citrix XenDesktop delivers Windows apps and desktops as secure mobile services. With XenDesktop,
IT can mobilize the business, while reducing costs by centralizing control and security for intellectual
property. Incorporating the full power of XenApp, XenDesktop can deliver full desktops or just the apps
to any device. HDX technologies enable XenDesktop to deliver a native touch-enabled look-and-feel
that is optimized for the type of device, as well as the network.
To prepare the required infrastructure to support the Citrix XenDesktop Hosted Virtual Desktop and
Hosted Shared Desktop environment, the following process was followed.
Four XenDesktop Delivery Controllers were virtualized on XenServer 6.2 SP1 hosted on Cisco B200
M3 infrastructure blades.
Desktop Studio is the main administration console where hosts, machine catalogs, desktop groups, and applications are created and managed. It is also where HDX policies are configured and applied to the site. Desktop Studio is a Microsoft Management Console snap-in and fully supports PowerShell.
Pre-requisites
Please go to the following link for a list of pre-requisites for XenDesktop 7.1:
http://support.citrix.com/proddocs/topic/xendesktop-71/cds-system-requirements-71.html
For this test environment, we used a three-node Microsoft SQL Server 2012 cluster with AlwaysOn. See the referenced documentation for setup of Microsoft SQL Server 2012 AlwaysOn:
http://msdn.microsoft.com/en-us/library/jj215886.aspx
In the XenDesktop 7.1 Controller setup, we go through the process of pointing to a database; in this case, we used the Microsoft SQL Server AlwaysOn cluster listener information to create a database for the XenDesktop 7.1 Controller.
Install Citrix XenDesktop, Citrix XenDesktop Studio, and Citrix License Server
Note
The steps identified below show the process used when installing XenDesktop, XenDesktop Studio, and optional components using the graphical interface.
1.
Start the XenDesktop installation wizard. Click Start.
2.
Click Delivery Controller. Click Next.
3.
Select Accept the Software license agreement, click Next.
4.
Select the components to be installed: Delivery Controller, Studio, Director and License Server.
Click Next.
Note
Desktop Director was installed on only the first XenDesktop Controller.
5.
Verify that "Install SQL Server Express" and "Install Windows Remote Assistance" is NOT selected
in the Features page. Click Next.
6.
Select "Automatically" to configure Firewall settings. Click Next.
7.
Click "Install" in the Summary page to continue installation.
Configuring the Citrix License Server and Licenses
1.
Open the License Server Configuration Tool.
2.
Accept the Default ports and provide the password for the Admin account.
3.
Click OK.
4.
Go to Start > All Programs > Citrix > Management Consoles and click License Administration
Console.
5.
Click the Administration button.
6.
Enter the Admin credentials.
7.
Click Submit.
8.
Click the Vendor Daemon Configuration tab on the left part of the screen.
9.
Click Import License.
10. Click Browse to locate the license file you are applying to the server.
11. Select the file and click Open.
12. Click Import License.
13. Validate that the import was successful.
14. Click OK.
15. Click the Dashboard button.
16. Validate that the necessary licenses have been installed.
Create SQL Database for Citrix XenDesktop
1.
Open Desktop Studio: go to Start > All Programs > Citrix > Desktop Studio. Select Get started, Create a Site.
2.
Select Configure the Site and start delivering applications and Desktops to users. Enter a name for
the site. Click Next.
3.
Enter the SQL Always ON Listener IP address or name. Click Next to create a new Database.
4.
Enter the licensing server name. In this instance it would be localhost:27000.
5.
Select Use an existing license and select Citrix XenDesktop Platinum.
6.
Enter the XenServer Pool Master IP address or name, username, and password for the DC-HVD or DC-HSD pool (choose one, since you will configure the second pool later). Enter a name associated with the pool name. Select "Other Tools" for PVS streamed machines. Click Next.
7.
Select No in "Do you want to add an App-V publishing server to this Deployment?". Click Next.
8.
Click Finish to complete the installation.
Configure the Citrix XenDesktop Site Hosts and Storage
1.
In Citrix Studio, browse to Hosting (under Configuration).
2.
On the right panel, click "Add a Connection and Resources".
3.
Select Create a New Connection. Name the connection. Enter the DC-HSD or DC-HVD pool Master
address, username and password. Select Other Tools. Click Next.
4.
Select All Scope Objects. Click Next.
5.
Click Finish.
6.
Create additional hosts per Storage Repository and Network.
7.
Click Add a connection on the right pane.
8.
Select Use an existing connection, and select the connection created in the steps above from the drop-down. Click Next.
9.
Select the Network designated for HVD or HSD traffic. Click Next.
10. Select the Storage designated for Hosting your HVD or HSD virtual machines.
Note
For HSD, the storage was divided into four volumes (see the NetApp NFS volumes for HSD), so you will need to create four different connections and resources, one per volume. For HVD, create one connection. Click Next.
11. Click Finish.
Note
Repeat this wizard for each NFS volume.
Configure Citrix XenDesktop HDX Policies
When testing with Login VSI, a XenDesktop policy should be created to disable client printer mapping, client drive mapping, and Flash redirection, which are enabled by default. HDX policies are configured and applied in Citrix Desktop Studio.
1.
Open Desktop Studio and click Policy.
2.
On the right pane, click Create Policy.
3.
From the drop-down menu, select Printing. Click Select for Client printer redirection.
4.
A window will pop up. Select Prohibited. Click OK.
5.
Back in the policies menu, select Auto-create client printers.
6.
From the drop-down, select Do not auto-create client printers. Click OK.
7.
Click Next on the Policy menu.
8.
Select Assign to selected user and machine objects. Click Assign to user or group.
9.
A window will pop up; select mode Allow and click Browse. Assign Domain Users and click OK.
10. Enter a name for the policy, and click Finish.
11. Click Create Policy. On the drop-down, select Adobe Flash Delivery. Click Select on Flash
Acceleration.
12. A window will pop up. Click Disabled and OK.
13. Click Next.
14. Select Assign to selected user and machine objects. Click Assign on User or Group.
15. Select mode Allow and click Browse. Assign Domain users and click OK.
16. Enter a name for the policy, and click Finish.
17. Click Create Policy. On the drop-down menu, select File Redirection. Click Select on Auto connect client drives.
18. Select Disabled. Click OK.
19. Click Select on Client drive redirection.
20. Select Prohibited. Click OK.
21. Click Next.
22. Select Assign to selected user and machine objects. Click Assign on User or Group.
23. Select mode Allow, and click Browse. Assign Domain users and click OK.
24. Enter a name for the policy, and click Finish.
Configure the Citrix XenDesktop Desktop Group and Options
Note
PVS XenDesktop Wizard is used for Catalog and VM creation.
1.
Browse to Citrix Studio, click Delivery Groups. On the right pane click Create Delivery Group.
2.
Click Do not show this again and click Next.
3.
Select a catalog, and enter a number of machines to add. Click Next.
4.
Select Use the machines to deliver Desktops. Click Next.
5.
Click Add users. Click Browse and enter domain users. Click OK and Next.
6.
Select Manually, using a StoreFront server address that I will provide later. Click Next.
7.
Enter a Delivery group name, Display name, and Description. Click Finish.
Installing and Configuring Citrix Provisioning Services (PVS) 7.1
Pre-requisites
In most implementations, there is a single vDisk providing the standard image for multiple target devices. The more target devices that use the same vDisk image, the fewer vDisks need to be created, making vDisk management easier. In order to have a single vDisk, all target devices must have certain similarities to ensure that the OS has all of the drivers it requires to run properly. The three key components that should be consistent are the motherboard, network card, and video card.
Disk storage management is very important because a Provisioning Server can have many vDisks stored
on it, and each disk can be several gigabytes in size. Your streaming performance can be improved using
a RAID array, SAN, or NAS.
Software and hardware requirements are available at
http://support.citrix.com/proddocs/topic/provisioning-7/pvs-install-task1-plan-6-0.html
Provisioning Server to Provisioning Server Communication
All Provisioning Servers must be configured to use the same UDP ports in order to communicate with each other (through the Messaging Manager). At least five ports must exist in the port range selected. The
port range is configured on the Stream Services dialog when the Configuration Wizard is run.
Note
If configuring for high availability (HA), all Provisioning Servers selected as failover servers must
reside within the same site. HA is not intended to cross between sites.
The first port in the default range is UDP 6890 and the last port is 6909.
Provisioning Servers to Target Device Communication
Each Provisioning Server must be configured to use the same ports (UDP) in order to communicate with
target devices (uses the Stream Process). The port range is configured using the Console's Network tab
on the Server Properties dialog.
The default ports include:
UDP 6910 to 6930
Target Device to Provisioning Services Communication
Target devices communicate with Provisioning Services using the following ports:
UDP 6901, 6902, 6905
Note
Unlike the Provisioning Server to target device port numbers, the ports used for target device to Provisioning Services communication cannot be configured.
Login Server Communication
Each Provisioning Server that will be used as a login server must be configured on the Stream Servers
Boot List dialog when the Configuration Wizard is run.
The default port for login servers to use is UDP 6910.
Console Communication
The Soap Server is used when accessing the Console. The ports (TCP) are configured on the Stream
Services dialog when the Configuration Wizard is run.
The default ports are TCP 54321 and 54322 (Provisioning Services automatically sets a second port by
incrementing the port number entered by 1; 54321 + 1).
If this value is modified, the following command must be run.
For PowerShell: MCLI-Run SetupConnection
For MCLI: MCLI Run SetupConnection
Note
Refer to the Provisioning Server Programmers Guides for details.
TFTP Communication
The TFTP port value is stored in the registry:
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\BNTFTP\Parameters Port
The TFTP port defaults to UDP 69.
TSB Communication
The TSB port value is stored in the registry:
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\PVSTSB\Parameters Port
The TSB port defaults to UDP 6969.
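Both port values can be read back with reg query from a command prompt on the Provisioning Server; a minimal sketch:
reg query "HKLM\SYSTEM\CurrentControlSet\Services\BNTFTP\Parameters" /v Port
reg query "HKLM\SYSTEM\CurrentControlSet\Services\PVSTSB\Parameters" /v Port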
Port Fast
Port Fast must be enabled.
Network Card
PXE 0.99j, PXE 2.1 or later.
Network Addressing
DHCP
PVS Storage
Storage accessible by multiple servers for storing vDisks.
Storage Configuration for Provisioning Services
The test environment utilized a NetApp 2-node 3250 cluster system to provide storage for PVS 7.1
virtual machines and vDisks.
The Provisioning Services server farm uses a NetApp CIFS volume for Windows 7 SP1 and Windows Server 2012 vDisk storage.
The Launcher vDisks were stored in a separate NetApp system and are not required to implement the
design.
Install Provisioning Services 7.1
1.
Locate the PVS_Server_x64.exe and run the executable. Select Server Installation and Install
Server.
2.
Click Install.
3.
Click Next.
4.
Select "I accept the terms in the license agreement" and click Next.
5.
Enter a User Name and Organization information. Click Next.
6.
Select Default Path installation, and click Next.
7.
Click Install.
8.
Click Finish to complete the installation.
Configure PVS Using the Provisioning Services Configuration Wizard
To configure PVS, follow these steps:
1.
The PVS configuration Wizard will display after installation or you can start the PVS Configuration
wizard from the Start > Program Files > Citrix > Provisioning Services. Click Next.
2.
In the DHCP services window, select "The service that runs on another computer". Click Next.
3.
In the PXE services window, select "The service that runs on this computer". Click Next.
4.
Select Create Farm. Click Next.
5.
Within the Database Server window, enter the DB Server name or IP address and the instance name.
Click Next.
6.
In the New Farm window, enter the environment-specific information for the Farm Name, Site
Name, and Collection Name. Additionally, choose the appropriate Active Directory group that will
be identified as the Farm Administrator group. Click Next.
7.
Enter a name for PVS storage. Select the path to storage. In this test, we used the NetApp CIFS share.
Click Next.
8.
Enter a license server name. Check Validate license server version and communication. Click Next.
9.
Configure the PVS streaming service NIC. Check the checkbox for your corresponding 10Gbps NIC for streaming. Select and highlight your NIC for management traffic. Click Next.
10. Check "Use the Provisioning Services TFTP service". Click Next.
11. Enter the four PVS servers in your farm. Click Next.
12. Click Finish.
Adding PVS Servers to the Farm
To add PVS servers, follow these steps:
1.
Select Join existing Farm. Click Next.
2.
Enter Database information. Click Next.
3.
Select the existing PVS database from the drop-down box. Click Next.
4.
Select Existing site Name. Click Next.
5.
Select Existing Store. Click Next.
6.
Select Network Service Account. Click Next.
7.
Select Automate computer account password updates in 7 days. Click Next.
8.
Select your corresponding NIC that is configured for the farm PVS streaming service NIC. Click
Next.
9.
Select "Use the Provisioning Services TFTP service". Click Next.
10. List the first four PVS servers in your farm. Click Next.
11. Click Finish.
Installing and Configuring Citrix StoreFront 2.1 for Citrix XenDesktop
During the installation, the StoreFront installation wizard installs all prerequisites.
For command-line installations, you must install the prerequisite software and Windows roles before installing StoreFront. You can deploy prerequisites with PowerShell cmdlets, the Microsoft ServerManagerCmd.exe command, or the Microsoft Deployment Image Servicing and Management (DISM) tool.
If installation of a required Windows role or other software requires a restart (reboot), restart the server before starting the StoreFront installation.
Pre-requisites
When planning your installation, Citrix recommends that you allow at least an additional 2 GB of RAM
for StoreFront over and above the requirements of any other products installed on the server. The
subscription store service requires a minimum of 5 MB disk space, plus approximately 8 MB for every
1000 application subscriptions. All other hardware specifications must meet the minimum requirements
for the installed operating system.
Citrix has tested and provides support for StoreFront installations on the following platforms.
•
Windows Server 2012 R2 Datacenter and Standard editions
•
Windows Server 2012 Datacenter and Standard editions
•
Windows Server 2008 R2 Service Pack 1 Enterprise and Standard editions
Microsoft Internet Information Services (IIS) and Microsoft .NET Framework are required on the server.
If either of these prerequisites is installed but not enabled, the StoreFront installer enables them before
installing the product. Windows PowerShell and Microsoft Management Console, which are both default
components of Windows Server, must be installed on the web server before you can install StoreFront.
The relative path to StoreFront in IIS must be the same on all the servers in a group.
StoreFront uses the following ports for communications. Ensure your firewalls and other network
devices permit access to these ports.
TCP ports 80 and 443 are used for HTTP and HTTPS communications, respectively, and must be
accessible from both inside and outside the corporate network.
TCP port 808 is used for communications between StoreFront servers and must be accessible from inside
the corporate network.
A TCP port randomly selected from all unreserved ports is used for communications between the
StoreFront servers in a server group. When you install StoreFront, a Windows Firewall rule is configured
enabling access to the StoreFront executable. However, since the port is assigned randomly, you must
ensure that any firewalls or other devices on your internal network do not block traffic to any of the
unassigned TCP ports.
TCP port 8008 is used by Receiver for HTML5, where enabled, for communications from local users on
the internal network to the servers providing their desktops and applications.
StoreFront supports both pure IPv6 networks and dual-stack IPv4/IPv6 environments.
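A quick way to confirm that the HTTPS and server-group listeners are up on a StoreFront server is to check the listening ports with built-in Windows tools; a sketch, using the port numbers listed above:
netstat -an | findstr ":443"
netstat -an | findstr ":808"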
In this deployment of StoreFront, we deployed two StoreFront servers and two Citrix NetScaler VPX virtual appliances to load balance the traffic to the StoreFront site.
This includes the requirement for a DNS A record and an IP address for load-balancing purposes.
See the NetScaler VPX deployment section for further deployment details.
Install StoreFront
To install StoreFront, follow these steps:
1.
Log in to the StoreFront server using an account with local administrator permissions. Run the XenDesktop 7.1 installation media and click Start.
Note
StoreFront is included in the XenDesktop installation package.
2.
Click Citrix StoreFront.
3.
Select "I have read understand and accept The software license agreement" and click Next.
4.
Use the default installation path and click Next.
5.
Select Configure Firewall settings automatically. Click Next.
6.
Review the installation details and click Install.
7.
Click Finish. Follow this same procedure on your second StoreFront Server.
Configure StoreFront Site
To configure the StoreFront site, follow these steps:
1.
Start the StoreFront management console from Start > All Programs > Citrix > Citrix StoreFront.
2.
Click Create a new deployment.
3.
Enter a base URL. In this instance, we use a Citrix NetScaler VPX to load balance between two StoreFront nodes. Enter a URL based on the DNS record created earlier.
4.
Enter a Store name and click Next.
5.
Click Add to enter a delivery Controller.
6.
Enter a display name, select Type "XenDesktop", Transport type: "HTTP" and click Add to enter
your delivery controller names. Click OK.
7.
In the Remote Access menu, select None and click Create.
8.
Click Finish.
Create and Join a Multiple-Server Group
To create and join a multiple-server group, follow these steps:
1.
If the Citrix StoreFront management console is not already open after the installation of StoreFront,
click Start > All Programs > Citrix > Citrix StoreFront.
2.
In the left pane of the Citrix StoreFront management console, click Server Group.
3.
In the right "Actions" pane, click Add Server.
4.
A window will pop up, with an authorization code. Write this code down for use in your second
installation of the StoreFront server.
5.
On your second StoreFront server, proceed to open the StoreFront Management Console. Click "Join
existing server group".
6.
Enter the Authorization server name and the Authorization Code provided by StoreFront server 1.
7.
Both servers and their synchronization status appear under the server group page.
Desktop Delivery Infrastructure
This section provides details on how to use the Citrix XenDesktop 7.1 delivery infrastructure to create virtual desktop golden images and to deploy the virtual desktops.
This section includes:
•
Overview of Desktop Delivery
•
Overview of PVS vDisk Image Management
•
Overview of the components in the solution
•
Citrix User Profile Management
•
Creating the Windows 7 SP1 Golden Image and converting it to a Provisioning Services vDisk
•
Deploying Desktops with Citrix Provisioning Services 7.1
•
Load balancing StoreFront servers with Citrix NetScaler VPX 10.1
Overview of Desktop Delivery
The advantage of using Citrix Provisioning Services (PVS) is that it allows VMs to be provisioned and
re-provisioned in real-time from a single shared disk image called a virtual Disk (vDisk). By streaming
a vDisk rather than copying images to individual machines, PVS allows organizations to manage a small
number of disk images even when the number of VMs grows, providing the benefits of centralized
management, distributed processing, and efficient use of storage capacity.
In most implementations, a single vDisk provides a standardized image to multiple target devices.
Multiple PVS servers in the same farm can stream the same vDisk image to thousands of target devices.
Virtual desktop environments can be customized through the use of write caches and by personalizing
user settings through Citrix User Profile Management.
This section describes the installation and configuration tasks required to create standardized master
vDisk images using PVS.
Overview of PVS vDisk Image Management
After installing and configuring PVS components, a vDisk is created from a device's hard drive by taking
a snapshot of the OS and application image, and then storing that image as a vDisk file on the network.
vDisks can exist on a Provisioning Server, file share, or in larger deployments (as in this CVD) on a
storage system with which the Provisioning Server can communicate (through iSCSI, SAN, NAS, and
CIFS). A PVS server can access many stored vDisks, and each vDisk can be several gigabytes in size.
For this solution, the vDisk was stored on a CIFS share located on the NetApp storage.
vDisks can be assigned to a single target device in Private Image Mode, or to multiple target devices in
Standard Image Mode. In Standard Image mode, the vDisk is read-only, which means that multiple target
devices can stream from a single vDisk image simultaneously. Standard Image mode reduces the
complexity of vDisk management and the amount of storage required since images are shared. In
contrast, when a vDisk is configured to use Private Image Mode, the vDisk is read/write and only one
target device can access the vDisk at a time.
When a vDisk is configured in Standard Image mode, each time a target device boots, it always boots
from a "clean" vDisk image. Each target device then maintains a Write Cache to store any writes that the
operating system needs to make, such as the installation of user-specific data or applications. Each
virtual desktop is assigned a Write Cache disk (a differencing disk) where changes to the default image
are recorded. Used by the virtual Windows operating system throughout its working life cycle, the Write
Cache is written to a dedicated virtual hard disk created by thin provisioning and attached to each new
virtual desktop.
Overview of Solution Components
Figure 24 provides a logical overview of the solution components in the environment.
Figure 24
Citrix XenDesktop 7.1 and Provisioning Services 7.1 Logical Diagram
Summary of the Environment:
•
(14) XenServer 6.2 SP1 B200 M3
•
(14) XenServer 6.2 SP1 B250 M2 (Client Launcher hosts: Not required for solution)
•
(2) XenDesktop 7.1 Delivery Controller VMs
•
(5) Provisioning Server 7.1 Server for Virtual Desktop VMs
•
(1) Provisioning Server 7.1 for Client Launcher VM (Client Launcher VM: Not required for
solution)
•
(2) Citrix NetScaler 10.1 VPX
•
(140) VSI Launcher VMs (Client Launcher VMs: Not required for solution)
•
(550) Windows 7 Hosted Virtual Desktops
•
(64) XenDesktop 7.1 RDS Server VMs
•
(1) Citrix Licensing Server VM
•
(2) StoreFront Server VMs
•
3-node Microsoft SQL Server 2012 R2 VM cluster for Provisioning Services and XenDesktop
Storage on NetApp 3250:
•
(14) 12-GB Fibre Channel Boot LUNs
•
(1) 2 TB Shared XenDesktop infrastructure NFS Volume
•
(1) 2 TB Hosted Virtual Desktop NFS Volume for Write Cache
•
(4) 180 GB Hosted Shared Desktop NFS Volumes for Write Cache
•
(1) 500GB vDisk CIFS share for PVS vDisk storage
•
(1) 75GB User Profiles CIFS share, for UPM storage
•
(1) 200GB Always on SQL CIFS share for SQL AON storage
The following tables provide details on the configuration of the solution components.
XenServer 6.2 SP1 Hosts
Hardware: Cisco B-Series Blade Servers
OS: XenServer 6.2 SP1
Model: Cisco UCS B200 M3
CPU: 2x Intel Xeon E5-2680 v2 processors
RAM: 384GB
Network: UCSB-MLOM-40G-01, 8-port 10Gbps
Disk (internal disks): 2x 400GB SSDs on one HVD and one HSD blade

Enterprise Infrastructure Hosts
Hardware: Cisco B-Series Blade Servers
OS: XenServer 6.2 SP1
Model: Cisco UCS B200 M3
CPU: 2x Intel Xeon E5-2650 v2 processors
RAM: 256GB
Network: UCSB-MLOM-40G-01, 8-port 10Gbps
Disk: (internal disks)

Citrix Provisioning Server 7.1
Hardware: Virtual Machine
OS: Windows Server 2012
CPU: 4 vCPUs
RAM: 16GB
Disk: 60GB
Network: 2x 10Gbps

Citrix XenDesktop 7.1 Delivery Controllers
Hardware: Virtual Machine
OS: Windows Server 2012
CPU: 2 vCPUs
RAM: 8GB
Disk: 60GB
Network: 1x 10Gbps

Citrix StoreFront 2.1 Servers
Hardware: Virtual Machine
OS: Windows Server 2012
CPU: 8 vCPUs
RAM: 8GB
Disk: 60GB
Network: 1x 10Gbps

Microsoft SQL Server 2012 R2 for DDC and PVS
Hardware: Virtual Machine
OS: Windows Server 2012
CPU: 4 vCPUs
RAM: 12GB
Disk: 60GB
Network: 1x 10Gbps
Citrix Profile Management
Overview of Profile Management
Profile management helps ensure that the user's personal settings are applied to the user's virtual desktop
and applications, regardless of the location and end point device.
Profile management is enabled through a profile optimization service that provides an easy, reliable way
for managing these settings in Windows environments to ensure a consistent experience by maintaining
a single profile that follows the user. It auto-consolidates and optimizes user profiles to minimize
management and storage requirements and requires minimal administration, support and infrastructure,
while providing users with improved logon and logoff times.
Profile management is a feature available for XenApp Enterprise and Platinum editions and XenDesktop
Advanced, Enterprise and Platinum editions.
This section explains the installation and configuration of the profile cluster and includes the following:
•
Clustering the two virtual machines
•
Creating a highly available file share
Configuration of User Profile Manager Share on NetApp FAS3250
Clustered Data ONTAP was introduced to provide more reliability and scalability to the applications and
services hosted on Data ONTAP. Windows File Services is one of the key value propositions of clustered
Data ONTAP because it provides services through the Server Message Block (CIFS/SMB) protocol.
Clustered Data ONTAP 8.2 brings added functionality and features to Windows File Services.
SMB 3.0 is the revised version of the SMB 2.x protocol, introduced by Microsoft in Windows 8 and
Windows Server 2012. The SMB 3.0 protocol offers significant enhancements to the SMB protocol in
terms of availability, scalability, reliability, and protection.
For more information on CIFS configuration see TR-4191: Best Practice Guide for Clustered Data
ONTAP 8.2 Windows File Services.
Setting up the CIFS server involves creating the storage virtual machine with the proper settings for CIFS
access, configuring DNS on the Vserver, creating the CIFS server, and, if necessary, setting up UNIX
user and group name services.
Note
Before you set up your CIFS server, you must understand the choices you need to make when performing
the setup. You should make decisions regarding the storage virtual machine, DNS, and CIFS server
configurations and record your choices in the planning worksheet prior to creating the configuration.
This can help you in successfully creating a CIFS server.
Follow this process for the share called User_Profiles used by UPM.
R4E08NA3250-CL-02::> vserver setup
Welcome to the Vserver Setup Wizard, which will lead you through
the steps to create a storage virtual machine that serves data to clients.
Step 1. Create a Vserver.
Enter the Vserver name: CIFS
Choose the Vserver data protocols to be configured {nfs, cifs, fcp, iscsi}:
cifs
Choose the Vserver client services to be configured {ldap, nis, dns}:
dns
Enter the Vserver's root volume aggregate { aggr0_R4E08NA3250_02,
DATA_R4E08NA3250_02}
[DATA_R4E08NA3250_02]: DATA_R4E08NA3250_02
Enter the Vserver language setting, or "help" to see all languages [C]:
en-us
Enter the Vserver root volume's security style {unix, ntfs, mixed} [unix]:
ntfs
Vserver creation might take some time to finish….Vserver vDisk with language
set to C created. The permitted protocols are cifs.
Step 2: Create a data volume
You can type "back", "exit", or "help" at any question.
Do you want to create a data volume? {yes, no} [yes]: yes
Enter the volume name [vol1]: User_Profiles
Enter the name of the aggregate to contain this volume {
aggr0_R4E08NA3250_02, DATA_R4E08NA3250_02} [DATA_R4E08NA3250_02]:
DATA_R4E08NA3250_02
Enter the volume size: 75GB
Enter the volume junction path [/User_Profiles]:
It can take up to a minute to create a volume…Volume User_Profiles of size
75GB created on aggregate DATA_R4E08NA3250_02 successfully.
Step 3: Create a logical interface.
You can type "back", "exit", or "help" at any question.
Do you want to create a logical interface? {yes, no} [yes]: yes
Enter the LIF name [lif1]: CIFS_User_Profiles
Which protocols can use this interface [cifs]:
Enter the home node { R4E08NA3250-CL-01, R4E08NA3250-CL-02}
[R4E08NA3250-CL-02]: R4E08NA3250-CL-02
Enter the home port {a0b, a0b-803, a0b-804} [a0a]:
a0b-803
Enter the IP address: 10.218.241.101
Enter the network mask: 255.255.255.0
Enter the default gateway IP address:
LIF CIFS_User_Profiles on node R4E08NA3250-CL-02, on port a0b-803 with IP
address
10.218.241.101 was created.
Do you want to create an additional LIF now? {yes, no} [no]: no
Step 4: Configure DNS (Domain Name Service).
You can type "back", "exit", or "help" at any question.
Do you want to configure DNS? {yes, no} [yes]:
Enter the comma separated DNS domain names: rainier14q1.net
Enter the comma separated DNS server IP addresses: 10.218.241.15
DNS for Vserver CIFS is configured.
Step 5: Configure CIFS.
You can type "back", "exit", or "help" at any question.
Do you want to configure CIFS? {yes, no} [yes]:
Enter the CIFS server name [VDISK]: R4E08NA3250-CL
Enter the Active Directory domain name: rainier14q1.net
In order to create an Active Directory machine account for the CIFS server,
you
must supply the name and password of a Windows account with sufficient
privileges to add computers to the "CN=Computers" container within the
"rainier14q1.net" domain.
Enter the user name [administrato]: administrator
Enter the password:
CIFS server "R4E08NA3250-CL" created and successfully joined the domain.
Do you want to share a data volume with CIFS clients? {yes, no} [yes]:
Yes
Enter the CIFS share name [User_Profiles]:
Enter the CIFS share path [/User_Profiles]:
Select the initial level of access that the group "Everyone" has to the
share
{No_access, Read, Change, Full_Control} [No_access]: Full_Control
The CIFS share "User_Profiles" created successfully.
Default UNIX users and groups created successfully.
UNIX user "pcuser" set as the default UNIX user for unmapped CIFS users.
Default export policy rule created successfully.
Vserver CIFS, with protocol(s) cifs, and service(s) dns has been
configured successfully.
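The resulting configuration can be spot-checked from the clustershell; a minimal verification sketch using the object names created above:
vserver cifs show -vserver CIFS
vserver cifs share show -vserver CIFS -share-name User_Profiles
network interface show -vserver CIFS -lif CIFS_User_Profiles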
NetApp Flash Cache in Practice
Flash Cache™ (previously called PAM II) is a solution that combines software and hardware within
NetApp storage controllers to increase system performance without increasing the disk drive count.
Flash Cache is implemented as software features in Data ONTAP and PCIe-based modules with 256GB,
512GB or 1TB of flash memory per module. Flash Cache cards are controlled by custom-coded
field-programmable gate arrays (FPGAs). Multiple modules may be combined in a single system and are
presented as a single unit. This technology allows sub-millisecond access to data that would previously
have been served from disk at averages of 10 milliseconds or more.
Complete the following steps to enable Flash Cache on each node:
1. Run the following commands from the cluster management interface:
system node run -node <<var_node01>> options flexscale.enable on
system node run -node <<var_node01>> options flexscale.lopri_blocks off
system node run -node <<var_node01>> options flexscale.normal_data_blocks on
system node run -node <<var_node02>> options flexscale.enable on
system node run -node <<var_node02>> options flexscale.lopri_blocks off
system node run -node <<var_node02>> options flexscale.normal_data_blocks on
Note
Data ONTAP 8.1 and later does not require a separate license for Flash Cache.
For instructions about how to configure Flash Cache in metadata mode or low-priority data caching
mode, refer to TR-3832: Flash Cache Best Practices Guide. Before customizing the settings, determine
whether the custom settings are required or if the default settings are sufficient.
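To confirm that the Flash Cache settings are in effect, the same options can be listed per node; a minimal check using the node variables above:
system node run -node <<var_node01>> options flexscale
system node run -node <<var_node02>> options flexscale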
Installing and Configuring User Profile Management
The following are the steps to install and configure User Profile Management in the Virtual Desktop
Master Image.
1. Start the UPM installer. Click Next.
2. Use the default installation paths. Click Next.
3. Click Finish after the installation is complete.
4. Create a GPO linked to the users' OU (Organizational Unit).
5. Add the Citrix UPM administrative template:
a. Edit the new GPO and browse to User Configuration > Policies > Administrative Templates.
b. Right-click Administrative Templates and select Add/Remove Templates.
c. Click Add.
d. Browse to the location of the template file provided with the UPM installation files (ctxprofile4.1.1.adm).
6. Configure the following settings under Administrative Templates > Citrix > Profile Management:
7. Enable Active write back.
8. Enable Profile Management.
9. Enter the absolute path for the location where the profiles will be stored (for example, \\upmshare\profiles\%username%).
10. Select Enable for Process logons of local administrators.
11. Select Enable for File system > Exclusion list - directories and enter the following information:
– AppData\LocalLow
– AppData\Roaming
– $Recycle.Bin
– AppData\Local
12. Click Log Settings > Enable Logging and select Enable.
13. Click Profile handling > Delete locally cached profiles on logoff and select Disabled.
14. Click Local profile conflict handling.
15. Select If both local windows profile and Citrix Profile exist.
16. Select Delete local profile.
17. Click Streamed User Profiles.
18. Enable Profile Streaming.
Note
These settings are based on Citrix documentation. Refer to the Reference section of this document for more information.
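For reference, once the GPO is applied, the configured policies can be spot-checked on a target desktop under the UPM policy registry key. The value names below are illustrative and should be confirmed against the Profile Management documentation for your UPM release:
reg query "HKLM\SOFTWARE\Policies\Citrix\UserProfileManager" /v ServiceActive
reg query "HKLM\SOFTWARE\Policies\Citrix\UserProfileManager" /v PathToUserStore
reg query "HKLM\SOFTWARE\Policies\Citrix\UserProfileManager" /v PSEnabled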
Golden Image and vDisk Creation—Microsoft Windows 7 for Hosted
Virtual Desktop and Server 2012 for Hosted Shared Desktop
Creating a golden image for Hosted Virtual Desktop on Windows 7 32-bit and for Hosted Shared Desktop on Windows Server 2012 includes the following components:
• Citrix Provisioning Services Target Device software
• Citrix XenDesktop 7.1 Virtual Desktop Agent
• Login VSI Target software
• Microsoft Office 2010 Professional
Follow the steps in this section for each of these components for each Virtual Desktop Type.
Create Base Windows 7 SP1 32bit Virtual Machine
The Microsoft Windows 7 SP1 master or golden image with additional software was initially installed
and prepared as a standard virtual machine on XenServer prior to being converted into a separate Citrix
Provisioning Server vDisk file. The vDisk is used in conjunction with Provisioning Server 7.1 and the
XenDesktop 7.1 controller to provision 550 new desktop virtual machines on the XenServer 6.2 SP1
Pool.
With XenDesktop 7.1 and Provisioning Server 7.1, the XenDesktop Setup Wizard was utilized.
Each virtual desktop virtual machine was created with a 3.0 GB write cache disk.
The section below describes the process used to create the master or golden image and centralized
Windows 7 vDisk used by Provisioning Services.
1. Install Windows 7 32-bit SP1 Enterprise.
2. Install Office 2010 Professional with Run All From My Computer.
3. Install the most recent Office 2010 service pack.
4. Run Windows Updates (be sure not to install IE9; use IE8).
5. Set a static page file of custom size, with maximum and minimum both set to 1536MB.
Create Base Windows Server 2012 Virtual Machine
The Microsoft Windows Server 2012 master or golden image was also initially installed and prepared as
a standard virtual machine on XenServer prior to being converted into a separate Citrix Provisioning
Server vDisk file. The vDisk is used in conjunction with Provisioning Server 7.1 and the XenDesktop
7.1 controller to provision 64 new virtual machines on the XenServer 6.2 SP1 Pool.
The XenDesktop Setup Wizard was utilized with XenDesktop 7.1 and Provisioning Server 7.1.
Each Hosted Shared desktop virtual machine was created with a 25.0 GB write cache disk.
The section below describes the process used to create the master or golden image and centralized
Windows 2012 vDisk used by Provisioning Services.
1. Install Windows Server 2012.
2. Install the Remote Desktop Services role from the Add Roles and Features Wizard.
3. Install .NET version 4.01.
4. Install Office 2010 Professional with Run All From My Computer.
5. Install the most recent Office 2010 service pack.
6. Run Windows Updates.
7. Set a static page file of custom size, with maximum and minimum both set to 4080MB.
Add Provisioning Services Target Device Software
1. Install the Provisioning Services Target Device software on both your Windows 7 SP1 golden image and your Windows Server 2012 golden image.
2. Launch the PVS Device executable.
3. Click Target Device Installation, then click Next.
4. Accept the license agreement. Click Next.
5. Enter the customer information. Click Next.
6. Choose the default installation location. Click Next.
7. Click Install to begin the PVS client installation process.
8. Uncheck Launch Imaging Wizard (this step takes place later, during the conversion process). Click Finish.
9. Click Yes to restart the virtual machine.
Add XenDesktop 7.1 Virtual Desktop Agent
To install the XenDesktop agent on both your Windows 7 SP1 golden image and Windows Server 2012 golden image, follow these steps:
1. Copy the VDA executable to the local machine.
2. Launch the executable and select Start.
3. Select Virtual Delivery Agent for Windows Desktop OS.
4. Select I want to: Create a Master Image. Click Next.
5. When prompted about HDX 3D Pro, select No, install the standard VDA. Click Next.
6. Do not install Citrix Receiver; uncheck the checkbox and click Next.
7. Enter the Delivery Controller information, click Test connection, and then click Add. Click Next.
8. Select Optimize Performance and click Next.
9. Windows Firewall: select Automatically create the rules and click Next.
10. Review the installation components and click Install.
11. Click Finish and restart the virtual machine.
12. Remove the VDA Welcome Screen program from the Windows Startup folder.
13. Restart the VM.
14. Log in and check the event log to make sure that the DDC registration completed successfully.
Add Login VSI Target Software
To add the Login VSI target software, follow these steps:
1. Install the Login VSI target software on both your Windows 7 SP1 golden image and your Windows Server 2012 golden image.
2. Launch the setup wizard using Run as administrator.
3. Enter the VSI share path.
4. Use the default installation paths.
Perform Additional PVS and Citrix XenDesktop Optimizations
To optimize PVS and XenDesktop, follow these steps:
1. Delete the XPS printer.
2. Make sure that Bullzip PDF is the default printer.
3. Optimize:
• Configure the SWAP file to 1536 MB (Cisco requirement)
• Disable Windows Restore and its service
– Delete the restore points
• Perform a disk cleanup
• Disable the Windows Firewall service
• Disable Windows Backup scheduled jobs
• Open Computer Management > System Tools > Task Scheduler > Task Scheduler Library > Microsoft > Windows and disable the following:
– Defrag
– Offline files
– Windows Backup
• Windows Performance settings:
– Smooth edges
– Use visual styles
– Show translucent
– Show window contents when dragging
4. Modify the Action Center settings (uncheck all warnings).
5. Make sure that the Shadow Copy service is running and set to Automatic.
Convert Golden Image Virtual Machine to PVS vDisk
To convert a virtual machine to a vDisk that will be used to stream desktops through PVS, follow these
steps:
1. Run the Imaging Wizard on both your Windows 7 SP1 golden image and your Windows Server 2012 golden image to create separate vDisks.
2. Reboot the source virtual machine.
3. Log in to the virtual machine using an account that has administrative privileges.
4. Go to Start > All Programs > Citrix > Provisioning Services.
5. Launch the PVS Imaging Wizard.
6. Click Next.
7. Enter the server name or IP address of the PVS server you will connect to in order to create the new vDisk, and select Use my Windows credentials if your account has PVS server administrative permissions. Click Next.
8. Select Create A New vDisk. Click Next.
9. Enter the vDisk name, then select the PVS store and the vDisk type Fixed. Click Next.
10. Select KMS for Licensing Management. Click Next.
11. Use the default image volume sizes. Click Next.
12. Assign a target device name and select a PVS device collection. Click Next.
13. Click Optimize for Provisioning Services. Click Finish to begin the vDisk creation.
14. You will be prompted to reboot the source virtual machine. Prior to rebooting, go to the properties
of the source virtual machine and change the boot options so that it performs a Network boot.
15. Click Yes to reboot the source virtual machine.
16. Log in as the same user account that was used at the beginning of this process.
17. Once you are logged in, the Imaging Wizard starts the data conversion process. The time needed to complete this process depends on the size of the vDisk.
18. Shut down the source virtual machine.
19. Make sure that the VM is set to boot from the network.
20. In the PVS console, switch the collection to boot from the vDisk.
21. On the PVS server, switch the vDisk to standard mode.
Delivering Desktops with Provisioning Services (PVS) 7.1
Citrix Provisioning Services (PVS) was used in this solution for desktop delivery.
PVS Configuration for Standard Mode Desktops
The Windows 7 SP1 desktop image is converted into a vDisk (.vhd) image. The vDisk is then configured
in a Shared (Read-only) mode and hosted within a shared file location.
• PVS was used to create the desired number of virtual machines and machine accounts in Active Directory, based on parameters specified using the built-in XenDesktop Setup Wizard (referenced in the next section).
• When a virtual machine starts, PVS streams the vDisk image to the hypervisor, where it is loaded into RAM.
• PVS injects a Security Identifier (SID) and host name associated with the virtual machine as each desktop boots to maintain uniqueness in AD. These object mappings are maintained and managed within the PVS server and are visible in the PVS Console under the Collections view.
• Each virtual desktop is assigned a Write Cache (temporary file) where any delta changes (writes) to the default image are recorded; it is used by the virtual Windows operating system throughout its working life cycle. The Write Cache is written to a dedicated 3GB hard drive.
• Five PVS servers were configured in a farm with a single site to provide streaming services for 550 Hosted Virtual Desktop machines and 64 Hosted Shared Virtual machines, with high availability and resilience. Streaming connections automatically fail over to a working server within the farm in the event of a failure, without interruption to the desktop.
• The vDisk was hosted on a dedicated CIFS share from the NetApp 3250 cluster and was accessible by all servers in the farm, for ease of management and to support high availability.
• Two device collections were created, one for each XenServer 6.2 SP1 pool, to contain target device records for ease of management.
• Each PVS server was assigned 4 vCPUs and 16GB RAM.
Figure 25 Provisioning Services Farm Layout
A separate PVS server with local storage was used to provision Login VSI launcher machines for the test workload. One NFS volume was used to create and store each virtual machine's write cache drive. It is important to consider where the write cache is placed when scaling virtual desktops using PVS. There are several options for write cache placement:
• PVS server (the default)
• Hypervisor RAM
• Device local disk (an additional virtual disk for Hosted Virtual Desktop and Hosted Shared Desktop machines)
For optimal performance and scalability in this project, the Cache on device hard disk option was used. A 3GB virtual disk was assigned to each Hosted Virtual Desktop machine and a 24GB virtual disk to each Hosted Shared Desktop machine through the PVS XenDesktop Setup Wizard. When this mode is enabled, the PVS target device agent installed in the Windows 7 and Windows Server 2012 golden images automatically places the Windows swap file on the same drive used by the PVS Write Cache.
Storage Configuration for PVS Write Cache
NetApp OnCommand System Manager can be used to set up volumes and LIFs. Although LIFs can be
created and managed through the command line, this document focuses on the NetApp OnCommand
System Manager GUI. Note that System Manager 2.1 or later is required to perform these steps. NetApp
recommends creating a new LIF whenever a new volume is created. A key feature in clustered Data
ONTAP is its ability to move volumes in the same Vserver from one node to another. When you move
a volume, make sure that you move the associated LIF as well. This will help keep the virtual cabling
neat and prevent indirect I/O that will occur if the migrated volume does not have an associated LIF to
use. It is also best practice to use the same port on each physical node for the same purpose.
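For example, a volume and its associated LIF can be relocated together from the clustershell; a minimal sketch with placeholder names:
volume move start -vserver <vserver> -volume <volume> -destination-aggregate <aggr_on_destination_node>
network interface migrate -vserver <vserver> -lif <lif_name> -destination-node <destination_node> -destination-port <port>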
In this section, the volume for the PVS Write Cache and its respective network interface will be created.
Create a Network Interface
To create the network interface using the OnCommand System Manager, follow these steps:
1. Log in to clustered Data ONTAP in System Manager.
2. On the Vserver Hosted_VDI, select the Network Interface tab under Configuration.
3. Click Create to start the Network Interface Create Wizard. Click Next.
4. Enter a name for the network interface: Hosted_VDI_WS. Select Data and click Next.
5. Select NFS as the protocol and click Next.
6. Select the home port for the network interface and enter the corresponding IP, netmask and gateway details.
7. On the Summary page, review the details and click Next. The network interface is now available.
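The equivalent LIF creation from the clustershell would look like the following sketch (the address values are placeholders):
network interface create -vserver Hosted_VDI -lif Hosted_VDI_WS -role data -data-protocol nfs -home-node <node> -home-port <port> -address <ip_address> -netmask <netmask>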
Create a Volume for Write Cache
Use NetApp Virtual Storage Console (VSC) for Citrix XenServer to create a volume for the write cache.
The VSC applies best practices and makes the provisioning of storage repositories a much simpler
operation than performing it manually.
1. In XenCenter, right-click the host on which you wish to provision the storage repository, and select NetApp VSC > Provision Storage Repository.
2. Select the target storage controller and Vserver.
3. Select NFS as the protocol and click Next.
4. Enter the name for the storage repository and provide additional details as prompted on the screen:
a. Size: Enter the maximum size depending on the controller and space available. For details, see the Data ONTAP Storage Management Guide for your Data ONTAP release.
b. Storage Repository Name: Use the default or a custom name.
c. Aggregate: Select the available aggregate from the drop-down list.
d. Enable Thin Provision: This option sets space reservation to none and disables space checks.
e. Enable Auto-Grow and provide the following information:
– Grow increment: Amount of storage added to the storage repository each time space is needed.
– Maximum storage repository size: Limit at which autogrow stops.
5. Click Finish and wait a few moments while the volume and storage repository are created.
The result is a new NFS volume created on the controller and Vserver selected in the provisioning wizard, and a storage repository created and mounted on the selected hosts in XenCenter.
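If the VSC is not available, a comparable thin-provisioned, auto-growing volume can be created manually from the clustershell; a sketch with placeholder names and sizes (the VSC additionally applies its own best-practice settings, and autosize flags vary slightly between Data ONTAP releases):
volume create -vserver Hosted_VDI -volume WriteCache -aggregate <aggr> -size 500g -space-guarantee none -junction-path /WriteCache
volume autosize -vserver Hosted_VDI -volume WriteCache -mode grow -maximum-size 1t -increment-size 50g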
Removing Deduplication on Write Cache Volumes
If the write cache volume is recycled frequently and the data change rate is fairly low, there is no need to enable deduplication on that volume. If deduplication is enabled and you wish to disable it, you can leverage the VSC for Citrix XenServer to disable deduplication on any storage repository that has it enabled.
To remove deduplication on the write cache volumes, follow these steps:
1. In XenCenter, right-click the desired storage repository.
2. Select Deduplicate Storage Repository.
3. Verify that the deduplication state is Enabled.
4. If it is enabled, select the Disable Deduplication checkbox and click OK. If the deduplication state shows Disabled, nothing needs to be done.
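Alternatively, deduplication can be checked and disabled directly from the clustershell; a sketch with placeholder names:
volume efficiency show -vserver <vserver> -volume <write_cache_volume>
volume efficiency off -vserver <vserver> -volume <write_cache_volume>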
Creating a Hosted Virtual Desktop Using PVS XenDesktop Setup Wizard
To create a hosted virtual desktop, follow these steps:
1. Start the XenDesktop Setup Wizard.
2. Click Next.
3. Connect to the XenDesktop Controller.
4. Enter your XenDesktop Controller address. Click Next.
5. Select XenDesktop Host Resources.
6. Select your predefined host resource from your XenDesktop Controller, which includes the hypervisor, storage and network information. Click Next.
7. Enter the password to connect to XenServer and select the VM template you are going to use. Click Next.
8. Select a standard-mode vDisk and click Next.
9. Select Create a new Catalog and enter a name and description. Click Next.
10. Select the operating system for the catalog.
11. Select Windows Desktop Operating System for Hosted Virtual Desktop, or Windows Server Operating System for Hosted Shared Desktop. Click Next. The XenDesktop catalog user experience selection displays.
12. Select A fresh new (random) desktop each time. Click Next.
13. Select the virtual machine properties.
14. Enter the number of machines to create, the number of vCPUs per machine, the memory, and the local write cache disk size. Select PXE boot mode. Click Next.
15. Select Create new accounts and click Next.
16. Select the desired Active Directory Organizational Unit and enter a naming scheme. Click Next.
17. Review your deployment settings and click Finish.
Creating Hosted Shared Desktops Using the XenDesktop Wizard in PVS
To create a hosted shared desktop, follow these steps:
1. Start the XenDesktop Setup Wizard. Click Next.
2. Connect to the XenDesktop Controller.
3. Select your predefined host resource from your XenDesktop Controller, which includes the hypervisor, storage and network information. Click Next.
4. Enter the password to connect to XenServer and select the VM template you are going to use. Click Next.
Note
Prior to this step, the VM templates need to be configured on each XenServer 6.2 SP1 Storage
Repository that will contain drives for the streamed desktops.
5. Select a standard-mode vDisk and click Next.
6. Select Create a new Catalog and enter a name and description. Click Next.
7. Select Windows Server Operating System for Hosted Shared Desktop. Click Next.
8. Enter the number of machines to create, the number of vCPUs per machine, the memory, and the local write cache disk size. Select PXE boot mode. Click Next.
9. Select Create new accounts and click Next.
10. Select the desired Active Directory Organizational Unit and enter a naming scheme. Click Next.
11. Review your deployment settings, and click Finish.
Citrix NetScaler VPX 10.1 for Load Balancing StoreFront
Overview of NetScaler VPX
NetScaler VPX is a software-based virtual appliance providing the comprehensive NetScaler feature set.
As an easy-to-deploy application delivery solution that runs on multiple virtualization platforms,
NetScaler VPX can be deployed on demand, anywhere in the datacenter, using off-the-shelf standard
servers. The simplicity and flexibility of NetScaler VPX make it simple and cost-effective to fully
optimize every web application and more effectively integrate networking services with application
delivery.
To provide the best performance out of StoreFront, we used an HA pair of Citrix NetScaler VPX 10.1 to
load balance the web traffic between StoreFront servers.
In the following sections, we import the downloadable Citrix NetScaler VPX 10.1 virtual machine into the XenServer 6.2 SP1 DC-INFRA pool, configure the first node, add a service group, create a virtual server, and provide the steps to configure the HA pair from the second node.
Import a Virtual Machine into Citrix XenCenter
To import a virtual machine into XenCenter, follow these steps:
1. Download the Citrix NetScaler VPX VM. In XenCenter, right-click your XenServer 6.2 SP1 pool name and select Import VM.
2. Select Browse to point to the filename. Click Next.
3. Confirm the pool to import into. Click Next.
4. Select a storage SR to import the VM into. Click Next.
5. Select the appropriate network from the drop-down menu. Click Next.
6. Click Finish.
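The same import can also be performed from the XenServer CLI; a minimal sketch, assuming the appliance was downloaded as an .xva file and the target SR UUID is known:
xe vm-import filename=/tmp/NSVPX-XEN-10.1.xva sr-uuid=<sr_uuid>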
Configure NetScaler VPX 10.1 for Load Balancing
To configure NetScaler VPX 10.1, follow these steps:
1. From XenCenter, go to the virtual machine console and enter an IP address, subnet mask and gateway for management.
2. Enter 4 to save and quit.
3. From a web browser, navigate to the IP address and log in as user nsroot with password nsroot.
4. Enter a subnet IP address, DNS information and time zone, and change the administrator password. Click Continue.
5. Click Browse to import a license file.
6. Click Continue.
7. Click Done.
8. You will be prompted to reboot. Click Yes.
9. Log back in to the NetScaler. From System, go to Settings and click Configure basic features.
10. Check the box for Load Balancing. Click OK.
Add Load Balancing Servers
To add load balancing servers, follow these steps:
1. From Configuration, click Traffic Management, expand Load Balancing and select Servers. Click Add.
2. Enter a name for the first StoreFront server and enter its IP address. Click Create.
3. Enter the name and IP address for the second StoreFront server. Click Create, then click Close.
Create Service Group
To create a Service Group, follow these steps:
1. Under Traffic Management > Load Balancing, select Service Groups. Click Add.
2. Enter a service group name, click Server Based under Members, and enter 80 for the port. Click Add for each StoreFront server. Click Create.
Create a Virtual Server
To create a virtual server, follow these steps:
1. From Traffic Management > Load Balancing, click Virtual Servers. Click Add.
2. Enter a name for the virtual server and its IP address, then click the Service Groups tab. Check the box for the service group. Click the Method and Persistence tab.
3. From the Method drop-down menu, select Least Connection. From the Persistence drop-down menu, select SOURCEIP and enter a /32 netmask (255.255.255.255). Click Create.
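The equivalent load balancing configuration can also be applied from the NetScaler CLI; a sketch with placeholder names and addresses:
add server SF-1 <storefront1_ip>
add server SF-2 <storefront2_ip>
add serviceGroup SG-StoreFront HTTP
bind serviceGroup SG-StoreFront SF-1 80
bind serviceGroup SG-StoreFront SF-2 80
add lb vserver VS-StoreFront HTTP <vip_address> 80 -lbMethod LEASTCONNECTION -persistenceType SOURCEIP
bind lb vserver VS-StoreFront SG-StoreFront
save ns config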
Add and Configure High Availability Pair
To add and configure the HA pair, follow these steps:
1. Proceed with configuring the second NetScaler node. Before configuring the HA pair, read the following eDocs document for the best HA configuration for your environment:
http://support.citrix.com/proddocs/topic/ns-system-10-1-map/ns-nw-ha-intro-wrppr-con.html
Proceed with the next steps after you import and configure a management IP address for your second NetScaler VPX.
2. When the second NetScaler is configured with a management IP address, go to System and select High Availability. Click Add.
3. Enter the IP address of the first NetScaler. Click OK.
Note
The configuration from the first Citrix NetScaler VPX 10.1 node will be replicated to your second node.
4. Repeat this same process for the first NetScaler VPX.
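From the NetScaler CLI, the same pairing is performed on each appliance; a minimal sketch with a placeholder peer address:
add ha node 1 <peer_netscaler_ip>
save ns config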
Test Setup and Configurations
In this project, we tested a single Cisco UCS B200 M3 blade server in a single chassis and twelve Cisco UCS B200 M3 blade servers in two chassis to illustrate linear scalability.
Cisco UCS Test Configuration for Single Blade Scalability
Figure 26 Cisco UCS B200 M3 Blade Server for Single Server Scalability XenDesktop 7.1 Windows 7 Hosted Virtual Desktops (HVD) with PVS 7.1 Login VSImax
Hardware components
• 1 X Cisco UCS B200-M3 (Intel Xeon E5-2680v2 processors @ 2.8 GHz) blade server with 384GB RAM (24 X 16 GB DIMMs @ 1866 MHz) running XenServer 6.2 SP1 as a Windows 7 SP1 32-bit virtual desktop host, and 256GB RAM (16 X 16 GB DIMMs @ 1866 MHz) running XenServer 6.2 SP1 as a Windows Server 2012 virtual desktop session host
• 2 X Cisco UCS B200-M3 (Intel Xeon E5-2650v2 processors @ 2.6 GHz) blade servers with 128 GB of memory (8 X 16 GB DIMMs @ 1866 MHz) as infrastructure servers
• 4 X Cisco UCS B250-M2 (5680 @ 3.33 GHz) blade servers with 192 GB of memory (48 X 4 GB DIMMs @ 1333 MHz) as load generators (not required for solution deployment)
• 1 X VIC1240 Converged Network Adapter per blade (B200 M3)
• 2 X Cisco UCS 6248UP Fabric Interconnects
• 2 X Cisco Nexus 5548UP access switches
Software components
• Cisco UCS firmware 2.1(3a)
• Citrix XenServer 6.2 SP1
• Citrix XenDesktop 7.1 Hosted Shared
• Citrix Provisioning Server 7.1
• Citrix User Profile Manager
• Microsoft Windows 7 SP1 32-bit, 1 vCPU, 1.5 GB RAM, 17 GB hard disk/VM
• Microsoft Windows Server 2012 SP1, 5 vCPU, 24GB RAM, 50 GB hard disk/VM
Figure 27 Cisco UCS B200 M3 Blade Server for Single Server Scalability XenDesktop 7.1 Hosted Shared Desktops (HSD) PVS 7.1 Login VSImax
Hardware components
• 1 X Cisco UCS B200-M3 (Intel Xeon E5-2680v2 processors @ 2.8 GHz) blade server with 384GB RAM (24 X 16 GB DIMMs @ 1866 MHz) running XenServer 6.2 SP1 as a Windows 7 SP1 32-bit virtual desktop host, and 256GB RAM (16 X 16 GB DIMMs @ 1866 MHz) running XenServer 6.2 SP1 as a Windows Server 2012 virtual desktop session host
• 2 X Cisco UCS B200-M3 (Intel Xeon E5-2650v2 processors @ 2.6 GHz) blade servers with 128 GB of memory (8 X 16 GB DIMMs @ 1866 MHz) as infrastructure servers
• 4 X Cisco UCS B250-M2 (5680 @ 3.33 GHz) blade servers with 192 GB of memory (48 X 4 GB DIMMs @ 1333 MHz) as load generators (not required for solution deployment)
• 1 X VIC1240 Converged Network Adapter per blade (B200 M3)
• 2 X Cisco UCS 6248UP Fabric Interconnects
• 2 X Cisco Nexus 5548UP access switches
• 2 X NetApp FAS3250 controllers with 4 DS4243 disk shelves and 512 GB Flash Cache cards
Software components
• Cisco UCS firmware 2.1(3a)
• Citrix XenServer 6.2 SP1
• Citrix XenDesktop 7.1 Hosted Virtual Desktops and RDS Hosted Shared Desktops
• Citrix Provisioning Server 7.1
• Citrix User Profile Manager
• Microsoft Windows Server 2012 SP1, 5 vCPU, 24GB RAM, 50 GB hard disk/VM
Cisco UCS Test Configuration for Single Blade Scalability with SSD
Figure 28 Cisco UCS B200 M3 Blade Server w/SSD for Single Server Scalability XenDesktop 7.1 Windows 7 Hosted Virtual Desktops (HVD), PVS 7.1 Login VSImax
Hardware components
• 1 X Cisco UCS B200-M3 (Intel Xeon E5-2680v2 processors @ 2.8 GHz) blade server with 384GB RAM (24 X 16 GB DIMMs @ 1866 MHz) and 2 x 400GB SSDs in a RAID 0 array for PVS write cache, running XenServer 6.2 SP1 as a Windows 7 SP1 32-bit virtual desktop host, and 256GB RAM (16 X 16 GB DIMMs @ 1866 MHz) running XenServer 6.2 SP1 as a Windows Server 2012 virtual desktop session host
• 2 X Cisco UCS B200-M3 (Intel Xeon E5-2650v2 processors @ 2.6 GHz) blade servers with 128 GB of memory (8 X 16 GB DIMMs @ 1866 MHz) as infrastructure servers
• 4 X Cisco UCS B250-M2 (5680 @ 3.33 GHz) blade servers with 192 GB of memory (48 X 4 GB DIMMs @ 1333 MHz) as load generators (not required for solution deployment)
• 1 X VIC1240 Converged Network Adapter per blade (B200 M3)
• 2 X Cisco UCS 6248UP Fabric Interconnects
• 2 X Cisco Nexus 5548UP access switches
Software components
• Cisco UCS firmware 2.1(3a)
• Citrix XenServer 6.2 SP1
• Citrix XenDesktop 7.1 Hosted Virtual Desktops and RDS Hosted Shared Desktops
• Citrix Provisioning Server 7.1
• Citrix User Profile Manager
• Microsoft Windows 7 SP1 32-bit, 1 vCPU, 1.5 GB RAM, 17 GB hard disk/VM
• Microsoft Windows Server 2012 SP1, 5 vCPU, 24GB RAM, 50 GB hard disk/VM
Figure 29 Cisco UCS B200 M3 Blade Server w/SSD for Single Server Scalability XenDesktop 7.1 Hosted Shared Desktops (HSD) with PVS 7.1 Login VSImax
Hardware components
• 1 X Cisco UCS B200-M3 (Intel Xeon E5-2680v2 @ 2.8 GHz) blade server with 384GB RAM (24 X 16 GB DIMMs @ 1866 MHz) and 2 x 400GB SSDs in a RAID 0 array for PVS write cache, running XenServer 6.2 SP1 as a Windows 7 SP1 32-bit virtual desktop host, and 256GB RAM (16 X 16 GB DIMMs @ 1866 MHz) running XenServer 6.2 SP1 as a Windows Server 2012 virtual desktop session host
• 2 X Cisco UCS B200-M3 (Intel Xeon E5-2650v2) blade servers with 128 GB of memory (8 X 16 GB DIMMs @ 1866 MHz) as infrastructure servers
• 4 X Cisco UCS B250-M2 (5680 @ 3.33 GHz) blade servers with 192 GB of memory (48 X 4 GB DIMMs @ 1333 MHz) as load generators
• 1 X VIC1240 Converged Network Adapter per blade (B200 M3)
• 2 X Cisco UCS 6248UP Fabric Interconnects
• 2 X Cisco Nexus 5548UP access switches
Software components
• Cisco UCS firmware 2.1(3a)
• Citrix XenServer 6.2 SP1
• Citrix XenDesktop 7.1 Hosted Shared
• Citrix Provisioning Server 7.1
• Citrix User Profile Manager
• Microsoft Windows Server 2012 SP1, 5 vCPU, 24GB RAM, 50 GB hard disk/VM
Cisco UCS Configuration for Cluster Tests
Figure 30 Four Blade Cluster XenDesktop 7.1 with Provisioning Server 7.1 - 550 Hosted Virtual Desktops
Hardware components
• 4 X Cisco UCS B200-M3 (Intel Xeon E5-2680v2 @ 2.8 GHz) blade servers with 384GB RAM (24 X 16 GB DIMMs @ 1866 MHz) running XenServer 6.2 SP1 as Windows 7 SP1 32-bit virtual desktop hosts, and 256GB RAM (16 X 16 GB DIMMs @ 1866 MHz) running XenServer 6.2 SP1 as Windows Server 2012 virtual desktop session hosts
• 2 X Cisco UCS B200-M3 (Intel Xeon E5-2650v2) blade servers with 128 GB of memory (8 X 16 GB DIMMs @ 1866 MHz) as infrastructure servers
• 4 X Cisco UCS B250-M2 (5680 @ 3.33 GHz) blade servers with 192 GB of memory (48 X 4 GB DIMMs @ 1333 MHz) as load generators
• 1 X VIC1240 Converged Network Adapter per blade (B200 M3)
• 2 X Cisco UCS 6248UP Fabric Interconnects
• 2 X Cisco Nexus 5548UP access switches
• 2 X NetApp FAS3250 controllers with 4 DS4243 disk shelves and 512 GB Flash Cache cards
Software components
• Cisco UCS firmware 2.1(3a)
• Citrix XenServer 6.2 SP1
• Citrix XenDesktop 7.1 Hosted Shared
• Citrix Provisioning Server 7.1
• Citrix User Profile Manager
Figure 31 Eight Blade Cluster XenDesktop 7.1 RDS with Provisioning Server 7.1 - 1450 Hosted Shared Desktops
Hardware components
• 8 X Cisco UCS B200-M3 (Intel Xeon E5-2680v2 @ 2.8 GHz) blade servers with 256 GB RAM (16 X 16 GB DIMMs @ 1866 MHz) running XenServer 6.2 SP1 as Windows Server 2012 virtual desktop session hosts
• 2 X Cisco UCS B200-M3 (Intel Xeon E5-2650v2) blade servers with 128 GB of memory (8 X 16 GB DIMMs @ 1866 MHz) as infrastructure servers
• 4 X Cisco UCS B250-M2 (5680 @ 3.33 GHz) blade servers with 192 GB of memory (48 X 4 GB DIMMs @ 1333 MHz) as load generators
• 1 X VIC1240 Converged Network Adapter per blade (B200 M3)
• 2 X Cisco UCS 6248UP Fabric Interconnects
• 2 X Cisco Nexus 5548UP access switches
• 2 X NetApp FAS3250 controllers with 4 DS4243 disk shelves and 512 GB Flash Cache cards
Software components
• Cisco UCS firmware 2.1(3a)
• Citrix XenServer 6.2 SP1
• Citrix XenDesktop 7.1 Hosted Shared
• Citrix Provisioning Server 7.1
• Citrix User Profile Manager
Cisco UCS Configuration for Two Chassis—Twelve Mixed Workload
Blade Test 2000 Users
Figure 32 Two Chassis Test Configuration - 12 B200 M3 Blade Servers - 2000 Mixed Workload Users
Hardware components
• 4 X Cisco UCS B200-M3 (Intel Xeon E5-2680v2 @ 2.8 GHz) blade servers with 384GB RAM (24 X 16 GB DIMMs @ 1866 MHz) running XenServer 6.2 SP1 as Windows 7 SP1 32-bit virtual desktop hosts
• 8 X Cisco UCS B200-M3 (Intel Xeon E5-2680v2 @ 2.8 GHz) blade servers with 256GB RAM (16 X 16 GB DIMMs @ 1866 MHz) running XenServer 6.2 SP1 as Windows Server 2012 virtual desktop session hosts
• 2 X Cisco UCS B200-M3 (E5-2650v2) blade servers with 128 GB of memory (8 X 16 GB DIMMs @ 1866 MHz) as infrastructure servers
• 4 X Cisco UCS B250-M2 (5680 @ 3.33 GHz) blade servers with 192 GB of memory (48 X 4 GB DIMMs @ 1333 MHz) as load generators
• 1 X VIC1240 Converged Network Adapter per blade (B200 M3)
• 2 X Cisco UCS 6248UP Fabric Interconnects
• 2 X Cisco Nexus 5548UP access switches
• 2 X NetApp FAS3250 controllers with 4 DS4243 disk shelves and 512 GB Flash Cache cards
Software components
• Cisco UCS firmware 2.1(3a)
• Citrix XenServer 6.2 SP1
• Citrix XenDesktop 7.1 Hosted Virtual Desktops and RDS Hosted Shared Desktops
• Citrix Provisioning Server 7.1
• Citrix User Profile Manager
• Microsoft Windows 7 SP1 32-bit, 1 vCPU, 1.5 GB RAM, 17 GB hard disk/VM
• Microsoft Windows Server 2012 SP1, 5 vCPU, 24GB RAM, 50 GB hard disk/VM
Testing Methodology and Success Criteria
The testing results focused on the entire virtual desktop lifecycle by capturing metrics during desktop boot-up, user login and virtual desktop acquisition (also referred to as ramp-up), user workload execution (also referred to as steady state), and user logoff for the XenDesktop 7.1 Hosted Virtual Desktop and RDS Hosted Shared models under test.
Test metrics were gathered from the hypervisor, virtual desktop, storage, and load generation software
to assess the overall success of an individual test cycle. Each test cycle was not considered passing
unless all of the planned test users completed the ramp-up and steady state phases (described below) and
unless all metrics were within the permissible thresholds as noted as success criteria.
Three successfully completed test cycles were conducted for each hardware configuration and results
were found to be relatively consistent from one test to the next.
Load Generation
Within each test environment, load generators were utilized to put demand on the system, simulating multiple users accessing the XenDesktop 7.1 environment and executing a typical end-user workflow. To generate load, an auxiliary software application was required to generate the end-user connection to the XenDesktop 7.1 environment, provide unique user credentials, initiate the workload, and evaluate the end-user experience. In the Hosted VDI test environment, session launchers were used to simulate multiple users making a direct connection to XenDesktop 7.1 via a Citrix HDX protocol connection.
User Workload Simulation - LoginVSI From Login VSI Inc.
One of the most critical factors in validating a desktop virtualization deployment is identifying a real-world user workload that is easy for customers to replicate and that is standardized across platforms, allowing customers to realistically test the impact of a variety of worker tasks. To accurately represent a real-world user workload, a third-party tool from Login VSI Inc. was used throughout the Hosted VDI testing. The tool has the benefit of taking measurements of the in-session response time, providing an objective way to measure the expected user experience for individual desktops throughout large-scale testing, including login storms.
The Login Virtual Session Indexer (Login VSI Inc.'s Login VSI 3.7) methodology, designed for benchmarking Server Based Computing (SBC) and Virtual Desktop Infrastructure (VDI) environments, is completely platform and protocol independent and therefore allows customers to easily replicate the testing results in their own environment.
Note
In this test, we utilized the tool to benchmark our VDI environment only.
Login VSI calculates an index based on the number of simultaneous sessions that can be run on a single machine. Login VSI simulates a medium-workload user (also known as a knowledge worker) running generic applications such as Microsoft Office 2007 or 2010, Internet Explorer 8 including a Flash video applet, and Adobe Acrobat Reader.
Note
For the purposes of this test, applications were installed locally, not streamed by ThinApp.
As with actual users, the scripted Login VSI session leaves multiple applications open at the same time. The medium workload is the default workload in Login VSI and was used for this testing. This workload emulates a medium knowledge worker using Office, IE, printing and PDF viewing.
• When a session has been started, the medium workload repeats every 12 minutes.
• During each loop, the response time is measured every 2 minutes.
• The medium workload opens up to 5 applications simultaneously.
• The type rate is 160ms per character.
• Approximately 2 minutes of idle time is included to simulate real-world users.
Each loop opens and uses:
• Outlook 2007/2010: browse 10 messages.
• Internet Explorer: one instance is left open (BBC.co.uk); one instance browses Wired.com, Lonelyplanet.com and a heavy 480p Flash application (gettheglass.com).
• Word 2007/2010: one instance to measure response time, one instance to review and edit a document.
• Bullzip PDF Printer and Acrobat Reader: the Word document is printed to PDF and reviewed.
• Excel 2007/2010: a very large randomized sheet is opened.
• PowerPoint 2007/2010: a presentation is reviewed and edited.
• 7-zip: using the command-line version, the output of the session is zipped.
A graphical representation of the medium workload is shown below.
You can obtain additional information and a free test license from http://www.loginvsi.com.
Testing Procedure
The following protocol was used for each test cycle in this study to ensure consistent results.
Pre-Test Setup for Single and Multi-Blade Testing
• All virtual machines were shut down using the XenDesktop 7.1 Administrator and XenCenter.
• All launchers for the test were shut down. They were then restarted in groups of 10 each minute until the required number of launchers was running with the Login VSI Agent at a "waiting for test to start" state.
• All Citrix XenServer 6.2 SP1 VDI host blades to be tested were restarted prior to each test cycle.
Test Run Protocol
To simulate severe, real-world environments, Cisco requires the log-on and start-work sequence, known as ramp-up, to complete in 30 minutes. Additionally, we require all sessions started, whether 195 single-server users or 600 full-scale test users, to become active within 2 minutes after the last session is launched. In addition, Cisco requires that the Login VSI Parallel Launching method be used for all single-server and scale testing. This assures that our tests represent real-world scenarios.
Note
The Login VSI Sequential Launching method allows the CPU, storage and network components to rest between logins. This does not produce results that are consistent with the real-world scenarios that our customers run in.
For each of the three consecutive runs on single-server (195 user) and 4- and 5-server (500 and 600 user) tests, the same process was followed:
1. Time 0:00:00 Started XenServer performance metrics logging on the following systems:
– VDI host blades used in the test run
– DDCs used in the test run
– SQL server(s) used in the test run
– Provisioning Servers
– StoreFront servers
2. Time 0:00:10 Started NetApp IOStats logging on the controllers.
3. Time 0:00:15 Started Perfmon logging on key infrastructure VMs.
4. Time 0:05 Took the test desktop Delivery Group(s) out of maintenance mode in XenDesktop 7.1 Studio.
5. Time 0:06 First machines boot.
6. Time 0:26 Test desktops or RDS servers booted.
7. Time 0:28 Test desktops or RDS servers registered with XenDesktop 7.1 Studio.
8. Time 1:28 Started the Login VSI 3.7 test using the test desktops and Login VSI launchers (25 sessions per launcher).
9. Time 1:58 All test sessions launched.
10. Time 2:00 All test sessions active.
11. Time 2:15 Login VSI test ends.
12. Time 2:30 All test sessions logged off.
13. Time 2:35 All logging terminated.
Success Criteria
There were multiple metrics that were captured during each test run, but the success criteria for
considering a single test run as pass or fail was based on the key metric, VSImax. The Login VSImax
evaluates the user response time during increasing user load and assesses the successful start-to-finish
execution of all the initiated virtual desktop sessions.
Login VSImax
VSImax represents the maximum number of users the environment can handle before serious performance degradation occurs. VSImax is calculated based on the response times of individual users as measured during the workload execution. The user response time has a threshold of 4000ms, and all users' response times are expected to be less than 4000ms in order to assume that the user's interaction with the virtual desktop is at a functional level. VSImax is reached when the response time reaches or exceeds 4000ms for 6 consecutive occurrences. If VSImax is reached, it indicates the point at which the user experience has significantly degraded. The response time is generally an indicator of the host CPU resources, but this specific method of analyzing the user experience provides an objective method of comparison that can be aligned to host CPU performance.
Note
In the prior version of Login VSI, the threshold for response time was 2000ms. The workloads and the
analysis have been upgraded in Login VSI 3 to make the testing more aligned to real-world use. In the
medium workload in Login VSI 3.0, a CPU intensive 480p flash movie is incorporated in each test loop.
In general, the redesigned workload would result in an approximate 20% decrease in the number of users
passing the test versus Login VSI 2.0 on the same server and storage hardware.
Calculating VSIMax
Typically the desktop workload is scripted in a 12-14 minute loop when a simulated Login VSI user is logged on. After the loop finishes, it restarts automatically. Within each loop, the response times of seven specific operations are measured at a regular interval: six times within each loop. The response times of these seven operations are used to establish VSImax.
The seven operations from which the response times are measured are as follows:
• Copy a new document from the document pool in the home drive
– This operation refreshes a new document to be used for measuring the response time. This activity is mostly a file-system operation.
• Starting Microsoft Word with a document
– This operation measures the responsiveness of the operating system and the file system. Microsoft Word is started and loaded into memory, and the new document is automatically loaded into Microsoft Word. When the disk I/O is extensive or even saturated, this impacts the file open dialogue considerably.
• Starting the "File Open" dialogue
– This operation is handled in small part by Word and in large part by the operating system. The file open dialogue uses generic subsystems and interface components of the OS. The OS provides the contents of this dialogue.
• Starting "Notepad"
– This operation is handled by the OS (loading and initiating notepad.exe) and by Notepad.exe itself through execution. This operation seems instant from an end-user's point of view.
• Starting the "Print" dialogue
– This operation is handled in large part by the OS subsystems, as the print dialogue is provided by the OS. This dialogue loads the print subsystem and the drivers of the selected printer. As a result, this dialogue is also dependent on disk performance.
• Starting the "Search and Replace" dialogue
– This operation is handled entirely within the application; the presentation of the dialogue is almost instant. Serious bottlenecks at the application level will impact the speed of this dialogue.
• Compressing the document into a zip file with the 7-zip command line
– This operation is handled by the command-line version of 7-zip. The compression will very briefly spike CPU and disk I/O.
These measured operations in Login VSI hit considerably different subsystems such as CPU (user and kernel), memory, disk, the OS in general, the application itself, print, GDI, and so on. These operations are specifically short by nature. When such operations are consistently long, the system is saturated because of excessive queuing on some resource, and the average response times escalate. This effect is clearly visible to end-users: when such operations consistently take multiple seconds, the user will regard the system as slow and unresponsive. With Login VSI 3.0 and later it is possible to choose between 'VSImax Classic' and 'VSImax Dynamic' results analysis. For these tests, we utilized VSImax Dynamic analysis.
VSIMax Dynamic
VSImax Dynamic is calculated when the response times are consistently above a certain threshold.
However, this threshold is dynamically calculated from the baseline response time of the test.
Individual measurements are weighted to better support this approach:
• Copy new document from the document pool in the home drive: 100%
• Starting Microsoft Word with a document: 33.3%
• Starting the "File Open" dialogue: 100%
• Starting "Notepad": 300%
• Starting the "Print" dialogue: 200%
• Starting the "Search and Replace" dialogue: 400%
• Compressing the document into a zip file with the 7-zip command line: 200%
A sample of the VSImax Dynamic response time calculation is displayed below:
The average VSImax response time is then calculated based on the number of active Login VSI users
logged on to the system. For this, the average VSImax response times need to be consistently higher
than a dynamically calculated threshold.
To determine this dynamic threshold, the average baseline response time is first calculated by
averaging the baseline response times of the first 15 Login VSI users on the system.
The formula for the dynamic threshold is: Avg. Baseline Response Time x 125% + 3000. As a result,
when the baseline response time is 1800ms, the VSImax threshold will be 1800 x 125% + 3000 =
5250ms.
When application virtualization is used, the baseline response time can vary widely per vendor and
streaming strategy. Therefore, it is recommended to use VSImax Dynamic when comparisons are made
with application virtualization or anti-virus agents. The resulting VSImax Dynamic scores then remain
aligned with saturation at the CPU, memory, or disk level, even when the baseline response times are
relatively high.
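To make the weighting and threshold arithmetic concrete, the following is a minimal Python sketch of the calculation described above. It is illustrative only and is not the Login VSI analyzer's actual implementation; the operation names are shorthand labels for the seven measured operations.

```python
# Illustrative sketch of the VSImax Dynamic arithmetic described above.
# Weights come from the list earlier in this section.

WEIGHTS = {
    "copy_new_document": 1.000,            # 100%
    "start_word_with_document": 0.333,     # 33.3%
    "file_open_dialogue": 1.000,           # 100%
    "start_notepad": 3.000,                # 300%
    "print_dialogue": 2.000,               # 200%
    "search_and_replace_dialogue": 4.000,  # 400%
    "zip_with_7zip": 2.000,                # 200%
}

def weighted_response_ms(samples_ms: dict) -> float:
    """Apply the per-operation weights to one loop's raw response times (ms)."""
    return sum(samples_ms[op] * w for op, w in WEIGHTS.items())

def dynamic_threshold_ms(avg_baseline_ms: float) -> float:
    """Dynamic threshold = Avg. Baseline Response Time x 125% + 3000 (ms)."""
    return avg_baseline_ms * 1.25 + 3000

# The baseline is the average response time of the first 15 users on the
# system; the example from the text: a 1800ms baseline gives 5250ms.
assert dynamic_threshold_ms(1800) == 5250
```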
Determining VSIMax
The Login VSI analyzer will automatically identify the "VSImax". In the example below the VSImax is
98. The analyzer will automatically determine "stuck sessions" and correct the final VSImax score.
• Vertical axis: Response Time in milliseconds
• Horizontal axis: Total Active Sessions
Figure 33
Sample Login VSI Analyzer Graphic Output
• Red line: Maximum Response (worst response time of an individual measurement within a single
session)
• Orange line: Average Response Time for each level of active sessions
• Blue line: the VSImax average
• Green line: Minimum Response (best response time of an individual measurement within a single
session)
In our tests, the total number of users in the test run had to log in, become active, run at least one test
loop, and log out automatically without reaching VSImax for the run to be considered a success.
Note
We discovered a technical issue with the VSImax Dynamic calculation in our testing on Cisco B230 M2
blades, where VSImax Dynamic was not reached under extreme conditions. Working with Login
VSI, Inc., we devised a methodology to validate the testing without reaching VSImax Dynamic until such
time as a new calculation is available.
Our Login VSI "pass" criteria, accepted by Login VSI, Inc. for this testing, follow:
• Cisco will run tests at a session count level that effectively utilizes the blade capacity, measured by
CPU utilization, memory utilization, storage utilization, and network utilization.
• We will use Login VSI to launch version 3.6 medium workloads, including Flash.
• The number of launched sessions must equal the number of active sessions within two minutes of
the last session launched in a test.
• The XenDesktop 7.1 Administrator will be monitored throughout the steady state to make sure that:
– All running sessions report In Use throughout the steady state
– No sessions move to Agent Unreachable or Disconnected state at any time during the steady state
– Within 20 minutes of the end of the test, all sessions on all launchers must have logged out
automatically and the Login VSI agent must have shut down
• We will publish our CVD with our recommendation following the process detailed above and will
note that we did not reach VSImax Dynamic in our testing due to a technical issue with the analyzer
formula that calculates VSImax.
Citrix XenDesktop 7.1 Hosted Virtual Desktop (HVD) and
Hosted Shared Desktop (HSD) Mixed Workload on Cisco
UCS B200 M3 Blades, NetApp 3250 and Citrix XenServer
6.2 SP1 Test Results
This section details the test results for Citrix XenDesktop 7.1 Hosted Virtual Desktop (HVD) and Hosted
Shared Desktop (HSD) Mixed Workload on Cisco UCS B200 M3 Blade Servers, NetApp 3250 and
Citrix XenServer 6.2 SP1.
The purpose of this test is to provide the data needed to validate Citrix XenDesktop 7.1 Hosted Virtual
Desktop and Hosted Shared Desktop with Citrix Provisioning Services 7.1 using XenServer 6.2 SP1 to
virtualize Microsoft Windows 7 SP1 desktops and Microsoft Windows Server 2012 on Cisco UCS B200
M3 Blade Servers using a NetApp FAS3250 storage system.
The information contained in this section provides data points that a customer may reference in
designing their own implementations. These validation results are an example of what is possible under
the specific environment configuration outlined here, and do not represent the full characterization of
XenDesktop with XenServer 6.2 SP1.
Two test sequences, each containing three consecutive test runs generating the same result, were
performed to establish single blade performance and multi-blade, linear scalability.
One series of stress tests on a single blade server was conducted to establish the official Login VSI Max
Score.
To reach the Login VSI Max with XenDesktop 7.1 Hosted Virtual Desktop, we ran 200 Medium
Workload (with Flash) Windows 7 SP1 sessions on a single blade. A consistent Login VSI score of 175
was achieved on three consecutive runs and is shown below.
Figure 34
HVD 200 users Login VSI Max Score: 175
To reach the Login VSI Max with Citrix Hosted Shared Desktop 7.1, we ran 216 Medium Workload
(with Flash) Windows Server 2012 sessions on a single blade hosting 8 Hosted Shared Desktop virtual
machines. A consistent Login VSI score was achieved on three consecutive runs and is shown below.
Figure 35
HSD 216 users Login VSI Max Score: 198
Single-Server Scalability Test Results
One of the criteria used to validate the overall success of the test cycle is an output chart from Login
Consultants' VSI Analyzer Professional Edition: VSI Max Dynamic for the Medium workload (with
Flash), which determines whether VSI Max is reached. During Single-Server Scalability testing, we
performed a VSI Max test and a Recommended Load test.
VSIMax determines the maximum session density per blade, while the Recommended Load is a
reduced-scale load that we recommend for production use.
See Test Setup and Configurations to learn more about VSImax.
Single-Server Hosted Virtual Desktop VSI Max
This section details the results from the Hosted Virtual Desktop 7.1 single blade server hosting 200
Hosted Virtual Desktops streamed by 1 Provisioning Server. The VSI Max score is 175. The test
delivered the following results, including data from key components in the environment:
Figure 36
Login VSI Max Score: 175
Figure 37
Test Information
Test Phase | Boot Storm Start | Boot Storm End | Test Start | All Users Logged In | Log Off Start | All Users Logged Off
Time | 2:10PM | 2:38PM | 3:45PM | 4:15PM | 4:33PM | 4:47PM
Figure 38
Host CPU Utilization
Figure 39
Host Memory Utilization
Figure 40
Host Network Utilization
Figure 41
XenDesktop Delivery Controller CPU Utilization
Figure 42
XenDesktop Delivery Controller Memory Utilization
Figure 43
XenDesktop Delivery Controller Network Utilization
Figure 44
Provisioning Services CPU Utilization
Figure 45
Provisioning Services Memory Utilization
Figure 46
Provisioning Services Network Utilization
Figure 47
StoreFront CPU Utilization
Figure 48
StoreFront Memory Utilization
Figure 49
StoreFront Network Utilization
Single-Server Hosted Virtual Desktop Recommended Maximum Load
This section provides the results from the Hosted Virtual Desktop single blade server hosting 180 HVD
virtual machines streamed by 1 Provisioning Server. The purpose of this test is to validate the maximum
recommended number of virtual desktops to use in establishing your server N+1 requirement.
At a 180 user load, the Cisco UCS B200 M3 Blade Server delivered excellent end-user response times
without exhausting server, network, or storage resources.
The test delivered the following results, including data from key components in the environment:
Figure 50
Login VSI Response 180 Users
Figure 51
Test Information
Test Phase | Boot Storm Start | Boot Storm End | Test Start | All Users Logged In | Log Off Start | All Users Logged Off
Time | 2:30PM | 3:00PM | 3:55PM | 4:26PM | 4:44PM | 4:58PM
Figure 52
Host CPU Utilization
Figure 53
Host Memory Utilization
Figure 54
Host Network Utilization
Figure 55
XenDesktop Delivery Controller CPU Utilization
Figure 56
XenDesktop Delivery Controller Memory Utilization
Figure 57
XenDesktop Delivery Controller Network Utilization
Figure 58
Provisioning Services CPU Utilization
Figure 59
Provisioning Services Memory Utilization
Figure 60
Provisioning Services Network Utilization
Figure 61
StoreFront CPU Utilization
Figure 62
StoreFront Memory Utilization
Figure 63
StoreFront Network Utilization
Single-Server Hosted Shared Desktop VSI Max
This section details the results from the Hosted Shared Desktop single blade server hosting 8 HSD
virtual machines streamed by 1 Provisioning Server.
In this test we ran 216 sessions, reaching a VSI Max score of 198.
The test delivered the following results, including data from key components in the environment:
Figure 64
Login VSI Max score: 198 Users
Figure 65
Test Information
Test Phase | Test Start | All Users Logged In | Log Off Start | All Users Logged Off
Time | 10:19AM | 10:49AM | 11:08AM | 11:36AM
Figure 66
Host CPU Utilization
Figure 67
Host Memory Utilization
Figure 68
Host Network Utilization
Figure 69
XenDesktop Delivery Controller CPU Utilization
Figure 70
XenDesktop Delivery Controller Memory Utilization
Figure 71
XenDesktop Delivery Controller Network Utilization
Figure 72
Provisioning Services CPU Utilization
Figure 73
Provisioning Services Memory Utilization
Figure 74
Provisioning Services Network Utilization
Figure 75
StoreFront CPU Utilization
Figure 76
StoreFront Memory Utilization
Figure 77
StoreFront Network Utilization
Single-Server Hosted Shared Desktop Recommended Maximum Load
This section provides the results from the Hosted Shared Desktop single blade server test: 208 sessions
on 8 HSD virtual machines streamed by 1 Provisioning Server. The purpose of this test is to validate the
maximum recommended number of hosted shared desktop sessions to use in establishing your server
N+1 requirement.
At a 208 user load, the Cisco UCS B200 M3 Blade Server delivered excellent end-user response times
without exhausting server, network, or storage resources.
The test delivered the following results, including data from key components in the environment:
Figure 78
Login VSI Response 208 Users
Figure 79
Test Information
Test Phase | Test Start | All Users Logged In | Log Off Start | All Users Logged Off
Time | 11:58AM | 12:29PM | 12:47PM | 1:17PM
Figure 80
Host CPU Utilization
Figure 81
Host Memory Utilization
(Chart: 192.168.1.50 -- host -- memory_free_kib, ranging from 0 to 250,000,000 KiB over the test run)
Figure 82
Host Network Utilization
Figure 83
XenDesktop Delivery Controller CPU Utilization
Figure 84
XenDesktop Delivery Controller Memory Utilization
Figure 85
XenDesktop Delivery Controller Network Utilization
Figure 86
Provisioning Services CPU Utilization
Figure 87
Provisioning Services Memory Utilization
Figure 88
Provisioning Services Network Utilization
Figure 89
StoreFront CPU Utilization
Figure 90
StoreFront Memory Utilization
Figure 91
StoreFront Network Utilization
Single-Server XenDesktop 7.1 Hosted Virtual Desktop with SSD Storage VSI Max
This section provides the results from the XenDesktop 7.1 Hosted Virtual Desktop single blade server
validation testing with local SSD storage. We tested with 200 users, the same scale used in the
remote NetApp NFS storage scenario. The test delivered the following results, including data from key
components in the environment:
Figure 92
Login VSI Max Score: 191
Figure 93
Test Information
Test Phase | Boot Storm Start | Boot Storm End | Test Start | All Users Logged In | Log Off Start | All Users Logged Off
Time | 8:40AM | 9:08AM | 11:20AM | 11:51AM | 12:03PM | 12:30PM
Figure 94
Host CPU Utilization
Figure 95
Host Memory Available Mbytes
Figure 96
Host Network Utilization
Figure 97
XenDesktop Delivery Controller CPU Utilization
Figure 98
XenDesktop Delivery Controller Memory Utilization
Figure 99
XenDesktop Controller Network Utilization
Figure 100
Provisioning Services CPU Utilization
Figure 101
Provisioning Services Memory Utilization
Figure 102
Provisioning Services Network Utilization
Figure 103
StoreFront CPU Utilization
Figure 104
StoreFront Memory Utilization
Figure 105
StoreFront Network Utilization
Single-Server XenDesktop 7.1 Hosted Shared Desktop with SSD Storage VSI Max
This section details the results from the XenDesktop 7.1 Hosted Shared Desktop single blade server
validation testing on local SSD storage. We tested with 216 users, the same scale used in the remote
NetApp NFS storage scenario.
The test delivered the following results, including data from key components in the environment:
Figure 106
Login VSI Max Score: 198
Figure 107
Test Information
Test Phase | Test Start | All Users Logged In | Log Off Start | All Users Logged Off
Time | 12:48PM | 1:20PM | 1:36PM | 1:48PM
Figure 108
Host CPU Utilization
(Chart: 192.168.1.50 -- host -- per-core CPU utilization, cpu8 through cpu39, 0-100% over the test run)
Figure 109
Host Memory Available Mbytes
Figure 110
Host Network Utilization
Figure 111
XenDesktop Delivery Controller CPU Utilization
(Chart: Processor -- % Processor Time -- _Total, 0-40% over the test run)
Figure 112
XenDesktop Delivery Controller Memory Utilization
Figure 113
XenDesktop Controller Network Utilization
(Chart: Network Interface -- Bytes Received/sec and Bytes Sent/sec -- Citrix PV Network Adapter _0, 0 to 1,200,000 bytes/sec over the test run)
Figure 114
Provisioning Services CPU Utilization
Figure 115
Provisioning Services Memory Utilization
(Chart: Memory -- Available MBytes, approximately 14,215 to 14,250 MB over the test run)
Figure 116
Provisioning Services Network Utilization
Figure 117
StoreFront CPU Utilization
Figure 118
StoreFront Memory Utilization
Figure 119
StoreFront Network Utilization
Single Cluster Scalability Test Results
This section details the results from the XenDesktop 7.1 Hosted Virtual Desktop, and Hosted Shared
Desktop individual cluster validation testing. It demonstrates linear scalability for the system. We have
used the Cisco protocol for XenDesktop described in section 8 to determine the success criteria. One of
FlexPod Datacenter with Citrix XenDesktop 7.1 and Citrix XenServer 6.2
340
Citrix XenDesktop 7.1 Hosted Virtual Desktop (HVD) and Hosted Shared Desktop (HSD) Mixed Workload on
the criteria used to validate the overall success of the test cycle is an output chart from Login
Consultants' VSI Analyzer Professional Edition, VSI Max Dynamic for the Medium workload (with
Flash)
Single Cluster XenDesktop 7.1 Hosted Virtual Desktop
For single cluster Hosted Virtual Desktop testing, we used a Citrix XenServer 6.2 SP1 pool with 4 Cisco
UCS B200 M3 blades to provide N+1 server fault tolerance, based on our Recommended Maximum
Load value shown above. The rest of the infrastructure was scaled to meet the cluster scale requirements:
we added a second Desktop Delivery Controller, a second StoreFront server load balanced by a Citrix
NetScaler HA pair, and a 5-server PVS farm.
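As a rough cross-check of how these pool sizes relate to the single-server Recommended Maximum Load values, here is a minimal sketch; the helper is hypothetical and not part of the validated tooling. Note that the 4-blade HVD pool lands just under the 550 sessions actually tested.

```python
# Hypothetical helper: sessions an N+1 pool can still carry if one blade
# fails, using the Recommended Maximum Load values from the single-server
# tests above (180 HVD sessions or 208 HSD sessions per blade).

def n_plus_1_capacity(blades: int, recommended_load: int) -> int:
    """Sessions supportable with one blade held in reserve (N+1)."""
    return (blades - 1) * recommended_load

print(n_plus_1_capacity(4, 180))   # 540 -> approximately the 550-session HVD cluster
print(n_plus_1_capacity(8, 208))   # 1456 -> covers the 1450-session HSD cluster
```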
The test delivered the following results, including data from key components in the environment:
Figure 120
Login VSI response graph 550 users
Figure 121
Test Information
Test Phase | Boot Storm Start | Boot Storm End | Test Start | All Users Logged In | Log Off Start | All Users Logged Off
Time | 8:19AM | 8:47AM | 10:26AM | 10:57AM | 11:15AM | 11:27AM
The following graphs detail CPU, memory, disk, and network performance on 1 representative blade
from the four-blade Cisco UCS B200 M3 XenServer 6.2 SP1 pool, along with critical delivery
infrastructure performance information.
Figure 122
Cisco UCS B200-CH3-BL04 CPU Utilization
Figure 123
Cisco UCS B200-CH3-BL04 Memory Utilization
Figure 124
Cisco UCS B200-CH3-BL04 Network Utilization
Figure 125
XenDesktop Delivery Controller server 1 CPU Utilization
Figure 126
XenDesktop Delivery Controller server 1 Memory Utilization
Figure 127
XenDesktop Delivery Controller server 1 Network Utilization
Figure 128
XenDesktop Delivery Controller server 2 CPU Utilization
Figure 129
XenDesktop Delivery Controller server 2 Memory Utilization
Figure 130
XenDesktop Delivery Controller server 2 Network Utilization
Figure 131
Provisioning Services 1 CPU Utilization
Figure 132
Provisioning Services 1 Memory Utilization
Figure 133
Provisioning Services 1 Network Utilization
Figure 134
Provisioning Services 2 CPU Utilization
Figure 135
Provisioning Services 2 Memory Utilization
Figure 136
Provisioning Services 2 Network Utilization
Figure 137
Provisioning Services 3 CPU Utilization
Figure 138
Provisioning Services 3 Memory Utilization
Figure 139
Provisioning Services 3 Network Utilization
Figure 140
Provisioning Services 4 CPU Utilization
Figure 141
Provisioning Services 4 Memory Utilization
Figure 142
Provisioning Services 4 Network Utilization
Figure 143
Provisioning Services 5 CPU Utilization
Figure 144
Provisioning Services 5 Memory Utilization
Figure 145
Provisioning Services 5 Network Utilization
Figure 146
StoreFront Server 1 CPU Utilization
Figure 147
StoreFront Server 1 Memory Utilization
Figure 148
StoreFront Server 1 Network Utilization
Figure 149
StoreFront Server 2 CPU Utilization
Figure 150
StoreFront Server 2 Memory Utilization
Figure 151
StoreFront Server 2 Network Utilization
Single Cluster XenDesktop 7.1 Hosted Shared Desktop
For Hosted Shared Desktop single cluster testing, we utilized a Citrix XenServer 6.2 SP1 pool with 8
Cisco UCS B200 M3 blades to provide N+1 server fault tolerance, based on our Recommended
Maximum Load value shown above. The same infrastructure scale as the previous section (Single
Cluster XenDesktop 7.1 Hosted Virtual Desktop) was used: two XenDesktop Delivery Controllers, two
load-balanced StoreFront servers, and a 5-server PVS farm.
The test size is 1450 sessions on 64 Hosted Shared Desktop VMs (8 VMs per blade).
The test delivered the following results, including data from key components in the environment:
Figure 152
Login VSI response graph 1450 users
The following graphs detail CPU, memory, disk, and network performance on 1 representative blade
from the eight-blade Cisco UCS B200 M3 XenServer 6.2 SP1 pool, along with critical delivery
infrastructure performance information.
Figure 153
Cisco UCS B200-CH4-BL01 CPU Utilization
Figure 154
Cisco UCS B200-CH4-BL01 Memory Utilization
Figure 155
Cisco UCS B200-CH4-BL01 Network Utilization
Figure 156
XenDesktop Delivery Controller server 1 CPU Utilization
Figure 157
XenDesktop Delivery Controller server 1 Memory Utilization
Figure 158
XenDesktop Delivery Controller server 1 Network Utilization
Figure 159
XenDesktop Delivery Controller server 2 CPU Utilization
Figure 160
XenDesktop Delivery Controller server 2 Memory Utilization
Figure 161
XenDesktop Delivery Controller server 2 Network Utilization
Figure 162
Provisioning Services Server 1 CPU Utilization
Figure 163
Provisioning Services Server 1 Memory Utilization
Figure 164
Provisioning Services Server 1 Network Utilization
Figure 165
Provisioning Services Server 2 CPU Utilization
(Chart: Processor -- % Processor Time -- _Total, 0-40% over the test run)
Figure 166
Provisioning Services Server 2 Memory Utilization
Figure 167
Provisioning Services Server 2 Network Utilization
Figure 168
Provisioning Services Server 3 CPU Utilization
Figure 169
Provisioning Services Server 3 Memory Utilization
Figure 170
Provisioning Services Server 3 Network Utilization
Figure 171
Provisioning Services Server 4 CPU Utilization
Figure 172
Provisioning Services Server 4 Memory Utilization
Figure 173
Provisioning Services Server 4 Network Utilization
Figure 174
Provisioning Services Server 5 CPU Utilization
Figure 175
Provisioning Services Server 5 Memory Utilization
Figure 176
Provisioning Services Server 5 Network Utilization
Figure 177
StoreFront Server 1 CPU Utilization
Figure 178
StoreFront Server 1 Memory Utilization
Figure 179
StoreFront Server 1 Network Utilization
Figure 180
StoreFront Server 2 CPU Utilization
Figure 181
StoreFront Server 2 Memory Utilization
Figure 182
StoreFront Server 2 Network Utilization
Full Scale Mixed Workload XenDesktop 7.1 Hosted Virtual Desktops and
Hosted Shared Desktops Test Results
This section details the results from the full scale combined XenDesktop 7.1 Hosted Virtual Desktop
and Hosted Shared Desktop 2000-user validation test. It demonstrates linear scalability for the system.
We used the Cisco Test Protocol for XenDesktop described in section 8 to determine the success
criteria. One of the criteria used to validate the overall success of the test cycle is an output chart from
Login Consultants' VSI Analyzer Professional Edition: VSI Max Dynamic for the Medium workload
(with Flash).
We ran the full scale test with 550 Hosted Virtual Desktop sessions and 1450 Hosted Shared Desktop
sessions. An N+1 fault tolerance configuration was used per cluster, based on the recommended
maximum load determined in earlier tests.
The full scale test delivered the following results:
Figure 183
2000 Desktop Sessions on Citrix XenServer 6.2 SP1
The following graphs detail CPU, memory, disk, and network performance on representative Cisco
UCS B200 M3 blade servers during the twelve-blade, 2000-user test. (Representative results for all
blades in each of the XenDesktop pools can be found in Appendix C.)
Figure 184
DC-HVD CH3-BL03 CPU Utilization
Figure 185
DC-HVD CH3-BL03 Memory Utilization
Figure 186
DC-HVD CH3-BL03 Network Utilization
Figure 187
DC-HSD B200-CH4-BL01 CPU Utilization
Figure 188
DC-HSD B200-CH4-BL01 Memory Utilization
Figure 189
DC-HSD Network Utilization
Figure 190
XenDesktop Delivery Controller server 1 CPU Utilization
Figure 191
XenDesktop Delivery Controller server 1 Memory Utilization
Figure 192
XenDesktop Delivery Controller server 1 Network Utilization
Figure 193
XenDesktop Delivery Controller server 2 CPU Utilization
Figure 194
XenDesktop Delivery Controller server 2 Memory Utilization
Figure 195
XenDesktop Delivery Controller server 2 Network Utilization
Figure 196
Provisioning services server 1 CPU Utilization
Figure 197
Provisioning services server 1 Memory Utilization
Figure 198
Provisioning services server 1 Network Utilization
Figure 199
Provisioning services server 2 CPU Utilization
Figure 200
Provisioning services server 2 Memory Utilization
Figure 201
Provisioning services server 2 Network Utilization
Figure 202
Provisioning services server 3 CPU Utilization
Figure 203
Provisioning services server 3 Memory Utilization
Figure 204
Provisioning services server 3 Network Utilization
Figure 205
Provisioning services server 4 CPU Utilization
Figure 206
Provisioning services server 4 Memory Utilization
Figure 207
Provisioning services server 4 Network Utilization
Figure 208
Provisioning services server 5 CPU Utilization
Figure 209
Provisioning services server 5 Memory Utilization
Figure 210
Provisioning services server 5 Network Utilization
Figure 211
StoreFront server 1 CPU Utilization
Figure 212
StoreFront server 1 Memory Utilization
Figure 213
StoreFront server 1 Network Utilization
Figure 214
StoreFront server 2 CPU Utilization
Figure 215
StoreFront server 2 Memory Utilization
Figure 216
StoreFront server 2 Network Utilization
(Chart: Network Interface -- Bytes Received/sec and Bytes Sent/sec -- Citrix PV Network Adapter _0, 0 to 1,200,000 bytes/sec over the test run)
Key NetApp FAS3250 Performance Metrics During Scale Testing
This section details the key performance metrics that were captured on the NetApp storage controller
during the full-scale testing.
Workload Test Cases
Boot: All 550 HVD sessions and 1450 HSD sessions were booted at the same time.
Login: The test assumed 2000 users logging in over a period of 30 minutes, which equates to
approximately 67 users logging in per minute. After the users were logged in, a quiet period of 1 hour
elapsed before starting the tests in steady state.
Steady state: In the steady-state workload, all users performed various tasks such as using Microsoft
Office, web browsing, PDF printing, playing Flash videos, compressing and uncompressing archives,
and using the freeware mind-mapper application.
Logoff: The logoff sequence was initiated after the workload completed for each user.
The test setup included the following:
• FAS3250 two-node cluster
• 4 shelves of SAS 450GB 15K RPM drives
• Clustered Data ONTAP 8.2P5
• 10GbE Intel cards and Intel SFPs for NFS and CIFS
Performance Results
• NetApp Flash Cache decreases disk IOPS during the boot and login phases.
• The storage can easily handle the 2000-user virtual desktop workload with an average read latency
of less than 3ms and a write latency of less than 1ms, which is excellent for any user type. Based on
the test results with 2000 users and the IOPS headroom left in the system, this configuration can
easily support more than 3000 users.
• With NetApp clustered Data ONTAP, volumes hosting live virtual desktops can easily be moved
between the nodes without any downtime or impact to the end-user experience.
• Boot time is consistently 30 minutes, and login time for 2000 users is 30 minutes.
During the steady state test using the Login VSI medium workload, we observed IOPS loads of more
than 22K on the storage array (an average of 8 IOPS per desktop) and very low latencies. Figure 217
graphs the total IOPS and latency experienced during the full scale Login VSI test of 2000 users. Again,
the storage latency and Login VSI average response times were well within acceptable limits. The array
managed these IOPS and low latencies using NetApp I/O optimization intelligence and a total of 96
HDDs.
Figure 217
Total Storage IOPS and Latency
Table 20 lists the average IOPS during boot, login, steady state, and logoff for all storage volumes for
2000 VDI sessions (1450 Hosted Shared Desktops and 550 Hosted Virtual Desktops).
Table 20
2000 Average User Workload (IOPS for Boot, Login, Steady State, and Log Off)
Phase | Avg. Read Ops | Avg. Write Ops | Avg. Total Ops | Avg. Latency (milliseconds)
Boot | 569 | 3712 | 4281 | 0.25
Login | 346 | 9518 | 6294 | 0.74
Steady | 125 | 14335 | 14460 | 0.81
Logoff | 401 | 6308 | 6709 | 0.60
Citrix User Profile Manager (UPM) was used to manage the users' profiles during the test, and the UPM
profiles were kept in a CIFS share on NetApp storage. In addition, home directories and folders were
redirected to a CIFS share on NetApp storage. Per Citrix best practices, it is recommended to place
the PVS vDisk on a CIFS share as well; as such, the PVS vDisk resided on a CIFS share on NetApp
storage. For more information, go to
http://blogs.citrix.com/2010/11/05/provisioning-services-and-cifs-stores-tuning-for-performance/
Figure 218 depicts the CIFS workload for 2000 users. The graph shows total CIFS IOPS during the
boot, login, steady state, and logoff periods of the Login VSI test. The CIFS workload included the
IOPS for UPM user profiles, user shares, and the PVS vDisk.
Figure 218
Total CIFS IOPS
Table 21 lists the average CIFS IOPS during boot, login, steady state, and logoff for all storage volumes
for 2000 VDI sessions (1450 Hosted Shared Desktops and 550 Hosted Virtual Desktops).
Table 21
CIFS Workload (IOPS for Boot, Login, Steady State, and Log Off)
Phase | Average Read Ops | Average Write Ops | Average Total Ops
Boot | 156 | 0 | 156
Login | 250 | 4 | 254
Steady | 25 | 11 | 36
Logoff | 82 | 88 | 170
The figure below illustrates the total CPU percentage of the two storage nodes during the Login VSI
workload test. The CPU metric was recorded during the full scale test of 2000 users. A NetApp storage
array has common processes in each storage node that do not fail over if one of the storage nodes fails.
The figure therefore illustrates that the full-scale workload could be run on a single storage node in the
event of a storage node failure.
Table 22 lists the average CPU on node 1 and node 2 during boot, login, steady state, and logoff.
Table 22
Percent CPU (CPU during Boot, Login, Steady State, and Log Off)
Phase | Avg. CPU Node 1 | Avg. CPU Node 2
Boot | 3% | 14%
Login | 15% | 30%
Steady | 20% | 45%
Logoff | 10% | 28%
Note: The load on node 2 is higher than on node 1 because node 2 carries half the HSD load plus the
full HVD load; node 1 carries only the other half of the HSD load.
Scalability Considerations and Guidelines
There are many factors to consider when you begin to scale beyond the 2000-user, two-chassis, 12-blade
mixed workload VDI/HVD host server configuration that this reference architecture has successfully
tested. This section provides guidance for scaling beyond the 2000-user system.
Cisco UCS System Scalability
As our results indicate, we have proven linear scalability in the Cisco UCS Reference Architecture as
tested.
• Cisco UCS 2.1(3a) management software supports up to 20 chassis within a single Cisco UCS
domain on our second-generation Cisco UCS 6248 and 6296 Fabric Interconnect models. A single
Cisco UCS domain can therefore grow to 160 blades.
• With Cisco UCS 2.1(3a) management software, released in November 2013, each UCS 2.1(3a)
management domain is extensibly manageable by Cisco UCS Central, our new manager of
managers, vastly increasing the reach of the Cisco UCS system.
• As scale grows, the value of the combined UCS fabric, Nexus physical switches, and Nexus virtual
switches increases dramatically in defining the Quality of Service required to deliver an excellent
end-user experience 100% of the time.
• To accommodate the Cisco Nexus 5500 upstream connectivity in the way we describe in the LAN
and SAN Configuration section, two Ethernet uplinks and two Fibre Channel uplinks need to be
configured on the Cisco UCS Fabric Interconnect. Based on the number of uplinks from each
chassis, we can calculate the number of desktops that can be hosted in a single Cisco UCS domain.
Assuming eight links per chassis, four to each 6248, scaling beyond 10 chassis would require a pair
of Cisco UCS 6296 Fabric Interconnects.
• A 25,000 virtual desktop building block, managed by a single Cisco UCS domain with its supporting
infrastructure services, can be built out from the reference architecture described in this study with
eight links per chassis and 152 Cisco UCS B200 M3 servers plus 8 infrastructure blades configured
per the specifications in this document in 20 chassis (see the sketch below).
• Consider using Tier 0 storage on Cisco UCS B200 M3 XenDesktop Hosted Virtual Desktop and
RDS blades for PVS write cache for non-persistent desktops to extend the capabilities of the NetApp
storage system.
Of course, the backend storage has to be scaled accordingly, based on the IOPS considerations
described in the NetApp scaling section. Refer to the NetApp section that follows for scalability
guidelines.
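The building block arithmetic in the bullets above can be sketched as follows; the figures are taken from this document, and the calculation itself is a back-of-the-envelope illustration, not an official sizing tool.

```python
# Back-of-the-envelope UCS domain sizing using figures quoted in this section.

CHASSIS_PER_DOMAIN = 20      # Cisco UCS 2.1(3a) limit per domain
BLADES_PER_CHASSIS = 8
INFRASTRUCTURE_BLADES = 8    # blades reserved for infrastructure services

total_blades = CHASSIS_PER_DOMAIN * BLADES_PER_CHASSIS   # 160 blades
workload_blades = total_blades - INFRASTRUCTURE_BLADES   # 152 blades

# Mixed-workload density demonstrated in this study: 2000 users on 12 blades.
users_per_blade = 2000 / 12
print(round(workload_blades * users_per_blade))  # ~25,333, i.e. the ~25,000-desktop building block
```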
NetApp FAS Storage Guidelines for Mixed Desktop Virtualization Workloads
Storage sizing has three steps:
• Gathering solution requirements
• Estimating storage capacity and performance
• Obtaining recommendations for storage configuration
Solution Assessment
Assessment is an important first step. Liquidware Labs Stratusphere FIT and Lakeside VDI Assessment
are recommended to collect network, server, and storage requirements. NetApp has contracted with
Liquidware Labs to provide free licenses to NetApp employees and channel partners. For information
on how to obtain software and licenses, refer to this FAQ. Liquidware Labs also provides a storage
template that fits the NetApp System Performance Modeler. For guidelines on how to use Stratusphere
FIT and the NetApp custom report template, refer to TR-3902: Guidelines for Virtual Desktop Storage
Profiling.
Virtual desktop sizing varies depending on:
• Number of seats
• VM workload (applications, VM size, VM OS)
• Connection broker (VMware View™ or Citrix XenDesktop)
• Hypervisor type (vSphere, XenServer, or Hyper-V)
• Provisioning method (NetApp clone, linked clone, PVS, or MCS)
• Future storage growth
• Disaster recovery requirements
• User home directories
There are many factors that affect storage sizing. NetApp has developed the System Performance
Modeler (SPM), a sizing tool that simplifies the process of performance sizing for NetApp systems. It
has a step-by-step wizard to support varied workload requirements and provides recommendations to
meet customers' performance needs.
Tip
NetApp recommends using the NetApp SPM tool to size the virtual desktop solution. Contact NetApp
partners and NetApp Sales Engineers who have access to SPM. When using the NetApp SPM to size
a solution, it is recommended to separately size the VDI workload (including write cache and personal
vDisk, if used) and the CIFS profile/home directory workload. When sizing CIFS, NetApp recommends
sizing with a CIFS heavy user workload. 80% concurrency was assumed, along with 10GB of home
directory space per user with 35% deduplication space savings. Each VM used 2GB of RAM. PVS write
cache is sized at 5GB per desktop for non-persistent/pooled desktops and 2GB for persistent desktops
with personal vDisk.
Storage sizing has two factors: capacity and performance.
Capacity Considerations
Deploying XenDesktop with PVS has the following capacity considerations (a worked capacity sketch
follows this list):
• vDisk. The size of the vDisk depends greatly on the operating system and the number of applications
to be installed on the vDisk. It is a best practice to create vDisks larger than necessary in order to
leave room for any additional application installations or patches. Each organization should
determine the space requirements for its vDisk images. A 20GB vDisk with a Windows 7 image is
used as an example; NetApp deduplication can be used for space savings.
• Write cache file. NetApp recommends a size range of 4-18GB for each user. Write cache size
is based on the type of workload and how often the VM is rebooted. 4GB is used in this example
for the write-back cache. Since NFS is thin provisioned by default, only the space currently used by
the virtual machine will be consumed on the NetApp storage. If iSCSI or FCP is used, N x 4GB
would be consumed as soon as a new virtual machine is created.
• CIFS home directory. Various factors must be considered for each home directory deployment. The
key considerations for architecting and sizing a CIFS home directory solution include the number
of users, the number of concurrent users, the space requirement for each user, and the network load.
Run deduplication to obtain space savings.
• vSwap. XenServer requires 1GB per VM.
• Infrastructure. Hosts for XenDesktop, PVS, SQL Server, DNS, and DHCP.
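The following minimal capacity sketch applies the example figures above (4GB write cache, 1GB vSwap, and 10GB home directory per user with 35% deduplication savings). It is illustrative only; use the NetApp SPM tool for real sizing.

```python
# Illustrative per-component capacity estimate for N PVS-provisioned desktops,
# using the example values from the capacity considerations above.

def capacity_gb(desktops: int) -> dict:
    return {
        # 4GB write-back cache per desktop. With NFS thin provisioning only
        # the space actually written is consumed; iSCSI/FCP would reserve
        # the full N x 4GB as soon as the VMs are created.
        "write_cache_gb": desktops * 4,
        "vswap_gb": desktops * 1,                    # XenServer requires 1GB per VM
        "home_dirs_gb": desktops * 10 * (1 - 0.35),  # 10GB/user after 35% dedup savings
    }

print(capacity_gb(2000))
# {'write_cache_gb': 8000, 'vswap_gb': 2000, 'home_dirs_gb': 13000.0}
```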
Performance Considerations
Performance requirement collection is a critical step. After using Liquidware Labs Stratusphere FIT and
Lakeside VDI Assessment to gather I/O requirements, contact the NetApp account team to obtain a
recommended software and hardware configuration.
I/O has a few factors: size, read/write ratio, and random/sequential mix. We use 90% write and 10%
read for the PVS workload. Storage CPU utilization also needs to be considered. Table 23 can be used
as guidance for sizing the PVS workload when using the Login VSI heavy workload.
Table 23
Sizing Guidance (per-desktop IOPS)
Phase | Write Cache (NFS) | vDisk (CIFS) | Infrastructure (NFS)
Boot IOPS | 8-10 | 0.5 | 2
Login IOPS | 9 | 0 | 1.5
Steady IOPS | 7.5 | 0 | 0
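Applying the per-desktop figures in Table 23 gives a quick aggregate IOPS estimate per phase. This sketch is illustrative only; the boot row uses the upper bound of the 8-10 range.

```python
# Aggregate array IOPS per phase from the per-desktop guidance in Table 23.

TABLE_23 = {
    # phase: (write cache NFS, vDisk CIFS, infrastructure NFS) IOPS per desktop
    "boot":   (10.0, 0.5, 2.0),   # write cache guidance is 8-10; upper bound used
    "login":  (9.0,  0.0, 1.5),
    "steady": (7.5,  0.0, 0.0),
}

def aggregate_iops(desktops: int, phase: str) -> float:
    return desktops * sum(TABLE_23[phase])

for phase in TABLE_23:
    print(phase, aggregate_iops(2000, phase))
# boot 25000.0, login 21000.0, steady 15000.0
```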
Citrix Technologies Considerations and Guidelines
XenDesktop 7.1 environments can scale to large numbers of desktops. When implementing a Citrix
XenDesktop hosted VDI solution, the following items should be taken into consideration:
• Type of storage available in your environment
• Type of desktops (and users) that will be deployed
• End-user locations (single site, multiple sites, remote users, etc.)
• Data protection requirements
• For Citrix Provisioning Services pooled desktops, the write cache size and placement
These and other scalability considerations are described in greater detail in the "XenDesktop - Modular
Reference Architecture" document and in the Citrix Product Blueprint document, and should be an
integral part of any VDI design.
When designing and deploying our test environment, we followed best practices as much as possible.
The following best practices are worth mentioning:
• Citrix recommends deploying with an N+1 formula for resiliency. In our test environment, this was
applied to all infrastructure servers.
• All Provisioning Server network adapters were configured with static IP addresses, and management
and streaming traffic were separated onto different network adapters.
• Large TCP Offload was disabled on the network adapters of all Provisioning Servers and target
device operating systems.
• All PVS services should be set to start as Automatic (Delayed Start).
• All StoreFront services should be set to start as Automatic (Delayed Start).
• Use the XenDesktop Setup Wizard in PVS; it does an excellent job of creating the desktops
automatically. Multiple instances of the wizard can be run in parallel, provided the deployed desktops
are placed in different catalogs and use different naming conventions.
• To run the PVS XenDesktop Setup Wizard, at a minimum you need to install the Provisioning Server
and the XenDesktop Controller, configure the hosts, and create VM templates on all Storage
Repositories where desktops will be deployed.
• Active Directory DNS reverse lookup zones were configured for each network.
• For the 2000-desktop scale, two XenServer pools were used. One NFS Storage Repository on
XenServer was used for the Hosted Virtual Desktop write cache drives, and four 180GB NFS Storage
Repository volumes were used for the Hosted Shared Desktop VM write cache. Two templates per
NFS Storage Repository were used for PVS deployment. (An example SR creation command follows
this list.)
• When deploying XenDesktop with PVS on local SSD storage, it is necessary to configure a dedicated
XenDesktop resource pointing to the SSD storage.
• When configuring storage for XenServer 6.2 SP1, we took advantage of the jumbo frame capability
of the NetApp storage. We configured an isolated network for storage traffic, with an MTU of 9000
at the storage end and on the XenServer 6.2 network adapter used for NFS traffic only.
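As an illustration of the Storage Repository layout described above, a shared NFS SR can be created
from the pool master with the xe CLI. The name, IP address, and export path below are placeholders,
not values from this design:

# Hypothetical example: create a shared NFS Storage Repository for
# write cache drives (substitute your own server and export path).
xe sr-create name-label="NFS-WriteCache" shared=true \
  content-type=user type=nfs \
  device-config:server=192.168.40.10 \
  device-config:serverpath=/vol/xd_writecache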
XenDesktop 7.1 Hosted Virtual Desktop
To get the best performance out of the Windows 7 Hosted Virtual Desktops, we modified the OS as
follows:
• Used the Provisioning Services target device installation optimization tool
• Enabled optimization when installing the XenDesktop VDA agent
• Installed XenServer Tools on all virtual machine client operating systems
• Set the page file to a static size (1450MB minimum and maximum)
• Disabled System Restore
• Removed all printers
• Changed Citrix component settings through DDC policy: disabled drive mapping, client hardware
access, client microphone, clipboard, image capture, and printer mapping
Active Directory Group Policy Objects were used to push policies that:
• Disable printer mapping
• Disable IPv6
• Disable Windows Firewall
• Disable Windows Update
• Disable UAC
• Configure the NTP server
• Configure the NTP client
• Disable Microsoft Office 2010 error reporting and first-use dialog boxes (necessary for Login VSI
testing only)
• Disable Microsoft Outlook 2010 archiving
• Disable Microsoft Internet Explorer error reporting and first-use dialog boxes (necessary for Login
VSI testing only)
• Set the power plan to High Performance
• Disable themes
• Disable animations
• Enable font smoothing
• Add the Citrix StoreFront URL to trusted sites in Internet Explorer
XenDesktop 7.1 Hosted Shared Desktop
For the Windows Server 2012 Hosted Shared Desktop virtual machines, we applied the same OS
modifications and policies as for the Windows 7 Hosted Virtual Desktops.
In addition, we made the following Windows Server 2012-specific modifications:
• Set the page file to a static size (4096MB minimum and maximum)
• Installed the Remote Desktop Session Host role
Active Directory Group Policy Objects pushed:
• Disable Internet Explorer Enhanced Security Configuration
• Bypass software execution policy
XenServer 6.2 SP1
When deploying XenServer 6.2 SP1 on NetApp Fibre Channel boot LUNs, we used a multipath
installation: "multipath" must be entered at the boot screen of the XenServer 6.2 ISO installation media,
after which the installation continues normally (see section 6.6).
To fully configure multipathing, the multipath.conf file must be updated after the initial setup. See the
"Configuring Fibre Channel Multipath" section (page 123) of TR-3732 for the multipath.conf
modifications:
http://www.netapp.com/us/system/pdf-reader.aspx?pdfuri=tcm:10-104657-16&m=tr-3732.pdf
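The authoritative multipath.conf settings for your controller model and Data ONTAP release are in
TR-3732; purely as an illustration of the kind of edit involved, a NetApp device stanza in
/etc/multipath.conf on the XenServer host takes this general shape (illustrative values, not to be copied
verbatim):

# /etc/multipath.conf fragment (illustrative; use the settings from
# TR-3732 that match your controller and ONTAP release)
device {
    vendor                "NETAPP"
    product               "LUN"
    path_grouping_policy  group_by_prio
    features              "1 queue_if_no_path"
    failback              immediate
}
# After editing, verify that all expected paths are present:
# multipath -ll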
To scale up to 2000 sessions with the best performance, we modified each Hosted Virtual Desktop and
Hosted Shared Desktop virtual machine from XenServer by disabling the following (a CLI sketch
follows this list):
• DVD drive
• USB emulation
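Both changes can be made from the pool master with the xe CLI against the shut-down template before
desktops are deployed (see CTX132411 for the DVD drive procedure); a sketch, with the VM UUID as a
placeholder:

# Remove the virtual DVD drive: find the CD VBD and destroy it.
VM_UUID=<vm-uuid>
CD_VBD=$(xe vbd-list vm-uuid=$VM_UUID type=CD params=uuid --minimal)
xe vbd-destroy uuid=$CD_VBD
# Disable USB and USB tablet emulation for the VM.
xe vm-param-set uuid=$VM_UUID platform:usb=false platform:usb_tablet=false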
Service Pack 1 was applied to each XenServer 6.2 host through XenCenter.
To take full advantage of the NetApp storage and achieve the best performance, we configured the
XenServer 6.2 SP1 storage NIC for jumbo frames (MTU 9000).
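On the XenServer side, the MTU is set on the storage network object, and the attached PIFs are
re-plugged for the change to take effect. A sketch with placeholder UUIDs (apply per host during a
maintenance window; the NetApp interfaces and switch ports must also be configured for MTU 9000,
as noted above):

# Illustrative: enable jumbo frames on the dedicated storage network.
xe network-param-set uuid=<storage-network-uuid> MTU=9000
xe pif-unplug uuid=<storage-pif-uuid>
xe pif-plug uuid=<storage-pif-uuid>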
References
This section provides links to additional information for each partner's solution component of this
document.
Cisco Reference Documents
Cisco Unified Computing System Manager Home Page
http://www.cisco.com/en/US/products/ps10281/index.html
Cisco UCS B200 M3 Blade Server Resources
http://www.cisco.com/en/US/products/ps10280/index.html
Cisco UCS 6200 Series Fabric Interconnects
http://www.cisco.com/en/US/products/ps11544/index.html
Cisco Nexus 5500 Series Switches Resources
http://www.cisco.com/en/US/products/ps9670/index.html
Download Cisco UCS Manager and Blade Software Version 2.1(3a)
http://software.cisco.com/download/release.html?mdfid=283612660&softwareid=283655658&release=1.4(4l)&relind=AVAILABLE&rellifecycle=&reltype=latest
Download Cisco UCS Central Software Version 1.1(1b)
http://software.cisco.com/download/release.html?mdfid=284308174&softwareid=284308194&release=1.1(1b)&relind=AVAILABLE&rellifecycle=&reltype=latest&i=rs
Citrix Reference Documents
Citrix Product Documentation
http://support.citrix.com/proddocs/topic/infocenter/ic-how-to-use.html
XenDesktop 7.1 Hosted Virtual Desktops and Hosted Shared Desktops
Guide to XenDesktop 7.1
http://support.citrix.com/proddocs/topic/xenapp-xendesktop/cds-xendesktop-71-landing-page.html
XenDesktop 7.1 compared to 7.0
http://support.citrix.com/article/CTX138195#7.1
Microsoft SQL Server 2012 Always ON cluster database for XenDesktop 7.1
http://msdn.microsoft.com/en-us/library/jj215886.aspx
Provisioning Services 7.1
Provisioning Services 7.x product documentation
http://support.citrix.com/proddocs/topic/provisioning-7/pvs-provisioning-7.html
Provisioning Services 7.x Issues Fixed
http://support.citrix.com/article/CTX138199#7.1
Citrix NetScaler 10.1
NetScaler 10.1 product documentation
http://support.citrix.com/proddocs/topic/netscaler/ns-gen-netscaler10-1-wrapper-con.html
NetScaler High Availability documentation
http://support.citrix.com/proddocs/topic/ns-system-10-1-map/ns-nw-ha-intro-wrppr-con.html
NetScaler Load Balancing documentation
http://support.citrix.com/proddocs/topic/netscaler-traffic-management-10-1-map/ns-lb-wrapper-con-10.html
Citrix XenServer 6.2 SP1
XenServer 6.2 product documentation
http://support.citrix.com/proddocs/topic/xenserver/xs-wrapper-62.html
Citrix XenServer: removing a DVD drive from a XenServer virtual machine
http://support.citrix.com/article/CTX132411
How to Remove a DVD Drive from a XenServer Virtual Machine
http://docs.vmd.citrix.com/XenServer/6.2.0/1.0/en_gb/reference.html#cli-xe-commands_template
Citrix XenServer 6.2 Service Pack 1 documentation
http://support.citrix.com/article/CTX139788
NetApp multipath with XenServer: see the "Configuring Fibre Channel Multipath" section (page 123)
for multipath.conf modifications.
http://www.netapp.com/us/system/pdf-reader.aspx?pdfuri=tcm:10-104657-16&m=tr-3732.pdf
Login VSI
http://www.loginvsi.com/documentation/v3
NetApp References
Citrix XenDesktop on NetApp Storage Solution Guide
750 Seats Citrix XenDesktop on NetApp Storage at $37 Storage per Desktop
Site Requirements Guide
Clustered Data ONTAP High-Availability Configuration Guide
Clustered Data ONTAP Network Management Guide
Clustered Data ONTAP Software Setup Guide
TR-3437: Storage Subsystem Resiliency Guide
TR-3732: Citrix XenServer and NetApp Storage Best Practices
FAS3200-series documentation
Disk Shelf Installation and Setup section of the DS4243 Disk Shelf Overview
Instructions for Downloading and Installing Disk Firmware
SAS Disk Shelves Universal SAS and ACP Cabling Guide
TR-4191: Best Practice Guide for Clustered Data ONTAP 8.2 Windows File Services
TR-3832: Flash Cache Best Practices Guide
TR-3902: Guidelines for Virtual Desktop Storage Profiling
TR-3802: Ethernet Storage Best Practices
NetApp Data ONTAP PowerShell Toolkit
Appendix
Click here for the Appendix.