VCE Vblock Systems Deployment and Implementation

CERTIFIED PROFESSIONAL STUDY GUIDE
VCE VBLOCK® SYSTEMS DEPLOYMENT
AND IMPLEMENTATION: VIRTUALIZATION
EXAM 210-030
Document revision 1.2
December 2014
© 2014 VCE Company, LLC. All rights reserved.
Table Of Contents
Obtaining The VCE-CIIE Certification Credential
  VCE Vblock Systems Deployment and Implementation: Virtualization Exam
  Recommended Prerequisites
  VCE Exam Preparation Resources
  VCE Certification Website
Accessing VCE Documentation
About This Study Guide
Vblock Systems Overview
  Vblock Systems Architecture
  VMware vSphere Architecture
  VMware vSphere Components
VMware vCenter Server On The AMP
Storage Provisioning And Configuration
  Virtual Storage Concepts
  ESXi Host Data Stores
  Virtual Storage And High Availability
Virtual Network Switches
Validate Networking And Storage Configurations
VMware vSphere Administration In Vblock Systems
  VMware vSphere Administration Tools
  Vblock Systems Security And Access Procedures
  Storage And Data Store Administration
  Virtual Machine External Devices And Media Attachment
  Virtual Machine Installation
  VMware vSphere Upgrades
Conclusion
Obtaining The VCE-CIIE Certification Credential
The VCE™ Certified Professional program validates that qualified IT professionals can design, manage,
configure, and implement Vblock® Systems. The VCE™ Certified Converged Infrastructure Implementation
Engineer (VCE-CIIE) credential verifies proficiency with respect to the deployment methodology and
management concepts of the VCE Converged Infrastructure. VCE-CIIE credentials assure customers that a
qualified implementer with a thorough understanding of Vblock Systems is deploying their systems.
The VCE-CIIE track includes a core qualification and four specialty qualifications: Virtualization, Network,
Compute, and Storage. Each track requires a passing score for the VCE Vblock Systems Deployment and
Implementation: Core Exam and one specialty exam.
To obtain the Certified Converged Infrastructure Implementation Engineer Virtualization (CIIEv) certification,
you must pass both the VCE Vblock Systems Deployment and Implementation: Core Exam, and the VCE
Vblock Systems Deployment and Implementation: Virtualization Exam.
VCE Vblock Systems Deployment and Implementation: Virtualization Exam
The VCE Vblock Systems Deployment and Implementation: Virtualization Exam validates that candidates
have met all entrance, integration, and interoperability criteria and are technically qualified to
install, configure, and secure the VMware infrastructure on Vblock Systems.
The exam covers Vblock Systems virtualization technology available at the time the exam was developed.
Recommended Prerequisites
There are no required prerequisites for taking the VCE Vblock Systems Deployment and Implementation:
Virtualization Exam. However, exam candidates should have working knowledge of VMware solutions
obtained through formal instructor-led training (ILT) and a minimum of one year of experience. It is also highly
recommended that exam candidates have training, knowledge, and/or working experience with industry-standard x86 servers and operating systems.
VCE Exam Preparation Resources
VCE strongly recommends exam candidates carefully review this study guide. However, it’s not the only
recommended preparation resource for the VCE Vblock Systems Deployment and Implementation:
Virtualization Exam, and reviewing this study guide alone does not guarantee passing the exam. VCE
certification credentials require a high level of expertise and it’s expected that you review the related
VMware, Cisco, or EMC resources listed in the References document (available from the VCE Certification
website). It’s also expected that you draw from real-world experiences to answer the questions on the VCE
certification exams. The certification exam also tests deployment and implementation concepts covered in
the ILT course VCE Vblock Systems Deployment and Implementation, which is a recommended reference
for the exam.
VCE Certification Website
Please refer to https://www.vce.com/services/training/certified/exams for more information on the VCE
Certified Professional program and exam preparation resources.
Accessing VCE Documentation
The descriptions of the various hardware and software configurations in this study guide apply generically to
Vblock Systems. The Physical Build, Logical Build, Architecture, and Administration guides for the Vblock
System 200, the Vblock System 300 family, and the Vblock System 700 family contain more specific
configuration details.
VCE-related documentation is available via the links listed below. Use the link appropriate to your role.
Customer: http://support.vce.com/
VCE partner: www.vcepartnerportal.com
VCE employee: www.vceview.com/solutions/products/
Cisco, EMC, VCE, or VMware employee: www.vceportal.com/solutions/68580567.html
Note: The websites listed above require some form of authentication using a username/badge and password.
About This Study Guide
The content in this study guide is relevant to the VCE Vblock Systems Deployment and Implementation:
Virtualization Exam. It provides information about VMware virtualization and focuses on how it integrates
into the VCE Vblock Systems. Specifically, it addresses installation, administration, and troubleshooting of
ESXi servers within the virtual Vblock Systems environment.
This study guide focuses on deploying VMware vSphere in a VCE Vblock Systems converged infrastructure.
Vblock Systems come configured with specific customer-defined server, storage, and networking hardware
that is already VMware qualified. The bulk of this study guide concentrates on how to configure and manage
the virtual infrastructure on Vblock Systems.
The following topics are covered in this study guide:
• Overview of the Vblock Systems and VMware vSphere environment, including an architectural review of
VMware vSphere and an overview of Vblock Systems-specific VMware vSphere components.
• The Vblock Systems Advanced Management Pod (AMP) and its role as a repository for management software
virtual machines, including vCenter Server.
• How to configure and optimize storage and networking for virtual applications and for the ESXi virtual
infrastructure.
• Techniques to configure the VMware environment for high availability, including clustering, high-availability,
and fault-tolerance options, as well as vMotion and Storage vMotion, to understand how to maximize
hardware redundancy.
• Troubleshooting, including specific situations often found in a deployment.
Vblock Systems Overview
This study guide focuses on the Vblock System 200, Vblock System 300 family, and Vblock System 700
family Converged Infrastructure, comprising Cisco Unified Computing System (UCS) blade servers; Cisco
Nexus unified and IP-only network switches; Cisco Catalyst management switches; Cisco MDS SAN
switches; the VMware vSphere Hypervisor (ESXi); VMware vCenter Server software; and EMC VNX (Vblock
System 200 and Vblock System 300 family) or VMAX (Vblock System 700 family) storage systems.
Because Vblock Systems come largely preconfigured, this document discusses installing and upgrading virtual
machines (VMs) that are part of Vblock Systems infrastructure. It explores the management applications as
installed on the Advanced Management Pod (AMP) with an emphasis on vCenter Server management.
VCE Vblock Systems combine industry-leading hardware components to create a robust, extensible
platform to host VMware vSphere in an optimized scalable environment. Vblock Systems use redundant
hardware and power connections which, when combined with clustering and replication technologies, create
a highly available virtual infrastructure.
Vblock Systems Architecture
Vblock Systems are complete, enterprise-class data center infrastructure platforms. They have a scaled-out
architecture built for consolidation and efficiency. System resources are scalable through common
and fully redundant components. The architecture allows for deployments involving large numbers of
virtual machines and users.
The specific hardware varies depending on the particular model and configuration of Vblock Systems. The
compute, storage, and network components include:
• Cisco UCS-environment components:
o UCS rack-mount servers (Vblock System 200) and blade servers (Vblock System 300 family and Vblock
System 700 family)
o UCS chassis
o UCS Fabric Interconnects
o UCS I/O modules
• Redundant Cisco Nexus and/or Catalyst LAN switches
• Redundant Cisco MDS SAN switches, installed in pairs
• EMC VNX or VMAX enterprise storage arrays
Base preinstalled configuration software includes VMware vSphere on UCS production servers, as well as
on the C-Series servers that comprise the AMP for the Vblock System 300 family and Vblock System 700
family.
• The Advanced Management Pod (AMP) is a designated set of servers hosting management virtual
machines. It functions as a centralized repository for Vblock Systems software management tools,
including vCenter Server.
• The VCE Vision™ Intelligent Operations application provides single-source resource monitoring and
management for Vblock Systems. VCE Vision™ software is the industry's first converged-architecture
manager designed with a consistent interface that interacts with all Vblock Systems components. VCE
Vision software integrates tightly with vCenter Operations Manager, the management platform for the
VMware vSphere environment.
The diagram below provides a sample view of the Vblock Systems architecture. The Vblock System
720 is shown in this example.
VMware vSphere Architecture
Vblock Systems support multiple versions of VMware vSphere. VMware vSphere 5.5 is the latest iteration of
VMware's server virtualization suite. Architecturally, it has two layers:
• The ESXi virtualization layer is the ESXi hypervisor running on the servers in the Vblock
Systems. It abstracts processor, memory, and storage into virtual machines. Production ESXi hosts reside
on UCS B-Series blade servers.
• The virtualization management layer in Vblock Systems is vCenter Server, a central
management point for the ESXi hosts and the virtual machines they host. vCenter runs as a service on
a Windows server and resides on the AMP. It provides the following functionality:
o User access to core services
o VM deployment
o Cluster configuration
o Host and VM monitoring
VMware vSphere Components
In addition, VMware vSphere features the following components. This is a partial list, which provides a
preview of some of the features investigated in this study guide:
• vCenter Operations Manager is an automated operations management solution that provides an
integrated performance, capacity, and configuration system for virtual and cloud infrastructure.
• The Web Client user interface lets an administrator manage the VMware vSphere environment
from a remote system.
• VMware vSphere HA provides business continuity services, such as host and
VM monitoring, failover, and data protection.
• VMFS is the cluster file system for ESXi environments. It allows multiple ESXi servers to access the same
storage at the same time and features a distributed journaling mechanism to maintain high availability.
• vMotion enables live migration of virtual machines from one server to another. Storage vMotion enables
live migration of VM files from one data store to another.
• VMware vSphere Update Manager (VUM) is another notable VMware vSphere component. It
maintains the compliance of the virtual environment, automates patch management, and eliminates
manual tracking and patching of VMware vSphere hosts and virtual machines.
The following diagram provides a concise view of vCenter and its related components:
VMware vCenter Server On The AMP
The vCenter server instance installed on the Advanced Management Pod (AMP) is the primary management
point for Vblock Systems virtual environments. The AMP is a specific set of hardware in the Vblock Systems,
typically in a high-availability configuration that contains all virtual machines and vApps that are necessary to
manage Vblock Systems infrastructure. vCenter manages the VMware vSphere environment and allows you
to install and configure vApps and create new ESXi instances, as well as look at VMware performance and
troubleshooting information.
The Advanced Management Pod (AMP) is a set of hardware (optionally, HA clustered) that hosts a virtual
infrastructure containing VMware vCenter and VMs running the tools necessary to manage and maintain a
Vblock Systems environment. The diagram below represents a logical view of the AMP.
Storage Provisioning And Configuration
Storage systems and their ability to interact with VMware vSphere are important considerations when
creating a resilient virtualized environment. EMC storage arrays complement the Vblock Systems
architecture by providing a robust, highly available storage infrastructure. VMware vSphere leverages this
infrastructure to provision new VMs and virtual storage.
Virtual Storage Concepts
Thin provisioning allows for flexibility in allocating storage, and VMware vSphere includes support for it.
Administrators can create thin-format virtual machine disks (VMDKs): the disk appears to the VM to have its
entire provisioned capacity, but ESXi commits only as much storage space as the
disk needs for its initial operations. VMware vSphere manages usage and space reclamation. It is possible
to grow or shrink an existing VMDK to reflect its storage requirements.
The storage arrays in Vblock Systems are preconfigured based on array type and Vblock Systems model.
By default, the storage arrays are Fully Automated Storage Tiering (FAST) enabled. FAST dynamically stores
data according to its activity level: highly active data goes to high-performance drives; less active data goes
to high-capacity drives. Vblock Systems storage arrays have a mix of Enterprise Flash Drives (EFDs), Fibre
Channel drives, SATA drives, SAS drives, and NL-SAS drives. As an example, Vblock System 300 family
models deploy FAST with a default configuration of 5% Enterprise Flash drives, 45% SAS drives, and 50%
NL-SAS drives.
Vblock Systems virtual storage offers lazy and eager thick provisioning. Thick Provision Lazy Zeroed creates
a virtual disk in a default thick format with space reserved during the virtual disk creation. Any older data on
the storage device is cleared, or zeroed out, only when the VM first writes new data to that thick virtual disk.
It leaves the door open for recovering deleted files or restoring old data, if necessary. Alternatively, a Thick
Provision Eager Zeroed virtual disk clears data from the storage device upon creation.
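To make the three formats concrete, here is a minimal sketch using the open-source pyvmomi Python SDK. It is illustrative only and not part of the Vblock toolset; the vCenter address, credentials, and VM name are placeholders, and the controller key assumes a default single-SCSI-controller VM layout.

    import ssl
    from pyVim.connect import SmartConnect
    from pyVmomi import vim

    # Placeholder connection details for the AMP vCenter; lab-only certificate handling.
    si = SmartConnect(host="amp-vcenter.example.com", user="administrator@vsphere.local",
                      pwd="changeme", sslContext=ssl._create_unverified_context())
    content = si.RetrieveContent()

    view = content.viewManager.CreateContainerView(content.rootFolder, [vim.VirtualMachine], True)
    vm = next(v for v in view.view if v.name == "app-vm01")   # hypothetical VM name

    # Describe a new 20 GB disk. The two backing flags select the format:
    #   thin:               thinProvisioned=True
    #   thick lazy zeroed:  thinProvisioned=False, eagerlyScrub=False
    #   thick eager zeroed: thinProvisioned=False, eagerlyScrub=True
    disk = vim.vm.device.VirtualDisk(
        capacityInKB=20 * 1024 * 1024,
        controllerKey=1000,           # first SCSI controller in a default VM layout
        unitNumber=1,                 # assumes SCSI 0:1 is free
        backing=vim.vm.device.VirtualDiskFlatVer2BackingInfo(
            diskMode="persistent", thinProvisioned=True, eagerlyScrub=False))

    change = vim.vm.device.VirtualDeviceSpec(
        operation=vim.vm.device.VirtualDeviceSpec.Operation.add,
        fileOperation=vim.vm.device.VirtualDeviceSpec.FileOperation.create,
        device=disk)
    vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(deviceChange=[change]))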
Vblock Systems support PowerPath and native VMware multipathing to manage storage I/O connections.
ESXi uses a pluggable storage architecture in the VMkernel and is delivered with I/O multipathing software
referred to as the Native Multipathing Plug-in (NMP), an extensible module that manages sub-plug-ins.
VMware provides built-in sub-plug-ins, but they can also come from third parties. NMP sub-plug-ins are one
of two types: Storage Array Type Plug-ins (SATPs) and Path Selection Plug-ins (PSPs).
PSPs are responsible for choosing a physical path for I/O requests. The VMware NMP assigns a default
PSP for each logical device based on the SATP associated with the physical paths for that device, but you
can override the default.
The VMware NMP supports the following PSPs:
• Most Recently Used (MRU): the host selects the path used most recently. When it becomes unavailable, the host
selects an alternative path. The host does not revert to the original path when that path becomes available
again. There is no preferred-path setting with the MRU policy. MRU is the default policy for most
active-passive storage devices, and VMware vSphere displays the state as the Most Recently Used (VMware)
path selection policy.
• Fixed: the host uses a designated preferred path. Otherwise, it selects the first working path
discovered at boot time. If you want the host to use a particular preferred path, specify it manually. Fixed
is the default policy for most active-active storage devices. VMware vSphere displays the state as the
Fixed (VMware) path selection policy.
Note that if a default preferred path's status turns to Dead, the host selects a new preferred path.
However, manually designated preferred paths remain preferred even when they become inaccessible.
• Round Robin (RR): the host uses an automatic path selection algorithm. For active-passive arrays, it
rotates through all active paths. RR is the default for a number of arrays and can implement load
balancing across paths for different LUNs. VMware vSphere displays the state as the Round Robin
(VMware) path selection policy.
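As an illustration of inspecting and overriding the default PSP, the hedged pyvmomi sketch below lists each LUN's current policy on one host and switches one LUN to Round Robin. Connection details, the host name, and the device choice are placeholders; confirm the appropriate policy for your array in the VCE/EMC documentation before changing it.

    import ssl
    from pyVim.connect import SmartConnect
    from pyVmomi import vim

    si = SmartConnect(host="amp-vcenter.example.com", user="administrator@vsphere.local",
                      pwd="changeme", sslContext=ssl._create_unverified_context())
    content = si.RetrieveContent()

    hosts = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
    host = next(h for h in hosts.view if h.name == "esxi-01.example.com")  # placeholder host
    ss = host.configManager.storageSystem

    # Report the current path selection policy per LUN (e.g. VMW_PSP_MRU, VMW_PSP_FIXED, VMW_PSP_RR).
    for lun in ss.storageDeviceInfo.multipathInfo.lun:
        print(lun.id, lun.policy.policy)

    # Override one LUN's policy to Round Robin.
    first = ss.storageDeviceInfo.multipathInfo.lun[0]
    ss.SetMultipathLunPolicy(
        lunId=first.id,
        policy=vim.host.MultipathInfo.LogicalUnitPolicy(policy="VMW_PSP_RR"))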
PowerPath/VE is host multipathing software optimized for virtual host environments. It provides I/O path
optimization, path failover, and I/O load balancing across virtual host bus adapters (HBAs). PowerPath/VE
installs as a virtual appliance from an OVF file. The PowerPath/VE license management server installs on
the AMP.
ESXi Host Data Stores
All the files associated with VMs are contained in the ESXi host data store, a logical construct that can exist
on most standard SAN or NFS physical storage devices. A data store is a managed object that represents a
storage location for virtual machine files. A storage location can be a VMFS volume, a directory on Network
Attached Storage, or a local file system path. Virtual machines need no information about the physical
location of their storage, because the data store keeps track of it. Data stores are platform-independent and
host-independent. Therefore, they do not change when the virtual machines move between hosts.
Data store configuration is per host. As part of host configuration, a HostSystem can mount a set of network
drives. Multiple hosts may point to the same storage location. Only one data store object exists for each
shared location. Each data store object keeps a reference to the set of hosts that are mounted to it. You may
only remove a data store object when it has no mounted hosts.
Data stores are created during the initial ESXi-host boot and when adding an ESXi host to the inventory.
You can adjust their size with the Add Storage command. Once established, you can use them to store VM
files. Management functions include renaming data stores, removing them, and setting access-control
permissions. Data stores can also have group permissions.
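A short pyvmomi sketch, with placeholder connection details, that walks this model: each data store is a single managed object, and its host property lists every host mounted to it.

    import ssl
    from pyVim.connect import SmartConnect
    from pyVmomi import vim

    si = SmartConnect(host="amp-vcenter.example.com", user="administrator@vsphere.local",
                      pwd="changeme", sslContext=ssl._create_unverified_context())
    content = si.RetrieveContent()

    view = content.viewManager.CreateContainerView(content.rootFolder, [vim.Datastore], True)
    for ds in view.view:
        s = ds.summary
        print("%s  type=%s  free=%d/%d GB" % (
            s.name, s.type, s.freeSpace // 2**30, s.capacity // 2**30))
        for mount in ds.host:             # one entry per host mounted to this data store
            print("    mounted on", mount.key.name)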
Virtual Storage And High Availability
To minimize the possibility of service outages, Vblock Systems Converged Infrastructure hardware has
multiple redundant features designed to eliminate single points of failure. The EMC storage systems used in
Vblock Systems configurations also implement various mechanisms to ensure data reliability with BC/DR
capabilities. VMware vSphere high-availability (HA) features enhance the inherent resiliency of the Vblock
Systems environment. When properly configured, these features enable VMs to remain available through a
variety of both planned and unplanned outages.
Clustering computer systems has been around for a long time. Shared-nothing failover clustering at the OS
level is a predominant approach to system availability, and it does indeed provide system continuity.
VMware vSphere HA allows clustering at the hypervisor layer, leveraging its cluster file system, VMFS, to
allow shared access to VM files during cluster operations. Unlike OS-based clustering, VMware vSphere
clusters remain in service during failure and migration scenarios that would cause an outage in a typical
failover cluster. VMware vSphere HA gets VMs back up and running after an ESXi host failure with very little
effect on the virtual infrastructure.
A key to implementing a resilient HA cluster is using multiple I/O paths for cluster communications and data
access. This hardware infrastructure is part of the Vblock Systems, encompassing both SAN and LAN
fabrics. Another Vblock Systems best practice is to configure redundant data stores, enabling alternate
paths for data store heartbeat. Additionally, NIC teaming uses multiple paths for cluster communications and
tolerates NIC failures.
Another important consideration is ensuring that cluster failover targets have the necessary resources to
handle the application requirements of the primary host. Because certain planned outage scenarios are
relatively short-lived, the primary VM can run on a reduced set of resources until migrated back to the
original location. In the case of failover due to a real hardware or software failure, the target VM environment
must be able to host the primary OS and application with no performance degradation.
VMware vSphere HA provides a base layer of support for fault tolerance. Full fault tolerance is at the VM
level. The Host Failure Cluster control policy specifies a maximum number of host failures, given the
available resources. VMware vSphere HA ensures that if these hosts fail, sufficient resources remain in the
cluster to failover all of the VMs from those hosts. This is particularly important for business applications
hosted on ESXi.
VMware vSphere includes tools to analyze the slot size required to successfully fail over VMs to a new location.
The slot size is a representation of the CPU and memory necessary to host the VM after a failover event. Several
additional settings define failover and restart parameters. Keep in mind that configuring slot size requires careful
consideration: a smaller size may conserve resources at the expense of application performance.
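The sketch below shows roughly how such a policy is set through the vSphere API via pyvmomi: HA is enabled on a cluster with a Host Failures admission control policy of one. The cluster name and connection details are placeholders.

    import ssl
    from pyVim.connect import SmartConnect
    from pyVmomi import vim

    si = SmartConnect(host="amp-vcenter.example.com", user="administrator@vsphere.local",
                      pwd="changeme", sslContext=ssl._create_unverified_context())
    content = si.RetrieveContent()

    clusters = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.ClusterComputeResource], True)
    cluster = next(c for c in clusters.view if c.name == "Vblock-Prod")   # hypothetical name

    # Enable HA; admission control reserves capacity to tolerate one host failure.
    das = vim.cluster.DasConfigInfo(
        enabled=True,
        hostMonitoring="enabled",
        admissionControlEnabled=True,
        admissionControlPolicy=vim.cluster.FailoverLevelAdmissionControlPolicy(failoverLevel=1))
    cluster.ReconfigureComputeResource_Task(
        spec=vim.cluster.ConfigSpecEx(dasConfig=das), modify=True)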
The VMware vSphere Distributed Resource Scheduler (DRS) takes compute resources and aggregates them
into logical pools, which ultimately simplifies HA configuration and deployment. It balances computing capacity
and load within the cluster to optimize VM performance.
VMware vSphere Fault Tolerance protects VMs when an ESXi server goes down. There is no loss of data,
transactions, or connections. If an ESXi host fails, Fault Tolerance instantly moves VMs to a new host via
vLockstep, which keeps a secondary VM in sync with the primary VM, ready to take over if need be.
vLockstep passes the instructions and instruction-execution sequence of the primary VM to the secondary VM,
so failover in case of primary host failure occurs immediately. After the failover, a new secondary VM respawns to
reestablish redundancy. The entire process is transparent and fully automated, and occurs even if the
vCenter Server is unavailable.
A VMware vSphere HA cluster is a prerequisite for configuring Fault Tolerant VMs.
Many situations require migrating a VM to a compatible host without service interruption: performing
maintenance on production hosts, for example, or relieving processing issues and bottlenecks on existing hosts.
vMotion enables live host-to-host migration for virtual machines.
In addition, VMware vSphere offers an enhanced vMotion compatibility (EVC) feature that allows live VM
migration between hosts with different CPU capabilities. This is useful when upgrading server hardware,
particularly if the new hardware contains a new CPU type or manufacturer. These enhanced clusters are a distinct
cluster type. Existing hosts can only function as EVC hosts after being migrated into a new, empty EVC cluster.
Storage systems need maintenance too, and Storage vMotion allows VM files to migrate from one shared
storage system to another with no downtime or service disruption. It is also effective when performing
migrations to different tiers of storage.
Performing any vMotion operation requires permissions associated with Data Center Administrator,
Resource Pool Administrator, or Virtual Machine Administrator.
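Both operations reduce to single API calls. The pyvmomi sketch below, with placeholder object names, live-migrates a VM to another host and then moves its files to another data store.

    import ssl
    from pyVim.connect import SmartConnect
    from pyVmomi import vim

    si = SmartConnect(host="amp-vcenter.example.com", user="administrator@vsphere.local",
                      pwd="changeme", sslContext=ssl._create_unverified_context())
    content = si.RetrieveContent()

    def find(vimtype, name):
        view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
        return next(o for o in view.view if o.name == name)

    vm = find(vim.VirtualMachine, "app-vm01")                 # hypothetical names throughout
    target_host = find(vim.HostSystem, "esxi-02.example.com")
    target_ds = find(vim.Datastore, "VNX_Pool1_DS02")

    # vMotion: move the running VM to another host.
    vm.MigrateVM_Task(host=target_host,
                      priority=vim.VirtualMachine.MovePriority.defaultPriority)

    # Storage vMotion: move the VM's files to another data store with no downtime.
    # (Wait for the migration task to finish first; task polling omitted for brevity.)
    vm.RelocateVM_Task(spec=vim.vm.RelocateSpec(datastore=target_ds))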
Virtual Network Switches
This section details network connectivity and management for virtual machines. Vblock Systems support a
number of different network/storage paradigms: segregated networks with block-only, unified, or SAN boot
storage; and unified networks with block-only, SAN boot, or unified storage.
Close investigation of the Vblock Systems network architecture is beyond the scope of this study guide.
Generally, segregated network connections use separate pairs of LAN (Catalyst and Nexus) and SAN
(MDS) switches, while unified network connections consolidate both LAN and SAN connectivity onto a
single pair of Nexus network switches.
Virtual servers are managed and connected differently than physical servers and have different
requirements for fabric connectivity and management. They use a virtual network switch. Vblock Systems
customers have two options here: the VMware virtual switch (vSwitch) and the Cisco Nexus 1000V virtual
switch. The VMware virtual switch runs on the ESXi kernel and connects to the Vblock LAN through the
UCS Fabric Interconnect.
The Nexus 1000V virtual switch from Cisco resides on each Vblock Systems server and is licensed on a
per-host basis. It is equipped with better virtual network management and scalability than the VMware
virtual switch, and VCE considers it a best practice.
The Nexus 1000V is a combined hardware and software switch solution, consisting of a Virtual Ethernet
Module (VEM) and a Virtual Supervisor Module (VSM). The following diagram depicts the 1000V distributed
switching architecture:
The VEM runs as part of the ESXi kernel and uses the VMware vNetwork Distributed Switch (vDS) API,
which was developed jointly by Cisco and VMware, for virtual machine networking. The integration is tight. It
ensures that the Nexus 1000V is fully aware of all server virtualization events, such as vMotion and
Distributed Resource Scheduler (DRS). The VEM takes configuration information from the VSM and
performs Layer-2 switching and advanced networking functions.
If the communication between the VSM and the VEM is interrupted, the VEM has Nonstop Forwarding
(NSF) capability to continue to switch traffic based on the last known configuration. You can use the VMware
vSphere Update Manager (VUM) to install the VEM, or you can install it manually using the CLI.
The VSM controls multiple VEMs as one logical switch module. Instead of multiple physical line-card
modules, the VSM supports multiple VEMs that run inside the physical servers. Initial virtual switch
configuration occurs in the VSM, which automatically propagates to the VEMs. Instead of configuring soft
switches inside the hypervisor on a host-by-host basis, administrators can use a single interface to define
configurations for immediate use on all VEMs managed by the VSM. The Nexus 1000V provides
synchronized, redundant VSMs for high availability.
You have a few interface options for configuring the Nexus 1000V virtual switch: standard SNMP and XML,
as well as the Cisco CLI and Cisco LAN Management Solution (LMS). The Nexus 1000V is compatible with all the
vSwitch management tools, and the VSM also integrates with VMware vCenter Server so that the
virtualization administrator can manage the network configuration in the Cisco Nexus 1000V switch.
The VMware virtual switch directs network traffic to one of two distinct destinations: the VMkernel and the
VM network. VMkernel traffic carries Fault Tolerance, vMotion, and NFS traffic. The VM network allows hosted
VMs to connect to the virtual and physical network.
Standard vSwitches exist at each (ESXi) server and can be configured either from the vCenter Server or directly
on the host. Distributed vSwitches exist at the vCenter Server level, where they are managed and configured.
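For the standard-switch case, the hedged pyvmomi sketch below creates a vSwitch on one host, uplinks it to a physical NIC, and adds a VM port group on VLAN 100. The host name, NIC, and labels are placeholders; on Vblock Systems the Nexus 1000V is the VCE-preferred alternative.

    import ssl
    from pyVim.connect import SmartConnect
    from pyVmomi import vim

    si = SmartConnect(host="amp-vcenter.example.com", user="administrator@vsphere.local",
                      pwd="changeme", sslContext=ssl._create_unverified_context())
    content = si.RetrieveContent()

    hosts = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
    host = next(h for h in hosts.view if h.name == "esxi-01.example.com")
    net = host.configManager.networkSystem

    # New standard vSwitch uplinked to a free physical adapter.
    net.AddVirtualSwitch(vswitchName="vSwitch1", spec=vim.host.VirtualSwitch.Specification(
        numPorts=128,
        bridge=vim.host.VirtualSwitch.BondBridge(nicDevice=["vmnic2"])))

    # Port group for VM traffic on VLAN 100.
    net.AddPortGroup(portgrp=vim.host.PortGroup.Specification(
        name="VM_Network_VLAN100", vlanId=100, vswitchName="vSwitch1",
        policy=vim.host.NetworkPolicy()))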
Several factors govern the choice of adapter, generally either host compatibility requirements or application
requirements. Virtual network adapters install into ESXi and emulate a variety of physical Ethernet and Fibre
Channel NICs. (Refer to the Vblock Systems Architecture Guides for network hardware details and
supported topologies.)
Validate Networking And Storage Configurations
The networking topology and associated hardware on the Vblock Systems arrive preconfigured. With the
basic configuration performed at manufacturing, the network is adjusted to accommodate the applications it
supports and other environmental considerations. For example, if using block storage, SAN configuration
components must be tested and verified. If using filers or unified storage, the LAN settings may need
adjustment, particularly NIC teaming, multipathing, and jumbo frames.
With regard to the SAN configuration, you need to review the overall connectivity in terms of availability. Check
both host multipathing and switch failover to ensure that the VMs will be as highly available as possible. Review
the storage configuration to verify the correct number of LUNs and storage pools, and verify storage pool
accessibility. Then make sure that all the deployed virtual machines have access to the appropriate storage
environment.
These activities require almost the complete suite of monitoring and management tools in the Vblock Systems,
with most of the tools installed on the AMP. Specific tools used during a deployment include vCenter Server,
vCenter Operations Manager, EMC Unisphere, EMC PowerPath Viewer, VCE Vision software, and Cisco UCS
Manager.
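A scripted sweep can complement those tools. The pyvmomi sketch below, assuming placeholder connection details, reports each host's mounted data stores and flags any dead storage paths, two of the availability checks described above.

    import ssl
    from pyVim.connect import SmartConnect
    from pyVmomi import vim

    si = SmartConnect(host="amp-vcenter.example.com", user="administrator@vsphere.local",
                      pwd="changeme", sslContext=ssl._create_unverified_context())
    content = si.RetrieveContent()

    hosts = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
    for host in hosts.view:
        names = sorted(ds.name for ds in host.datastore)
        print("%s sees %d data stores: %s" % (host.name, len(names), ", ".join(names)))
        mp = host.configManager.storageSystem.storageDeviceInfo.multipathInfo
        dead = [p.name for lun in mp.lun for p in lun.path if p.state == "dead"]
        if dead:
            print("    WARNING - dead paths:", dead)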
VMware vSphere Administration In Vblock Systems
VMware vSphere Administration Tools
VMware vCenter has a number of established, integrated components that form a comprehensive
management framework for the virtual environment. The following items make up a partial list.
The most significant item in the vCenter management toolbox, vCenter Operations Manager is an
automated operations management solution that provides an integrated performance, capacity, and
configuration system for virtualized and cloud infrastructure. Deep VMware vSphere integration makes it the
most comprehensive management tool available for VMware environments. vCenter Operations Manager is
purpose-built for VMware administrators, allowing them to manage the performance of their VMware
environments as they move to the private cloud.
The Web Client is a browser-based user interface that allows an administrator to manage the VMware
vSphere environment from a remote system. The Web Client works within a heterogeneous environment
and can manage a large number of objects across geographically dispersed data centers.
The ESXi Direct Console User Interface (DCUI) is a console interface allowing access to configuration and
security information. The DCUI allows direct access to the ESXi host, even if the host is unavailable from vCenter.
The Inventory Service stores vCenter Server application and inventory data, so you can search and access
inventory objects across linked vCenter Server instances. To speed inventory search time, the Inventory Service
uses user-defined tags that categorize inventory objects. These tags are searchable metadata and reduce the
time to find inventory object information. Tags might include configuration attributes (such as all virtual machines
with more than 4 GB RAM) or business classification (such as all Tier 1 applications). Use these groupings to
retrieve not just virtual machines but networking, data store, and cluster information as well.
Mainly, the Inventory Service manages the VMware vSphere Web Client inventory objects and property
queries that the client requests when users navigate the VMware vSphere environment. The VMware
vSphere Web Client requests only visible information, making navigation more efficient.
Note that the Inventory Service is an independent component and can be offloaded to a separate server. This
reduces traffic and improves response times.
vCenter Web Services. VMware offers vCenter Web Services as a programming interface for third-party
clients to communicate with vCenter. In fact, VCE Vision software uses this interface. vCenter Management
Web Services include Performance Overview, Storage Views, Hardware Status, vCenter Service Status,
and License Reporting Manager.
Vblock Systems Security And Access Procedures
Environments should implement a well-defined security model before doing any substantial configuration of the
Vblock Systems. Many of the roles and functions require specific access controls. The levels of access are very
granular, down to the individual component level. Ensure that the environment has the appropriate rights before
finalizing the deployment. The system administrator sets the criteria for the security access features.
The Authorization Manager protects VMware vSphere components from unauthorized access. Access to
components is role-based: user and group role assignments encompass the privileges needed to view and
perform operations on VMware vSphere objects. The Authorization Manager has operations for creating
new roles, modifying roles, setting permissions on entities, and handling the relationship between managed
objects and permissions.
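The sketch below illustrates these Authorization Manager operations through pyvmomi: it creates a narrow operator role and grants it to a hypothetical directory group on one cluster, with propagation to child objects. All names are placeholders.

    import ssl
    from pyVim.connect import SmartConnect
    from pyVmomi import vim

    si = SmartConnect(host="amp-vcenter.example.com", user="administrator@vsphere.local",
                      pwd="changeme", sslContext=ssl._create_unverified_context())
    content = si.RetrieveContent()
    authz = content.authorizationManager

    # A custom role limited to basic power and console privileges.
    role_id = authz.AddAuthorizationRole(
        name="VblockOperator",
        privIds=["VirtualMachine.Interact.PowerOn",
                 "VirtualMachine.Interact.PowerOff",
                 "VirtualMachine.Interact.ConsoleInteract"])

    clusters = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.ClusterComputeResource], True)
    cluster = next(c for c in clusters.view if c.name == "Vblock-Prod")   # placeholder

    # Grant the role to a directory group on the cluster, propagating downward.
    perm = vim.AuthorizationManager.Permission(
        principal="VBLOCK\\vm-operators", group=True, roleId=role_id, propagate=True)
    authz.SetEntityPermissions(entity=cluster, permission=[perm])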
vCenter Single Sign-On (SSO) is a critical vCenter Server component, a robust authentication system buttressed
by proven industry standards for the VMware environment. SSO supports OpenLDAP and NIS repositories along
with Microsoft Active Directory. It also supports multiple identity sources, including multiple Active Directory
forests/domains or mixed identity sources. By default, SSO passwords expire after 365 days.
vCenter Server instances do not necessarily require authentication every time a solution is accessed. The
system is flexible. Servers need not be in the same location, either. The SSO architecture
features multi-instance and multisite configurations, so servers may be located locally or be geographically
dispersed. SSO provides single-solution authentication across the entire environment.
Fortunately, customers can tie any existing authentication solution into the SSO system. It accepts identity
sources without a Microsoft Active Directory server. When users log in to the VMware vSphere Web Client with a
username and password, the vCenter Single Sign-On server receives their credentials. These credentials are
authenticated against the back-end identity source and then exchanged for a security token, which is returned to
the client and allows appropriate access to the solutions within the environment.
Users cannot log in to a vCenter server directly. Rather, a background discovery service maintains a list of
all vCenter Server components and automatically populates the VMware vSphere Web Client with only
permissible vCenter servers. In other words, the Web Client only offers the servers that the user has access
to. Users, then, have a single pane-of-glass view of their entire vCenter Server environment, including
multiple vCenter servers and their inventories.
As its name implies, lockdown mode secures ESXi hosts absolutely. No users other than vpxuser have
authentication permissions, nor can they perform operations against the host directly. Lockdown mode
forces all operations to execute through vCenter Server. You cannot run vCLI commands from an
administration server, from a script, or from vMA against the host. External software or management tools
might not be able to retrieve or modify information from the ESXi host. The root user may log in to the direct
console user interface (DCUI) when in lockdown mode. Indeed, the DCUI is an alternative way to initiate
lockdown mode.
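Programmatically, entering and leaving lockdown mode are single host-level calls, as in this pyvmomi sketch with placeholder names; run it through vCenter, since direct host access is exactly what lockdown removes.

    import ssl
    from pyVim.connect import SmartConnect
    from pyVmomi import vim

    si = SmartConnect(host="amp-vcenter.example.com", user="administrator@vsphere.local",
                      pwd="changeme", sslContext=ssl._create_unverified_context())
    content = si.RetrieveContent()

    hosts = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
    host = next(h for h in hosts.view if h.name == "esxi-01.example.com")

    host.EnterLockdownMode()    # all management now flows through vCenter Server
    # ... later, if direct host access must be restored:
    # host.ExitLockdownMode()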
A firewall exists in ESXi between the management interface and the network. At installation time, the default
ESXi firewall configuration blocks incoming and outgoing traffic, except traffic for default services.
ESXi and vCenter Server support encryption based on standard X.509 version 3 (X.509v3) certificates to encrypt
session information sent over Secure Sockets Layer (SSL) protocol connections between components. If SSL is
enabled, data is private, protected, and cannot be modified in transit without detection.
An optional feature for EMC storage systems is Data at Rest Encryption (DARE). DARE performs XTS-AES
256-bit encryption on all data written to the disks in the array. It also leverages RSA key-management
technology to provide an encryption key for each drive. This adds an additional level of security to data
stores and VM storage by rendering disk devices unreadable if removed from the array.
Storage And Data Store Administration
Storage configuration is a vital aspect of VM and application performance and availability. VMware vSphere
abstracts and optimizes the underlying storage hardware, creating objects and containers for VM files on
VMFS and NFS file systems. All the files associated with VMs go into the ESXi host data store—a logical
container for the VM files. Use the VMware vSphere Client to access the different types of storage devices
that your ESXi host discovers and to deploy data stores on them.
In terms of accessing storage, the virtual disk always appears to the virtual machine as a mounted SCSI
device. The virtual disks within the data store hide the physical storage layer from the VM.
For direct VM access to a LUN on a Fibre Channel storage device, ESXi provides Raw Device Mapping.
This is useful when you have to share a LUN with a physical server or have SAN utilities running in the VM
that must access the LUN directly.
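A rough pyvmomi sketch of attaching an RDM in physical compatibility mode follows. The device path is a made-up example, capacityInKB should match the mapped LUN, and the controller/unit assumptions mirror the earlier disk-provisioning sketch.

    import ssl
    from pyVim.connect import SmartConnect
    from pyVmomi import vim

    si = SmartConnect(host="amp-vcenter.example.com", user="administrator@vsphere.local",
                      pwd="changeme", sslContext=ssl._create_unverified_context())
    content = si.RetrieveContent()

    view = content.viewManager.CreateContainerView(content.rootFolder, [vim.VirtualMachine], True)
    vm = next(v for v in view.view if v.name == "app-vm01")

    rdm = vim.vm.device.VirtualDisk(
        capacityInKB=50 * 1024 * 1024,   # should match the mapped LUN's size
        controllerKey=1000, unitNumber=2,
        backing=vim.vm.device.VirtualDiskRawDiskMappingVer1BackingInfo(
            fileName="",                 # vSphere generates the mapping file
            deviceName="/vmfs/devices/disks/naa.60060160a0b13100feedfacecafe0001",  # made-up LUN
            compatibilityMode="physicalMode",
            diskMode="independent_persistent"))

    change = vim.vm.device.VirtualDeviceSpec(
        operation=vim.vm.device.VirtualDeviceSpec.Operation.add,
        fileOperation=vim.vm.device.VirtualDeviceSpec.FileOperation.create,
        device=rdm)
    vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(deviceChange=[change]))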
The proliferation of the Network File System (NFS) protocol in the data center today, as well as the lower
cost-per-port of IP-based storage, has led many virtualization environments toward Network Attached Storage
(NAS) shared-storage resources. More and more ESXi deployments are leveraging NAS. For clarity, both NFS
and NAS refer here to the same type of storage. The capabilities of VMware vSphere on NFS are very similar
to those of VMware vSphere on block-based storage. VMware offers support for almost all features and functions
on NFS—as it does for VMware vSphere on SAN. Given its strong performance and stability when correctly
configured, NFS is a very viable option for many virtualization deployments.
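Mounting an NFS export as a data store is a one-call operation per host, as in this pyvmomi sketch; the server, export path, and host name are placeholders.

    import ssl
    from pyVim.connect import SmartConnect
    from pyVmomi import vim

    si = SmartConnect(host="amp-vcenter.example.com", user="administrator@vsphere.local",
                      pwd="changeme", sslContext=ssl._create_unverified_context())
    content = si.RetrieveContent()

    hosts = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
    host = next(h for h in hosts.view if h.name == "esxi-01.example.com")

    # Mount the export read-write; the local name becomes the data store label.
    host.configManager.datastoreSystem.CreateNasDatastore(
        spec=vim.host.NasVolume.Specification(
            remoteHost="vnx-dm.example.com", remotePath="/vblock_nfs_01",
            localPath="vnx_nfs_01", accessMode="readWrite"))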
VMware vSphere Storage DRS also relates to high availability. Strictly in terms of storage, it functions
essentially as a machine-placement and load-balancing mechanism based on I/O and space capacity. It is
used to provision virtual machines and monitor the storage environment. Storage DRS continuously balances
storage space usage and I/O while avoiding resource bottlenecks to meet application service levels. Its
monitoring feature continuously watches storage space and I/O utilization across a preassigned data store
pool. It also has a maintenance mode, in which VMDK files from DRS-enabled clusters can move to another
data store. (Note that a data store remains in maintenance mode until all virtual disks have moved to other
data stores within the cluster.)
Virtual Machine External Devices And Media Attachment
When installing new virtual machines and vApps, access to media and files is required. VMware vSphere
supports several mechanisms to access external media and devices. ISO image files contained on optical
media (CD/DVD) may be directly mounted. You may also mount and directly access files on USB storage
media. ESXi also supports NFS/CIFS mounts for storing virtual disks on NFS/CIFS data stores.
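For the common ISO case, the pyvmomi sketch below points a VM's existing CD-ROM device at an ISO on a data store and connects it; the VM name and ISO path are placeholders.

    import ssl
    from pyVim.connect import SmartConnect
    from pyVmomi import vim

    si = SmartConnect(host="amp-vcenter.example.com", user="administrator@vsphere.local",
                      pwd="changeme", sslContext=ssl._create_unverified_context())
    content = si.RetrieveContent()

    view = content.viewManager.CreateContainerView(content.rootFolder, [vim.VirtualMachine], True)
    vm = next(v for v in view.view if v.name == "app-vm01")

    # Reuse the VM's first CD-ROM device and aim it at a data store ISO.
    cdrom = next(d for d in vm.config.hardware.device
                 if isinstance(d, vim.vm.device.VirtualCdrom))
    cdrom.backing = vim.vm.device.VirtualCdromIsoBackingInfo(
        fileName="[vnx_nfs_01] iso/rhel6.iso")
    cdrom.connectable = vim.vm.device.VirtualDevice.ConnectInfo(
        connected=True, startConnected=True)

    change = vim.vm.device.VirtualDeviceSpec(
        operation=vim.vm.device.VirtualDeviceSpec.Operation.edit, device=cdrom)
    vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(deviceChange=[change]))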
Virtual Machine Installation
VMware vSphere components are distributed in a variety of formats. Extensible, elastic environments
leverage VMware vSphere's ability to manage and deploy VMs and virtual applications by cloning files.
The VMware vSphere configuration maximums are documented by VMware and list the maximum supported
processor, memory, storage, and I/O configurations for VMware vSphere deployments. These maximums
define the upper bound on the number of VMs that can be created within Vblock Systems.
Virtual Applications (vApps) are a collection of components that combine to create a virtual appliance
running as a VM. Several Vblock Systems management components reside on the AMP as vApps.
VMware uses the Open Virtualization Format/Archive (OVF/OVA) extensively. VMware vSphere relies on
OVF/OVA standard templates as a means of deploying virtual machines and vApps. A VM
template contains all of its OS, application, and configuration data. You can use an existing template as a
master to replicate any VM or vApp, or use it as the foundation to customize new VMs. Any VM can
become a template; it's a simple procedure from within vCenter.
VM cloning is another VM replication option. The existing virtual machine is the parent of the clone. When
the cloning operation is complete, the clone is a separate virtual machine, though it may share virtual disks
with the parent virtual machine. Again, cloning is a simple vCenter procedure.
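Deploying from a template (or cloning a VM, which uses the same call) looks roughly like this in pyvmomi; the template, cluster, and new VM names are hypothetical.

    import ssl
    from pyVim.connect import SmartConnect
    from pyVmomi import vim

    si = SmartConnect(host="amp-vcenter.example.com", user="administrator@vsphere.local",
                      pwd="changeme", sslContext=ssl._create_unverified_context())
    content = si.RetrieveContent()

    def find(vimtype, name):
        view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
        return next(o for o in view.view if o.name == name)

    template = find(vim.VirtualMachine, "rhel6-template")
    pool = find(vim.ClusterComputeResource, "Vblock-Prod").resourcePool

    spec = vim.vm.CloneSpec(
        location=vim.vm.RelocateSpec(pool=pool),   # placement; a datastore may also be set here
        powerOn=True, template=False)
    template.CloneVM_Task(folder=template.parent, name="app-vm02", spec=spec)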
A snapshot saves the current state of a virtual machine, providing the ability to revert to a
previous state if an error occurs while modifying or updating the VM.
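In API terms, as sketched below with pyvmomi and placeholder names, taking and reverting a snapshot are each single calls.

    import ssl
    from pyVim.connect import SmartConnect
    from pyVmomi import vim

    si = SmartConnect(host="amp-vcenter.example.com", user="administrator@vsphere.local",
                      pwd="changeme", sslContext=ssl._create_unverified_context())
    content = si.RetrieveContent()

    view = content.viewManager.CreateContainerView(content.rootFolder, [vim.VirtualMachine], True)
    vm = next(v for v in view.view if v.name == "app-vm01")

    # Snapshot before a risky change; quiesce the guest file system, skip memory state.
    vm.CreateSnapshot_Task(name="pre-update", description="before application update",
                           memory=False, quiesce=True)

    # If the change goes wrong, roll back to the snapshot just taken:
    # vm.RevertToCurrentSnapshot_Task()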
VMware vSphere Upgrades
VMware vSphere Update Manager (VUM) automates patch management and eliminates manual tracking
and patching of VMware vSphere hosts and virtual machines. It compares the state of VMware vSphere
hosts with baselines, then applies updates and patches to enforce compliance.
A VUM patch-compliance dashboard provides a window into patch status across the virtual infrastructure.
The VUM also lets you schedule patching for remote sites and deploy off-line patch bundles downloaded
directly from vendor websites. A VUM virtual machine resides on the AMP.
Patch application can lead to compatibility errors that require remediation. VMware vSphere Update
Manager can eliminate the most common patching problems before they occur, ensuring that the time saved
in batch-processing automation is not wasted later in performing rollbacks and dealing with one-off
problems. Snapshots can be kept for a user-defined period, and you can roll back the virtual machine
if necessary. Virtual machines can be patched securely off-line, without exposure to the network,
reducing the risk of non-compliant virtual machines. Automatic notification services help ensure that the
most current version of a patch has been applied.
VMware vSphere Update Manager works in conjunction with VMware vSphere Distributed Resource Scheduler
(DRS) to provide non-disruptive host patching when remediating a cluster. VUM also uses DRS to place hosts in
maintenance mode when migrating live VMs. Virtual machines are migrated back after patching.
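The maintenance-mode step that VUM orchestrates can also be driven directly, as in this pyvmomi sketch with a placeholder host name; with DRS enabled, entering maintenance mode triggers the live evacuation of the host's VMs.

    import ssl
    from pyVim.connect import SmartConnect
    from pyVmomi import vim

    si = SmartConnect(host="amp-vcenter.example.com", user="administrator@vsphere.local",
                      pwd="changeme", sslContext=ssl._create_unverified_context())
    content = si.RetrieveContent()

    hosts = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
    host = next(h for h in hosts.view if h.name == "esxi-01.example.com")

    # DRS migrates running VMs off the host (wait for the task; polling omitted for brevity).
    host.EnterMaintenanceMode_Task(timeout=0)
    # ... remediate/patch the host here ...
    host.ExitMaintenanceMode_Task(timeout=0)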
The Update Manager process begins with a baseline, which is essentially an update profile to model objects
(VMs, hosts, virtual appliances) after. A VMware repository gathers data about patches, extensions, and
upgrades, and then downloads and aggregates the data into a baseline. Baselines can be grouped and
added to an existing group.
During scanning, hosts, virtual machines, and virtual appliances are evaluated against baselines and baseline
groups to determine their level of compliance and what needs updating. Scanning starts the remediation
process, which applies patches, extensions, and upgrades.
Given the complexity involved in upgrading a converged infrastructure, VCE has implemented a full-scale
upgrade service, VCE™ Software Upgrade Service, providing everything from upgrade project planning
through implementation and verification.
Still, many customers prefer to perform upgrades in-house. Regardless, upgrades are based on the VCE
Release Certification Matrix (RCM), which lists the hardware and software component versions that have
been fully tested and verified for a particular release version of Vblock Systems. Updated components may
include, but are not limited to:
• AMP software
• AMP hardware
• Storage array firmware
• vSwitch firmware
• Switch hardware
• VMware vSphere
• vApps
• Plugins
Conclusion
This study guide represents a subset of all of the tasks, configuration parameters, and features that are part
of a Vblock Systems deployment and implementation. This study guide focused on deploying VMware
vSphere in a VCE Vblock Systems converged infrastructure. Vblock Systems come configured with specific
customer-defined server, storage, and networking hardware that is already VMware qualified. The bulk of
this study guide concentrated on how to configure and manage the virtual infrastructure on Vblock Systems.
Exam candidates with the related recommended prerequisite working knowledge, experience, and training
should thoroughly review this study guide and the resources in the References document (available on the
VCE Certification website) to help them successfully complete the VCE Vblock Systems Deployment and
Implementation: Virtualization Exam.
ABOUT VCE
VCE, formed by Cisco and EMC with investments from VMware and Intel, accelerates the adoption of
converged infrastructure and cloud-based computing models that dramatically reduce the cost of IT while
improving time to market for our customers. VCE, through the Vblock Systems, delivers the industry's
only fully integrated and fully virtualized cloud infrastructure system. VCE solutions are available
through an extensive partner network, and cover horizontal applications, vertical industry offerings, and
application development environments, allowing customers to focus on business innovation instead of
integrating, validating, and managing IT infrastructure.
For more information, go to vce.com.
Copyright © 2014 VCE Company, LLC. All rights reserved. VCE, VCE Vision, Vblock, and the VCE logo are registered trademarks or
trademarks of VCE Company LLC or its affiliates in the United States and/or other countries. All other trademarks used herein are the property
of their respective owners.