Pure Storage and VMware Integration

Stefano Pirovano
System Engineer
@StePir75
Pure Storage is Flash for the Enterprise
•  Consistent performance: 100% MLC flash
•  Less cost than disk: inline deduplication & compression
•  Mission-critical reliability: 99.999% availability, non-disruptive operations
•  Scalable & compatible: 10 to 100s of TBs, Purity software
DB, VSI, VDI: Where Flash & Dedup Are Disruptive
•  Market-leading all-flash array (Gartner Magic Quadrant: Solid State Arrays, August 2014)
•  #1 all-flash array for databases, VSI, VDI (Gartner Critical Capabilities: Solid State Arrays, August 2014)
The Disruption of Simplicity: 5 Differentiators
•  All-inclusive software pricing
•  Industry's broadest end-to-end guarantee
•  No required training or professional services
•  Fanatically proactive support: CloudAssist provides real-time monitoring, global analysis/analytics, proactive resolution, and continuous improvement
•  A better approach to storage acquisition & lifecycles
Agenda
•  VMware VAAI
•  Pure Storage vSphere Web Client Plugin
•  Site Recovery Manager
•  vVol Program
vStorage API for Array Integration (VAAI)
•  SCSI-based offloading of common operations to the storage array
  –  Cloning of VMs, zeroing of disk space, metadata locking
•  Four supported primitives:
  –  Block Zero (WRITE SAME)
  –  Hardware-Assisted Locking (ATS)
  –  Full Copy (XCOPY)
  –  Dead Space Reclamation (UNMAP)
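Each primitive maps to a standard SCSI command. As a quick reference, a minimal Python sketch of that mapping (the opcode values come from the SCSI SBC/SPC standards; the dictionary itself is purely illustrative):

```python
# The four VAAI primitives and the SCSI commands they ride on.
# Opcode values are from the SCSI SBC/SPC standards.
VAAI_PRIMITIVES = {
    "Block Zero":                    ("WRITE SAME (16)",   0x93),
    "Hardware-Assisted Locking/ATS": ("COMPARE AND WRITE", 0x89),  # the slide's "Compare and Swap"
    "Full Copy":                     ("EXTENDED COPY",     0x83),  # a.k.a. XCOPY
    "Dead Space Reclamation":        ("UNMAP",             0x42),
}

for primitive, (scsi_name, opcode) in VAAI_PRIMITIVES.items():
    print(f"{primitive:30} -> {scsi_name} (opcode 0x{opcode:02X})")
```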
Hardware-Accelerated Zero
•  Without API
  –  SCSI WRITE: many identical small blocks of zeroes are moved from host to array for MANY VMware IO operations
  –  Pure automatically ignores zeroes and never writes them to the drives, so there is no "zero reclaim" penalty
  –  New guest IO to the VMDK is "pre-zeroed"
•  With API
  –  SCSI WRITE SAME: one giant block of zeroes is moved from host to array and repeatedly written by the array
  –  A thin-provisioned array skips the zeroes completely (no "zero reclaim" pass needed)
•  Use cases
  –  Reduced IO when writing to new blocks in the VMDK for any VM
  –  Faster time to create VMs (particularly FT-enabled VMs)
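To see the scale of the savings, a back-of-the-envelope sketch (the 1 MB host transfer size is an assumption for illustration, not a VMware default):

```python
# Commands needed to zero out a 100 GB VMDK region.
region_gb = 100
write_size_mb = 1  # assumed host transfer size without the API

writes_without_api = region_gb * 1024 // write_size_mb  # one SCSI WRITE per MB
writes_with_api = 1  # one WRITE SAME describes the whole zeroed range

print(f"Without VAAI: {writes_without_api:,} SCSI WRITEs full of zeroes")
print(f"With VAAI:    {writes_with_api} SCSI WRITE SAME")
# On FlashArray the zeroes are additionally never written to flash at all,
# per the slide above, so the array-side cost drops to metadata only.
```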
Hardware-Accelerated Locking
•  Without API
  –  Reserves the complete LUN so that the host can obtain a lock
  –  Requires several SCSI commands
  –  LUN-level locks affect adjacent hosts
•  With API
  –  Locks occur at the block level
  –  One efficient SCSI command: SCSI Compare and Swap (CAS)
  –  Block-level locks have no effect on adjacent hosts
•  Use cases
  –  Bigger clusters with more VMs
  –  View, Lab Manager, VMware vCD
  –  More & faster VM snapshotting
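A toy model of the difference, illustrating the compare-and-swap idea rather than VMFS's actual on-disk locking protocol:

```python
import threading

class Lun:
    """Toy LUN: a list of lock records plus array-side atomicity."""
    def __init__(self, nblocks):
        self.blocks = [None] * nblocks
        self._atomic = threading.Lock()  # stands in for the array's atomicity

    def compare_and_swap(self, index, expected, new):
        # ATS model: one SCSI command, atomic on a single block.
        # Other blocks, and hence other hosts, are untouched; contrast with
        # the pre-ATS model, where a SCSI RESERVE stalls the whole LUN for
        # every host while one host does a read/modify/write of one record.
        with self._atomic:
            if self.blocks[index] == expected:
                self.blocks[index] = new
                return True
            return False  # lost the race: retry, without blocking the LUN

lun = Lun(nblocks=8)
assert lun.compare_and_swap(3, expected=None, new="host-A")      # host A wins
assert not lun.compare_and_swap(3, expected=None, new="host-B")  # host B retries
```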
Hardware-Accelerated Copy
•  Without API
  –  SCSI READ (data moved from array to host)
  –  SCSI WRITE (data moved from host to array)
  –  Repeat MANY times
  –  Long periods of large VMFS-level IO, done via millions of small-block operations
•  With API
  –  SCSI EXTENDED COPY (data moved within the array)
  –  Repeat
  –  Order-of-magnitude reduction in IO operations
  –  Order-of-magnitude reduction in array IOPS
•  Use cases
  –  Storage vMotion ("let's Storage vMotion")
  –  VM creation from template ("give me a VM clone/deploy from template")
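A rough sketch of the host-side traffic on the two paths (the 64 KB and 16 MB transfer sizes are illustrative assumptions, not VMware defaults):

```python
# Copying a 40 GB VMDK with and without hardware-accelerated copy.
vmdk_gb = 40
host_io_kb = 64        # assumed READ/WRITE transfer size without the API
xcopy_segment_mb = 16  # assumed extent described by each EXTENDED COPY

pairs = vmdk_gb * 1024 * 1024 // host_io_kb
print(f"Without VAAI: {pairs:,} READ/WRITE pairs, "
      f"{2 * vmdk_gb} GB moved through the host")

xcopies = vmdk_gb * 1024 // xcopy_segment_mb
print(f"With VAAI:    {xcopies:,} EXTENDED COPY commands, "
      f"0 GB through the host (data moves inside the array)")
```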
FlashRecover Accelerated VM Cloning
Hardware-Driven VM Cloning via XCOPY
•  Legacy disk array (no XCOPY): full copy through the virtualization host; data stored twice; 112 seconds
•  Legacy disk array (with XCOPY): copy performed by the array; data stored twice; 22 seconds
•  Pure Storage accelerated VM cloning: metadata snap; data stored once; 10 seconds
Example: comparing Pure Storage vs. xxxxxx cloning 1,000 40GB VMs*
* http://virtualgeek.typepad.com/virtual_geek/2012/12/vmax-and-vsphere-vaai-xcopy-update.html
Hardware-Accelerated Copy
100 GB Zeroedthick Virtual Disk (50 GB of data)
Hardware-Accelerated Block Delete
•  Without API
  –  Changes to the VMDK are not propagated to the backend LUN (the array has no way to know)
  –  If data is constantly rewritten, the LUN grows until it fills up
•  With API
  –  Deleted blocks in the filesystem are translated to block deletes (UNMAP) on the LUN
  –  Better LUN space management
•  Use cases
  –  Great for us: we can mark the deleted blocks and reclaim them immediately; a less confused VI admin
  –  No backend LUN bloat
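A toy model of the effect on a thin-provisioned LUN, assuming a simple allocate-on-write array rather than Purity's actual allocator:

```python
class ThinLun:
    """Toy thin-provisioned LUN that allocates backing blocks on write."""
    def __init__(self):
        self.allocated = set()

    def write(self, lba):
        self.allocated.add(lba)      # rewrites of freed guest space land on new blocks

    def unmap(self, lba):
        self.allocated.discard(lba)  # VAAI UNMAP returns the block to the pool

lun_no_api, lun_api = ThinLun(), ThinLun()
for generation in range(5):          # guest writes, deletes, rewrites elsewhere
    lbas = range(generation * 100, generation * 100 + 100)
    for lba in lbas:
        lun_no_api.write(lba)
        lun_api.write(lba)
    for lba in lbas:                 # guest deletes the files...
        lun_api.unmap(lba)           # ...but only the API-aware stack says so

print(len(lun_no_api.allocated))  # 500: without UNMAP the LUN only ever grows
print(len(lun_api.allocated))     # 0: deleted space is reclaimed each pass
```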
Flash Makes Virtual Administration Faster
•  Provision a 50GB VM from template (XCOPY benefit): 3 min on disk → 10 sec on Pure
•  Boot 100 VDI desktops (ATS benefit): 15 min on disk → <1 min on Pure
•  Storage vMotion a 50GB VM (XCOPY benefit): 4 min on disk → 10 sec on Pure
•  Recompose 100 VMs: 45 min on disk → 4 min on Pure
Complete Management within vSphere
•  vCenter Web Client Plugin: complete management of the FlashArray from within vCenter
•  Automated datastore creation: just specify a size and you're done. No LUNs, no RAID, no WWNs, no rescanning
•  Complete capacity visibility: see through deduplication, compression, and thin provisioning to understand real capacity
•  Complete performance visibility: correlate IOPS, latency, and bandwidth on a per-datastore basis
•  VMware Ready and VAAI Certified: the highest performance possible, jointly supported by Pure Storage and VMware
Pure Storage vSphere Web Client Plugin
Datastore → Array Visibility
Automated Datastore Creation
Automated Datastore Resizing
VMware Site Recovery Manager
•  Disaster recovery automation product for VMware environments
•  Leverages array-based replication to migrate or recover virtual machines from one datacenter to another
•  Interacts with vendor-supplied Storage Replication Adapters (SRAs) to migrate storage
•  No licensing costs for Pure Storage replication or SRA use
FlashRecover Replication
The Benefits of Asynchronous + Snapshot Replication Combined
Flexible replication with low RPO
•  Differentials-based, bi-directional
•  Replicates every minute (configurable)
•  Retains a library of 1000s of point-in-time (PIT) remote snaps for recovery
Simple setup in minutes
•  Automation with Protection Policies (consistency groups and variable retention)
•  No pre-planning or professional services!
Data reduction-optimized
•  Always thin, deduped, compressed
•  Delta changes only after the baseline
•  Data reduction-accelerated clone replicas
Instant recovery for zero RTO
•  Instantly export any PIT replica
•  Instantly roll backward or forward
Advanced multi-site replication
•  1:many, many:1, or many:many
FlashRecover Protection Policies
Policy-based Automation of Local and Remote Retention Schedules
[Diagram: volumes, hosts, and host groups on the source array are grouped into Protection Groups; Day 0, Day 1, Day 2… point-in-time copies are retained locally and replicated to the target, which retains all replicas.]
Protection Groups
•  Delivers data protection consistency across multiple volumes or multiple hosts
•  Group multiple objects (volumes, hosts, host groups) within a Protection Group
•  Objects can belong to multiple Protection Groups
Variable Retention Automation
•  Configurable frequency schedule and retention period for local snapshots and replication
•  The policy automatically creates and expires snapshots and remote replicas
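A minimal sketch of the create-and-expire loop such a policy implies (the field names are hypothetical, not the FlashArray API):

```python
from dataclasses import dataclass, field

@dataclass
class ProtectionPolicy:
    """Hypothetical FlashRecover-style policy: snapshot every
    `frequency_min` minutes, keep each copy for `retention_min` minutes."""
    frequency_min: int
    retention_min: int
    snapshots: list = field(default_factory=list)  # creation times, in minutes

    def tick(self, now_min):
        if now_min % self.frequency_min == 0:          # create on schedule
            self.snapshots.append(now_min)
        self.snapshots = [t for t in self.snapshots    # expire past retention
                          if now_min - t <= self.retention_min]

policy = ProtectionPolicy(frequency_min=5, retention_min=60)
for minute in range(240):
    policy.tick(minute)
print(len(policy.snapshots))  # steady state: roughly retention / frequency copies
```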
Recovery with FlashArray
[Diagram: Volume A in the protected environment has Snapshots A1, A2, and A3; each is replicated to the recovery environment as Remote Snapshot A1, A2, and A3, from which a recovered Volume B can be brought up.]
vVols and VASA
•  What are vVols and VASA?
•  What problems are they solving?
•  What opportunities do they create for Pure?
•  What risks do they present?
•  What are we planning to deliver?
Old world
•  Create a volume
•  Export it to a bunch of ESXi hosts
•  Create a VMFS filesystem and attach it to vCenter as a datastore
•  Create VMs or migrate them onto the datastore
Problems
•  VMFS is slower than a raw device
•  Duplication of functionality: both the storage array and ESXi implement thin provisioning, fast cloning, and snapshots
  –  The storage array does it at the volume level
  –  ESXi does it at the virtual-disk level
  –  Storage array functions apply to all VMs in the LUN, when you often want to target a single VM
•  You have a big pool of storage that you want to share among many ESXi servers
  –  You have to carve it up into multiple LUNs because of clustering limits in VMFS
  –  Get the division wrong and it can be hard to fix
      §  You can't shrink a VMFS volume
      §  Expanding may fail under load
Solution - vVols
•  Give each virtual disk of a VM its own volume
  –  E.g. 2000 VMs with 2 virtual disks each = 4000 volumes
•  Expose a standard API (through VASA) for:
  –  Create/resize/delete a volume
  –  Snapshot a volume
  –  Clone a volume
  –  List volumes
  –  Connect/disconnect a volume to a host
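A sketch of the shape of that API as a plain interface. The method names are hypothetical stand-ins for illustration, not the actual VASA bindings:

```python
from abc import ABC, abstractmethod

class VvolProvider(ABC):
    """Hypothetical per-virtual-disk volume API mirroring the operations
    listed above; names are illustrative, not the real VASA interface."""

    @abstractmethod
    def create_volume(self, name: str, size_bytes: int) -> str: ...

    @abstractmethod
    def resize_volume(self, volume_id: str, new_size_bytes: int) -> None: ...

    @abstractmethod
    def delete_volume(self, volume_id: str) -> None: ...

    @abstractmethod
    def snapshot_volume(self, volume_id: str) -> str:
        """Returns a new volume id: each snapshot is itself a vVol."""

    @abstractmethod
    def clone_volume(self, volume_id: str) -> str: ...

    @abstractmethod
    def list_volumes(self) -> list[str]: ...

    @abstractmethod
    def connect(self, volume_id: str, host: str) -> None: ...

    @abstractmethod
    def disconnect(self, volume_id: str, host: str) -> None: ...
```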
Important details
•  What about small config files (e.g. .vmx)?
  –  They live in a per-VM config volume (2-8GB)
  –  The config volume happens to be formatted as VMFS
•  Each snapshot or clone of a vVol is a separate vVol
Old vs. new world
[Diagram: in the old world, one VMFS volume on the FlashArray backs all virtual disks; in the new world, each virtual disk is its own volume on the FlashArray.]
vVols - Problems solved
•  Snapshot/clone of a virtual disk ⇔ snapshot/clone of a volume
•  No VMFS translation overhead for a VM's virtual disks
•  No shared VMFS metadata that needs coordinated updates == fewer clustering limits
•  No VMFS filesystem size to limit the size of the VMs (a double-edged sword)
•  Fewer XCOPYs / UNMAPs; instead we see snapshot-volume / delete-volume operations
vVols - opportunities
•  Per-virtual-disk or per-VM services
  –  Space and performance reporting
  –  Replication / data protection / tiering
•  Simpler user experience
Storage containers
•  Motivation
  –  Many storage arrays have different pools (e.g. a RAID-6 pool vs. a RAID-10 pool)
  –  We want to target vVol creation at the right pool
•  Solution
  –  Each pool can expose a storage container to VMware
  –  Storage containers are mapped to datastores
  –  When creating/cloning a virtual disk, the admin can choose a target datastore
  –  ESXi targets the create/clone operation at the corresponding storage container
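A minimal sketch of the routing this implies; the names and the strict 1:1 datastore-to-container mapping are illustrative assumptions:

```python
# Hypothetical pools exposed as storage containers, mapped to datastores.
datastore_to_container = {
    "raid10-datastore": "raid10-pool-container",
    "raid6-datastore":  "raid6-pool-container",
}

def create_virtual_disk(name, size_gb, datastore):
    # ESXi resolves the admin's datastore choice to a storage container
    # and targets the create (or clone) operation there.
    container = datastore_to_container[datastore]
    print(f"creating vVol '{name}' ({size_gb} GB) in {container}")

create_virtual_disk("db-disk-01", 200, datastore="raid10-datastore")
```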
Storage containers & Pure
•  We've got one pool, so we only need to expose one storage container
•  But:
  –  We could allow users to create additional storage containers via our GUI for organization (finance vs. engineering)
  –  When replicating vVols from a remote array, we might want to put them in a separate storage container for easy clean-up in case the replication arrangement ends
  –  We might need a storage container per vCenter; it's unclear whether two vCenters can safely share a storage container
  –  Containers might be useful when we have quotas
About those new APIs
•  Create/resize/snapshot, etc.
•  VMware didn't extend SCSI or NFS to add these features
•  Instead, it added the APIs to the VMware APIs for Storage Awareness (VASA)
•  VASA is accessed over HTTPS/TCP/IP/Ethernet through the management port
•  If the management port is down, you can't create or start VMs!
  –  Today, if IP is down, a Fibre Channel array will still happily serve data.
VASA – grab bag of features
•  vVol APIs
•  Describe the storage array to vCenter
  –  How many controllers? How many ports on each controller? What logical units are exposed? What is the service level of a logical unit (e.g. gold, bronze)? What protocols are supported?
  –  Like an SNMP or CIM object model
  –  Tell VMware which storage pools are backing the LUNs it sees
      §  Don't automatically Storage vMotion across two LUNs that are in the same pool
•  Policy APIs
•  Alert/event API for raising alerts and events with vSphere
Policies
•  Problem
  –  A VMware administrator is creating a VM: which datastore should he use?
  –  Today he sends an e-mail to the storage admin asking what SLAs are associated with the various datastores
  –  How does the VMware administrator find out if an SLA isn't being met or has changed?
•  Solution: VASA policies
  –  Schema: the vendor programmatically describes the knobs supported (e.g. there is a replication-interval knob and it is a time duration)
  –  Profile on a vVol: the requested knob settings (e.g. this vVol needs a replication interval of 5 minutes)
  –  Profile on a storage container: the knobs the container supports (e.g. replication intervals from 5 minutes to many minutes)
  –  Compliance: vCenter periodically queries the storage and asks whether it has been able to satisfy the policies
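A minimal sketch of the schema / profile / compliance split described above (the knob names are illustrative, not the real VASA policy model):

```python
# Schema: the vendor describes which knobs exist and their types.
schema = {"replication_interval_min": "duration_minutes"}

# Profile on the storage container: the range of settings it supports.
container_profile = {"replication_interval_min": (5, 1440)}  # 5 min .. 1 day

# Profile on a vVol: the settings this virtual disk requests.
vvol_profile = {"replication_interval_min": 5}

def placement_compatible(vvol, container):
    """Can the container satisfy every requested knob? (datastore filtering)"""
    return all(lo <= vvol[knob] <= hi for knob, (lo, hi) in container.items())

def compliant(requested_min, achieved_min):
    """Periodic compliance check: is the array actually meeting the policy?"""
    return achieved_min <= requested_min

print(placement_compatible(vvol_profile, container_profile))  # True: show datastore
print(compliant(requested_min=5, achieved_min=7))             # False: raise an alert
```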
Policies – new world
•  Admin creates a virtual disk
  –  Specifies the policy they want to apply
  –  VMware shows compatible datastores
  –  Admin chooses one
  –  Admin can get an alert if the datastore is unable to meet the policy
Thanks!