IBM System Storage SVC

Danijel Paulin, [email protected]
Systems Architect, SEE
IBM Croatia
IBM Storage Virtualization
Cloud enabling technology
11th TF-Storage Meeting, 26-27 September 2012, Dubrovnik, Croatia
© 2012 IBM Corporation
Agenda
- Introduction
- Virtualization – function and benefits
- IBM Storage Virtualization
- Virtualization Appliance SAN Volume Controller
- Virtual Storage Platform Management
- Integrated Infrastructure System – "Cloud Ready"
- Summary
Smarter Computing
New approach in designing IT Infrastructures
Smarter Computing is realized through an IT infrastructure that is designed for data, tuned to
the task, and managed in the cloud...
[Diagram: designed for data – greater storage efficiency and flexibility; tuned to the task – workload systems, higher utilization; managed in the cloud – increased virtualization as the foundation for cloud, better economics.]
Building a cloud starts with virtualizing your IT environment
The journey to the cloud begins with virtualization!
1. Virtualize – server, storage, and network devices to increase utilization
2. Provision & Secure – automate provisioning of resources
3. Monitor & Manage – provide visibility of the performance of virtual machines
4. Orchestrate Workflow – manage the process for approval of usage
5. Meter & Rate – track usage of resources
IBM Virtualization Offerings
Server virtualization
- System p, System i, System z LPARs, VMware ESX, IBM Smart Business Desktop Cloud
- Virtually consolidate workloads on servers
File and file system virtualization
- Scale Out NAS (SONAS), DFSMS, IBM General Parallel File System, N series
- Virtually consolidate files in one namespace across servers
Storage virtualization
- SAN Volume Controller (the Storage Hypervisor), ProtecTIER
- Industry-leading storage virtualization solutions
Server and storage infrastructure management
- Data protection with Tivoli Storage Manager and TSM FastBack
- Advanced management of virtual environments with TPC, IBM Director VMControl, TADDM, ITM, TPM
- Consolidated management of virtual and physical storage resources
IBM storage cloud solutions
- Smart Business Storage Cloud (SONAS), IBM SmartCloud Managed Backup
- Virtualization and automation of storage capacity, data protection, and other storage services
Virtualization – functions and benefits
Four basic virtualization functions turn physical resources into virtual resources:

Sharing
- Examples: LPARs, VMs, virtual disks, VLANs
- Benefits: resource utilization, workload management, agility, energy efficiency
Aggregation
- Examples: virtual disks, system pools
- Benefits: management simplification, investment protection, scalability
Emulation (resource type X presented as resource type Y)
- Examples: architecture emulators, iSCSI, FCoE, virtual tape
- Benefits: compatibility, software investment protection, interoperability, flexibility
Insulation (resources can be added, replaced, or changed underneath)
- Examples: compatibility modes, CUoD, appliances
- Benefits: agility, investment protection, complexity and change hiding
What is Storage Virtualization?
Technology that makes one set of resources look and feel like another set of resources.
A logical representation of physical resources:
- Hides some of the complexity
- Adds or integrates new function with existing services
- Can be nested or applied to multiple layers of a system
[Diagram: a virtualization layer sits between the logical representation and the physical resources.]
What distinguishes a Storage Cloud
from Traditional IT?
1. Storage resources are virtualized from multiple arrays, vendors, and
datacenters – pooled together and accessed anywhere.
(as opposed to physical array-boundary limitations)
2. Storage services are standardized – selected from a storage service
catalog.
(as opposed to customized configuration)
3. Storage provisioning is self-service – administrators use automation to
allocate capacity from the catalog.
(as opposed to manual component-level provisioning)
4. Storage usage is paid per use – end users are aware of the impact of their
consumption and service levels.
(as opposed to paid from a central IT budget)
IBM Storage Virtualization
Today's SAN
SAN-attached disks look like local disks to the OS and applications.
SAN – with Virtualization
Virtual disks start out as images of the migrated non-virtual disks; later they can be modified to use striping, thin provisioning, and other features of the virtualization layer.
Become truly flexible!
Virtual disks remain constant while the physical infrastructure changes underneath them.
Enable tiered storage!
Moving virtual disks between storage tiers requires no downtime; a CLI sketch follows.
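A minimal sketch of such a tier move from the SVC command line, assuming the SVC 6.x CLI and hypothetical volume and pool names:

  # Move a volume to a different (e.g. SSD-backed) storage pool, online
  svctask migratevdisk -vdisk DB_vol01 -mdiskgrp SSD_pool

  # Track migration progress
  svcinfo lsmigrate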
Avoid planned downtime!
The virtualization layer itself can be upgraded or replaced with no downtime.
In-band Storage Virtualization - Benefits
Isolation
1. Flat interoperability matrix
2. Non-disruptive migrations
3. No-cost multipathing
Pooling
1. Higher (pool) utilization
2. Cross-pool striping: more IOPS
3. Thin provisioning: free GB
Performance (cache + SSD)
1. Performance increase
2. Hot-spot elimination
3. Adds SSD capability to old gear
Mirroring (license $$)
1. License economies
2. Cross-vendor mirroring
3. Favorable TCO
Migration into Storage Virtualization (and back!)
Existing LUNs are taken over as virtual disks in transparent Image Mode before being converted to fully striped virtual disks. This works backwards too, so there is no vendor lock-in; a sketch of the CLI flow follows.
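A minimal sketch of the take-over flow, assuming the SVC 6.x CLI; the managed disk, pool, and volume names are hypothetical:

  # Present the legacy LUN one-to-one as an image-mode volume
  svctask mkvdisk -mdiskgrp Image_pool -iogrp 0 -vtype image -mdisk mdisk12 -name legacy_vol01

  # Later, migrate it into a striped pool (and back again if needed)
  svctask migratevdisk -vdisk legacy_vol01 -mdiskgrp Striped_pool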
Redundant SAN!
[Diagram: the virtualization layer attaches to two redundant fabrics, SAN A and SAN B, with hosts zoned to both.]
Virtualization Appliance
SAN Volume Controller
Storage Hypervisor
Virtual Storage Platform – SAN Volume Controller
- Common device driver: iSCSI or FC host attach
- Common capabilities:
  - I/O caching and cross-site cache coherency
  - Thin provisioning
  - Easy Tier automated tiering to solid-state drives
  - Snapshot (FlashCopy)
  - Mirroring (synchronous and asynchronous)
- Data mobility:
  - Transparent data migration among arrays and across tiers
  - Snapshot and mirroring across arrays and tiers
Virtual Storage Platform Management – Tivoli Storage Productivity Center
- Manageability:
  - Integrated SAN-wide management with Tivoli Storage Productivity Center
  - Integrated IBM server and storage management (Systems Director Storage Control)
- Replication:
  - Application-integrated FlashCopy
  - DR automation
- High availability:
  - Stretched-cluster HA
[Diagram: IBM Systems Director and VMControl manage the virtual server infrastructure; Tivoli Storage Productivity Center manages the virtual storage infrastructure built on the Storage Hypervisor (SAN Volume Controller).]
Virtualization Appliance: SAN Volume Controller
- Stand-alone product
- Clustered in node pairs, 2 to 8 nodes
- Write cache mirrored within node pairs (I/O groups)
- Multi-use Fibre Channel in and out
- Linux boot, 100% IBM stack
TCA: 1. hardware, 2. per-TB license (tiered), 3. per-TB mirroring license
6th Generation
- Continuous development since the initial release
- Firmware is backwards compatible (64-bit firmware not for 32-bit hardware)
- Nodes can be replaced while online
Current model: SAN Volume Controller CG8 – firmware v6.4

Models:
- SVC 4F2 – 4 GB cache, 2 Gb SAN (Rel. 3 / 2006)
- SVC 8F2 – 8 GB cache, 2 Gb SAN (RoHS compliant)
- SVC 8F4 – 8 GB cache, 4 Gb SAN; 155,000 SPC-1™ IOPS
- SVC 8G4 – adds dual-core processor; 272,500 SPC-1™ IOPS
- SVC CF8 – 24 GB cache, quad-core; 380,483 SPC-1 IOPS (6 nodes)
- SVC CG8 – adds 10 GbE; approx. 640,000 SPC-1-like IOPS
SVC Model & Code Release History
 1999 – Almaden Research group publish ComPaSS clustering
 2000 – SVC ‘lodestone’ development begins using ComPaSS
 2003 – SVC 1.1 – 4F2 Hardware 4 node
 2004 – SVC 1.2 – 8 node support
 2004 – SVC 2.1 – 8F2 Hardware
 2005 – SVC 3.1 – 8F4 Hardware
 2006 – SVC 4.1 – Global Mirror, MTFC
 2007 – SVC 4.2 – 8G4 Hardware, FlashCopy enh
 2008 – SVC 4.3 – Thin Provisioning, Vdisk Mirror 8A4 Hdw
 2009 – SVC 5.1 – CF8 Hardware, SSD Support, 4 Site
 2010 – SVC 6.1 – V7000 Hardware, RAID, Easy Tier
 2011 – SVC 6.2/3 – V7000U, 10G iSCSI, xtD Split Cluster
 2012 – SVC 6.4 – IBM Real-time Compression, FCoE, Volume mobility...
22
© 2012 IBM Corporation
SVC 2145-CG8 – Virtualization Appliance
- Based on the IBM System x3550 M3 server (1U)
  - Intel® Xeon® 5600 (Westmere) 2.53 GHz quad-core processor
- 24 GB of cache
  - Up to 192 GB of cache per SVC cluster
- Four 8 Gbps FC ports (short-wave and long-wave SFPs supported)
  - Up to 32 FC ports per SVC cluster
  - For external storage, and/or server attachment, and/or Remote Copy/Mirroring
- Two 1 Gbps iSCSI ports
  - Up to 16 GbE ports per SVC cluster
- Optional 1 to 4 solid-state drives
  - Up to 32 SSDs per SVC cluster
- Optional two 10 Gbps iSCSI/FCoE ports
- New engines may be intermixed in pairs with other engines in SVC clusters
  - Mixing engine types in a cluster gives each volume the throughput characteristics of the engine type in its I/O group
- The cluster non-disruptive upgrade capability may be used to replace older engines with new CG8 engines
IBM SAN Volume Controller Architecture
[Diagram: hosts with a consistent driver stack access virtual disks (shown here in striped mode); SVC nodes, each with a UPS (not depicted), form I/O groups within the SAN Volume Controller cluster; array LUNs appear as managed disks and are grouped into storage pools. A CLI sketch of building pools follows.]
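A minimal sketch of building the managed-disk layer, assuming the SVC 6.x CLI and hypothetical object names:

  # Create a storage pool with a 256 MB extent size
  svctask mkmdiskgrp -name Pool1 -ext 256

  # Add discovered array LUNs (managed disks) to the pool
  svctask addmdisk -mdisk mdisk4:mdisk5 Pool1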
IBM SAN Volume Controller – Topology
[Diagram: topology of an SVC cluster in the SAN.]
Virtual-Disk Types
Image mode
- Pass-through: virtual disk = physical LUN
Sequential mode
- Virtual disk mapped sequentially to a portion of a managed disk
Striped mode
- Virtual disk striped across multiple managed disks; the preferred mode (a creation sketch follows)
[Diagram: virtual disks A, B, and C mapped onto managed disk groups MDG1–MDG3.]
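A minimal sketch of creating a striped-mode volume, assuming the SVC 6.x CLI and hypothetical names:

  # 100 GiB volume, striped across the managed disks of Pool1,
  # owned by I/O group 0
  svctask mkvdisk -mdiskgrp Pool1 -iogrp 0 -size 100 -unit gb -vtype striped -name app_vol01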
IBM SAN Volume Controller I/O Stack
- SVC software has a modular design
  - 100% in-house code path
- Each function is implemented as an independent component
  - Components are bypassed if not in use for a given volume
- Standard interfaces between components
  - Easy to add or remove components
- Components exploit a rich set of libraries and frameworks
  - Minimal Linux base OS to bootstrap and hand control to user space
  - Custom memory management and thread scheduling
  - Optimal I/O code path (on the order of 60 µs)
  - Clustered support processes such as the GUI, slpd, CIMOM, and Easy Tier
[Diagram: the I/O stack from the SCSI front end through Remote Copy, Cache, FlashCopy, Mirroring, Space-Efficient, Virtualization, Easy Tier, and RAID to the SCSI back end (internal drives or external storage).]
IBM SAN Volume Controller Management Options
- SVC GUI: completely redesigned, browser-based, extremely easy to learn and use
- SVC CLI: ssh access, scripting, complete command set (see the sketch below)
- Tivoli Storage Productivity Center: TPC and TPC-R, via SMI-S 1.3 and the embedded CIMOM
- Also: Microsoft VDS and VSS hardware providers, VMware vCenter plug-in, Systems Director Storage Control
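Every management function is scriptable over ssh; a minimal sketch, assuming a cluster reachable at the hypothetical address svc-cluster:

  # List volumes and storage pools from any ssh client
  ssh admin@svc-cluster "svcinfo lsvdisk -delim :"
  ssh admin@svc-cluster "svcinfo lsmdiskgrp"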
SAN Volume Controller Features
SAN Volume Controller Features - summary
- FlashCopy, point-in-time copy (optional; see the CLI sketch after this list)
  - Up to 256 targets per source
  - Full (with background copy = clone) or partial (no background copy)
  - Incremental, cascaded, and reverse copies
  - Space-efficient targets
  - Consistency groups
  - A FlashCopy target may also be a Remote Copy source
- Remote Copy (optional)
  - Synchronous and asynchronous remote replication with consistency groups (Metro Mirror / Global Mirror relationships between SVC clusters)
- Cache partitioning
- Embedded SMI-S agent
- Easy-to-use GUI
  - Built-in real-time performance monitoring
- E-mail, SNMP trap, and syslog error/event logging
- Authentication service for single sign-on and LDAP
- Virtualize data without data loss
- Expand or shrink volumes online
- Thin-provisioned volumes
  - Reclaim zero-write space
  - Thick-to-thin, thin-to-thick, and thin-to-thin migration
- Online volume migration
- Volume mirroring (volume copies on different managed disks, including mixed HDD/SSD)
- EasyTier: automatic relocation of hot and cold extents for optimized performance and throughput
- Microsoft Virtual Disk Service and Volume Shadow Copy Service hardware providers
- VMware
  - Storage Replication Adapter for Site Recovery Manager
  - VAAI support and a vCenter Server management plug-in
[Diagrams: FlashCopy cascades and consistency groups (Vol0 → Vol1 → Vol2, Vol3/Vol4); Metro/Global Mirror relationships to a consolidated DR site; volume copies across HDDs and SSDs.]
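A minimal FlashCopy sketch, assuming the SVC 6.x CLI and hypothetical volume names:

  # Define a mapping from source to an equally sized target volume;
  # copyrate 0 = no background copy, >0 = background copy (clone)
  svctask mkfcmap -source app_vol01 -target app_vol01_snap -name fcmap0 -copyrate 50

  # Prepare (flush cache) and trigger the point-in-time copy
  svctask startfcmap -prep fcmap0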
Volume Mirroring
Back-end high availability and migration
- SVC stores two copies of a volume (see the CLI sketch after this list)
  - It keeps both copies in sync, reads from the primary copy, and writes to both copies
- If the disk system supporting one copy fails, SVC provides continuous data access using the other copy
  - Copies are automatically resynchronized after repair
- Intended to protect critical data against failure of a disk system or disk array
  - A local high-availability function, not a disaster-recovery function
- Copies can be split
  - Either copy can continue as the production copy
- Either or both copies may be thin-provisioned
  - Can be used to convert a fully allocated volume to thin-provisioned (thick-to-thin migration)
  - May equally be used to convert a thin-provisioned volume to fully allocated (thin-to-thick migration)
- Mirrored volumes use twice the physical capacity of unmirrored volumes
  - The base virtualization licensed capacity must include the required physical capacity
- The user can configure the mirror write timeout for each mirrored volume
  - Priority on redundancy: wait until the write completes on both copies or finally times out; this has a performance impact, but the active copies are always synchronized
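Adding a second copy to an existing volume is a one-liner; a sketch assuming the SVC 6.x CLI and hypothetical names:

  # Mirror app_vol01 into a second pool (e.g. on another disk system)
  svctask addvdiskcopy -mdiskgrp Pool2 app_vol01

  # Check the synchronization status of both copies
  svcinfo lsvdiskcopy app_vol01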
IBM EasyTier
- What is Easy Tier?
  - A function that dynamically redistributes active data across multiple tiers of storage, based on workload characteristics: an automatic storage hierarchy
  - Hybrid storage pool with two tiers: solid-state drives and hard disk drives
  - The I/O Monitor records an access history for each virtualization extent (16 MiB to 2 GiB per extent) every 5 minutes
  - The Data Placement Adviser analyses the history every 24 hours
  - The Data Migration Planner then invokes data migration: hot extents are promoted, inactive extents demoted
  - The goal is to reduce response time
  - Users get automatic and semi-automatic extent-based placement and migration management (a pool-level CLI sketch follows)
- Why does it matter?
  - Solid-state storage has orders-of-magnitude better throughput and response time for random reads
  - Allocating full volumes to SSD benefits only a small number of volumes, portions of volumes, and use cases
  - Dynamically moving the hottest extents to the highest-performance storage lets a small number of SSDs benefit the entire infrastructure
  - Works with thin-provisioned volumes
[Diagram: hot spots transparently relocated between the HDD and SSD tiers for optimized performance and throughput.]
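Easy Tier is controlled per storage pool; a minimal sketch, assuming the SVC 6.x CLI and a hypothetical hybrid pool that already contains both SSD and HDD managed disks:

  # Turn Easy Tier on for the pool and verify its status
  svctask chmdiskgrp -easytier on Pool1
  svcinfo lsmdiskgrp Pool1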
Thin-provisioning
- Traditional ("fully allocated") virtual disks consume physical disk capacity for the entire capacity of the virtual disk, even if it is not used
  - Pre-allocated space is reserved whether the application uses it or not
- With thin provisioning, SVC allocates and uses physical disk capacity only when data is written (see the CLI sketch after this list)
  - Applications can grow dynamically, but consume only the space they are actually using
- Available at no additional charge with the base virtualization license
- Supports all hosts supported with traditional volumes, and all advanced features (EasyTier, FlashCopy, etc.)
- Reclaiming unused disk space
  - When Volume Mirroring copies from a fully allocated volume to a thin-provisioned volume, SVC does not copy blocks that are all zeros
  - When processing a write request, SVC detects whether all zeros are being written and does not allocate disk space for such requests on thin-provisioned volumes; this helps avoid space-utilization concerns when formatting volumes
  - Zero detection works at grain level (32/64/128/256 KiB): if a grain contains all zeros, it is not written
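A minimal sketch of creating a thin-provisioned volume, assuming the SVC 6.x CLI and hypothetical names:

  # 100 GiB virtual capacity, 2% real capacity up front,
  # auto-expanding as data is written, 256 KiB grain size
  svctask mkvdisk -mdiskgrp Pool1 -iogrp 0 -size 100 -unit gb \
    -rsize 2% -autoexpand -grainsize 256 -name thin_vol01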
Copy Services
Business Continuity with SVC
Traditional SAN
- Replication APIs differ by vendor
- The replication destination must be the same type as the source
- Different multipath drivers for each array
- Lower-cost disks offer primitive, or no, replication services
With SAN Volume Controller
- A common replication API, SAN-wide, that does not change as storage hardware changes
- A common multipath driver for all arrays
- Replication targets can be on lower-cost disks, reducing the overall cost of exploiting replication services
[Diagram: vendor-specific replication (FlashCopy and Metro/Global Mirror between IBM DS5000 systems; TimeFinder/SRDF between EMC CLARiiON systems) versus SVC replicating across IBM DS5000, IBM Storwize V7000, EMC CLARiiON, HDS AMS, and HP EVA.]
Copy Services with SVC
- Volume Mirroring: two synchronized copies of a volume
  - Two close sites (<10 km)
  - Warning: there is no consistency group
- FlashCopy: point-in-time copy
  - Two close sites (<10 km)
  - Warning: this is not real-time replication
- Metro Mirror: synchronous mirror, "outside the box"
  - Two close sites (<300 km)
  - Write I/O response time is doubled, plus distance latency
  - No data loss
  - Warning: production performance impact if inter-site links are unavailable, during microcode upgrades, etc.
- Global Mirror: consistent asynchronous mirror, "outside the box"
  - Two remote sites (>300 km)
  - Limited impact on write I/O response time
  - Data loss (asynchronous replication)
  - All write I/Os are sent to the remote site in the same order they were received on the source volumes
  - Only one source and one target volume per relationship
Source and target can have different characteristics and be from different vendors; source and target can also be in the same cluster.
Multi-cluster mirroring is "any-to-any" among up to 4 SVC clusters (Datacenter 1 through Datacenter 4). A Remote Copy sketch follows.
[Diagram: volume Vol0 mirrored to Vol0' between SVC clusters, across managed and legacy storage.]
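A minimal Remote Copy sketch, assuming the SVC 6.x CLI, an already configured cluster partnership, and hypothetical names:

  # Synchronous Metro Mirror relationship to a partner cluster
  # (add -global for Global Mirror instead)
  svctask mkrcrelationship -master app_vol01 -aux app_vol01_dr -cluster svc_dr -name rcrel0

  # Start the initial synchronization
  svctask startrcrelationship rcrel0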
SVC split cluster solution
SVC split cluster - symmetric disk mirroring
High availability and protection for virtual machines: one storage system, two locations.
- max. 100 km recommended, max. 300 km supported
- Appliance functionality, not software-based, no extra license
[Diagram: VMs on hosts at both sites; SVC node A presents LUN1, SVC node B presents the mirrored LUN1'.]
SVC split cluster & VDM – Connectivity
Below 10 km, using passive DWDM
- You should always have 2 SAN fabrics (A and B), and 2 switches per fabric (one on each site)
  - The diagram shows connectivity to a single fabric only; in reality connectivity is to a redundant SAN fabric, so everything should be doubled
- Best practice is to connect each SVC node to SAN fabric A switches 1 and 2, as well as to SAN fabric B switches 1 and 2
  - Connecting all SVC nodes only to switch 1 in fabric A and switch 2 in fabric B is supported, but not recommended
- You should always connect each SVC node in a cluster to the same SAN switches
- To avoid fabric re-initialization after link hiccups on the ISL, consider creating a virtual SAN fabric on each site and using inter-VSAN routing
[Diagram: an I/O group split across production rooms A and B over long-wave or short-wave links and an ISL; storage pools 1 and 2 hold candidate quorums, and pool 3 in production room C holds the primary quorum.]
SVC split cluster & VDM – Connectivity
Up to 300 km, using active DWDM (enhanced!)
- You should always have 2 SAN fabrics (A and B), with at least:
  - 2 switches per fabric (1 per site) when using Cisco VSANs or Brocade virtual fabrics to isolate the private and public SANs
  - 4 switches per fabric (2 per site) when the private and public SANs are on physically dedicated switches
- Dedicated ISLs/trunks carry the SVC inter-node traffic; a Brocade virtual fabric or a Cisco VSAN can be used to isolate the public and private SANs
- The diagram shows connectivity to a single fabric A only; in reality connectivity is to a redundant SAN fabric, so everything is doubled with connections to the B switches as well
[Diagram: the I/O group spans production rooms A and B through public SANs A/A' and private SANs A/A'; pools 1 and 2 hold candidate quorums, pool 3 in production room C holds the primary quorum.]
HA / Disaster Recovery with SVC Split Cluster
2-site split cluster (high availability)
- Improve availability, load-balance, and deliver real-time remote data access by distributing applications and their data across multiple sites
- Seamless server/storage failover when used in conjunction with server or hypervisor clustering (such as VMware or PowerVM)
- Up to 300 km between sites (3x EMC VPLEX)
[Diagram: server clusters 1 and 2 in data centers 1 and 2 share a stretched virtual volume, with failover across up to 300 km.]
4-site disaster recovery (high availability + disaster recovery)
- For combined high-availability and disaster-recovery needs, synchronously or asynchronously mirror data over long distances between two high-availability stretched clusters
[Diagram: two stretched virtual volumes, each spanning its own pair of data centers, linked by Metro or Global Mirror.]
SVC Split Cluster Considerations
- The same code is used for all inter-node communication
  - Clustering
  - Write-cache mirroring
  - Global Mirror and Metro Mirror
- Advantages
  - No manual intervention required
  - Automatic and fast handling of storage failures
  - Volumes mirrored in both locations
  - Transparent to servers and host-based clusters
  - A perfect fit in a virtualized environment (e.g. VMware vMotion, AIX Live Partition Mobility)
- Disadvantages
  - A mix between an HA and a DR solution, but not a true DR solution
  - Non-trivial implementation; involve IBM Services
Storwize V7000: mini SVC with disks
V7000 – the iPod of midrange storage
- Based on a "mini" SVC
- Delegated complexity: "auto-optimizing"
- Easy Tier ✓, SSD-enabled ✓, thin provisioning ✓, non-IBM expansion ✓, auto-migration ✓
Compatibility
SVC 6.4 Supported Environments
Hosts (up to 1024):
- IBM AIX and IBM i 6.1 (VIOS) on Power7, IBM z/VSE, IBM BladeCenter, IBM TS7650G
- VMware vSphere (with VAAI), Microsoft Windows and Hyper-V, Citrix XenServer
- Linux on Intel, Power, and System z (RHEL, SUSE 11), Sun Solaris, HP-UX 11i, Tru64, OpenVMS, SGI IRIX, Novell NetWare 4.1/5, Apple Mac OS
Functions:
- Point-in-time copy: full volume or copy-on-write, 256 targets, incremental, cascaded, reverse, space-efficient, FlashCopy Manager
- Continuous copy: Metro/Global Mirror, multiple-cluster mirror
- Easy Tier, space-efficient virtual disks, virtual disk mirroring
- Native iSCSI (1 or 10 Gigabit), 8 Gbps SAN fabric
Storage virtualized behind the IBM System Storage SAN Volume Controller:
- IBM: DS3400, DS3500, DS4000, DS5020, DS3950, DS6000, DS8000, DS8800, DCS9550, DCS9900, XIV, Storwize V7000, N series
- HP: StorageWorks MA/EMA, MSA 2000, EVA 6400/8400, XP, P9500, 3PAR
- EMC: CLARiiON CX4-960, Symmetrix, VMAX, VNX
- Hitachi: Lightning, Thunder, TagmaStore, AMS 2100/2300/2500, WMS, USP, USP-V, Virtual Storage Platform (VSP)
- Others: Sun StorageTek, NetApp FAS, NEC iStorage, Fujitsu Eternus (DX60, DX80, DX90, DX410, DX8100, DX8300, DX9700, 8000 models 2000 & 1200), Pillar Axiom, TMS RamSan 620, Compellent Series 20, Bull Storeway (4000 models 600 & 400, 3000)
Virtual Storage Platform
Management
Tivoli Storage Productivity Center - TPC
What you need to manage
- Servers: ESX servers; applications, databases, and file systems; volume managers; host bus adapters; virtual HBAs; multipath drivers
- Storage networks: switches and directors; virtual devices
- Storage: multi-vendor storage; storage array provisioning; virtualization / volume mapping; block + NAS, VMFS; tape libraries
TPC can help. Start here: TPC 5.1
- Single management console for heterogeneous storage
- Health monitoring, capacity management, provisioning, fabric management
- FlashCopy support
- Storage system performance management, SAN fabric performance management, trend analysis
- DR and business continuity, applications and storage, hypervisor (ESX, VIO), HyperSwap management
...and mature: IBM SmartCloud Virtual Storage Center – all this and more
- Advanced SAN planning and provisioning based on best practices
- Proactive configuration change management
- Performance optimization and tiering optimization
- Complete SAN fabric performance management
- Storage virtualization
- Application-aware FlashCopy management
- Replication: FlashCopy, Metro Mirror, Metro Global Mirror
TPC 5.1 Highlights
- Fully integrated, web-based GUI
  - Based on the Storwize/XIV success
- TCR/Cognos-based reporting and analytics
- Enhanced management for virtual environments
- Integrated installer
- Simplified packaging
Enhanced management for virtual environments
Virtual machines clustered across hosts
- Helps avoid double-counting storage capacity in TPC reporting on VMware
- Associates storage not only with individual VMs and hypervisors, but also with the clusters
- vMotion awareness
[Diagram: Tivoli Storage Productivity Center observing VMs clustered across hypervisors on shared SAN storage.]
Enhanced management for virtual environments
Web-based GUI: hypervisor-related storage
[Screenshot of the web-based GUI]
Integrated Infrastructure System
"Cloud Ready"
IBM PureSystems
Infrastructure & cloud
- Integrated infrastructure system
- Factory integration of compute, storage, networking, and management
- Broad support for x86 and POWER environments
- Cloud-ready infrastructure
Application & cloud
- Integrated application platform
- Factory integration of infrastructure plus middleware (DB2, WebSphere)
- Application-ready (Power or x86, with workload deployment capability)
- Cloud-ready application platform
PureFlex System is Integrated by design
- Expert integrated systems: tightly integrated compute, storage, networking, software, management, and security
- Flexible and open choice in a fully integrated system
[Diagram: storage, virtualization, compute, security, networking, tools, applications, and management combined in one system.]
IBM PureSystems
What's inside? An evolution in design, a revolution in experience
- IBM Flex System chassis: 14 half-wide bays for nodes
- Compute nodes: Power 2S/4S*, x86 2S/4S
- Storage node: V7000, with expansion inside or outside the chassis
- Management appliance
- Networking: 10/40 GbE, FCoE, IB; 8/16 Gb FC
- Expansion: PCIe, storage
Two offerings built on this design:
- IBM PureFlex System: pre-configured, pre-integrated infrastructure systems with compute, storage, networking, physical and virtual management, and entry cloud management, with integrated expertise
- IBM PureApplication System: pre-configured, pre-integrated platform systems with middleware, designed for transactional web applications and enabled for cloud, with integrated expertise
Summary
Why consider Storage Virtualization?
1. Missing storage "hypervisor" for virtualized servers
2. Too much physical migration effort
3. Compatibility chaos (multipathing, HBA firmware, ...)
4. Need for transparent campus failover, like Unix LVM
5. Need for automatic hot-spot elimination ("Easy Tier")
6. Unhappy with storage performance
What SVC delivers:
- Simplified administration, including copy services: one and the same process everywhere
- Greatly enhanced online re-planning flexibility: "cloud ready"
- Storage effectiveness (ongoing optimization) can be maintained over time
- Move applications up one tier as required, or down one tier when stale
- Move from performance design "in hardware" to QoS policy management
Internet Resources
- Information Center: http://publib.boulder.ibm.com/infocenter/svc/ic/index.jsp
- SVC Support Matrix: http://www-03.ibm.com/systems/storage/software/virtualization/svc/interop.html
- SVC / Storwize V7000 Documentation: http://publib.boulder.ibm.com/infocenter/svc/ic/index.jsp
Thank you!