IBM System Storage SVC


Danijel Paulin, danijel.paulin@hr.ibm.com

Systems Architect, SEE

IBM Croatia

IBM Storage Virtualization

Cloud enabling technology

11th TF-Storage Meeting, 26-27 September 2012, Dubrovnik, Croatia

© 2012 IBM Corporation

Agenda

• Introduction
• Virtualization – function and benefits
• IBM Storage Virtualization
• Virtualization Appliance SAN Volume Controller
• Virtual Storage Platform Management
• Integrated Infrastructure System – "Cloud Ready"
• Summary

Smarter Computing

New approach in designing IT Infrastructures

Smarter Computing is realized through an IT infrastructure that is designed for data, tuned to the task, and managed in the cloud...

Diagram: greater storage efficiency and higher utilization, workload system tuning, and virtualization for increased flexibility – the foundation for cloud and better economics.

Building a cloud starts with virtualizing your IT environment


The journey to the cloud begins with virtualization!

Virtualize – server, storage & network devices to increase utilization

Provision & Secure – automate provisioning of resources

Monitor & Manage – provide visibility of the performance of virtual machines

Orchestrate Workflow – manage the process for approval of usage

Meter & Rate – track usage of resources


IBM Virtualization Offerings

Server virtualization
• System p, System i, System z LPARs, VMware ESX, IBM Smart Business Desktop Cloud
• Virtually consolidate workloads on servers

File and file system virtualization
• Scale Out NAS (SoNAS), DFSMS, IBM General Parallel File System, N series
• Virtually consolidate files in one namespace across servers

Storage virtualization
• SAN Volume Controller (the Storage Hypervisor), ProtecTIER
• Industry-leading storage virtualization solutions

Server and storage infrastructure management
• Data protection with Tivoli Storage Manager and TSM FastBack
• Advanced management of virtual environments with TPC, IBM Director VMControl, TADDM, ITM, TPM
• Consolidated management of virtual and physical storage resources

IBM Storage Cloud solutions
• Smart Business Storage Cloud (SoNAS), IBM SmartCloud Managed Backup
• Virtualization and automation of storage capacity, data protection, and other storage services


Virtualization – functions and benefits

Sharing: multiple virtual resources share one set of physical resources
Examples: LPARs, VMs, virtual disks, VLANs
Benefits: resource utilization, workload management, agility, energy efficiency

Aggregation: one virtual resource aggregates multiple physical resources
Examples: virtual disks, system pools
Benefits: management simplification, investment protection, scalability

Emulation: virtual resources of type Y emulated on physical resources of type X
Examples: architecture emulators, iSCSI, FCoE, virtual tape
Benefits: compatibility, software investment protection, interoperability, flexibility

Insulation: physical resources can be added, replaced, or changed without changing the virtual resources
Examples: compatibility modes, CUoD, appliances
Benefits: agility, investment protection, complexity & change hiding


What is Storage Virtualization?

Virtualization is technology that makes one set of resources look and feel like another set of resources – a logical representation of physical resources that:

– Hides some of the complexity
– Adds or integrates new function with existing services
– Can be nested or applied to multiple layers of a system


What distinguishes a Storage Cloud from Traditional IT?

1. Storage resources are virtualized from multiple arrays, vendors, and datacenters – pooled together and accessed anywhere (as opposed to physical array-boundary limitations).

2. Storage services are standardized – selected from a storage service catalog (as opposed to customized configuration).

3. Storage provisioning is self-service – administrators use automation to allocate capacity from the catalog (as opposed to manual component-level provisioning).

4. Storage usage is paid per use – end users are aware of the impact of their consumption and service levels (as opposed to being paid from a central IT budget).


IBM Storage Virtualization


Today's SAN

SAN-attached disks look like local disks to the OS and applications.


SAN – with Virtualization

With a virtualization layer in the SAN, virtual disks start as images of the migrated non-virtual disks; striping, thin provisioning, and other functions can be applied later.


Become truly flexible !

With the virtualization layer in place, virtual disks remain constant while the physical infrastructure changes.


Enable tiered Storage !

With the virtualization layer, moving virtual disks between storage tiers requires no downtime.


Avoid planned Downtime !

The virtualization layer itself can be upgraded or replaced with no downtime.


In-band Storage Virtualization - Benefits

Isolation
1. Flat interoperability matrix
2. Non-disruptive migrations
3. No-cost multipathing

Pooling
1. Higher (pool) utilization
2. Cross-pool striping: IOPS
3. Thin provisioning: free GB

Performance (cache + SSD)
1. Performance increase
2. Hot-spot elimination
3. Adds SSD to old gear

Mirroring
1. License economies
2. Cross-vendor mirroring
3. Favorable TCO


Migration into Storage Virtualization (and back!)

During migration into the SAN virtualization layer, virtual disks run in transparent image mode before being converted to fully striped mode. This also works in reverse (no vendor lock-in).


Redundant SAN !

Diagram: redundant SAN fabrics (SAN A and SAN B) with the virtualization layer attached to both.


Virtualization Appliance

SAN Volume Controller


Storage Hypervisor

Diagram: a virtual server infrastructure running on a virtual storage infrastructure (SAN Volume Controller).

• Virtual Storage Platform – SAN Volume Controller
  – Common device driver – iSCSI or FC host attach
  – Common capabilities
    • I/O caching and cross-site cache coherency
    • Thin provisioning
    • Easy Tier automated tiering to solid-state drives
    • Snapshot (FlashCopy)
    • Mirroring (synchronous and asynchronous)
  – Data mobility
    • Transparent data migration among arrays and across tiers
    • Snapshot and mirroring across arrays and tiers

• Virtual Storage Platform Management – Tivoli Storage Productivity Center
  – Manageability
    • Integrated SAN-wide management with Tivoli Storage Productivity Center
    • Integrated IBM server and storage management (Systems Director Storage Control)
  – Replication
    • Application-integrated FlashCopy
    • DR automation
  – High availability
    • Stretched-cluster HA


Virtualization Appliance : SAN Volume Controller

• Stand-alone product
• Clustered: 2 to 8 nodes
• SVC comes with the write cache mirrored in node pairs (I/O groups)
• Multi-use Fibre Channel in and out
• Linux boot, 100% IBM software stack

TCA (total cost of acquisition):
1. Hardware
2. Per-TB virtualization license (tiered)
3. Per-TB mirroring license


6th Generation

• Continuous development
• Firmware is backwards compatible (64-bit firmware is not for 32-bit hardware)
• Nodes can be replaced while online

Current model: SAN Volume Controller CG8 with firmware v6.4. Models since the initial release:
• SVC 4F2 – 4 GB cache, 2 Gb SAN (Rel. 3 / 2006)
• SVC 8F2 – 8 GB cache, 2 Gb SAN (RoHS compliant)
• SVC 8F4 – 8 GB cache, 4 Gb SAN – 155,000 SPC-1 IOPS
• SVC 8G4 – added dual-core processor – 272,500 SPC-1 IOPS
• SVC CF8 – 24 GB cache, quad-core – 380,483 SPC-1 IOPS (6-node)
• SVC CG8 – added 10 GbE – approx. 640,000 SPC-1-like IOPS


SVC Model & Code Release History

• 1999 – Almaden Research group publishes ComPaSS clustering
• 2000 – SVC 'Lodestone' development begins using ComPaSS
• 2003 – SVC 1.1 – 4F2 hardware, 4-node support
• 2004 – SVC 1.2 – 8-node support
• 2004 – SVC 2.1 – 8F2 hardware
• 2005 – SVC 3.1 – 8F4 hardware
• 2006 – SVC 4.1 – Global Mirror, MTFC
• 2007 – SVC 4.2 – 8G4 hardware, FlashCopy enhancements
• 2008 – SVC 4.3 – thin provisioning, VDisk mirroring, 8A4 hardware
• 2009 – SVC 5.1 – CF8 hardware, SSD support, 4-site
• 2010 – SVC 6.1 – V7000 hardware, RAID, Easy Tier
• 2011 – SVC 6.2/6.3 – V7000U, 10 Gb iSCSI, extended-distance split cluster
• 2012 – SVC 6.4 – IBM Real-time Compression, FCoE, volume mobility...


SVC 2145-CG8 – Virtualization Appliance

• Based on the IBM System x3550 M3 server (1U)
  – Intel Xeon 5600 (Westmere) 2.53 GHz quad-core processor
• 24 GB of cache
  – Up to 192 GB of cache per SVC cluster
• Four 8 Gbps FC ports (short-wave and long-wave SFPs supported)
  – Up to 32 FC ports per SVC cluster
  – Used for external storage, and/or server attachment, and/or Remote Copy/Mirroring
• Two 1 Gbps iSCSI ports
  – Up to 16 GbE ports per SVC cluster
• Optional 1 to 4 solid-state drives
  – Up to 32 SSDs per SVC cluster
• Optional two 10 Gbps iSCSI/FCoE ports
• New engines may be intermixed in pairs with other engines in SVC clusters
  – Mixing engine types in a cluster results in the volume throughput characteristics of the engine type in that I/O group
• The non-disruptive cluster upgrade capability may be used to replace older engines with new CG8 engines


IBM SAN Volume Controller Architecture

Diagram: hosts with a consistent driver stack access virtual disks (striped mode here) presented by an I/O group – a pair of SVC nodes, each backed by a UPS (not depicted). The SVC cluster groups managed disks, built from array LUNs, into storage pools.

IBM SAN Volume Controller – Topology

Diagram: SVC cluster topology.

Virtual-Disk Types

Diagram: virtual disks A, B, and C mapped onto managed disk groups MDG1–MDG3.

Image mode: pass-through; virtual disk = physical LUN

Sequential mode: virtual disk mapped sequentially to a portion of a managed disk

Striped mode: virtual disk striped across multiple managed disks (the preferred mode)
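To make the striped-mode idea concrete, here is a minimal Python sketch of how a virtual disk's extents might be placed round-robin across the managed disks of one pool. The extent size, names, and data structures are illustrative assumptions, not SVC internals.

```python
# Illustrative sketch only: round-robin extent placement for a striped-mode
# virtual disk across the managed disks (MDisks) of one storage pool.
# The fixed extent size and the names are assumptions, not SVC internals.

EXTENT_SIZE_MIB = 256  # assumed extent size for this example

def build_striped_vdisk(vdisk_size_mib, mdisk_ids):
    """Return a mapping of virtual extent number -> (mdisk id, mdisk extent)."""
    num_extents = -(-vdisk_size_mib // EXTENT_SIZE_MIB)  # ceiling division
    next_free = {m: 0 for m in mdisk_ids}                # next free extent per MDisk
    mapping = {}
    for virtual_extent in range(num_extents):
        mdisk = mdisk_ids[virtual_extent % len(mdisk_ids)]  # round-robin striping
        mapping[virtual_extent] = (mdisk, next_free[mdisk])
        next_free[mdisk] += 1
    return mapping

# Example: a 1 GiB volume striped across three MDisks
print(build_striped_vdisk(1024, ["mdisk0", "mdisk1", "mdisk2"]))
```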


IBM SAN Volume Controller

I/O Stack

• SVC software has a modular design
  – 100% in-house code path
• Each function is implemented as an independent component
  – Components are bypassed if not in use for a given volume
• Standard interface between components
  – Easy to add or remove components
• Components exploit a rich set of libraries and frameworks
  – Minimal Linux base OS to bootstrap and hand control to user space
  – Custom memory management & thread scheduling
  – Optimal I/O code path (on the order of 60 µs through the stack)
  – Clustered "support" processes such as the GUI, slpd, CIMOM, Easy Tier

I/O stack components (top to bottom): SCSI front end, Remote Copy, Cache, FlashCopy, Mirroring, Space Efficient, Virtualization, RAID, Easy Tier
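The layered, bypassable component design can be illustrated with a small sketch: each layer exposes the same submit interface and passes the request to the next layer down. The component names follow the slide; the code itself is an illustrative assumption, not SVC's implementation.

```python
# Illustrative sketch only: a modular I/O stack where each layer exposes the
# same interface, optionally does work, and forwards the request downwards.

class Component:
    def __init__(self, name, enabled=True):
        self.name, self.enabled, self.next = name, enabled, None

    def submit(self, io):
        if self.enabled:
            io.setdefault("path", []).append(self.name)  # pretend to do work
        if self.next:                                     # pass down the stack
            self.next.submit(io)

def build_stack(names):
    components = [Component(n) for n in names]
    for upper, lower in zip(components, components[1:]):
        upper.next = lower
    return components[0]

stack = build_stack(["SCSI front end", "Remote Copy", "Cache", "FlashCopy",
                     "Mirroring", "Space Efficient", "Virtualization",
                     "RAID", "Easy Tier"])
io = {"op": "write", "lba": 4096}
stack.submit(io)
print(io["path"])  # order in which the layers saw the request
```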


IBM SAN Volume Controller Management Options

SVC GUI
• Completely redesigned, browser based
• Extremely easy to learn and fast to use

SVC CLI
• ssh access
• Scripting
• Complete command set

Tivoli Storage Productivity Center
• TPC, TPC-R

SMI-S 1.3
• Embedded CIMOM

VDS & VSS providers, vCenter plug-in, Systems Director Storage Control
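Because the CLI is reachable over ssh with a complete command set, it is easy to script. The sketch below assumes a hypothetical cluster address and that a concise, delimiter-separated `lsvdisk -delim :` listing is available; adapt the command and column names to your environment.

```python
# Illustrative sketch only: driving the SVC CLI over ssh from a script.
# Host name, user, and the exact listing command are assumptions.

import subprocess

def svc_cli(command, host="svc-cluster.example.com", user="admin"):
    """Run one SVC CLI command over ssh and return its stdout as text."""
    result = subprocess.run(
        ["ssh", f"{user}@{host}", command],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

# List volumes with a parseable delimiter and print name + capacity columns.
output = svc_cli("lsvdisk -delim :")
lines = output.strip().splitlines()
header = lines[0].split(":")
name_idx, cap_idx = header.index("name"), header.index("capacity")
for line in lines[1:]:
    fields = line.split(":")
    print(fields[name_idx], fields[cap_idx])
```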


SAN Volume Controller Features


SAN Volume Controller Features - summary

• FlashCopy point-in-time copy (optional) – a copy-on-write sketch follows this list
  – Full (with background copy = clone) or partial (no background copy)
  – Space-efficient, incremental, cascaded, and reverse mappings with consistency groups
  – Up to 256 targets per source; a FlashCopy target may itself be a Remote Copy source
  – Example cascade: Vol1 is a FlashCopy target of Vol0; Vol2 and Vol3 are targets of Vol1; Vol4 is a target of Vol3
• Cache partitioning
• Embedded SMI-S agent
• Easy-to-use GUI with built-in real-time performance monitoring
• E-mail, SNMP trap & syslog error event logging
• Authentication service for single sign-on & LDAP
• Virtualise data without data loss
• Expand or shrink volumes online
• Thin-provisioned volumes
  – Reclaim zero-write space
  – Thick-to-thin, thin-to-thick & thin-to-thin migration
• Online volume migration between MDisks
• Volume mirroring: two copies of one volume on different MDisks
• Easy Tier: automatic relocation of hot and cold extents between SSDs and HDDs for optimized performance and throughput
• Microsoft Virtual Disk Service & Volume Shadow Copy Services hardware provider
• Remote Copy (optional)
  – Synchronous & asynchronous remote replication (Metro Mirror or Global Mirror relationships) with consistency groups, including replication from several clusters to a consolidated DR site
• VMware
  – Storage Replication Adapter for Site Recovery Manager
  – VAAI support & vCenter Server management plug-in
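Here is a hedged sketch of the grain-level copy-on-write mechanism behind such point-in-time copies: before the first host write to a grain of the source, the original grain is preserved so the target still presents the point-in-time image. The grain size and data structures are assumptions for the example, not SVC's implementation.

```python
# Illustrative sketch only: grain-level copy-on-write for a point-in-time copy.

GRAIN = 256 * 1024  # assumed bytes per grain for this example

class FlashCopyMapping:
    def __init__(self, source):
        self.source = source          # bytearray-like source volume
        self.grains = {}              # grain number -> preserved original data

    def _preserve(self, grain_no):
        if grain_no not in self.grains:              # first write to this grain
            start = grain_no * GRAIN
            self.grains[grain_no] = bytes(self.source[start:start + GRAIN])

    def host_write(self, offset, data):
        """Intercept a host write to the source after the snapshot was taken."""
        first, last = offset // GRAIN, (offset + len(data) - 1) // GRAIN
        for g in range(first, last + 1):
            self._preserve(g)                        # copy-on-write
        self.source[offset:offset + len(data)] = data

    def read_target(self, offset, length):
        """Read the point-in-time image: preserved grains, else the source."""
        out = bytearray()
        while length:
            g, within = divmod(offset, GRAIN)
            chunk = min(length, GRAIN - within)
            base = self.grains.get(g)
            if base is None:
                out += self.source[offset:offset + chunk]
            else:
                out += base[within:within + chunk]
            offset, length = offset + chunk, length - chunk
        return bytes(out)

vol = bytearray(b"ABCD" * (GRAIN // 4))        # a one-grain "volume"
snap = FlashCopyMapping(vol)
snap.host_write(0, b"XXXX")                    # source changes after the snapshot
print(vol[:4], snap.read_target(0, 4))         # b'XXXX' vs the preserved b'ABCD'
```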

Volume Mirroring

Back-end high availability & migration

• SVC stores two copies of a volume (copy 0 and copy 1)
  – It keeps both copies in sync, reads from the primary copy, and writes to both copies
• If the disk supporting one copy fails, SVC provides continuous data access by using the other copy
  – Copies are automatically resynchronized after repair
• Intended to protect critical data against failure of a disk system or disk array
  – A local high-availability function, not a disaster-recovery function
• Copies can be split
  – Either copy can continue as the production copy
• Either or both copies may be thin-provisioned
  – Can be used to convert a fully allocated volume to thin-provisioned (thick-to-thin migration)
  – Can be used to convert a thin-provisioned volume to fully allocated (thin-to-thick migration)
• Mirrored volumes use twice the physical capacity of un-mirrored volumes
  – The base virtualisation licensed capacity must include the required physical capacity
• The user can configure the write timeout for each mirrored volume
  – Priority on redundancy: wait until the write completes on both copies or finally times out
  – Performance impact, but active copies are always synchronized
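A minimal sketch of this behaviour, assuming reads are served from the primary copy, writes go to both copies, and writes missed by an offline copy are tracked for later resynchronization. It illustrates the concept only and is not SVC's implementation.

```python
# Illustrative sketch only: read/write behaviour of a mirrored volume.

class Copy:
    def __init__(self):
        self.online = True
        self.blocks = {}              # lba -> data

class MirroredVolume:
    def __init__(self):
        self.copies = [Copy(), Copy()]
        self.primary = 0
        self.out_of_sync = set()

    def write(self, lba, data):
        for i, c in enumerate(self.copies):
            if c.online:
                c.blocks[lba] = data          # writes go to both copies
            else:
                self.out_of_sync.add(lba)     # track work for resynchronization
                self.primary = 1 - i          # keep serving from the good copy

    def read(self, lba):
        return self.copies[self.primary].blocks.get(lba)

    def resynchronize(self):
        good = self.copies[self.primary].blocks
        stale = self.copies[1 - self.primary].blocks
        for lba in self.out_of_sync:
            stale[lba] = good[lba]
        self.out_of_sync.clear()

# Example: one copy goes offline, writes continue, then the copy is repaired.
vol = MirroredVolume()
vol.write(0, b"hello")
vol.copies[1].online = False
vol.write(1, b"world")            # only copy 0 receives this write
vol.copies[1].online = True
vol.resynchronize()               # copy 1 catches up after repair
print(vol.read(1))
```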

IBM Easy Tier

Transparent reorganization of hot spots for optimized performance and throughput.

• What is Easy Tier?
  A function that dynamically redistributes active data across multiple tiers of storage based on workload characteristics.
  – Automatic storage hierarchy: a hybrid storage pool with two tiers, solid-state drives and hard disk drives
  – The I/O Monitor keeps an access history for each virtualisation extent (16 MiB to 2 GiB per extent) every 5 minutes
  – The Data Placement Adviser analyses the history every 24 hours
  – The Data Migration Planner invokes data migration to promote hot extents or demote inactive extents
  – The goal is to reduce response time
  – Users have automatic and semi-automatic extent-based placement and migration management

• Why does it matter?
  – Solid-state storage has orders of magnitude better throughput and response time for random reads
  – Allocating whole volumes to SSD only benefits a small number of volumes, portions of volumes, and use cases
  – Dynamically moving the hottest extents to the highest-performance storage lets a small number of SSDs benefit the entire infrastructure
  – Works with thin-provisioned volumes
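A toy sketch of the promote/demote decision, assuming a per-extent I/O count collected over the monitoring window and a fixed number of SSD extents. The names and the selection rule are assumptions for illustration, not Easy Tier's real placement algorithm.

```python
# Illustrative sketch only: promote the hottest extents to SSD, demote the rest.

def plan_migrations(io_counts, placement, ssd_capacity_extents):
    """io_counts: extent -> I/Os in the monitoring window.
    placement: extent -> "ssd" or "hdd".
    Returns a list of (extent, target_tier) migrations."""
    hottest_first = sorted(io_counts, key=io_counts.get, reverse=True)
    should_be_on_ssd = set(hottest_first[:ssd_capacity_extents])
    migrations = []
    for extent in hottest_first:
        if extent in should_be_on_ssd and placement[extent] == "hdd":
            migrations.append((extent, "ssd"))    # promote a hot extent
        elif extent not in should_be_on_ssd and placement[extent] == "ssd":
            migrations.append((extent, "hdd"))    # demote a cooled-down extent
    return migrations

counts = {"e0": 5000, "e1": 20, "e2": 900, "e3": 3}
where = {"e0": "hdd", "e1": "ssd", "e2": "hdd", "e3": "hdd"}
print(plan_migrations(counts, where, ssd_capacity_extents=2))
# -> promote e0 and e2 to SSD, demote e1 to HDD
```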

Thin-provisioning

• Traditional ("fully allocated") virtual disks use physical disk capacity for the entire capacity of a virtual disk, even if it is not used
• With thin provisioning, SVC allocates and uses physical disk capacity only when data is written
  – Dynamic growth: without thin provisioning, pre-allocated space is reserved whether the application uses it or not; with thin provisioning, applications can grow dynamically but only consume the space they are actually using
• Available at no additional charge with the base virtualisation license
• Supports all hosts supported with traditional volumes and all advanced features (Easy Tier, FlashCopy, etc.)
• Reclaiming unused disk space
  – When using volume mirroring to copy from a fully allocated volume to a thin-provisioned volume, SVC does not copy blocks that are all zeroes
  – When processing a write request, SVC detects whether all zeroes are being written and does not allocate disk space for such requests on thin-provisioned volumes
    ● Helps avoid space-utilization concerns when formatting volumes
  – Done at grain level (32/64/128/256 KiB): if a grain contains all zeroes, it is not written
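A minimal sketch of grain-level allocation with zero-detection, under the assumption of grain-aligned writes: all-zero writes allocate nothing, so formatting a volume consumes no physical space. The grain size and structures are assumptions for the example.

```python
# Illustrative sketch only: thin provisioning with grain-level zero detection.

GRAIN = 64 * 1024  # assumed grain size in bytes

class ThinVolume:
    def __init__(self, virtual_size):
        self.virtual_size = virtual_size
        self.grains = {}                    # grain number -> allocated bytes

    def write(self, offset, data):
        """Allocate grains only for data that is not all zeroes (grain-aligned)."""
        grain_no = offset // GRAIN
        if data.count(0) == len(data):      # all-zero write: allocate nothing
            self.grains.pop(grain_no, None)
            return
        self.grains[grain_no] = bytes(data)

    def read(self, offset, length):
        grain_no = offset // GRAIN
        return self.grains.get(grain_no, bytes(GRAIN))[:length]  # unallocated reads as zeroes

    def used_capacity(self):
        return len(self.grains) * GRAIN

vol = ThinVolume(virtual_size=1 << 30)          # 1 GiB virtual capacity
vol.write(0, b"\x00" * GRAIN)                   # formatting write: nothing allocated
vol.write(GRAIN, b"data" + b"\x00" * (GRAIN - 4))
print(vol.used_capacity())                      # only one grain consumed
```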


Copy Services


Business Continuity with SVC

Traditional SAN
• Replication APIs differ by vendor
• The replication destination must be the same type of array as the source
• Different multipath drivers for each array
• Lower-cost disks offer primitive, or no, replication services

SAN Volume Controller
• Common replication API, SAN-wide, that does not change as storage hardware changes
• Common multipath driver for all arrays
• Replication targets can be on lower-cost disks, reducing the overall cost of exploiting replication services

Diagram: vendor-specific copy services (FlashCopy, TimeFinder, SRDF) work only between like arrays (IBM DS5000 to DS5000, EMC CLARiiON to CLARiiON), whereas SVC Metro/Global Mirror replicates between mixed arrays (IBM DS5000, Storwize V7000, EMC CLARiiON, HDS AMS, HP EVA).

Copy Services with SVC

Volume Mirroring
• Volume mirroring "outside the box"
• Two close sites (<10 km)
• Warning: there is no consistency group

FlashCopy
• Point-in-time copy "outside the box"
• Two close sites (<10 km)
• Warning: this is not real-time replication

Metro Mirror
• Synchronous mirror
  – Write I/O response time doubled, plus distance latency
  – No data loss
• Two close sites (<300 km)
• Warning: production performance impact if inter-site links are unavailable, during microcode upgrades, etc.

Global Mirror
• Consistent asynchronous mirror
  – Limited impact on write I/O response time
  – Some data loss is possible (asynchronous)
  – All write I/Os are sent to the remote site in the same order they were received on the source volumes
  – Only one source and one target volume per relationship
• Two remote sites (>300 km)

Source and target can have different characteristics and be from different vendors (managed or legacy storage); source and target can even be in the same cluster.
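The difference between Metro Mirror and Global Mirror can be sketched as follows: a synchronous relationship completes a host write only once the secondary has it, while an asynchronous relationship acknowledges immediately and ships writes later in arrival order, so the secondary stays consistent, if slightly stale. Names and structures are illustrative assumptions, not SVC code.

```python
# Illustrative sketch only: synchronous vs. ordered asynchronous replication.

from collections import deque

class Relationship:
    def __init__(self, synchronous=True):
        self.synchronous = synchronous
        self.primary, self.secondary = {}, {}
        self.pending = deque()                 # writes not yet applied remotely

    def host_write(self, lba, data):
        self.primary[lba] = data
        if self.synchronous:
            self.secondary[lba] = data         # completes only when both sides have it
        else:
            self.pending.append((lba, data))   # acknowledged now, shipped later

    def drain(self):
        """Asynchronous replication applies writes in the order they arrived,
        so the secondary is always a consistent (if slightly stale) image."""
        while self.pending:
            lba, data = self.pending.popleft()
            self.secondary[lba] = data

gm = Relationship(synchronous=False)
gm.host_write(0, b"A")
gm.host_write(1, b"B")
print(gm.secondary)        # {} – remote copy lags behind
gm.drain()
print(gm.secondary)        # now consistent with the primary
```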

Multicluster Mirroring "any-to-any" (up to 4 instances)

Diagram: four SVC clusters in four datacenters with any-to-any mirroring relationships.


SVC split cluster solution


SVC split cluster - symmetric disk mirroring

High availability and protection for virtual machines: one storage system, two locations.

Diagram: hosts running VMs at two sites; SVC 1 node A at one site and SVC 1 node B at the other mirror LUN1 to LUN1'.

• Maximum 100 km recommended, maximum 300 km supported
• Appliance functionality, not software-based, no separate license required


SVC split cluster & VDM – Connectivity

Below 10 km using passive DWDM

• You should always have two SAN fabrics (A & B) and two switches per SAN fabric (one on each site)
  – The diagram shows connectivity to a single fabric only; in reality connectivity is to a redundant SAN fabric, so everything should be doubled
• You should always connect each SVC node in a cluster to the same SAN switches
  – Ideally, connect each SVC node to SAN fabric A switches 1 & 2 as well as SAN fabric B switches 1 & 2
  – Connecting all SVC nodes only to switch 1 in SAN fabric A and to switch 2 in SAN fabric B is supported but not recommended
• To avoid fabric re-initialisation in case of link hiccups on the ISL, consider creating a virtual SAN fabric on each site and using inter-VSAN routing

Diagram: production rooms A and B each hold an I/O group node, a storage pool, and a candidate quorum disk; production room C holds the primary quorum (pool 3). Each site connects to SAN A switch 1 and SAN A' switch 2 over long-wave or short-wave links and an ISL.


SVC split cluster & VDM – Connectivity

Up to 300Km using active DWDM

Enhanced: a Brocade virtual fabric or a Cisco VSAN can be used to isolate the public and private SANs, with dedicated ISLs/trunks for SVC inter-node traffic.

Diagram: production rooms A and B each hold an I/O group node, a storage pool, and a candidate quorum disk; production room C holds the primary quorum. Public SAN A/A' and private SAN A/A' switches at each site are linked by ISLs/trunks over long-wave or short-wave connections.

You should always have two SAN fabrics (A & B) with at least:
• 2 switches per SAN fabric (1 per site) when using Cisco VSANs or Brocade virtual fabrics to isolate private and public SANs
• 4 switches per SAN fabric (2 per site) when private and public SANs are on physically dedicated switches
The diagram shows connectivity to a single fabric A only; in reality connectivity is to a redundant SAN fabric, so everything should be doubled, including connections to the B switches.

HA / Disaster Recovery with SVC Split Cluster

2-site split cluster (SVC stretched cluster)
• Improve availability, load balance, and deliver real-time remote data access by distributing applications and their data across multiple sites
• Seamless server/storage failover when used in conjunction with server or hypervisor clustering (such as VMware or PowerVM)
• Up to 300 km between sites (3x EMC VPLEX)
Diagram: server clusters 1 and 2 in data centers 1 and 2 fail over across a stretched virtual volume.

4-site disaster recovery
• For combined high availability and disaster recovery needs, synchronously or asynchronously mirror data over long distances (Metro or Global Mirror) between two high-availability stretched clusters
Diagram: each pair of data centers provides high availability locally, with disaster recovery mirroring between the two pairs.

SVC Split Cluster Considerations

• The same code is used for all inter-node communication
  – Clustering
  – Write cache mirroring
  – Global Mirror & Metro Mirror
• Advantages
  – No manual intervention required
  – Automatic and fast handling of storage failures
  – Volumes mirrored in both locations
  – Transparent for servers and host-based clusters
  – Perfect fit in a virtualized environment (VMware vMotion, AIX Live Partition Mobility)
• Disadvantages
  – A mix between an HA and a DR solution, but not a true DR solution
  – Non-trivial implementation – involve IBM Services
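As a rough illustration of why the quorum disks in the connectivity diagrams matter, here is a deliberately simplified toy model of tie-breaking when the inter-site link fails: the site that wins access to the active quorum disk keeps serving I/O and the other stops to avoid a split-brain. This is an assumption-level simplification, not SVC's actual algorithm.

```python
# Illustrative toy model only: quorum-disk tie-breaking in a split cluster.

def handle_split(site_a_nodes, site_b_nodes, quorum_winner):
    """quorum_winner: 'A' or 'B' – which site reserved the active quorum disk."""
    surviving = site_a_nodes if quorum_winner == "A" else site_b_nodes
    stopped = site_b_nodes if quorum_winner == "A" else site_a_nodes
    for node in stopped:
        node["serving_io"] = False       # lose the race: stop to protect data
    for node in surviving:
        node["serving_io"] = True        # win the race: continue service
    return surviving

site_a = [{"name": "node1", "serving_io": True}]
site_b = [{"name": "node2", "serving_io": True}]
print(handle_split(site_a, site_b, quorum_winner="A"))
```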


Storwize V7000 : mini SVC with disks


V7000 = the "iPod" of midrange storage
• Based on a "mini" SVC
• Delegated complexity, "auto optimizing"
• Easy Tier, SSD enabled, thin provisioning, non-IBM expansion, auto-migration


Compatibility


SVC 6.4 Supported Environments

Hosts (up to 1024 per cluster): IBM z/VSE, Novell NetWare, VMware vSphere 4.1 and 5 (VAAI), Microsoft Windows and Hyper-V, IBM Power7 with IBM AIX and IBM i 6.1 (VIOS), Sun Solaris, HP-UX 11i, Tru64, OpenVMS, SGI IRIX, Linux (Intel/Power/zLinux – RHEL, SUSE 11), IBM TS7650G, Apple Mac OS, Citrix XenServer, IBM BladeCenter.

Connectivity: 8 Gbps SAN fabric and native iSCSI (1 or 10 Gigabit).

Functions: point-in-time copy (full volume, copy-on-write, 256 targets, incremental, cascaded, reverse, space-efficient, FlashCopy Manager), continuous copy (Metro/Global Mirror, multiple cluster mirror), Easy Tier with SSDs, space-efficient virtual disks, virtual disk mirroring.

Supported storage systems include:
• IBM: DS3400, DS3500, DS4000, DS5020, DS3950, DS6000, DS8000, DS8800, XIV, Storwize V7000, N series, DCS9550, DCS9900
• TMS RamSan-620; Compellent Series 20
• Hitachi: Lightning, Thunder, TagmaStore, AMS 2100/2300/2500, WMS, USP, USP-V, Virtual Storage Platform (VSP)
• HP: 3PAR, StorageWorks P9500, MA, EMA, MSA 2000, XP, EVA 6400/8400
• EMC: VNX, VMAX, CLARiiON CX4-960, Symmetrix
• Sun StorageTek; NetApp FAS; NEC iStorage; Bull Storeway; Pillar Axiom
• Fujitsu Eternus: DX60, DX80, DX90, DX410, DX8100, DX8300, DX9700, 8000 models 2000 & 1200, 4000 models 600 & 400, 3000

Virtual Storage Platform Management


Tivoli Storage Productivity Center - TPC

What you need to manage:
• Servers: ESX servers; applications, databases, and file systems; volume managers; host bus adapters; virtual HBAs; multipath drivers
• Storage networks: switches & directors; virtual devices
• Storage: multi-vendor storage; storage array provisioning; virtualization / volume mapping; block + NAS, VMFS; tape libraries

How TPC can help – start with TPC 5.1:
• Single management console for heterogeneous storage
• Health monitoring, capacity management, provisioning
• Fabric management, FlashCopy support
• Storage system performance management, SAN fabric performance management, trend analysis
• DR & business continuity: applications & storage, hypervisors (ESX, VIO), HyperSwap management
• Replication: FlashCopy, Metro Mirror, Metro Global Mirror

... and mature with IBM SmartCloud Virtual Storage Center – all this and more:
• Advanced SAN planning and provisioning based on best practices
• Proactive configuration change management
• Performance optimization and tiering optimization
• Complete SAN fabric performance management
• Storage virtualization
• Application-aware FlashCopy management

TPC 5.1 Highlights

• Fully integrated, web-based GUI
  – Based on the Storwize/XIV GUI's success
• TCR/Cognos-based reporting & analytics
• Enhanced management for virtual environments
• Integrated installer
• Simplified packaging


Enhanced management for virtual environments

Diagram: Tivoli Storage Productivity Center monitoring virtual machines clustered across hypervisor hosts and their SAN storage.

• Helps avoid double counting storage capacity in TPC reporting on VMware
• Associates storage not only with individual VMs and hypervisors but also with the clusters
• VMotion awareness


Enhanced management for virtual environments – web-based GUI view of hypervisor-related storage.


Integrated Infrastructure System

"Cloud Ready"

IBM PureSystems

Infrastructure & Cloud
• Integrated infrastructure system
• Factory integration of compute, storage, networking, and management
• Broad support for x86 and POWER environments
• Cloud ready for infrastructure

Application & Cloud
• Integrated application platform
• Factory integration of infrastructure + middleware (DB2, WebSphere)
• Application ready (Power or x86 with workload deployment capability)
• Cloud-ready application platform


PureFlex System is Integrated by design

Tightly integrated compute, storage, networking, software, management, and security

Expert integrated systems: compute, storage, networking, virtualization, security, management, tools, and applications.

Flexible and open choice in a fully integrated system


IBM PureSystems

What's inside? An evolution in design, a revolution in experience.

IBM Flex System building blocks (used in IBM PureFlex System and IBM PureApplication System):
• Chassis: 14 half-wide bays for nodes
• Compute nodes: Power 2S/4S and x86 2S/4S
• Storage node: V7000, with expansion inside or outside the chassis
• Management appliance
• Networking: 10/40 GbE, FCoE, InfiniBand, 8/16 Gb FC
• Expansion: PCIe and storage

IBM PureFlex System: pre-configured, pre-integrated infrastructure systems with compute, storage, networking, physical and virtual management, and entry cloud management, with integrated expertise.

IBM PureApplication System: pre-configured, pre-integrated platform systems with middleware, designed for transactional web applications and enabled for cloud, with integrated expertise.


Summary


Why consider Storage Virtualization?

1. The missing storage "hypervisor" for virtualized servers
2. Physical migration effort is too high
3. Compatibility chaos (multipathing, HBA firmware, ...)
4. Need for transparent campus failover, as with a Unix LVM
5. Need for automatic hot-spot elimination ("Easy Tier")
6. Unhappy with storage performance

– Simplified administration, including copy services: one common process
– Online re-planning flexibility is greatly enhanced ("cloud ready")
– Storage effectiveness (ongoing optimization) can be maintained over time
– Move applications up one tier as required, or down one tier when stale
– Move from performance design "in hardware" to QoS policy management


Internet Resources

• Information Center: http://publib.boulder.ibm.com/infocenter/svc/ic/index.jsp
• SVC Support Matrix: http://www-03.ibm.com/systems/storage/software/virtualization/svc/interop.html
• SVC / Storwize V7000 Documentation: http://publib.boulder.ibm.com/infocenter/svc/ic/index.jsp


Thank you!
