Thoughts About CoprHD Migration Support

Tom Watson
CoprHD Concepts Necessary for Migration (Virtual Arrays and Virtual Pools)

A Virtual Array (Varray) is a collection of hardware resources that are:
• Physically interconnected by network(s)
• Used in conjunction with one another to provision resources to consumers such as hosts
• Usually located in physical proximity
• Examples: arrays, protection systems, storage ports, storage pools, networks

A Virtual Pool (Vpool) is a class-of-service abstraction detailing capabilities such as performance, protection, and high availability.
• Virtual Pools match a set of physical pools, provided by storage systems, that can be used to provide the required service.
• Varray + Vpool + volume characteristics are used for provisioning (see the sketch below).
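To make the last bullet concrete, here is a minimal provisioning sketch against the CoprHD REST API in Python. The endpoint path, payload fields, and port follow my reading of the CoprHD/ViPR API reference and should be verified against your release; the controller address, token, and URIs are placeholders.

import requests

COPRHD = "https://coprhd.example.com:4443"    # placeholder controller address
HEADERS = {
    "X-SDS-AUTH-TOKEN": "<auth-token>",       # from a prior authentication call
    "Content-Type": "application/json",
}

def create_volume(name, size_gb, varray_uri, vpool_uri, project_uri):
    """Provision a block volume: the request is exactly Varray + Vpool +
    volume characteristics (name, size, count)."""
    body = {
        "name": name,
        "size": f"{size_gb}GB",
        "count": 1,
        "varray": varray_uri,     # which physical domain to provision in
        "vpool": vpool_uri,       # which class of service to deliver
        "project": project_uri,
    }
    r = requests.post(f"{COPRHD}/block/volumes", json=body,
                      headers=HEADERS, verify=False)  # lab-style self-signed cert assumed
    r.raise_for_status()
    return r.json()               # an async task list; poll it for completion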
Virtual Array Partitioning Example – Two Arrays and a VPLEX

• Hardware, e.g. the arrays, may be assigned to multiple Varrays for logical purposes.
• The example here is a VPLEX Metro configuration serving VA-2 and VA-3, each of which has a VMAX array.
• The VPLEX can create local volumes (accessible in only one VPLEX cluster) or distributed volumes (accessible via both clusters with simultaneous read/write).
Migration Approaches (from the Vicom paper “Fast. Problem-free. On-time. … Migrations”, Nov. 2007)

Server Based Migrations
 Lower cost since no additional hardware
 Potentially transparent to host and compatible with any storage systems
 But reduced throughput or application performance due to server CPU and network overheads
Examples: EMC PowerPath

Array Based Migrations
 Mission-critical, online or offline migrations with synchronous or asynchronous copying to remote data centers
 Faster than server migrations
 But not widely supported; different design center than traditional switch; handling legacy systems problematic
Examples: EMC VMAX SRDF, IBM XIV

Switch / Appliance (Data Mover) Based Migrations
 Appliance serves as an independent data mover from a central point within the fabric
 Appliance can be tailored to legacy and vendor behaviors, eliminating the need for special drivers on the host
 But may not work for migrations between equipment of different manufacturers (often support only inbound migrations)
 But may not lend themselves to specialized storage systems such as ESCON / FICON
 May require dedicated I/O ports / links
Examples: EMC VPlex, IBM SVC, Vicom Vmirror 8G
Array Based Migration
Example: Tech. Refresh (array replacement) using Array Migration

[Diagram: Host attached to Old Array and New Array; the New Array also acts as a virtual host to the Old Array]

Start: Host connected to Old Array
Steps (orchestration sketched below):
1. Introduce new array
2. Move host to new array (down time)
   a. Shut down host
   b. Set up new array as virtual host to old array (zoning and mapping/masking on old array)
   c. Initiate migration (completes asynchronously)
   d. Set up host access to new array (zoning and mapping/masking on new array)
   e. Bring up host and rescan ITLs (SCSI ids)
3. Migration continues while host is accessing new array
   a. Data not yet migrated to the new array is fetched synchronously by the new array from the old array and supplied to the host
4. Migration committed when finished
   a. Remove new array's virtual host access to old array (zoning and mapping/masking)
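The downtime window in step 2 is the crux of this flow, so here is the ordering expressed as code. Every helper below is a hypothetical print-stub standing in for real array and fabric management APIs; only the sequencing is the point.

def shutdown_host(host): print(f"shutdown {host}")
def boot_host(host): print(f"boot {host}")
def rescan_itls(host): print(f"rescan ITLs on {host}")
def zone(initiator, target): print(f"zone {initiator} -> {target}")
def mask(array, luns, to): print(f"mask {luns} on {array} to {to}")

def tech_refresh(host, old_array, new_array, luns):
    # Step 2: move host to new array (downtime window)
    shutdown_host(host)
    zone(new_array, old_array)            # new array acts as a virtual host
    mask(old_array, luns, to=new_array)   # mapping/masking on old array
    print("initiate migration (asynchronous)")     # step 2c
    zone(host, new_array)
    mask(new_array, luns, to=host)
    boot_host(host)
    rescan_itls(host)
    # Step 3: host runs against the new array; reads of blocks not yet
    # copied are fetched synchronously from the old array.
    # Step 4: on completion, commit and tear down the virtual-host path.
    print("commit migration; unzone/unmask new array from old array")

tech_refresh("host1", "old_vmax", "new_vmax", ["lun0", "lun1"])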
IBM XIV Migration Facilities
(ref. IBM XIV Storage System Business Continuity Functions redbook, Nov. 2014)

IBM XIV has built-in migration facilities from arbitrary arrays to XIV, but no support (that I found) for migrating from XIV to other vendors' arrays.

Features:
• A single short outage to switch LUN ownership
• Synchronizes data between the two storage systems by transparently copying to the XIV in the background
• Supports data migration from most storage vendors; FC or iSCSI

The XIV Storage System handles all I/O requests for the host server during the data migration process. All read requests are handled based on where the data currently is. For example, if the data has already been migrated to the XIV Storage System, it is read from that location. However, if the data has not yet been migrated, the read request comes from the host to the XIV Storage System, which in turn retrieves the data from the source storage device and provides it to the host server. The XIV Storage System handles all host server write requests, and the non-XIV disk system is transparent to the host. All write requests are handled using one of two user-selectable methods: source updating and no source updating (modeled in the sketch below).
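That redirection logic is easy to state as code. The following is a toy model, not XIV code, just to make the read fetch-through and the two write modes concrete.

class MigrationProxy:
    """Toy model of the I/O handling described above: the target array
    owns all host I/O during the migration."""

    def __init__(self, source, target, source_updating=False):
        self.source = source              # block -> data on the old array
        self.target = target              # block -> data on the XIV
        self.migrated = set()             # blocks already present on the XIV
        self.source_updating = source_updating

    def read(self, block):
        if block in self.migrated:
            return self.target[block]     # already migrated: serve locally
        data = self.source[block]         # not yet migrated: fetch through
        self.target[block] = data         # (treated as migrated in this toy)
        self.migrated.add(block)
        return data

    def write(self, block, data):
        self.target[block] = data         # the XIV handles all host writes
        self.migrated.add(block)
        if self.source_updating:          # "source updating" mode keeps the
            self.source[block] = data     # old array in sync as a fallback

# Example: a read of unmigrated data is fetched from the source array.
proxy = MigrationProxy(source={0: "legacy-data"}, target={})
assert proxy.read(0) == "legacy-data"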
XIV Migration Steps
 Cable and zone XIV to non-XIV storage device.
 Define XIV on the non-XIV storage device as Linux or Windows host.
 Remove host multipath drivers; install IBM Host Attach Kit.
 Define and test the data migration volumes.
– On non-XIV storage, remap volumes from the old host to the XIV.
– On XIV, create data migration tasks and test them.
 Activate data migration tasks on XIV.
 Define host on XIV and bring up host on XIV.
– Zone the host to XIV.
– Map volumes to the host on XIV.
– Bring host online and start applications.
 Complete migration on XIV. Upon completion, delete migration tasks.
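The middle of that list corresponds to a handful of XCLI data-migration commands. Below is a sketch wrapped in Python for consistency with the other examples; the command and parameter names are as I recall them from the redbook's examples and should be verified against your XCLI version, and all target/volume/pool names are placeholders.

import subprocess

def xcli(*args):
    # Invoke the XIV XCLI; assumes the non-XIV system was already defined
    # as a migration target (target_define / connectivity setup).
    subprocess.run(["xcli", *args], check=True)

# Define, test, and activate a migration task for one volume.
xcli("dm_define", "target=legacy", "vol=xiv_vol01", "lun=3",
     "source_updating=no", "create_vol=yes", "pool=migration_pool")
xcli("dm_test", "vol=xiv_vol01")
xcli("dm_activate", "vol=xiv_vol01")
# ... bring the host up on the XIV while the background copy runs ...
xcli("dm_delete", "vol=xiv_vol01")   # only after synchronization completes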
Data Mover Based Migration
Example: Tech. Refresh (array replacement) using Data Mover

[Diagram: Host attached to the Data Mover (which contains a cache); the Data Mover is attached to the Old Array and the New Array]

Start: Assume host connected to Data Mover.
1. If host not initially connected to Data Mover, move host to Data Mover (down time)
   a. Shut down host
   b. Set up Data Mover as virtual host to old array (zoning and mapping/masking on old array)
   c. Set up host access to Data Mover (zoning and mapping/masking on the Data Mover)
   d. Bring up host and rescan ITLs (SCSI ids)
2. Migrate data using Data Mover (transparently to host; modeled in the sketch below)
   a. Initiate migration using Data Mover
   b. When migration finishes, commit migration
   c. Remove data from Old Array if desired
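Why step 2 is transparent: the host addresses the mover's virtual volume, never the arrays. A toy model of that indirection (not any vendor's code):

class DataMover:
    """Toy model: the host addresses the mover's virtual volume, so the
    backing array can be swapped without the host noticing."""

    def __init__(self, backend):
        self.backend = backend            # block -> data on the old array

    def read(self, block):
        return self.backend[block]        # host I/O always goes via the mover

    def write(self, block, data):
        self.backend[block] = data

    def migrate_and_commit(self, new_backend):
        new_backend.update(self.backend)  # background copy (simplified)
        self.backend = new_backend        # commit: invisible to the host

# Example: the host keeps reading block 0 across the back-end swap.
old_array, new_array = {0: "data"}, {}
mover = DataMover(old_array)
mover.migrate_and_commit(new_array)
assert mover.read(0) == "data"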
IBM SVC Migration Facilities
(from Implementing the IBM System Storage SAN Volume Controller V7.4, April 2015)

SVC uses symmetric virtualization:
• Host systems are isolated from physical storage
• Advanced features such as migration are handled by the SVC, which is the central point of control for the device
• Provides a "large scalable cache"
• Copy services ("IBM FlashCopy" point-in-time copy)
• "Metro Mirror" (synchronous copy)

Data Migration can be used for:
• Migrating data from one back-end controller to another using the SVC as a block data mover, and afterwards removing the SVC from the SAN
• Moving data from an SVC managed-mode volume to an image-mode volume before the SVC is removed
• Moving data from one SVC to another
• Redistributing volumes to new storage, off failing storage, or to rebalance workload

Migration operations:
• Migrating extents within a pool
• Migrating extents off an MDisk to other MDisks
• Migrating a volume from one storage pool to another
• Migrating a volume to change virtualization type from/to image mode (vs. managed mode)
• Moving a volume between I/O Groups
Using the IBM SAN Volume Controller for Storage Migration – steps:
 Add the SVC to the SAN environment and prepare the SVC.
 Unmount the selected LUNs or shut down the host.
 Add the SVC between your storage and the host.
 Mount the LUNs or start the host again (this time from the SVC).
 Start the migration (image mode to image mode; see the sketch below).
 After migration is complete, unmount the selected LUNs or shut down the host.
 Mount the LUNs from the target array or restart the host.
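The "image mode to image mode" step maps to the SVC CLI's migratetoimage command. A hedged sketch, driving the CLI over ssh from Python; the cluster address, volume, MDisk, and pool names are placeholders, and exact syntax can vary by SVC code level.

import subprocess

def svc(*args):
    # Drive the SVC CLI over ssh; 'admin@svc-cluster' is a placeholder.
    subprocess.run(["ssh", "admin@svc-cluster", *args], check=True)

# Image-mode-to-image-mode move: the volume remains a 1:1 image of an
# MDisk, but that MDisk now lives on the new controller.
svc("svctask", "migratetoimage", "-vdisk", "app_vol01",
    "-mdisk", "new_mdisk7", "-mdiskgrp", "NEW_IMAGE_POOL")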
FlashCopy
FlashCopy can be used to facilitate the movement or migration of data between hosts while minimizing
downtime for applications. By using FlashCopy, application data can be copied from source volumes to
new target volumes while applications remain online. After the volumes are fully copied and synchronized,
the application can be brought down and then immediately brought back up on the new server that is
accessing the new FlashCopy target volumes.
This method differs from the other migration methods. Common uses for this capability are host and backend storage hardware refreshes.
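A sketch of the FlashCopy variant using the mkfcmap/startfcmap CLI commands (same hypothetical ssh wrapper as above; volume and mapping names are placeholders):

import subprocess

def svc(*args):   # same hypothetical ssh wrapper as in the previous sketch
    subprocess.run(["ssh", "admin@svc-cluster", *args], check=True)

# Copy application data to a new target volume while the app stays online.
svc("svctask", "mkfcmap", "-source", "app_vol01", "-target", "app_vol01_new",
    "-name", "refresh_map", "-copyrate", "80")
svc("svctask", "startfcmap", "-prep", "refresh_map")
# Once fully copied and synchronized: quiesce the application, then bring
# it back up on the host that accesses app_vol01_new.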
Vicom Vmirror 8G (from product brief)

• "Clustered, dual active-active appliance design combines with Apple Xsan or other equivalent SAN file systems to create enterprise-grade storage solutions for business-critical and video applications"
• "Embedded hardware mirroring protects data continuously with no overhead on hosts or storage."
• "Universal Fibre Channel compatibility, works with any mix of OS drivers, switches, and Xsan-certified storage systems."
• "Appliance-based design eliminates need for installation of host software or proprietary device drivers."
• Storage system support unclear: "Other storage can be certified and supported by request"
• They offer a migration service; I couldn't find much detail on buying the appliance and performing your own migrations
• Customer focus seems to be video distribution/editing, with a secondary business of migration
EMC VPLEX (from EMC VPLEX Data Mobility and Migrations, Sep. 2014)

• "Move data non-disruptively between EMC and non-EMC storage arrays without any downtime for the host"
• "Virtual volumes retain the same identities… The host does not need to be reconfigured"
• Available in Local, Metro, and Geo configurations. CoprHD supports Local and Metro.
• Metro configurations provide data availability from two metro sites simultaneously using a write-through distributed cache.
• High availability is provided, as the data can be accessed from either site even if the other site is down.
• From a blog… "over 45 different [back-end] array families supported"
VPLEX CoprHD Migration Examples

VPLEX Virtual Pool Migrations
 Used for changing the Storage Pool, Storage Array, Tiering Policy, etc. within the same site

VPLEX Virtual Array Migrations
 Allows moving to a different site plus a different Storage Pool, Storage Array, Tiering Policy, etc.
 Uses one VPLEX cluster in Site 1 and the other VPLEX cluster in Site 2 to perform the move.

Migration of a Non-VPLEX Volume Using VPLEX (flow)
1. Ingest the non-VPLEX volume into CoprHD
2. Move the ingested volume under VPLEX "protection" (host affecting)
3. Migrate the volume using a Varray change

Migration of an Existing VPLEX Volume Whose Back-end Array Is Not Supported by CoprHD
1. Ingest the virtual volume only into CoprHD
2. Migrate the virtual volume using a Vpool change to a CoprHD-supported back-end array
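Both flows end in a "change varray" or "change vpool" request against the CoprHD block service. The sketch below uses endpoint paths and body shapes that are my reading of the CoprHD REST API and must be verified against your release; all URIs and the controller address are placeholders.

import requests

COPRHD = "https://coprhd.example.com:4443"          # placeholder
HEADERS = {"X-SDS-AUTH-TOKEN": "<auth-token>",
           "Content-Type": "application/json"}

def change_varray(volume_uris, new_varray_uri):
    """Varray change: VPLEX migrates the volume to the target site/array."""
    body = {"volumes": volume_uris, "varray": new_varray_uri}
    r = requests.post(f"{COPRHD}/block/volumes/varray", json=body,
                      headers=HEADERS, verify=False)
    r.raise_for_status()
    return r.json()

def change_vpool(volume_uris, new_vpool_uri):
    """Vpool change: same-site migration to a pool/array that matches the
    new class of service."""
    body = {"volumes": volume_uris, "vpool": new_vpool_uri}
    r = requests.post(f"{COPRHD}/block/volumes/vpool-change", json=body,
                      headers=HEADERS, verify=False)
    r.raise_for_status()
    return r.json()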
Step 1: Discover Unmanaged Volumes from the Array
(a prerequisite for "Ingestion")
Step 2: Ingestion of Unmanaged Volumes
(verifies that their form is such that CoprHD can manage them; see the sketch below)
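Steps 1 and 2 map to a discovery call and an ingestion call. A sketch, with endpoint paths and payload fields as I understand the CoprHD API (verify against your version; all URIs are placeholders):

import requests

COPRHD = "https://coprhd.example.com:4443"          # placeholder
HEADERS = {"X-SDS-AUTH-TOKEN": "<auth-token>",
           "Content-Type": "application/json"}

def discover_unmanaged(storage_system_uri):
    """Step 1: ask CoprHD to discover volumes it does not yet manage."""
    r = requests.post(
        f"{COPRHD}/vdc/storage-systems/{storage_system_uri}/discover",
        params={"namespace": "UNMANAGED_VOLUMES"},
        headers=HEADERS, verify=False)
    r.raise_for_status()

def ingest(unmanaged_volume_uris, project_uri, varray_uri, vpool_uri):
    """Step 2: ingestion checks each volume against the target Vpool and
    takes it under management if it conforms."""
    body = {"project": project_uri, "varray": varray_uri,
            "vpool": vpool_uri, "unmanaged_volumes": unmanaged_volume_uris}
    r = requests.post(f"{COPRHD}/vdc/unmanaged/volumes/ingest",
                      json=body, headers=HEADERS, verify=False)
    r.raise_for_status()
    return r.json()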
Ingestion Result
Step 3: Move Selected Ingested Volumes Under VPLEX "Management"
(this will change the volumes' ITLs, impacting the host)
Result of Moving Volume(s) Under VPLEX Management
At this point you have to export the volumes and change the host ITLs to restore access to the volumes (a rescan operation)
Step 4: Perform VPLEX Migration of the Backend Volume to Another Array
(transparent to the host)
Result after Data Migration Completes: original volumes deleted, VPLEX now servicing I/O from the new volumes
Internal workflow steps for the migration operation
So… what interfaces are needed to allow community development of data movers like SVC? Or native array migration like XIV?

For data movers (one possible driver surface is sketched after these lists):
• A way to tell if a data mover can "manage" an existing underlying volume, based on both connectivity and data mover support for the back-end array
• Workflow for moving an existing volume under data mover management
• Workflow for exporting the volume from the data mover to hosts (including host rescan, zoning)
• Workflow to migrate the volumes to a different back-end volume (typically creating a mirror, copying the data, committing the migration, and removing the original volume)
• Workflow for the data mover to cease management of the volume, returning control to the underlying array (including host rescan, zoning)

For arrays with native migration:
• A way to determine what migration targets (arrays, varrays, vpools) can be supported
• A way for an array to indicate what data sources are valid
• Workflow to migrate the target to the new array
• Workflow to move host exports to the new array (host rescan, zoning)
• Deletion of the volume off the original array (if desired)
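One possible shape for the data mover surface, written as a Python interface. The class and method names are purely illustrative, not an existing CoprHD SPI; they just restate the five workflows above as driver entry points.

from abc import ABC, abstractmethod

class DataMoverDriver(ABC):
    """Hypothetical driver surface for community data movers (SVC-like)."""

    @abstractmethod
    def can_manage(self, volume, backend_array):
        """Connectivity + back-end support check for an existing volume."""

    @abstractmethod
    def take_over(self, volume):
        """Move an existing volume under data mover management."""

    @abstractmethod
    def export_to_host(self, volume, host):
        """Export through the mover: zoning plus host rescan."""

    @abstractmethod
    def migrate(self, volume, target_pool):
        """Mirror, copy, commit, then remove the original back-end volume."""

    @abstractmethod
    def release(self, volume):
        """Return control to the underlying array (zoning, host rescan)."""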