Power Systems Virtual User Group 2019
VM Recovery Manager
HA & DR for IBM PowerVM LPARs
Michael Herrera
Certified IT Specialist - HA & DR SME
IBM Washington Systems Center
mherrera@us.ibm.com
@Herrera_HADR
YouTube: https://www.youtube.com/user/MrPowerHA/videos
Session Objectives
— VM Recovery Manager Offerings in 2019
• Product Positioning
— VM Recovery Manager for HA
• Product Use Cases
• Deployment Requirements
• Architectural requirements & Solution Design
— VM Recovery Manager for DR
• Product Use Cases
• Deployment Requirements
• Architectural requirements & Solution Design
— Upcoming Features in V1.4 – Q4 2019
VM Recovery Manager Offering – Product Evolution

• Jun 2017 – GDR Release 1.1.0.1: expanded storage support (SVC/Storwize Sync & Async DR, DS8K Async, EMC Sync); IBM i guest VMs; boot list management
• Oct 2017 – 1.2 Product Beta: Host Group DR, Failover Rehearsal
• Dec 2017 – GDR Release 1.2
• Sept 2018 – 1.3 Product Beta (HA)
• Nov 2018 – VMR HA Release 1.3: support for P7+, P8 & P9; graphical & CLI management; advanced policies (Flex Capacity management, LPM management, collocation / anti-collocation); automated end-to-end management (priority based failovers; host, VM & application level HA; host blacklisting; application start sequence management); HA agents for key middleware (DB2, Oracle, SAP HANA)
• Dec 2018 – VMR DR Release 1.3: HA or DR deployments support; Hitachi TrueCopy management; Failover Rehearsal support for Hitachi storage; advanced DR policies (Host Groups); DR Failover Rehearsal; Hitachi HUR replication support; Flex Capacity management; VLANs & vSwitches per site
• June 2019 – SP2
HA & DR Solutions for Cognitive Systems (PowerVM based Servers)

Cluster HADR solutions – suited for:
• Protecting mission critical workloads that require quick recovery - Platinum tier
• Advanced HA solutions (eg: HyperSwap)
• Storage and network management
• Automated disaster recovery

VM Restart HADR solutions – suited for:
• Cloud environments – Gold, Silver tier
• Multi OS simplified HADR management
• Protecting few to large numbers of LPARs
• Optimized capacity deployments (no 1 to 1 backup)

Cluster HADR solutions:

• PowerHA SystemMirror for AIX [ 5765-H39 Std. Edition | 5765-H37 Ent. Edition ]
  – Supported on all Power servers
  – Premier clustering solution for AIX workloads on Power for 25+ years
  – Tightly coupled with Cluster Aware AIX
  – The Enterprise Edition offering is the primarily used solution for HA mirroring between Power Systems

• PowerHA SystemMirror for Linux [ 5765-L22 Std. Edition ]
  – Supported on Power 8 and later servers
  – SUSE & SLES support
  – Integrated support for SAP HANA & NetWeaver

• PowerHA SystemMirror for IBM i [ 5770-HAS Std. Edition (opt 2) | 5770-HAS Ent. Edition (opt 1) ]
  – Supported on all Power servers
  – Premier clustering solution for IBM i workloads on Power for 10+ years
  – Enterprise Edition available for automated DR functionality (IP & block level replication integration)
  – New shared user interface with AIX PowerHA clusters

VM Restart HADR solutions:

• VM Recovery Manager HA [ 5765-VRM ]
  – Supported on Power 7+ and later servers
  – VM restart among local hosts for virtualized AIX, IBM i & Linux VMs
  – Provides host level, VM level and application level monitoring
  – LPM management
  – New user interface for deployment & management operations

• VM Recovery Manager DR (GDR) [ 5765-DRG ]
  – Supported on Power 7 and later servers
  – VM restart across sites for virtualized AIX, IBM i & Linux VMs
  – Leverages block level replication of OS & data volumes
  – Supports IBM, EMC & Hitachi block level replication as well as a shared storage model
  – Simple to deploy & use DR management
VM Recovery Manager vs. Other VM Management Offerings

| Offering | VM Failure Detection | Host Failure Detection | VM Placement Scheduling | Single Site/HA | Multi-Site/DR | Application Monitoring* | Automated Initiation |
|---|---|---|---|---|---|---|---|
| HMC/NovaLink | No | Limited | No | Yes | No | No | No |
| PowerVC | No | Limited | Yes | Yes | No | No | Limited |
| VMR HA/DR (KSYS) | Yes* | Comprehensive | Yes | Yes | Yes | Yes* | Yes |

*: Available for AIX & Linux VMs with installed agent (KSYS acts as a client LPM automation tool)
VM Recovery Manager – HA Solution protects against:
1. Host level failure
2. VM level failure
3. Application level failure

[Diagram] VMR-HA (Product ID 5765-VRM): a KSYS controller (CLI or GUI administration, working through the HMCs or NovaLink, alongside PowerVC where present) manages a Host Group of local hosts on shared SAN storage. Each host runs dual VIOS and a mix of managed and unmanaged VMs, and the KSYS relocates managed VMs with LPM & remote restart capabilities. A Host Group can expand up to 12 hosts.

[Diagram] VMR-DR (Product ID 5765-DRG): the KSYS manages Primary and Secondary sites. Host Groups pair hosts across the sites (Host Pair 1, Host Pair 2), each carrying managed and unmanaged VMs over dual VIOS, with the disk group copied between sites by block level replication.
[Diagram] Disaster Recovery Model (VMR-DR, Product ID 5765-DRG): the KSYS uses LPM & remote restart capabilities to relocate managed VMs between Host Group A at the primary site and Host Group B at the secondary site; unmanaged VMs stay put, and the disk group is replicated between sites.

[Diagram] HA VM Restart Model (VMR-HA, Product ID 5765-VRM): the KSYS and the HMCs restart VMs among the local hosts of a Host Group; all hosts (dual VIOS each) attach to the same shared SAN storage.
[Diagram] HA VM Restart Model (local data center): a single KSYS instance (VMR-HA) with redundant HMCs can manage multiple Host Groups (A, B and C), each with its own shared SAN storage and dual VIOS per host. The IBM Knowledge Center states a 4 Host Group maximum when using VMR HA (a statement of what has been tested).
[ 1 ] LPM Management
➔ LPM Individual VMs or Evacuate Managed VMs from Host & Return to Homehost
[ 2 ] VM Restart Capability ➔ Restart Offline or Failed VMs on new host (Independent of Simplified Remote Restart)
[Diagram] Lab environment: a KSYS VM (VMR-HA, CLI or GUI administration) and two HMCs manage a Host Group of Host 1 (S824), Host 2 (S922) and Host 3 (E980). Managed VMs (VMR_oraDB, PHA_VMR_node1, PHA_VMR_node2) and several unmanaged VMs run over dual VIOS per host. The VIOS form a VMR SSP cluster, with the same two 10 GB LUNs (the HA LUN and the repository LUN) shared across all VIO servers in the Host Group.
VMR HA: Graphical User Interface
VMR HA: LPM Host Evacuation (All VMs | Individual VMs)

LPM operations can be run at the host level from the KSYS GUI interface.
Migration Operations: LPM operations from the KSYS controller CLI

# ksysmgr lpm vm <vm1>[,vm2,..] to=<hostname>                    LPM individual VM/s
# ksysmgr lpm vm <vm1>[,vm2,..] to=<hostname> action=validate    LPM validation only
# ksysmgr lpm host <hostname> to=<hostname>                      Migrate all managed VMs off the host
# ksysmgr restore host <hostname>                                Restore all managed VMs to their home host
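The CLI invocations above follow a regular pattern; as a quick illustration (our own sketch, not part of the product), a wrapper might assemble the command lines like this, with the validate-only form available as a dry run. Only the ksysmgr syntax comes from the slides; the function names are assumptions.

```python
# Illustrative helpers that assemble the ksysmgr LPM command lines shown
# above. A real wrapper would hand the resulting list to subprocess.run().

def lpm_vm_cmd(vms, target, validate=False):
    """Build 'ksysmgr lpm vm <vm1>[,vm2,..] to=<hostname>' (optionally validate-only)."""
    cmd = ["ksysmgr", "lpm", "vm", ",".join(vms), f"to={target}"]
    if validate:
        cmd.append("action=validate")   # dry run: LPM validation only
    return cmd

def evacuate_host_cmd(host, target):
    """Build the host-level evacuation command."""
    return ["ksysmgr", "lpm", "host", host, f"to={target}"]

def restore_host_cmd(host):
    """Build the command that returns managed VMs to their home host."""
    return ["ksysmgr", "restore", "host", host]
```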
VMR HA: KSYS VM Relocation Plans [ Host or VM Level ]

# ksysmgr lpm host ATS-S824-8286-42A-SN2170AAV to=ATS-S922-9009-22A-SN7811D90    Migrate all managed VMs to the target host

Relocation plan reports are produced at both the host level and the VM level.
VMR HA: VM Restart Actions

Manual remote restart operations can be run from the KSYS GUI or the KSYS CLI:

# ksysmgr restart vm <vm1>[,vm2,..] to=<hostname>    Relocate & restart managed VM/s
# ksysmgr restart host <hostname> to=<hostname>      Relocate & restart ALL managed VMs on a host
[ 1 ] LPM Management
Other Tools deliver
similar functionality
➔ LPM Individual VMs or Evacuate Managed VMs from Host & Return to Homehost
[ 2 ] VM Restart Capability ➔ Restart Offline or Failed VMs on new host (Independent of Simplified Remote Restart)
[ 3 ] Host Failure
➔ Automated VM Restart of Managed VMs or Notification if set to Advisory mode
(Same lab environment diagram as shown earlier: KSYS VM, two HMCs, Host Group of S824 / S922 / E980, VMR SSP cluster with two shared 10 GB LUNs.)
VMR HA: What if I don’t want Automatic Restarts?

The default behavior is for VMs to be restarted automatically on host failure; set Advisory mode instead to receive a notification and initiate the restart yourself.
[ 1 ] LPM Management
Other Tools deliver
similar functionality
Differentiating
Features for AIX &
Linux VMs
➔ LPM Individual VMs or Evacuate Managed VMs from Host & Return to Homehost
[ 2 ] VM Restart Capability ➔ Restart Offline or Failed VMs on new host (Independent of Simplified Remote Restart)
[ 3 ] Host Failure
➔ Automated VM Restart of Managed VMs or Notification if set to Advisory mode
[ 4 ] VM Failure
➔ Individual VM Level Monitoring for AIX & Linux only
[ 5 ] Application Failure
➔ Included Agents for SAP HANA, Oracle, DB2, Postgres or framework for Custom Applications
(Same lab environment diagram as shown earlier, with two additions: VMR_oraDB runs the Oracle agent and restarts automatically either locally or on the next available host within the VMR HA Host Group, while PHA_VMR_node1 uses custom user defined scripts.)
VMR HA: Recap of Levels of Protection

Failure scenarios:
• Host failure ( AIX | Linux | IBM i VMs )
• VM failure detection (agent installed on AIX or Linux VMs)
• Application failure detection (agent installed on AIX or Linux VMs)

[Diagram] The lab Host Group (Hosts 1-3 with dual VIOS, the VMR SSP cluster and its two 10 GB LUNs) now shows monitored applications (ora_app, custom_app1) inside the managed VMs.
What is the VM Agent installing inside the VM?

The monitoring daemon is started automatically upon installation. Agents are provided for SAP HANA, DB2, Oracle & Postgres, but users can define their own to monitor any other application. The ksysvmmgr command installed with the agent is specific to VM monitor functionality inside the VM.
VMR HA: Details on Included Application VM Agent Scripts

Specific parameters for the built-in supported application agents:

| Attribute       | Oracle                                | DB2                             | SAP HANA                          |
|-----------------|---------------------------------------|---------------------------------|-----------------------------------|
| TYPE            | ORACLE                                | DB2                             | SAPHANA                           |
| version         | Taken from application                | Taken from application          | Taken from application            |
| instancename    | Oracle user                           | DB2 instance name               | SAP HANA instance name            |
| database        | Oracle SID                            | DB2 database                    | SAP HANA database                 |
| start_script*   | /usr/sbin/agents/oracle/startoracle   | /usr/sbin/agents/db2/startdb2   | /usr/sbin/agents/sap/startsaphana |
| stop_script*    | /usr/sbin/agents/oracle/stoporacle    | /usr/sbin/agents/db2/stopdb2    | /usr/sbin/agents/sap/stopsaphana  |
| monitor_script* | /usr/sbin/agents/oracle/monitororacle | /usr/sbin/agents/db2/monitordb2 | /usr/sbin/agents/sap/monitorsaphana |

* The sample scripts are used automatically when a ksysvmmgr app is defined specifying the TYPE.
*** Table doesn’t reflect the Postgres scripts.
VM Agent: Application Monitoring Enablement within VM

Oracle agent enablement, sample syntax:

# ksysvmmgr add app <appname> type=ORACLE instancename=<oracle_username> database=<database_name>
# ksysvmmgr sync

# ksysvmmgr -s add app ora_app type=ORACLE instancename=oracle database=vrmrsd
or
# ksysvmmgr -s add app ora_app type=ORACLE instancename=oracle database=vrmrsd critical=yes

* The "critical" option is not set by default. When set, it will attempt the restart 3 times and then attempt it on the next available host.

Other included agents and custom application configuration:

SAP HANA:  # ksysvmmgr -s add app <name> type=SAPHANA instancename=<HANAinstancename> critical=yes
DB2:       # ksysvmmgr -s add app <name> type=DB2 instancename=<DB2instancename> critical=yes
Postgres:  # ksysvmmgr -s add app <name> type=POSTGRES instancename=<POSTGRESinstancename> critical=yes
Custom:    # ksysvmmgr -s add app <name> start_script=<name> stop_script=<name> monitor_script=<name> critical=yes
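To make the "critical" semantics concrete, here is a small sketch of the restart-then-relocate flow. It is our own illustration: the retry count comes from the note above, but the function and parameter names are assumptions, not product internals.

```python
# Sketch of the "critical" recovery flow described above: try the
# application restart locally up to 3 times, then restart the VM on the
# next available host in the Host Group. All names are illustrative.

LOCAL_ATTEMPTS = 3   # per the note above: 3 attempts before moving hosts

def recover_critical_app(restart_app, restart_vm_on, candidate_hosts):
    """Return a short description of how recovery concluded."""
    for _ in range(LOCAL_ATTEMPTS):
        if restart_app():                 # local application restart
            return "recovered locally"
    for host in candidate_hosts:          # next available host in the group
        if restart_vm_on(host):
            return f"vm restarted on {host}"
    return "recovery failed"
```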
VMR HA: Oracle VM Agent GUI View

The application status value GREEN indicates a stable application; YELLOW indicates an intermediate state in which a restart of the application is being attempted; RED indicates a permanent failure of the application.
VMR HA : Enablement of VM Level & Application Level Monitoring
Step 1 : Install Agent in VM
Step 1a) Configure Application Definition with provided | custom agent
Step 2 : Enable HA Monitoring for VM
Step 3 : Re-Run Discovery
VMR HA: Failure Detection Tuning & Collocation Policies

Common questions:

1. Will VMR HA restart VMs if the PowerVM SRR flag is not set for them?
   Yes. It does not use the same profile configuration store; it already stores the configuration for the managed VMs on the KSYS VM.
2. Will IBM i VMs get automatically restarted in the event of a VM failure?
   The VM agent packages are only available for AIX & Linux today; they allow the KSYS to do additional monitoring at the VM level or the application level.
3. How does the implementation time change based on the number of servers?
   The discovery process may take a little longer, but unlike clustered environments there is no special customized setup or actively running standby LPARs.
4. Can / should I replace my HA clusters with VMR HA?
   Truly "mission critical" workloads are typically better suited to a cluster configuration.
VM Recovery Manager for HA: Deployment Requirements

KSYS Controller:
• (1) VM running AIX with 1 CPU and 8 GB of memory (CLI or GUI administration)

Hardware Management Console (HMC):
• (2) HMCs for redundancy – Version 9.1.1 and on

Servers:
• P7+: FW770.90 or later; FW780.70 (except 9117-MMB models); FW783.50 or later
• P8: FW840.60 or later; FW860.30 or above
• P9: FW910 or later

VIO Server/s & SSP Cluster:
• (2) VIOS per host – Version 3.1.0 and on
• (2) 10 GB LUNs shared across all involved VIO servers in the Host Group
• *** Recommend an additional 0.5 core and 2 GB of memory on top of the VIOS sizing planned for the environment

Storage:
• Shared storage configuration (NPIV or VSCSI)
• LUNs & zoning must be configured to provide LPM capabilities

VM Recovery Manager HA is included in AIX Enterprise Cloud Edition.
VM Recovery Manager for HA [ PID 5765-VRM ] – Licensing Scenario

• VMR HA is installed on an AIX partition that provides the KSYS cluster functionality. The KSYS cluster is configured as type "HA".
• KSYS is the orchestrator that acts on the VMs containing the managed cores in the host group.
• Scenario #1: (6) managed VMs (AIX & Linux) consuming 10 processors. On a scale-out server: 500 x 10 = $5,000. On a scale-up server: 800 x 10 = $8,000.
• Scenario #2: (4) additional VMs (Linux, AIX & IBM i) consuming 8 processors. On a scale-out server: + $4,000. On a scale-up server: + $6,400.
• The default behavior is automated VM restart on host failure. Advisory mode may be enabled, in which case the admin moves the VMs via LPM or via restart to another system in the VMR HA host group.
• There can be up to 12 systems in the host group, and up to ~300 VMs can be managed by KSYS.
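The arithmetic in the two scenarios generalizes to a simple per-core calculation. The per-core figures below ($500 scale-out, $800 scale-up) are just the numbers implied by the slide, not an official price list:

```python
# Per-managed-core licensing sketch using the figures implied by the
# scenarios above (illustrative only; consult IBM for actual pricing).

PER_CORE = {"scale_out": 500, "scale_up": 800}

def vmr_ha_license_cost(managed_cores, server_class):
    """Cost of licensing `managed_cores` on the given server class."""
    return PER_CORE[server_class] * managed_cores

# Scenario #1: 6 managed VMs consuming 10 processors
# Scenario #2: 4 additional VMs consuming 8 processors
```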
VM Recovery Manager HA vs. DR Comparison

| | VM Recovery Manager HA (High Availability) | VM Recovery Manager DR (Disaster Recovery) |
|---|---|---|
| Product ID | 5765-VRM | 5765-DRG |
| Solution type | High availability within one site | Disaster recovery across sites |
| Auto restart capabilities | Automated or manual "Advisory mode" restart for 1. host failures, 2. VM failures (VM agent needed), 3. application failures (VM agent needed) | Manual restart (admin initiated automation) |
| Storage support | Shared storage based solution | Block level replication management; site management; shared storage configuration support |
| Capacity management | Flex Capacity management, best fit relocation | Enterprise pool management tool |
| Restart order | Priority based relocation; application restart sequencing | Priority based relocation |
| Application HA management (within VM or across VMs – future) | HA agents for common middleware (DB2, Oracle, SAP HANA) | Same support possible if HA is also deployed |
| Advanced VM policies | Collocation and anti-collocation; host exclusion/blacklist for VM relocation | Future |
| Planned HA management (LPM) | Vacate a host for maintenance; restore VMs to the original host | Not applicable |
| Graphical user interface | Available for deployment & management | Available in V1.3.0 SP2 |
Traditional Mobility & HA / DR Clustered Configurations

[Diagram] Primary data center, LPM & remote restart capabilities:

• Remote restart: manual or automated; single OS instance (same shared storage); localized within the same site; flat network. VMR HA can automate these functions.
• Clustered VMs (primary VM + standby VM): automated fallover; independent OS volumes for each VM; shared data volumes; truly "mission critical" workloads.
• DR clustering with automated fallover: a standby VM in the secondary data center, integrated with automated IP or block level replication of the data volumes.
Why not Replicate & Relocate the VMs to a Remote Site myself?

[Diagram] Primary and secondary data centers with different network segments and different storage enclosures; OS & data volumes are copied by block level replication, leaving inactive profiles on the target site HMC.

• Can I use replication & do this manually with separate profiles?
• What if I had to do this at scale? This is where the KSYS (VMR-DR) orchestration comes in.
VMR HA – VM Restart Model (Local Datacenter)

[Diagram] The KSYS and redundant HMCs manage a Host Group of hosts (dual VIOS each) on shared SAN storage, with LPM & remote restart capabilities across all VMs.
VMR DR – Site Recovery Model

[Diagram] The KSYS (VMR-DR) manages Host Group A at the primary datacenter and Host Group B at the secondary datacenter. Each host pair carries managed and unmanaged VMs over dual VIOS, with block level replication between the sites and LPM & remote restart capabilities within each site.
How does VMR for DR compare to a PowerHA Enterprise Edition Cluster?

[Diagram] PowerHA Enterprise Edition cluster: a primary VM in the primary data center and standby VMs in the secondary data center, each with its own OS volume; sync or async IP or block level replication of the data LUNs only; automated site fallover of the workload.

[Diagram] VM Recovery DR solution: a single managed VM per workload; sync or async block level replication of both the OS & data LUNs; user initiated planned or unplanned VM moves orchestrated by the KSYS LPAR (VMR-DR).
VM Recovery Manager DR: Solution Overview

[Diagram] A KSYS LPAR (VMR-DR) and HMCs at each site manage Host Groups A, B and C; each host pair spans the primary and secondary sites and carries DR managed and unmanaged VMs over dual VIOS, with a replicated disk group per host group.

Product use cases:
• Planned DR [ Site | Host Group level ]
• Unplanned DR [ Site | Host Group level ]
• DR testing [ Site | Host Group level ]
VM Recovery Manager DR: Site Level Planned Move

# ksysmgr move site from=Primary to=Secondary

[Diagram] A site level move relocates the managed VMs of every Host Group (A through D, host pairs 1 through n) from the primary site to the secondary site, failing the disk groups over between Storage Subsystem 1 and Storage Subsystem 2; unmanaged VMs are left in place.
VM Recovery Manager DR: Host Group Level Planned Move

# ksysmgr move host_group <HGA> from=Primary to=Secondary

[Diagram] Only the managed VMs of Host Group A (and its disk groups on the storage subsystems) move to the secondary site; the other Host Groups and all unmanaged VMs remain in place.
What about an Individual VM level move?

Unmanage the VMs that should stay behind, re-run discovery, then move the host group:

# ksysmgr unmanage vm name=<vm1,vm2,vmx>
# ksysmgr discover hg <hgA> verify=true
# ksysmgr move hg <HGA> from=Primary to=Secondary

[Diagram] The newly unmanaged VMs drop out of the disk group, the consistency group is rebuilt at the storage layer, and only the remaining managed VMs of Host Group A are moved.
VM Recovery Manager DR: Unplanned Outage Site Move

# ksysmgr move site from=Primary to=Secondary dr_type=unplanned
# ksysmgr cleanup site Primary

[Diagram] After an unplanned outage of the primary site, the KSYS (VMR-DR) restarts the managed VMs of every Host Group on the secondary hosts from the block level replicated copies; once the primary site is reachable again, the cleanup command removes the leftover VM definitions there.
VM Recovery Manager DR: V1.2 Feature Highlights (2017)

• Host Groups: additional granularity – the ability to move only a Host Group instead of an entire site
• Support for Flex Capacity: increase or decrease CPU & memory resources at the backup site, i.e. perform the DR operation with fewer resources at the remote site
• Priority Based Restarts: tier the VMs in your environment to restart them in prioritized order – High | Medium | Low
• DR Failover Rehearsal: perform non-disruptive testing by leveraging a FlashCopy at the DR site
• Network Isolation: enable VLAN based network customization across sites; VMs can be brought up on the backup site with a different VLAN and/or a different vSwitch configuration
• Support for Shared Storage configurations: VM restart management without mirror management, e.g. SVC stretched clusters or EMC VPLEX
[Diagram] DR Failover Rehearsal workflow (Primary to Secondary):

1. # ksysmgr discover site Primary dr_test=yes
2. # ksysmgr verify site Primary dr_test=yes
3. # ksysmgr move site from=Primary to=Secondary dr_test=yes
   ( ... test period ... )
4. # ksysmgr cleanup site Primary dr_test=yes

Six managed VMs (VM01-VM03 on RHEL 7.5, VM04-VM06 on SLES 12, running HANA 2.0 databases and app servers) keep replicating over native IP replication, while a test clone of each is started at the secondary site from the FlashCopy target LUNs of the consistency group, on an isolated network.
Wait ... You skipped past a few other cool features

Feature highlights:
• Priority Based Restarts: High, Medium and Low priorities control the stop & restart order of the VMs (in the diagram, Host Groups A and C are high priority, B is medium, D is low)
• Flex Capacity Support: lower or higher CPU & memory values at the backup site (shown as 50% CPU / 60% memory per host group); provision PEP activations
• Network Mapping Policies: VLAN & vSwitch maps for planned | unplanned moves; VLAN & vSwitch maps for DR rehearsal; sample scripts to prime the VMs on boot

Sample VLAN map:

| Site 1 | Site 2 | DR Test |
|--------|--------|---------|
| VLAN1  | VLAN1  | VLAN1T  |
| VLAN12 | VLAN22 | VLAN2T  |
| VLAN13 | VLAN23 | VLAN3T  |
| VLAN5  | VLAN5  | VLAN5T  |
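Flex Capacity as described above simply scales the resources a VM is restarted with on the backup site. A tiny sketch using the 50% CPU / 60% memory figures from the diagram (the helper name and integer rounding are our assumptions):

```python
# Flex Capacity sketch: compute the reduced CPU/memory a VM would be
# restarted with at the backup site. Percentages follow the example
# above (50% CPU, 60% memory); names are illustrative.

def flex_capacity(cpus, memory_gb, cpu_pct=50, mem_pct=60):
    """Return (cpus, memory_gb) scaled for the backup site (floored)."""
    return (cpus * cpu_pct // 100, memory_gb * mem_pct // 100)
```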
VMR for DR: Newest Feature Highlights [ Latest Version 1.3.0.2 ]

• HA or DR deployment support: the VMR-DR code base now includes the VMR-HA packages. Although this release only supports the KSYS as type "HA" or type "DR", be on the lookout for a new cluster type in 2019.
• Hitachi TrueCopy management: Hitachi synchronous replication support
• XIV / A9000 replication management: XIV / A9000 replication support
• Cluster the KSYS LPAR with PowerHA: ability to create a KSYS pair and have it fall over automatically with the PowerHA SystemMirror software
• Graphical User Interface: the latest release brings the dashboard view and some of the administrative controls into the UI server; the same UI server will manage both VMR HA & VMR DR clusters

[Diagram] KSYS HA pair: KSYS LPAR #1 and KSYS LPAR #2 (each OS: AIX 7.2.2, PHA: V7.2.3.1, VMR: V1.3.0.2; 1 CPU, 8 GB memory, 30 GB OS disk, 1 GB CAA disk) form a PowerHA cluster. The pair manages the managed VMs across the primary and secondary sites through the HMCs, dual VIOS per host and block level replication.
YouTube: Video Demonstrations & Education Series

YouTube channel: https://www.youtube.com/user/MrPowerHA/videos

Videos for VMR for HA, clustering the KSYS, and VMR for DR cover:
• Concepts
• Requirements
• Deployment
• Demonstration

Introducing VMR DR into an existing environment
Our Environment before VMR DR: User Defined Consistency Groups

[Diagram] Two sites (a Power 740 and a Power 780, dual VIOS each) connected through SAN32B-E4 switches, with native IP replication between the storage systems. Each VM has its own hand-built consistency group, and inactive profile definitions are maintained at the target site:

• VM15 - Consistency Group 1: VM15_L1 / VM15_L1_AUX, VM15_L2 / VM15_L2_AUX, VM15_L3 / VM15_L3_AUX, VM15_L4 / VM15_L4_AUX (inactive target profile VM15_AUX)
• VM17 - Consistency Group 2: VM17_L1 / VM17_L1_AUX, VM17_L2 / VM17_L2_AUX, VM17_L3 / VM17_L3_AUX, VM17_L4 / VM17_L4_AUX (inactive target profile VM17_AUX)
Introducing VMR DR into Environment: VMR Defined Consistency Group

[Diagram] With the KSYS (VMR-DR) in control, the per-VM consistency groups are replaced by a single VMRDG_KSYS consistency group spanning all of the managed VMs' LUN pairs (VM15_L1-L4 and VM17_L1-L4 with their _AUX targets). VM15, VM17, VM30, VM32 and VM35 are managed; VM24, VM26 and VM28 are unmanaged. The duplicate target profiles (VM15_AUX, VM17_AUX) are deleted.
VM Recovery Manager DR: LUN Provisioning Steps

User actions:
• Define the LUNs for the source VMs
• Define target LUNs & establish replication (sync or async)
• Match zoning between the 2 sites

KSYS:
• The discovery process will create the consistency group encompassing all VMs
• Default behavior is to manage all VMs on that host (in this lab, all 6 VMs)

(View of the GDR consistency group.)
VM Recovery Manager DR: Lab Environment High Level Zoning Example

[Diagram] Site A and Site B each have their own zones on SAN32B-E4 switches to their local storage controllers (Storage Controllers A and B). VM15 and VM17 are zoned through ports WWPN-P1A..P8A at Site A and WWPN-P1B..P8B at Site B. Each VMR managed VM (VM01, VM02) has four virtual FC adapters (vfcs0-vfcs3), each with a primary and a secondary WWPN pair. The VM vFC adapter WWPNs are the same across the two sites (manual input on the target side).
DR Rehearsal Feature: DR Testing without impacting Production

1. # ksysmgr discover site Primary dr_test=yes
2. # ksysmgr verify site Primary dr_test=yes
3. # ksysmgr move site from=Primary to=Secondary dr_test=yes
   ( ... test period ... )

[Diagram] While native IP replication to production continues, test clones of VM01-VM06 (RHEL 7.5 / SLES 12, HANA 2.0 databases and app servers) are brought up at the secondary site from FlashCopy target LUNs of the consistency group, on an isolated network.
DR Rehearsal Feature: Additional Flashcopy LUN Configuration

[Diagram] Provision an optional 3rd set of LUNs on the target storage subsystem to enable the creation of clone LPARs at the remote site when a DR test is initiated. For the VMRDG_KSYS consistency group this adds, next to each replicated pair (e.g. VM15_L1 / VM15_L1_AUX), a FlashCopy target (VM15_L1_DR_TEST through VM15_L4_DR_TEST and VM17_L1_DR_TEST through VM17_L4_DR_TEST), backing the clone LPARs VM15_DRTest and VM17_DRTest.

* Both the target LUNs and the DR test FlashCopy LUNs live at the secondary site alongside the VMRM consistency group.
1) Creating Snapshot Copy:
# ssh admin@<hostname/IP> svctask mkfcmap -cleanrate 0 -copyrate 0 -source <source_disk_name> -target <target_disk_name>
2) Creating Clone Copy:
# ssh admin@<hostname/IP> svctask mkfcmap -source <source_disk_name> -target <target_disk_name> -copyrate 100
Establish DR_Test LUN Relationships from KSYS:
ssh admin@9.30.175.200 svctask mkfcmap -source S822_VM01_Boot_Aux -target S822_VM01_Boot_DR_Test -copyrate 100
ssh admin@9.30.175.200 svctask mkfcmap -source S822_VM02_Boot_Aux -target S822_VM02_Boot_DR_Test -copyrate 100
ssh admin@9.30.175.200 svctask mkfcmap -source S822_VM03_Boot_Aux -target S822_VM03_Boot_DR_Test -copyrate 100
ssh admin@9.30.175.200 svctask mkfcmap -source S822_VM04_Boot_Aux -target S822_VM04_Boot_DR_Test -copyrate 100
ssh admin@9.30.175.200 svctask mkfcmap -source S822_VM05_Boot_Aux -target S822_VM05_Boot_DR_Test -copyrate 100
ssh admin@9.30.175.200 svctask mkfcmap -source S822_VM06_Boot_Aux -target S822_VM06_Boot_DR_Test -copyrate 100
ssh admin@9.30.175.200 svctask mkfcmap -source S822_VM01_Work_Aux -target S822_VM01_Work_DR_Test -copyrate 100
ssh admin@9.30.175.200 svctask mkfcmap -source S822_VM02_Work_Aux -target S822_VM02_Work_DR_Test -copyrate 100
ssh admin@9.30.175.200 svctask mkfcmap -source S822_VM04_Work_Aux -target S822_VM04_Work_DR_Test -copyrate 100
ssh admin@9.30.175.200 svctask mkfcmap -source S822_VM05_Work_Aux -target S822_VM05_Work_DR_Test -copyrate 100
ssh admin@9.30.175.200 svctask mkfcmap -source S822_VM102_Shared0_Aux -target S822_VM102_Shared0_DR_Test -copyrate 100
ssh admin@9.30.175.200 svctask mkfcmap -source S822_VM0102_Shared0_Aux -target S822_VM0102_Shared0_DR_Test -copyrate 100
ssh admin@9.30.175.200 svctask mkfcmap -source S822_VM0102_Shared1_Aux -target S822_VM0102_Shared1_DR_Test -copyrate 100
ssh admin@9.30.175.200 svctask mkfcmap -source S822_VM0405_Shared0_Aux -target S822_VM0405_Shared0_DR_Test -copyrate 100
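The repetitive command list above can be generated rather than typed by hand. A minimal sketch (not product code) that follows the `<name>_Aux` ➔ `<name>_DR_Test` naming convention and the SVC address shown in the examples:

```shell
SVC_HOST=9.30.175.200
SVC_USER=admin

# gen_mkfcmap: print one mkfcmap clone command for a base LUN name, using the
# <name>_Aux -> <name>_DR_Test convention from the examples above
gen_mkfcmap() {
    printf 'ssh %s@%s svctask mkfcmap -source %s_Aux -target %s_DR_Test -copyrate 100\n' \
        "$SVC_USER" "$SVC_HOST" "$1" "$1"
}

# emit the full command list for a set of boot and work LUNs
for lun in S822_VM01_Boot S822_VM02_Boot S822_VM01_Work; do
    gen_mkfcmap "$lun"
done
```

Piping the output through `sh` (or into a change record) would then establish all of the DR_Test FlashCopy relationships in one pass.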
VM Recovery Manager DR: Configuring the Network Mapping Policy
VLAN Switch & VLAN Matrix
System-level network mapping policy (variation of the command done at the KSYS management VM level):
# ksysmgr modify system network_mapping=enable network=vlanmap sites=siteA,siteB siteA=VLAN1,VLAN12,VLAN13,VLAN5 siteB=VLAN11,VLAN22,VLAN23,VLAN25
Site-level network mapping policy (variation of the command done at the site definition level):
# ksysmgr modify site site1 network=vlanmap backupsite=site2 site1=1,2,3 site2=4,5,6 dr_test=yes
Host-Group-level network policy (variation of the command done at the host group definition level):
# ksysmgr modify host_group HG1 options network=vswitchmap sites=site1,site2 site1=vswitch1,vswitch2 site2=vswitch2,vswitch1
Host-level network policy (variation of the command done at the individual host level):
# ksysmgr modify host host_1_2,host_2_2 network=vlanmap sites=Site1,Site2 site1=VLAN1,VLAN12,VLAN13,VLAN5 site2=VLAN1,VLAN22,VLAN23,VLAN5
Sample matrix tables:

VLAN Policy:
Active Site | Backup Site | DR Test
VLAN 1      | VLAN 11     | VLAN 21
VLAN 2      | VLAN 22     | VLAN 32
VLAN 3      | VLAN 23     | VLAN 33

VSWITCH Policy:
Active Site | Backup Site | DR Test
vswitch1    | vswitch10   | vswitch11
vswitch2    | vswitch20   | vswitch22
vswitch3    | vswitch30   | vswitch33
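The matrix reads as a simple lookup: for each active-site VLAN, one target VLAN for a real failover and another for a DR test. A sketch of that lookup (an illustration of the sample matrix, not product code):

```shell
# map_vlan: resolve the backup-site or DR-test VLAN for an active-site VLAN,
# per the sample VLAN Policy matrix above
map_vlan() {
    # $1 = active-site VLAN number, $2 = "backup" or "drtest"
    case "$1:$2" in
        1:backup) echo 11 ;;  1:drtest) echo 21 ;;
        2:backup) echo 22 ;;  2:drtest) echo 32 ;;
        3:backup) echo 23 ;;  3:drtest) echo 33 ;;
        *) echo unmapped ;;
    esac
}
map_vlan 2 drtest
```

Keeping the DR-test column distinct from the backup column is what lets a failover rehearsal come up on isolated VLANs without colliding with production traffic.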
VM Recovery Manager DR: VM Acquisition / Release Priority Modification
[ Diagram: Server A and Server B, each hosting managed VMs tagged with High, Medium, or Low priority ]
# ksysmgr modify VM VM01_atssg190,VM02_atssg191 host=8284_22A_SN218ABCV priority=high
For VM VM01_atssg190 attribute(s) 'host', 'priority' are successfully modified.
For VM VM02_atssg191 attribute(s) 'host', 'priority' are successfully modified.
Name: VM01_atssg190
UUID: 199AEE3A-702A-4AC5-AC41-A50BD1446C17
Host: 8284_22A_SN218ABCV
State: READY_TO_MOVE
Priority: High
skip_resource_check: No
skip_power_on: No
memory_capacity: none
cpu_capacity: none
Dr Test State: INIT
Name: VM02_atssg191
UUID: 33D4C645-BB70-488E-A516-DE66B6394130
Host: 8284_22A_SN218ABCV
State: READY_TO_MOVE
Priority: High
skip_resource_check: No
skip_power_on: No
memory_capacity: none
cpu_capacity: none
Dr Test State: INIT
# ksysmgr modify VM VM03_atssg192,VM04_atssg193,VM05_atssg194 host=8284_22A_SN218ABCV priority=low
For VM VM03_atssg192 attribute(s) 'host', 'priority' are successfully modified.
For VM VM04_atssg193 attribute(s) 'host', 'priority' are successfully modified.
For VM VM05_atssg194 attribute(s) 'host', 'priority' are successfully modified.
Name: VM03_atssg192
UUID: 12CD1A9E-D242-4758-9C33-5DD4C49E7293
Host: 8284_22A_SN218ABCV
State: READY_TO_MOVE
Priority: Low
skip_resource_check: No
skip_power_on: No
memory_capacity: none
cpu_capacity: none
Dr Test State: INIT
Name: VM04_atssg193
UUID: 79282FC8-F4D2-4162-A53C-74E3A6CE6238
Host: 8284_22A_SN218ABCV
State: READY_TO_MOVE
Priority: Low
skip_resource_check: No
skip_power_on: No
memory_capacity: none
cpu_capacity: none
Dr Test State: INIT
Name: VM05_atssg194
UUID: 54B3E149-20D0-4270-8400-B1B67B70B6A4
Host: 8284_22A_SN218ABCV
State: READY_TO_MOVE
Priority: Low
skip_resource_check: No
skip_power_on: No
memory_capacity: none
cpu_capacity: none
Dr Test State: INIT
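The effect of these settings can be sketched as a simple ordering, assuming (as the slide title implies, but the deck does not spell out) that higher-priority VMs are acquired at the target site before lower-priority ones:

```shell
# order_by_priority: sort "vm_name priority" lines so High moves before
# Medium, and Medium before Low (illustration only, not KSYS internals)
order_by_priority() {
    awk '{ rank = ($2=="High") ? 0 : (($2=="Medium") ? 1 : 2); print rank, $0 }' |
        sort -n | cut -d' ' -f2-
}

printf '%s\n' "VM03_atssg192 Low" "VM01_atssg190 High" "VM04_atssg193 Medium" |
    order_by_priority
```

With the priorities set as in the examples above, VM01/VM02 would be started at the DR site ahead of VM03-VM05.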
VM Recovery Manager DR: Managing & Unmanaging VMs
Managed / Unmanaged VMs recognized by the KSYS cluster
[ Diagram: Server A and Server B with VIOS pairs, each hosting a mix of managed and unmanaged VMs ]
Display VMs:
# ksysmgr query vm
Unmanaged VMs:
test_atssg195
Managed VMs:
VM05_atssg194
VM04_atssg193
VM03_atssg192
VM02_atssg191
VM01_atssg190
All VMs:
Name: test_atssg195
UUID: 1EBC0937-9329-4AE2-9C2B-7DCAE8E45608
Host: 8284_22A_SN218ABCV
State: UNMANAGED
Priority: Medium
skip_resource_check: No
skip_power_on: No
memory_capacity: none
cpu_capacity: none
Dr Test State: INIT
Name: VM05_atssg194
UUID: 54B3E149-20D0-4270-8400-B1B67B70B6A4
Host: 8284_22A_SN218ABCV
State: READY_TO_MOVE
Priority: Low
skip_resource_check: No
skip_power_on: No
memory_capacity: none
cpu_capacity: none
Dr Test State: INIT
This is a detailed listing of the VMs that the KSYS management knows about across all of the defined servers in the environment.
Display Managed VMs (short listing):
# ksysmgr query vm state=manage | grep -p Managed
# ksysmgr query vm state=unmanage | grep -p Unmanaged
These display only the list of VMs the KSYS node is actually controlling, or not controlling, when a move is initiated.
Manage / Unmanage VMs that you do not want KSYS to control:
# ksysmgr manage vm name=VM,VM host=<host>
# ksysmgr unmanage vm name=VM,VM host=<host>
This is the syntax to add / remove VMs from KSYS control (the default behavior is for the discovery to try to manage all VMs).
VM Recovery Manager DR: Test VM Management Scenario from Videos
[ Diagram: Primary and Secondary sites with VIO pairs and block level replication. The original consistency group manages all 6 VMs: VM01 (RHEL 7.5, HANA 2.0 DB), VM02 (RHEL 7.5, HANA 2.0 DB), VM03 (RHEL 7.5, App Server), VM04 (SLES 12, HANA 2.0 DB), VM05 (SLES 12, HANA 2.0 DB), VM06 (SLES 12, App Server). ]
1 ] # ksysmgr unmanage vm name=vm1,vm2,vm3 host=<name>
2 ] # ksysmgr discover site <Site> verify=true
After the discovery, VM01-VM03 become unmanaged and the modified consistency group contains only VM04-VM06: managing only 3 VMs instead of all 6.
VM Recovery Manager DR: CPU & Memory Flexible Capacity
Flexible Capacity
VM Level Values (an XML file can be used to set the desired values for a number of VMs at the same time):
# ksysmgr modify vm file=ListOf3VMs.xml memory_capacity=60 cpu_capacity=50 skip_resource_check=yes
Host Level Values (alterations may be specified for all of the VMs on a specific host):
# ksysmgr modify host Site1_Host1,Site2_Host1 memory_capacity=50 cpu_capacity=50 skip_power_on=yes skip_resource_check=yes
Host Group Level Values (alterations may be specified at the Host Group level to encompass all of the VMs in that group):
# ksysmgr modify host_group Site1_HG1,Site1_HG2 memory_capacity=50 cpu_capacity=50 skip_power_on=yes skip_resource_check=yes
Power Enterprise Pool Manipulation (sample command to manipulate CPU & memory values in a scenario with PEP at the remote site):
# ksysrppmgr -o execute -h :hmc_2_1:hmcuser -m host_2_2:set:n:<memory_amount>:<no_of_processors>
Reference: https://www.ibm.com/support/knowledgecenter/en/SS3RG3_1.2.0/com.ibm.gdr/admin_cod_EP.htm
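Assuming `memory_capacity` and `cpu_capacity` act as percentages of the source VM's allocation (the deck uses values like 50 and 60 but does not define the units), the flexed sizing at the DR site works out as simple integer math:

```shell
# flex: apply a capacity percentage to a source entitlement
# (assumption: capacity values are percentages; integer math, rounds down)
flex() {
    # $1 = source value, $2 = capacity percentage
    echo $(( $1 * $2 / 100 ))
}

flex 64 60   # a 64 GB VM restarted with memory_capacity=60
flex 10 50   # 10 processors with cpu_capacity=50
```

This is why flexible capacity is useful for DR targets sized smaller than production: the same VMs can be recreated with reduced CPU and memory footprints.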
Custom Script Management [ Introduce Pre or Post Event Custom Logic ]
— To register scripts for specific KSYS operations such as discovery and verification:
# ksysmgr add script entity=site | host_group pre_offline | post_online | pre_verify | post_verify=<script_file_path>
— Receive Separate Notifications on specific Events:
# ksysmgr add notify script=<full_path_script> events=event_name
* You can add a maximum of 10 event notification scripts to the KSYS configuration settings
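A notification script registered with `ksysmgr add notify` could be as small as the sketch below. The argument convention (`$1` = event name) and the event name used are assumptions for illustration; only the registration syntax above comes from the deck.

```shell
# Hypothetical event-notification handler to register via
#   ksysmgr add notify script=/usr/local/bin/notify.sh events=<event_name>
# notify(): format a one-line message for the received event; a real script
# would mail an operator or post to a monitoring endpoint instead of printing
notify() {
    echo "KSYS event: ${1:-unknown}"
}

notify SITE_MOVE
```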
2019 IBM Systems Technical University
Sample Scripts Provided to Prime VMs on Remote Boot
data_collection.ksh [ /usr/local/bin ] — run this script periodically on the source VMs. It collects:
— System host name
— Network adapter information
— Host bus adapter (HBA) configuration
— Domain Name System (DNS) server and domain
— LPAR attributes
— Volume group attributes and hard disk attributes
— AIX kernel (sys0) configuration
failover_config.cfg [ /usr/local/dr/data ] — you must manually edit this file and fill in the appropriate information about the AIX operating system configuration in the backup LPAR:
— IP address of LPAR at the source site
— IP address of LPAR at the backup site
— Network netmask to use at the backup site
— DNS server that must be used at the backup site
— Network domain name to use at the backup site
— Default gateway IP address to use at the backup site
setup_dr.ksh [ /usr/local/bin ] — run this script on the backup LPAR during a disaster recovery event. It reads the contents of the failover_config.cfg file and:
— Reconfigures the HBA adapters of the backup LPAR to be the same as the source LPAR
— Reconfigures the Ethernet adapter of the backup LPAR by reading the contents of the failover_config.cfg configuration file and sets the host name, IP address, and the base network of the backup LPAR
— Reconfigures any additional Ethernet adapters on the backup LPAR by using the appropriate IP addresses
— Imports any volume groups from the source LPAR to the backup LPAR
Check out: /opt/IBM/ksys/samples/site_specific_nw/AIX/README
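For illustration only, a failover_config.cfg might carry entries along these lines. The key names and addresses here are hypothetical; the actual format is defined by the sample scripts (see the README path noted above):

```shell
# Hypothetical failover_config.cfg contents -- key names are illustrative,
# not the documented format; consult the shipped README for the real keys
SOURCE_IP=192.168.10.21        # IP address of LPAR at the source site
BACKUP_IP=192.168.20.21        # IP address of LPAR at the backup site
BACKUP_NETMASK=255.255.255.0   # network netmask to use at the backup site
BACKUP_DNS=192.168.20.2        # DNS server to use at the backup site
BACKUP_DOMAIN=dr.example.com   # network domain name at the backup site
BACKUP_GATEWAY=192.168.20.1    # default gateway at the backup site
```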
VM Recovery Manager DR: Deployment Requirements ➔ VMR V1.3.0.2
HMC (minimum levels):
• V8 R8.7.0 (2017) + SP1
• V9 R9.1.0 (2018) +
KSYS VM (VMR-DR, usually hosted at the remote location):
• AIX 7.2 TL1 SP1+
• 1 Proc | 8GB of memory
• Connectivity to HMCs & Storage at both sites
• Provides CLI & UI Server Management
Power Hardware Models:
• P7+
• POWER8
• POWER9
VM Operating Systems:
• AIX V6 or later
• IBM i V7.2 or later
• RedHat (LE/BE) 7.2 or later
• SUSE (LE/BE) 12.1 or later
• Ubuntu 16.04 or later
PowerVM VIOS:
• VIOS 2.2.6 (2017) + Fixes
• VIOS 3.1.0 (2018) +
Block Level Replication:
• EMC SRDF: VMAX Family, Solutions Enabler SYMAPI V8.1.0.0
• DS8K PPRC: DS8700 or later DS8000 (DSCLI 7.7.51.48 or later)
• SVC/Storwize Metro or Global: SVC 6.1.0 (or later) or Storwize (7.1.0 or later)
• Hitachi VSP, G1000, G400: Universal Replicator (CCI version 01-39-03 or later)
• XIV / A9000: XIV Storage System command-line interface (XCLI) Version 4.8.0.6 or later
* PowerVM may be Standard or Enterprise Edition when using a VMR DR setup
VM Recovery Manager DR: Licensing & Architectural Design Variations
How to license VM Recovery Manager DR [ 5765-DRG ]
• VMR is installed on an AIX partition that provides the KSYS cluster functionality. The KSYS would be configured as cluster type "DR".
• KSYS is the orchestrator that enables planned, unplanned & DR Test operations at either the Site level or the Host Group level.
[ Diagram: Host Group X containing managed AIX, Linux & IBM i VMs alongside several unmanaged VMs ]
Scenario #1: (6) managed AIX & Linux VMs consuming 10 processors
• On Scale-out server: ($1020 x 10 = $10,200)
• On Scale-up server: ($1575 x 10 = $15,750)
Scenario #2: manage (1) additional IBM i VM consuming 2 processors
• On Scale-out server: (+ $1020 x 2 = $2,040)
• On Scale-up server: (+ $1575 x 2 = $3,150)
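The per-processor pricing in both scenarios reduces to one multiplication, using the per-processor figures from the slide ($1020 scale-out, $1575 scale-up):

```shell
# license_cost: price per managed processor x number of managed processors
license_cost() {
    echo $(( $1 * $2 ))
}

license_cost 1020 10   # Scenario #1 on a scale-out server
license_cost 1575 2    # Scenario #2 increment on a scale-up server
```

Note that only the processors consumed by managed VMs count; the unmanaged VMs in the diagram add nothing to the license cost.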
VM Recovery Manager DR: Active | Active Configurations
[ Diagram: KSYS #1 (VMR-DR) at the secondary site manages host groups HGA_VMR1 and HGB_VMR1. DR-managed VMs on primary-site host pairs replicate through VIO pairs and disk groups to the secondary site; unmanaged VMs are left out. ]
VM Recovery Manager DR: Active | Active Configurations
[ Diagram: KSYS #2 (VMR-DR) at the primary site manages host groups HGA_VMR2 and HGB_VMR2 in the reverse direction, protecting secondary-site DR-managed VMs on their host pairs back toward the primary site. ]
VM Recovery Manager DR: Active | Active Configurations
[ Diagram: Both KSYS #1 and KSYS #2 (VMR-DR) active at once. Each site runs DR-managed workloads on host pairs protected toward the opposite site (HGA_VMR1 / HGB_VMR1 in one direction, HGA_VMR2 / HGB_VMR2 in the other), with separate disk groups per direction and some VMs left unmanaged. ]
VM Recovery Manager
Upcoming Features in 2019
VM Recovery Manager Offering – Product Roadmap
Timeline: Jun 2017 · Oct 2017 · Dec 2017 · Sept 2018 · Nov 2018 · Dec 2018 · June 2019 (SP2) · Dec 2019
HA:
• 1.3 Product Beta
• VMR HA Release 1.3
  Automated End-to-End management: Priority Based Failovers; Host, VM, & App Level HA; Host Blacklisting; Application start sequence mgmt
  Advanced Policies: Graphical & CLI management; Flex Capacity Management; LPM Management; Collocation / Anti-collocation
  HA agents for key middleware: DB2, Oracle, SAP HANA
  Support for P7+, P8, P9
• VMR HA Release 1.4 (In-plan, Dec 2019): Proactive HA Management; Graphical Updates; Additional VM Agents; PowerVC Integration; and others…
DR:
• GDR Release 1.1.0.1: Expanded storage support (SVC/Storwize Sync & Async, DS8K Async, EMC Sync); IBM i Guest VM; Boot list mgmt
• 1.2 Beta: Host Group DR; Failover Rehearsal
• GDR Release 1.2: Advanced DR policies (Host Groups); DR Failover Rehearsal; Hitachi HUR replication support; Flex Capacity Management; VLANs, vSwitches per Site
• VMR DR Release 1.3: HA or DR deployments support; Hitachi TrueCopy management; Failover Rehearsal support for Hitachi storage
• VMR DR Release 1.4 (In-plan, Dec 2019): Many to One Configurations; Type "HADR" KSYS; GUI Administrative Controls; VM Level Moves
VM Recovery Manager: Today it's "HA" or "DR" for your VMs
To accomplish both today you would need (2) separate, independent KSYS clusters:
(1) Type=DR ➔ Would manage VMs that had block level replication to the remote site
(1) Type=HA ➔ Would manage VMs that could only move locally
[ Diagram: At the primary site, a VMR-HA KSYS manages Host Groups A, B & C for local restarts (some VMs unmanaged). A separate VMR-DR KSYS manages DR host pairs, with disk groups replicating the DR-managed VMs to the secondary site. ]
VM Recovery Manager: V1.3.0 Product Packaging
VMR HA : PID 5765-VRM
VMR HA Code:
ksys.ha.license           1.3.0.0  COMMITTED  Base Server Runtime
ksys.hautils.rte          1.3.0.0  COMMITTED  Base Server Runtime
ksys.main.cmds            1.3.0.0  COMMITTED  Base Server Runtime
ksys.main.msg.en_US.cmds  1.3.0.0  COMMITTED  Base Server Runtime
ksys.main.rte             1.3.0.0  COMMITTED  Base Server Runtime
ksys.ui.agent             1.3.0.0  COMMITTED  VMRestart User Interface
ksys.ui.common            1.3.0.0  COMMITTED  VMRestart User Interface
ksys.ui.server            1.3.0.0  COMMITTED  VMRestart User Interface
Prompted to run: /opt/IBM/ksys/ui/server/dist/server/bin/vmruiinst.ksh
• Separate packages are available for the VM and Application monitoring for the AIX, RHEL & SLES distributions
• IBM i partitions only support Host level monitoring in the V1.3 release
VMR DR : PID 5765-DRG
VMR DR (GDR) Code:
ksys.license              1.3.0.1  COMMITTED  Base Server Runtime
ksys.main.cmds            1.3.0.1  COMMITTED  Base Server Runtime
ksys.main.msg.en_US.cmds  1.3.0.1  COMMITTED  Base Server Runtime
ksys.main.rte             1.3.0.1  COMMITTED  Base Server Runtime
ksys.mirror.ds8k.rte      1.3.0.1  COMMITTED  Base Server Runtime
ksys.mirror.emc.rte       1.3.0.1  COMMITTED  Base Server Runtime
ksys.mirror.hitachi.rte   1.3.0.1  COMMITTED  Base Server Runtime
ksys.mirror.svc.rte       1.3.0.1  COMMITTED  Base Server Runtime
• The code base includes the various storage agent packages that enable the KSYS to handshake with the block level replication and orchestrate DR moves
• Some of the code is shared with VMR-HA, which is why the DR solution will enable you to define an "HA" cluster type
UI Server Only Deployment:
ksys.ui.common (GUI common)
ksys.ui.server (GUI server)
ksys.ui.agent (GUI agent)
• UI Server code is optional and may be installed on a separate standalone AIX VM
• Recommended starting points are the same as the KSYS baseline: 1 CPU & 8GB of memory
VM Recovery Manager DR: Introducing 3rd Cluster Type in V1.4.0 (Q4 2019)
VMR DR License: will allow 3 cluster types:
(1) HA   – local availability / shared storage
(2) DR   – VM mobility between remote sites / block level replication
(3) HADR – VMs can be restarted automatically locally and also replicated to a DR location for planned or unplanned DR events
[ Diagram: A VMR-HADR KSYS provides local restart of managed VMs at the primary site plus DR VM mobility to the secondary site: combined HA & DR functionality. ]
VM Recovery Manager DR: Many to One Configurations
[ Diagram: Two primary-site servers, each with managed and unmanaged VMs behind VIO pairs, fail over to a single secondary-site server in a many-to-one DR setup orchestrated by a VMR-HADR KSYS. ]
VM Recovery Manager DR: Individual VM Level Granularity
[ Diagram: A VMR-HADR KSYS moves individual managed VMs between primary and secondary site hosts: VM level movement rather than whole host groups. ]
Session Summary
VM Recovery Manager for HA [ 5765-VRM ]: Local Datacenter Solution for VM Restarts
Auto Restart of VMs locally, or Advisory Mode, on:
• Host Level Failure
• VM Level Failure
• Application Level Failure
VM Recovery Manager for DR [ 5765-DRG ]: DR Orchestration to relocate VMs between remote sites
User Initiated Actions:
• Planned DR Movement [ Site | Host Group ]
• Unplanned DR Movement [ Site | Host Group ]
• DR Testing Capability [ Site | Host Group ]
Additional Capabilities (HA / DR):
• Priority Based Restarts (HA & DR)
• Flexible Capacity: Recreate VMs with Lower | Higher CPU & Memory values (HA & DR)
• LPM Management: Host Level or VM Level (HA only; N/A for DR)
• Remote Restart Feature: Automated or Manual (HA only; N/A for DR)
• Collocation | Anti-Collocation Policies (HA only; N/A for DR)
• Application Agents: HANA, Oracle, DB2, Postgres & Custom App framework (HA only; N/A for DR)
• CLI or Graphical User Interface (HA & DR; GUI to be combined further in an upcoming release)
VM Recovery Manager Intellectual Capital & References
• 2017 Redbook [ SG24-8382 ] ➔ based on GDR V1.1
• New 2019 Redbook [ SG24-8426 ] ➔ based on VMR DR V1.3.0
• Product Documentation
• IBM Knowledge Center
• LinkedIn Group
• gdr@us.ibm.com
Thank you for your time!