LPARs revisited
Understanding and controlling LPARs
Jan Tits - STG
jantits@be.ibm.com
Thanks to Harv Emery and Jerry Moody
© 2009 IBM Corporation
Agenda
What are logical partitions
  Divides physical resources of a single zSeries machine amongst multiple logical
machine images (partitions)
  Each partition looks and operates like its own physical machine
–  Independent of and without knowledge of other partitions
–  Potentially different configurations
•  Processors, Storage, I/O, Local Time
–  Its own operating system
What are logical partitions
  Usually used for hardware consolidation and workload balancing
  Partitions are managed by the PR/SM hypervisor, which runs natively on the machine
  Work is dispatched on a logical processor basis, not on the partition as a whole
–  Tasks from different partitions can be operating in parallel on different physical
processors
  At least one partition is required
What are logical partitions
  Up to 60 partitions can be defined/active at any given time
–  Defined via IOCDS
•  RESOURCE PARTITION=(CSS(0),(MICKEY,1),(MINNIE,2),(GOOFY,3),(*,4),(*,5))
•  Reserved partition (*,id) no longer required for dynamic definition
–  Activation profile contains configuration details
•  Number/type of processors, amount of storage, etc
–  Activation done manually via HMC or automatically at POR
What are logical partitions
  Amount of physical processor time given to a partition is based on the partition's
logical processor Weight relative to the rest of the Active partitions.
–  If MICKEY has weight 200, MINNIE has weight 100
•  MICKEY gets up to 200/300 processing power of machine
•  MINNIE gets up to 100/300 processing power of machine
–  If GOOFY is Activated with weight 100
•  MICKEY gets up to 200/400 processing power of machine
•  MINNIE gets up to 100/400 processing power of machine
•  GOOFY gets up to 100/400 processing power of machine
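To make the weight arithmetic concrete, here is a small Python sketch (illustrative only, not IBM code) that derives each active partition's share of the shared physical capacity from its weight; the names and weights mirror the MICKEY/MINNIE/GOOFY example above.

    def weight_shares(weights):
        """Each active partition's fraction of shared capacity = its weight / sum of active weights."""
        total = sum(weights.values())
        return {lpar: w / total for lpar, w in weights.items()}

    # Activating GOOFY dilutes the other partitions' shares:
    print(weight_shares({"MICKEY": 200, "MINNIE": 100}))                 # 2/3, 1/3
    print(weight_shares({"MICKEY": 200, "MINNIE": 100, "GOOFY": 100}))   # 1/2, 1/4, 1/4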
Configuration Options
  Partition Types
–  ESA/390
–  ESA/390 TPF
–  Coupling Facility
–  Linux
–  VM
System z10 z/VM-mode partitions – z/VM 5.4
  New LPAR type for IBM System z10: z/VM-mode
–  Allows z/VM V5.4 users to configure all CPU types in a System z10 LPAR
  Offers added flexibility for hosting mainframe workloads
–  Add IFLs to an existing standard-engine z/VM LPAR to host Linux workloads
–  Add CPs to an existing IFL z/VM LPAR to host z/OS, z/VSE, or traditional CMS
workloads
–  Add zAAPs and zIIPs to host eligible z/OS specialty-engine workloads
–  Test integrated Linux and z/OS solutions in the same LPAR
  No change to software licensing
–  Software continues to be licensed according to CPU type
[Figure: an IBM System z10 with a z/OS production side (z/OS, CFCC and z/VM LPARs on CPs, zAAPs, zIIPs and ICFs), a Linux production LPAR on IFLs, and a single z/VM-mode LPAR for dev/test and optional failover hosting z/OS, CFCC, CMS and Linux guests]
Configuration Options
  Processor Types
–  General Processors
–  zAAP – Application Assist Processors (java)
–  zIIP – Integrated Information Processors (databases)
–  IFL – Integrated Facility for Linux
–  ICF – Internal Coupling Facility
  Each processor type has its own weight per partition
Configuration Options
  Physical processors are either...
–  Shared amongst all logical processors in any partition
•  Best way to maximize machine utilization
•  Excess processor resource from one partition can be used by another
Can be limited via per partition capping
–  Dedicated to a specific logical processor in a specific partition
•  Provides least LPAR management time
•  Does not allow excess processor time to be used by other logical
processors
PR/SM™ Hypervisor™ PU Dispatching “Pools”
  PU Pool – Physical PUs to dispatch to online logical PUs
  z10 or z9 EC with 10 CPs, 1 ICF, 2 IFLs, 1 zIIP and 3 zAAPs
–  CP pool contains 10 CP engines
–  ICF pool contains 1 ICF
–  IFL pool contains 2 IFLs
–  zAAP pool contains 3 zAAPs
–  zIIP pool contains 1 zIIP
–  z/OS LPAR can have different CP, zAAP and zIIP weights
–  z/VM-mode LPAR (z10 only) can have different CP, zAAP, zIIP, IFL and ICF weights
  z990 with 11 CPs, 1 ICF, 2 IFLs, and 3 zAAPs
–  CP pool contains 11 CP engines
–  Specialty pool contains 6 engines – ICFs, IFLs, zAAPs
–  z/OS LPAR zAAP weight is set equal to the initial CP weight
PR/SM Hypervisor PU Pool Rules
  Logical PUs dispatched from supporting pool only
–  Logical CPs from CP pool only, for example
  Pool “width”
–  Width equals the number of physical PUs in the pool
–  Limits an LPAR’s maximum number of shared logical PUs brought online
  PUs placed in pools by
–  Activate (POR)
–  Concurrent Upgrade – OnDemand or Concurrent MES
–  Dedicated LPAR deactivation
–  Dedicated LPAR configure logical PU OFF
  PUs removed from pools by
–  Concurrent Downgrade - On/Off CoD, CBU, CPE, PU Conversion MES
–  Dedicated LPAR activation (“width” permitting)
–  Dedicated LPAR configure logical PU ON (“width” permitting)
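As a rough illustration of the pool rules above (a sketch with made-up helper names, not PR/SM logic), the snippet below models a pool's "width" as the number of physical PUs it contains and rejects bringing more shared logical PUs of that type online in an LPAR than the pool is wide.

    # Physical PU pools for the z10/z9 example on the previous page.
    pools = {"CP": 10, "ICF": 1, "IFL": 2, "zAAP": 3, "zIIP": 1}

    def can_configure_online(pool_type, online_shared_logicals, requested):
        """'Width' check: an LPAR's online shared logical PUs of a type
        may not exceed the number of physical PUs in the supporting pool."""
        return online_shared_logicals + requested <= pools[pool_type]

    print(can_configure_online("zAAP", 2, 1))   # True  - the zAAP pool is 3 wide
    print(can_configure_online("zIIP", 1, 1))   # False - the zIIP pool is only 1 wide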
System z10 Coupling Facility Processors
  SoD: To be removed on System z “future”
–  Dynamic ICF Expansion
•  Dedicated ICF and Shared CP in the same CF partition
–  Dynamic ICF Expansion Across ICFs
•  Dedicated ICF and Shared ICF in the same CF partition
  CF partition processor options
–  Recommended for production
•  Dedicated ICFs (or dedicated CPs – Expensive!)
–  Not recommended (Exception: Backup or function test CF only)
•  Shared ICFs or shared CPs
•  Dynamic ICF Expansion
•  Read and follow “PR/SM Planning”, SB10-7153, recommendations on
weights, Dynamic Dispatch, and capping VERY carefully to avoid
performance problems, wasted resource, link checkstops, etc.
“Big” ICFs on System z10 do NOT address these issues.
Managing Partitions
  Dynamically add/remove partition processors/storage as workload requires
–  Requires specifying Reserve resources at partition activation to add
  Adjust a partition's weight by processor type
  Cap a partition to the specified weight
  Limit groups of partitions as an entity
Hardcapping
  Enforce the relative weight
–  Never allow the LPAR to use more than its share of resources
  Aka “hard cap” or “PR/SM hardcap”
Defined Capacity limit enforcement
  Set a defined capacity limit in support of Workload License Charges
  Aka “soft cap”
  Measured in MSU (millions of service units) per hour
  WLM enforces the defined capacity limit using the rolling 4-hour average
–  when the 4-hour average goes over the defined capacity limit, WLM caps the
partition (see the sketch below)
–  at IPL, WLM starts with a 4-hour interval that contains no partition CPU usage
  Software charges based upon highest observed rolling 4-hour average utilization
  Available to LPARs that meet these criteria:
–  zSeries hardware
–  z/OS in 64-bit mode
–  Shared general purpose engines (no dedicated engines)
–  Relative weight NOT enforced (no PR/SM hardcap)
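The soft cap decision hinges on the rolling 4-hour MSU average. The sketch below (a simplified illustration; WLM's actual bookkeeping, cap pattern, and interval lengths are more involved) averages the most recent four hours of MSU consumption, starting from a window of zero usage as at IPL, and flags the partition for capping while that average exceeds the defined capacity limit.

    from collections import deque

    def softcap_decisions(msu_rates, defined_capacity, intervals_per_4h=48):
        """Yield (rolling 4-hour average, cap?) for successive MSU consumption rates.
        The window starts as 4 hours of zero usage, mirroring the post-IPL default."""
        window = deque([0.0] * intervals_per_4h, maxlen=intervals_per_4h)
        for rate in msu_rates:
            window.append(rate)
            avg = sum(window) / intervals_per_4h
            yield avg, avg > defined_capacity

    # Hypothetical partition with a defined capacity of 100 MSU, 5-minute samples:
    usage = [80.0] * 24 + [160.0] * 48
    for avg, capped in softcap_decisions(usage, defined_capacity=100):
        pass   # WLM would cap the partition whenever `capped` is True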
Defined Capacity limit enforcement
  3 possible situations depending on relative sizes of defined capacity limit and the
weight of the partition
–  capacity share based on weight = defined capacity limit
•  WLM instructs PR/SM to fully cap LPAR at its weight
–  capacity share based on weight < defined capacity limit
•  not possible to cap LPAR at its weight all the time
•  cap LPAR at its weight part of the time
–  capacity share based on weight > defined capacity limit
•  WLM causes PR/SM to define a “phantom” weight
•  the phantom weight makes PR/SM act as if extra, unused weight were active, so the
LPAR’s share can be pushed below its weight-based share and capped at the defined capacity limit
Defined Capacity limit enforcement
  Phantom weight calculation

      Phantom Weight(P) = ( CEC Capacity / Defined Capacity Limit(P) ) × Partition Weight(P) − Σ Partition Weight(i)

      where Σ sums Partition Weight(i) over all active partitions i
  Value can amount to a maximum of up to 1000 times the number of active LPARs
–  not possible to specify a capacity limit that is very small compared to the
capacity based on the weight definition
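The phantom weight formula translates directly into code. The sketch below is a plain transcription of the slide's formula (not WLM's implementation); it applies to the case where the weight-based share exceeds the defined capacity limit.

    def phantom_weight(cec_capacity_msu, defined_capacity_msu, partition_weight, active_weights):
        """Phantom weight for partition P:
        (CEC capacity / defined capacity limit(P)) * weight(P) - sum of all active weights."""
        return (cec_capacity_msu / defined_capacity_msu) * partition_weight - sum(active_weights)

    # Made-up numbers: 600 MSU CEC, weight 200 of 300 total (weight share = 400 MSU),
    # but a defined capacity limit of 250 MSU.
    pw = phantom_weight(600, 250, 200, [200, 100])
    print(pw)   # 180.0 -> effective share 200 / (300 + 180) * 600 = 250 MSU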
LPAR group capacity limit
  Adds capability to define a z/OS LPAR as a member of a group of LPARs
–  Group can cross sysplex boundaries
–  Group can include LPARs not participating in a sysplex
  Adds capability to specify capacity of the group of LPARs in MSUs per hour
–  Synergy with LPAR defined capacity
  PR/SM™ and WLM work together to help:
–  Enforce the capacity defined for the group (a simplified sketch follows below)
–  Enforce the capacity optionally defined for each individual LPAR
  May provide better control of CP resource consumed for WLC pricing
  Exclusive to System z9/z10
  Requires at a minimum:
–  z/OS or z/OS.e Version 1 Release 8 (1.8)
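As a simplified illustration of how the group limit and the weights interact (not the actual WLM algorithm, which also lets members borrow MSUs that others are not using and honors individual defined capacity limits), the sketch below apportions a group's MSU limit across its members in proportion to their weights.

    def group_entitlements(group_limit_msu, member_weights):
        """Split a group capacity limit across member LPARs by weight share."""
        total = sum(member_weights.values())
        return {lpar: group_limit_msu * w / total for lpar, w in member_weights.items()}

    # Made-up example resembling the BERGRP group in the RMF report later in the deck:
    print(group_entitlements(174, {"ACPT": 150, "PROD": 800}))
    # {'ACPT': 27.47..., 'PROD': 146.52...} - compare the rounded MINIMUM entitlement column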
LPAR group capacity limit
 May help reduce the amount of ‘capping’
 For more productive use of ‘white space’ and higher utilization
[Figure: comparison of individual LPAR capacity limits (some of LPAR1, LPAR2 and LPAR3 capped, some not) with a single group capacity limit shared by LPAR1, LPAR2 and LPAR3]
LPAR group capacity limit
G R O U P   C A P A C I T Y   R E P O R T                                              PAGE 3
  z/OS V1R8                SYSTEM ID ESA7            DATE 03/02/2009        INTERVAL 10.00.000
                           RPT VERSION V1R8 RMF      TIME 00.10.00          CYCLE 1.000 SECONDS

GROUP-CAPACITY    PARTITION  SYSTEM   -- MSU --   WGT   -CAPPING--    -- ENTITLEMENT --
 NAME     LIMIT                       DEF   ACT         DEF    WLM%   MINIMUM   MAXIMUM
 ALDGRP      35   EURO       ESA9       0     5    50   NO      0.0        12        35
                  EY2        ESA8       0     6    50   NO      0.0        12        35
                  IS2        ESA6       0     4    50   NO      0.0        12        35
                  ---------------------------------------------------------------------
                  TOTAL                      15   150

 BERGRP     174   ACPT       ESA3       0    29   150   NO      0.0        27       174
                  PROD       ESA1       0   102   800   NO      0.0       147       174
                  ---------------------------------------------------------------------
                  TOTAL                     131   950
Intelligent Resource Director
  Automatic adjustments can be made by Workload Manager (WLM)
–  Add/remove partition processors (Vary CPU management)
•  Up to Reserve amount
–  Adjust individual partition weights (CPU weight management)
•  Within specified range
–  Shift weight between members of a sysplex on the same machine (a cluster)
–  2 other components:
•  Dynamic CHPID management
•  Channel subsystem I/O priority management
  Partition capping is mutually exclusive with WLM CPU Management
  Allows system to dynamically move resources to the work
  Introduces the concept of the LPAR cluster
CPU weight management
  WLM manages physical CPU resources across z/OS images within an LPAR cluster based on
service class goals
–  Dynamic changes to the LPAR weight
–  Sum of LPAR weights can be redistributed within the LPAR cluster
–  Partition(s) outside the cluster are not affected
–  Move CP resource to the partition which needs it
[Figure: a 2084 CEC with an LPAR cluster of two z/OS images in SYSPLEX1]
VARY CPU management
  Dynamic management of online CPs to each partition in the LPAR cluster
  Optimizes the number of CPs for the partition's current weight
  Prevents 'short' engines
–  Maximizes the effectiveness of the MVS dispatcher
[Figure: two z/OS SYSPLEX1 images in the LPAR cluster – logical CPs (LCP) are dynamically configured on/off against the physical CPs (PCP) as the partitions’ weights change]
RMF: LPAR Cluster Report
Intelligent Resource Director
What Are 'Short CPs'?
  Term created by the WSC performance staff
–  Performance phenomenon created by LPAR hypervisor enforcing LPAR
weights on busy processors or capped partitions
  LPAR ensures each partition has access to the amount of processor specified by
the LPAR weights
–  This can reduce the MIPS delivered by the logical CPs in the partition
•  Controlled by a combination of LPAR weights and number of Logical CPs
•  Potential Performance Problems
  In a processor migration “short CPs” are not a problem as long as the partition on
the new CEC has access to an equal or greater number of MIPS per CP
–  Techdocs Item: WP100258 – Performance Considerations when moving to
Fewer, Faster CPUs
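A quick way to see the 'short CP' effect (a simplified model with hypothetical MIPS figures, not a capacity-planning tool): divide the partition's weight-based share of the machine by its number of logical CPs and compare the result with the speed of one physical CP.

    def mips_per_logical_cp(machine_mips, physical_cps, weight, total_weight, logical_cps):
        """Effective MIPS each logical CP can deliver when the machine is fully busy."""
        partition_share_mips = machine_mips * weight / total_weight
        per_logical = partition_share_mips / logical_cps
        per_physical = machine_mips / physical_cps
        return min(per_logical, per_physical)   # a logical CP can never outrun a physical CP

    # Hypothetical 10-way, 6000 MIPS machine; the partition has 4 logicals but only a 25% share:
    print(mips_per_logical_cp(6000, 10, weight=250, total_weight=1000, logical_cps=4))
    # 375.0 MIPS per logical CP vs. 600 MIPS for a full physical CP -> 'short' CPs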
Logical to Physical CP Ratio
  Strive to keep logical to physical ratio in the 2:1 or 3:1 area
  A higher ratio will work but will cause an increased cost which needs to be
factored into the capacity plan
  Biggest issue to reducing the logical to physical CP ratio is the requirement to run
small LPARs as z/OS uni-processors
–  Availability issues of running z/OS as a uni-processor
–  Places greater emphasis on doing LPAR consolidation to make fewer
LPARs which need more than 1 CP of capacity
•  Virtual storage constraints need to be reviewed
HiperDispatch - Hardware and Hypervisor View
  Hypervisor (PR/SM)
–  Virtualization layer at Operating System image level
–  Distributes physical resources
•  Memory
•  Channels (EMIF)
•  Processors – logical processors dispatched on physical processors
   Dedicated / Shared
   Affinities
   Share distribution based on weights
[Figure: Logical view of a 2-book system – each book has memory, an L2 cache, and CPUs with their own L1 and L1.5 caches; PR/SM dispatches logical processors onto the physical CPUs of both books]
The motivation for HiperDispatch
  Hardware caches are most efficiently used when each unit of work is
consistently dispatched on the same physical CPU (or related set of CPUs)
– In the past, System z hardware, firmware, and software have remained
relatively independent of each other
– But, modern processor and memory designs make a closer cooperation
appropriate. Topology is important:
•  Different CPUs in the complex have different distances to the various
sections of memory and cache (here, “distance” is measured in CPU
cycles.)
•  Memory access times can vary from less than 10 cycles to several
hundred cycles depending upon cache level and whether the access is
local or remote.
Horizontal CPU management
  PR/SM guarantees an amount of CPU service to a partition based on weights
  PR/SM distributes a partition’s share evenly across the logical processors
  Additional logicals are required to receive extra service which is left by other
partitions. The extra service is also distributed evenly across the logicals.
  The OS must run on all logicals to gather all its share [z/OS Alternate Wait
Management]
[Figure: LP Red – 16 GPs (weight 500) + 2 zAAPs (weight 50); LP Blue – 16 GPs (weight 500) + 2 zAAPs (weight 50); the logical processors of both LPs are spread across the physical processors of Book 0 and Book 1]
Vertical CPU Management
  Logical processors are classified as vertical high, medium or low
  PR/SM quasi-dedicates vertical high logicals to physical processors
  The remainder of the share is distributed to the vertical medium processors
  Vertical low processors are only given service when other partitions do not use their entire share
  Vertical low processors are parked by z/OS when no extra service is available
[Figure: both LPs defined with 16 GPs (weight 500) + 2 zAAPs (weight 50) – LP Red gets 7 vertical high GPs in Book 0 and 8 vertical low GPs; LP Blue gets 7 vertical high GPs in Book 1 and 8 vertical low GPs; the remaining logicals are vertical medium (M) or low (L)]
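The sketch below is a deliberately simplified model of vertical polarization (PR/SM's real rules differ in detail; in the figure above, for example, each partition ends up with 7 vertical highs rather than 8 because part of the share is carried by vertical mediums). It converts a partition's weight share into physical-engine equivalents and splits the logicals into high, medium and low.

    import math

    def vertical_polarization(weight, total_weight, physical_cps, logical_cps):
        """Simplified split of a partition's logical CPs into vertical high/medium/low:
        whole engine-equivalents become highs, any fractional remainder is carried
        by one medium, and the remaining logicals are lows."""
        share_engines = physical_cps * weight / total_weight
        high = min(logical_cps, math.floor(share_engines))
        medium = 1 if share_engines > high and logical_cps > high else 0
        low = logical_cps - high - medium
        return high, medium, low

    # Hypothetical partition with 60% of the weight on a 16-way, defined with 16 logicals:
    print(vertical_polarization(600, 1000, physical_cps=16, logical_cps=16))   # (9, 1, 6)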
HiperDispatch mode
  PR/SM
– Supplies topology information/updates to z/OS
– Ties high priority logicals to physicals (gives 100% share)
– Distributes remaining share to medium priority logicals
– Distributes any additional service to unparked low priority logicals
  z/OS
– Ties tasks to small subsets of logical processors
– Dispatches work to high priority subset of logicals
– Parks low priority processors that are not needed or will not get service
The combination provides the processor affinity that maximizes the efficiency of
the hardware caches
Addressing Workload Variability
  SRM Balancer stripes workload across Affinity Nodes by priority in an
attempt to keep the work evenly distributed
– Historic Address space utilization statistics are collected every 2 sec. in an
effort to “predict” future requirements
  Supervisor implements “needs-help” algorithm to address transient spikes in
utilization
– Maintains priority-based Affinity Node utilization statistics
– Responsively acts on statistics by asking other LPs for “Help”
  SRM tracks PR/SM white space attributes to dynamically address longer
term workload requirements
– Adds / removes Logical Processors to / from existing Affinity Nodes when
both the z/OS workload warrants it and the partner LPARs allow it
– Parks / unparks low priority LPs based on available excess capacity
Special processing for SYSSTC
  Work classified to SYSSTC typically contains lots of short-running local
SRBs required for transaction flow.
– Examples of address spaces recommended to be classified into
SYSSTC are VTAM, TCP/IP and IRLM.
  SRBs classified into SYSSTC can execute on any available logical
processor, even in HiperDispatch mode.
  WLM service policies should be reviewed with this in mind.
Controlling HiperDispatch
  HiperDispatch mode is enabled by specifying HIPERDISPATCH=YES in
IEAOPTxx
– The default is HIPERDISPATCH=NO for compatibility
– HIPERDISPATCH=YES is recommended
– There is a HealthChecker routine to remind if HIPERDISPATCH=NO
  Control authority for global performance data must be enabled for proper
operation in HiperDispatch mode
– This option is selected in the logical partition security controls on the
Hardware Management Console
– This is the default selection.
Hiperdispatch: RMF Report Example
  z/OS V1R9 (CONVERTED TO z/OS V1R10 RMF)    SYSTEM ID R71    DATE 01/28/2009    TIME 11.02.00    INTERVAL 00.59.753
  CPU 2097   MODEL 716   H/W MODEL E26   SEQUENCE CODE 00000000000A73A2   HIPERDISPATCH=YES

  ---CPU---  -------------- TIME % --------------  LOG PROC  --I/O INTERRUPTS--
  NUM  TYPE  ONLINE   LPAR BUSY  MVS BUSY  PARKED  SHARE %    RATE    % VIA TPI
   0   CP    100.00     99.50     100.0     0.00    100.0     29.40      0.00
   1   CP    100.00     99.88     100.0     0.00    100.0     18.14      0.00
   2   CP    100.00     99.83     100.0     0.00    100.0     31.71      0.00
   3   CP    100.00     99.78     100.0     0.00    100.0     16.82      0.00
   4   CP    100.00     72.24     100.0     0.00     66.4      0.00      0.00   <- Medium LCPs
   5   CP    100.00     72.30     100.0     0.00     66.4      0.00      0.00   <- Medium LCPs
   6   CP    100.00     35.16     100.0    46.14      0.0      0.00      0.00   <- Low un-parked LCPs
   7   CP    100.00     52.22     100.0    24.06      0.0      0.00      0.00   <- Low un-parked LCPs
   8   CP    100.00      0.00     -----   100.00      0.0      0.00      0.00
   9   CP    100.00      0.00     -----   100.00      0.0      0.00      0.00
   A   CP    100.00      0.00     -----   100.00      0.0      0.00      0.00
   B   CP    100.00      0.00     -----   100.00      0.0      0.00      0.00
   C   CP    100.00      0.00     -----   100.00      0.0      0.00      0.00
   D   CP    100.00      0.00     -----   100.00      0.0      0.00      0.00
   E   CP    100.00      0.00     -----   100.00      0.0      0.00      0.00
   F   CP    100.00      0.00     -----   100.00      0.0      0.00      0.00
  TOTAL/AVERAGE          39.43     100.0            532.8     96.08      0.00
z990 HMC Reset Profile – General Page (OS2)
Logical partition is the only mode supported, basic
mode is not available (HCD also provides only the
LPAR mode option)
Logical Partition 'Suffix' Naming Convention
LPnameXX
where LPname is the first 6 characters of the
customer required name
where xx = LPname suffix
1st character = LCSSid (0 = LCSS.0, 1 = LCSS.1)
2nd character = same as MIFid of 1 to F
System z10 - Reset Profile - General (CEC TSYS z10 E64)
Logical partition is the only mode supported, basic
mode is not available (HCD also provides only the
LPAR mode option)
Logical Partition 'Suffix' Naming Convention
LPnameXX
where LPname is the first 6 characters of the
customer required name
where xx = LPname suffix
1st character = CSSid (0 = CSS 0 to 3 = CSS 3)
2nd character = same as MIFid of 1 to F
System z10 - Reset Profile - Storage
Shows customer available storage only on System z10.
Fixed HSA is separate on System z10.
HSA is included on System z9 and earlier.
System z10 - Reset Profile - Dynamic
Global enable/disable of Dynamic I/O REMOVED on System z10.
- Dynamic I/O always enabled at POR on System z10.
- Dynamic I/O can be disabled/enabled later on System z10 globally
or by partition on the HMC.
S/390, z900 and z800 dynamic I/O expansion setting is removed.
For z990 to z9, dynamic I/O expansion requirement is supported
within HCD (IODF) by the MAXDEV option when defining a
subchannel set in an LCSS. This impacts HSA size.
On System z10, HSA is fixed with every supported LCSS defined
with 15 partitions, both subchannel sets, and maximum devices in
both subchannel sets. Dynamic LPAR add/delete is supported by
renaming a reserved “*” partition to add it, and renaming it back to “*” to delete it.
Subchannel Set 0 – Up to 65,280 subchannels (63.75k)
Subchannel Set 1 – Up to 65,535 subchannels (64k - 1)
HSA Subchannels = MAXDEV times number of LPARs in LCSS
System z10 Reset Profile - Options
No functional change
System z10 Reset Profile - CP/SAP
New on System z10 EC: POR option to convert some
purchased CPs to SAPs for the duration of the POR
removed. Number of CPs, SAPs, zAAPs and zIIPs is
now a comment on System z10.
New on z9 EC: Option to view the “Fenced Book” page
for Enhanced Book Availability
System z10 Reset Profile – Fenced Book Page
Shows purchased processors and available PUs if a 17 PU book is
removed. Allows selection of processors to be available after POR with
a fenced 17 PU book if all purchased processors do not fit.
“LIC Processors” 75 at top: 64 CPs + 11 SAPs = 75; with 2 spares, 77 PUs total on E64
With 17 fenced, 59 usable plus 1 spare = 60 remaining on E64
This case: All processors remain available with a 17 PU book fenced.
System z10 Reset Profile – Fenced Book Page
Shows purchased processors and available PUs if a 20 PU book is
removed. Allows selection of processors to be available after POR with
a fenced 20 PU book if all purchased processors do not fit.
“LIC Processors” 75 at top: 64 CPs + 11 SAPs = 75; with 2 spares, 77 PUs total on E64
With 20 fenced, 57 available plus 0 spares = 57 remaining on E64
Note default removal of zIIPs, which can be changed.
System z10 Reset Profile - Partitions
Remove partitions not to be activated automatically at POR.
Change Order of activation as desired.
Remember to activate Internal CFs before supported z/OS partitions.
System z10 Image Profile - General Page (Partition TOSP1)
New on System z10: z/VM-mode partition for z/VM 5.4
supports 5 different processor types.
The Logical partition 'Partition Identifier' is a 1 or 2 digit
unique hexadecimal value from 0 to 3F.
Recommended convention:
Assign first digit to partition’s LCSS ID – 0 to 3
Assign second digit to “Partition Number” – 1 to F
System z10 Image Profile - General Page
Partition Time Offset
from STP or ETR time
Time Offset Page
System z10 Image Profile – Time Offset
Typically used with STP or the ETR set to local time
when a sysplex that is required to use LOCAL=GMT
needs to be set to a different time zone than the CEC.
Different sysplexes can also use different offsets to
operate on local time in multiple different time zones.
Note: This is somewhat unusual. Setting STP or the
ETR to GMT, not local time, is recommended.
System z10 Image Profile - Processor Page
Select Capacity Group Profile Name
Example: System z10 EC ESA/390 mode partition: CPs, zAAPs, zIIPs
Note: z10 EC allows Initial and Reserved to add up to 64 PUs. The operating
system “sees” all those PUs (31 in this case). Don’t specify more engines
than the operating system supports or engine types it doesn’t support.
System z10 Customize Group Capacity Profile
Group Capacity in MSUs
Use “Customize/Delete Activation Profiles” task to copy
the Default Group Profile, customize it and save it with
the new Group name. This creates a new Capacity Group.
Operating System Support – z/OS 1.8 and above
System z10 Image Profile - Processor Page
Example: System z10 EC z/VM-mode
partition (z/VM 5.4 and later) which
supports all PU types.
New October, 2008: z10 allows creation and save of an LPAR Image profile with
an Initial processor definition that can’t be activated. Why? Allows creation of
profiles to be used when CBU is active.
System z10 Image Profile - Security
Global performance data = “Partition Data Report” information on
other partitions
I/O configuration control = Write IOCDS and HCD dynamic hardware
change.
Counter Facility Security Options – NEW System z10 October, 2008
Sampling Facility Security Options – NEW System z10 October, 2008
System z10 Image Profile - Storage
Recommended!
Central storage : z10 EC supports up to 1 TB Central
Storage ( Initial + Reserved) maximum in an LPAR
Check OS level for supported amounts.
Recommended!
Expanded Storage: Some OSs do not support. One
example is z/OS (64-bit) running on a System z10.
Initial Storage: Brought ON at LPAR activation.
Reserved Storage: Specify like reserved processors
to add storage to a running partition.
z/OS supports configuring storage ON and, if RSU
specified, OFF.
z/VM 5.4 supports configuring reserved central ON.
Storage origin (Central and Expanded storage)
It is recommended that you use the
'Determined by the system' option
System z10 Image Profile - Options
I/O Priority Range
MSUs - WLC
Note: A sysplex cluster name is not required for a z/OS member of a parallel
sysplex; it is required for a non-member partition to be managed as a workload.
System z10 Image Profile – Load “Classic”
“Load during activation” = IPL when activated
Check box and provide IPL parameters (Recommended)
Allow dynamic change to IPL address/parameter if wanted.
Supports classic IPL or SCSI IPL (e.g. for Linux)
System z10 Image Profile - Crypto for Crypto Express2
Note candidate list for concurrent
addition of Crypto Coprocessors.
Changing Running Partitions
System z10 EC Change LPAR Controls – CP Tab
Enter changes, “Change Running System” or
“Save to Profiles” or do both.
System z10 Logical Processor Add
New on System z10!
Change is concurrent to the running partition.
Operating System Support (concurrent change to the running OS):
z/OS 1.10
z/VM 5.3
System z10 Change LPAR Cryptographic Controls
New on System z10!
Change is concurrent to the running
partition.
Operating System Support:
z/OS ICSF: Concurrent change to the
running z/OS image.
System z10 Change I/O Priority Queuing
System z10 Change Logical Partition Security
Counter Facility Security Options – NEW System z10 October, 2008
Sampling Facility Security Options – NEW System z10 October, 2008
System z10 Change LPAR Group Controls
Select “Edit” to reassign LPAR Group Membership
System z10 Change LPAR Group Controls
Edit LPAR Group Membership
System z10 Memory and Addressability
System z9, z990, and z890 Memory Granularity
  Memory Granularity = Increment Size
–  Storage assignments/reconfiguration and HSA must be an even multiple
–  Physical increment size fixed at 64 MB
–  Expanded memory granularity always 64 MB
–  Central memory granularity is virtualized for each LP
•  LP central memory increment is determined according to the size of the larger of the
two central memory elements defined in the activation profile: Initial central memory or
Reserved central memory
  Single Storage Pool - All central storage
–  ES configured as needed from CS - No POR needed
  Review MVS™ RSU parameter. Large z990 increment size may result in too much memory being
reserved for reconfiguration after migration unless the new RSU options introduced in OS/390 2.10
are used.

  Large Element Size      Granularity
  64 MB to 32 GB          64 MB
  >32 GB to 64 GB         128 MB
  >64 GB to 128 GB        256 MB
  >128 GB to 256 GB       512 MB
  Rare to exceed today
System z10 Memory Granularity
  Memory Granularity = Increment Size
–  Storage assignments/reconfiguration must be an even multiple
–  System z10 physical increment size is fixed at
•  256 MB on z10 EC and 128 MB on z10 BC
(Was 64 MB on all z9, z990, z890)
–  Expanded memory granularity always 256 MB (64 MB on z9, z990, z890)
–  Central memory granularity is virtualized for each LP
•  LP central memory increment is determined according to the size of the larger of the
two central memory elements defined in the activation profile: Initial central memory or
Reserved central memory
•  Virtualization determined by the limit of 512 increments per CS element
  Single Storage Pool - All central storage
–  ES configured as needed from CS - No POR needed
  Review MVS™ RSU parameter. Large System z10 increment size may result in too much memory
being reserved for reconfiguration after migration unless the new RSU options introduced in OS/390
2.10 are used.

  Large Element Size                        Granularity
  128 MB up to 64 GB (BC only)              128 MB
  256 MB EC or >64 GB BC up to 128 GB       256 MB
  >128 GB up to 256 GB (240 GB BC)          512 MB
  >256 GB to 512 GB (EC only)               1 GB
  >512 GB to 1 TB (EC only)                 2 GB
MVS RSU Parameter for System z
  In IEASYSxx. Specifies the number of central storage increments to be made
available for central storage reconfiguration
–  MVS attempts to keep this area free of long term fixed pages –
calculated as follows for RSU = number:
      RSU = (CS amount to be reconfigured) / (storage increment size)
  Or: Storage to be kept free = RSU * increment
–  If element size is upgraded, check the RSU parameter!
  z/OS - Recommended RSU coding instead of RSU = number
to allow z/OS to calculate the number of increments
–  RSU = % of storage, MBs or GBs
–  RSU = OFFLINE – Amount is Reserved Storage amount.
No storage is kept free of long term fixed pages except Reserved Storage
that has been configured ON.
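A worked example of the RSU = number arithmetic with hypothetical values: keeping 8 GB of central storage reconfigurable needs RSU=32 on a z10 EC (256 MB increments) but RSU=128 on a z9/z990 (64 MB increments), which is why the RSU value should be rechecked whenever the increment size changes.

    def rsu_number(reconfigurable_mb, increment_mb):
        """RSU = central storage to be reconfigured / storage increment size (rounded up)."""
        return -(-reconfigurable_mb // increment_mb)

    print(rsu_number(8 * 1024, 256))   # 32  - z10 EC increment size
    print(rsu_number(8 * 1024, 64))    # 128 - z9/z990/z890 increment size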
System z10 SE – Base System Storage Allocation
Note: Memory only adds up to 98%
because this machine is in TEST
mode running an IBM internal tool
that steals 32 GB of memory.
System z10 Central Storage Addressability
  Addressability for HSA is allocated top down from the highest supported address
–  System z10 – Highest address = 8 TB
–  System z9/990/890 – Highest address = 1 TB
  Partition addressability is assigned below the fixed 64 GB HSA addressability when the LPAR is activated
–  Initial and Reserved CS are assigned contiguous CS addressability at activation
–  Initial and Reserved ES (e.g. for z/VM) are assigned contiguous CS addressability at activation.
This does NOT have to be contiguous with the partition’s CS addressability for CS. In addition
they have contiguous ES addressability.
–  Origin addresses are assigned top/down by default but a specific origin can be requested.
Allowing default allocation is recommended in almost all cases.
–  An addressability gap created, for example, by LPAR deactivation, can be “filled in” by a
subsequent activation
  Physical Memory
–  HSA starts at book/memory physical address 0 (in book 0)
–  Physical memory is assigned for Initial ES and CS at LPAR activation. It is assigned to
Reserved ES and CS if and when that memory is Configured ON.
–  PR/SM is allowed to assign an LPAR any physical memory anywhere in the machine. Limit is
“purchased” memory size.
–  There is no requirement for LPAR physical memory to be contiguous
–  Enhanced Book Availability is designed to be able to move partition memory and HSA to a
different book to enable removing a book from a running machine without disruption
[Figure: z10 EC addressability map from 0 GB to 8,192 GB – the top 64 GB is LPAR/HSA addressability, the 8,128 GB below it is partition storage addressability (CS/ES storage pool), and unused addressability sits at the bottom; gaps may exist due to storage reconfiguration or LPAR deactivation]
System z10 SE - Logical Partition Storage Allocation
CS Addressability Math in MB:
   8,388,608   Top end = 8 TB
  -   65,536   HSA = 64 GB
  -   32,768   Gap = 32 GB
  -   16,384   TOSPF CS = 16 GB
  = 8,273,920   TOSPF Origin
  -    4,096   TOSP1 CS = 4 GB
  = 8,269,824   TOSP1 Origin

HSA is allocated 64 GB of addressability but only 16 GB (8 on z10 BC) of storage.
Origin = Start of LPAR’s addressability
Initial = Initial CS
Maximum – Initial = Reserved CS
Current = Physical Memory Assigned
Gap = Unused addressability above
Expanded Storage Elements (e.g. z/VM) take BOTH CS addressability below the
LPAR’s CS origin address (not shown) AND separate ES addressability (shown).
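The origin arithmetic above can be reproduced with a small top-down allocator (a sketch using the slide's values; real PR/SM assignment also honors requested origins and can fill in gaps left by deactivated partitions).

    MB = 1
    GB = 1024 * MB

    def assign_origins(top_mb, allocations):
        """Allocate addressability top down: each (name, size) gets the range just below
        the previous one; the returned origin is the low address of that range, in MB."""
        origins, next_top = {}, top_mb
        for name, size_mb in allocations:
            next_top -= size_mb
            origins[name] = next_top
        return origins

    # Values from the slide: 8 TB top, 64 GB HSA, a 32 GB gap, TOSPF 16 GB, TOSP1 4 GB.
    print(assign_origins(8 * 1024 * GB, [("HSA", 64 * GB), ("gap", 32 * GB),
                                         ("TOSPF", 16 * GB), ("TOSP1", 4 * GB)]))
    # {'HSA': 8323072, 'gap': 8290304, 'TOSPF': 8273920, 'TOSP1': 8269824}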
Additional Information
  SB10-7153-01a   PR/SM Planning Guide
  SG24-7516-01    IBM System z10 Enterprise Class Technical Guide
  SC28-6873-00    Hardware Management Console Operations Guide
  SA22-7602       z/OS Planning: Workload Management
  SG24-5952-00    z/OS Intelligent Resource Director
Available at www.ibm.com, search on full manual number